Sample records for extremely high computational

  1. Final Report Extreme Computing and U.S. Competitiveness DOE Award. DE-FG02-11ER26087/DE-SC0008764

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mustain, Christopher J.

    The Council has acted on each of the grant deliverables during the funding period. The deliverables are: (1) convening the Council’s High Performance Computing Advisory Committee (HPCAC) on a bi-annual basis; (2) broadening public awareness of high performance computing (HPC) and exascale developments; (3) assessing the industrial applications of extreme computing; and (4) establishing a policy and business case for an exascale economy.

  2. Extreme-Scale Computing Project Aims to Advance Precision Oncology | FNLCR Staging

    Cancer.gov

    Two government agencies and five national laboratories are collaborating to develop extremely high-performance computing capabilities that will analyze mountains of research and clinical data to improve scientific understanding of cancer, predict drug response, and improve treatments for patients.

  3. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiu, Dongbin

    2017-03-03

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.

  4. Extreme-Scale Computing Project Aims to Advance Precision Oncology | Frederick National Laboratory for Cancer Research

    Cancer.gov

    Two government agencies and five national laboratories are collaborating to develop extremely high-performance computing capabilities that will analyze mountains of research and clinical data to improve scientific understanding of cancer, predict drug response, and improve treatments for patients.

  5. Extreme-Scale Computing Project Aims to Advance Precision Oncology | Poster

    Cancer.gov

    Two government agencies and five national laboratories are collaborating to develop extremely high-performance computing capabilities that will analyze mountains of research and clinical data to improve scientific understanding of cancer, predict drug response, and improve treatments for patients.

  6. Upper extremity pain and computer use among engineering graduate students.

    PubMed

    Schlossberg, Eric B; Morrow, Sandra; Llosa, Augusto E; Mamary, Edward; Dietrich, Peter; Rempel, David M

    2004-09-01

    The objective of this study was to investigate risk factors associated with persistent or recurrent upper extremity and neck pain among engineering graduate students. A random sample of 206 Electrical Engineering and Computer Science (EECS) graduate students at a large public university completed an online questionnaire. Approximately 60% of respondents reported upper extremity or neck pain attributed to computer use and reported a mean pain severity score of 4.5 (+/-2.2; scale 0-10). In a final logistic regression model, female gender, years of computer use, and hours of computer use per week were significantly associated with pain. The high prevalence of upper extremity pain reported by graduate students suggests a public health need to identify interventions that will reduce symptom severity and prevent impairment.
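
    To make the analysis described above concrete, the following is a minimal sketch of a logistic regression of pain on gender, years of computer use, and weekly hours, fitted to synthetic data. The variable names, effect sizes, and data are illustrative assumptions, not the study's records.

    ```python
    # Hypothetical illustration of the kind of logistic regression model described
    # above (pain ~ gender + years of computer use + weekly hours). Synthetic data;
    # variable names and effect sizes are assumptions, not the study's data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 206  # sample size matching the record; the data themselves are simulated
    female = rng.integers(0, 2, n)
    years_use = rng.uniform(1, 15, n)
    hours_per_week = rng.uniform(5, 60, n)

    # Simulate a pain outcome with assumed positive associations for all three factors.
    logit = -4.0 + 1.0 * female + 0.1 * years_use + 0.05 * hours_per_week
    pain = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = np.column_stack([female, years_use, hours_per_week])
    model = LogisticRegression().fit(X, pain)
    print("odds ratios (female, years, hours/week):", np.exp(model.coef_).round(2))
    ```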

  7. Spiking Excitable Semiconductor Laser as Optical Neurons: Dynamics, Clustering and Global Emerging Behaviors

    DTIC Science & Technology

    2014-06-28

    constructed from inexpensive semiconductor lasers could lead to the development of novel neuro-inspired optical computing devices (threshold detectors, logic gates, signal recognition, etc.). Other topics of research included the analysis of extreme events. Extreme events are nowadays a highly active field of research: rogue waves, earthquakes of high magnitude and financial crises are all rare and…

  8. Optimized photonic gauge of extreme high vacuum with Petawatt lasers

    NASA Astrophysics Data System (ADS)

    Paredes, Ángel; Novoa, David; Tommasini, Daniele; Mas, Héctor

    2014-03-01

    One of the latest proposed applications of ultra-intense laser pulses is their possible use to gauge extreme high vacuum by measuring the photon radiation resulting from nonlinear Thomson scattering within a vacuum tube. Here, we provide a complete analysis of the process, computing the expected rates and spectra, both for linear and circular polarizations of the laser pulses, taking into account the effect of the time envelope in a slowly varying envelope approximation. We also design a realistic experimental configuration allowing for the implementation of the idea and compute the corresponding geometric efficiencies. Finally, we develop an optimization procedure for this photonic gauge of extreme high vacuum at high repetition rate Petawatt and multi-Petawatt laser facilities, such as VEGA, JuSPARC and ELI.

  9. A characterization of workflow management systems for extreme-scale applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia

    The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over the last years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today's computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed as extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.

  10. A characterization of workflow management systems for extreme-scale applications

    DOE PAGES

    Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia; ...

    2017-02-16

    The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over the last years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today's computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed as extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.

  11. Use of computer games as an intervention for stroke.

    PubMed

    Proffitt, Rachel M; Alankus, Gazihan; Kelleher, Caitlin L; Engsberg, Jack R

    2011-01-01

    Current rehabilitation for persons with hemiparesis after stroke requires high numbers of repetitions to be in accordance with contemporary motor learning principles. The motivational characteristics of computer games can be harnessed to create engaging interventions for persons with hemiparesis after stroke that incorporate this high number of repetitions. The purpose of this case report was to test the feasibility of using computer games as a 6-week home therapy intervention to improve upper extremity function for a person with stroke. One person with left upper extremity hemiparesis after stroke participated in a 6-week home therapy computer game intervention. The games were customized to her preferences and abilities and modified weekly. Her performance was tracked and analyzed. Data from pre-, mid-, and postintervention testing using standard upper extremity measures and the Reaching Performance Scale (RPS) were analyzed. After 3 weeks, the participant demonstrated increased upper extremity range of motion at the shoulder and decreased compensatory trunk movements during reaching tasks. After 6 weeks, she showed functional gains in activities of daily living (ADLs) and instrumental ADLs despite no further improvements on the RPS. Results indicate that computer games have the potential to be a useful intervention for people with stroke. Future work will add additional support to quantify the effectiveness of the games as a home therapy intervention for persons with stroke.

  12. Opportunities for nonvolatile memory systems in extreme-scale high-performance computing

    DOE PAGES

    Vetter, Jeffrey S.; Mittal, Sparsh

    2015-01-12

    For extreme-scale high-performance computing systems, system-wide power consumption has been identified as one of the key constraints moving forward, where DRAM main memory systems account for about 30 to 50 percent of a node's overall power consumption. As the benefits of device scaling for DRAM memory slow, it will become increasingly difficult to keep memory capacities balanced with increasing computational rates offered by next-generation processors. However, several emerging memory technologies related to nonvolatile memory (NVM) devices are being investigated as an alternative for DRAM. Moving forward, NVM devices could offer solutions for HPC architectures. Researchers are investigating how to integrate these emerging technologies into future extreme-scale HPC systems and how to expose these capabilities in the software stack and applications. In addition, current results show several of these strategies could offer high-bandwidth I/O, larger main memory capacities, persistent data structures, and new approaches for application resilience and output postprocessing, such as transaction-based incremental checkpointing and in situ visualization, respectively.

  13. A European Flagship Programme on Extreme Computing and Climate

    NASA Astrophysics Data System (ADS)

    Palmer, Tim

    2017-04-01

    In 2016, an outline proposal co-authored by a number of leading climate modelling scientists from around Europe for a (c. 1 billion euro) flagship project on exascale computing and high-resolution global climate modelling was sent to the EU via its Future and Emerging Flagship Technologies Programme. The project is formally entitled "A Flagship European Programme on Extreme Computing and Climate (EPECC)". In this talk I will outline the reasons why I believe such a project is needed and describe the current status of the project. I will leave time for some discussion.

  14. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Patrick

    2014-01-31

    The research goal of this CAREER proposal is to develop energy-efficient, VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  15. The association between postural alignment and psychosocial factors to upper quadrant pain in high school students: a prospective study.

    PubMed

    Brink, Yolandi; Crous, Lynette Christine; Louw, Quinette Abigail; Grimmer-Somers, Karen; Schreve, Kristiaan

    2009-12-01

    Prolonged sitting and psychosocial factors have been associated with musculoskeletal symptoms among adolescents. However, the impact of prolonged static sitting on musculoskeletal pain among South African high school students is uncertain. A prospective observational study was performed to determine whether sitting postural alignment and psychosocial factors contribute to the development of upper quadrant musculoskeletal pain (UQMP) in grade ten high school students working on desktop computers. The sitting postural alignment, depression, anxiety and computer use of 104 asymptomatic students were measured at baseline. At three and six months post baseline, the prevalence of UQMP was determined. Twenty-seven students developed UQMP due to seated or computer-related activities. An extreme cervical angle (<34.75 degrees or >43.95 degrees; OR 2.8; 95% CI: 1.1-7.3) and a combination of extreme cervical and thoracic angles (<63.1 degrees or >71.1 degrees; OR 2.2; 95% CI: 1.1-5.6) were significant postural risk factors for the development of UQMP. Boys with any extreme angle were more likely to suffer pain compared with boys with all middle range angles (OR 4.9; 95% CI: 1.0-24.5). No similar effect was found for girls. There was no strong relationship between depression, anxiety, computer exposure and UQMP among South African high school students.

  16. Dense, Efficient Chip-to-Chip Communication at the Extremes of Computing

    ERIC Educational Resources Information Center

    Loh, Matthew

    2013-01-01

    The scalability of CMOS technology has driven computation into a diverse range of applications across the power consumption, performance and size spectra. Communication is a necessary adjunct to computation, and whether this is to push data from node-to-node in a high-performance computing cluster or from the receiver of wireless link to a neural…

  17. Extreme-scale Algorithms and Solver Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, Jack

    A widening gap exists between the peak performance of high-performance computers and the performance achieved by complex applications running on these platforms. Over the next decade, extreme-scale systems will present major new challenges to algorithm development that could amplify this mismatch in such a way that it prevents the productive use of future DOE Leadership computers, due to the following: extreme levels of parallelism due to multicore processors; an increase in system fault rates requiring algorithms to be resilient beyond just checkpoint/restart; complex memory hierarchies and costly data movement in both energy and performance; heterogeneous system architectures (mixing CPUs, GPUs, etc.); and conflicting goals of performance, resilience, and power requirements.

  18. Applying systems biology methods to the study of human physiology in extreme environments

    PubMed Central

    2013-01-01

    Systems biology is defined in this review as ‘an iterative process of computational model building and experimental model revision with the aim of understanding or simulating complex biological systems’. We propose that, in practice, systems biology rests on three pillars: computation, the omics disciplines and repeated experimental perturbation of the system of interest. The number of ethical and physiologically relevant perturbations that can be used in experiments on healthy humans is extremely limited and principally comprises exercise, nutrition, infusions (e.g. Intralipid), some drugs and altered environment. Thus, we argue that systems biology and environmental physiology are natural symbionts for those interested in a system-level understanding of human biology. However, despite excellent progress in high-altitude genetics and several proteomics studies, systems biology research into human adaptation to extreme environments is in its infancy. A brief description and overview of systems biology in its current guise is given, followed by a mini review of computational methods used for modelling biological systems. Special attention is given to high-altitude research, metabolic network reconstruction and constraint-based modelling. PMID:23849719

  19. Impact of surface coupling grids on tropical cyclone extremes in high-resolution atmospheric simulations

    DOE PAGES

    Zarzycki, Colin M.; Reed, Kevin A.; Bacmeister, Julio T.; ...

    2016-02-25

    This article discusses the sensitivity of tropical cyclone climatology to surface coupling strategy in high-resolution configurations of the Community Earth System Model. Using two supported model setups, we demonstrate that the choice of grid on which the lowest model level wind stress and surface fluxes are computed may lead to differences in cyclone strength in multi-decadal climate simulations, particularly for the most intense cyclones. Using a deterministic framework, we show that when these surface quantities are calculated on an ocean grid that is coarser than the atmosphere, the computed frictional stress is misaligned with wind vectors in individual atmospheric grid cells. This reduces the effective surface drag, and results in more intense cyclones when compared to a model configuration where the ocean and atmosphere are of equivalent resolution. Our results demonstrate that the choice of computation grid for atmosphere–ocean interactions is non-negligible when considering climate extremes at high horizontal resolution, especially when model components are on highly disparate grids.

  20. Uncertainties in obtaining high reliability from stress-strength models

    NASA Technical Reports Server (NTRS)

    Neal, Donald M.; Matthews, William T.; Vangel, Mark G.

    1992-01-01

    There has been a recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences are identified of incorrectly assuming a particular statistical distribution for stress or strength data used in obtaining the high reliability values. The computation of the reliability is defined as the probability of the strength being greater than the stress over the range of stress values. This method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
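
    The record above defines reliability as the probability that strength exceeds stress. The Monte Carlo sketch below illustrates the distribution-sensitivity issue it raises; the strength and stress distributions and their parameters are assumptions chosen for illustration, not data from the report.

    ```python
    # Minimal sketch of the stress-strength reliability model described above:
    # reliability = P(strength > stress). Distributions and parameters here are
    # assumed for illustration; the record's point is that high-reliability
    # estimates are very sensitive to the assumed tails.
    import numpy as np
    from scipy import stats

    def reliability_mc(strength_dist, stress_dist, n=1_000_000, seed=0):
        rng = np.random.default_rng(seed)
        strength = strength_dist.rvs(size=n, random_state=rng)
        stress = stress_dist.rvs(size=n, random_state=rng)
        return np.mean(strength > stress)

    # Same nominal location and scale; the second case assumes heavier tails for strength.
    normal_case = reliability_mc(stats.norm(100, 5), stats.norm(70, 5))
    heavy_tail = reliability_mc(stats.t(df=4, loc=100, scale=5), stats.norm(70, 5))
    print(f"normal strength:       R = {normal_case:.6f}")
    print(f"heavy-tailed strength: R = {heavy_tail:.6f}")
    ```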

  1. Number of Black Children in Extreme Poverty Hits Record High. Analysis Background.

    ERIC Educational Resources Information Center

    Children's Defense Fund, Washington, DC.

    To examine the experiences of black children and poverty, researchers conducted a computer analysis of data from the U.S. Census Bureau's Current Population Survey, the source of official government poverty statistics. The data are through 2001. Results indicated that nearly 1 million black children were living in extreme poverty, with after-tax…

  2. MaRIE theory, modeling and computation roadmap executive summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lookman, Turab

    The confluence of MaRIE (Matter-Radiation Interactions in Extremes) and extreme (exascale) computing timelines offers a unique opportunity in co-designing the elements of materials discovery, with theory and high performance computing, itself co-designed by constrained optimization of hardware and software, and experiments. MaRIE's theory, modeling, and computation (TMC) roadmap efforts have paralleled 'MaRIE First Experiments' science activities in the areas of materials dynamics, irradiated materials and complex functional materials in extreme conditions. The documents that follow this executive summary describe in detail for each of these areas the current state of the art, the gaps that exist and the roadmap to MaRIE and beyond. Here we integrate the various elements to articulate an overarching theme related to the role and consequences of heterogeneities which manifest as competing states in a complex energy landscape. MaRIE experiments will locate, measure and follow the dynamical evolution of these heterogeneities. Our TMC vision spans the various pillar science and highlights the key theoretical and experimental challenges. We also present a theory, modeling and computation roadmap of the path to and beyond MaRIE in each of the science areas.

  3. Application of a fast skyline computation algorithm for serendipitous searching problems

    NASA Astrophysics Data System (ADS)

    Koizumi, Kenichi; Hiraki, Kei; Inaba, Mary

    2018-02-01

    Skyline computation is a method of extracting interesting entries from a large population with multiple attributes. These entries, called skyline or Pareto optimal entries, are known to have extreme characteristics that cannot be found by outlier detection methods. Skyline computation is an important task for characterizing large amounts of data and selecting interesting entries with extreme features. When the population changes dynamically, the task of calculating a sequence of skyline sets is called continuous skyline computation. This task is known to be difficult to perform for the following reasons: (1) information of non-skyline entries must be stored since they may join the skyline in the future; (2) the appearance or disappearance of even a single entry can change the skyline drastically; (3) it is difficult to adopt a geometric acceleration algorithm for skyline computation tasks with high-dimensional datasets. Our new algorithm, called jointed rooted-tree (JR-tree), manages entries using a rooted tree structure. JR-tree delays extending the tree to deep levels to accelerate tree construction and traversal. In this study, we presented the difficulties in extracting entries tagged with a rare label in high-dimensional space and the potential of fast skyline computation in low-latency cell identification technology.
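
    For concreteness, a naive skyline (Pareto-optimal set) computation is sketched below. It illustrates the dominance test the record describes, not the JR-tree acceleration, and the smaller-is-better convention is an assumption made for the example.

    ```python
    # Naive skyline (Pareto-optimal set) extraction, illustrating the problem the
    # record's JR-tree algorithm accelerates. This O(n^2) version is for clarity
    # only; the convention that smaller is better in every attribute is an
    # assumption for the example.
    from typing import Sequence

    def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
        """a dominates b if it is no worse in every attribute and better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def skyline(points: list[Sequence[float]]) -> list[Sequence[float]]:
        return [p for p in points
                if not any(dominates(q, p) for q in points if q is not p)]

    if __name__ == "__main__":
        data = [(1, 9), (2, 8), (3, 3), (4, 4), (9, 1), (5, 5)]
        print(skyline(data))  # -> [(1, 9), (2, 8), (3, 3), (9, 1)]
    ```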

  4. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bremer, Peer-Timo; Mohr, Bernd; Schulz, Martin

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  5. Extreme-Scale De Novo Genome Assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Georganas, Evangelos; Hofmeyr, Steven; Egan, Rob

    De novo whole genome assembly reconstructs genomic sequence from short, overlapping, and potentially erroneous DNA segments and is one of the most important computations in modern genomics. This work presents HipMer, a high-quality end-to-end de novo assembler designed for extreme scale analysis, via efficient parallelization of the Meraculous code. Genome assembly software has many components, each of which stresses different components of a computer system. This chapter explains the computational challenges involved in each step of the HipMer pipeline, the key distributed data structures, and communication costs in detail. We present performance results of assembling the human genome and the large hexaploid wheat genome on large supercomputers up to tens of thousands of cores.
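
    As a toy illustration of the kind of data structure such assemblers distribute across nodes, the sketch below counts k-mers from short reads with a serial hash table; it is an editorial simplification under assumed inputs, not HipMer code.

    ```python
    # Toy serial k-mer counter illustrating the kind of hash-table data structure
    # that de novo assemblers such as the HipMer pipeline above distribute across
    # thousands of nodes. Reads and k are assumptions for the example.
    from collections import Counter

    def count_kmers(reads: list[str], k: int = 5) -> Counter:
        counts: Counter = Counter()
        for read in reads:
            for i in range(len(read) - k + 1):
                counts[read[i:i + k]] += 1
        return counts

    if __name__ == "__main__":
        reads = ["ACGTACGTGG", "CGTACGTGGA", "GTACGTGGAT"]  # overlapping toy reads
        print(count_kmers(reads, k=5).most_common(3))  # frequent k-mers mark overlaps
    ```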

  6. Algorithm-Based Fault Tolerance Integrated with Replication

    NASA Technical Reports Server (NTRS)

    Some, Raphael; Rennels, David

    2008-01-01

    In a proposed approach to programming and utilization of commercial off-the-shelf computing equipment, a combination of algorithm-based fault tolerance (ABFT) and replication would be utilized to obtain high degrees of fault tolerance without incurring excessive costs. The basic idea of the proposed approach is to integrate ABFT with replication such that the algorithmic portions of computations would be protected by ABFT, and the logical portions by replication. ABFT is an extremely efficient, inexpensive, high-coverage technique for detecting and mitigating faults in computer systems used for algorithmic computations, but does not protect against errors in logical operations surrounding algorithms.
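
    A minimal sketch of the ABFT idea for an algorithmic computation is given below: checksum rows and columns are appended to the operands of a matrix multiply and verified on the product. The tolerance and test matrices are assumptions for illustration; this is not the proposed flight software.

    ```python
    # Minimal sketch of algorithm-based fault tolerance (ABFT) for matrix multiply:
    # append a column-checksum row to A and a row-checksum column to B, multiply,
    # and verify the checksums of the product. Tolerance and inputs are assumed.
    import numpy as np

    def abft_matmul(A: np.ndarray, B: np.ndarray, tol: float = 1e-8) -> np.ndarray:
        Ac = np.vstack([A, A.sum(axis=0)])                 # column-checksum row
        Br = np.hstack([B, B.sum(axis=1, keepdims=True)])  # row-checksum column
        C_full = Ac @ Br
        C = C_full[:-1, :-1]
        # Verify: the last row/column of C_full must equal the sums of C.
        row_ok = np.allclose(C_full[-1, :-1], C.sum(axis=0), atol=tol)
        col_ok = np.allclose(C_full[:-1, -1], C.sum(axis=1), atol=tol)
        if not (row_ok and col_ok):
            raise RuntimeError("checksum mismatch: fault detected in matrix multiply")
        return C

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        A, B = rng.random((4, 3)), rng.random((3, 5))
        assert np.allclose(abft_matmul(A, B), A @ B)
        print("checksums verified")
    ```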

  7. OPTIMIZING THROUGH CO-EVOLUTIONARY AVALANCHES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. BOETTCHER; A. PERCUS

    2000-08-01

    We explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by "self-organized criticality," a concept introduced to describe emergent complexity in many physical systems. In contrast to genetic algorithms, which operate on an entire "gene pool" of possible solutions, extremal optimization successively replaces extremely undesirable elements of a sub-optimal solution with new, random ones. Large fluctuations, called "avalanches," ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements approximation methods inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Those phase transitions are found in the parameter space of most optimization problems, and have recently been conjectured to be the origin of some of the hardest instances in computational complexity. We will demonstrate how extremal optimization can be implemented for a variety of combinatorial optimization problems. We believe that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, hence valuable in elucidating the origin of computational complexity.
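
    The sketch below is a toy implementation of the basic extremal optimization loop described above, applied to graph-coloring conflicts: the locally worst element of the current solution is repeatedly replaced with a random new value. The graph, number of colors, and iteration budget are assumptions, and the tau-EO variant used in published benchmarks is not reproduced.

    ```python
    # Toy extremal optimization (EO) for graph-coloring conflicts: repeatedly pick
    # the most "undesirable" vertex (most conflicting neighbors) and recolor it at
    # random, keeping the best solution seen. Problem instance is assumed.
    import random

    def local_cost(node, colors, adj):
        return sum(colors[node] == colors[nbr] for nbr in adj[node])

    def extremal_optimization(adj, n_colors=3, steps=5000, seed=0):
        random.seed(seed)
        nodes = list(adj)
        colors = {v: random.randrange(n_colors) for v in nodes}
        best, best_cost = dict(colors), sum(local_cost(v, colors, adj) for v in nodes) // 2
        for _ in range(steps):
            worst = max(nodes, key=lambda v: local_cost(v, colors, adj))
            colors[worst] = random.randrange(n_colors)  # replace the worst element
            cost = sum(local_cost(v, colors, adj) for v in nodes) // 2
            if cost < best_cost:
                best, best_cost = dict(colors), cost
            if best_cost == 0:
                break
        return best, best_cost

    if __name__ == "__main__":
        # Ring of 6 nodes plus one chord; 3-colorable, so EO should reach 0 conflicts.
        adj = {0: [1, 5, 3], 1: [0, 2], 2: [1, 3], 3: [2, 4, 0], 4: [3, 5], 5: [4, 0]}
        coloring, conflicts = extremal_optimization(adj)
        print("remaining conflicts:", conflicts)
    ```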

  8. High resolution extremity CT for biomechanics modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ashby, A.E.; Brand, H.; Hollerbach, K.

    1995-09-23

    With the advent of ever more powerful computing and finite element analysis (FEA) capabilities, the bone and joint geometry detail available from either commercial surface definitions or from medical CT scans is inadequate. For dynamic FEA modeling of joints, precise articular contours are necessary to get appropriate contact definition. In this project, a fresh cadaver extremity was suspended in paraffin in a Lucite cylinder and then scanned with an industrial CT system to generate a high resolution data set for use in biomechanics modeling.

  9. Lattice Boltzmann for Airframe Noise Predictions

    NASA Technical Reports Server (NTRS)

    Barad, Michael; Kocheemoolayil, Joseph; Kiris, Cetin

    2017-01-01

    Increase predictive use of High-Fidelity Computational Aero-Acoustics (CAA) capabilities for NASA's next generation aviation concepts. CFD has been utilized substantially in analysis and design for steady-state problems (RANS), but computational resources are extremely challenged for high-fidelity unsteady problems (e.g., unsteady loads, buffet boundary, jet and installation noise, fan noise, active flow control, airframe noise). Novel techniques are needed to reduce the computational resources consumed by current high-fidelity CAA, to enable routine acoustic analysis of aircraft components at full-scale Reynolds number from first principles, and to achieve an order of magnitude reduction in wall time to solution.

  10. Multiresolution Iterative Reconstruction in High-Resolution Extremity Cone-Beam CT

    PubMed Central

    Cao, Qian; Zbijewski, Wojciech; Sisniega, Alejandro; Yorkston, John; Siewerdsen, Jeffrey H; Stayman, J Webster

    2016-01-01

    Application of model-based iterative reconstruction (MBIR) to high resolution cone-beam CT (CBCT) is computationally challenging because of the very fine discretization (voxel size <100 µm) of the reconstructed volume. Moreover, standard MBIR techniques require that the complete transaxial support for the acquired projections is reconstructed, thus precluding acceleration by restricting the reconstruction to a region-of-interest. To reduce the computational burden of high resolution MBIR, we propose a multiresolution Penalized-Weighted Least Squares (PWLS) algorithm, where the volume is parameterized as a union of fine and coarse voxel grids as well as selective binning of detector pixels. We introduce a penalty function designed to regularize across the boundaries between the two grids. The algorithm was evaluated in simulation studies emulating an extremity CBCT system and in a physical study on a test-bench. Artifacts arising from the mismatched discretization of the fine and coarse sub-volumes were investigated. The fine grid region was parameterized using 0.15 mm voxels and the voxel size in the coarse grid region was varied by changing a downsampling factor. No significant artifacts were found in either of the regions for downsampling factors of up to 4×. For a typical extremities CBCT volume size, this downsampling corresponds to an acceleration of the reconstruction that is more than five times faster than a brute force solution that applies fine voxel parameterization to the entire volume. For certain configurations of the coarse and fine grid regions, in particular when the boundary between the regions does not cross high attenuation gradients, downsampling factors as high as 10× can be used without introducing artifacts, yielding a ~50× speedup in PWLS. The proposed multiresolution algorithm significantly reduces the computational burden of high resolution iterative CBCT reconstruction and can be extended to other applications of MBIR where computationally expensive, high-fidelity forward models are applied only to a sub-region of the field-of-view. PMID:27694701

  11. HAlign-II: efficient ultra-large multiple sequence alignment and phylogenetic tree reconstruction with distributed and parallel computing.

    PubMed

    Wan, Shixiang; Zou, Quan

    2017-01-01

    Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. Extreme increase in next-generation sequencing results in shortage of efficient ultra-large biological sequence alignment approaches for coping with different sequence types. Distributed and parallel computing represents a crucial technique for accelerating ultra-large (e.g. files more than 1 GB) sequence analyses. Based on HAlign and the Spark distributed computing system, we implement a highly cost-efficient and time-efficient HAlign-II tool to address ultra-large multiple biological sequence alignment and phylogenetic tree construction. The experiments on DNA and protein large-scale data sets, each more than 1 GB in size, showed that HAlign-II could save time and space and outperformed the current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences. HAlign-II shows extremely high memory efficiency and scales well with increases in computing resource. HAlign-II provides a user-friendly web server based on our distributed computing infrastructure. HAlign-II with open-source codes and datasets was established at http://lab.malab.cn/soft/halign.

  12. An assessment of the real-time application capabilities of the SIFT computer system

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1982-01-01

    The real-time capabilities of the SIFT computer system, a highly reliable multicomputer architecture developed to support the flight controls of a relaxed static stability aircraft, are discussed. The SIFT computer system was designed to meet extremely high reliability requirements and to facilitate a formal proof of its correctness. Although SIFT represents a significant achievement in fault-tolerant system research, it presents an unusual and restrictive interface to its users. The characteristics of the user interface and its impact on application system design are assessed.

  13. Social networking among upper extremity patients.

    PubMed

    Rozental, Tamara D; George, Tina M; Chacko, Aron T

    2010-05-01

    Despite their rising popularity, the health care profession has been slow to embrace social networking sites. These are Web-based initiatives, designed to bring people with common interests or activities under a common umbrella. The purpose of this study is to evaluate social networking patterns among upper extremity patients. A total of 742 anonymous questionnaires were distributed among upper extremity outpatients, with a 62% response rate (462 were completed). Demographic characteristics (gender, age, level of education, employment, type of health insurance, and income stratification) were defined, and data on computer ownership and frequency of social networking use were collected. Social network users and nonusers were compared according to their demographic and socioeconomic characteristics. Our patient cohort consisted of 450 patients. Of those 450 patients, 418 had a high school education or higher, and 293 reported a college or graduate degree. The majority of patients (282) were employed at the time of the survey, and income was evenly distributed among U.S. Census Bureau quintiles. A total of 349 patients reported computer ownership, and 170 reported using social networking sites. When compared to nonusers, social networking users were younger (p<.001), more educated (p<.001), and more likely to be employed (p = .013). Users also had higher income levels (p=0.028) and had high rates of computer ownership (p<.001). Multivariate regression revealed that younger age (p<.001), computer ownership (p<.001), and higher education (p<.001) were independent predictors of social networking use. Most users (n = 114) regularly visit a single site. Facebook was the most popular site visited (n=142), followed by MySpace (n=28) and Twitter (n=16). Of the 450 upper extremity patients in our sample, 170 use social networking sites. Younger age, higher level of education, and computer ownership were associated with social networking use. Physicians should consider expanding their use of social networking sites to reach their online patient populations. Copyright 2010 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  14. The science of visual analysis at extreme scale

    NASA Astrophysics Data System (ADS)

    Nowell, Lucy T.

    2011-01-01

    Driven by market forces and spanning the full spectrum of computational devices, computer architectures are changing in ways that present tremendous opportunities and challenges for data analysis and visual analytic technologies. Leadership-class high performance computing systems will have as many as a million cores by 2020 and support 10 billion-way concurrency, while laptop computers are expected to have as many as 1,000 cores by 2015. At the same time, data of all types are increasing exponentially and automated analytic methods are essential for all disciplines. Many existing analytic technologies do not scale to make full use of current platforms and fewer still are likely to scale to the systems that will be operational by the end of this decade. Furthermore, on the new architectures and for data at extreme scales, validating the accuracy and effectiveness of analytic methods, including visual analysis, will be increasingly important.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quon, Eliot; Platt, Andrew; Yu, Yi-Hsiang

    Extreme loads are often a key cost driver for wave energy converters (WECs). As an alternative to exhaustive Monte Carlo or long-term simulations, the most likely extreme response (MLER) method allows mid- and high-fidelity simulations to be used more efficiently in evaluating WEC response to events at the edges of the design envelope, and is therefore applicable to system design analysis. The study discussed in this paper applies the MLER method to investigate the maximum heave, pitch, and surge force of a point absorber WEC. Most likely extreme waves were obtained from a set of wave statistics data based on spectral analysis and the response amplitude operators (RAOs) of the floating body; the RAOs were computed from a simple radiation-and-diffraction-theory-based numerical model. A weakly nonlinear numerical method and a computational fluid dynamics (CFD) method were then applied to compute the short-term response to the MLER wave. Effects of nonlinear wave and floating body interaction on the WEC under the anticipated 100-year waves were examined by comparing the results from the linearly superimposed RAOs, the weakly nonlinear model, and CFD simulations. Overall, the MLER method was successfully applied. In particular, when coupled to a high-fidelity CFD analysis, the nonlinear fluid dynamics can be readily captured.

  16. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.

    PubMed

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

  17. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    PubMed Central

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems. PMID:29503613

  18. Computational Failure Modeling of Lower Extremities

    DTIC Science & Technology

    2012-01-01

    bone fracture, ligament tear, and muscle rupture. While these injuries may seem well-defined through medical imaging, the process of injury and the...to vehicles from improvised explosives cause severe injuries to the lower extremities, including bone fracture, ligament tear, and muscle rupture...modeling offers a powerful tool to explore the insult-to-injury process with high resolution. When studying a complex dynamic process such as this, it is

  19. A New Look at NASA: Strategic Research In Information Technology

    NASA Technical Reports Server (NTRS)

    Alfano, David; Tu, Eugene (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides information on research undertaken by NASA to facilitate the development of information technologies. Specific ideas covered here include: 1) Bio/nano technologies: biomolecular and nanoscale systems and tools for assembly and computing; 2) Evolvable hardware: autonomous self-improving, self-repairing hardware and software for survivable space systems in extreme environments; 3) High Confidence Software Technologies: formal methods, high-assurance software design, and program synthesis; 4) Intelligent Controls and Diagnostics: Next generation machine learning, adaptive control, and health management technologies; 5) Revolutionary computing: New computational models to increase capability and robustness to enable future NASA space missions.

  20. Computational Electromagnetics

    DTIC Science & Technology

    2011-02-20

    finite differences use the continuation method instead, and have been shown to lead to unconditionally stable numerics for a wide range of realistic PDE...best previous solvers were restricted to two-dimensional (range and height) refractive index variations. The numerical method we introduced...however, is such that even its solution on the basis of Rytov’s method gives rise to extremely high computational costs. We thus resort to

  1. Computer Games as Therapy for Persons with Stroke.

    PubMed

    Lauterbach, Sarah A; Foreman, Matt H; Engsberg, Jack R

    2013-02-01

    Stroke affects approximately 800,000 individuals each year, with 65% having residual impairments. Studies have demonstrated that mass practice leads to regaining motor function in affected extremities; however, traditional therapy does not include the repetitions needed for this recovery. Videogames have been shown to be good motivators to complete repetitions. Advances in technology and low-cost hardware bring new opportunities to use computer games during stroke therapy. This study examined the use of the Microsoft (Redmond, WA) Kinect™ and Flexible Action and Articulated Skeleton Toolkit (FAAST) software as a therapy tool to play existing free computer games on the Internet. Three participants attended a 1-hour session where they played two games with upper extremity movements as game controls. Video was taken for analysis of movement repetitions, and questions were answered about participant history and their perceptions of the games. Participants remained engaged through both games; regardless of previous computer use all participants successfully played two games. Five minutes of game play averaged 34 repetitions of the affected extremity. The Intrinsic Motivation Inventory showed a high level of satisfaction in two of the three participants. The Kinect Sensor with the FAAST software has the potential to be an economical tool to be used alongside traditional therapy to increase the number of repetitions completed in a motivating and engaging way for clients.

  2. Highly-Parallel, Highly-Compact Computing Structures Implemented in Nanotechnology

    NASA Technical Reports Server (NTRS)

    Crawley, D. G.; Duff, M. J. B.; Fountain, T. J.; Moffat, C. D.; Tomlinson, C. D.

    1995-01-01

    In this paper, we describe work in which we are evaluating how the evolving properties of nano-electronic devices could best be utilized in highly parallel computing structures. Because of their combination of high performance, low power, and extreme compactness, such structures would have obvious applications in spaceborne environments, both for general mission control and for on-board data analysis. However, the anticipated properties of nano-devices mean that the optimum architecture for such systems is by no means certain. Candidates include single instruction multiple datastream (SIMD) arrays, neural networks, and multiple instruction multiple datastream (MIMD) assemblies.

  3. Risk factors for neck and upper extremity disorders among computers users and the effect of interventions: an overview of systematic reviews.

    PubMed

    Andersen, Johan H; Fallentin, Nils; Thomsen, Jane F; Mikkelsen, Sigurd

    2011-05-12

    To summarize systematic reviews that 1) assessed the evidence for causal relationships between computer work and the occurrence of carpal tunnel syndrome (CTS) or upper extremity musculoskeletal disorders (UEMSDs), or 2) reported on intervention studies among computer users or office workers. PubMed, Embase, CINAHL and Web of Science were searched for reviews published between 1999 and 2010. Additional publications were provided by content area experts. The primary author extracted all data using a purpose-built form, while two of the authors evaluated the quality of the reviews using recommended standard criteria from AMSTAR; disagreements were resolved by discussion. The quality of evidence syntheses in the included reviews was assessed qualitatively for each outcome and for the interventions. Altogether, 1,349 review titles were identified, 47 reviews were retrieved for full text relevance assessment, and 17 reviews were finally included as being relevant and of sufficient quality. The degrees of focus and rigorousness of these 17 reviews were highly variable. Three reviews on risk factors for carpal tunnel syndrome were rated moderate to high quality, 8 reviews on risk factors for UEMSDs ranged from low to moderate/high quality, and 6 reviews on intervention studies were of moderate to high quality. The quality of the evidence for computer use as a risk factor for CTS was insufficient, while the evidence for computer use and UEMSDs was moderate regarding pain complaints and limited for specific musculoskeletal disorders. From the reviews on intervention studies no strong evidence based recommendations could be given. Computer use is associated with pain complaints, but it is still not very clear if this association is causal. The evidence for specific disorders or diseases is limited. No effective interventions have yet been documented.

  4. Total variation-based neutron computed tomography

    NASA Astrophysics Data System (ADS)

    Barnard, Richard C.; Bilheux, Hassina; Toops, Todd; Nafziger, Eric; Finney, Charles; Splitter, Derek; Archibald, Rick

    2018-05-01

    We perform the neutron computed tomography reconstruction problem via an inverse problem formulation with a total variation penalty. In the case of highly under-resolved angular measurements, the total variation penalty suppresses high-frequency artifacts which appear in filtered back projections. In order to efficiently compute solutions for this problem, we implement a variation of the split Bregman algorithm; due to the error-forgetting nature of the algorithm, the computational cost of updating can be significantly reduced via very inexact approximate linear solvers. We present the effectiveness of the algorithm in the significantly low-angular sampling case using synthetic test problems as well as data obtained from a high flux neutron source. The algorithm removes artifacts and can even roughly capture small features when an extremely low number of angles are used.
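
    The following is an editorial illustration of the under-sampled reconstruction problem described above, not the authors' split Bregman solver: filtered back projection (FBP) from very few angles produces streak artifacts, and a total variation penalty (applied here as TV denoising of the FBP image, a crude stand-in for solving the full regularized inverse problem) suppresses them. The phantom, angle count, and TV weight are assumptions.

    ```python
    # Illustration of low-angle-count reconstruction artifacts and TV smoothing,
    # using standard scikit-image tools rather than the record's split Bregman
    # neutron-CT pipeline. Phantom, 18-angle sampling, and TV weight are assumed.
    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, rescale
    from skimage.restoration import denoise_tv_chambolle

    image = rescale(shepp_logan_phantom(), 0.5)           # 200x200 test phantom
    theta = np.linspace(0.0, 180.0, 18, endpoint=False)   # only 18 projection angles
    sinogram = radon(image, theta=theta)

    fbp = iradon(sinogram, theta=theta)                   # streaky filtered back projection
    fbp_tv = denoise_tv_chambolle(fbp, weight=0.1)        # TV penalty suppresses streaks

    def rmse(a, b):
        return np.sqrt(np.mean((a - b) ** 2))

    print(f"FBP RMSE vs. phantom:      {rmse(fbp, image):.4f}")
    print(f"FBP + TV RMSE vs. phantom: {rmse(fbp_tv, image):.4f}")
    ```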

  5. ARL Collaborative Research Alliance Materials in Extreme Dynamic Environments (MEDE)

    DTIC Science & Technology

    2010-11-19

    Program Internal to the CRA; Staff Rotation; Lectures, Workshops, and Research Reviews; Education Opportunities for Government Personnel; Student ... Engagement with ARL Research Environment; Industry Partnership + Collaboration; Other Collaboration Opportunities; High Performance Computing; DoD

  6. Computer simulation of immobilized pH gradients at acidic and alkaline extremes - A quest for extended pH intervals

    NASA Technical Reports Server (NTRS)

    Mosher, Richard A.; Bier, Milan; Righetti, Pier Giorgio

    1986-01-01

    Computer simulations of the concentration profiles of simple biprotic ampholytes with Delta pKs 1, 2, and 3, on immobilized pH gradients (IPG) at extreme pH values (pH 3-4 and pH 10-11) show markedly skewed steady-state profiles with increasing kurtosis at higher Delta pK values. Across neutrality, all the peaks are symmetric irrespective of their Delta pK values, but they show very high contribution to the conductivity of the background gel and significant alteration of the local buffering capacity. The problems of skewness, due to the exponential conductivity profiles at low and high pHs, and of gel burning due to a strong electroosmotic flow generated by the net charges in the gel matrix, also at low and high pHs, are solved by incorporating in the IPG gel a strong viscosity gradient. This is generated by a gradient of linear polyacrylamide which is trapped in the gel by the polymerization process.

  7. Alpha Control - A new Concept in SPM Control

    NASA Astrophysics Data System (ADS)

    Spizig, P.; Sanchen, D.; Volswinkler, G.; Ibach, W.; Koenen, J.

    2006-03-01

    Controlling modern Scanning Probe Microscopes demands highly sophisticated electronics. While flexibility and computing power are of great importance in facilitating the variety of measurement modes, extremely low noise is also a necessity. Accordingly, modern SPM controller designs are based on digital electronics to overcome the drawbacks of analog designs. While today's SPM controllers are based on DSPs or microprocessors and often still incorporate analog parts, we are now introducing a completely new approach: using a Field Programmable Gate Array (FPGA) to implement the digital control tasks allows unrivalled data processing speed by computing all tasks in parallel within a single chip. Time-consuming task switching between data acquisition, digital filtering, scanning and the computing of feedback signals can be completely avoided. Together with a star topology to avoid any bus limitations in accessing the variety of ADCs and DACs, this design guarantees for the first time an entirely deterministic timing capability in the nanosecond regime for all tasks. This becomes especially useful for any external experiments which must be synchronized with the scan or for high speed scans that require not only closed loop control of the scanner, but also dynamic correction of the scan movement. Delicate samples additionally benefit from extremely high sample rates, allowing highly resolved signals and low noise levels.

  8. Extreme value analysis of the time derivative of the horizontal magnetic field and computed electric field

    NASA Astrophysics Data System (ADS)

    Wintoft, Peter; Viljanen, Ari; Wik, Magnus

    2016-05-01

    High-frequency (≈ minutes) variability of ground magnetic fields is caused by ionospheric and magnetospheric processes driven by the changing solar wind. The varying magnetic fields induce electrical fields that cause currents to flow in man-made conductors like power grids and pipelines. Under extreme conditions the geomagnetically induced currents (GIC) may be harmful to the power grids. Increasing our understanding of the extreme events is thus important for solar-terrestrial science and space weather. In this work 1-min resolution of the time derivative of measured local magnetic fields (|dBh/dt|) and computed electrical fields (Eh), for locations in Europe, have been analysed with extreme value analysis (EVA). The EVA results in an estimate of the generalized extreme value probability distribution that is described by three parameters: location, width, and shape. The shape parameter controls the extreme behaviour. The stations cover geomagnetic latitudes from 40 to 70° N. All stations included in the study have contiguous coverage of 18 years or more with 1-min resolution data. As expected, the EVA shows that the higher latitude stations have higher probability of large |dBh/dt| and |Eh| compared to stations further south. However, the EVA also shows that the shape of the distribution changes with magnetic latitude. The high latitudes have distributions that fall off faster to zero than the low latitudes, and upward bounded distributions cannot be ruled out. The transition occurs around 59-61° N magnetic latitudes. Thus, the EVA shows that the observed series north of ≈ 60° N have already measured values that are close to the expected maximum values, while stations south of ≈ 60° N will measure larger values in the future.
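
    To make the extreme value analysis concrete, the sketch below fits a generalized extreme value (GEV) distribution to synthetic annual maxima and reads off a 100-year return level. The data are simulated, not the record's measurements, and scipy parameterizes the shape as c = -ξ relative to the usual GEV convention.

    ```python
    # Minimal sketch of the extreme value analysis described above: fit a GEV
    # distribution (location, scale/width, shape) to block maxima of a synthetic
    # |dB/dt|-like series and estimate a return level. Data are simulated.
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(42)
    # Stand-in for 18 years of 1-min data, reduced to one block maximum per year.
    annual_maxima = rng.gumbel(loc=100.0, scale=20.0, size=18)  # synthetic, nT/min

    c, loc, scale = genextreme.fit(annual_maxima)
    print(f"shape c={c:.3f} (xi={-c:.3f}), location={loc:.1f}, scale={scale:.1f}")

    # 100-year return level: the value exceeded with probability 1/100 per year.
    return_level_100 = genextreme.ppf(1 - 1 / 100, c, loc=loc, scale=scale)
    print(f"estimated 100-year level: {return_level_100:.1f} nT/min")
    ```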

  9. Atomistic material behavior at extreme pressures

    DOE PAGES

    Beland, Laurent K.; Osetskiy, Yury N.; Stoller, Roger E.

    2016-08-05

    Computer simulations are routinely performed to model the response of materials to extreme environments, such as neutron (or ion) irradiation. The latter involves high-energy collisions from which a recoiling atom creates a so-called atomic displacement cascade. These cascades involve coordinated motion of atoms in the form of supersonic shockwaves. These shockwaves are characterized by local atomic pressures >15 GPa and interatomic distances <2 Å. Similar pressures and interatomic distances are observed in other extreme environments, including short-pulse laser ablation, high-impact ballistic collisions and diamond anvil cells. Displacement cascade simulations using four different force fields, with initial kinetic energies ranging from 1 to 40 keV, show that there is a direct relationship between these high-pressure states and stable defect production. An important shortcoming in the modeling of interatomic interactions at these short distances, which in turn determines final defect production, is brought to light.

  10. Properties of high quality GaP single crystals grown by computer controlled liquid encapsulated Czochralski technique

    NASA Astrophysics Data System (ADS)

    Kokubun, Y.; Washizuka, S.; Ushizawa, J.; Watanabe, M.; Fukuda, T.

    1982-11-01

    The properties of GaP single crystals grown by an automatically diameter-controlled liquid encapsulated Czochralski technique using a computer have been studied. A dislocation density of less than 5×10⁴ cm⁻² has been observed for crystals grown in a temperature gradient lower than 70 °C/cm near the solid-liquid interface. The crystals have about 10% higher electron mobility than commercially available coracle-controlled crystals and have compensation ratios of 0.2-0.5. Yellow light-emitting diodes fabricated on computer-controlled (100) substrates have shown an extremely high external quantum efficiency of 0.3%.

  11. High-resolution stochastic generation of extreme rainfall intensity for urban drainage modelling applications

    NASA Astrophysics Data System (ADS)

    Peleg, Nadav; Blumensaat, Frank; Molnar, Peter; Fatichi, Simone; Burlando, Paolo

    2016-04-01

    Urban drainage response is highly dependent on the spatial and temporal structure of rainfall. Therefore, measuring and simulating rainfall at high spatial and temporal resolution is a fundamental step to fully assess urban drainage system reliability and related uncertainties. This is even more relevant when considering extreme rainfall events. However, current space-time rainfall models have limitations in capturing extreme rainfall intensity statistics for short durations. Here, we use the STREAP (Space-Time Realizations of Areal Precipitation) model, a novel stochastic rainfall generator for simulating high-resolution rainfall fields that preserve the spatio-temporal structure of rainfall and its statistical characteristics. The model enables the generation of rain fields at 10² m and minute scales in a fast and computationally efficient way, matching the requirements for hydrological analysis of urban drainage systems. The STREAP model was applied successfully in the past to generate high-resolution extreme rainfall intensities over a small domain. A sub-catchment in the city of Luzern (Switzerland) was chosen as a case study to: (i) evaluate the ability of STREAP to disaggregate extreme rainfall intensities for urban drainage applications; (ii) assess the role of stochastic climate variability of rainfall in flow response; and (iii) evaluate the degree of non-linearity between extreme rainfall intensity and system response (i.e. flow) for a small urban catchment. The channel flow at the catchment outlet is simulated by means of a calibrated hydrodynamic sewer model.

  12. Rapid, high-resolution measurement of leaf area and leaf orientation using terrestrial LiDAR scanning data

    USDA-ARS?s Scientific Manuscript database

    The rapid evolution of high performance computing technology has allowed for the development of extremely detailed models of the urban and natural environment. Although models can now represent sub-meter-scale variability in environmental geometry, model users are often unable to specify the geometr...

  13. Organic Materials For Optical Switching

    NASA Technical Reports Server (NTRS)

    Cardelino, Beatriz H.

    1993-01-01

    Equations predict properties of candidate materials. Report presents results of theoretical study of nonlinear optical properties of organic materials. Such materials could be used in optical switching devices for computers and telecommunications, replacing electronic switches. Optical switching potentially offers extremely high information throughput in compact hardware.

  14. The Need for Optical Means as an Alternative for Electronic Computing

    NASA Technical Reports Server (NTRS)

    Adbeldayem, Hossin; Frazier, Donald; Witherow, William; Paley, Steve; Penn, Benjamin; Bank, Curtis; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    The demand for faster computers is growing rapidly to keep pace with the fast-growing Internet, space communications, and robotics industries. Unfortunately, Very Large Scale Integration technology is approaching its fundamental limits, beyond which devices will be unreliable. Optical interconnections and optical integrated circuits are strongly believed to provide the way out of the extreme limitations imposed on the growth of speed and complexity of today's computations by conventional electronics. This paper demonstrates two ultra-fast, all-optical logic gates and a high-density storage medium, which are essential components in building the future optical computer.

  15. Content Range and Precision of a Computer Adaptive Test of Upper Extremity Function for Children with Cerebral Palsy

    ERIC Educational Resources Information Center

    Montpetit, Kathleen; Haley, Stephen; Bilodeau, Nathalie; Ni, Pengsheng; Tian, Feng; Gorton, George, III; Mulcahey, M. J.

    2011-01-01

    This article reports on the content range and measurement precision of an upper extremity (UE) computer adaptive testing (CAT) platform of physical function in children with cerebral palsy. Upper extremity items representing skills of all abilities were administered to 305 parents. These responses were compared with two traditional standardized…

  16. Temperature extremes: geographic patterns, recent changes, and implications for organismal vulnerabilities.

    PubMed

    Buckley, Lauren B; Huey, Raymond B

    2016-12-01

    Extreme temperatures can injure or kill organisms and can drive evolutionary patterns. Many indices of extremes have been proposed, but few attempts have been made to establish geographic patterns of extremes and to evaluate whether they align with geographic patterns in biological vulnerability and diversity. To examine these issues, we adopt the CLIMDEX indices of thermal extremes. We compute scores for each index on a geographic grid during a baseline period (1961-1990) and separately for the recent period (1991-2010). Heat extremes (temperatures above the 90th percentile during the baseline period) have become substantially more common during the recent period, particularly in the tropics. Importantly, the various indices show weak geographic concordance, implying that organisms in different regions will face different forms of thermal stress. The magnitude of recent shifts in indices is largely uncorrelated with baseline scores in those indices, suggesting that organisms are likely to face novel thermal stresses. Organismal tolerances correlate roughly with absolute metrics (mainly for cold), but poorly with metrics defined relative to local conditions. Regions with high extreme scores do not correlate closely with regions with high species diversity, human population density, or agricultural production. Even though frequency and intensity of extreme temperature events have - and are likely to have - major impacts on organisms, the impacts are likely to be geographically and taxonomically idiosyncratic and difficult to predict. © 2016 John Wiley & Sons Ltd.
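
    As an illustration of a percentile-based heat-extreme index of the kind used above (a sketch, not the CLIMDEX reference implementation; it omits the calendar-day percentile window and uses synthetic temperatures), the following computes the fraction of recent days exceeding the baseline 90th percentile.

```python
# TX90p-style heat-extreme score: fraction of days in the recent period whose
# daily maximum temperature exceeds the 90th percentile of the baseline period.
import numpy as np

rng = np.random.default_rng(1)
baseline = 25 + 5 * rng.standard_normal(30 * 365)   # 1961-1990, synthetic degC
recent   = 26 + 5 * rng.standard_normal(20 * 365)   # 1991-2010, synthetic degC

threshold = np.percentile(baseline, 90)              # baseline 90th percentile
print(f"threshold = {threshold:.2f} degC")
print(f"baseline exceedance fraction: {np.mean(baseline > threshold):.3f}")  # ~0.10 by construction
print(f"recent exceedance fraction:   {np.mean(recent > threshold):.3f}")    # larger under warming
```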

  17. Center for Technology for Advanced Scientific Component Software (TASCS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostadin, Damevski

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit the unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  18. Tempest: Tools for Addressing the Needs of Next-Generation Climate Models

    NASA Astrophysics Data System (ADS)

    Ullrich, P. A.; Guerra, J. E.; Pinheiro, M. C.; Fong, J.

    2015-12-01

    Tempest is a comprehensive simulation-to-science infrastructure that tackles the needs of next-generation, high-resolution, data intensive climate modeling activities. This project incorporates three key components: TempestDynamics, a global modeling framework for experimental numerical methods and high-performance computing; TempestRemap, a toolset for arbitrary-order conservative and consistent remapping between unstructured grids; and TempestExtremes, a suite of detection and characterization tools for identifying weather extremes in large climate datasets. In this presentation, the latest advances with the implementation of this framework will be discussed, and a number of projects now utilizing these tools will be featured.

  19. Shuttle rocket booster computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Chung, T. J.; Park, O. Y.

    1988-01-01

    Additional results and a revised and improved computer program listing from the shuttle rocket booster computational fluid dynamics formulations are presented. Numerical calculations for the flame zone of solid propellants are carried out using the Galerkin finite elements, with perturbations expanded to the zeroth, first, and second orders. The results indicate that amplification of oscillatory motions does indeed prevail in high frequency regions. For the second order system, the trend is similar to the first order system for low frequencies, but instabilities may appear at frequencies lower than those of the first order system. The most significant effect of the second order system is that the admittance is extremely oscillatory between moderately high frequency ranges.

  20. Spatial and temporal accuracy of asynchrony-tolerant finite difference schemes for partial differential equations at extreme scales

    NASA Astrophysics Data System (ADS)

    Kumari, Komal; Donzis, Diego

    2017-11-01

    Highly resolved computational simulations on massively parallel machines are critical in understanding the physics of a vast number of complex phenomena in nature governed by partial differential equations. Simulations at extreme levels of parallelism present many challenges, with communication between processing elements (PEs) being a major bottleneck. In order to fully exploit the computational power of exascale machines, one needs to devise numerical schemes that relax global synchronizations across PEs. Such asynchronous computation, however, has a degrading effect on the accuracy of standard numerical schemes. We have developed asynchrony-tolerant (AT) schemes that maintain their order of accuracy despite relaxed communications. We show, analytically and numerically, that these schemes retain their numerical properties with multi-step higher-order temporal Runge-Kutta schemes. We also show that, for a range of optimized parameters, the computation time and error of AT schemes are lower than those of their synchronous counterparts. The stability of the AT schemes, which depends on the history and random nature of the delays, is also discussed. Support from NSF is gratefully acknowledged.
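
    The accuracy issue can be illustrated with a toy experiment (a sketch only; it shows the degradation caused by stale halo data, not the authors' asynchrony-tolerant correction): an explicit 1-D heat-equation update in which the halo value arriving from a neighbouring PE lags behind by a random number of time steps.

```python
# Toy illustration of how relaxed synchronization degrades a standard scheme:
# FTCS update of u_t = alpha * u_xx on a periodic domain, where the halo value
# feeding grid point 0 may be up to 3 time steps old.
import numpy as np

rng = np.random.default_rng(2)
nx, nt, alpha = 64, 400, 1.0
dx = 1.0 / nx
dt = 0.2 * dx**2 / alpha                     # stable: r = alpha*dt/dx^2 = 0.2
r = alpha * dt / dx**2

x = np.linspace(0.0, 1.0, nx, endpoint=False)
u_sync = np.sin(2 * np.pi * x)
u_async = u_sync.copy()
history = [u_async[-1]] * 4                  # buffer of the neighbour's value

for n in range(nt):
    # Synchronous update: halos are always current.
    u_sync = u_sync + r * (np.roll(u_sync, -1) - 2 * u_sync + np.roll(u_sync, 1))

    # Asynchronous update: identical stencil, but point 0 sees a stale halo.
    delay = rng.integers(0, 4)               # 0..3 steps of staleness
    left = np.roll(u_async, 1)
    left[0] = history[-1 - delay]
    u_async = u_async + r * (np.roll(u_async, -1) - 2 * u_async + left)
    history.append(u_async[-1])

print("max |u_async - u_sync| =", np.max(np.abs(u_async - u_sync)))
```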

  1. Field Scale Monitoring and Modeling of Water and Chemical Transfer in the Vadose Zone

    USDA-ARS?s Scientific Manuscript database

    Natural resource systems involve highly complex interactions of soil-plant-atmosphere-management components that are extremely difficult to quantitatively describe. Computer simulations for prediction and management of watersheds, water supply areas, and agricultural fields and farms have become inc...

  2. A general-purpose computer program for studying ultrasonic beam patterns generated with acoustic lenses

    NASA Technical Reports Server (NTRS)

    Roberti, Dino; Ludwig, Reinhold; Looft, Fred J.

    1988-01-01

    A 3-D computer model of a piston radiator with lenses for focusing and defocusing is presented. To achieve high-resolution imaging, the frequency of the transmitted and received ultrasound must be as high as 10 MHz. Current ultrasonic transducers produce an extremely narrow beam at these high frequencies and thus are not appropriate for imaging schemes such as synthetic-aperture focus techniques (SAFT). Consequently, a numerical analysis program has been developed to determine field intensity patterns that are radiated from ultrasonic transducers with lenses. Lens shapes are described and the field intensities are numerically predicted and compared with experimental results.

  3. BELM: Bayesian extreme learning machine.

    PubMed

    Soria-Olivas, Emilio; Gómez-Sanchis, Juan; Martín, José D; Vila-Francés, Joan; Martínez, Marcelino; Magdalena, José R; Serrano, Antonio J

    2011-03-01

    The theory of the extreme learning machine (ELM) has become very popular over the last few years. ELM is a new approach for learning the parameters of the hidden layers of a multilayer neural network (such as the multilayer perceptron or the radial basis function neural network). Its main advantage is its lower computational cost, which is especially relevant when dealing with many patterns defined in a high-dimensional space. This brief proposes a Bayesian approach to ELM, which presents some advantages over other approaches: it allows the introduction of a priori knowledge; it obtains confidence intervals (CIs) without the need for computationally intensive methods such as the bootstrap; and it presents high generalization capabilities. Bayesian ELM is benchmarked against classical ELM in several artificial and real datasets that are widely used for the evaluation of machine learning algorithms. The achieved results show that the proposed approach produces competitive accuracy with some additional advantages, namely, automatic production of CIs, reduced probability of model overfitting, and use of a priori knowledge.
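
    For readers unfamiliar with the baseline method, here is a minimal sketch of a classical ELM regressor (random, fixed hidden layer; output weights fit by regularized least squares). It is not the authors' Bayesian variant, which, roughly speaking, additionally places a Gaussian prior on the output weights to obtain confidence intervals; all data and hyperparameters below are made up.

```python
# Classical extreme learning machine (ELM) regressor: the hidden-layer weights
# are random and fixed; only the output weights are learned by least squares.
import numpy as np

rng = np.random.default_rng(3)

def elm_fit(X, y, n_hidden=100, reg=1e-3):
    """Return (W, b, beta): random hidden layer and fitted output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                      # hidden-layer activations
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Tiny synthetic regression problem.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
W, b, beta = elm_fit(X, y)
rmse = np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
print("training RMSE:", rmse)
```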

  4. PROPOSED SIAM PROBLEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BAILEY, DAVID H.; BORWEIN, JONATHAN M.

    A recent paper by the present authors, together with mathematical physicists David Broadhurst and M. Larry Glasser, explored Bessel moment integrals, namely definite integrals of the general form ∫_0^∞ t^m f^n(t) dt, where the function f(t) is one of the classical Bessel functions. In that paper, numerous previously unknown analytic evaluations were obtained, using a combination of analytic methods together with some fairly high-powered numerical computations, often performed on highly parallel computers. In several instances, while we were able to numerically discover what appears to be a solid analytic identity, based on extremely high-precision numerical computations, we were unable to find a rigorous proof. Thus we present here a brief list of some of these unproven but numerically confirmed identities.

  5. Risk Factors for Neck and Upper Extremity Disorders among Computers Users and the Effect of Interventions: An Overview of Systematic Reviews

    PubMed Central

    Andersen, Johan H.; Fallentin, Nils; Thomsen, Jane F.; Mikkelsen, Sigurd

    2011-01-01

    Background To summarize systematic reviews that 1) assessed the evidence for causal relationships between computer work and the occurrence of carpal tunnel syndrome (CTS) or upper extremity musculoskeletal disorders (UEMSDs), or 2) reported on intervention studies among computer users/or office workers. Methodology/Principal Findings PubMed, Embase, CINAHL and Web of Science were searched for reviews published between 1999 and 2010. Additional publications were provided by content area experts. The primary author extracted all data using a purpose-built form, while two of the authors evaluated the quality of the reviews using recommended standard criteria from AMSTAR; disagreements were resolved by discussion. The quality of evidence syntheses in the included reviews was assessed qualitatively for each outcome and for the interventions. Altogether, 1,349 review titles were identified, 47 reviews were retrieved for full text relevance assessment, and 17 reviews were finally included as being relevant and of sufficient quality. The degrees of focus and rigorousness of these 17 reviews were highly variable. Three reviews on risk factors for carpal tunnel syndrome were rated moderate to high quality, 8 reviews on risk factors for UEMSDs ranged from low to moderate/high quality, and 6 reviews on intervention studies were of moderate to high quality. The quality of the evidence for computer use as a risk factor for CTS was insufficient, while the evidence for computer use and UEMSDs was moderate regarding pain complaints and limited for specific musculoskeletal disorders. From the reviews on intervention studies no strong evidence based recommendations could be given. Conclusions/Significance Computer use is associated with pain complaints, but it is still not very clear if this association is causal. The evidence for specific disorders or diseases is limited. No effective interventions have yet been documented. PMID:21589875

  6. Extremely broadband, on-chip optical nonreciprocity enabled by mimicking nonlinear anti-adiabatic quantum jumps near exceptional points

    NASA Astrophysics Data System (ADS)

    Choi, Youngsun; Hahn, Choloong; Yoon, Jae Woong; Song, Seok Ho; Berini, Pierre

    2017-01-01

    Time-asymmetric state-evolution properties while encircling an exceptional point are presently of great interest in search of new principles for controlling atomic and optical systems. Here, we show that encircling-an-exceptional-point interactions that are essentially reciprocal in the linear interaction regime make a plausible nonlinear integrated optical device architecture highly nonreciprocal over an extremely broad spectrum. In the proposed strategy, we describe an experimentally realizable coupled-waveguide structure that supports an encircling-an-exceptional-point parametric evolution under the influence of a gain saturation nonlinearity. Using an intuitive time-dependent Hamiltonian and rigorous numerical computations, we demonstrate strictly nonreciprocal optical transmission with a forward-to-backward transmission ratio exceeding 10 dB and high forward transmission efficiency (~100%) persisting over an extremely broad bandwidth approaching 100 THz. This predicted performance strongly encourages experimental realization of the proposed concept to establish a practical on-chip optical nonreciprocal element for ultra-short laser pulses and broadband high-density optical signal processing.

  7. The Mathematical and Computer Aided Analysis of the Contact Stress of the Surface With 4th Order

    NASA Astrophysics Data System (ADS)

    Huran, Liu

    Inspired by heavily loaded power-transmission gears in metallurgical industry service that show serious plastic deformation, we believe there must exist a gear tooth profile that is most favorable with respect to both contact and bending fatigue strength. Careful analysis and in-depth investigation suggest that this is a profile of equal conjugate curvature with a high order of contact, and we analyze the forming principle of this kind of profile. Based on second-order curves and a comparative analysis of fourth-order curves, combined with Chebyshev polynomial terms, a contact stress formula for tooth profiles with higher-order contact is derived. The two extreme points and extreme positions of the contact stress are identified, and a formula for the extreme contact stress is derived. Finally, the contact stress is calculated for a specific pair of conjugate gear tooth profiles with this curvature.

  8. A low cost, high precision extreme/harsh cold environment, autonomous sensor data gathering and transmission platform.

    NASA Astrophysics Data System (ADS)

    Chetty, S.; Field, L. A.

    2014-12-01

    SWIMS III is a low-cost, autonomous sensor data gathering platform developed specifically for extreme/harsh cold environments. The Arctic Ocean's continuing decrease in summer-time ice is related to the rapid loss of multi-year ice due to the effects of climate change. Ice911 Research aims to develop environmentally inert materials that, when deployed, will increase the albedo, enabling the formation and/or preservation of multi-year ice. SWIMS III's sophisticated autonomous sensors are designed to measure the albedo, weather, water temperature, and other environmental parameters. The platform uses low-cost, high-accuracy/precision sensors and an extreme-environment command and data handling computer system with satellite and terrestrial wireless communications. The system also incorporates tilt sensors and sonar-based ice thickness sensors. It is lightweight and can be deployed by hand by a single person. This presentation covers the technical and design challenges in developing and deploying these platforms.

  9. Precision measurements and computations of transition energies in rotationally cold triatomic hydrogen ions up to the midvisible spectral range.

    PubMed

    Pavanello, Michele; Adamowicz, Ludwik; Alijah, Alexander; Zobov, Nikolai F; Mizus, Irina I; Polyansky, Oleg L; Tennyson, Jonathan; Szidarovszky, Tamás; Császár, Attila G; Berg, Max; Petrignani, Annemieke; Wolf, Andreas

    2012-01-13

    First-principles computations and experimental measurements of transition energies are carried out for vibrational overtone lines of the triatomic hydrogen ion H₃⁺ corresponding to floppy vibrations high above the barrier to linearity. Action spectroscopy is improved to detect extremely weak visible-light spectral lines on cold trapped H₃⁺ ions. A highly accurate potential surface is obtained from variational calculations using explicitly correlated Gaussian wave function expansions. After nonadiabatic corrections, the floppy H₃⁺ vibrational spectrum is reproduced at the 0.1 cm⁻¹ level up to 16 600 cm⁻¹.

  10. Equality between gravitational and electromagnetic absorption cross sections of extreme Reissner-Nordstroem black holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliveira, Ednilton S.; Crispino, Luis C. B.; Higuchi, Atsushi

    2011-10-15

    The absorption cross section of Reissner-Nordstroem black holes for the gravitational field is computed numerically, taking into account the coupling of the electromagnetic and gravitational perturbations. Our results are in excellent agreement with low- and high-frequency approximations. We find equality between gravitational and electromagnetic absorption cross sections of extreme Reissner-Nordstroem black holes for all frequencies, which we explain analytically. This gives the first example of objects in general relativity in four dimensions that absorb the electromagnetic and gravitational waves in exactly the same way.

  11. Binary-selectable detector holdoff circuit

    NASA Technical Reports Server (NTRS)

    Kadrmas, K. A.

    1974-01-01

    A high-speed switching circuit protects detectors from sudden, extremely intense backscattered radiation that results from short-range atmospheric dust layers, or low-level clouds, entering the laser/radar field of view. The function of the circuit is to provide computer-controlled switching of the photodiode detector and preamplifier power-supply voltages in approximately 10 nanoseconds.

  12. 2016 Final Reports from the Los Alamos National Laboratory Computational Physics Student Summer Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Runnels, Scott Robert; Bachrach, Harrison Ian; Carlson, Nils

    The two primary purposes of LANL's Computational Physics Student Summer Workshop are (1) to educate graduate and exceptional undergraduate students in the challenges and applications of computational physics of interest to LANL, and (2) to entice their interest toward those challenges. Computational physics is emerging as a discipline in its own right, combining expertise in mathematics, physics, and computer science. The mathematical aspects focus on numerical methods for solving equations on the computer as well as developing test problems with analytical solutions. The physics aspects are very broad, ranging from low-temperature material modeling to extremely high temperature plasma physics, radiation transport, and neutron transport. The computer science issues are concerned with matching numerical algorithms to emerging architectures and maintaining the quality of extremely large codes built to perform multi-physics calculations. Although graduate programs associated with computational physics are emerging, it is apparent that the pool of U.S. citizens in this multi-disciplinary field is relatively small and is typically not focused on the aspects that are of primary interest to LANL. Furthermore, more structured foundations for LANL interaction with universities in computational physics are needed; historically, interactions rely heavily on individuals' personalities and personal contacts. Thus a tertiary purpose of the Summer Workshop is to build an educational network of LANL researchers, university professors, and emerging students to advance the field and LANL's involvement in it.

  13. Relations between work and upper extremity musculoskeletal problems (UEMSP) and the moderating role of psychosocial work factors on the relation between computer work and UEMSP.

    PubMed

    Nicolakakis, Nektaria; Stock, Susan R; Abrahamowicz, Michal; Kline, Rex; Messing, Karen

    2017-11-01

    Computer work has been identified as a risk factor for upper extremity musculoskeletal problems (UEMSP). But few studies have investigated how psychosocial and organizational work factors affect this relation. Nor have gender differences in the relation between UEMSP and these work factors  been studied. We sought to estimate: (1) the association between UEMSP and a range of physical, psychosocial and organizational work exposures, including the duration of computer work, and (2) the moderating effect of psychosocial work exposures on the relation between computer work and UEMSP. Using 2007-2008 Québec survey data on 2478 workers, we carried out gender-stratified multivariable logistic regression modeling and two-way interaction analyses. In both genders, odds of UEMSP were higher with exposure to high physical work demands and emotionally demanding work. Additionally among women, UEMSP were associated with duration of occupational computer exposure, sexual harassment, tense situations when dealing with clients, high quantitative demands and lack of prospects for promotion, and among men, with low coworker support, episodes of unemployment, low job security and contradictory work demands. Among women, the effect of computer work on UEMSP was considerably increased in the presence of emotionally demanding work, and may also be moderated by low recognition at work, contradictory work demands, and low supervisor support. These results suggest that the relations between UEMSP and computer work are moderated by psychosocial work exposures and that the relations between working conditions and UEMSP are somewhat different for each gender, highlighting the complexity of these relations and the importance of considering gender.

  14. Joint probabilities of extreme precipitation and wind gusts in Germany

    NASA Astrophysics Data System (ADS)

    von Waldow, H.; Martius, O.

    2012-04-01

    Extreme meteorological events such as storms, heavy rain, floods, droughts and heat waves can have devastating consequences for human health, infrastructure and ecosystems. Concomitantly occurring extreme events might interact synergistically to produce a particularly hazardous impact. The joint occurrence of droughts and heat waves, for example, can have a very different impact on human health and ecosystems both in quantity and quality, than just one of the two extreme events. The co-occurrence of certain types of extreme events is plausible from physical and dynamical considerations, for example heavy precipitation and high wind speeds in the pathway of strong extratropical cyclones. The winter storm Kyrill not only caused wind gust speeds well in excess of 30 m/s across Europe, but also brought 24 h precipitation sums greater than the mean January accumulations in some regions. However, the existence of such compound risks is currently not accounted for by insurance companies, who assume independence of extreme weather events to calculate their premiums. While there are established statistical methods to model the extremes of univariate meteorological variables, the modelling of multidimensional extremes calls for an approach that is tailored to the specific problem at hand. A first step involves defining extreme bivariate wind/precipitation events. Because precipitation and wind gusts caused by the same cyclone or convective cell do not occur at exactly the same location and at the same time, it is necessary to find a sound definition of "extreme compound event" for this case. We present a data driven method to choose appropriate time and space intervals that define "concomitance" for wind and precipitation extremes. Based on station data of wind speed and gridded precipitation data, we arrive at time and space intervals that compare well with the typical time and space scales of extratropical cyclones, i.e. a maximum time lag of 1 day and a maximum distance of about 300 km between associated wind and rain events. After modelling extreme precipitation and wind separately, we explore the practicability of characterising their joint distribution using a bivariate threshold excess model. In particular, we present different dependence measures and report about the computational feasibility and available computer codes.
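
    As a hedged sketch of the concomitance definition arrived at above (roughly a 1-day maximum lag and about 300 km maximum separation), the snippet below pairs wind-gust and precipitation extremes that fall within those windows; the event lists, coordinates, and thresholds are invented for illustration and are not the study's data.

```python
# Pair "concomitant" wind and precipitation extremes: two events form a
# compound event if they occur within MAX_LAG_DAYS and MAX_DIST_KM of each
# other. Event lists below are hypothetical (day index, latitude, longitude).
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

wind_events = [(10, 50.1, 8.7), (42, 52.5, 13.4), (90, 48.1, 11.6)]
rain_events = [(11, 51.0, 9.9), (60, 53.6, 10.0), (90, 48.8, 9.2)]

MAX_LAG_DAYS, MAX_DIST_KM = 1, 300
compound = [(wd, rd)
            for wd, wlat, wlon in wind_events
            for rd, rlat, rlon in rain_events
            if abs(wd - rd) <= MAX_LAG_DAYS
            and haversine_km(wlat, wlon, rlat, rlon) <= MAX_DIST_KM]

print("compound (wind day, rain day) pairs:", compound)
```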

  15. Diagnostic performance and radiation dose of lower extremity CT angiography using a 128-slice dual source CT at 80 kVp and high pitch.

    PubMed

    Kim, Jin Woo; Choo, Ki Seok; Jeon, Ung Bae; Kim, Tae Un; Hwang, Jae Yeon; Yeom, Jeong A; Jeong, Hee Seok; Choi, Yoon Young; Nam, Kyung Jin; Kim, Chang Won; Jeong, Dong Wook; Lim, Soo Jin

    2016-07-01

    Multi-detector computed tomography (MDCT) angiography is now used for diagnosing patients with peripheral arterial disease. The radiation dose is related to several factors, such as tube current, tube voltage, and helical pitch. The aim was to assess the diagnostic performance and radiation dose of lower extremity CT angiography (CTA) using a 128-slice dual source CT at 80 kVp and high pitch in patients with critical limb ischemia (CLI). Twenty-eight patients (mean age, 64.1 years; range, 39-80 years) with CLI were enrolled in this retrospective study and underwent CTA using a 128-slice dual source CT at 80 kVp and high pitch, followed by intra-arterial digital subtraction angiography (DSA), which was used as the reference standard for assessing diagnostic performance. For arterial segments with significant disease (>50% stenosis), the overall sensitivity, specificity, and accuracy of lower extremity CTA were 94.8% (95% CI, 91.7-98.0%), 91.5% (95% CI, 87.7-95.2%), and 93.1% (95% CI, 90.6-95.6%), respectively, and its positive and negative predictive values were 91.0% (95% CI, 87.1-95.0%) and 95.1% (95% CI, 92.1-98.1%), respectively. The mean radiation dose delivered to the lower extremities was 266.6 mGy·cm. Lower extremity CTA using a 128-slice dual source CT at 80 kVp and high pitch was found to have good diagnostic performance for the assessment of patients with CLI at an extremely low radiation dose. © The Foundation Acta Radiologica 2015.
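
    For reference, the performance measures reported above follow the standard confusion-matrix definitions; the sketch below uses made-up counts (not the study's data) to show how sensitivity, specificity, accuracy, PPV, and NPV are computed.

```python
# Standard confusion-matrix definitions of the reported diagnostic measures.
def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),                   # true positive rate
        "specificity": tn / (tn + fp),                   # true negative rate
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "ppv":         tp / (tp + fp),                   # positive predictive value
        "npv":         tn / (tn + fn),                   # negative predictive value
    }

# Illustrative counts only.
print(diagnostic_metrics(tp=182, fp=18, tn=204, fn=10))
```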

  16. Gravitational Waves From the Kerr/CFT Correspondence

    NASA Astrophysics Data System (ADS)

    Porfyriadis, Achilleas

    Astronomical observation suggests the existence of near-extreme Kerr black holes in the sky. Properties of diffeomorphisms imply that dynamics of the near-horizon region of near-extreme Kerr are governed by an infinite-dimensional conformal symmetry. This symmetry may be exploited to analytically, rather than numerically, compute a variety of potentially observable processes. In this thesis we compute the gravitational radiation emitted by a small compact object that orbits in the near-horizon region and plunges into the horizon of a large rapidly rotating black hole. We study the holographically dual processes in the context of the Kerr/CFT correspondence and find our conformal field theory (CFT) computations in perfect agreement with the gravity results. We compute the radiation emitted by a particle on the innermost stable circular orbit (ISCO) of a rapidly spinning black hole. We confirm previous estimates of the overall scaling of the power radiated, but show that there are also small oscillations all the way to extremality. Furthermore, we reveal an intricate mode-by-mode structure in the flux to infinity, with only certain modes having the dominant scaling. The scaling of each mode is controlled by its conformal weight. Massive objects in adiabatic quasi-circular inspiral towards a near-extreme Kerr black hole quickly plunge into the horizon after passing the ISCO. The post-ISCO plunge trajectory is shown to be related by a conformal map to a circular orbit. Conformal symmetry of the near-horizon region is then used to compute analytically the gravitational radiation produced during the plunge phase. Most extreme-mass-ratio-inspirals of small compact objects into supermassive black holes end with a fast plunge from an eccentric last stable orbit. We use conformal transformations to analytically solve for the radiation emitted from various fast plunges into extreme and near-extreme Kerr black holes.

  17. Optical Computers and Space Technology

    NASA Technical Reports Server (NTRS)

    Abdeldayem, Hossin A.; Frazier, Donald O.; Penn, Benjamin; Paley, Mark S.; Witherow, William K.; Banks, Curtis; Hicks, Rosilen; Shields, Angela

    1995-01-01

    The rapidly increasing demand for greater speed and efficiency on the information superhighway requires significant improvements over conventional electronic logic circuits. Optical interconnections and optical integrated circuits are strong candidates to provide the way out of the extreme limitations imposed on the growth of speed and complexity of today's computations by conventional electronic logic circuits. The new optical technology has increased the demand for high quality optical materials. NASA's recent involvement in processing optical materials in space has demonstrated that a new and unique class of high quality optical materials is processable in a microgravity environment. Microgravity processing can induce improved order in these materials and could have a significant impact on the development of optical computers. We will discuss NASA's role in processing these materials and report on some of the associated nonlinear optical properties, which are quite useful for optical computer technology.

  18. Computational discovery of extremal microstructure families

    PubMed Central

    Chen, Desai; Skouras, Mélina; Zhu, Bo; Matusik, Wojciech

    2018-01-01

    Modern fabrication techniques, such as additive manufacturing, can be used to create materials with complex custom internal structures. These engineered materials exhibit a much broader range of bulk properties than their base materials and are typically referred to as metamaterials or microstructures. Although metamaterials with extraordinary properties have many applications, designing them is very difficult and is generally done by hand. We propose a computational approach to discover families of microstructures with extremal macroscale properties automatically. Using efficient simulation and sampling techniques, we compute the space of mechanical properties covered by physically realizable microstructures. Our system then clusters microstructures with common topologies into families. Parameterized templates are eventually extracted from families to generate new microstructure designs. We demonstrate these capabilities on the computational design of mechanical metamaterials and present five auxetic microstructure families with extremal elastic material properties. Our study opens the way for the completely automated discovery of extremal microstructures across multiple domains of physics, including applications reliant on thermal, electrical, and magnetic properties. PMID:29376124

  19. Extreme events in total ozone over Arosa - Part 1: Application of extreme value theory

    NASA Astrophysics Data System (ADS)

    Rieder, H. E.; Staehelin, J.; Maeder, J. A.; Peter, T.; Ribatet, M.; Davison, A. C.; Stübi, R.; Weihs, P.; Holawe, F.

    2010-10-01

    In this study ideas from extreme value theory are for the first time applied in the field of stratospheric ozone research, because statistical analysis showed that previously used concepts assuming a Gaussian distribution (e.g. fixed deviations from mean values) of total ozone data do not adequately address the structure of the extremes. We show that statistical extreme value methods are appropriate to identify ozone extremes and to describe the tails of the Arosa (Switzerland) total ozone time series. In order to accommodate the seasonal cycle in total ozone, a daily moving threshold was determined and used, with tools from extreme value theory, to analyse the frequency of days with extreme low (termed ELOs) and high (termed EHOs) total ozone at Arosa. The analysis shows that the Generalized Pareto Distribution (GPD) provides an appropriate model for the frequency distribution of total ozone above or below a mathematically well-defined threshold, thus providing a statistical description of ELOs and EHOs. The results show an increase in ELOs and a decrease in EHOs during the last decades. The fitted model represents the tails of the total ozone data set with high accuracy over the entire range (including absolute monthly minima and maxima), and enables a precise computation of the frequency distribution of ozone mini-holes (using constant thresholds). Analyzing the tails instead of a small fraction of days below constant thresholds provides deeper insight into the time series properties. Fingerprints of dynamical (e.g. ENSO, NAO) and chemical features (e.g. strong polar vortex ozone loss), and major volcanic eruptions, can be identified in the observed frequency of extreme events throughout the time series. Overall the new approach to analysis of extremes provides more information on time series properties and variability than previous approaches that use only monthly averages and/or mini-holes and mini-highs.
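
    As a minimal sketch of the peaks-over-threshold approach described above (not the authors' analysis), the following fits a Generalized Pareto Distribution to exceedances of a high threshold; the synthetic ozone series and the fixed threshold, used here in place of the daily moving threshold, are assumptions.

```python
# Fit a Generalized Pareto Distribution (GPD) to threshold exceedances,
# in the spirit of the EHO analysis above. Data are synthetic.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(4)
ozone = 330 + 40 * rng.standard_normal(20000)        # synthetic daily total ozone, DU

threshold = np.quantile(ozone, 0.99)                  # high-ozone threshold
excesses = ozone[ozone > threshold] - threshold       # threshold exceedances

# Location is fixed at 0 because excesses are measured from the threshold.
shape, loc, scale = genpareto.fit(excesses, floc=0.0)
print(f"GPD shape={shape:.3f}, scale={scale:.1f}, n_exceedances={excesses.size}")

# Probability that an exceedance lies more than 30 DU above the threshold.
print("P(excess > 30 DU) =", genpareto.sf(30.0, shape, loc=0.0, scale=scale))
```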

  20. Extreme events in total ozone over Arosa - Part 1: Application of extreme value theory

    NASA Astrophysics Data System (ADS)

    Rieder, H. E.; Staehelin, J.; Maeder, J. A.; Peter, T.; Ribatet, M.; Davison, A. C.; Stübi, R.; Weihs, P.; Holawe, F.

    2010-05-01

    In this study ideas from extreme value theory are for the first time applied in the field of stratospheric ozone research, because statistical analysis showed that previously used concepts assuming a Gaussian distribution (e.g. fixed deviations from mean values) of total ozone data do not adequately address the structure of the extremes. We show that statistical extreme value methods are appropriate to identify ozone extremes and to describe the tails of the Arosa (Switzerland) total ozone time series. In order to accommodate the seasonal cycle in total ozone, a daily moving threshold was determined and used, with tools from extreme value theory, to analyse the frequency of days with extreme low (termed ELOs) and high (termed EHOs) total ozone at Arosa. The analysis shows that the Generalized Pareto Distribution (GPD) provides an appropriate model for the frequency distribution of total ozone above or below a mathematically well-defined threshold, thus providing a statistical description of ELOs and EHOs. The results show an increase in ELOs and a decrease in EHOs during the last decades. The fitted model represents the tails of the total ozone data set with high accuracy over the entire range (including absolute monthly minima and maxima), and enables a precise computation of the frequency distribution of ozone mini-holes (using constant thresholds). Analyzing the tails instead of a small fraction of days below constant thresholds provides deeper insight into the time series properties. Fingerprints of dynamical (e.g. ENSO, NAO) and chemical features (e.g. strong polar vortex ozone loss), and major volcanic eruptions, can be identified in the observed frequency of extreme events throughout the time series. Overall the new approach to analysis of extremes provides more information on time series properties and variability than previous approaches that use only monthly averages and/or mini-holes and mini-highs.

  1. Computer work and musculoskeletal disorders of the neck and upper extremity: A systematic review

    PubMed Central

    2010-01-01

    Background This review examines the evidence for an association between computer work and neck and upper extremity disorders (except carpal tunnel syndrome). Methods A systematic critical review of studies of computer work and musculoskeletal disorders verified by a physical examination was performed. Results A total of 22 studies (26 articles) fulfilled the inclusion criteria. Results show limited evidence for a causal relationship between computer work per se, computer mouse and keyboard time related to a diagnosis of wrist tendonitis, and for an association between computer mouse time and forearm disorders. Limited evidence was also found for a causal relationship between computer work per se and computer mouse time related to tension neck syndrome, but the evidence for keyboard time was insufficient. Insufficient evidence was found for an association between other musculoskeletal diagnoses of the neck and upper extremities, including shoulder tendonitis and epicondylitis, and any aspect of computer work. Conclusions There is limited epidemiological evidence for an association between aspects of computer work and some of the clinical diagnoses studied. None of the evidence was considered as moderate or strong and there is a need for more and better documentation. PMID:20429925

  2. Rotating Desk for Collaboration by Two Computer Programmers

    NASA Technical Reports Server (NTRS)

    Riley, John Thomas

    2005-01-01

    A special-purpose desk has been designed to facilitate collaboration by two computer programmers sharing one desktop computer or computer terminal. The impetus for the design is a trend toward what is known in the software industry as extreme programming, an approach intended to ensure high quality without sacrificing the quantity of computer code produced. Programmers working in pairs is a major feature of extreme programming. The present desk design minimizes the stress of the collaborative work environment. It supports both quality and work flow by making it unnecessary for programmers to get in each other's way. The desk (see figure) includes a rotating platform that supports a computer video monitor, keyboard, and mouse. The desk enables one programmer to work on the keyboard for any amount of time and then the other programmer to take over without breaking the train of thought. The rotating platform is supported by a turntable bearing that, in turn, is supported by a weighted base. The platform contains weights to improve its balance. The base includes a stand for a computer, and is shaped and dimensioned to provide adequate foot clearance for both users. The platform includes an adjustable stand for the monitor, a surface for the keyboard and mouse, and spaces for work papers, drinks, and snacks. The heights of the monitor, keyboard, and mouse are set to minimize stress. The platform can be rotated through an angle of 40° to give either user a straight-on view of the monitor and full access to the keyboard and mouse. Magnetic latches keep the platform preferentially at either of the two extremes of rotation. To switch between users, one simply grabs the edge of the platform and pulls it around. The magnetic latch is easily released, allowing the platform to rotate freely to the position of the other user.

  3. Hot spots of multivariate extreme anomalies in Earth observations

    NASA Astrophysics Data System (ADS)

    Flach, M.; Sippel, S.; Bodesheim, P.; Brenning, A.; Denzler, J.; Gans, F.; Guanche, Y.; Reichstein, M.; Rodner, E.; Mahecha, M. D.

    2016-12-01

    Anomalies in Earth observations might indicate data quality issues, extremes, or a change in the underlying processes within a highly multivariate system. Thus, considering the multivariate constellation of variables for extreme detection yields crucial additional information over conventional univariate approaches. We highlight areas in which multivariate extreme anomalies are more likely to occur, i.e. hot spots of extremes in global atmospheric Earth observations that impact the Biosphere. In addition, we present the year of the most unusual multivariate extreme between 2001 and 2013 and show that these coincide with well-known high-impact extremes. Technically speaking, we account for multivariate extremes by using three sophisticated algorithms adapted from computer science applications, namely an ensemble of the k-nearest-neighbours mean distance, kernel density estimation, and an approach based on recurrences. However, the impact of atmosphere extremes on the Biosphere might largely depend on what is considered to be normal, i.e. the shape of the mean seasonal cycle and its inter-annual variability. We identify regions with similar mean seasonality by means of dimensionality reduction in order to estimate, in each region, both the 'normal' variance and robust thresholds for detecting the extremes. In addition, we account for challenges like heteroscedasticity in northern latitudes. Apart from hot spot areas, those anomalies in the atmosphere time series are of particular interest which can only be detected by a multivariate approach and not by a simple univariate approach. Such an anomalous constellation of atmosphere variables is of interest if it impacts the Biosphere. The multivariate constellation of such an anomalous part of a time series is shown in one case study, indicating that multivariate anomaly detection can provide novel insights into Earth observations.
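
    As a hedged sketch of one of the detectors named above, the k-nearest-neighbour mean-distance score, the snippet below flags observations whose average distance to their k nearest neighbours is unusually large; the synthetic data, k, and flagging quantile are assumptions, and the kernel-density and recurrence detectors are not shown.

```python
# k-nearest-neighbour mean-distance anomaly score on standardized multivariate
# observations: points far from their k nearest neighbours are flagged as
# candidate multivariate extremes.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(5)
X = rng.standard_normal((500, 4))          # 500 time steps, 4 atmospheric variables
X[:5] += 6.0                               # inject a few anomalous constellations

D = cdist(X, X)                            # pairwise Euclidean distances
np.fill_diagonal(D, np.inf)                # exclude self-distances
k = 10
knn_mean_dist = np.sort(D, axis=1)[:, :k].mean(axis=1)

# Flag points whose score exceeds a high empirical quantile.
flagged = np.where(knn_mean_dist > np.quantile(knn_mean_dist, 0.99))[0]
print("flagged indices:", flagged)
```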

  4. Facilitating Co-Design for Extreme-Scale Systems Through Lightweight Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelmann, Christian; Lauer, Frank

    This work focuses on tools for investigating algorithm performance at extreme scale with millions of concurrent threads and for evaluating the impact of future architecture choices to facilitate the co-design of high-performance computing (HPC) architectures and applications. The approach focuses on lightweight simulation of extreme-scale HPC systems with the needed amount of accuracy. The prototype presented in this paper is able to provide this capability using parallel discrete event simulation (PDES), such that a Message Passing Interface (MPI) application can be executed at extreme scale and its performance properties can be evaluated. The results of an initial prototype are encouraging, as a simple 'hello world' MPI program could be scaled up to 1,048,576 virtual MPI processes on a four-node cluster, and the performance properties of two MPI programs could be evaluated at up to 16,384 virtual MPI processes on the same system.

  5. [The informative value of the functional step test for the purpose of computed optical topography in the children presenting with the functional disorders of the musculoskeletal system].

    PubMed

    Trukhmanov, I M; Suslova, G A; Ponomarenko, G N

    This paper characterizes the informative value of the functional step test with the application of heel cushions in children for the purpose of differential diagnostics of anatomic and functional differences in the length of the lower extremities. A total of 85 schoolchildren with differing lower extremity lengths were examined. A comparative evaluation of the results of clinical and instrumental examinations was undertaken. The data obtained with the help of the functional step test give evidence of its very high sensitivity, specificity, and clinical significance as a tool for the examination of children with differing lower extremity lengths. It is concluded that the test is one of the most informative predictors of the effectiveness of rehabilitation in children with differing lower extremity lengths.

  6. Pen-based computers: Computers without keys

    NASA Technical Reports Server (NTRS)

    Conklin, Cheryl L.

    1994-01-01

    The National Space Transportation System (NSTS) is comprised of many diverse and highly complex systems incorporating the latest technologies. Data collection associated with ground processing of the various Space Shuttle system elements is extremely challenging because of the many separate processing locations where data are generated. This presents a significant problem when timely collection, transfer, collation, and storage of data are required. This paper describes how a new technology, referred to as pen-based computers, is being used to transform the data collection process at Kennedy Space Center (KSC). Pen-based computers have streamlined procedures, increased data accuracy, and now provide more complete information than previous methods. The end result is the elimination of Shuttle processing delays associated with data deficiencies.

  7. Exploring the Feasibility of a DNA Computer: Design of an ALU Using Sticker-Based DNA Model.

    PubMed

    Sarkar, Mayukh; Ghosal, Prasun; Mohanty, Saraju P

    2017-09-01

    Since its inception, DNA computing has advanced to offer an extremely powerful, energy-efficient emerging technology for solving hard computational problems, with its inherent massive parallelism and extremely high data density. It would be much more powerful and general purpose if combined, through a suitable ALU, with the well-known algorithmic solutions that already exist for conventional computing architectures. Thus, a specifically designed DNA Arithmetic and Logic Unit (ALU) that can address operations suitable for both domains can bridge the gap between the two. An ALU must be able to perform all possible logic operations, including NOT, OR, AND, XOR, NOR, NAND, and XNOR; compare and shift operations; and integer and floating point arithmetic operations (addition, subtraction, multiplication, and division). In this paper, the design of an ALU is proposed using the sticker-based DNA model, with an experimental feasibility analysis. The novelties of this paper are manifold. First, the integer arithmetic operations performed here use 2's complement arithmetic, and the floating point operations follow the IEEE 754 floating point format, closely resembling a conventional ALU. Also, the output of each operation can be reused for any subsequent operation, so any algorithm or program logic that users can think of can be implemented directly on the DNA computer without modification. Second, once the basic operations of the sticker model can be automated, the implementations proposed in this paper become highly suitable for designing a fully automated ALU. Third, the proposed approaches are easy to implement. Finally, these approaches can work on sufficiently large binary numbers.
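
    To clarify the target semantics, the sketch below illustrates, in ordinary software, the 2's complement integer behaviour and IEEE 754 single-precision bit layout that the proposed DNA ALU is specified to reproduce; it does not model the sticker-based DNA operations themselves.

```python
# Conventional semantics the DNA ALU is designed to mirror: 2's complement
# integer arithmetic (with wrap-around) and IEEE 754 floating point encoding.
import struct

def twos_complement_add(a, b, bits=8):
    """Add two signed integers modulo 2**bits and reinterpret the result."""
    mask = (1 << bits) - 1
    raw = (a + b) & mask                       # wrap-around, as a fixed-width ALU would
    return raw - (1 << bits) if raw & (1 << (bits - 1)) else raw

print(twos_complement_add(100, 50))            # 150 overflows 8 bits -> -106

def float32_bits(x):
    """Return the IEEE 754 single-precision bit pattern of x as a string."""
    (raw,) = struct.unpack(">I", struct.pack(">f", x))
    return f"{raw:032b}"

print(float32_bits(0.15625))                   # sign 0, exponent 01111100, fraction 01...0
```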

  8. 2015 Final Reports from the Los Alamos National Laboratory Computational Physics Student Summer Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Runnels, Scott Robert; Caldwell, Wendy; Brown, Barton Jed

    The two primary purposes of LANL's Computational Physics Student Summer Workshop are (1) to educate graduate and exceptional undergraduate students in the challenges and applications of computational physics of interest to LANL, and (2) to entice their interest toward those challenges. Computational physics is emerging as a discipline in its own right, combining expertise in mathematics, physics, and computer science. The mathematical aspects focus on numerical methods for solving equations on the computer as well as developing test problems with analytical solutions. The physics aspects are very broad, ranging from low-temperature material modeling to extremely high temperature plasma physics, radiation transport, and neutron transport. The computer science issues are concerned with matching numerical algorithms to emerging architectures and maintaining the quality of extremely large codes built to perform multi-physics calculations. Although graduate programs associated with computational physics are emerging, it is apparent that the pool of U.S. citizens in this multi-disciplinary field is relatively small and is typically not focused on the aspects that are of primary interest to LANL. Furthermore, more structured foundations for LANL interaction with universities in computational physics are needed; historically, interactions rely heavily on individuals' personalities and personal contacts. Thus a tertiary purpose of the Summer Workshop is to build an educational network of LANL researchers, university professors, and emerging students to advance the field and LANL's involvement in it. This report includes both the background for the program and the reports from the students.

  9. Extreme Programming: Maestro Style

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey; Fox, Jason; Rabe, Kenneth; Shu, I-Hsiang; Powell, Mark

    2009-01-01

    "Extreme Programming: Maestro Style" is the name of a computer programming methodology that has evolved as a custom version of a methodology, called extreme programming that has been practiced in the software industry since the late 1990s. The name of this version reflects its origin in the work of the Maestro team at NASA's Jet Propulsion Laboratory that develops software for Mars exploration missions. Extreme programming is oriented toward agile development of software resting on values of simplicity, communication, testing, and aggressiveness. Extreme programming involves use of methods of rapidly building and disseminating institutional knowledge among members of a computer-programming team to give all the members a shared view that matches the view of the customers for whom the software system is to be developed. Extreme programming includes frequent planning by programmers in collaboration with customers, continually examining and rewriting code in striving for the simplest workable software designs, a system metaphor (basically, an abstraction of the system that provides easy-to-remember software-naming conventions and insight into the architecture of the system), programmers working in pairs, adherence to a set of coding standards, collaboration of customers and programmers, frequent verbal communication, frequent releases of software in small increments of development, repeated testing of the developmental software by both programmers and customers, and continuous interaction between the team and the customers. The environment in which the Maestro team works requires the team to quickly adapt to changing needs of its customers. In addition, the team cannot afford to accept unnecessary development risk. Extreme programming enables the Maestro team to remain agile and provide high-quality software and service to its customers. However, several factors in the Maestro environment have made it necessary to modify some of the conventional extreme-programming practices. The single most influential of these factors is that continuous interaction between customers and programmers is not feasible.

  10. Numerical modeling of macrodispersion in heterogeneous media: a comparison of multi-Gaussian and non-multi-Gaussian models

    NASA Astrophysics Data System (ADS)

    Wen, Xian-Huan; Gómez-Hernández, J. Jaime

    1998-03-01

    The macrodispersion of an inert solute in a 2-D heterogeneous porous media is estimated numerically in a series of fields of varying heterogeneity. Four different random function (RF) models are used to model log-transmissivity (ln T) spatial variability, and for each of these models, ln T variance is varied from 0.1 to 2.0. The four RF models share the same univariate Gaussian histogram and the same isotropic covariance, but differ from one another in terms of the spatial connectivity patterns at extreme transmissivity values. More specifically, model A is a multivariate Gaussian model for which, by definition, extreme values (both high and low) are spatially uncorrelated. The other three models are non-multi-Gaussian: model B with high connectivity of high extreme values, model C with high connectivity of low extreme values, and model D with high connectivities of both high and low extreme values. Residence time distributions (RTDs) and macrodispersivities (longitudinal and transverse) are computed on ln T fields corresponding to the different RF models, for two different flow directions and at several scales. They are compared with each other, as well as with predicted values based on first-order analytical results. Numerically derived RTDs and macrodispersivities for the multi-Gaussian model are in good agreement with analytically derived values using first-order theories for log-transmissivity variance up to 2.0. The results from the non-multi-Gaussian models differ from each other and deviate largely from the multi-Gaussian results even when ln T variance is small. RTDs in non-multi-Gaussian realizations with high connectivity at high extreme values display earlier breakthrough than in multi-Gaussian realizations, whereas later breakthrough and longer tails are observed for RTDs from non-multi-Gaussian realizations with high connectivity at low extreme values. Longitudinal macrodispersivities in the non-multi-Gaussian realizations are, in general, larger than in the multi-Gaussian ones, while transverse macrodispersivities in the non-multi-Gaussian realizations can be larger or smaller than in the multi-Gaussian ones depending on the type of connectivity at extreme values. Comparing the numerical results for different flow directions, it is confirmed that macrodispersivities in multi-Gaussian realizations with isotropic spatial correlation are not flow direction-dependent. Macrodispersivities in the non-multi-Gaussian realizations, however, are flow direction-dependent although the covariance of ln T is isotropic (the same for all four models). It is important to account for high connectivities at extreme transmissivity values, a likely situation in some geological formations. Some of the discrepancies between first-order-based analytical results and field-scale tracer test data may be due to the existence of highly connected paths of extreme conductivity values.

  11. Opportunities for Computational Discovery in Basic Energy Sciences

    NASA Astrophysics Data System (ADS)

    Pederson, Mark

    2011-03-01

    An overview of the broad-ranging support of computational physics and computational science within the Department of Energy Office of Science will be provided. Computation as the third branch of physics is supported by all six offices (Advanced Scientific Computing, Basic Energy, Biological and Environmental, Fusion Energy, High-Energy Physics, and Nuclear Physics). Support focuses on hardware, software and applications. Most opportunities within the fields of condensed-matter physics, chemical physics and materials sciences are supported by the Office of Basic Energy Sciences (BES) or through partnerships between BES and the Office for Advanced Scientific Computing. Activities include radiation sciences, catalysis, combustion, materials in extreme environments, energy-storage materials, light-harvesting and photovoltaics, solid-state lighting and superconductivity. A summary of two recent reports by the computational materials and chemical communities on the role of computation during the next decade will be provided. In addition to materials and chemistry challenges specific to energy sciences, issues identified include a focus on the role of the domain scientist in integrating, expanding and sustaining applications-oriented capabilities on evolving high-performance computing platforms and on the role of computation in accelerating the development of innovative technologies.

  12. Detonation product EOS studies: Using ISLS to refine CHEETAH

    NASA Astrophysics Data System (ADS)

    Zaug, Joseph; Fried, Larry; Hansen, Donald

    2001-06-01

    Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a suite of non-ideal simple fluids and fluid mixtures. Impulsive Stimulated Light Scattering conducted in the diamond-anvil cell offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model CHEETAH. Computational models are systematically improved with each addition of experimental data. Experimentally grounded computational models provide a good basis to confidently understand the chemical nature of reactions at extreme conditions.

  13. In Situ Methods, Infrastructures, and Applications on High Performance Computing Platforms, a State-of-the-art (STAR) Report

    DOE PAGES

    Bethel, EW; Bauer, A; Abbasi, H; ...

    2016-06-10

    The considerable interest in the high performance computing (HPC) community regarding analyzing and visualizing data without first writing to disk, i.e., in situ processing, is due to several factors. First is an I/O cost savings, where data are analyzed/visualized while being generated, without first being stored to a filesystem. Second is the potential for increased accuracy, where fine temporal sampling of transient analysis might expose some complex behavior missed in coarse temporal sampling. Third is the ability to use all available resources, CPUs and accelerators, in the computation of analysis products. This STAR paper brings together researchers, developers and practitioners using in situ methods in extreme-scale HPC with the goal to present existing methods, infrastructures, and a range of computational science and engineering applications using in situ analysis and visualization.

  14. On the impacts of computing daily temperatures as the average of the daily minimum and maximum temperatures

    NASA Astrophysics Data System (ADS)

    Villarini, Gabriele; Khouakhi, Abdou; Cunningham, Evan

    2017-12-01

    Daily temperature values are generally computed as the average of the daily minimum and maximum observations, which can lead to biases in the estimation of daily averaged values. This study examines the impacts of these biases on the calculation of climatology and trends in temperature extremes at 409 sites in North America with at least 25 years of complete hourly records. Our results show that the calculation of daily temperature based on the average of minimum and maximum daily readings leads to an overestimation of the daily values of 10% or more when focusing on extremes and on values above (below) high (low) thresholds. Moreover, the effects of the data processing method on trend estimation are generally small, even though the use of the daily minimum and maximum readings reduces the power of trend detection (5-10% fewer trends detected in comparison with the reference data).
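
    A minimal sketch of the comparison described in this abstract, on synthetic hourly data rather than the 409-station records: it contrasts the (Tmin + Tmax)/2 estimate of the daily mean with the average of all 24 hourly readings and reports the bias on warm-extreme days.

```python
# Sketch (illustrative synthetic data, not the station dataset): compare daily
# means computed as (Tmin + Tmax) / 2 against the average of all 24 hourly readings.
import numpy as np

rng = np.random.default_rng(42)
days, hours = 365, np.arange(24)

# Synthetic hourly temperatures: seasonal cycle + asymmetric diurnal cycle
# (sharp afternoon peak, flat night) + noise.
seasonal = 15 + 10 * np.sin(2 * np.pi * np.arange(days) / 365)
diurnal = (5 * np.sin(2 * np.pi * (hours - 9) / 24)
           - 2 * np.cos(4 * np.pi * (hours - 9) / 24))
temps = seasonal[:, None] + diurnal[None, :] + rng.normal(0, 1.5, (days, 24))

hourly_mean = temps.mean(axis=1)                              # reference daily mean
minmax_mean = 0.5 * (temps.min(axis=1) + temps.max(axis=1))   # (Tmin + Tmax) / 2

hot = hourly_mean > np.percentile(hourly_mean, 90)            # focus on warm extremes
bias = (minmax_mean[hot] - hourly_mean[hot]).mean()
print(f"mean bias on hot days: {bias:+.2f} degC")             # positive: overestimation
```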

  15. SYNCHROTRON X-RAY ABSORPTION-EDGE COMPUTED MICROTOMOGRAPHY IMAGING OF THALLIUM COMPARTMENTALIZATION IN IBERIS INTERMEDIA

    EPA Science Inventory

    Thallium (Tl) is an extremely toxic metal which, due to its similarities to K, is readily taken up by plants. Thallium is efficiently hyperaccumulated in Iberis intermedia as Tl(I). Distribution and compartmentalization of Tl in I. intermedia is highes...

  16. Integration of nanoscale memristor synapses in neuromorphic computing architectures

    NASA Astrophysics Data System (ADS)

    Indiveri, Giacomo; Linares-Barranco, Bernabé; Legenstein, Robert; Deligeorgis, George; Prodromakis, Themistoklis

    2013-09-01

    Conventional neuro-computing architectures and artificial neural networks have often been developed with no or loose connections to neuroscience. As a consequence, they have largely ignored key features of biological neural processing systems, such as their extremely low power consumption or their ability to carry out robust and efficient computation using massively parallel arrays of limited-precision, highly variable, and unreliable components. Recent developments in nano-technologies are making available extremely compact and low-power, but also variable and unreliable, solid-state devices that can potentially extend the offerings of existing CMOS technologies. In particular, memristors are regarded as a promising solution for modeling key features of biological synapses due to their nanoscale dimensions, their capacity to store multiple bits of information per element and the low energy required to write distinct states. In this paper, we first review the neuro- and neuromorphic computing approaches that can best exploit the properties of memristors and nanoscale devices, and then propose a novel hybrid memristor-CMOS neuromorphic circuit which represents a radical departure from conventional neuro-computing approaches, as it uses memristors to directly emulate the biophysics and temporal dynamics of real synapses. We point out the differences between the use of memristors in conventional neuro-computing architectures and the hybrid memristor-CMOS circuit proposed, and argue how this circuit represents an ideal building block for implementing brain-inspired probabilistic computing paradigms that are robust to variability and fault tolerant by design.

  17. Multiscale Modeling of Ultra High Temperature Ceramics (UHTC) ZrB2 and HfB2: Application to Lattice Thermal Conductivity

    NASA Technical Reports Server (NTRS)

    Lawson, John W.; Daw, Murray S.; Squire, Thomas H.; Bauschlicher, Charles W.

    2012-01-01

    We are developing a multiscale framework in computational modeling for the ultra high temperature ceramics (UHTC) ZrB2 and HfB2. These materials are characterized by high melting point, good strength, and reasonable oxidation resistance. They are candidate materials for a number of applications in extreme environments including sharp leading edges of hypersonic aircraft. In particular, we used a combination of ab initio methods, atomistic simulations and continuum computations to obtain insights into fundamental properties of these materials. Ab initio methods were used to compute basic structural, mechanical and thermal properties. From these results, a database was constructed to fit a Tersoff style interatomic potential suitable for atomistic simulations. These potentials were used to evaluate the lattice thermal conductivity of single crystals and the thermal resistance of simple grain boundaries. Finite element method (FEM) computations using atomistic results as inputs were performed with meshes constructed on SEM images thereby modeling the realistic microstructure. These continuum computations showed the reduction in thermal conductivity due to the grain boundary network.

  18. Cascaded VLSI neural network architecture for on-line learning

    NASA Technical Reports Server (NTRS)

    Thakoor, Anilkumar P. (Inventor); Duong, Tuan A. (Inventor); Daud, Taher (Inventor)

    1992-01-01

    High-speed, analog, fully-parallel, and asynchronous building blocks are cascaded for larger sizes and enhanced resolution. A hardware compatible algorithm permits hardware-in-the-loop learning despite limited weight resolution. A computation intensive feature classification application was demonstrated with this flexible hardware and new algorithm at high speed. This result indicates that these building block chips can be embedded as an application specific coprocessor for solving real world problems at extremely high data rates.

  19. Extreme air-sea surface turbulent fluxes in mid latitudes - estimation, origins and mechanisms

    NASA Astrophysics Data System (ADS)

    Gulev, Sergey; Natalia, Tilinina

    2014-05-01

    Extreme turbulent heat fluxes in the North Atlantic and North Pacific mid latitudes were estimated from the modern-era and first-generation reanalyses (NCEP-DOE, ERA-Interim, MERRA, NCEP-CFSR, JRA-25) for the period from 1979 onwards. We used direct surface turbulent flux output as well as reanalysis state variables from which fluxes have been computed using the COARE-3 bulk algorithm. For estimation of extreme flux values we analyzed the surface flux probability density distribution, which was approximated by a Modified Fisher-Tippett (MFT) distribution. In all reanalyses extreme turbulent heat fluxes amount to 1500-2000 W/m2 (for the 99th percentile) and can exceed 2000 W/m2 for higher percentiles in the western boundary current extension (WBCE) regions. Different reanalyses show significantly different shapes of the MFT distribution, implying considerable differences in the estimates of extreme fluxes. The highest extreme turbulent latent heat fluxes are diagnosed in the NCEP-DOE, ERA-Interim and NCEP-CFSR reanalyses, with the smallest being in MERRA. These differences may not necessarily reflect the differences in mean values. Analysis shows that differences in statistical properties of the state variables are the major source of differences in the shape of the PDF of fluxes and in the estimates of extreme fluxes, while the contribution of the computational schemes used in different reanalyses is minor. The strongest differences in the characteristics of probability distributions of surface fluxes and extreme surface flux values between different reanalyses are found in the WBCE regions and high latitudes. Next, we analyzed the mechanisms responsible for forming surface turbulent fluxes and their potential role in changes of the midlatitudinal heat balance. Midlatitudinal cyclones were considered as the major mechanism responsible for extreme turbulent fluxes, which typically occur during cold-air outbreaks in the rear parts of cyclones when atmospheric conditions provide locally high winds and air-sea temperature gradients. For this purpose we linked characteristics of cyclone activity over the midlatitudinal oceans with the extreme surface turbulent heat fluxes. Cyclone tracks and parameters of the cyclone life cycle (deepening rates, propagation velocities, lifetime and clustering) were derived from the same reanalyses using a state-of-the-art numerical tracking algorithm. The main questions addressed in this study are (i) through which mechanisms are extreme surface fluxes associated with cyclone activity, and (ii) which types of cyclones are responsible for forming extreme turbulent fluxes? Our analysis shows that extreme surface fluxes are typically associated not with cyclones themselves but rather with cyclone-anticyclone interaction zones. This implies that North Atlantic and North Pacific series of intense cyclones do not result in anomalous surface fluxes. Instead, extreme fluxes are most frequently associated with blocking situations, particularly with the intensification of the Siberian and North American Anticyclones providing cold-air outbreaks over WBC regions.
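
    As a rough illustration of the quantities involved, the sketch below computes a generic bulk-aerodynamic latent heat flux and an empirical 99th percentile of its distribution; it is not the COARE-3 algorithm, and the exchange coefficient and synthetic wind/humidity inputs are assumptions made for the example.

```python
# Sketch, not COARE-3: a generic bulk-aerodynamic latent heat flux and an
# empirical 99th-percentile estimate of its distribution. The exchange
# coefficient and the synthetic wind / humidity inputs are illustrative only.
import numpy as np

rho = 1.22        # air density, kg m-3
Le = 2.5e6        # latent heat of vaporization, J kg-1
Ce = 1.2e-3       # bulk exchange coefficient for moisture (assumed constant)

rng = np.random.default_rng(1)
n = 100_000
wind = rng.weibull(2.0, n) * 8.0                        # 10-m wind speed, m s-1
dq = np.clip(rng.normal(2.5e-3, 1.5e-3, n), 0, None)    # q_sea - q_air, kg kg-1

lhf = rho * Le * Ce * wind * dq                         # latent heat flux, W m-2
print("99th percentile LHF: %.0f W m-2" % np.percentile(lhf, 99))
```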

  20. On high heels and short muscles: A multiscale model for sarcomere loss in the gastrocnemius muscle

    PubMed Central

    Zöllner, Alexander M.; Pok, Jacquelynn M.; McWalter, Emily J.; Gold, Garry E.; Kuhl, Ellen

    2014-01-01

    High heels are a major source of chronic lower limb pain. Yet, more than one third of all women compromise health for looks and wear high heels on a daily basis. Changing from flat footwear to high heels induces chronic muscle shortening associated with discomfort, fatigue, reduced shock absorption, and increased injury risk. However, the long-term effects of high-heeled footwear on the musculoskeletal kinematics of the lower extremities remain poorly understood. Here we create a multiscale computational model for chronic muscle adaptation to characterize the acute and chronic effects of global muscle shortening on local sarcomere lengths. We perform a case study of a healthy female subject and show that raising the heel by 13 cm shortens the gastrocnemius muscle by 5% while the Achilles tendon remains virtually unaffected. Our computational simulation indicates that muscle shortening displays significant regional variations with extreme values of 22% in the central gastrocnemius. Our model suggests that the muscle gradually adjusts to its new functional length by a chronic loss of sarcomeres in series. Sarcomere loss varies significantly across the muscle with an average loss of 9%, virtually no loss at the proximal and distal ends, and a maximum loss of 39% in the central region. These changes reposition the remaining sarcomeres back into their optimal operating regime. Computational modeling of chronic muscle shortening provides a valuable tool to shape our understanding of the underlying mechanisms of muscle adaptation. Our study could open new avenues in orthopedic surgery and enhance treatment for patients with muscle contracture caused by conditions other than high-heel wear, such as paralysis, muscular atrophy, and muscular dystrophy. PMID:25451524

  1. Framework for Detection and Localization of Extreme Climate Event with Pixel Recursive Super Resolution

    NASA Astrophysics Data System (ADS)

    Kim, S. K.; Lee, J.; Zhang, C.; Ames, S.; Williams, D. N.

    2017-12-01

    Deep learning techniques have been successfully applied to solve many problems in climate and geoscience using massive observed and modeled datasets. For extreme climate event detection, several models based on deep neural networks have recently been proposed and attain superior performance that overshadows all previous handcrafted, expert-based methods. The issue, though, is that accurate localization of events requires high-quality climate data. In this work, we propose a framework capable of detecting and localizing extreme climate events in very coarse climate data. Our framework is based on two models using deep neural networks: (1) convolutional neural networks (CNNs) to detect and localize extreme climate events, and (2) a pixel recursive super resolution model to reconstruct high-resolution climate data from low-resolution climate data. Based on our preliminary work, we present two CNNs in our framework for different purposes: detection and localization. Our results using CNNs for extreme climate event detection show that simple neural nets can capture the pattern of extreme climate events with high accuracy from very coarse reanalysis data. However, localization accuracy is relatively low due to the coarse resolution. To resolve this issue, the pixel recursive super resolution model enhances the resolution of the input to the localization CNN. We present the best-performing network using the pixel recursive super resolution model, which synthesizes details of tropical cyclones in the ground truth data while enhancing their resolution. Therefore, this approach not only dramatically reduces the human effort, but also suggests the possibility of reducing the computing cost required by the downscaling process to increase the resolution of data.
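
    A minimal sketch of the detection side of such a framework, assuming a small patch-classification CNN written in PyTorch; the architecture, channel counts, and input size are illustrative assumptions and are not the networks described in the abstract.

```python
# Minimal sketch (assumed architecture, not the framework's networks): a small CNN
# that classifies coarse climate patches as containing an extreme event or not.
import torch
import torch.nn as nn

class PatchDetector(nn.Module):
    def __init__(self, in_channels=4, patch=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * (patch // 4) ** 2, 2)  # event / no event

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = PatchDetector()
fake_batch = torch.randn(8, 4, 32, 32)   # 8 patches, 4 variables (e.g. PSL, U, V, T)
logits = model(fake_batch)
print(logits.shape)                       # torch.Size([8, 2])
```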

  2. Computed tomography, anatomy and morphometry of the lower extremity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoogewoud, H.M.; Rager, G.; Burch, H.

    1989-01-01

    This book presents up-to-date information on CT imaging of the lower extremity. It includes an atlas correlating new, high-resolution CT scans with identical thin anatomical slices covering the lower extremity from the crista iliaca to the planta pedis. Additional figures, including CT arthrograms of the hip, knee and ankle, depict the anatomy in detail. The technique and clinical relevance of CT measurements, especially in orthopedic surgery, are also clearly explained. Of special interest is the new method developed by the authors for assessing the coverage of the femoral head. The special morphometry software and a 3D program allowing representation in space make it possible to precisely and accurately measure the coverage with normal CT scans of the hip.

  3. Computer aided flexible envelope designs

    NASA Technical Reports Server (NTRS)

    Resch, R. D.

    1975-01-01

    Computer aided design methods are presented for the design and construction of strong, lightweight structures which require complex and precise geometric definition. The first, flexible structures, is a unique system for modeling folded plate structures and space frames. It is possible to continuously vary the geometry of a space frame to produce large, clear spans with curvature. The second method deals with developable surfaces, where both folding and bending are explored subject to the observed constraints of available building materials, to determine what minimal distortions result in maximum design capability. Alternative inexpensive fabrication techniques are being developed to achieve computer defined enclosures which are extremely lightweight and mathematically highly precise.

  4. Advanced computer graphic techniques for laser range finder (LRF) simulation

    NASA Astrophysics Data System (ADS)

    Bedkowski, Janusz; Jankowski, Stanislaw

    2008-11-01

    This paper shows advanced computer graphics techniques for laser range finder (LRF) simulation. The LRF is a common sensor for unmanned ground vehicles, autonomous mobile robots and security applications. The cost of the measurement system is extremely high; therefore, a simulation tool was designed. The simulation gives an opportunity to execute algorithms such as obstacle avoidance [1], SLAM for robot localization [2], detection of vegetation and water obstacles in the surroundings of the robot chassis [3], and LRF measurement in a crowd of people [1]. The Axis Aligned Bounding Box (AABB) approach and an alternative technique based on CUDA (NVIDIA Compute Unified Device Architecture) are presented.
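
    At the core of an AABB-based LRF simulator is a ray/box intersection test. The sketch below shows the standard slab method in plain NumPy; the CUDA variant mentioned in the paper is not reproduced here.

```python
# Sketch of the standard slab test for ray / axis-aligned-bounding-box (AABB)
# intersection, the core operation of an AABB-based LRF simulator. Pure NumPy,
# CPU only.
import numpy as np

def ray_aabb(origin, direction, box_min, box_max):
    """Return (hit, t_near) for the ray origin + t * direction against an AABB.
    Assumes the direction has no exactly-zero components."""
    inv_d = 1.0 / direction
    t1 = (box_min - origin) * inv_d
    t2 = (box_max - origin) * inv_d
    t_near = np.maximum.reduce(np.minimum(t1, t2))   # entry distance per slab, combined
    t_far = np.minimum.reduce(np.maximum(t1, t2))    # exit distance per slab, combined
    return (t_far >= max(t_near, 0.0)), t_near

hit, t = ray_aabb(np.array([0.0, 0.0, 0.0]),
                  np.array([1.0, 0.2, 0.1]),
                  np.array([5.0, -1.0, -1.0]),
                  np.array([7.0, 1.0, 1.0]))
print(hit, t)   # True, distance to the box along the ray
```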

  5. Temperature Distribution Within a Defect-Free Silicon Carbide Diode Predicted by a Computational Model

    NASA Technical Reports Server (NTRS)

    Kuczmarski, Maria A.; Neudeck, Philip G.

    2000-01-01

    Most solid-state electronic devices (diodes, transistors, and integrated circuits) are based on silicon. Although this material works well for many applications, its properties limit its ability to function under extreme high-temperature or high-power operating conditions. Silicon carbide (SiC), with its desirable physical properties, could someday replace silicon for these types of applications. A major roadblock to realizing this potential is the quality of SiC material that can currently be produced. Semiconductors require very uniform, high-quality material, and commercially available SiC tends to suffer from defects in the crystalline structure that have largely been eliminated in silicon. In some power circuits, these defects can focus energy into an extremely small area, leading to overheating that can damage the device. In an effort to better understand the way that these defects affect the electrical performance and reliability of an SiC device in a power circuit, the NASA Glenn Research Center at Lewis Field began an in-house three-dimensional computational modeling effort. The goal is to predict the temperature distributions within a SiC diode structure subjected to the various transient overvoltage breakdown stresses that occur in power management circuits. A commercial computational fluid dynamics computer program (FLUENT; Fluent, Inc., Lebanon, New Hampshire) was used to build a model of a defect-free SiC diode and generate a computational mesh. A typical breakdown power density was applied over 0.5 msec in a heated layer at the junction between the p-type SiC and n-type SiC, and the temperature distribution throughout the diode was then calculated. The peak temperature extracted from the computational model agreed well (within 6 percent) with previous first-order calculations of the maximum expected temperature at the end of the breakdown pulse. This level of agreement is excellent for a model of this type and indicates that three-dimensional computational modeling can provide useful predictions for this class of problem. The model is now being extended to include the effects of crystal defects. The model will provide unique insights into how high the temperature rises in the vicinity of the defects in a diode at various power densities and pulse durations. This information also will help researchers in understanding and designing SiC devices for safe and reliable operation in high-power circuits.

  6. Multiscale Modeling of UHTC: Thermal Conductivity

    NASA Technical Reports Server (NTRS)

    Lawson, John W.; Murry, Daw; Squire, Thomas; Bauschlicher, Charles W.

    2012-01-01

    We are developing a multiscale framework in computational modeling for the ultra high temperature ceramics (UHTC) ZrB2 and HfB2. These materials are characterized by high melting point, good strength, and reasonable oxidation resistance. They are candidate materials for a number of applications in extreme environments including sharp leading edges of hypersonic aircraft. In particular, we used a combination of ab initio methods, atomistic simulations and continuum computations to obtain insights into fundamental properties of these materials. Ab initio methods were used to compute basic structural, mechanical and thermal properties. From these results, a database was constructed to fit a Tersoff style interatomic potential suitable for atomistic simulations. These potentials were used to evaluate the lattice thermal conductivity of single crystals and the thermal resistance of simple grain boundaries. Finite element method (FEM) computations using atomistic results as inputs were performed with meshes constructed on SEM images thereby modeling the realistic microstructure. These continuum computations showed the reduction in thermal conductivity due to the grain boundary network.

  7. Random sampling technique for ultra-fast computations of molecular opacities for exoplanet atmospheres

    NASA Astrophysics Data System (ADS)

    Min, M.

    2017-10-01

    Context. Opacities of molecules in exoplanet atmospheres rely on increasingly detailed line lists for these molecules. The line lists available today contain, for many species, up to several billion lines. Computation of the spectral line profile created by pressure and temperature broadening, the Voigt profile, for all of these lines is becoming a computational challenge. Aims: We aim to create a method to compute the Voigt profile in a way that automatically focusses the computation time on the strongest lines, while still maintaining the continuum contribution of the large number of weaker lines. Methods: Here, we outline a statistical line sampling technique that samples the Voigt profile quickly and with high accuracy. The number of samples is adjusted to the strength of the line and the local spectral line density. This automatically provides high accuracy line shapes for strong lines or lines that are spectrally isolated. The line sampling technique automatically preserves the integrated line opacity for all lines, thereby also providing the continuum opacity created by the large number of weak lines at very low computational cost. Results: The line sampling technique is tested for accuracy when computing line spectra and correlated-k tables. Extremely fast computations (approximately 3.5 × 10^5 lines per second per core on a standard present-day desktop computer) with high accuracy (≤1% almost everywhere) are obtained. A detailed recipe on how to perform the computations is given.
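
    The sketch below is an illustrative re-implementation of the idea of strength-weighted line sampling (not the paper's code): each line receives a number of random samples proportional to its strength, and importance weights preserve its integrated opacity, so the many weak lines still contribute a continuum at low cost.

```python
# Sketch of strength-weighted random sampling of Voigt line profiles
# (an illustrative re-implementation of the idea, not the paper's recipe).
import numpy as np
from scipy.special import voigt_profile   # unit-area Voigt profile

def sample_opacity(nu_grid, line_nu, line_strength, sigma, gamma,
                   samples_per_unit_strength=1e4, min_samples=10, seed=0):
    """Monte Carlo opacity on nu_grid; sample count scales with line strength,
    and importance weights preserve each line's integrated opacity."""
    rng = np.random.default_rng(seed)
    opacity = np.zeros_like(nu_grid)
    bin_width = nu_grid[1] - nu_grid[0]
    for nu0, S in zip(line_nu, line_strength):
        n = max(min_samples, int(S * samples_per_unit_strength))
        width = 5.0 * (sigma + gamma)                 # Gaussian proposal covering the wings
        offsets = rng.normal(0.0, width, n)
        proposal = np.exp(-0.5 * (offsets / width) ** 2) / (width * np.sqrt(2.0 * np.pi))
        weights = voigt_profile(offsets, sigma, gamma) / proposal
        idx = np.searchsorted(nu_grid, nu0 + offsets)
        ok = (idx > 0) & (idx < nu_grid.size)
        np.add.at(opacity, idx[ok], S * weights[ok] / (n * bin_width))
    return opacity

nu = np.linspace(0.0, 100.0, 2001)
kappa = sample_opacity(nu, line_nu=[30.0, 60.0], line_strength=[1.0, 0.05],
                       sigma=0.1, gamma=0.05)
print(kappa.sum() * (nu[1] - nu[0]))   # ≈ 1.05, the total integrated line strength
```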

  8. GROMACS 4:  Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation.

    PubMed

    Hess, Berk; Kutzner, Carsten; van der Spoel, David; Lindahl, Erik

    2008-03-01

    Molecular simulation is an extremely useful, but computationally very expensive tool for studies of chemical and biomolecular systems. Here, we present a new implementation of our molecular simulation toolkit GROMACS which now both achieves extremely high performance on single processors from algorithmic optimizations and hand-coded routines and simultaneously scales very well on parallel machines. The code encompasses a minimal-communication domain decomposition algorithm, full dynamic load balancing, a state-of-the-art parallel constraint solver, and efficient virtual site algorithms that allow removal of hydrogen atom degrees of freedom to enable integration time steps up to 5 fs for atomistic simulations also in parallel. To improve the scaling properties of the common particle mesh Ewald electrostatics algorithms, we have in addition used a Multiple-Program, Multiple-Data approach, with separate node domains responsible for direct and reciprocal space interactions. Not only does this combination of algorithms enable extremely long simulations of large systems but also it provides that simulation performance on quite modest numbers of standard cluster nodes.

  9. Coupled hydro-meteorological modelling on a HPC platform for high-resolution extreme weather impact study

    NASA Astrophysics Data System (ADS)

    Zhu, Dehua; Echendu, Shirley; Xuan, Yunqing; Webster, Mike; Cluckie, Ian

    2016-11-01

    Impact-focused studies of extreme weather require coupling accurate simulations of weather and climate systems with impact-measuring hydrological models, which themselves demand large computing resources. In this paper, we present a preliminary analysis of a high-performance computing (HPC)-based hydrological modelling approach, which is aimed at utilizing and maximizing HPC resources, to support the study of extreme weather impacts due to climate change. Here, four case studies are presented through implementation on the HPC Wales platform of the UK mesoscale meteorological Unified Model (UM) with the high-resolution simulation suite UKV, alongside a Linux-based hydrological model, Hydrological Predictions for the Environment (HYPE). The results of this study suggest that the coupled hydro-meteorological model was still able to capture the major flood peaks, compared with the conventional gauge- or radar-driven forecast, but with the added value of much extended forecast lead time. The high-resolution rainfall estimation produced by the UKV performs similarly to that of radar rainfall products in the first 2-3 days of the tested flood events, but the uncertainties increase markedly as the forecast horizon goes beyond 3 days. This study takes a step forward in identifying how the online-mode approach can be used, where both the numerical weather prediction and the hydrological model are executed, either simultaneously or on the same hardware infrastructure, so that more effective interaction and communication can be achieved and maintained between the models. The concluding comment, however, is that running the entire system on a reasonably powerful HPC platform does not yet allow for real-time simulations, even without the most complex and demanding data simulation part.

  10. Solar Weather Ice Monitoring Station (SWIMS). A low cost, extreme/harsh environment, solar powered, autonomous sensor data gathering and transmission system

    NASA Astrophysics Data System (ADS)

    Chetty, S.; Field, L. A.

    2013-12-01

    The Arctic Ocean's continuing decrease of summer-time ice is related to rapidly diminishing multi-year ice due to the effects of climate change. Ice911 Research aims to develop environmentally respectful materials that, when deployed, will increase the albedo, enhancing the formation and/or preservation of multi-year ice. Small-scale deployments using various materials have been done in Canada, California's Sierra Nevada Mountains and a pond in Minnesota to test the albedo performance and environmental characteristics of these materials. SWIMS is a sophisticated autonomous sensor system being developed to measure the albedo, weather, water temperature and other environmental parameters. The system (SWIMS) employs low cost, high accuracy/precision sensors, high resolution cameras, and an extreme-environment command and data handling computer system using satellite and terrestrial wireless communication. The entire system is solar powered with redundant battery backup on a floating buoy platform engineered for low temperature (-40C) and high wind conditions. The system also incorporates tilt sensors, sonar-based ice thickness sensors and a weather station. To keep the costs low, each SWIMS unit measures incoming and reflected radiation from the four quadrants around the buoy. This allows data from four sets of sensors, cameras, a weather station, and a water temperature probe to be collected and transmitted by a single on-board solar powered computer. This presentation covers the technical, logistical and cost challenges in designing, developing and deploying these stations in remote, extreme environments. (Figure captions: setting sun captured by SWIMS camera #3; an image captured by SWIMS camera #4.)

  11. Evaluation of uncertainties in mean and extreme precipitation under climate change for northwestern Mediterranean watersheds from high-resolution Med and Euro-CORDEX ensembles

    NASA Astrophysics Data System (ADS)

    Colmet-Daage, Antoine; Sanchez-Gomez, Emilia; Ricci, Sophie; Llovel, Cécile; Borrell Estupina, Valérie; Quintana-Seguí, Pere; Llasat, Maria Carmen; Servat, Eric

    2018-01-01

    The climate change impact on mean and extreme precipitation events in the northern Mediterranean region is assessed using high-resolution EuroCORDEX and MedCORDEX simulations. The focus is on three catchments: Lez and Aude, located in France, and Muga, located in northeastern Spain. Eight pairs of global and regional climate models are analyzed with respect to the SAFRAN product. First, the model skills are evaluated in terms of bias for the precipitation annual cycle over the historical period. Then future changes in extreme precipitation, under two emission scenarios, are estimated through the computation of past/future change coefficients of quantile-ranked model precipitation outputs. Over the 1981-2010 period, the cumulative precipitation is overestimated for most models over the mountainous regions and underestimated over the coastal regions in autumn and for higher-order quantiles. The ensemble mean and the spread for the future period remain unchanged under the RCP4.5 scenario and decrease under the RCP8.5 scenario. Extreme precipitation events are intensified over the three catchments, with a smaller ensemble spread under RCP8.5 revealing more evident changes, especially in the later part of the 21st century.
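
    A minimal sketch of past/future change coefficients of quantile-ranked precipitation, computed on synthetic gamma-distributed daily series that stand in for model output (they are not the EuroCORDEX/MedCORDEX data).

```python
# Sketch of quantile-ranked change coefficients: ratio of future to historical
# precipitation quantiles. The synthetic series below are illustrative only.
import numpy as np

rng = np.random.default_rng(7)
hist = rng.gamma(shape=0.7, scale=6.0, size=30 * 365)     # daily precip, mm (illustrative)
future = rng.gamma(shape=0.7, scale=6.8, size=30 * 365)   # slightly wetter tail (illustrative)

quantiles = np.array([0.50, 0.90, 0.95, 0.99])
change = np.quantile(future, quantiles) / np.quantile(hist, quantiles)
for q, c in zip(quantiles, change):
    print(f"q{int(q * 100):02d}: change coefficient = {c:.2f}")
```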

  12. Structural Extremes in a Cretaceous Dinosaur

    PubMed Central

    Sereno, Paul C.; Wilson, Jeffrey A.; Witmer, Lawrence M.; Whitlock, John A.; Maga, Abdoulaye; Ide, Oumarou; Rowe, Timothy A.

    2007-01-01

    Fossils of the Early Cretaceous dinosaur, Nigersaurus taqueti, document for the first time the cranial anatomy of a rebbachisaurid sauropod. Its extreme adaptations for herbivory at ground-level challenge current hypotheses regarding feeding function and feeding strategy among diplodocoids, the larger clade of sauropods that includes Nigersaurus. We used high resolution computed tomography, stereolithography, and standard molding and casting techniques to reassemble the extremely fragile skull. Computed tomography also allowed us to render the first endocast for a sauropod preserving portions of the olfactory bulbs, cerebrum and inner ear, the latter permitting us to establish habitual head posture. To elucidate evidence of tooth wear and tooth replacement rate, we used photographic-casting techniques and crown thin sections, respectively. To reconstruct its 9-meter postcranial skeleton, we combined and size-adjusted multiple partial skeletons. Finally, we used maximum parsimony algorithms on character data to obtain the best estimate of phylogenetic relationships among diplodocoid sauropods. Nigersaurus taqueti shows extreme adaptations for a dinosaurian herbivore including a skull of extremely light construction, tooth batteries located at the distal end of the jaws, tooth replacement as fast as one per month, an expanded muzzle that faces directly toward the ground, and hollow presacral vertebral centra with more air sac space than bone by volume. A cranial endocast provides the first reasonably complete view of a sauropod brain including its small olfactory bulbs and cerebrum. Skeletal and dental evidence suggests that Nigersaurus was a ground-level herbivore that gathered and sliced relatively soft vegetation, the culmination of a low-browsing feeding strategy first established among diplodocoids during the Jurassic. PMID:18030355

  13. Computer-automated opponent for manned air-to-air combat simulations

    NASA Technical Reports Server (NTRS)

    Hankins, W. W., III

    1979-01-01

    Two versions of a real-time digital-computer program that operates a fighter airplane interactively against a human pilot in simulated air combat were evaluated. They function by replacing one of two pilots in the Langley differential maneuvering simulator. Both versions make maneuvering decisions from identical information and logic; they differ essentially in the aerodynamic models that they control. One is very complete, but the other is much simpler, primarily characterizing the airplane's performance (lift, drag, and thrust). Both models competed extremely well against highly trained U.S. fighter pilots.

  14. An evaluation of the state of time synchronization on leadership class supercomputers

    DOE PAGES

    Jones, Terry; Ostrouchov, George; Koenig, Gregory A.; ...

    2017-10-09

    We present a detailed examination of time agreement characteristics for nodes within extreme-scale parallel computers. Using a software tool we introduce in this paper, we quantify attributes of clock skew among nodes in three representative high-performance computers sited at three national laboratories. Our measurements detail the statistical properties of time agreement among nodes and how time agreement drifts over typical application execution durations. We discuss the implications of our measurements, why the current state of the field is inadequate, and propose strategies to address observed shortcomings.
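
    A toy version of such a measurement, assuming mpi4py is available: rank 0 estimates the clock offset of every other rank with a simple round-trip probe. This is only a sketch of the general idea, not the tool introduced in the paper.

```python
# Sketch (not the authors' measurement tool): estimate the clock offset between
# rank 0 and every other rank with a simple round-trip probe using mpi4py.
# Run with:  mpirun -n 4 python clock_offset.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

comm.Barrier()
if rank == 0:
    for peer in range(1, comm.Get_size()):
        t0 = MPI.Wtime()
        remote = comm.sendrecv(t0, dest=peer, source=peer)  # send probe, get peer's clock
        t1 = MPI.Wtime()
        # Offset estimate assumes a symmetric network path (a common simplification).
        offset = remote - 0.5 * (t0 + t1)
        print(f"rank {peer}: estimated offset {offset * 1e6:+.1f} us "
              f"(round trip {(t1 - t0) * 1e6:.1f} us)")
else:
    comm.recv(source=0)               # absorb rank 0's timestamp
    comm.send(MPI.Wtime(), dest=0)    # reply with the local clock reading
```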

  15. An evaluation of the state of time synchronization on leadership class supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Terry; Ostrouchov, George; Koenig, Gregory A.

    We present a detailed examination of time agreement characteristics for nodes within extreme-scale parallel computers. Using a software tool we introduce in this paper, we quantify attributes of clock skew among nodes in three representative high-performance computers sited at three national laboratories. Our measurements detail the statistical properties of time agreement among nodes and how time agreement drifts over typical application execution durations. We discuss the implications of our measurements, why the current state of the field is inadequate, and propose strategies to address observed shortcomings.

  16. Toward Improving Predictability of Extreme Hydrometeorological Events: the Use of Multi-scale Climate Modeling in the Northern High Plains

    NASA Astrophysics Data System (ADS)

    Munoz-Arriola, F.; Torres-Alavez, J.; Mohamad Abadi, A.; Walko, R. L.

    2014-12-01

    Our goal is to investigate possible sources of predictability of hydrometeorological extreme events in the Northern High Plains. Hydrometeorological extreme events are considered the most costly natural phenomena. Water deficits and surpluses highlight how the water-climate interdependence becomes crucial in areas where single activities, such as agriculture in the NHP, drive economies. Although we recognize the water-climate interdependence and the regulatory role that human activities play, we still grapple with identifying what sources of predictability could be added to flood and drought forecasts. To identify the benefit of multi-scale climate modeling and the role of initial conditions in flood and drought predictability in the NHP, we use the Ocean-Land-Atmosphere Model (OLAM). OLAM is characterized by a dynamic core with a global geodesic grid with hexagonal (and variably refined) mesh cells, a finite volume discretization of the full compressible Navier-Stokes equations, and a cut-grid cell method for topography (which reduces error in gradient computation and anomalous vertical dispersion). Our hypothesis is that wet conditions will drive OLAM's simulations of precipitation toward wetter conditions, affecting both flood and drought forecasts. To test this hypothesis we simulate precipitation during identified historical flood events followed by drought events in the NHP (i.e., the years 2011-2012). We initialized OLAM with CFS data 1-10 days prior to a flooding event (as initial conditions) to explore (1) short-term, high-resolution and (2) long-term, coarse-resolution simulations of flood and drought events, respectively. While floods are assessed during a maximum of 15 days of refined-mesh simulations, drought is evaluated during the following 15 months. Simulated precipitation will be compared with the Sub-continental Observation Dataset, a gridded 1/16th-degree resolution dataset obtained from climatological stations in Canada, the US, and Mexico. This in-progress research will ultimately contribute to integrating the OLAM and VIC models and improving the predictability of extreme hydrometeorological events.

  17. OpenCluster: A Flexible Distributed Computing Framework for Astronomical Data Processing

    NASA Astrophysics Data System (ADS)

    Wei, Shoulin; Wang, Feng; Deng, Hui; Liu, Cuiyin; Dai, Wei; Liang, Bo; Mei, Ying; Shi, Congming; Liu, Yingbo; Wu, Jingping

    2017-02-01

    The volume of data generated by modern astronomical telescopes is extremely large and rapidly growing. However, current high-performance data processing architectures/frameworks are not well suited for astronomers because of their limitations and programming difficulties. In this paper, we therefore present OpenCluster, an open-source distributed computing framework to support rapidly developing high-performance processing pipelines of astronomical big data. We first detail the OpenCluster design principles and implementations and present the APIs facilitated by the framework. We then demonstrate a case in which OpenCluster is used to resolve complex data processing problems for developing a pipeline for the Mingantu Ultrawide Spectral Radioheliograph. Finally, we present our OpenCluster performance evaluation. Overall, OpenCluster provides not only high fault tolerance and simple programming interfaces, but also a flexible means of scaling up the number of interacting entities. OpenCluster thereby provides an easily integrated distributed computing framework for quickly developing a high-performance data processing system of astronomical telescopes and for significantly reducing software development expenses.

  18. Determination of consistent patterns of range of motion in the ankle joint with a computed tomography stress-test.

    PubMed

    Tuijthof, Gabriëlle Josephine Maria; Zengerink, Maartje; Beimers, Lijkele; Jonges, Remmet; Maas, Mario; van Dijk, Cornelis Niek; Blankevoort, Leendert

    2009-07-01

    Measuring the range of motion of the ankle joint can assist in accurate diagnosis of ankle laxity. A computed tomography-based stress-test (3D CT stress-test) was used that determines the three-dimensional position and orientation of the tibial, calcaneal and talar bones. The goal was to establish a quantitative database of the normal ranges of motion of the talocrural and subtalar joints. A clinical case of suspected subtalar instability demonstrated the relevance of the proposed method. The range of motion was measured for the ankle joints in vivo for 20 subjects using the 3D CT stress-test. Motion of the tibia and calcaneus relative to the talus for eight extreme foot positions was described by helical parameters. High consistency for finite helical axis orientation (n) and rotation (theta) was shown for: talocrural extreme dorsiflexion to extreme plantarflexion (root mean square direction deviation (eta) 5.3 degrees and theta: SD 11.0 degrees), talocrural and subtalar extreme combined eversion-dorsiflexion to combined inversion-plantarflexion (eta: 6.7 degrees, theta: SD 9.0 degrees and eta: 6.3 degrees, theta: SD 5.1 degrees), and subtalar extreme inversion to extreme eversion (eta: 6.4 degrees, theta: SD 5.9 degrees). Nearly all dorsi- and plantarflexion occurs in the talocrural joint (theta: mean 63.3 degrees (SD 11 degrees)). The inversion and internal rotation components for extreme eversion to inversion were approximately three times larger for the subtalar joint (theta: mean 22.9 degrees and 29.1 degrees) than for the talocrural joint (theta: mean 8.8 degrees and 10.7 degrees). Comparison of the ranges of motion of the pathologic ankle joint with those of the healthy subjects showed an increased inversion and axial rotation in the talocrural joint instead of in the suspected subtalar joint. The proposed diagnostic technique and the acquired database of helical parameters of ankle joint ranges of motion are suitable for application in clinical cases.
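
    The helical parameters reported above can be illustrated with a short sketch that extracts the finite helical axis direction n and rotation theta from two rigid-body orientations; the poses below are arbitrary examples, not bone positions from the CT stress-test, and the translation along the axis is omitted.

```python
# Sketch: finite helical (screw) axis direction n and rotation angle theta between
# two rigid-body orientations, the quantities tabulated in the study. The poses
# below are arbitrary illustrations; translation along the axis is not computed.
import numpy as np
from scipy.spatial.transform import Rotation as R

def finite_helical_axis(R1, R2):
    """Direction n and angle theta (deg) of the finite helical axis for R1 -> R2."""
    R_rel = R2 @ R1.T                                   # relative rotation in the global frame
    rotvec = R.from_matrix(R_rel).as_rotvec()
    theta = np.linalg.norm(rotvec)                      # rotation about the axis, rad
    n = rotvec / theta if theta > 1e-12 else np.array([0.0, 0.0, 1.0])
    return n, np.degrees(theta)

R1 = R.from_euler("xyz", [0, 0, 0], degrees=True).as_matrix()
R2 = R.from_euler("xyz", [25, 5, -10], degrees=True).as_matrix()
n, theta = finite_helical_axis(R1, R2)
print(n, theta)
```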

  19. Condor-COPASI: high-throughput computing for biochemical networks

    PubMed Central

    2012-01-01

    Background Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage. PMID:22834945

  20. Extremely Low Operating Current Resistive Memory Based on Exfoliated 2D Perovskite Single Crystals for Neuromorphic Computing.

    PubMed

    Tian, He; Zhao, Lianfeng; Wang, Xuefeng; Yeh, Yao-Wen; Yao, Nan; Rand, Barry P; Ren, Tian-Ling

    2017-12-26

    Extremely low energy consumption neuromorphic computing is required to achieve massively parallel information processing on par with the human brain. To achieve this goal, resistive memories based on materials with ionic transport and extremely low operating current are required. Extremely low operating current allows for low power operation by minimizing the program, erase, and read currents. However, materials currently used in resistive memories, such as defective HfOx, AlOx, TaOx, etc., cannot suppress electronic transport (i.e., leakage current) while allowing good ionic transport. Here, we show that 2D Ruddlesden-Popper phase hybrid lead bromide perovskite single crystals are promising materials for low operating current nanodevice applications because of their mixed electronic and ionic transport and ease of fabrication. Ionic transport in the exfoliated 2D perovskite layer is evident via the migration of bromide ions. Filaments with a diameter of approximately 20 nm are visualized, and resistive memories with extremely low program current down to 10 pA are achieved, a value at least 1 order of magnitude lower than conventional materials. The ionic migration and diffusion as an artificial synapse is realized in the 2D layered perovskites at the pA level, which can enable extremely low energy neuromorphic computing.

  1. Single image super-resolution via regularized extreme learning regression for imagery from microgrid polarimeters

    NASA Astrophysics Data System (ADS)

    Sargent, Garrett C.; Ratliff, Bradley M.; Asari, Vijayan K.

    2017-08-01

    The advantage of division of focal plane imaging polarimeters is their ability to obtain temporally synchronized intensity measurements across a scene; however, they sacrifice spatial resolution in doing so due to their spatially modulated arrangement of pixel-to-pixel polarizers, which often results in aliased imagery. Here, we propose a super-resolution method based upon two previously trained extreme learning machines (ELMs) that attempt to recover missing high frequency and low frequency content beyond the spatial resolution of the sensor. This method yields a computationally fast and simple way of recovering lost high and low frequency content when demosaicing raw microgrid polarimetric imagery. The proposed method outperforms other state-of-the-art single-image super-resolution algorithms in terms of structural similarity and peak signal-to-noise ratio.
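
    A minimal sketch of a regularized extreme learning machine regressor of the kind referred to above: a fixed random hidden layer followed by a ridge-regression solve for the output weights, trained here on synthetic data rather than microgrid imagery.

```python
# Sketch of a regularized extreme learning machine (ELM) regressor: a random
# hidden layer plus a ridge solve for the output weights. Synthetic training
# data only; this is not the paper's trained model.
import numpy as np

class RegularizedELM:
    def __init__(self, n_hidden=200, reg=1e-2, seed=0):
        self.n_hidden, self.reg = n_hidden, reg
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        d = X.shape[1]
        self.W = self.rng.normal(size=(d, self.n_hidden))   # random input weights (fixed)
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                     # hidden activations
        # Ridge solve: beta = (H^T H + reg * I)^-1 H^T y
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ y)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Toy usage: learn a mapping from low-resolution neighborhoods to a target value.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 9))          # 3x3 neighborhoods (illustrative)
y = X.mean(axis=1) + 0.01 * rng.normal(size=500)
model = RegularizedELM().fit(X, y)
print(np.abs(model.predict(X) - y).mean())
```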

  2. Remote sensor digital image data analysis using the General Electric Image 100 analysis system (a study of analysis speed, cost, and performance)

    NASA Technical Reports Server (NTRS)

    Mcmurtry, G. J.; Petersen, G. W. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. It was found that the high-speed man-machine interaction capability is a distinct advantage of the Image 100; however, the small size of the digital computer in the system is a definite limitation. The system can be highly useful in an analysis mode in which it complements a large general purpose computer. The Image 100 was found to be extremely valuable in the analysis of aircraft MSS data, where the spatial resolution begins to approach photographic quality and the analyst can exercise interpretation judgements and readily interact with the machine.

  3. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Mid-year report FY17 Q2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.; Pugmire, David; Rogers, David

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  4. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY17.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.; Pugmire, David; Rogers, David

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  5. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem. Mid-year report FY16 Q2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.; Sewell, Christopher; Childs, Hank

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  6. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY15 Q4.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.; Sewell, Christopher; Childs, Hank

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  7. Programmable Synaptic Metaplasticity and below Femtojoule Spiking Energy Realized in Graphene-Based Neuromorphic Memristor.

    PubMed

    Liu, Bo; Liu, Zhiwei; Chiu, In-Shiang; Di, MengFu; Wu, YongRen; Wang, Jer-Chyi; Hou, Tuo-Hung; Lai, Chao-Sung

    2018-06-20

    Memristors with rich interior dynamics of ion migration are promising for mimicking various biological synaptic functions in neuromorphic hardware systems. A graphene-based memristor shows an extremely low energy consumption of less than a femtojoule per spike, by taking advantage of weak surface van der Waals interaction of graphene. The device also shows an intriguing programmable metaplasticity property in which the synaptic plasticity depends on the history of the stimuli and yet allows rapid reconfiguration via an immediate stimulus. This graphene-based memristor could be a promising building block toward designing highly versatile and extremely energy efficient neuromorphic computing systems.

  8. About climate variabilitiy leading the hydric condition of the soil in the rainfed region of Argentina

    NASA Astrophysics Data System (ADS)

    Pántano, V. C.; Penalba, O. C.

    2013-05-01

    Extreme events of temperature and rainfall have a socio-economic impact in the rainfed agricultural production region of Argentina. The magnitude of the impact can be analyzed through the water balance, which integrates the characteristics of the soil and climate conditions. Changes observed in climate variables during the last decades affected the components of the water balance. As a result, the agricultural border was displaced towards the west, improving the agricultural production of the region. The objective of this work is to analyze how the variability of rainfall and temperature leads the hydric condition of the soil, with special focus on extreme events. The hydric conditions of the soil (HC = excess - deficit) were estimated from the monthly water balance (Thornthwaite and Mather method, 1957), using monthly potential evapotranspiration (PET) and monthly accumulated rainfall (R) for 33 stations (period 1970-2006). Information on temperature and rainfall was provided by the National Weather Service, and the effective soil water capacity was taken from Forte Lay and Spescha (2001). An agricultural extreme condition occurs when soil moisture and rainfall are inadequate or excessive for the development of the crops. In this study, we define an extreme event when the variable is less (greater) than its 20th and 10th (80th and 90th) percentiles. In order to evaluate how sensitive the HC is to water and heat stress in the region, different conditional probabilities were computed. There is a weaker response of HC to extremely low PET, while extremely low R leads to high values of HC. However, this behavior is not always observed, especially in the western region where extremely high and low PET show a stronger influence over the HC. Finally, to analyze the temporal variability of extreme PET and R leading the hydric condition of the soil, the number of stations presenting extreme conditions was computed for each month. As an example, interesting results were observed for April. During this month, the water recharge of the soil is crucial to let the winter crops manage with the scarce rainfalls occurring in the following months. In 1970, 1974, 1977, 1978 and 1997 more than 50% of the stations were under extremely high PET, while 1970, 1974, 1978 and 1988 presented more than 40% under extremely low R. Thus, the 70s was the most threatened decade of the period. Since the 80s (except for 1997), extreme dry events due to one variable or the other have mostly occurred separately, over smaller areas. The response of the spatial distribution of HC is stronger when both variables present extreme conditions. In particular, during 1997 the region presented extremely low values of HC as a consequence of extremely low R and high PET. Communities dependent on agriculture are highly sensitive to climate variability and its extremes. In the studied region, it was shown that water scarcity and heat stress contribute to the resulting hydric condition, producing a strong impact on different productive activities. Extreme temperature seems to have a stronger influence on extremely unfavorable hydric conditions.
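
    A simplified monthly bucket water balance in the spirit of the Thornthwaite and Mather (1957) method cited above: it tracks soil moisture from monthly R and PET and reports monthly excess and deficit. The soil capacity and the twelve monthly inputs are illustrative values, and the exponential soil-moisture depletion of the full method is omitted.

```python
# Simplified monthly bucket water balance (illustrative, not the exact
# Thornthwaite-Mather implementation): track soil moisture given rainfall R and
# potential evapotranspiration PET, and report monthly excess and deficit.
import numpy as np

def water_balance(R, PET, capacity=150.0, storage0=75.0):
    storage, excess, deficit = storage0, [], []
    for r, pet in zip(R, PET):
        storage += r - pet
        if storage > capacity:                 # soil saturated: surplus runs off
            excess.append(storage - capacity); deficit.append(0.0)
            storage = capacity
        elif storage < 0.0:                    # demand unmet: record deficit
            excess.append(0.0); deficit.append(-storage)
            storage = 0.0
        else:
            excess.append(0.0); deficit.append(0.0)
    return np.array(excess), np.array(deficit)

R   = np.array([90, 80, 95, 70, 40, 25, 20, 30, 55, 85, 100, 95], float)      # mm, illustrative
PET = np.array([120, 100, 90, 60, 40, 25, 20, 35, 60, 90, 110, 125], float)   # mm, illustrative
exc, def = water_balance(R, PET)
print("HC (excess - deficit) per month:", exc - def)
```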

  9. xSDK Foundations: Toward an Extreme-scale Scientific Software Development Kit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heroux, Michael A.; Bartlett, Roscoe; Demeshko, Irina

    Here, extreme-scale computational science increasingly demands multiscale and multiphysics formulations. Combining software developed by independent groups is imperative: no single team has resources for all predictive science and decision support capabilities. Scientific libraries provide high-quality, reusable software components for constructing applications with improved robustness and portability. However, without coordination, many libraries cannot be easily composed. Namespace collisions, inconsistent arguments, lack of third-party software versioning, and additional difficulties make composition costly. The Extreme-scale Scientific Software Development Kit (xSDK) defines community policies to improve code quality and compatibility across independently developed packages (hypre, PETSc, SuperLU, Trilinos, and Alquimia) and provides a foundation for addressing broader issues in software interoperability, performance portability, and sustainability. The xSDK provides turnkey installation of member software and seamless combination of aggregate capabilities, and it marks first steps toward extreme-scale scientific software ecosystems from which future applications can be composed rapidly with assured quality and scalability.

  10. xSDK Foundations: Toward an Extreme-scale Scientific Software Development Kit

    DOE PAGES

    Heroux, Michael A.; Bartlett, Roscoe; Demeshko, Irina; ...

    2017-03-01

    Here, extreme-scale computational science increasingly demands multiscale and multiphysics formulations. Combining software developed by independent groups is imperative: no single team has resources for all predictive science and decision support capabilities. Scientific libraries provide high-quality, reusable software components for constructing applications with improved robustness and portability. However, without coordination, many libraries cannot be easily composed. Namespace collisions, inconsistent arguments, lack of third-party software versioning, and additional difficulties make composition costly. The Extreme-scale Scientific Software Development Kit (xSDK) defines community policies to improve code quality and compatibility across independently developed packages (hypre, PETSc, SuperLU, Trilinos, and Alquimia) and provides a foundation for addressing broader issues in software interoperability, performance portability, and sustainability. The xSDK provides turnkey installation of member software and seamless combination of aggregate capabilities, and it marks first steps toward extreme-scale scientific software ecosystems from which future applications can be composed rapidly with assured quality and scalability.

  11. Final Report: Ionization chemistry of high temperature molecular fluids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fried, L E

    2007-02-26

    With the advent of coupled chemical/hydrodynamic reactive flow models for high explosives, understanding detonation chemistry is of increasing importance to DNT. The accuracy of first principles detonation codes, such as CHEETAH, is dependent on an accurate representation of the species present under detonation conditions. Ionic species and non-molecular phases are not currently included in coupled chemistry/hydrodynamic simulations. This LDRD will determine the prevalence of such species during high explosive detonations by carrying out experimental and computational investigations of common detonation products under extreme conditions. We are studying the phase diagram of detonation products such as H2O or NH3 and mixtures under conditions of extreme pressure (P > 1 GPa) and temperature (T > 1000 K). Under these conditions, the neutral molecular form of matter transforms to a phase dominated by ions. The phase boundaries of such a region are unknown.

  12. Didactic Dissonance: Teacher Roles in Computer Gaming Situations in Kindergartens

    ERIC Educational Resources Information Center

    Vangsnes, Vigdis; Økland, Nils Tore Gram

    2015-01-01

    In computer gaming situations in kindergartens, the pre-school teacher's function can be viewed in a continuum. At one extreme is the teacher who takes an intervening role and at the other extreme is the teacher who chooses to restrict herself/himself to an organising or distal role. This study shows that both the intervening position and the…

  13. Extreme hydronephrosis due to ureteropelvic junction obstruction in infant (case report).

    PubMed

    Krzemień, Grażyna; Szmigielska, Agnieszka; Bombiński, Przemysław; Barczuk, Marzena; Biejat, Agnieszka; Warchoł, Stanisław; Dudek-Warchoł, Teresa

    2016-01-01

    Hydronephrosis is one of the most common congenital abnormalities of the urinary tract. The left kidney is affected more often than the right, and the condition is more common in males. The aim was to determine the role of ultrasonography, renal dynamic scintigraphy and lower-dose computed tomography urography in the preoperative diagnostic workup of an infant with extreme hydronephrosis. We present a boy with antenatally diagnosed hydronephrosis. On serial postnatal ultrasonography, renal scintigraphy and computed tomography urography we observed slightly declining function in the dilated kidney and increasing pelvic dilatation. Pyeloplasty was performed at the age of four months with a good result. Results of ultrasonography and renal dynamic scintigraphy in a child with extreme hydronephrosis can be difficult to assess; therefore, a lower-dose computed tomography urography should be performed before the surgical procedure.

  14. Entropy, extremality, euclidean variations, and the equations of motion

    NASA Astrophysics Data System (ADS)

    Dong, Xi; Lewkowycz, Aitor

    2018-01-01

    We study the Euclidean gravitational path integral computing the Rényi entropy and analyze its behavior under small variations. We argue that, in Einstein gravity, the extremality condition can be understood from the variational principle at the level of the action, without having to solve explicitly the equations of motion. This set-up is then generalized to arbitrary theories of gravity, where we show that the respective entanglement entropy functional needs to be extremized. We also extend this result to all orders in Newton's constant G_N, providing a derivation of quantum extremality. Understanding quantum extremality for mixtures of states provides a generalization of the dual of the boundary modular Hamiltonian which is given by the bulk modular Hamiltonian plus the area operator, evaluated on the so-called modular extremal surface. This gives a bulk prescription for computing the relative entropies to all orders in G_N. We also comment on how these ideas can be used to derive an integrated version of the equations of motion, linearized around arbitrary states.
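
    For orientation, the quantum extremality condition referred to above can be summarized with the standard generalized-entropy notation from the holographic entanglement literature; the symbols below follow that common usage rather than being quoted from the paper.

      % Generalized entropy of a bulk surface X anchored to boundary region A
      S_{\mathrm{gen}}(X) = \frac{\mathrm{Area}(X)}{4 G_N} + S_{\mathrm{bulk}}(\Sigma_X)

      % Quantum extremal surface prescription: extremize, then evaluate
      S(A) = \operatorname{ext}_{X} S_{\mathrm{gen}}(X), \qquad
      \delta_X S_{\mathrm{gen}}(X)\big|_{X = X_{\mathrm{QES}}} = 0

      % Bulk dual of the boundary modular Hamiltonian (schematically, to the
      % orders in G_N discussed in the abstract)
      K_A^{\mathrm{bdy}} = \frac{\hat{A}[X_{\mathrm{QES}}]}{4 G_N} + K_{\mathrm{bulk}} + \dots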

  15. Automation Rover for Extreme Environments

    NASA Technical Reports Server (NTRS)

    Sauder, Jonathan; Hilgemann, Evan; Johnson, Michael; Parness, Aaron; Hall, Jeffrey; Kawata, Jessie; Stack, Kathryn

    2017-01-01

    Almost 2,300 years ago the ancient Greeks built the Antikythera automaton. This purely mechanical computer accurately predicted past and future astronomical events long before electronics existed. Automata have been credibly used for hundreds of years as computers, art pieces, and clocks. However, in the past several decades automata have become less popular as the capabilities of electronics increased, leaving them an unexplored solution for robotic spacecraft. The Automaton Rover for Extreme Environments (AREE) proposes an exciting paradigm shift from electronics to a fully mechanical system, enabling longitudinal exploration of the most extreme environments within the solar system.

  16. ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peisert, Sean; Potok, Thomas E.; Jones, Todd

    At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long term (10 to 20+ year) cybersecurity fundamental basic research and development challenges, strategies and roadmap facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the three topics and a representative of each of the four major DOE Office of Science Advanced Scientific Computing Research Facilities: the Argonne Leadership Computing Facility (ALCF), the Energy Sciences Network (ESnet), the National Energy Research Scientific Computing Center (NERSC), and the Oak Ridge Leadership Computing Facility (OLCF). The rest of the workshop consisted of topical breakout discussions and focused writing periods that produced much of this report.

  17. Markov Chains For Testing Redundant Software

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Sjogren, Jon A.

    1990-01-01

    Preliminary design developed for validation experiment that addresses problems unique to assuring extremely high quality of multiple-version programs in process-control software. Approach takes into account inertia of controlled system in the sense that it takes more than one failure of the control program to cause the controlled system to fail. Verification procedure consists of two steps: experimentation (numerical simulation) and computation, with Markov model for each step.
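
    A toy version of such a Markov model (constructed here for illustration, not taken from the report) encodes the "inertia" idea by requiring two consecutive control-program failures before the controlled system fails; the per-cycle failure probability below is a made-up placeholder.

      import numpy as np

      # States: 0 = healthy, 1 = one recent control failure, 2 = system failed (absorbing).
      # The controlled system fails only after two consecutive control-program failures.
      p_fail = 1e-4                      # assumed per-cycle control failure probability
      P = np.array([
          [1 - p_fail, p_fail,     0.0],
          [1 - p_fail, 0.0,        p_fail],
          [0.0,        0.0,        1.0],
      ])

      start = np.array([1.0, 0.0, 0.0])  # start in the healthy state
      n_cycles = 1_000_000
      dist = start @ np.linalg.matrix_power(P, n_cycles)
      print(f"P(system failure within {n_cycles} cycles) = {dist[2]:.3e}")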

  18. A Discussion of Using a Reconfigurable Processor to Implement the Discrete Fourier Transform

    NASA Technical Reports Server (NTRS)

    White, Michael J.

    2004-01-01

    This paper presents the design and implementation of the Discrete Fourier Transform (DFT) algorithm on a reconfigurable processor system. While highly applicable to many engineering problems, the DFT is an extremely computationally intensive algorithm. Consequently, the eventual goal of this work is to enhance the execution of a floating-point precision DFT algorithm by offloading the algorithm from the computing system. This computing system, within the context of this research, is a typical high performance desktop computer with an array of field programmable gate arrays (FPGAs). FPGAs are hardware devices that are configured by software to execute an algorithm. If it is desired to change the algorithm, the software is changed to reflect the modification and then downloaded to the FPGA, which is then itself modified. This paper will discuss the methodology for developing the DFT algorithm to be implemented on the FPGA. We will discuss the algorithm, the FPGA code effort, and the results to date.
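
    As a reference point for the kind of computation being offloaded (a baseline sketch, not the paper's FPGA design), the naive O(N^2) DFT can be written directly from its definition X[k] = sum_n x[n] exp(-2*pi*i*k*n/N):

      import numpy as np

      def dft(x):
          """Naive O(N^2) discrete Fourier transform."""
          x = np.asarray(x, dtype=complex)
          n = np.arange(x.size)
          k = n.reshape(-1, 1)
          # Full N-by-N matrix of complex exponentials applied to the signal
          return np.exp(-2j * np.pi * k * n / x.size) @ x

      # Sanity check against NumPy's FFT
      sig = np.random.default_rng(1).standard_normal(256)
      assert np.allclose(dft(sig), np.fft.fft(sig))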

  19. Grid-converged solution and analysis of the unsteady viscous flow in a two-dimensional shock tube

    NASA Astrophysics Data System (ADS)

    Zhou, Guangzhao; Xu, Kun; Liu, Feng

    2018-01-01

    The flow in a shock tube is extremely complex with dynamic multi-scale structures of sharp fronts, flow separation, and vortices due to the interaction of the shock wave, the contact surface, and the boundary layer over the side wall of the tube. Prediction and understanding of the complex fluid dynamics are of theoretical and practical importance. It is also an extremely challenging problem for numerical simulation, especially at relatively high Reynolds numbers. Daru and Tenaud ["Evaluation of TVD high resolution schemes for unsteady viscous shocked flows," Comput. Fluids 30, 89-113 (2001)] proposed a two-dimensional model problem as a numerical test case for high-resolution schemes to simulate the flow field in a square closed shock tube. Though many researchers attempted this problem using a variety of computational methods, there is not yet an agreed-upon grid-converged solution of the problem at the Reynolds number of 1000. This paper presents a rigorous grid-convergence study and the resulting grid-converged solutions for this problem by using a newly developed, efficient, and high-order gas-kinetic scheme. Critical data extracted from the converged solutions are documented as benchmark data. The complex fluid dynamics of the flow at Re = 1000 are discussed and analyzed in detail. Major phenomena revealed by the numerical computations include the downward concentration of the fluid through the curved shock, the formation of the vortices, the mechanism of the shock wave bifurcation, the structure of the jet along the bottom wall, and the Kelvin-Helmholtz instability near the contact surface. Presentation and analysis of those flow processes provide important physical insight into the complex flow physics occurring in a shock tube.

  20. A Fast SVD-Hidden-nodes based Extreme Learning Machine for Large-Scale Data Analytics.

    PubMed

    Deng, Wan-Yu; Bai, Zuo; Huang, Guang-Bin; Zheng, Qing-Hua

    2016-05-01

    Big dimensional data is a growing trend that is emerging in many real world contexts, extending from web mining, gene expression analysis, and protein-protein interaction to high-frequency financial data. Nowadays, there is a growing consensus that the increasing dimensionality poses impeding effects on the performances of classifiers, which is termed the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step on the Big dimensional data before building the classifiers. In this paper, we propose an Extreme Learning Machine (ELM) approach for large-scale data analytics. In contrast to existing approaches, we embed hidden nodes that are designed using singular value decomposition (SVD) into the classical ELM. These SVD nodes in the hidden layer are shown to capture the underlying characteristics of the Big dimensional data well, exhibiting excellent generalization performances. The drawback of using SVD on the entire dataset, however, is the high computational complexity involved. To address this, a fast divide and conquer approximation scheme is introduced to maintain computational tractability on high volume data. The resultant algorithm proposed is labeled here as Fast Singular Value Decomposition-Hidden-nodes based Extreme Learning Machine or FSVD-H-ELM in short. In FSVD-H-ELM, instead of identifying the SVD hidden nodes directly from the entire dataset, SVD hidden nodes are derived from multiple random subsets of data sampled from the original dataset. Comprehensive experiments and comparisons are conducted to assess the FSVD-H-ELM against other state-of-the-art algorithms. The results obtained demonstrated the superior generalization performance and efficiency of the FSVD-H-ELM.
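
    The general idea (SVD-derived hidden nodes taken from random subsets, followed by a least-squares output layer) can be sketched in a few lines of Python; this is an illustrative approximation under assumed sizes and activation, not the authors' exact FSVD-H-ELM algorithm.

      import numpy as np

      def svd_hidden_elm(X, y, n_hidden=48, n_subsets=4, subset_size=200, seed=0):
          """Sketch of an SVD-hidden-node ELM: hidden weights come from the right
          singular vectors of random data subsets; output weights by least squares."""
          rng = np.random.default_rng(seed)
          per = n_hidden // n_subsets
          blocks = []
          for _ in range(n_subsets):
              idx = rng.choice(len(X), size=min(len(X), subset_size), replace=False)
              _, _, vt = np.linalg.svd(X[idx], full_matrices=False)
              blocks.append(vt[:per])                      # top right singular vectors
          W = np.vstack(blocks)                            # (n_hidden, n_features)
          H = np.tanh(X @ W.T)                             # hidden-layer activations
          beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # output weights
          return W, beta

      # Tiny synthetic usage example
      rng = np.random.default_rng(2)
      X = rng.standard_normal((500, 30))
      y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
      W, beta = svd_hidden_elm(X, y)
      pred = np.tanh(X @ W.T) @ beta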

  1. A direct method for computing extreme value (Gumbel) parameters for gapped biological sequence alignments.

    PubMed

    Quinn, Terrance; Sinkala, Zachariah

    2014-01-01

    We develop a general method for computing extreme value distribution (Gumbel, 1958) parameters for gapped alignments. Our approach uses mixture distribution theory to obtain associated BLOSUM matrices for gapped alignments, which in turn are used for determining significance of gapped alignment scores for pairs of biological sequences. We compare our results with parameters already obtained in the literature.
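
    For readers unfamiliar with the Gumbel (type-I extreme value) parameters involved, the following Python sketch fits location and scale to simulated score maxima using SciPy; it only illustrates the distributional form and the lambda = 1/scale convention, not the authors' analytical mixture-distribution derivation, and the simulated scores are placeholders.

      import numpy as np
      from scipy.stats import gumbel_r

      # Stand-in for optimal alignment scores: maxima of many random scores
      rng = np.random.default_rng(3)
      scores = rng.normal(size=(5000, 400)).max(axis=1)

      loc, scale = gumbel_r.fit(scores)        # Gumbel location (mu) and scale (beta)
      lam = 1.0 / scale                        # lambda in the usual Gumbel tail convention
      print(f"mu = {loc:.3f}, lambda = {lam:.3f}")

      # Tail probability P(S >= x) under the fitted Gumbel distribution
      x = scores.mean() + 3 * scores.std()
      print("P(S >= x) =", gumbel_r.sf(x, loc=loc, scale=scale))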

  2. Evolving the Land Information System into a Cloud Computing Service

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Houser, Paul R.

    The Land Information System (LIS) was developed to use advanced flexible land surface modeling and data assimilation frameworks to integrate extremely large satellite- and ground-based observations with advanced land surface models to produce continuous high-resolution fields of land surface states and fluxes. The resulting fields are extremely useful for drought and flood assessment, agricultural planning, disaster management, weather and climate forecasting, water resources assessment, and the like. We envisioned transforming the LIS modeling system into a scientific cloud computing-aware web and data service that would allow clients to easily setup and configure for use in addressing large water management issues. The focus of this Phase 1 project was to determine the scientific, technical, commercial merit and feasibility of the proposed LIS-cloud innovations that are currently barriers to broad LIS applicability. We (a) quantified the barriers to broad LIS utility and commercialization (high performance computing, big data, user interface, and licensing issues); (b) designed the proposed LIS-cloud web service, model-data interface, database services, and user interfaces; (c) constructed a prototype LIS user interface including abstractions for simulation control, visualization, and data interaction, (d) used the prototype to conduct a market analysis and survey to determine potential market size and competition, (e) identified LIS software licensing and copyright limitations and developed solutions, and (f) developed a business plan for development and marketing of the LIS-cloud innovation. While some significant feasibility issues were found in the LIS licensing, overall a high degree of LIS-cloud technical feasibility was found.

  3. Scalable Algorithms for Clustering Large Geospatiotemporal Data Sets on Manycore Architectures

    NASA Astrophysics Data System (ADS)

    Mills, R. T.; Hoffman, F. M.; Kumar, J.; Sreepathi, S.; Sripathi, V.

    2016-12-01

    The increasing availability of high-resolution geospatiotemporal data sets from sources such as observatory networks, remote sensing platforms, and computational Earth system models has opened new possibilities for knowledge discovery using data sets fused from disparate sources. Traditional algorithms and computing platforms are impractical for the analysis and synthesis of data sets of this size; however, new algorithmic approaches that can effectively utilize the complex memory hierarchies and the extremely high levels of available parallelism in state-of-the-art high-performance computing platforms can enable such analysis. We describe a massively parallel implementation of accelerated k-means clustering and some optimizations to boost computational intensity and utilization of wide SIMD lanes on state-of-the-art multi- and manycore processors, including the second-generation Intel Xeon Phi ("Knights Landing") processor based on the Intel Many Integrated Core (MIC) architecture, which includes several new features, such as an on-package high-bandwidth memory. We also analyze the code in the context of a few practical applications to the analysis of climatic and remotely-sensed vegetation phenology data sets, and speculate on some of the new applications that such scalable analysis methods may enable.
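
    The computationally intensive core of accelerated k-means is the distance evaluation, which can be cast as dense matrix algebra so that it maps well onto wide SIMD units; the short Python/NumPy sketch below illustrates that formulation only and is not the authors' optimized Xeon Phi implementation.

      import numpy as np

      def kmeans_step(X, centers):
          """One Lloyd iteration with distances expressed as matrix algebra:
          ||x - c||^2 = ||x||^2 - 2 x.c + ||c||^2, which vectorizes well."""
          sq_x = (X * X).sum(axis=1, keepdims=True)          # (n, 1)
          sq_c = (centers * centers).sum(axis=1)             # (k,)
          d2 = sq_x - 2.0 * X @ centers.T + sq_c             # (n, k) squared distances
          labels = d2.argmin(axis=1)
          new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centers[j] for j in range(len(centers))])
          return labels, new_centers

      rng = np.random.default_rng(4)
      X = rng.standard_normal((100_000, 16)).astype(np.float32)
      centers = X[rng.choice(len(X), 8, replace=False)]
      for _ in range(10):
          labels, centers = kmeans_step(X, centers)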

  4. Message Passing vs. Shared Address Space on a Cluster of SMPs

    NASA Technical Reports Server (NTRS)

    Shan, Hongzhang; Singh, Jaswinder Pal; Oliker, Leonid; Biswas, Rupak

    2000-01-01

    The convergence of scalable computer architectures using clusters of PCs (or PC-SMPs) with commodity networking has become an attractive platform for high-end scientific computing. Currently, message-passing and shared address space (SAS) are the two leading programming paradigms for these systems. Message-passing has been standardized with MPI, and is the most common and mature programming approach. However, message-passing code development can be extremely difficult, especially for irregularly structured computations. SAS offers substantial ease of programming, but may suffer from performance limitations due to poor spatial locality and high protocol overhead. In this paper, we compare the performance of, and programming effort required for, six applications under both programming models on a 32 CPU PC-SMP cluster. Our application suite consists of codes that typically do not exhibit high efficiency under shared memory programming due to their high communication to computation ratios and complex communication patterns. Results indicate that SAS can achieve about half the parallel efficiency of MPI for most of our applications; however, on certain classes of problems SAS performance is competitive with MPI. We also present new algorithms for improving the PC cluster performance of MPI collective operations.

  5. Revisiting the Quantum Brain Hypothesis: Toward Quantum (Neuro)biology?

    PubMed Central

    Jedlicka, Peter

    2017-01-01

    The nervous system is a non-linear dynamical complex system with many feedback loops. A conventional wisdom is that in the brain the quantum fluctuations are self-averaging and thus functionally negligible. However, this intuition might be misleading in the case of non-linear complex systems. Because of an extreme sensitivity to initial conditions, in complex systems the microscopic fluctuations may be amplified and thereby affect the system’s behavior. In this way quantum dynamics might influence neuronal computations. Accumulating evidence in non-neuronal systems indicates that biological evolution is able to exploit quantum stochasticity. The recent rise of quantum biology as an emerging field at the border between quantum physics and the life sciences suggests that quantum events could play a non-trivial role also in neuronal cells. Direct experimental evidence for this is still missing but future research should address the possibility that quantum events contribute to an extremely high complexity, variability and computational power of neuronal dynamics. PMID:29163041

  6. Revisiting the Quantum Brain Hypothesis: Toward Quantum (Neuro)biology?

    PubMed

    Jedlicka, Peter

    2017-01-01

    The nervous system is a non-linear dynamical complex system with many feedback loops. A conventional wisdom is that in the brain the quantum fluctuations are self-averaging and thus functionally negligible. However, this intuition might be misleading in the case of non-linear complex systems. Because of an extreme sensitivity to initial conditions, in complex systems the microscopic fluctuations may be amplified and thereby affect the system's behavior. In this way quantum dynamics might influence neuronal computations. Accumulating evidence in non-neuronal systems indicates that biological evolution is able to exploit quantum stochasticity. The recent rise of quantum biology as an emerging field at the border between quantum physics and the life sciences suggests that quantum events could play a non-trivial role also in neuronal cells. Direct experimental evidence for this is still missing but future research should address the possibility that quantum events contribute to an extremely high complexity, variability and computational power of neuronal dynamics.

  7. Flight Avionics Hardware Roadmap

    NASA Technical Reports Server (NTRS)

    Some, Raphael; Goforth, Monte; Chen, Yuan; Powell, Wes; Paulick, Paul; Vitalpur, Sharada; Buscher, Deborah; Wade, Ray; West, John; Redifer, Matt

    2014-01-01

    The Avionics Technology Roadmap takes an 80% approach to technology investment in spacecraft avionics. It delineates a suite of technologies covering foundational, component, and subsystem levels, which directly support 80% of future NASA space mission needs. The roadmap eschews high cost, limited utility technologies in favor of lower cost, broadly applicable technologies with high return on investment. The roadmap is also phased to support future NASA mission needs and desires, with a view towards creating an optimized investment portfolio that matures specific, high impact technologies on a schedule that matches optimum insertion points of these technologies into NASA missions. The roadmap looks out over 15+ years and covers some 114 technologies, 58 of which are targeted for TRL 6 within 5 years, with 23 additional technologies to be at TRL 6 by 2020. Of that number, only a few are recommended for near term investment: (1) rad-hard high performance computing; (2) extreme temperature capable electronics and packaging; (3) RFID/SAW-based spacecraft sensors and instruments; (4) lightweight, low power 2D displays suitable for crewed missions; (5) radiation tolerant graphics processing units to drive crew displays; (6) distributed/reconfigurable, extreme temperature and radiation tolerant spacecraft sensor controllers and sensor modules; (7) spacecraft-to-spacecraft, long link data communication protocols; and (8) high performance and extreme temperature capable C&DH subsystems. In addition, the roadmap team recommends several other activities that it believes are necessary to advance avionics technology across NASA: engage the OCT roadmap teams to coordinate avionics technology advances and infusion into these roadmaps and their mission set; charter a team to develop a set of use cases for future avionics capabilities in order to decouple this roadmap from specific missions; partner with the Software Steering Committee to coordinate computing hardware and software technology roadmaps and investment recommendations; and continue monitoring foundational technologies upon which future avionics technologies will be dependent, e.g., RHBD and COTS semiconductor technologies.

  8. Extreme storm surge and wind wave climate scenario simulations at the Venetian littoral

    NASA Astrophysics Data System (ADS)

    Lionello, P.; Galati, M. B.; Elvini, E.

    Scenario climate projections for extreme marine storms producing storm surges and wind waves are very important for the northern flat coast of the Adriatic Sea, where the area at risk includes a unique cultural and environmental heritage and important economic activities. This study uses a shallow water model and a spectral wave model for computing the storm surge and the wind wave field, respectively, from the sea level pressure and wind fields that have been computed by the RegCM regional climate model. Simulations cover the period 1961-1990 for the present climate (control simulations) and the period 2071-2100 for the A2 and B2 scenarios. Generalized Extreme Value analysis is used for estimating values for the 10- and 100-year return periods. These modeling tools are shown to be adequate for a reliable estimation of the climate change signal without further downscaling. However, this study has mainly a methodological value, because issues such as interdecadal variability and intermodel variability cannot be addressed, since the analysis is based on single model 30-year long simulations. The control simulation looks reasonably accurate for extreme value analysis, though it overestimates/underestimates the frequency of high/low surge and wind wave events with respect to observations. Scenario simulations suggest higher frequency of intense storms for the B2 scenario, but not for the A2. These differences are likely not the effect of climate change but of multidecadal climate variability. Extreme storms are stronger in future scenarios, but the differences are not statistically significant. Therefore, this study does not provide convincing evidence for more stormy conditions in future scenarios.
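
    The Generalized Extreme Value step mentioned above amounts to fitting annual maxima and reading off return levels; a minimal Python sketch with SciPy and synthetic surge maxima (purely illustrative, not the study's data or software) is shown below.

      import numpy as np
      from scipy.stats import genextreme

      # Stand-in for 30 annual maxima of surge height (metres) from one 30-year simulation
      annual_max = genextreme.rvs(c=-0.1, loc=0.8, scale=0.25, size=30, random_state=42)

      shape, loc, scale = genextreme.fit(annual_max)
      for T in (10, 100):
          # Return level for return period T: exceeded with probability 1/T per year
          level = genextreme.isf(1.0 / T, shape, loc, scale)
          print(f"{T:>3}-year return level: {level:.2f} m")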

  9. XVIS: Visualization for the Extreme-Scale Scientific-Computation Ecosystem Final Scientific/Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geveci, Berk; Maynard, Robert

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. The XVis project brought together collaborators from predominant DOE projects for visualization on accelerators, combining their respective features into a new visualization toolkit called VTK-m.

  10. Climate Change and Hydrological Extreme Events - Risks and Perspectives for Water Management in Bavaria and Québec

    NASA Astrophysics Data System (ADS)

    Ludwig, R.

    2017-12-01

    There is as yet no confirmed knowledge of whether and how climate change contributes to the magnitude and frequency of hydrological extreme events and how regional water management could adapt to the corresponding risks. The ClimEx project (2015-2019) investigates the effects of climate change on meteorological and hydrological extreme events and their implications for water management in Bavaria and Québec. High Performance Computing is employed to enable the complex simulations in a hydro-climatological model processing chain, resulting in a unique high-resolution and transient (1950-2100) dataset of climatological and meteorological forcing and hydrological response: (1) The climate module has developed a large ensemble of high resolution data (12 km) of the CRCM5 RCM for Central Europe and North-Eastern North America, downscaled from 50 members of the CanESM2 GCM. The dataset is complemented by all available data from the Euro-CORDEX project to account for the assessment of both natural climate variability and climate change. The large ensemble with several thousand model years provides the potential to catch rare extreme events and thus improves the process understanding of extreme events with return periods of 1000+ years. (2) The hydrology module comprises process-based and spatially explicit model setups (e.g. WaSiM) for all major catchments in Bavaria and Southern Québec in high temporal (3 h) and spatial (500 m) resolution. The simulations form the basis for in-depth analysis of hydrological extreme events based on the inputs from the large climate model dataset. This specific data situation makes it possible to establish a new method for 'virtual perfect prediction', which assesses climate change impacts on flood risk and water resources management by identifying patterns in the data which reveal preferential triggers of hydrological extreme events. The presentation will highlight first results from the analysis of the large scale ClimEx model ensemble, showing the current and future ratio of natural variability and climate change impacts on meteorological extreme events. Selected data from the ensemble is used to drive a hydrological model experiment to illustrate the capacity to better determine the recurrence periods of hydrological extreme events under conditions of climate change.
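
    The benefit of several thousand ensemble model years is that long return periods can be estimated empirically from plotting positions rather than only from a fitted distribution; the Python sketch below illustrates this with synthetic Gumbel-distributed annual peaks standing in for the ensemble (values and units are placeholders).

      import numpy as np

      rng = np.random.default_rng(6)
      # e.g. 50 members x 30 years of synthetic annual peak flows
      annual_peaks = rng.gumbel(loc=300.0, scale=80.0, size=1500)

      # Weibull plotting positions: the m-th largest of n values has
      # exceedance probability m/(n+1), i.e. return period (n+1)/m years.
      ranked = np.sort(annual_peaks)[::-1]
      n = len(ranked)
      return_period = (n + 1) / np.arange(1, n + 1)

      # Largest empirically resolvable return period with 1500 model years
      print(f"max resolvable T ~ {return_period[0]:.0f} years, peak = {ranked[0]:.0f} m^3/s")
      # e.g. the ~100-year event is simply read off near the 15th largest value
      idx = np.argmin(np.abs(return_period - 100))
      print(f"~100-year event ~ {ranked[idx]:.0f} m^3/s (T = {return_period[idx]:.0f} y)")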

  11. Non-numeric computation for high eccentricity orbits. [Earth satellite orbit perturbation

    NASA Technical Reports Server (NTRS)

    Sridharan, R.; Renard, M. L.

    1975-01-01

    Geocentric orbits of large eccentricity (e = 0.9 to 0.95) are significantly perturbed in cislunar space by the sun and moon. The time-history of the height of perigee, subsequent to launch, is particularly critical. The determination of 'launch windows' is mostly concerned with preventing the height of perigee from falling below its low initial value before the mission lifetime has elapsed. Between the extremes of high accuracy digital integration of the equations of motion and of using an approximate, but very fast, stability criteria method, this paper is concerned with the development of a method of intermediate complexity using non-numeric computation. The computer is used as the theory generator to generalize Lidov's theory using six osculating elements. Symbolic integration is completely automatized and the output is a set of condensed formulae well suited for repeated applications in launch window analysis. Examples of applications are given.

  12. Detonation Product EOS Studies: Using ISLS to Refine Cheetah

    NASA Astrophysics Data System (ADS)

    Zaug, J. M.; Howard, W. M.; Fried, L. E.; Hansen, D. W.

    2002-07-01

    Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a simple fluid, methanol. Impulsive Stimulated Light Scattering (ISLS) conducted on diamond-anvil cell (DAC) encapsulated samples offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model Cheetah. Experimentally grounded computational models provide a good basis to confidently understand the chemical nature of reactions at extreme conditions.

  13. Multiple burn fuel-optimal orbit transfers: Numerical trajectory computation and neighboring optimal feedback guidance

    NASA Technical Reports Server (NTRS)

    Chuang, C.-H.; Goodson, Troy D.; Ledsinger, Laura A.

    1995-01-01

    This report describes current work in the numerical computation of multiple burn, fuel-optimal orbit transfers and presents an analysis of the second variation for extremal multiple burn orbital transfers as well as a discussion of a guidance scheme which may be implemented for such transfers. The discussion of numerical computation focuses on the use of multivariate interpolation to aid the computation in the numerical optimization. The second variation analysis includes the development of the conditions for the examination of both fixed and free final time transfers. Evaluations for fixed final time are presented for extremal one, two, and three burn solutions of the first variation. The free final time problem is considered for an extremal two burn solution. In addition, corresponding changes of the second variation formulation over thrust arcs and coast arcs are included. The guidance scheme discussed is an implicit scheme which implements a neighboring optimal feedback guidance strategy to calculate both thrust direction and thrust on-off times.

  14. Effects of ergonomic intervention on work-related upper extremity musculoskeletal disorders among computer workers: a randomized controlled trial.

    PubMed

    Esmaeilzadeh, Sina; Ozcan, Emel; Capan, Nalan

    2014-01-01

    The aim of the study was to determine effects of ergonomic intervention on work-related upper extremity musculoskeletal disorders (WUEMSDs) among computer workers. Four hundred computer workers answered a questionnaire on work-related upper extremity musculoskeletal symptoms (WUEMSS). Ninety-four subjects with WUEMSS using computers at least 3 h a day participated in a prospective, randomized controlled 6-month intervention. Body posture and workstation layouts were assessed by the Ergonomic Questionnaire. We used the Visual Analogue Scale to assess the intensity of WUEMSS. The Upper Extremity Function Scale was used to evaluate functional limitations at the neck and upper extremities. Health-related quality of life was assessed with the Short Form-36. After baseline assessment, those in the intervention group participated in a multicomponent ergonomic intervention program including a comprehensive ergonomic training consisting of two interactive sessions, an ergonomic training brochure, and workplace visits with workstation adjustments. Follow-up assessment was conducted after 6 months. In the intervention group, body posture (p < 0.001) and workstation layout (p = 0.002) improved over 6 months; furthermore, intensity (p < 0.001), duration (p < 0.001), and frequency (p = 0.009) of WUEMSS decreased significantly in the intervention group compared with the control group. Additionally, the functional status (p = 0.001), and physical (p < 0.001), and mental (p = 0.035) health-related quality of life improved significantly compared with the controls. There was no improvement of work day loss due to WUEMSS (p > 0.05). Ergonomic intervention programs may be effective in reducing ergonomic risk factors among computer workers and consequently in the secondary prevention of WUEMSDs.

  15. Towards validated chemistry at extreme conditions: reactive MD simulations of shocked Polyvinyl Nitrate and Nitromethane

    NASA Astrophysics Data System (ADS)

    Islam, Md Mahbubul; Strachan, Alejandro

    A detailed atomistic-level understanding of the ultrafast chemistry of detonation processes of high energy materials is crucial to understand their performance and safety. Recent advances in laser shocks and ultra-fast spectroscopy are yielding the first direct experimental evidence of chemistry at extreme conditions. At the same time, reactive molecular dynamics (MD) on current high-performance computing platforms enables an atomic description of shock-induced chemistry with length and timescales approaching those of experiments. We use MD simulations with the reactive force field ReaxFF to investigate the shock-induced chemical decomposition mechanisms of polyvinyl nitrate (PVN) and nitromethane (NM). The effect of shock pressure on the chemical reaction mechanisms and kinetics of both materials is investigated. For direct comparison of our simulation results with experimentally derived IR absorption data, we performed spectral analysis using atomistic velocities at various shock conditions. The combination of reactive MD simulations and ultrafast spectroscopy enables both the validation of ReaxFF at extreme conditions and contributes to the interpretation of the experimental data relating changes in spectral features to atomic processes. Office of Naval Research MURI program.

  16. Nanoelectromechanical systems: Nanodevice motion at microwave frequencies

    NASA Astrophysics Data System (ADS)

    Henry Huang, Xue Ming; Zorman, Christian A.; Mehregany, Mehran; Roukes, Michael L.

    2003-01-01

    It has been almost forgotten that the first computers envisaged by Charles Babbage in the early 1800s were mechanical and not electronic, but the development of high-frequency nanoelectromechanical systems is now promising a range of new applications, including sensitive mechanical charge detectors and mechanical devices for high-frequency signal processing, biological imaging and quantum measurement. Here we describe the construction of nanodevices that will operate with fundamental frequencies in the previously inaccessible microwave range (greater than 1 gigahertz). This achievement represents a significant advance in the quest for extremely high-frequency nanoelectromechanical systems.

  17. The Influence of Recurrent Modes of Climate Variability on the Occurrence of Monthly Temperature Extremes Over South America

    NASA Astrophysics Data System (ADS)

    Loikith, Paul C.; Detzer, Judah; Mechoso, Carlos R.; Lee, Huikyo; Barkhordarian, Armineh

    2017-10-01

    The associations between extreme temperature months and four prominent modes of recurrent climate variability are examined over South America. Associations are computed as the percent of extreme temperature months concurrent with the upper and lower quartiles of the El Niño-Southern Oscillation (ENSO), the Atlantic Niño, the Pacific Decadal Oscillation (PDO), and the Southern Annular Mode (SAM) index distributions, stratified by season. The relationship is strongest for ENSO, with nearly every extreme temperature month concurrent with the upper or lower quartiles of its distribution in portions of northwestern South America during some seasons. The likelihood of extreme warm temperatures is enhanced over parts of northern South America when the Atlantic Niño index is in the upper quartile, while cold extremes are often associated with the lowest quartile. Concurrent precipitation anomalies may contribute to these relations. The PDO shows weak associations during December, January, and February, while in June, July, and August its relationship with extreme warm temperatures closely matches that of ENSO. This may be due to the positive relationship between the PDO and ENSO, rather than the PDO acting as an independent physical mechanism. Over Patagonia, the SAM is highly influential during spring and fall, with warm and cold extremes being associated with positive and negative phases of the SAM, respectively. Composites of sea level pressure anomalies for extreme temperature months over Patagonia suggest an important role of local synoptic scale weather variability in addition to a favorable SAM for the occurrence of these extremes.
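
    The concurrence statistic described above (percent of extreme months falling in the upper or lower quartile of a climate index) is simple to reproduce; the Python sketch below uses synthetic index and temperature series purely for illustration.

      import numpy as np

      rng = np.random.default_rng(7)
      n_months = 480
      enso = rng.standard_normal(n_months)                  # stand-in climate index
      temp = 0.6 * enso + rng.standard_normal(n_months)     # stand-in temperature anomaly

      extreme_warm = temp > np.percentile(temp, 95)         # extreme warm months
      upper_q = enso >= np.quantile(enso, 0.75)             # index in upper quartile
      lower_q = enso <= np.quantile(enso, 0.25)             # index in lower quartile

      pct_upper = 100.0 * (extreme_warm & upper_q).sum() / extreme_warm.sum()
      pct_lower = 100.0 * (extreme_warm & lower_q).sum() / extreme_warm.sum()
      print(f"{pct_upper:.0f}% of warm extremes fall in the index's upper quartile, "
            f"{pct_lower:.0f}% in its lower quartile")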

  18. Space-time characteristics and statistical predictability of extreme daily precipitation events in the Ohio River Basin

    NASA Astrophysics Data System (ADS)

    Farnham, D. J.; Doss-Gollin, J.; Lall, U.

    2016-12-01

    In this study we identify the atmospheric conditions that precede and accompany regional extreme precipitation events with the potential to cause flooding. We begin by identifying a coherent space-time structure in the record of extreme precipitation within the Ohio River Basin through both a Hidden Markov Model and a composite analysis. The transition probabilities associated with the Hidden Markov Model illustrate a tendency for west-to-east migration of extreme precipitation events (> 99th percentile) at individual stations within the Ohio River Basin. We compute a record of regional extreme precipitation days by requiring that > p% of the basin's stations simultaneously experience extreme precipitation. A composite analysis of low-level geopotential heights and column integrated precipitable water content for all non-summer seasons confirms a west-to-east migration and intensification of 1) a low (high) pressure center to the west (east) of the basin, and 2) enhanced precipitable water vapor content that stretches from the Gulf of Mexico to the Northeast US region in the days leading up to regional extreme precipitation days. We define a daily dipole index to summarize the strength of the paired cyclonic and anticyclonic systems to the west and east of the basin and analyze its temporal characteristics and its relationship to the regional extreme precipitation events. Lastly, we investigate and discuss the subseasonal predictability of individual extreme precipitation events and the seasonal predictability of active and inactive seasons, where the activity level is defined by the expected frequency of regional extreme precipitation events.
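
    The regional-extreme-day definition above (a day counts when more than p% of stations simultaneously exceed their own 99th percentile) can be illustrated with a short Python sketch; the synthetic precipitation field, the shared "synoptic" factor, and the 25% threshold are assumptions for demonstration only.

      import numpy as np

      rng = np.random.default_rng(8)
      n_days, n_stations = 15000, 40
      regional = rng.gamma(shape=0.4, scale=6.0, size=(n_days, 1))           # shared synoptic signal
      precip = regional * rng.uniform(0.5, 1.5, size=(n_days, n_stations))   # station-level variation, mm

      # Station-wise 99th-percentile thresholds, then daily station-level extremes
      thresholds = np.percentile(precip, 99, axis=0)                         # (n_stations,)
      station_extreme = precip > thresholds                                  # (n_days, n_stations)

      p = 0.25                                                               # require >25% of stations
      regional_extreme_days = station_extreme.mean(axis=1) > p
      print(f"{regional_extreme_days.sum()} regional extreme days out of {n_days}")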

  19. A highly efficient multi-core algorithm for clustering extremely large datasets

    PubMed Central

    2010-01-01

    Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols that connect and require multiple computers. One answer to this problem is to utilize the intrinsic capabilities in current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray-type data and categorical SNP data. Our new shared memory parallel algorithms prove to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. Computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy compared to single-core implementations and a recently published network-based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that, using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer. PMID:20370922

  20. The nonequilibrium quantum many-body problem as a paradigm for extreme data science

    NASA Astrophysics Data System (ADS)

    Freericks, J. K.; Nikolić, B. K.; Frieder, O.

    2014-12-01

    Generating big data pervades much of physics. But some problems, which we call extreme data problems, are too large to be treated within big data science. The nonequilibrium quantum many-body problem on a lattice is just such a problem, where the Hilbert space grows exponentially with system size and rapidly becomes too large to fit on any computer (and can be effectively thought of as an infinite-sized data set). Nevertheless, much progress has been made with computational methods on this problem, which serve as a paradigm for how one can approach and attack extreme data problems. In addition, viewing these physics problems from a computer-science perspective leads to new approaches that can be tried to solve more accurately and for longer times. We review a number of these different ideas here.
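
    The "effectively infinite data set" claim follows from simple arithmetic: the Hilbert-space dimension of a lattice of N spin-1/2 sites is 2^N, so even storing a single state vector quickly exceeds any machine's memory. A small worked example:

      # Memory needed for one complex double-precision state vector of N spin-1/2 sites
      for n_sites in (20, 30, 40, 50):
          dim = 2 ** n_sites                       # Hilbert-space dimension
          bytes_needed = dim * 16                  # complex128 = 16 bytes per amplitude
          print(f"N = {n_sites:2d}: dim = 2^{n_sites} ~ {dim:.2e}, "
                f"state vector ~ {bytes_needed / 2**40:.1f} TiB")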

  1. Simulation of the 23 July 2012 Extreme Space Weather Event: What if This Extremely Rare CME Was Earth Directed?

    NASA Technical Reports Server (NTRS)

    Ngwira, Chigomezyo M.; Pulkkinen, Antti; Mays, M. Leila; Kuznetsova, Maria M.; Galvin, A. B.; Simunac, Kristin; Baker, Daniel N.; Li, Xinlin; Zheng, Yihua; Glocer, Alex

    2013-01-01

    Extreme space weather events are known to cause adverse impacts on critical modern day technological infrastructure such as high-voltage electric power transmission grids. On 23 July 2012, NASA's Solar Terrestrial Relations Observatory-Ahead (STEREO-A) spacecraft observed in situ an extremely fast coronal mass ejection (CME) that traveled 0.96 astronomical units (approx. 1 AU) in about 19 h. Here we use the Space Weather Modeling Framework (SWMF) to perform a simulation of this rare CME. We consider STEREO-A in situ observations to represent the upstream L1 solar wind boundary conditions. The goal of this study is to examine what would have happened if this rare CME had been Earth-bound. Global SWMF-generated ground geomagnetic field perturbations are used to compute the simulated induced geoelectric field at specific ground-based active INTERMAGNET magnetometer sites. Simulation results show that while the modeled global SYM-H index, a high-resolution equivalent of the Dst index, was comparable to previously observed severe geomagnetic storms such as the Halloween 2003 storm, the 23 July CME would have produced some of the largest geomagnetically induced electric fields, making it very geoeffective. These results have important practical applications for risk management of electrical power grids.
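
    One common way to turn ground magnetic perturbations into a geoelectric field is the plane-wave approximation over a uniform conducting half-space (E(w) = Z(w) B(w)/mu0 with Z(w) = sqrt(i w mu0 / sigma)); the sketch below illustrates that standard technique with synthetic data and an assumed ground conductivity, and is not necessarily the method used in the study above.

      import numpy as np

      MU0 = 4e-7 * np.pi

      def plane_wave_efield(b_horizontal, dt, conductivity=1e-3):
          """Geoelectric field (orthogonal horizontal component) from a magnetic-field
          time series via the plane-wave, uniform half-space approximation."""
          n = len(b_horizontal)
          B = np.fft.rfft(b_horizontal)
          w = 2.0 * np.pi * np.fft.rfftfreq(n, d=dt)
          Z = np.sqrt(1j * w * MU0 / conductivity)       # surface impedance, Z(0) = 0
          return np.fft.irfft(Z * B / MU0, n=n)          # E in V/m

      # Synthetic 1-minute geomagnetic perturbation (amplitude 500 nT) over one day
      t = np.arange(0, 86400, 60.0)
      b = 500e-9 * np.sin(2 * np.pi * t / 3600.0) * np.exp(-((t - 43200) / 7200.0) ** 2)
      e = plane_wave_efield(b, dt=60.0, conductivity=1e-3)
      print(f"peak geoelectric field ~ {1e3 * np.abs(e).max():.2f} V/km")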

  2. Quantum rendering

    NASA Astrophysics Data System (ADS)

    Lanzagorta, Marco O.; Gomez, Richard B.; Uhlmann, Jeffrey K.

    2003-08-01

    In recent years, computer graphics has emerged as a critical component of the scientific and engineering process, and it is recognized as an important computer science research area. Computer graphics are extensively used for a variety of aerospace and defense training systems and by Hollywood's special effects companies. All these applications require the computer graphics systems to produce high quality renderings of extremely large data sets in short periods of time. Much research has been done in "classical computing" toward the development of efficient methods and techniques to reduce the rendering time required for large datasets. Quantum Computing's unique algorithmic features offer the possibility of speeding up some of the known rendering algorithms currently used in computer graphics. In this paper we discuss possible implementations of quantum rendering algorithms. In particular, we concentrate on the implementation of Grover's quantum search algorithm for Z-buffering, ray-tracing, radiosity, and scene management techniques. We also compare the theoretical performance between the classical and quantum versions of the algorithms.

  3. Multicore Challenges and Benefits for High Performance Scientific Computing

    DOE PAGES

    Nielsen, Ida M. B.; Janssen, Curtis L.

    2008-01-01

    Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
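
    A minimal sketch of the hybrid message-passing/multi-threading model in Python, assuming mpi4py is available and that NumPy is linked against a multi-threaded BLAS: rows of A are distributed across MPI ranks (message passing), while each rank's local block multiply is threaded inside the BLAS call. The file name and problem size are placeholders; run with, e.g., mpirun -np 4 python hybrid_matmul.py.

      # hybrid_matmul.py -- distributed rows across ranks, threaded BLAS within a rank
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n = 2048                                  # global dimension (assumes size divides n)
      rows = n // size

      # Each rank owns a block of rows of A; B is replicated via broadcast
      a_local = np.random.default_rng(rank).standard_normal((rows, n))
      b = np.empty((n, n))
      if rank == 0:
          b[:] = np.random.default_rng(99).standard_normal((n, n))
      comm.Bcast(b, root=0)

      # Local multiply: NumPy dispatches to a multi-threaded BLAS if one is linked
      c_local = a_local @ b

      # Gather the distributed result on rank 0
      c = np.empty((n, n)) if rank == 0 else None
      comm.Gather(c_local, c, root=0)
      if rank == 0:
          print("C shape:", c.shape)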

  4. A hydro-meteorological model chain to assess the influence of natural variability and impacts of climate change on extreme events and propose optimal water management

    NASA Astrophysics Data System (ADS)

    von Trentini, F.; Willkofer, F.; Wood, R. R.; Schmid, F. J.; Ludwig, R.

    2017-12-01

    The ClimEx project (Climate change and hydrological extreme events - risks and perspectives for water management in Bavaria and Québec) focuses on the effects of climate change on hydro-meteorological extreme events and their implications for water management in Bavaria and Québec. Therefore, a hydro-meteorological model chain is applied. It employs the high-performance computing capacity of the Leibniz Supercomputing Centre facility SuperMUC to dynamically downscale 50 members of the Global Circulation Model CanESM2 over European and Eastern North American domains using the Canadian Regional Climate Model (RCM) CRCM5. Over Europe, the unique single model ensemble is conjointly analyzed with the latest information provided through the CORDEX initiative, to better assess the influence of natural climate variability and climatic change in the dynamics of extreme events. Furthermore, these 50 members of a single RCM will enhance extreme value statistics (extreme return periods) by exploiting the available 1500 model years for the reference period from 1981 to 2010. Hence, the RCM output is applied to drive the process-based, fully distributed, and deterministic hydrological model WaSiM in high temporal (3 h) and spatial (500 m) resolution. WaSiM and the large ensemble are further used to derive a variety of hydro-meteorological patterns leading to severe flood events. A tool for virtual perfect prediction shall provide a combination of optimal lead time and management strategy to mitigate certain flood events following these patterns.

  5. Instability of Poiseuille flow at extreme Mach numbers: linear analysis and simulations.

    PubMed

    Xie, Zhimin; Girimaji, Sharath S

    2014-04-01

    We develop the perturbation equations to describe instability evolution in Poiseuille flow at the limit of very high Mach numbers. At this limit the equation governing the flow is the pressure-released Navier-Stokes equation. The ensuing semianalytical solution is compared against simulations performed using the gas-kinetic method (GKM), resulting in excellent agreement. A similar comparison between analytical and computational results of small perturbation growth is performed at the incompressible (zero Mach number) limit, again leading to excellent agreement. The study accomplishes two important goals: it (i) contrasts the small perturbation evolution in Poiseuille flows at extreme Mach numbers and (ii) provides important verification of the GKM simulation scheme.

  6. Combining local search with co-evolution in a remarkably simple way

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boettcher, S.; Percus, A.

    2000-05-01

    The authors explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by self-organized criticality, a concept introduced to describe emergent complexity in physical systems. In contrast to genetic algorithms, which operate on an entire gene-pool of possible solutions, extremal optimization successively replaces extremely undesirable elements of a single sub-optimal solution with new, random ones. Large fluctuations, or avalanches, ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements heuristics inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Phase transitions are found in many combinatorial optimization problems, and have been conjectured to occur in the region of parameter space containing the hardest instances. We demonstrate how extremal optimization can be implemented for a variety of hard optimization problems. We believe that this will be a useful tool in the investigation of phase transitions in combinatorial optimization, thereby helping to elucidate the origin of computational complexity.
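
    A minimal Python sketch of tau-EO applied to balanced graph bipartitioning, written here only to illustrate the single-parameter, replace-the-worst mechanism described above (the fitness definition, graph, and parameter values are illustrative choices, not the authors' exact benchmark setup): each vertex's fitness is the fraction of its edges kept inside its own half, the k-th worst vertex is chosen with probability proportional to k^(-tau), and it is swapped with a random vertex from the other half.

      import numpy as np

      def tau_eo_bipartition(adj, tau=1.4, steps=3000, seed=0):
          """tau-EO for balanced bipartitioning: repeatedly pick a poorly fit vertex
          (rank k chosen with probability ~ k^-tau) and swap it across the cut."""
          rng = np.random.default_rng(seed)
          n = len(adj)
          side = np.zeros(n, dtype=int)
          side[rng.permutation(n)[: n // 2]] = 1            # balanced random start

          def cut_size(s):
              return sum(1 for u in range(n) for v in adj[u] if v > u and s[u] != s[v])

          pk = np.arange(1, n + 1, dtype=float) ** (-tau)   # rank-selection probabilities
          pk /= pk.sum()

          best_side, best_cut = side.copy(), cut_size(side)
          for _ in range(steps):
              # Fitness of u: fraction of its edges kept inside its own half
              fit = np.array([np.mean([side[v] == side[u] for v in adj[u]]) if adj[u] else 1.0
                              for u in range(n)])
              order = np.argsort(fit)                       # worst first
              u = order[rng.choice(n, p=pk)]                # rank-selected "bad" vertex
              w = rng.choice(np.flatnonzero(side != side[u]))
              side[u], side[w] = side[w], side[u]           # swap keeps the halves balanced
              c = cut_size(side)
              if c < best_cut:
                  best_cut, best_side = c, side.copy()
          return best_side, best_cut

      # Toy random graph as an adjacency list
      rng = np.random.default_rng(1)
      n = 60
      adj = [[] for _ in range(n)]
      for u in range(n):
          for v in range(u + 1, n):
              if rng.random() < 0.08:
                  adj[u].append(v); adj[v].append(u)
      print("best cut found:", tau_eo_bipartition(adj)[1])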

  7. SharP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venkata, Manjunath Gorentla; Aderholdt, William F

    The pre-exascale systems are expected to have a significant amount of hierarchical and heterogeneous on-node memory, and this trend of system architecture in extreme-scale systems is expected to continue into the exascale era. Along with hierarchical-heterogeneous memory, the system typically has a high-performing network and a compute accelerator. This system architecture is not only effective for running traditional High Performance Computing (HPC) applications (Big-Compute), but also for running data-intensive HPC applications and Big-Data applications. As a consequence, there is a growing desire to have a single system serve the needs of both Big-Compute and Big-Data applications. Though the system architecture supports the convergence of Big-Compute and Big-Data, the programming models and software layer have yet to evolve to support either hierarchical-heterogeneous memory systems or the convergence. This work presents a programming abstraction to address this problem. The programming abstraction is implemented as a software library and runs on pre-exascale and exascale systems supporting current and emerging system architectures. Using distributed data-structures as a central concept, it provides (1) a simple, usable, and portable abstraction for hierarchical-heterogeneous memory and (2) a unified programming abstraction for Big-Compute and Big-Data applications.

  8. Suggested Approaches to the Measurement of Computer Anxiety.

    ERIC Educational Resources Information Center

    Toris, Carol

    Psychologists can gain insight into human behavior by examining what people feel about, know about, and do with, computers. Two extreme reactions to computers are computer phobia, or anxiety, and computer addiction, or "hacking". A four-part questionnaire was developed to measure computer anxiety. The first part is a projective technique which…

  9. A site oriented supercomputer for theoretical physics: The Fermilab Advanced Computer Program Multi Array Processor System (ACPMAPS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nash, T.; Atac, R.; Cook, A.

    1989-03-06

    The ACPMAPS multiprocessor is a highly cost effective, local memory parallel computer with a hypercube or compound hypercube architecture. Communication requires the attention of only the two communicating nodes. The design is aimed at floating point intensive, grid-like problems, particularly those with extreme computing requirements. The processing nodes of the system are single board array processors, each with a peak power of 20 Mflops, supported by 8 Mbytes of data and 2 Mbytes of instruction memory. The system currently being assembled has a peak power of 5 Gflops. The nodes are based on the Weitek XL chip set. The system delivers performance at approximately $300/Mflop. 8 refs., 4 figs.

  10. Reviewing Some Crucial Concepts of Gibbs Energy in Chemical Equilibrium Using a Computer-Assisted, Guided-Problem-Solving Approach

    ERIC Educational Resources Information Center

    Borge, Javier

    2015-01-01

    G, G°, ΔrG, ΔrG°, ΔG, and ΔG° are essential quantities for mastering chemical equilibrium. Although the number of publications devoted to explaining these quantities is extremely high, it seems that they do not produce the desired effect because some articles and textbooks are still being written with…
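
    For readers of this record, the standard relations connecting these reaction quantities (a textbook statement, not text drawn from the article itself) are

        \Delta_{\mathrm{r}}G = \Delta_{\mathrm{r}}G^{\circ} + RT\ln Q,
        \qquad
        \Delta_{\mathrm{r}}G = 0 \;\Longrightarrow\; \Delta_{\mathrm{r}}G^{\circ} = -RT\ln K,

    where Q is the reaction quotient and K the equilibrium constant.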

  11. The Role of the Goldstone Apple Valley Radio Telescope Project in Promoting Scientific Efficacy among Middle and High School Students.

    ERIC Educational Resources Information Center

    Ibe, Mary; Deutscher, Rebecca

    This study investigated the effects on student scientific efficacy after participation in the Goldstone Apple Valley Radio Telescope (GAVRT) project. In the GAVRT program, students use computers to record extremely faint radio waves collected by the telescope and analyze real data. Scientific efficacy is a type of self-knowledge a person uses to…

  12. Optimizing high performance computing workflow for protein functional annotation.

    PubMed

    Stanberry, Larissa; Rekepalli, Bhanu; Liu, Yuan; Giblock, Paul; Higdon, Roger; Montague, Elizabeth; Broomall, William; Kolker, Natali; Kolker, Eugene

    2014-09-10

    Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the protein sequence universe is rapidly expanding. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this work, we present an improved and optimized automated workflow to enable large-scale protein annotation. The workflow uses high performance computing architectures and a low complexity classification algorithm to assign proteins into existing clusters of orthologous groups of proteins. On the basis of the Position-Specific Iterative Basic Local Alignment Search Tool (PSI-BLAST), the algorithm ensures at least 80% specificity and sensitivity of the resulting classifications. The workflow utilizes highly scalable parallel applications for classification and sequence alignment. Using Extreme Science and Engineering Discovery Environment supercomputers, the workflow processed 1,200,000 newly sequenced bacterial proteins. With the rapid expansion of the protein sequence universe, the proposed workflow will enable scientists to annotate big genome data.
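
    A hedged sketch of the classification step only (not the published workflow): query chunks are distributed across local cores and each is searched against a pre-formatted COG protein database with PSI-BLAST, keeping the best hit per query. The database name, file layout, and E-value threshold are assumptions; psiblast and makeblastdb from NCBI BLAST+ must be installed.

        import glob
        import subprocess
        from concurrent.futures import ProcessPoolExecutor

        COG_DB = "cog_proteins"      # hypothetical database built beforehand with makeblastdb
        EVALUE = "1e-5"              # illustrative threshold

        def classify(query_fasta):
            """Run PSI-BLAST on one chunk of queries and keep the best hit per query."""
            out = subprocess.run(
                ["psiblast", "-query", query_fasta, "-db", COG_DB,
                 "-num_iterations", "3", "-evalue", EVALUE,
                 "-outfmt", "6 qseqid sseqid pident evalue"],
                capture_output=True, text=True, check=True).stdout
            best = {}
            for line in out.splitlines():
                qid, sid, pident, evalue = line.split("\t")
                if qid not in best or float(evalue) < best[qid][2]:
                    best[qid] = (sid, float(pident), float(evalue))
            return best

        if __name__ == "__main__":
            chunks = sorted(glob.glob("queries/chunk_*.fasta"))   # hypothetical pre-split input
            with ProcessPoolExecutor() as pool:
                for assignments in pool.map(classify, chunks):
                    for qid, (cog, pident, ev) in assignments.items():
                        print(qid, cog, pident, ev, sep="\t")

    On a supercomputer the same pattern would be driven by MPI ranks or a job array rather than a local process pool.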

  13. Optimizing high performance computing workflow for protein functional annotation

    PubMed Central

    Stanberry, Larissa; Rekepalli, Bhanu; Liu, Yuan; Giblock, Paul; Higdon, Roger; Montague, Elizabeth; Broomall, William; Kolker, Natali; Kolker, Eugene

    2014-01-01

    Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the protein sequence universe is rapidly expanding. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this work, we present an improved and optimized automated workflow to enable large-scale protein annotation. The workflow uses high performance computing architectures and a low complexity classification algorithm to assign proteins into existing clusters of orthologous groups of proteins. On the basis of the Position-Specific Iterative Basic Local Alignment Search Tool (PSI-BLAST), the algorithm ensures at least 80% specificity and sensitivity of the resulting classifications. The workflow utilizes highly scalable parallel applications for classification and sequence alignment. Using Extreme Science and Engineering Discovery Environment supercomputers, the workflow processed 1,200,000 newly sequenced bacterial proteins. With the rapid expansion of the protein sequence universe, the proposed workflow will enable scientists to annotate big genome data. PMID:25313296

  14. Use of Multiple GPUs to Speedup the Execution of a Three-Dimensional Computational Model of the Innate Immune System

    NASA Astrophysics Data System (ADS)

    Xavier, M. P.; do Nascimento, T. M.; dos Santos, R. W.; Lobosco, M.

    2014-03-01

    The development of computational systems that mimic the physiological response of organs or even the entire body is a complex task. One of the issues that makes this task extremely complex is the huge amount of computational resources needed to execute the simulations. For this reason, the use of parallel computing is mandatory. In this work, we focus on the simulation of the temporal and spatial behaviour of some human innate immune system cells and molecules in a small three-dimensional section of a tissue. To perform this simulation, we use multiple Graphics Processing Units (GPUs) in a shared-memory environment. Despite the high initialization and communication costs imposed by the use of GPUs, the techniques used to implement the HIS simulator have proven very effective for this purpose.

  15. Estimation of in-situ bioremediation system cost using a hybrid Extreme Learning Machine (ELM)-particle swarm optimization approach

    NASA Astrophysics Data System (ADS)

    Yadav, Basant; Ch, Sudheer; Mathur, Shashi; Adamowski, Jan

    2016-12-01

    In-situ bioremediation is the most common groundwater remediation procedure used for treating organically contaminated sites. A simulation-optimization approach, which incorporates a simulation model for groundwater flow and transport processes within an optimization program, could help engineers in designing a remediation system that best satisfies management objectives as well as regulatory constraints. In-situ bioremediation is a highly complex, non-linear process, and the modelling of such a complex system requires significant computational effort. Soft computing techniques have a flexible mathematical structure which can generalize complex nonlinear processes. In in-situ bioremediation management, a physically-based model is used for the simulation, and the simulated data are utilized by the optimization model to optimize the remediation cost. Repeatedly calling the simulator to satisfy the constraints is an extremely tedious and time-consuming process, and thus there is a need for a surrogate simulator that can reduce the computational burden. This study presents a simulation-optimization approach to achieve an accurate and cost-effective in-situ bioremediation system design for groundwater contaminated with BTEX (Benzene, Toluene, Ethylbenzene, and Xylenes) compounds. In this study, the Extreme Learning Machine (ELM) is used as a proxy simulator to replace BIOPLUME III for the simulation. The selection of ELM is based on a comparative analysis with Artificial Neural Network (ANN) and Support Vector Machine (SVM) models, as they were successfully used in previous studies of in-situ bioremediation system design. Further, a single-objective optimization problem is solved by a coupled Extreme Learning Machine (ELM)-Particle Swarm Optimization (PSO) technique to achieve the minimum cost for the in-situ bioremediation system design. The results indicate that ELM is a faster and more accurate proxy simulator than ANN and SVM. The total cost obtained by the ELM-PSO approach is held to a minimum while successfully satisfying all the regulatory constraints of the contaminated site.
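
    A minimal sketch of the surrogate-plus-swarm idea (not the authors' BIOPLUME III setup; the "expensive_simulator", design bounds, and swarm settings are stand-in assumptions): an Extreme Learning Machine is fitted to (design, cost) samples from an expensive model, and particle swarm optimization then searches the cheap surrogate.

        import numpy as np

        rng = np.random.default_rng(0)

        def expensive_simulator(x):          # placeholder for a flow/transport/biodegradation model
            return np.sum((x - 0.3) ** 2, axis=-1) + 0.05 * np.sin(10 * x).sum(axis=-1)

        # Extreme Learning Machine: random hidden layer, least-squares output weights.
        def elm_fit(X, y, n_hidden=50):
            W = rng.normal(size=(X.shape[1], n_hidden))
            b = rng.normal(size=n_hidden)
            H = np.tanh(X @ W + b)
            beta, *_ = np.linalg.lstsq(H, y, rcond=None)
            return lambda Xq: np.tanh(Xq @ W + b) @ beta

        # Training data from a modest number of simulator runs.
        X_train = rng.uniform(0, 1, size=(200, 3))
        y_train = expensive_simulator(X_train)
        surrogate = elm_fit(X_train, y_train)

        # Particle swarm optimization over the surrogate.
        n_particles, dim, iters = 30, 3, 200
        pos = rng.uniform(0, 1, size=(n_particles, dim))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), surrogate(pos)
        gbest = pbest[np.argmin(pbest_val)]
        for _ in range(iters):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0, 1)
            val = surrogate(pos)
            improved = val < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], val[improved]
            gbest = pbest[np.argmin(pbest_val)]

        print("surrogate optimum:", gbest, "checked against simulator:", expensive_simulator(gbest))

    A real design problem would add penalty terms for the regulatory constraints and periodically re-check candidate designs against the full simulator.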

  16. An Efficient Means of Determining the Newtonian Potential for Highly Flattened Mass Distributions

    NASA Astrophysics Data System (ADS)

    Cohl, H.

    1999-05-01

    In this dissertation talk we present a mathematical result that, to the best of our knowledge, has not been previously reported. That is, the Green's function in a variety of orthogonal coordinate systems may be expressed in terms of a single sum over the azimuthal quantum number, m, of terms involving toroidal harmonics. We show how this new addition theorem can be effectively applied to a variety of potential problems in gravitation, electrostatics and magnetostatics and, in particular, demonstrate how it may be used to analyze the properties of general nonaxisymmetric disk systems with and without vertical extent. Finally, we describe our numerical implementation of the addition theorem in order to determine the Newtonian potential extremely close to highly flattened mass distributions. This yields an extremely efficient technique for computing the boundary values in a general algorithm that is designed to solve the 3D Poisson equation on a cylindrical coordinate lattice. We acknowledge support from the U.S. National Science Foundation through grants AST-9528424 and DGE-9355007, the latter of which has been issued through the NSF's Graduate Traineeships Program. This work also has been supported, in part, by grants of high-performance-computing time on NPACI facilities at SDSC and UT, Austin, and through the PET program of the NAVOCEANO DoD Major Shared Resource Center in Stennis, MS.
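
    The expansion referred to above is commonly quoted in the following form (reproduced here as the standard statement of the cylindrical-coordinate result, not verbatim from the talk): in cylindrical coordinates (R, phi, z),

        \frac{1}{\lvert\mathbf{x}-\mathbf{x}'\rvert}
          = \frac{1}{\pi\sqrt{R R'}}\sum_{m=-\infty}^{\infty} Q_{m-1/2}(\chi)\,e^{im(\phi-\phi')},
        \qquad
        \chi = \frac{R^{2}+R'^{2}+(z-z')^{2}}{2RR'},

    where the Q_{m-1/2} are Legendre functions of the second kind of half-integer degree (toroidal harmonics).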

  17. Epidemic failure detection and consensus for extreme parallelism

    DOE PAGES

    Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas; ...

    2017-02-01

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.
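
    A toy simulation of the gossip idea (none of the three published algorithms; the seeding, push-only exchange, and failure set are assumptions) that illustrates the roughly logarithmic growth of consensus cycles with system size reported above:

        import math
        import random

        def gossip_consensus(n_ranks, failed, seed=0):
            """Count push-gossip cycles until every surviving rank knows the full failed set."""
            rng = random.Random(seed)
            alive = [r for r in range(n_ranks) if r not in failed]
            known = {r: set() for r in alive}
            for r in rng.sample(alive, k=min(4, len(alive))):   # a few ranks detect the failures first
                known[r] |= failed
            cycles = 0
            while any(known[r] != failed for r in alive):
                cycles += 1
                for r in alive:                                  # each rank pushes to one random peer
                    peer = rng.choice(alive)
                    known[peer] |= known[r]
            return cycles

        for n in (256, 4096, 65536):
            failed = set(range(8))
            print(n, "ranks ->", gossip_consensus(n, failed),
                  "gossip cycles (log2 n =", round(math.log2(n), 1), ")")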

  18. Spin-dependent post-Newtonian parameters from EMRI computation in Kerr background

    NASA Astrophysics Data System (ADS)

    Friedman, John; Le Tiec, Alexandre; Shah, Abhay

    2013-04-01

    Because the extreme mass-ratio inspiral (EMRI) approximation is accurate to all orders in v/c, it can be used to find high order post-Newtonian parameters that are not yet analytically accessible. We report here on progress in computing spin-dependent, conservative, post-Newtonian parameters from a radiation-gauge computation for a particle in circular orbit in a family of Kerr geometries. For a particle with 4-velocity u^α= U k^α, with k^α the helical Killing vector of the perturbed spacetime, the renormalized perturbation δU, when written as a function of the particle's angular velocity, is invariant under gauge transformations generated by helically symmetric vectors. The EMRI computations are done in a modified radiation gauge. Extracted parameters are compared to previously known and newly computed spin-dependent post-Newtonian terms. This work is modeled on earlier computations by Blanchet, Detweiler, Le Tiec and Whiting of spin-independent terms for a particle in circular orbit in a Schwarzschild geometry.

  19. Single-electron random-number generator (RNG) for highly secure ubiquitous computing applications

    NASA Astrophysics Data System (ADS)

    Uchida, Ken; Tanamoto, Tetsufumi; Fujita, Shinobu

    2007-11-01

    Since the security of all modern cryptographic techniques relies on unpredictable and irreproducible digital keys generated by random-number generators (RNGs), the realization of high-quality RNG is essential for secure communications. In this report, a new RNG, which utilizes single-electron phenomena, is proposed. A room-temperature operating silicon single-electron transistor (SET) having nearby an electron pocket is used as a high-quality, ultra-small RNG. In the proposed RNG, stochastic single-electron capture/emission processes to/from the electron pocket are detected with high sensitivity by the SET, and result in giant random telegraphic signals (GRTS) on the SET current. It is experimentally demonstrated that the single-electron RNG generates extremely high-quality random digital sequences at room temperature, in spite of its simple configuration. Because of its small-size and low-power properties, the single-electron RNG is promising as a key nanoelectronic device for future ubiquitous computing systems with highly secure mobile communication capabilities.

  20. Acceleration and torque feedback for robotic control - Experimental results

    NASA Technical Reports Server (NTRS)

    McInroy, John E.; Saridis, George N.

    1990-01-01

    Gross motion control of robotic manipulators typically requires significant on-line computations to compensate for nonlinear dynamics due to gravity, Coriolis, centripetal, and friction nonlinearities. One controller proposed by Luo and Saridis avoids these computations by feeding back joint acceleration and torque. This study implements the controller on a Puma 600 robotic manipulator. Joint acceleration measurement is obtained by measuring linear accelerations of each joint, and deriving a computationally efficient transformation from the linear measurements to the angular accelerations. Torque feedback is obtained by using the previous torque sent to the joints. The implementation has stability problems on the Puma 600 due to the extremely high gains inherent in the feedback structure. Since these high gains excite frequency modes in the Puma 600, the algorithm is modified to decrease the gain inherent in the feedback structure. The resulting compensator is stable and insensitive to high frequency unmodeled dynamics. Moreover, a second compensator is proposed which uses acceleration and torque feedback, but still allows nonlinear terms to be fed forward. Thus, by feeding the increment in the easily calculated gravity terms forward, improved responses are obtained. Both proposed compensators are implemented, and the real time results are compared to those obtained with the computed torque algorithm.

  1. Global Weirding? - Using Very Large Ensembles and Extreme Value Theory to assess Changes in Extreme Weather Events Today

    NASA Astrophysics Data System (ADS)

    Otto, F. E. L.; Mitchell, D.; Sippel, S.; Black, M. T.; Dittus, A. J.; Harrington, L. J.; Mohd Saleh, N. H.

    2014-12-01

    A shift in the distribution of socially-relevant climate variables, such as daily minimum winter temperatures and daily precipitation extremes, has been attributed to anthropogenic climate change for various mid-latitude regions. However, while there are many process-based arguments also suggesting a change in the shape of these distributions, attribution studies demonstrating this have not yet been undertaken. Here we use a very large initial-condition ensemble of ~40,000 members simulating the European winter of 2013/2014 using the distributed computing infrastructure of the weather@home project. Two separate scenarios are used: (1) current climate conditions, and (2) a counterfactual scenario of a "world that might have been" without anthropogenic forcing. Focusing specifically on extreme events, we assess how the estimated parameters of the Generalized Extreme Value (GEV) distribution vary depending on variable type, sampling frequency (daily, monthly, …) and geographical region. We find that the location parameter changes for most variables but, depending on the region and variables, we also find significant changes in the scale and shape parameters. The very large ensemble furthermore allows us to assess whether such findings in the fitted GEV distributions are consistent with an empirical analysis of the model data, and whether the most extreme data still follow a known underlying distribution that in a small sample might otherwise be dismissed as an outlier. The ~40,000-member ensemble is simulated using 12 different SST patterns (1 'observed', and 11 best guesses of SSTs with no anthropogenic warming). The range in SSTs, along with the corresponding changes in the NAO and high-latitude blocking, informs on the dynamics governing some of these extreme events. While strong tele-connection patterns are not found in this particular experiment, the high number of simulated extreme events allows for a more thorough analysis of the dynamics than has been performed before. Therefore, combining extreme value theory with very large ensemble simulations allows us to understand the dynamics of changes in extreme events in a way that is not possible with extreme value theory alone, and also shows in which cases statistics combined with smaller ensembles give results as valid as very large initial-condition ensembles.
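
    An illustrative sketch of the distribution-fitting step (synthetic data, not weather@home output): GEV parameters are fitted to block maxima from an "actual" and a "natural-forcings-only" ensemble and compared.

        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(0)

        def block_maxima(daily, block=90):
            """Seasonal (90-day) block maxima from a long daily series."""
            n = (daily.size // block) * block
            return daily[:n].reshape(-1, block).max(axis=1)

        # Stand-ins for "actual" and "natural-forcings-only" daily precipitation ensembles.
        actual = rng.gamma(shape=2.0, scale=6.0, size=400_000)
        natural = rng.gamma(shape=2.0, scale=5.5, size=400_000)

        for name, daily in (("actual", actual), ("natural", natural)):
            c, loc, scale = genextreme.fit(block_maxima(daily))
            # SciPy's shape parameter c corresponds to -xi in the usual GEV parameterisation.
            print(f"{name:8s} location={loc:6.2f} scale={scale:5.2f} shape(c)={c:6.3f} "
                  f"1-in-20-block level={genextreme.ppf(0.95, c, loc, scale):6.2f}")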

  2. Systematic Testing of Belief-Propagation Estimates for Absolute Free Energies in Atomistic Peptides and Proteins.

    PubMed

    Donovan-Maiye, Rory M; Langmead, Christopher J; Zuckerman, Daniel M

    2018-01-09

    Motivated by the extremely high computing costs associated with estimates of free energies for biological systems using molecular simulations, we further the exploration of existing "belief propagation" (BP) algorithms for fixed-backbone peptide and protein systems. The precalculation of pairwise interactions among discretized libraries of side-chain conformations, along with representation of protein side chains as nodes in a graphical model, enables direct application of the BP approach, which requires only ∼1 s of single-processor run time after the precalculation stage. We use a "loopy BP" algorithm, which can be seen as an approximate generalization of the transfer-matrix approach to highly connected (i.e., loopy) graphs, and it has previously been applied to protein calculations. We examine the application of loopy BP to several peptides as well as the binding site of the T4 lysozyme L99A mutant. The present study reports on (i) the comparison of the approximate BP results with estimates from unbiased estimators based on the Amber99SB force field; (ii) investigation of the effects of varying library size on BP predictions; and (iii) a theoretical discussion of the discretization effects that can arise in BP calculations. The data suggest that, despite their approximate nature, BP free-energy estimates are highly accurate; indeed, they never fall outside confidence intervals from unbiased estimators for the systems where independent results could be obtained. Furthermore, we find that libraries of sufficiently fine discretization (which diminish library-size sensitivity) can be obtained with standard computing resources in most cases. Altogether, the extremely low computing times and accurate results suggest the BP approach warrants further study.
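
    A minimal sketch of the transfer-matrix core that loopy BP generalises (random stand-in energies, not the Amber99SB systems studied here): the exact free energy of a short chain of discrete side-chain states with pairwise energies, checked against brute-force enumeration.

        import itertools
        import numpy as np

        rng = np.random.default_rng(0)
        kT = 0.6                      # kcal/mol, roughly room temperature (assumption)
        n_sites, n_states = 6, 4      # tiny "rotamer library" per site
        E_site = rng.normal(size=(n_sites, n_states))
        E_pair = rng.normal(size=(n_sites - 1, n_states, n_states))

        # Transfer-matrix product along the chain accumulates the partition function Z.
        msg = np.exp(-E_site[0] / kT)
        for i in range(n_sites - 1):
            T = np.exp(-(E_pair[i] + E_site[i + 1][None, :]) / kT)
            msg = msg @ T
        Z_tm = msg.sum()

        # Brute force for validation (only feasible for tiny systems).
        Z_bf = 0.0
        for conf in itertools.product(range(n_states), repeat=n_sites):
            E = sum(E_site[i, s] for i, s in enumerate(conf))
            E += sum(E_pair[i, conf[i], conf[i + 1]] for i in range(n_sites - 1))
            Z_bf += np.exp(-E / kT)

        print("free energy (transfer matrix):", -kT * np.log(Z_tm))
        print("free energy (brute force):    ", -kT * np.log(Z_bf))

    On a loopy graph the exact product above is replaced by iterated message passing, and the free energy becomes the (approximate) Bethe free energy.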

  3. Reconstructing metabolic flux vectors from extreme pathways: defining the alpha-spectrum.

    PubMed

    Wiback, Sharon J; Mahadevan, Radhakrishnan; Palsson, Bernhard Ø

    2003-10-07

    The move towards genome-scale analysis of cellular functions has necessitated the development of analytical (in silico) methods to understand such large and complex biochemical reaction networks. One such method is extreme pathway analysis, which uses stoichiometry and thermodynamic irreversibility to define mathematically unique, systemic metabolic pathways. These extreme pathways form the edges of a high-dimensional convex cone in the flux space that contains all the attainable steady state solutions, or flux distributions, for the metabolic network. By definition, any steady state flux distribution can be described as a nonnegative linear combination of the extreme pathways. To date, much effort has been focused on calculating, defining, and understanding these extreme pathways. However, little work has been performed to determine how these extreme pathways contribute to a given steady state flux distribution. This study represents an initial effort aimed at defining how physiological steady state solutions can be reconstructed from a network's extreme pathways. In general, there is not a unique set of nonnegative weightings on the extreme pathways that produce a given steady state flux distribution but rather a range of possible values. This range can be determined using linear optimization to maximize and minimize the weightings of a particular extreme pathway in the reconstruction, resulting in what we have termed the alpha-spectrum. The alpha-spectrum defines which extreme pathways can and cannot be included in the reconstruction of a given steady state flux distribution and to what extent they individually contribute to the reconstruction. It is shown that accounting for transcriptional regulatory constraints can considerably shrink the alpha-spectrum. The alpha-spectrum is computed and interpreted for two cases: first, for optimal states of a skeleton representation of core metabolism that includes transcriptional regulation, and second, for human red blood cell metabolism under various physiological, non-optimal conditions.
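
    A schematic alpha-spectrum calculation on a toy pathway matrix (illustrative numbers, not a real metabolic network): for each extreme pathway, linear programming finds the minimum and maximum nonnegative weighting consistent with reconstructing a given steady-state flux vector.

        import numpy as np
        from scipy.optimize import linprog

        # Columns of P are extreme pathways expressed in flux coordinates (toy example in
        # which the third pathway equals the sum of the first two, so ranges are nontrivial).
        P = np.array([[1.0, 0.0, 1.0],
                      [0.0, 1.0, 1.0],
                      [1.0, 1.0, 2.0]])
        v = np.array([1.0, 1.0, 2.0])          # a steady-state flux distribution to reconstruct

        n_paths = P.shape[1]
        for k in range(n_paths):
            c = np.zeros(n_paths)
            c[k] = 1.0
            lo = linprog(c,  A_eq=P, b_eq=v, bounds=[(0, None)] * n_paths)   # minimize w_k
            hi = linprog(-c, A_eq=P, b_eq=v, bounds=[(0, None)] * n_paths)   # maximize w_k
            print(f"pathway {k}: alpha in [{lo.fun:.3f}, {-hi.fun:.3f}]")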

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozaki, N.; Nellis, W. J.; Mashimo, T.

    Materials at high pressures and temperatures are of great current interest for warm dense matter physics, planetary sciences, and inertial fusion energy research. Shock-compression equation-of-state data and optical reflectivities of the fluid dense oxide Gd3Ga5O12 (GGG) were measured at extremely high pressures up to 2.6 TPa (26 Mbar), generated by high-power laser irradiation and magnetically-driven hypervelocity impacts. Above 0.75 TPa, the GGG Hugoniot data approach/reach a universal linear line of fluid metals, and the optical reflectivity most likely reaches a constant value, indicating that GGG undergoes a crossover from fluid semiconductor to poor metal with minimum metallic conductivity (MMC). These results suggest that most fluid compounds, e.g., strong planetary oxides, reach a common state on the universal Hugoniot of fluid metals (UHFM) with MMC at sufficiently extreme pressures and temperatures. Lastly, the systematic behavior of warm dense fluids provides useful benchmarks for developing theoretical equation-of-state and transport models, and for testing computational predictions, in the warm dense matter regime.

  5. Subseasonal to Seasonal Predictions of U.S. West Coast High Water Levels

    NASA Astrophysics Data System (ADS)

    Khouakhi, A.; Villarini, G.; Zhang, W.; Slater, L. J.

    2017-12-01

    Extreme sea levels pose a significant threat to coastal communities, ecosystems, and assets, as they are conducive to coastal flooding, coastal erosion and inland salt-water intrusion. As sea levels continue to rise, these sea level extremes - including occasional minor coastal flooding experienced during high tide (nuisance floods) - are of concern. Extreme sea levels are increasing at many locations around the globe, and this has been attributed largely to rising mean sea levels associated with intra-seasonal to interannual climate processes such as the El Niño-Southern Oscillation (ENSO). Here, intra-seasonal to seasonal probabilistic forecasts of high water levels are computed at the Toke Point tide gage station on the US west coast. We first identify the main climate drivers that are responsible for high water levels and examine their predictability using General Circulation Models (GCMs) from the North American Multi-Model Ensemble (NMME). These drivers are then used to develop a probabilistic framework for the seasonal forecasting of high water levels. We focus on the climate controls on the frequency of high water levels, using the number of exceedances above the 99.5th percentile and above the nuisance flood level established by the National Weather Service. Our findings indicate good forecast skill at the shortest lead time, with skill decreasing as lead time increases. In general, these models aptly capture the year-to-year variability in the observational records.
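
    A simple sketch of the predictand construction described above (synthetic hourly data, not Toke Point observations): monthly counts of water levels exceeding the station's 99.5th percentile.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        idx = pd.date_range("2000-01-01", "2009-12-31 23:00", freq="h")
        # Crude stand-in water level: a semidiurnal tidal signal plus noise.
        water_level = 1.2 * np.sin(2 * np.pi * np.arange(idx.size) / 12.42) + rng.normal(0, 0.3, idx.size)

        threshold = np.quantile(water_level, 0.995)
        exceed = pd.Series((water_level > threshold).astype(int), index=idx)
        monthly_counts = exceed.resample("MS").sum()    # monthly number of exceedances
        print("threshold:", round(float(threshold), 3))
        print(monthly_counts.head())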

  6. Simulating the Thermal Response of High Explosives on Time Scales of Days to Microseconds

    NASA Astrophysics Data System (ADS)

    Yoh, Jack J.; McClelland, Matthew A.

    2004-07-01

    We present an overview of computational techniques for simulating the thermal cookoff of high explosives using a multi-physics hydrodynamics code, ALE3D. Recent improvements to the code have aided our computational capability in modeling the response of energetic materials systems exposed to extreme thermal environments, such as fires. We consider an idealized model process for a confined explosive involving the transition from slow heating to rapid deflagration in which the time scale changes from days to hundreds of microseconds. The heating stage involves thermal expansion and decomposition according to an Arrhenius kinetics model while a pressure-dependent burn model is employed during the explosive phase. We describe and demonstrate the numerical strategies employed to make the transition from slow to fast dynamics.

  7. Estimation of local extreme suspended sediment concentrations in California Rivers.

    PubMed

    Tramblay, Yves; Saint-Hilaire, André; Ouarda, Taha B M J; Moatar, Florentina; Hecht, Barry

    2010-09-01

    The total amount of suspended sediment load carried by a stream during a year is usually transported during one or several extreme events related to high river flow and intense rainfall, leading to very high suspended sediment concentrations (SSCs). In this study, quantiles of SSC derived from annual maxima and the 99th percentile of SSC series are estimated locally, in a site-specific approach, using regional information. Analyses of relationships between physiographic characteristics and the selected indicators were undertaken using the 5-km-radius localities draining to each sampling site. Multiple regression models were built to test the regional estimation of these indicators of suspended sediment transport. To assess the accuracy of the estimates, a jackknife re-sampling procedure was used to compute the relative bias and root mean square error of the models. Results show that for the 19 stations considered in California, the extreme SSCs can be estimated with 40-60% uncertainty, depending on the presence of flow regulation in the basin. This modelling approach is likely to prove useful in other Mediterranean-climate watersheds, since it performs well in California, where geologic, climatic, physiographic, and land-use conditions are highly variable. Copyright 2010 Elsevier B.V. All rights reserved.

  8. A frozen Gaussian approximation-based multi-level particle swarm optimization for seismic inversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jinglai, E-mail: jinglaili@sjtu.edu.cn; Lin, Guang, E-mail: lin491@purdue.edu; Computational Sciences and Mathematics Division, Pacific Northwest National Laboratory, Richland, WA 99352

    2015-09-01

    In this paper, we propose a frozen Gaussian approximation (FGA)-based multi-level particle swarm optimization (MLPSO) method for seismic inversion of high-frequency wave data. The method addresses two challenges: First, the optimization problem is highly non-convex, which makes it hard for gradient-based methods to reach global minima. This is tackled by MLPSO, which can escape from undesired local minima. Second, the high-frequency character of seismic waves requires a large number of grid points in direct computational methods, and thus renders an extremely high computational demand on the simulation of each sample in MLPSO. We overcome this difficulty in three steps: First, we use FGA to compute high-frequency wave propagation based on asymptotic analysis on the phase plane; then we design a constrained full waveform inversion problem to prevent the optimization search from entering regions of velocity where FGA is not accurate; last, we solve the constrained optimization problem by MLPSO employing FGA solvers with different fidelity. The performance of the proposed method is demonstrated by a two-dimensional full-waveform inversion example of the smoothed Marmousi model.

  9. Computer Proficiency Questionnaire: Assessing Low and High Computer Proficient Seniors

    PubMed Central

    Boot, Walter R.; Charness, Neil; Czaja, Sara J.; Sharit, Joseph; Rogers, Wendy A.; Fisk, Arthur D.; Mitzner, Tracy; Lee, Chin Chin; Nair, Sankaran

    2015-01-01

    Purpose of the Study: Computers and the Internet have the potential to enrich the lives of seniors and aid in the performance of important tasks required for independent living. A prerequisite for reaping these benefits is having the skills needed to use these systems, which is highly dependent on proper training. One prerequisite for efficient and effective training is being able to gauge current levels of proficiency. We developed a new measure (the Computer Proficiency Questionnaire, or CPQ) to measure computer proficiency in the domains of computer basics, printing, communication, Internet, calendaring software, and multimedia use. Our aim was to develop a measure appropriate for individuals with a wide range of proficiencies from noncomputer users to extremely skilled users. Design and Methods: To assess the reliability and validity of the CPQ, a diverse sample of older adults, including 276 older adults with no or minimal computer experience, was recruited and asked to complete the CPQ. Results: The CPQ demonstrated excellent reliability (Cronbach’s α = .98), with subscale reliabilities ranging from .86 to .97. Age, computer use, and general technology use all predicted CPQ scores. Factor analysis revealed three main factors of proficiency related to Internet and e-mail use; communication and calendaring; and computer basics. Based on our findings, we also developed a short-form CPQ (CPQ-12) with similar properties but 21 fewer questions. Implications: The CPQ and CPQ-12 are useful tools to gauge computer proficiency for training and research purposes, even among low computer proficient older adults. PMID:24107443

  10. An Investigation of Wave Impact Duration in High-Speed Planing Craft in Rough Water

    DTIC Science & Technology

    2014-04-01

    CCD's CObIA system (Figure 9: CCD's CObIA cRIO Data Acquisition System), based on the National Instruments Compact RIO, has proven itself suitable for seakeeping measurements in even the most extreme conditions. Personal computers have also improved, allowing engineers to…

  11. R&D100: Lightweight Distributed Metric Service

    ScienceCinema

    Gentile, Ann; Brandt, Jim; Tucker, Tom; Showerman, Mike

    2018-06-12

    On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.

  12. R&D100: Lightweight Distributed Metric Service

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gentile, Ann; Brandt, Jim; Tucker, Tom

    2015-11-19

    On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.

  13. Evaluating the Effects of Interface Disruption Using fNIR Spectroscopy

    DTIC Science & Technology

    2011-02-28

    …introduced until the 1990s and holds great potential for extremely non-invasive cognitive state measurement. It is significantly easier and faster to… As a reminder, the general protocol is as follows: 1) researchers gather benchmark tasks from cognitive psychology that elicit high and low…

  14. Holographic Adaptive Laser Optics System (HALOS): Fast, Autonomous Aberration Correction

    NASA Astrophysics Data System (ADS)

    Andersen, G.; MacDonald, K.; Gelsinger-Austin, P.

    2013-09-01

    We present an adaptive optics system which uses a multiplexed hologram to deconvolve the phase aberrations in an input beam. This wavefront characterization is extremely fast, as it is based on simple measurements of the intensity of focal spots and does not require any computations. Furthermore, the system does not require a computer in the loop and is thus much cheaper, less complex and more robust as well. A fully functional, closed-loop prototype incorporating a 32-element MEMS mirror has been constructed. The unit has a footprint no larger than a laptop but runs at a bandwidth of 100 kHz, over an order of magnitude faster than comparable, conventional systems occupying a significantly larger volume. Additionally, since the sensing is based on parallel, all-optical processing, the speed is independent of actuator number, running at the same bandwidth for one actuator as for a million. We are developing the HALOS technology with a view towards next-generation surveillance systems for extreme adaptive optics applications. These include imaging, lidar and free-space optical communications for unmanned aerial vehicles and SSA. The small volume is ideal for UAVs, while the high speed and high resolution will be of great benefit to the ground-based observation of space-based objects.

  15. Cosmological neutrino simulations at extreme scale

    DOE PAGES

    Emberson, J. D.; Yu, Hao-Ran; Inman, Derek; ...

    2017-08-01

    Constraining neutrino mass remains an elusive challenge in modern physics. Precision measurements are expected from several upcoming cosmological probes of large-scale structure. Achieving this goal relies on an equal level of precision from theoretical predictions of neutrino clustering. Numerical simulations of the non-linear evolution of cold dark matter and neutrinos play a pivotal role in this process. We incorporate neutrinos into the cosmological N-body code CUBEP3M and discuss the challenges associated with pushing to the extreme scales demanded by the neutrino problem. We highlight code optimizations made to exploit modern high performance computing architectures and present a novel method of data compression that reduces the phase-space particle footprint from 24 bytes in single precision to roughly 9 bytes. We scale the neutrino problem to the Tianhe-2 supercomputer and provide details of our production run, named TianNu, which uses 86% of the machine (13,824 compute nodes). With a total of 2.97 trillion particles, TianNu is currently the world’s largest cosmological N-body simulation and improves upon previous neutrino simulations by two orders of magnitude in scale. We finish with a discussion of the unanticipated computational challenges that were encountered during the TianNu runtime.

  16. Computing the proton aurora at early Mars

    NASA Astrophysics Data System (ADS)

    Lovato, K.; Gronoff, G.; Curry, S.; Simon Wedlund, C.; Moore, W. B.

    2017-12-01

    In the early Solar System (about 4 Gyr ago), our Sun was only about 70% as luminous as it is today, but much more active. Indeed, for young stars, solar flares occur more frequently, and therefore so do coronal mass ejections and solar energetic particle events. With an increase in solar events, the flux of protons becomes extremely high and affects planetary atmospheres in a more extreme way than today. Proton precipitation on planets has an impact on the energy balance of their upper atmospheres, can affect the photochemistry, and can create auroral emissions. Understanding proton precipitation at early Mars can help in understanding the chemical processes at work as well as atmospheric evolution and escape. We concentrated our effort on protons up to 1 MeV, since they have the most important influence on the upper atmosphere. Using scaling laws, we estimated the proton flux for early Mars up to 1 MeV. A kinetic 1D code, validated for present-day Mars, was used to compute the effects of low-energy proton precipitation on early Mars. This model solves the coupled H+/H multi-stream dissipative transport equation as well as the transport of the secondary electrons. For early Mars, it allowed us to compute the magnitude of the proton aurora, as well as the corresponding upward H flux.

  17. A Comparison of Automatic Parallelization Tools/Compilers on the SGI Origin 2000 Using the NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Frumkin, Michael; Hribar, Michelle; Jin, Hao-Qiang; Waheed, Abdul; Yan, Jerry

    1998-01-01

    Porting applications to new high performance parallel and distributed computing platforms is a challenging task. Since writing parallel code by hand is extremely time consuming and costly, porting codes would ideally be automated by using parallelization tools and compilers. In this paper, we compare the performance of the hand-written NAS Parallel Benchmarks against three parallel versions generated with the help of tools and compilers: 1) CAPTools, an interactive computer-aided parallelization tool that generates message passing code; 2) the Portland Group's HPF compiler; and 3) compiler directives with the native FORTRAN 77 compiler on the SGI Origin 2000.

  18. Clock Agreement Among Parallel Supercomputer Nodes

    DOE Data Explorer

    Jones, Terry R.; Koenig, Gregory A.

    2014-04-30

    This dataset presents measurements that quantify the clock synchronization time-agreement characteristics among several high performance computers including the current world's most powerful machine for open science, the U.S. Department of Energy's Titan machine sited at Oak Ridge National Laboratory. These ultra-fast machines derive much of their computational capability from extreme node counts (over 18000 nodes in the case of the Titan machine). Time-agreement is commonly utilized by parallel programming applications and tools, distributed programming application and tools, and system software. Our time-agreement measurements detail the degree of time variance between nodes and how that variance changes over time. The dataset includes empirical measurements and the accompanying spreadsheets.

  19. A preliminary evaluation of nearshore extreme sea level and wave models for fringing reef environments

    NASA Astrophysics Data System (ADS)

    Hoeke, R. K.; Reyns, J.; O'Grady, J.; Becker, J. M.; Merrifield, M. A.; Roelvink, J. A.

    2016-02-01

    Oceanic islands are widely perceived as vulnerable to sea level rise and are characterized by steep nearshore topography and fringing reefs. In such settings, nearshore dynamics and (non-tidal) water level variability tend to be dominated by wind-wave processes. These processes are highly sensitive to reef morphology and roughness and to the regional wave climate. Thus sea level extremes tend to be highly localized, and their likelihood can be expected to change in the future (beyond simple extrapolation of sea level rise scenarios): e.g. sea level rise may increase the effective mean depth of reef crests and flats, and ocean acidification and/or increased temperatures may lead to changes in reef structure. The problem is sufficiently complex that analytic or numerical approaches are necessary to estimate current hazards and explore potential future changes. In this study, we evaluate the capacity of several analytic/empirical approaches and phase-averaged and phase-resolved numerical models at sites in the insular tropical Pacific. We consider their ability to predict time-averaged wave setup and instantaneous water level exceedance probability (or dynamic wave run-up) as well as computational cost; where possible, we compare the model results with in situ observations from a number of previous studies. Preliminary results indicate that analytic approaches are by far the most computationally efficient, but tend to perform poorly when alongshore straight and parallel morphology cannot be assumed. Phase-averaged models tend to perform well with respect to wave setup in such situations, but are unable to predict processes related to individual waves or wave groups, such as infragravity motions or wave run-up. Phase-resolved models tend to perform best, but come at high computational cost, an important consideration when exploring possible future scenarios. A new approach combining an unstructured computational grid with a quasi-phase-averaged approach (i.e. only phase-resolving motions below a frequency cutoff) shows promise as a good compromise between computational efficiency and resolving processes such as wave run-up and overtopping in more complex bathymetric situations.

  20. Highly Scalable Asynchronous Computing Method for Partial Differential Equations: A Path Towards Exascale

    NASA Astrophysics Data System (ADS)

    Konduri, Aditya

    Many natural and engineering systems are governed by nonlinear partial differential equations (PDEs) which result in multiscale phenomena, e.g. turbulent flows. Numerical simulations of these problems are computationally very expensive and demand extreme levels of parallelism. At realistic conditions, simulations are being carried out on massively parallel computers with hundreds of thousands of processing elements (PEs). It has been observed that communication between PEs as well as their synchronization at these extreme scales take up a significant portion of the total simulation time and result in poor scalability of codes. This issue is likely to pose a bottleneck in the scalability of codes on future Exascale systems. In this work, we propose an asynchronous computing algorithm based on widely used finite difference methods to solve PDEs in which synchronization between PEs due to communication is relaxed at a mathematical level. We show that while stability is conserved when schemes are used asynchronously, accuracy is greatly degraded. Since message arrivals at PEs are random processes, so is the behavior of the error. We propose a new statistical framework in which we show that average errors always drop to first order regardless of the original scheme. We propose new asynchrony-tolerant schemes that maintain accuracy when synchronization is relaxed. The quality of the solution is shown to depend not only on the physical phenomena and numerical schemes, but also on the characteristics of the computing machine. A novel algorithm using remote memory access communications has been developed to demonstrate excellent scalability of the method for large-scale computing. Finally, we present a path to extend this method to solving complex multi-scale problems on Exascale machines.
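
    A toy illustration of relaxed synchronization (not the asynchrony-tolerant schemes proposed in the work; the delay model and grid are assumptions): explicit finite-difference diffusion on two "processing elements" where each may see a randomly stale copy of its neighbour's halo value, so the accuracy penalty of asynchrony can be measured against the synchronous solution.

        import numpy as np

        rng = np.random.default_rng(0)
        nx, nt, alpha = 64, 400, 0.4            # points per PE, time steps, diffusion number
        max_delay = 3                           # maximum halo staleness in steps (assumption)

        def run(asynchronous):
            left = np.sin(np.linspace(0, np.pi, nx))       # PE 0 subdomain
            right = np.zeros(nx)                            # PE 1 subdomain
            hist_l = [left[-1]] * (max_delay + 1)           # history of exchanged halo values
            hist_r = [right[0]] * (max_delay + 1)
            for _ in range(nt):
                dl = rng.integers(0, max_delay + 1) if asynchronous else 0
                dr = rng.integers(0, max_delay + 1) if asynchronous else 0
                ghost_for_right = hist_l[-1 - dl]           # possibly stale halo from PE 0
                ghost_for_left = hist_r[-1 - dr]            # possibly stale halo from PE 1
                new_left, new_right = left.copy(), right.copy()
                new_left[1:-1] = left[1:-1] + alpha * (left[2:] - 2 * left[1:-1] + left[:-2])
                new_left[-1] = left[-1] + alpha * (ghost_for_left - 2 * left[-1] + left[-2])
                new_right[0] = right[0] + alpha * (right[1] - 2 * right[0] + ghost_for_right)
                new_right[1:-1] = right[1:-1] + alpha * (right[2:] - 2 * right[1:-1] + right[:-2])
                left, right = new_left, new_right
                hist_l.append(left[-1]); hist_l.pop(0)
                hist_r.append(right[0]); hist_r.pop(0)
            return np.concatenate([left, right])

        sync, async_ = run(False), run(True)
        print("max |async - sync| after", nt, "steps:", np.abs(async_ - sync).max())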

  1. Analysis and trends of precipitation lapse rate and extreme indices over north Sikkim eastern Himalayas under CMIP5ESM-2M RCPs experiments

    NASA Astrophysics Data System (ADS)

    Singh, Vishal; Goyal, Manish Kumar

    2016-01-01

    This paper draws attention to the spatial and temporal variability in the precipitation lapse rate (PLR) and precipitation extreme indices (PEIs) through a mesoscale characterization of the Teesta river catchment, which corresponds to the north Sikkim eastern Himalayas. The PLR is an important variable for snowmelt runoff models. In a mountainous region, the PLR can vary from the lower-elevation parts to the high-elevation parts. In this study, the PLR was computed by accounting for elevation differences, which range from around 1500 m to 7000 m. Precipitation variability and extremity were analysed using multiple mathematical functions, viz. quantile regression, spatial mean, spatial standard deviation, the Mann-Kendall test and Sen's estimator. Daily precipitation was used for the historical period (1980-2005), as measured/observed gridded points, and for projected experiments for the 21st century (2006-2100) simulated by the CMIP5 ESM-2M model (Coupled Model Intercomparison Project Phase 5 Earth System Model 2) employing three different radiative forcing scenarios (Representative Concentration Pathways). The outcomes of this study suggest that the PLR varies significantly from the lower-elevation to the high-elevation parts. The PEI-based analysis showed that extreme high-intensity events increase significantly, especially after the 2040s. The PEI-based observations also showed that the number of wet days increases for all the RCPs. The quantile regression plots showed significant increments in the upper and lower quantiles of the various extreme indices. The Mann-Kendall and Sen's estimator tests clearly indicated significant changing patterns in the frequency and intensity of the precipitation indices across all the sub-basins and RCP scenarios in an intra-decadal time series domain. RCP8.5 showed the greatest extremity in the projected outcomes.
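
    A hedged sketch of the two trend diagnostics named above, the Mann-Kendall test (normal approximation, no tie correction) and Sen's slope estimator, applied to a synthetic annual series of an extreme-precipitation index:

        import numpy as np
        from scipy.stats import norm

        def mann_kendall(x):
            """Mann-Kendall S statistic, normal-approximation Z, and two-sided p-value."""
            n = len(x)
            s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
            var_s = n * (n - 1) * (2 * n + 5) / 18.0          # no tie correction in this sketch
            z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
            return s, z, 2 * (1 - norm.cdf(abs(z)))

        def sens_slope(x):
            """Median of all pairwise slopes (Sen's estimator)."""
            n = len(x)
            slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
            return np.median(slopes)

        rng = np.random.default_rng(0)
        years = np.arange(2006, 2101)
        index = 20 + 0.05 * (years - years[0]) + rng.normal(0, 2, size=years.size)  # synthetic index series

        s, z, p = mann_kendall(index)
        print(f"Mann-Kendall S={s:.0f}, Z={z:.2f}, p={p:.4f}; Sen's slope={sens_slope(index):.3f} per year")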

  2. A high-speed linear algebra library with automatic parallelism

    NASA Technical Reports Server (NTRS)

    Boucher, Michael L.

    1994-01-01

    Parallel or distributed processing is key to getting the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely limited, even though there are numerous computationally demanding programs that would significantly benefit from parallel processing. This paper describes DSSLIB, which is a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.

  3. Science & Technology Review September/October 2008

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bearinger, J P

    2008-07-21

    This issue has the following articles: (1) Answering Scientists' Most Audacious Questions--Commentary by Dona Crawford; (2) Testing the Accuracy of the Supernova Yardstick--High-resolution simulations are advancing understanding of Type Ia supernovae to help uncover the mysteries of dark energy; (3) Developing New Drugs and Personalized Medical Treatment--Accelerator mass spectrometry is emerging as an essential tool for assessing the effects of drugs in humans; (4) Triage in a Patch--A painless skin patch and accompanying detector can quickly indicate human exposure to biological pathogens, chemicals, explosives, or radiation; and (5) Smoothing Out Defects for Extreme Ultraviolet Lithography--A process for smoothing mask defects helps move extreme ultraviolet lithography one step closer to creating smaller, more powerful computer chips.

  4. Content range and precision of a computer adaptive test of upper extremity function for children with cerebral palsy.

    PubMed

    Montpetit, Kathleen; Haley, Stephen; Bilodeau, Nathalie; Ni, Pengsheng; Tian, Feng; Gorton, George; Mulcahey, M J

    2011-02-01

    This article reports on the content range and measurement precision of an upper extremity (UE) computer adaptive testing (CAT) platform of physical function in children with cerebral palsy. Upper extremity items representing skills of all abilities were administered to 305 parents. These responses were compared with two traditional standardized measures: the Pediatric Outcomes Data Collection Instrument and the Functional Independence Measure for Children. The UE CAT correlated strongly with the upper extremity component of these measures and had greater precision when describing individual functional ability. The UE item bank has a wider range, with items populating the lower end of the ability spectrum. This new UE item bank and CAT have the capability to quickly assess children of all ages and abilities with good precision and, most importantly, with items that are meaningful and appropriate for their age and level of physical function.

  5. Pressure profiles of the BRing based on the simulation used in the CSRm

    NASA Astrophysics Data System (ADS)

    Wang, J. C.; Li, P.; Yang, J. C.; Yuan, Y. J.; Wu, B.; Chai, Z.; Luo, C.; Dong, Z. Q.; Zheng, W. H.; Zhao, H.; Ruan, S.; Wang, G.; Liu, J.; Chen, X.; Wang, K. D.; Qin, Z. M.; Yin, B.

    2017-07-01

    HIAF-BRing, a new multipurpose accelerator of the High Intensity heavy-ion Accelerator Facility project, requires an extremely high vacuum, lower than 10^-11 mbar, to fulfill the requirements of radioactive beam physics and high energy density physics. To achieve the required process pressure, the benchmarked codes VAKTRAK and Molflow+ are used to simulate the pressure profiles of the BRing system. In order to ensure the accuracy of the VAKTRAK implementation, the computational results are verified against measured pressure data and compared with a new simulation code, BOLIDE, on the current synchrotron CSRm. With VAKTRAK verified, the pressure profiles of the BRing are calculated with different parameters such as conductance, outgassing rates and pumping speeds. According to the computational results, the optimal parameters are selected to achieve the required pressure for the BRing.

  6. Openwebglobe 2: Visualization of Complex 3D-GEODATA in the (mobile) Webbrowser

    NASA Astrophysics Data System (ADS)

    Christen, M.

    2016-06-01

    Providing worldwide high resolution data for virtual globes involves compute- and storage-intensive data processing tasks. Furthermore, rendering complex 3D geodata, such as 3D city models with an extremely high polygon count and a vast number of textures, at interactive framerates is still a very challenging task, especially on mobile devices. This paper presents an approach for processing, caching and serving massive geospatial data in a cloud-based environment for large-scale, out-of-core, highly scalable 3D scene rendering on a web-based virtual globe. Cloud computing is used for processing large amounts of geospatial data and also for providing 2D and 3D map data to a large number of (mobile) web clients. In this paper the approach for processing, rendering and caching very large datasets in the currently developed virtual globe "OpenWebGlobe 2" is shown, which displays 3D geodata on nearly every device.

  7. Host-Guest Complexes with Protein-Ligand-Like Affinities: Computational Analysis and Design

    PubMed Central

    Moghaddam, Sarvin; Inoue, Yoshihisa

    2009-01-01

    It has recently been discovered that guests combining a nonpolar core with cationic substituents bind cucurbit[7]uril (CB[7]) in water with ultra-high affinities. The present study uses the Mining Minima algorithm to study the physics of these extraordinary associations and to computationally test a new series of CB[7] ligands designed to bind with similarly high affinity. The calculations reproduce key experimental observations regarding the affinities of ferrocene-based guests with CB[7] and β-cyclodextrin and provide a coherent view of the roles of electrostatics and configurational entropy as determinants of affinity in these systems. The newly designed series of compounds is based on a bicyclo[2.2.2]octane core, which is similar in size and polarity to the ferrocene core of the existing series. Mining Minima predicts that these new compounds will, like the ferrocenes, bind CB[7] with extremely high affinities. PMID:19133781

  8. The Influence of Extremely Large Solar Proton Events in a Changing Stratosphere. Stratospheric Influence of Solar Proton Events

    NASA Technical Reports Server (NTRS)

    Jackman, Charles H.; Fleming, Eric L.; Vitt, Francis M.

    1999-01-01

    Two periods of extremely large solar proton events (SPEs) occurred in the past thirty years, which forced significant long-term polar stratospheric changes. The August 2-10, 1972 and October 19-27, 1989 SPEs happened in stratospheres that were quite different chemically. The stratospheric chlorine levels were relatively small in 1972 (approximately 1.2 ppbv) and fairly substantial in 1989 (approximately 3 ppbv). Although these SPEs produced both HO(x) and NO(y) constituents in the mesosphere and stratosphere, only the NO(y) constituents had lifetimes long enough to affect ozone for several months to years past the events. Our recently improved two-dimensional chemistry and transport atmospheric model was used to compute the effects of these gigantic SPEs in a changing stratosphere. Significant upper stratospheric ozone depletions (> 10%) are computed to last for a few months past these SPEs. The long-lived SPE-produced NO(y) constituents were transported to lower levels during winter after these huge SPEs and caused impacts in the middle and lower stratosphere. During periods of high halogen loading these impacts resulted in interference with the chlorine and bromine loss cycles for ozone destruction. The chemical state of the atmosphere, including the stratospheric sulfate aerosol density, substantially affected the predicted stratospheric influence of these extremely large SPEs.

  9. Study of high speed complex number algorithms [for determining antenna far-field radiation patterns]

    NASA Technical Reports Server (NTRS)

    Heisler, R.

    1981-01-01

    A method of evaluating the radiation integral on the curved surface of a reflecting antenna is presented. A three-dimensional Fourier transform approach is used to generate a two-dimensional radiation cross-section along a planar cut at any angle phi through the far-field pattern. Salient to the method is an algorithm for evaluating a subset of the total three-dimensional discrete Fourier transform results. The subset elements are selectively evaluated to yield data along a geometric plane of constant phi. The algorithm is extremely efficient, so that computation of the induced surface currents via the physical optics approximation dominates the computer time required to compute a radiation pattern. Application to paraboloid reflectors with off-focus feeds is presented, but the method is easily extended to offset antenna systems and reflectors of arbitrary shape. Numerical results were computed for both gain and phase and are compared with other published work.

  10. Thermodynamics of Computational Copying in Biochemical Systems

    NASA Astrophysics Data System (ADS)

    Ouldridge, Thomas E.; Govern, Christopher C.; ten Wolde, Pieter Rein

    2017-04-01

    Living cells use readout molecules to record the state of receptor proteins, similar to measurements or copies in typical computational devices. But is this analogy rigorous? Can cells be optimally efficient, and if not, why? We show that, as in computation, a canonical biochemical readout network generates correlations; extracting no work from these correlations sets a lower bound on dissipation. For general input, the biochemical network cannot reach this bound, even with arbitrarily slow reactions or weak thermodynamic driving. It faces an accuracy-dissipation trade-off that is qualitatively distinct from and worse than implied by the bound, and more complex steady-state copy processes cannot perform better. Nonetheless, the cost remains close to the thermodynamic bound unless accuracy is extremely high. Additionally, we show that biomolecular reactions could be used in thermodynamically optimal devices under exogenous manipulation of chemical fuels, suggesting an experimental system for testing computational thermodynamics.

  11. Experimental determination of Ramsey numbers.

    PubMed

    Bian, Zhengbing; Chudak, Fabian; Macready, William G; Clark, Lane; Gaitan, Frank

    2013-09-27

    Ramsey theory is a highly active research area in mathematics that studies the emergence of order in large disordered structures. Ramsey numbers mark the threshold at which order first appears and are extremely difficult to calculate due to their explosive rate of growth. Recently, an algorithm that can be implemented using adiabatic quantum evolution has been proposed that calculates the two-color Ramsey numbers R(m,n). Here we present results of an experimental implementation of this algorithm and show that it correctly determines the Ramsey numbers R(3,3) and R(m,2) for 4≤m≤8. The R(8,2) computation used 84 qubits of which 28 were computational qubits. This computation is the largest experimental implementation of a scientifically meaningful adiabatic evolution algorithm that has been done to date.
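
    As a side note, the value R(3,3) = 6 can also be checked classically. The following minimal Python sketch (purely illustrative, not the adiabatic quantum algorithm discussed above) searches the 2^10 two-colorings of the edges of K5 and finds one with no monochromatic triangle, showing R(3,3) > 5; the pigeonhole argument on K6 then gives R(3,3) = 6.

        # Illustrative classical check, not the adiabatic algorithm above:
        # find a 2-coloring of the edges of K5 with no monochromatic triangle,
        # which shows R(3,3) > 5.
        from itertools import combinations, product

        vertices = range(5)
        edges = list(combinations(vertices, 2))       # the 10 edges of K5
        triangles = list(combinations(vertices, 3))   # the 10 triangles of K5

        def has_mono_triangle(coloring):
            color = dict(zip(edges, coloring))
            return any(color[(a, b)] == color[(a, c)] == color[(b, c)]
                       for a, b, c in triangles)

        good = next(c for c in product((0, 1), repeat=len(edges))
                    if not has_mono_triangle(c))
        print("K5 edge coloring with no monochromatic triangle:", good)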

  12. Experimental Determination of Ramsey Numbers

    NASA Astrophysics Data System (ADS)

    Bian, Zhengbing; Chudak, Fabian; Macready, William G.; Clark, Lane; Gaitan, Frank

    2013-09-01

    Ramsey theory is a highly active research area in mathematics that studies the emergence of order in large disordered structures. Ramsey numbers mark the threshold at which order first appears and are extremely difficult to calculate due to their explosive rate of growth. Recently, an algorithm that can be implemented using adiabatic quantum evolution has been proposed that calculates the two-color Ramsey numbers R(m,n). Here we present results of an experimental implementation of this algorithm and show that it correctly determines the Ramsey numbers R(3,3) and R(m,2) for 4≤m≤8. The R(8,2) computation used 84 qubits of which 28 were computational qubits. This computation is the largest experimental implementation of a scientifically meaningful adiabatic evolution algorithm that has been done to date.

  13. Mapping snow depth return levels: smooth spatial modeling versus station interpolation

    NASA Astrophysics Data System (ADS)

    Blanchet, J.; Lehning, M.

    2010-12-01

    For adequate risk management in mountainous countries, hazard maps for extreme snow events are needed. This requires the computation of spatial estimates of return levels. In this article we use recent developments in extreme value theory and compare two main approaches for mapping snow depth return levels from in situ measurements. The first one is based on the spatial interpolation of pointwise extremal distributions (the so-called Generalized Extreme Value distribution, GEV henceforth) computed at station locations. The second one is new and based on the direct estimation of a spatially smooth GEV distribution with the joint use of all stations. We compare and validate the different approaches for modeling annual maximum snow depth measured at 100 sites in Switzerland during winters 1965-1966 to 2007-2008. The results show a better performance of the smooth GEV distribution fitting, in particular where the station network is sparser. Smooth return level maps can be computed from the fitted model without any further interpolation. Their regional variability can be revealed by removing the altitudinal dependent covariates in the model. We show how return levels and their regional variability are linked to the main climatological patterns of Switzerland.
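
    A minimal sketch of the first (station-wise) approach, using synthetic annual maxima and scipy's genextreme in place of the study's spatial GEV model with altitude covariates:

        # Fit a GEV distribution to annual maximum snow depths at one station
        # and compute return levels.  The data are synthetic stand-ins.
        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(0)
        annual_maxima = genextreme.rvs(c=-0.1, loc=120.0, scale=40.0,
                                       size=43, random_state=rng)   # cm, 43 winters

        shape, loc, scale = genextreme.fit(annual_maxima)

        def return_level(T):
            """Snow depth exceeded on average once every T years."""
            return genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)

        for T in (10, 50, 100):
            print(f"{T:4d}-year return level: {return_level(T):6.1f} cm")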

  14. Kinematic and Kinetic Profiles of Trunk and Lower Limbs during Baseball Pitching in Collegiate Pitchers

    PubMed Central

    Kageyama, Masahiro; Sugiyama, Takashi; Takai, Yohei; Kanehisa, Hiroaki; Maeda, Akira

    2014-01-01

    The purpose of this study was to clarify differences in the kinematic and kinetic profiles of the trunk and lower extremities during baseball pitching in collegiate baseball pitchers, in relation to differences in the pitched ball velocity. The subjects were 30 collegiate baseball pitchers aged 18 to 22 yrs, who were assigned to high- (HG, 37.4 ± 0.8 m·s⁻¹) and low-pitched-ball-velocity groups (LG, 33.3 ± 0.8 m·s⁻¹). Three-dimensional motion analysis with a comprehensive lower-extremity model was used to evaluate kinematic and kinetic parameters during baseball pitching. The ground-reaction forces (GRF) of the pivot and stride legs during pitching were determined using two multicomponent force plates. The joint torques of hip, knee, and ankle were calculated using inverse-dynamics computation of a musculoskeletal human model. To eliminate any effect of variation in body size, kinetic and GRF data were normalized by dividing them by body mass. The maxima and minima of GRF (Fy, Fz, and resultant forces) on the pivot and stride leg were significantly greater in the HG than in the LG (p < 0.05). Furthermore, Fy, Fz, and resultant forces on the stride leg at maximum shoulder external rotation and ball release were significantly greater in the HG than in the LG (p < 0.05). The hip abduction, hip internal rotation and knee extension torques of the pivot leg and the hip adduction torque of the stride leg when it contacted the ground were significantly greater in the HG than in the LG (p < 0.05). These results indicate that, compared with low-ball-velocity pitchers, high-ball-velocity pitchers can generate greater momentum of the lower limbs during baseball pitching. Key points: High-ball-velocity pitchers are characterized by greater momentum of the lower limbs during pitching motion. For high pitched-ball velocity, stabilizing the lower limbs during pitching plays an important role in increasing the rotation and forward motion of the trunk. Computation of the lower-extremity kinetics and measurement of lower-extremity strength may help clarify the role of muscle strength in determining knee and hip function in baseball pitching. PMID:25435765

  15. Study of Volumetrically Heated Ultra-High Energy Density Plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rocca, Jorge J.

    2016-10-27

    Heating dense matter to millions of degrees is important for applications, but requires complex and expensive methods. The major goal of the project was to demonstrate, using a compact laser, the creation of a new ultra-high energy density plasma regime characterized by simultaneously extremely high temperature and high density, and to study it by combining experimental measurements and advanced simulations. We have demonstrated that trapping of intense femtosecond laser pulses deep within ordered nanowire arrays can heat near-solid-density matter into a new ultra-hot plasma regime. Extreme electron densities, and temperatures of several tens of millions of degrees, were achieved using laser pulses of only 0.5 J energy from a compact laser. Our x-ray spectra and simulations showed that extremely highly ionized plasma volumes several micrometers in depth are generated by irradiation of gold and nickel nanowire arrays with femtosecond laser pulses of relativistic intensities. We obtained extraordinarily high degrees of ionization (e.g. we peeled 52 electrons from gold atoms, and up to 26 electrons from nickel atoms) and generated gigabar pressures only exceeded in the central hot spot of highly compressed thermonuclear fusion plasmas. The plasma created after the dissolved wires expand, collide, and thermalize is computed to have a thermal energy density of 0.3 GJ cm⁻³ and a pressure of 1-2 gigabar, values only exceeded in highly compressed thermonuclear fusion plasmas. Scaling these results to higher laser intensities promises to create plasmas with temperatures and pressures exceeding those in the center of the sun.

  16. Effects of precision demands and mental pressure on muscle activation and hand forces in computer mouse tasks.

    PubMed

    Visser, Bart; De Looze, Michiel; De Graaff, Matthijs; Van Dieën, Jaap

    2004-02-05

    The objective of the present study was to gain insight into the effects of precision demands and mental pressure on the load of the upper extremity. Two computer mouse tasks were used: an aiming and a tracking task. Upper extremity loading was operationalized as the myo-electric activity of the wrist flexor and extensor and of the trapezius descendens muscles and the applied grip- and click-forces on the computer mouse. Performance measures, reflecting the accuracy in both tasks and the clicking rate in the aiming task, indicated that the levels of the independent variables resulted in distinguishable levels of accuracy and work pace. Precision demands had a small effect on upper extremity loading with a significant increase in the EMG-amplitudes (21%) of the wrist flexors during the aiming tasks. Precision had large effects on performance. Mental pressure had substantial effects on EMG-amplitudes with an increase of 22% in the trapezius when tracking and increases of 41% in the trapezius and 45% and 140% in the wrist extensors and flexors, respectively, when aiming. During aiming, grip- and click-forces increased by 51% and 40% respectively. Mental pressure had small effects on accuracy but large effects on tempo during aiming. Precision demands and mental pressure in aiming and tracking tasks with a computer mouse were found to coincide with increased muscle activity in some upper extremity muscles and increased force exertion on the computer mouse. Mental pressure caused significant effects on these parameters more often than precision demands. Precision and mental pressure were found to have effects on performance, with precision effects being significant for all performance measures studied and mental pressure effects for some of them. The results of this study suggest that precision demands and mental pressure increase upper extremity load, with mental pressure effects being larger than precision effects. The possible role of precision demands as an indirect mental stressor in working conditions is discussed.

  17. Towards spatially constrained gust models

    NASA Astrophysics Data System (ADS)

    Bos, René; Bierbooms, Wim; van Bussel, Gerard

    2014-06-01

    With the trend towards 10-20 MW turbines, rotor diameters are growing beyond the size of the largest turbulent structures in the atmospheric boundary layer. As a consequence, the fully uniform transients that are commonly used to predict extreme gust loads are losing their connection to reality and may lead to gross overdimensioning. More suitable would be to represent gusts as advected air parcels subject to physical constraints on size and position. However, this would introduce several new degrees of freedom that significantly increase the computational burden of extreme load prediction. In an attempt to elaborate on the costs and benefits of such an approach, load calculations were done on the DTU 10 MW reference turbine, where a single uniform gust shape was given various spatial dimensions with the transverse wavelength ranging up to twice the rotor diameter (357 m). The resulting loads displayed a very high spread, but remained well under the level of a uniform gust. Moving towards spatially constrained gust models would therefore yield far less conservative, though more realistic, predictions at the cost of higher computation time.

  18. Fast neural network surrogates for very high dimensional physics-based models in computational oceanography.

    PubMed

    van der Merwe, Rudolph; Leen, Todd K; Lu, Zhengdong; Frolov, Sergey; Baptista, Antonio M

    2007-05-01

    We present neural network surrogates that provide extremely fast and accurate emulation of a large-scale circulation model for the coupled Columbia River, its estuary and near ocean regions. The circulation model has O(10⁷) degrees of freedom, is highly nonlinear and is driven by ocean, atmospheric and river influences at its boundaries. The surrogates provide accurate emulation of the full circulation code and run over 1000 times faster. Such fast dynamic surrogates will enable significant advances in ensemble forecasts in oceanography and weather.
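
    The general surrogate idea can be sketched in a few lines: train a small network on input/output pairs generated by the expensive model, then use the network as a fast stand-in. The "expensive_model" below is a toy placeholder, not the Columbia River circulation code.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def expensive_model(forcing):
            # stand-in for a costly simulation: forcing -> scalar response
            return np.sin(forcing[:, 0]) * np.exp(-forcing[:, 1] ** 2)

        rng = np.random.default_rng(1)
        X_train = rng.uniform(-2, 2, size=(2000, 2))   # e.g. river and ocean forcing
        y_train = expensive_model(X_train)

        surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                                 random_state=0).fit(X_train, y_train)

        X_new = rng.uniform(-2, 2, size=(5, 2))
        print("surrogate:", surrogate.predict(X_new))
        print("model    :", expensive_model(X_new))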

  19. Message Passing and Shared Address Space Parallelism on an SMP Cluster

    NASA Technical Reports Server (NTRS)

    Shan, Hongzhang; Singh, Jaswinder P.; Oliker, Leonid; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Currently, message passing (MP) and shared address space (SAS) are the two leading parallel programming paradigms. MP has been standardized with MPI, and is the more common and mature approach; however, code development can be extremely difficult, especially for irregularly structured computations. SAS offers substantial ease of programming, but may suffer from performance limitations due to poor spatial locality and high protocol overhead. In this paper, we compare the performance of and the programming effort required for six applications under both programming models on a 32-processor PC-SMP cluster, a platform that is becoming increasingly attractive for high-end scientific computing. Our application suite consists of codes that typically do not exhibit scalable performance under shared-memory programming due to their high communication-to-computation ratios and/or complex communication patterns. Results indicate that SAS can achieve about half the parallel efficiency of MPI for most of our applications, while being competitive for the others. A hybrid MPI+SAS strategy shows only a small performance advantage over pure MPI in some cases. Finally, improved implementations of two MPI collective operations on PC-SMP clusters are presented.

  20. Phase transformation in tantalum under extreme laser deformation

    DOE PAGES

    Lu, C. -H.; Hahn, E. N.; Remington, B. A.; ...

    2015-10-19

    The structural and mechanical response of metals is intimately connected to phase transformations. For instance, the product of a phase transformation (martensite) is responsible for the extraordinary range of strength and toughness of steel, making it a versatile and important structural material. Although abundant in metals and alloys, the discovery of new phase transformations is not currently a common event and often requires a mix of experimentation, predictive computations, and luck. High-energy pulsed lasers enable the exploration of extreme pressures and temperatures, where such discoveries may lie. The formation of a hexagonal (omega) phase was observed in recovered monocrystalline body-centered cubic tantalum of four crystallographic orientations subjected to an extreme regime of pressure, temperature, and strain-rate. This was accomplished using high-energy pulsed lasers. The omega phase and twinning were identified by transmission electron microscopy at 70 GPa (determined by a corresponding VISAR experiment). It is proposed that the shear stresses generated by the uniaxial strain state of shock compression play an essential role in the transformation. In conclusion, molecular dynamics simulations show the transformation of small nodules from body-centered cubic to a hexagonal close-packed structure under the same stress state (pressure and shear).

  1. Phase Transformation in Tantalum under Extreme Laser Deformation

    PubMed Central

    Lu, C.-H.; Hahn, E. N.; Remington, B. A.; Maddox, B. R.; Bringa, E. M.; Meyers, M. A.

    2015-01-01

    The structural and mechanical response of metals is intimately connected to phase transformations. For instance, the product of a phase transformation (martensite) is responsible for the extraordinary range of strength and toughness of steel, making it a versatile and important structural material. Although abundant in metals and alloys, the discovery of new phase transformations is not currently a common event and often requires a mix of experimentation, predictive computations, and luck. High-energy pulsed lasers enable the exploration of extreme pressures and temperatures, where such discoveries may lie. The formation of a hexagonal (omega) phase was observed in recovered monocrystalline body-centered cubic tantalum of four crystallographic orientations subjected to an extreme regime of pressure, temperature, and strain-rate. This was accomplished using high-energy pulsed lasers. The omega phase and twinning were identified by transmission electron microscopy at 70 GPa (determined by a corresponding VISAR experiment). It is proposed that the shear stresses generated by the uniaxial strain state of shock compression play an essential role in the transformation. Molecular dynamics simulations show the transformation of small nodules from body-centered cubic to a hexagonal close-packed structure under the same stress state (pressure and shear). PMID:26478106

  2. [Process strategy for ethanol production from lignocellulose feedstock under extremely low water usage and high solids loading conditions].

    PubMed

    Zhang, Jian; Chu, Deqiang; Yu, Zhanchun; Zhang, Xiaoxi; Deng, Hongbo; Wang, Xiusheng; Zhu, Zhinan; Zhang, Huaiqing; Dai, Gance; Bao, Jie

    2010-07-01

    Massive amounts of water and steam are consumed in the production of cellulosic ethanol, which correspondingly increases energy cost, wastewater discharge, and production cost. In this study, a process strategy using extremely low water usage and high solids loading of corn stover was investigated experimentally and computationally. A novel pretreatment technology with zero wastewater discharge was developed, in which a unique biodetoxification method using the kerosene fungus strain Amorphotheca resinae ZN1 was applied to degrade the lignocellulose-derived inhibitors. With high solids loading of pretreated corn stover, a high ethanol titer was achieved in the simultaneous saccharification and fermentation process, and the scale-up principles were studied. Furthermore, a flowsheet simulation of the whole process was carried out with the Aspen Plus-based physical database, and the integrated process was tested in a biorefinery mini-plant. Finally, the core technologies were applied in a cellulosic ethanol demonstration plant, paving the way for an energy-saving and environmentally friendly lignocellulose biotransformation technology with industrial application potential.

  3. Identifying Heat Waves in Florida: Considerations of Missing Weather Data

    PubMed Central

    Leary, Emily; Young, Linda J.; DuClos, Chris; Jordan, Melissa M.

    2015-01-01

    Background: Using current climate models, regional-scale changes for Florida over the next 100 years are predicted to include warming over terrestrial areas and very likely increases in the number of high temperature extremes. No uniform definition of a heat wave exists. Most past research on heat waves has focused on evaluating the aftermath of known heat waves, with minimal consideration of missing exposure information. Objectives: To identify and discuss methods of handling and imputing missing weather data and how those methods can affect identified periods of extreme heat in Florida. Methods: In addition to ignoring missing data, temporal, spatial, and spatio-temporal models are described and utilized to impute missing historical weather data from 1973 to 2012 from 43 Florida weather monitors. Calculated thresholds are used to define periods of extreme heat across Florida. Results: Modeling of missing data and imputing missing values can affect the identified periods of extreme heat, through the missing data itself or through the computed thresholds. The differences observed are related to the amount of missingness during June, July, and August, the warmest months of the warm season (April through September). Conclusions: Missing data considerations are important when defining periods of extreme heat. Spatio-temporal methods are recommended for data imputation. A heat wave definition that incorporates information from all monitors is advised. PMID:26619198

  4. Identifying Heat Waves in Florida: Considerations of Missing Weather Data.

    PubMed

    Leary, Emily; Young, Linda J; DuClos, Chris; Jordan, Melissa M

    2015-01-01

    Using current climate models, regional-scale changes for Florida over the next 100 years are predicted to include warming over terrestrial areas and very likely increases in the number of high temperature extremes. No uniform definition of a heat wave exists. Most past research on heat waves has focused on evaluating the aftermath of known heat waves, with minimal consideration of missing exposure information. To identify and discuss methods of handling and imputing missing weather data and how those methods can affect identified periods of extreme heat in Florida. In addition to ignoring missing data, temporal, spatial, and spatio-temporal models are described and utilized to impute missing historical weather data from 1973 to 2012 from 43 Florida weather monitors. Calculated thresholds are used to define periods of extreme heat across Florida. Modeling of missing data and imputing missing values can affect the identified periods of extreme heat, through the missing data itself or through the computed thresholds. The differences observed are related to the amount of missingness during June, July, and August, the warmest months of the warm season (April through September). Missing data considerations are important when defining periods of extreme heat. Spatio-temporal methods are recommended for data imputation. A heat wave definition that incorporates information from all monitors is advised.

  5. Extreme value modelling of Ghana stock exchange index.

    PubMed

    Nortey, Ezekiel N N; Asare, Kwabena; Mettle, Felix Okoe

    2015-01-01

    Modelling of extreme events has always been of interest in fields such as hydrology and meteorology. However, after the recent global financial crises, appropriate models for modelling of such rare events leading to these crises have become quite essential in the finance and risk management fields. This paper models the extreme values of the Ghana stock exchange all-shares index (2000-2010) by applying the extreme value theory (EVT) to fit a model to the tails of the daily stock returns data. A conditional approach of the EVT was preferred and hence an ARMA-GARCH model was fitted to the data to correct for the effects of autocorrelation and conditional heteroscedastic terms present in the returns series, before the EVT method was applied. The Peak Over Threshold approach of the EVT, which fits a Generalized Pareto Distribution (GPD) model to excesses above a certain selected threshold, was employed. Maximum likelihood estimates of the model parameters were obtained and the model's goodness of fit was assessed graphically using Q-Q, P-P and density plots. The findings indicate that the GPD provides an adequate fit to the data of excesses. The size of the extreme daily Ghanaian stock market movements was then computed using the value at risk and expected shortfall risk measures at some high quantiles, based on the fitted GPD model.
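
    A minimal sketch of the Peaks-Over-Threshold step on synthetic daily losses (the ARMA-GARCH pre-filtering used in the paper is omitted): fit a GPD to exceedances over a high threshold, then evaluate value at risk and expected shortfall with the standard POT formulas.

        import numpy as np
        from scipy.stats import genpareto, t

        rng = np.random.default_rng(2)
        losses = -t.rvs(df=4, scale=0.01, size=2500, random_state=rng)   # synthetic daily losses

        u = np.quantile(losses, 0.95)                   # high threshold
        excesses = losses[losses > u] - u
        xi, _, beta = genpareto.fit(excesses, floc=0)   # shape and scale (loc fixed at 0)
        p_u = np.mean(losses > u)                       # exceedance probability

        def var_es(q):
            """VaR and ES at confidence level q under the fitted GPD (valid for xi < 1)."""
            var = u + (beta / xi) * (((1 - q) / p_u) ** (-xi) - 1)
            es = var / (1 - xi) + (beta - xi * u) / (1 - xi)
            return var, es

        print("99% VaR and ES:", var_es(0.99))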

  6. Porting Extremely Lightweight Intrusion Detection (ELIDe) to Android

    DTIC Science & Technology

    2015-10-01

    ARL-TN-0681, October 2015, US Army Research Laboratory. Porting Extremely Lightweight Intrusion Detection (ELIDe) to Android, by Ken F Yu and Garret S Payer, Computational and Information Sciences Directorate, ARL.

  7. Final Report. Institute for Ultrascale Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Kwan-Liu; Galli, Giulia; Gygi, Francois

    The SciDAC Institute for Ultrascale Visualization brought together leading experts from visualization, high-performance computing, and science application areas to make advanced visualization solutions for SciDAC scientists and the broader community. Over the five-year project, the Institute introduced many new enabling visualization techniques, which have significantly enhanced scientists’ ability to validate their simulations, interpret their data, and communicate with others about their work and findings. This Institute project involved a large number of junior and student researchers, who received the opportunities to work on some of the most challenging science applications and gain access to the most powerful high-performance computing facilities in the world. They were readily trained and prepared for facing the greater challenges presented by extreme-scale computing. The Institute’s outreach efforts, through publications, workshops and tutorials, successfully disseminated the new knowledge and technologies to the SciDAC and the broader scientific communities. The scientific findings and experience of the Institute team helped plan the SciDAC3 program.

  8. How wearable technologies will impact the future of health care.

    PubMed

    Barnard, Rick; Shea, J Timothy

    2004-01-01

    After four hundred years of delivering health care in hospitals, industrialized countries are now shifting towards treating patients at the "point of need". This trend will likely accelerate demand for, and adoption of, wearable computing and smart fabric and interactive textile (SFIT) solutions. These healthcare solutions will be designed to provide real-time vital and diagnostic information to health care providers, patients, and related stakeholders in such a manner as to improve quality of care, reduce the cost of care, and allow patients greater control over their own health. The current market size for wearable computing and SFIT solutions is modest; however, the future outlook is extremely strong. Venture Development Corporation, a technology market research and strategy firm, was founded in 1971. Over the years, VDC has developed and implemented a unique and highly successful methodology for forecasting and analyzing highly dynamic technology markets. VDC has extensive experience in providing multi-client and proprietary analysis in the electronic components, advanced materials, and mobile computing markets.

  9. Predictive characterization of aging and degradation of reactor materials in extreme environments. Final report, December 20, 2013 - September 20, 2017

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qu, Jianmin

    Understanding of reactor material behavior in extreme environments is vital not only to the development of new materials for the next generation of nuclear reactors, but also to the extension of the operating lifetimes of the current fleet of nuclear reactors. To this end, this project conducted a suite of unique experimental techniques, augmented by a mesoscale computational framework, to understand and predict the long-term effects of irradiation, temperature, and stress on material microstructures and their macroscopic behavior. The experimental techniques and computational tools were demonstrated on two distinctive types of reactor materials, namely, Zr alloys and high-Cr martensitic steels. These materials are chosen as the test beds because they are the archetypes of high-performance reactor materials (cladding, wrappers, ducts, pressure vessel, piping, etc.). To fill the knowledge gaps, and to meet the technology needs, a suite of innovative in situ transmission electron microscopy (TEM) characterization techniques (heating, heavy ion irradiation, He implantation, quantitative small-scale mechanical testing, and various combinations thereof) were developed and used to elucidate and map the fundamental mechanisms of microstructure evolution in both Zr and Cr alloys for a wide range of environmental boundary conditions in the thermal-mechanical-irradiation input space. Knowledge gained from the experimental observations of the active mechanisms and the role of local microstructural defects on the response of the material has been incorporated into a mathematically rigorous and comprehensive three-dimensional mesoscale framework capable of accounting for the compositional variation, microstructural evolution and localized deformation (radiation damage) to predict aging and degradation of key reactor materials operating in extreme environments. Predictions from this mesoscale framework were compared with the in situ TEM observations to validate the model.

  10. The effect of over-commitment and reward on trapezius muscle activity and shoulder, head, neck, and torso postures during computer use in the field

    PubMed Central

    Bruno Garza, Jennifer L.; Eijckelhof, Belinda H.W.; Huysmans, Maaike A.; Catalano, Paul J.; Katz, Jeffrey N.; Johnson, Peter W.; van Dieen, Jaap H.; van der Beek, Allard J.; Dennerlein, Jack T.

    2015-01-01

    Background: Because of reported associations of psychosocial factors and computer-related musculoskeletal symptoms, we investigated the effects of a workplace psychosocial factor, reward, in the presence of over-commitment, on trapezius muscle activity and shoulder, head, neck, and torso postures during computer use. Methods: We measured 120 office workers across four groups (lowest/highest reward/over-commitment), performing their own computer work at their own workstations over a 2-hour period. Results: Median trapezius muscle activity (p=0.04) and median neck flexion (p=0.03) were largest for participants reporting simultaneously low reward and high over-commitment. No differences were observed for other muscle activities or postures. Conclusions: These data suggest that the interaction of reward and over-commitment can affect upper extremity muscle activity and postures during computer use in the real work environment. This finding aligns with the hypothesized biomechanical pathway connecting workplace psychosocial factors and musculoskeletal symptoms of the neck and shoulder. PMID:23818000

  11. Computational sciences in the upstream oil and gas industry

    PubMed Central

    Halsey, Thomas C.

    2016-01-01

    The predominant technical challenge of the upstream oil and gas industry has always been the fundamental uncertainty of the subsurface from which it produces hydrocarbon fluids. The subsurface can be detected remotely by, for example, seismic waves, or it can be penetrated and studied in the extremely limited vicinity of wells. Inevitably, a great deal of uncertainty remains. Computational sciences have been a key avenue to reduce and manage this uncertainty. In this review, we discuss at a relatively non-technical level the current state of three applications of computational sciences in the industry. The first of these is seismic imaging, which is currently being revolutionized by the emergence of full wavefield inversion, enabled by algorithmic advances and petascale computing. The second is reservoir simulation, also being advanced through the use of modern highly parallel computing architectures. Finally, we comment on the role of data analytics in the upstream industry. This article is part of the themed issue ‘Energy and the subsurface’. PMID:27597785

  12. Role of Water in the Selection of Stable Proteins at Ambient and Extreme Thermodynamic Conditions

    NASA Astrophysics Data System (ADS)

    Bianco, Valentino; Franzese, Giancarlo; Dellago, Christoph; Coluzza, Ivan

    2017-04-01

    Proteins that are functional at ambient conditions do not necessarily work at extreme conditions of temperature T and pressure P . Furthermore, there are limits of T and P above which no protein has a stable functional state. Here, we show that these limits and the selection mechanisms for working proteins depend on how the properties of the surrounding water change with T and P . We find that proteins selected at high T are superstable and are characterized by a nonextreme segregation of a hydrophilic surface and a hydrophobic core. Surprisingly, a larger segregation reduces the stability range in T and P . Our computer simulations, based on a new protein design protocol, explain the hydropathy profile of proteins as a consequence of a selection process influenced by water. Our results, potentially useful for engineering proteins and drugs working far from ambient conditions, offer an alternative rationale to the evolutionary action exerted by the environment in extreme conditions.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.
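
    The logarithmic scaling of gossip-based dissemination is easy to see in a toy simulation. The sketch below (an illustrative push-gossip model, not the specific algorithms of the paper) counts how many cycles it takes for knowledge of a failure to reach all processes.

        import random

        def gossip_cycles(n_procs, seed=0):
            random.seed(seed)
            informed = {0}                    # process 0 detects the failure
            cycles = 0
            while len(informed) < n_procs:
                # each informed process pushes the news to one random process
                targets = {random.randrange(n_procs) for _ in informed}
                informed |= targets
                cycles += 1
            return cycles

        for n in (64, 1024, 16384):
            print(f"{n:6d} processes: {gossip_cycles(n)} gossip cycles")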

  14. A Large-Scale Multi-Hop Localization Algorithm Based on Regularized Extreme Learning for Wireless Networks.

    PubMed

    Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan

    2017-12-20

    A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of traditional algorithms that are only applicable to isotropic networks, and therefore adapts well to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, the model relating the hop-counts and the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
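
    The modeling stage can be illustrated with a minimal regularized extreme learning machine: a random, fixed hidden layer plus ridge-regularized output weights mapping hop counts to physical distances. The data below are synthetic, and the data-acquisition and distributed location-estimation stages are not shown.

        import numpy as np

        rng = np.random.default_rng(3)
        hops = rng.integers(1, 15, size=(300, 1)).astype(float)      # hop counts between nodes
        dist = 25.0 * hops[:, 0] + rng.normal(0, 10.0, size=300)     # noisy physical distances

        n_hidden, lam = 50, 1e-2
        W = rng.normal(size=(1, n_hidden))    # random input weights (never trained)
        b = rng.normal(size=n_hidden)         # random biases (never trained)

        def hidden(X):
            return np.tanh(X @ W + b)         # random feature map

        H = hidden(hops)
        # ridge-regularized least squares for the output weights
        beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ dist)

        hops_new = np.array([[3.0], [8.0]])
        print("predicted distances:", hidden(hops_new) @ beta)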

  15. Ongoing climatic extreme dynamics in Siberia

    NASA Astrophysics Data System (ADS)

    Gordov, E. P.; Shulgina, T. M.; Okladnikov, I. G.; Titov, A. G.

    2013-12-01

    Ongoing global climate changes accompanied by the restructuring of global processes in the atmosphere and biosphere are strongly pronounced in the Northern Eurasia regions, especially in Siberia. Recent investigations indicate not only large changes in averaged climatic characteristics (Kabanov and Lykosov, 2006, IPCC, 2007; Groisman and Gutman, 2012), but more frequent occurrence and stronger impacts of climatic extremes are reported as well (Bulygina et al., 2007; IPCC, 2012: Climate Extremes, 2012; Oldenborh et al., 2013). This paper provides the results of daily temperature and precipitation extreme dynamics in Siberia for the last three decades (1979 - 2012). Their seasonal dynamics is assessed using 10th and 90th percentile-based threshold indices that characterize frequency, intensity and duration of climatic extremes. To obtain the geographical pattern of these variations with high spatial resolution, the sub-daily temperature data from ECMWF ERA-Interim reanalysis and daily precipitation amounts from APHRODITE JMA dataset were used. All extreme indices and linear trend coefficients have been calculated using web-GIS information-computational platform Climate (http://climate.scert.ru/) developed to support collaborative multidisciplinary investigations of regional climatic changes and their impacts (Gordov et al., 2012). Obtained results show that seasonal dynamics of daily temperature extremes is asymmetric for tails of cold and warm temperature extreme distributions. Namely, the intensity of warming during cold nights is higher than during warm nights, especially at high latitudes of Siberia. The similar dynamics is observed for cold and warm day-time temperatures. Slight summer cooling was observed in the central part of Siberia. It is associated with decrease in warm temperature extremes. In the southern Siberia in winter, we also observe some cooling mostly due to strengthening of the cold temperature extremes. Changes in daily precipitation extremes are spatially inhomogeneous. The largest increase in frequency and intensity of heavy precipitation is observed in the north of East Siberia. Negative trends related to precipitation amount decrease are found in the central West Siberia and in the south of East Siberia. The authors acknowledge partial financial support for this research from the Russian Foundation for Basic Research projects (11-05-01190 and 13-05-12034), SB RAS Integration project 131 and project VIII.80.2.1., the Ministry of Education and Science of the Russian Federation contract 8345 and grant of the President of Russian Federation (decree 181).
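
    A minimal sketch of a percentile-based threshold index of the kind used above, on synthetic daily minimum temperatures (the study itself uses ERA-Interim and APHRODITE data through the web-GIS platform Climate):

        import numpy as np

        rng = np.random.default_rng(4)
        years = np.arange(1979, 2013)
        # 365 daily minimum temperatures per year with a weak imposed warming trend
        tmin = rng.normal(loc=-5.0, scale=8.0, size=(years.size, 365)) \
               + 0.03 * (years - years[0])[:, None]

        threshold = np.percentile(tmin[:10], 90)       # 90th percentile of a base period
        warm_nights = (tmin > threshold).sum(axis=1)   # index: nights per year above it

        slope, intercept = np.polyfit(years, warm_nights, 1)
        print(f"warm-night trend: {10 * slope:.1f} nights per decade")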

  16. ClimEx - Climate change and hydrological extreme events - risks and perspectives for water management in Bavaria and Québec

    NASA Astrophysics Data System (ADS)

    Ludwig, Ralf; Baese, Frank; Braun, Marco; Brietzke, Gilbert; Brissette, Francois; Frigon, Anne; Giguère, Michel; Komischke, Holger; Kranzlmueller, Dieter; Leduc, Martin; Martel, Jean-Luc; Ricard, Simon; Schmid, Josef; von Trentini, Fabian; Turcotte, Richard; Weismueller, Jens; Willkofer, Florian; Wood, Raul

    2017-04-01

    The recent accumulation of extreme hydrological events in Bavaria and Québec has stimulated scientific and also societal interest. In addition to the challenges of an improved prediction of such situations and the implications for the associated risk management, there is, as yet, no confirmed knowledge whether and how climate change contributes to the magnitude and frequency of hydrological extreme events and how regional water management could adapt to the corresponding risks. The ClimEx project (2015-2019) investigates the effects of climate change on the meteorological and hydrological extreme events and their implications for water management in Bavaria and Québec. High Performance Computing is employed to enable the complex simulations in a hydro-climatological model processing chain, resulting in a unique high-resolution and transient (1950-2100) dataset of climatological and meteorological forcing and hydrological response: (1) The climate module has developed a large ensemble of high resolution data (12km) of the CRCM5 RCM for Central Europe and North-Eastern North America, downscaled from 50 members of the CanESM2 GCM. The dataset is complemented by all available data from the Euro-CORDEX project to account for the assessment of both natural climate variability and climate change. The large ensemble with several thousand model years provides the potential to catch rare extreme events and thus improves the process understanding of extreme events with return periods of 1000+ years. (2) The hydrology module comprises process-based and spatially explicit model setups (e.g. WaSiM) for all major catchments in Bavaria and Southern Québec in high temporal (3h) and spatial (500m) resolution. The simulations form the basis for in depth analysis of hydrological extreme events based on the inputs from the large climate model dataset. The specific data situation enables to establish a new method for 'virtual perfect prediction', which assesses climate change impacts on flood risk and water resources management by identifying patterns in the data which reveal preferential triggers of hydrological extreme events. The presentation will highlight first results from the analysis of the large scale ClimEx model ensemble, showing the current and future ratio of natural variability and climate change impacts on meteorological extreme events. Selected data from the ensemble is used to drive a hydrological model experiment to illustrate the capacity to better determine the recurrence periods of hydrological extreme events under conditions of climate change. [The authors acknowledge funding for the project from the Bavarian State Ministry for the Environment and Consumer Protection].

  17. Development of a High-Throughput Microwave Imaging System for Concealed Weapons Detection

    DTIC Science & Technology

    2016-07-15

    Index Terms: microwave imaging, multistatic radar, Fast Fourier Transform (FFT). Excerpt fragments: near-field microwave imaging is a non-ionizing ...; its computational demands are extreme; FFT imaging has long been used to efficiently construct images sampled with ... Figure caption: simulated image of 25 point scatterers imaged at range 1.5 m, with array layout depicted in Fig. 3 (left: image formed with Equation (5), Fourier ...).

  18. Mapping fire probability and severity in a Mediterranean area using different weather and fuel moisture scenarios

    NASA Astrophysics Data System (ADS)

    Arca, B.; Salis, M.; Bacciu, V.; Duce, P.; Pellizzaro, G.; Ventura, A.; Spano, D.

    2009-04-01

    Although in many countries lightning is the main cause of ignition, in the Mediterranean Basin forest fires are predominantly ignited by arson or by human negligence. The fire season peaks coincide with extreme weather conditions (mainly strong winds, hot temperatures, low atmospheric water vapour content) and high tourist presence. Many works have reported that in the Mediterranean Basin the projected impacts of climate change will cause greater weather variability and extreme weather conditions, with drier and hotter summers and heat waves. At the long-term scale, climate change could affect the fuel load and the dead/live fuel ratio, and therefore could change vegetation flammability. At the short-term scale, the increase of extreme weather events could directly affect fuel water status, and it could increase large fire occurrence. In this context, detecting the areas characterized by both high probability of large fire occurrence and high fire severity could represent an important component of fire management planning. In this work we compared several fire probability and severity maps (fire occurrence, rate of spread, fireline intensity, flame length) obtained for a study area located in North Sardinia, Italy, using the FlamMap simulator (USDA Forest Service, Missoula). FlamMap computes the potential fire behaviour characteristics over a defined landscape for given weather, wind and fuel moisture data. Different weather and fuel moisture scenarios were tested to predict the potential impact of climate change on fire parameters. The study area, characterized by a mosaic of urban areas, protected areas, and other areas subject to anthropogenic disturbances, is mainly composed of fire-prone Mediterranean maquis. The input themes needed to run FlamMap were provided as 10 m resolution grids; the wind data, obtained using a computational fluid-dynamics model, were provided as a gridded file with a resolution of 50 m. The analysis revealed high fire probability and severity in most of the areas, and therefore a high potential danger. The FlamMap outputs and the derived fire probability maps can be used in decision support systems for fire spread and behaviour and for fire danger assessment under actual and future fire regimes.

  19. College students and computers: assessment of usage patterns and musculoskeletal discomfort.

    PubMed

    Noack-Cooper, Karen L; Sommerich, Carolyn M; Mirka, Gary A

    2009-01-01

    A limited number of studies have focused on computer-use-related MSDs in college students, though their risk factor exposure may be similar to that of workers who use computers. This study examined computer use patterns of college students and made comparisons to a group of previously studied computer-using professionals. 234 students completed a web-based questionnaire concerning computer use habits and physical discomfort respondents specifically associated with computer use. As a group, students reported their computer use to be at least 'Somewhat likely' 18 out of 24 h/day, compared to 12 h for the professionals. Students reported more uninterrupted work behaviours than the professionals. Younger graduate students reported 33.7 average weekly computing hours, similar to hours reported by younger professionals. Students generally reported more frequent upper extremity discomfort than the professionals. Frequent assumption of awkward postures was associated with frequent discomfort. The findings signal a need for intervention, including training and education, prior to entry into the workforce. Students are future workers, so it is important to determine whether their increasing exposure to computers before entering the workforce may cause them to enter already injured or not to enter their chosen profession due to upper extremity MSDs.

  20. Estimating the impact of extreme climatic events on riverine sediment transport: new tools and methods

    NASA Astrophysics Data System (ADS)

    Lajeunesse, E.; Delacourt, C.; Allemand, P.; Limare, A.; Dessert, C.; Ammann, J.; Grandjean, P.

    2010-12-01

    A series of recent works has underlined that the flux of material exported from a watershed is dramatically increased during extreme climatic events, such as storms, tropical cyclones and hurricanes [Dadson et al., 2003 and 2004; Hilton et al., 2008]. Indeed, the exceptionally high rainfall rates reached during these events trigger runoff and landsliding, which destabilize slopes and accumulate a significant amount of sediment in flooded rivers. This observation raises the question of the control that extreme climatic events might exert on the denudation rate and the morphology of watersheds. Addressing this question requires measuring sediment transport in flooded rivers. However, most conventional sediment monitoring techniques rely on manually operated measurements, which cannot be performed during extreme climatic events. Monitoring riverine sediment transport during extreme climatic events therefore remains a challenging issue because of the lack of instruments and methodologies adapted to such extreme conditions. In this paper, we present a new methodology aimed at estimating the impact of extreme events on sediment transport in rivers. Our approach relies on the development of two instruments. The first one is an in-situ optical instrument, based on a LISST-25X sensor, capable of measuring both the water level and the concentration of suspended matter in rivers with a time step going from one measurement every hour at low flow to one measurement every 2 minutes during a flood. The second instrument is a remote-controlled drone helicopter used to acquire high-resolution stereophotogrammetric images of the river bed, which are used to compute DEMs and to estimate how flash floods impact the granulometry and morphology of the river. These two instruments were developed and tested during a 1.5-year field survey performed from June 2007 to January 2009 on the Capesterre river located on Basse-Terre island (Guadeloupe archipelago, Lesser Antilles Arc).

  1. PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deelman, Ewa; Carothers, Christopher; Mandal, Anirban

    Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.

  2. PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

    DOE PAGES

    Deelman, Ewa; Carothers, Christopher; Mandal, Anirban; ...

    2015-07-14

    Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.

  3. The future of scientific workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deelman, Ewa; Peterka, Tom; Altintas, Ilkay

    Today’s computational, experimental, and observational sciences rely on computations that involve many related tasks. The success of a scientific mission often hinges on the computer automation of these workflows. In April 2015, the US Department of Energy (DOE) invited a diverse group of domain and computer scientists from national laboratories supported by the Office of Science, the National Nuclear Security Administration, from industry, and from academia to review the workflow requirements of DOE’s science and national security missions, to assess the current state of the art in science workflows, to understand the impact of emerging extreme-scale computing systems on those workflows, and to develop requirements for automated workflow management in future and existing environments. This article is a summary of the opinions of over 50 leading researchers attending this workshop. We highlight use cases, computing systems, workflow needs and conclude by summarizing the remaining challenges this community sees that inhibit large-scale scientific workflows from becoming a mainstream tool for extreme-scale science.

  4. Efficient reconstruction method for ground layer adaptive optics with mixed natural and laser guide stars.

    PubMed

    Wagner, Roland; Helin, Tapio; Obereder, Andreas; Ramlau, Ronny

    2016-02-20

    The imaging quality of modern ground-based telescopes such as the planned European Extremely Large Telescope is affected by atmospheric turbulence. In consequence, they heavily depend on stable and high-performance adaptive optics (AO) systems. Using measurements of incoming light from guide stars, an AO system compensates for the effects of turbulence by adjusting so-called deformable mirror(s) (DMs) in real time. In this paper, we introduce a novel reconstruction method for ground layer adaptive optics. In the literature, a common approach to this problem is to use Bayesian inference in order to model the specific noise structure appearing due to spot elongation. This approach leads to large coupled systems with high computational effort. Recently, fast solvers of linear order, i.e., with computational complexity O(n), where n is the number of DM actuators, have emerged. However, the quality of such methods typically degrades in low flux conditions. Our key contribution is to achieve the high quality of the standard Bayesian approach while at the same time maintaining the linear order speed of the recent solvers. Our method is based on performing a separate preprocessing step before applying the cumulative reconstructor (CuReD). The efficiency and performance of the new reconstructor are demonstrated using the OCTOPUS, the official end-to-end simulation environment of the ESO for extremely large telescopes. For more specific simulations we also use the MOST toolbox.

  5. An ultrafast programmable electrical tester for enabling time-resolved, sub-nanosecond switching dynamics and programming of nanoscale memory devices.

    PubMed

    Shukla, Krishna Dayal; Saxena, Nishant; Manivannan, Anbarasu

    2017-12-01

    Recent advancements in commercialization of high-speed non-volatile electronic memories including phase change memory (PCM) have shown potential not only for advanced data storage but also for novel computing concepts. However, an in-depth understanding on ultrafast electrical switching dynamics is a key challenge for defining the ultimate speed of nanoscale memory devices that demands for an unconventional electrical setup, specifically capable of handling extremely fast electrical pulses. In the present work, an ultrafast programmable electrical tester (PET) setup has been developed exceptionally for unravelling time-resolved electrical switching dynamics and programming characteristics of nanoscale memory devices at the picosecond (ps) time scale. This setup consists of novel high-frequency contact-boards carefully designed to capture extremely fast switching transient characteristics within 200 ± 25 ps using time-resolved current-voltage measurements. All the instruments in the system are synchronized using LabVIEW, which helps to achieve various programming characteristics such as voltage-dependent transient parameters, read/write operations, and endurance test of memory devices systematically using short voltage pulses having pulse parameters varied from 1 ns rise/fall time and 1.5 ns pulse width (full width half maximum). Furthermore, the setup has successfully demonstrated strikingly one order faster switching characteristics of Ag5In5Sb60Te30 (AIST) PCM devices within 250 ps. Hence, this novel electrical setup would be immensely helpful for realizing the ultimate speed limits of various high-speed memory technologies for future computing.

  6. An ultrafast programmable electrical tester for enabling time-resolved, sub-nanosecond switching dynamics and programming of nanoscale memory devices

    NASA Astrophysics Data System (ADS)

    Shukla, Krishna Dayal; Saxena, Nishant; Manivannan, Anbarasu

    2017-12-01

    Recent advancements in the commercialization of high-speed non-volatile electronic memories, including phase change memory (PCM), have shown potential not only for advanced data storage but also for novel computing concepts. However, an in-depth understanding of ultrafast electrical switching dynamics is a key challenge for defining the ultimate speed of nanoscale memory devices, and it demands an unconventional electrical setup specifically capable of handling extremely fast electrical pulses. In the present work, an ultrafast programmable electrical tester (PET) setup has been developed expressly for unravelling time-resolved electrical switching dynamics and programming characteristics of nanoscale memory devices at the picosecond (ps) time scale. This setup consists of novel high-frequency contact boards carefully designed to capture extremely fast switching transient characteristics within 200 ± 25 ps using time-resolved current-voltage measurements. All the instruments in the system are synchronized using LabVIEW, which helps to achieve various programming characteristics such as voltage-dependent transient parameters, read/write operations, and endurance tests of memory devices systematically, using short voltage pulses with pulse parameters varying down to 1 ns rise/fall time and 1.5 ns pulse width (full width at half maximum). Furthermore, the setup has successfully demonstrated switching of Ag5In5Sb60Te30 (AIST) PCM devices within 250 ps, strikingly one order of magnitude faster. Hence, this novel electrical setup would be immensely helpful for realizing the ultimate speed limits of various high-speed memory technologies for future computing.

  7. Structures and properties of materials recovered from high shock pressures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nellis, W.J.

    1994-03-01

    Shock compression produces high dynamic pressures, densities, temperatures, and their quench rates. Because of these extreme conditions, shock compression produces materials with novel crystal structures, microstructures, and physical properties. Using a 6.5-m-long two-stage gun, we perform experiments with specimens up to 10 mm in diameter and 0.001--1 mm thick. For example, oriented disks of melt-textured superconducting YBa2Cu3O7 were shocked to 7 GPa without macroscopic fracture. Lattice defects are deposited in the crystal, which improve magnetic hysteresis at ~1 kOe. A computer code has been developed to simulate shock compaction of 100 powder particles. Computations will be compared with experiments with 15--20 μm Cu powders. The method is applicable to other powders and dynamic conditions.

  8. Efficient Ab initio Modeling of Random Multicomponent Alloys

    DOE PAGES

    Jiang, Chao; Uberuaga, Blas P.

    2016-03-08

    In this Letter, we present a novel small set of ordered structures (SSOS) method that allows extremely efficient ab initio modeling of random multi-component alloys. Using inverse II-III spinel oxides and equiatomic quinary bcc (so-called high entropy) alloys as examples, we demonstrate that an SSOS can achieve the same accuracy as a large supercell or a well-converged cluster expansion, but with significantly reduced computational cost. In particular, because of this efficiency, a large number of quinary alloy compositions can be quickly screened, leading to the identification of several new possible high entropy alloy chemistries. Furthermore, the SSOS method developed here can be broadly useful for the rapid computational design of multi-component materials, especially those with a large number of alloying elements, a challenging problem for other approaches.

  9. Computer proficiency questionnaire: assessing low and high computer proficient seniors.

    PubMed

    Boot, Walter R; Charness, Neil; Czaja, Sara J; Sharit, Joseph; Rogers, Wendy A; Fisk, Arthur D; Mitzner, Tracy; Lee, Chin Chin; Nair, Sankaran

    2015-06-01

    Computers and the Internet have the potential to enrich the lives of seniors and aid in the performance of important tasks required for independent living. A prerequisite for reaping these benefits is having the skills needed to use these systems, which is highly dependent on proper training. One prerequisite for efficient and effective training is being able to gauge current levels of proficiency. We developed a new measure (the Computer Proficiency Questionnaire, or CPQ) to measure computer proficiency in the domains of computer basics, printing, communication, Internet, calendaring software, and multimedia use. Our aim was to develop a measure appropriate for individuals with a wide range of proficiencies, from noncomputer users to extremely skilled users. To assess the reliability and validity of the CPQ, a diverse sample of older adults, including 276 older adults with no or minimal computer experience, was recruited and asked to complete the CPQ. The CPQ demonstrated excellent reliability (Cronbach's α = .98), with subscale reliabilities ranging from .86 to .97. Age, computer use, and general technology use all predicted CPQ scores. Factor analysis revealed three main factors of proficiency related to Internet and e-mail use; communication and calendaring; and computer basics. Based on our findings, we also developed a short-form CPQ (CPQ-12) with similar properties but 21 fewer questions. The CPQ and CPQ-12 are useful tools to gauge computer proficiency for training and research purposes, even among older adults with low computer proficiency. © The Author 2013. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
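
    As a quick reference for the reliability statistic reported above, the sketch below computes Cronbach's α from a respondents-by-items score matrix. The function and the synthetic data are illustrative only and are not part of the study.

    ```python
    # Sketch: Cronbach's alpha, the internal-consistency statistic the CPQ
    # study reports (alpha = .98 overall, .86-.97 for the subscales).
    import numpy as np

    def cronbach_alpha(scores):
        """scores: 2-D array, rows = respondents, columns = questionnaire items."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1)       # per-item variances
        total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scores
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    # Hypothetical 5-point responses from 50 respondents to 12 items;
    # real CPQ data would be expected to yield a much higher alpha.
    rng = np.random.default_rng(0)
    fake_scores = rng.integers(1, 6, size=(50, 12))
    print(round(cronbach_alpha(fake_scores), 3))
    ```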

  10. Extreme Mean and Its Applications

    NASA Technical Reports Server (NTRS)

    Swaroop, R.; Brownlow, J. D.

    1979-01-01

    Extreme value statistics obtained from normally distributed data are considered. An extreme mean is defined as the mean of a p-th probability truncated normal distribution. An unbiased estimate of this extreme mean and its large-sample distribution are derived. The distribution of this estimate, even for very large samples, is found to be nonnormal. Further, as the sample size increases, the variance of the unbiased estimate converges to the Cramer-Rao lower bound. The computer program used to obtain the density and distribution functions of the standardized unbiased estimate, as well as confidence intervals for the extreme mean of any data set, is included for ready application. An example is included to demonstrate the usefulness of the extreme mean in applications.
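
    The report's estimator is not reproduced in this record; as a rough numerical illustration of the quantity involved, the sketch below computes the mean of a normal distribution truncated to its upper p-th tail, which is one plausible reading of the "extreme mean". The reading and the example values are assumptions, not the report's definitions.

    ```python
    # Illustrative sketch: mean of the upper p-th tail of a normal
    # distribution, E[X | X > z_p] = mu + sigma * phi(z)/(1 - Phi(z)).
    from scipy.stats import norm

    def upper_tail_mean(p, mu=0.0, sigma=1.0):
        """Mean of X ~ N(mu, sigma^2) conditional on exceeding the (1-p) quantile."""
        z = norm.ppf(1.0 - p)                        # standardized truncation point
        mills = norm.pdf(z) / (1.0 - norm.cdf(z))    # inverse Mills ratio
        return mu + sigma * mills

    # Example: mean of the most extreme 5% of a standard normal population
    print(upper_tail_mean(0.05))   # ~2.06
    ```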

  11. An overabundance of low-density Neptune-like planets

    NASA Astrophysics Data System (ADS)

    Cubillos, Patricio; Erkaev, Nikolai V.; Juvan, Ines; Fossati, Luca; Johnstone, Colin P.; Lammer, Helmut; Lendl, Monika; Odert, Petra; Kislyakova, Kristina G.

    2017-04-01

    We present a uniform analysis of the atmospheric escape rate of Neptune-like planets with estimated radius and mass (restricted to Mp < 30 M⊕). For each planet, we compute the restricted Jeans escape parameter, Λ, for a hydrogen atom evaluated at the planetary mass, radius, and equilibrium temperature. Values of Λ ≲ 20 suggest extremely high mass-loss rates. We identify 27 planets (out of 167) that are simultaneously consistent with hydrogen-dominated atmospheres and are expected to exhibit extreme mass-loss rates. We further estimate the mass-loss rates (Lhy) of these planets with tailored atmospheric hydrodynamic models. We compare Lhy to the energy-limited (maximum-possible high-energy driven) mass-loss rates. We confirm that 25 planets (15 per cent of the sample) exhibit extremely high mass-loss rates (Lhy > 0.1 M⊕ Gyr-1), well in excess of the energy-limited mass-loss rates. This constitutes a contradiction, since the hydrogen envelopes cannot be retained given the high mass-loss rates. We hypothesize that these planets are not truly subject to such high mass-loss rates. Instead, either hydrodynamic models overestimate the mass-loss rates, transit-timing-variation measurements underestimate the planetary masses, optical transit observations overestimate the planetary radii (due to high-altitude clouds), or Neptunes have consistently higher albedos than Jupiter planets. We conclude that at least one of these established estimations/techniques is consistently producing biased values for Neptune planets. Such an important fraction of exoplanets with misinterpreted parameters can significantly bias our view of population studies, such as the observed mass-radius distribution of exoplanets.
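
    For reference, the restricted Jeans escape parameter used above is Λ = G·Mp·mH / (kB·Teq·Rp). The sketch below evaluates it for a hypothetical planet; the example numbers are illustrative and not taken from the paper.

    ```python
    # Restricted Jeans escape parameter for atomic hydrogen evaluated at the
    # planetary mass, radius, and equilibrium temperature (as in the abstract).
    from scipy.constants import G, k as k_B, m_p   # m_p ~ hydrogen atom mass

    M_EARTH = 5.972e24   # kg
    R_EARTH = 6.371e6    # m

    def jeans_parameter(mp_earth, rp_earth, t_eq_kelvin):
        mp = mp_earth * M_EARTH
        rp = rp_earth * R_EARTH
        return G * mp * m_p / (k_B * t_eq_kelvin * rp)

    # Hypothetical low-density Neptune: 8 Earth masses, 5 Earth radii, 900 K
    print(f"Lambda = {jeans_parameter(8.0, 5.0, 900.0):.1f}")   # ~13, i.e. < 20
    ```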

  12. Additional extensions to the NASCAP computer code, volume 1

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Katz, I.; Stannard, P. R.

    1981-01-01

    Extensions and revisions to a computer code that comprehensively analyzes problems of spacecraft charging (NASCAP) are documented. Using a fully three-dimensional approach, it can accurately predict spacecraft potentials under a variety of conditions. Among the extensions are a multiple electron/ion gun test tank capability and the ability to model anisotropic and time-dependent space environments. Also documented are a greatly extended MATCHG program and the preliminary version of NASCAP/LEO. The interactive MATCHG code was developed into an extremely powerful tool for the study of material-environment interactions. NASCAP/LEO, a three-dimensional code to study current collection under conditions of high voltages and short Debye lengths, was distributed for preliminary testing.

  13. [Role of multislice computed tomography in the diagnosis of acute rupture of the thoracic aorta and hepatic artery in a patient with severe concomitant injury].

    PubMed

    Muslimov, R Sh; Sharifullin, F A; Chernaia, N R; Novruzbekov, M S; Kokov, L S

    2015-01-01

    Acute traumatic aortic rupture is associated with extremely high mortality rates and requires emergency diagnosis and treatment. This clinical example shows the role of multislice spiral computed tomography in the emergency diagnosis of rupture of two large arterial vessels in severe concomitant injury. It presents the benefits of this rapid and noninvasive imaging technique, an algorithm for the study, and the semiotics of injuries in patients with suspected traumatic aortic rupture. The paper also shows the importance of this method in defining treatment policy and subsequently in assessing the results of the performed correction.

  14. Local reconstruction in computed tomography of diffraction enhanced imaging

    NASA Astrophysics Data System (ADS)

    Huang, Zhi-Feng; Zhang, Li; Kang, Ke-Jun; Chen, Zhi-Qiang; Zhu, Pei-Ping; Yuan, Qing-Xi; Huang, Wan-Xia

    2007-07-01

    Computed tomography of diffraction enhanced imaging (DEI-CT) based on a synchrotron radiation source has extremely high sensitivity for weakly absorbing low-Z samples in medical and biological fields. The authors propose a modified backprojection filtration (BPF)-type algorithm based on PI-line segments to reconstruct a region of interest from truncated refraction-angle projection data in DEI-CT. The distribution of the refractive index decrement in the sample can be directly estimated from its reconstruction images, which has been proved by experiments at the Beijing Synchrotron Radiation Facility. The algorithm paves the way for local reconstruction of large-size samples by the use of DEI-CT with a small field of view based on a synchrotron radiation source.

  15. The QuakeSim Project: Numerical Simulations for Active Tectonic Processes

    NASA Technical Reports Server (NTRS)

    Donnellan, Andrea; Parker, Jay; Lyzenga, Greg; Granat, Robert; Fox, Geoffrey; Pierce, Marlon; Rundle, John; McLeod, Dennis; Grant, Lisa; Tullis, Terry

    2004-01-01

    In order to develop a solid earth science framework for understanding and studying active tectonic and earthquake processes, this task develops simulation and analysis tools to study the physics of earthquakes using state-of-the-art modeling, data manipulation, and pattern recognition technologies. We develop clearly defined, accessible data formats and code protocols as inputs to the simulations. These are adapted to high-performance computers because the solid earth system is extremely complex and nonlinear, resulting in computationally intensive problems with millions of unknowns. With these tools it will be possible to construct the more complex models and simulations necessary to develop hazard assessment systems critical for reducing future losses from major earthquakes.

  16. Neural architecture design based on extreme learning machine.

    PubMed

    Bueno-Crespo, Andrés; García-Laencina, Pedro J; Sancho-Gómez, José-Luis

    2013-12-01

    Selection of the optimal neural architecture to solve a pattern classification problem entails choosing the relevant input units, the number of hidden neurons, and the corresponding interconnection weights. This problem has been widely studied, but existing solutions usually involve excessive computational cost and do not provide a unique answer. This paper proposes a new technique to efficiently design the MultiLayer Perceptron (MLP) architecture for classification using the Extreme Learning Machine (ELM) algorithm. The proposed method provides a high generalization capability and a unique solution for the architecture design. Moreover, the selected final network only retains those input connections that are relevant for the classification task. Experimental results show these advantages. Copyright © 2013 Elsevier Ltd. All rights reserved.
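
    For context, the core ELM step underlying the proposed design method can be sketched in a few lines: hidden-layer weights are drawn at random and only the output weights are solved in closed form. This is a generic ELM sketch under standard assumptions, not the paper's architecture-selection procedure.

    ```python
    # Minimal Extreme Learning Machine: random hidden layer, least-squares
    # output weights (the building block the architecture design relies on).
    import numpy as np

    def elm_train(X, T, n_hidden, seed=0):
        """X: (n_samples, n_inputs); T: (n_samples, n_outputs) targets."""
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
        b = rng.standard_normal(n_hidden)                 # random hidden biases
        H = np.tanh(X @ W + b)                            # hidden-layer outputs
        beta, *_ = np.linalg.lstsq(H, T, rcond=None)      # closed-form output weights
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta
    ```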

  17. Spatial aliasing for efficient direction-of-arrival estimation based on steering vector reconstruction

    NASA Astrophysics Data System (ADS)

    Yan, Feng-Gang; Cao, Bin; Rong, Jia-Jia; Shen, Yi; Jin, Ming

    2016-12-01

    A new technique is proposed to reduce the computational complexity of the multiple signal classification (MUSIC) algorithm for direction-of-arrival (DOA) estimation using a uniform linear array (ULA). The steering vector of the ULA is reconstructed as the Kronecker product of two other steering vectors, and a new cost function exhibiting spatial aliasing is derived. Thanks to the estimation ambiguity of this spatial aliasing, mirror angles mathematically related to the true DOAs are generated, based on which the full spectral search involved in the MUSIC algorithm is compressed into a limited angular sector. Further complexity analysis and performance studies are conducted by computer simulations, which demonstrate that the proposed estimator requires a greatly reduced computational burden while showing accuracy similar to that of the standard MUSIC.
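
    As a baseline reference, the sketch below implements the standard MUSIC spectral search over a full angular grid for a ULA, which is the step whose cost the proposed spatial-aliasing scheme compresses. The array spacing, grid, and signal model are illustrative assumptions rather than the paper's settings.

    ```python
    # Standard (full-grid) MUSIC pseudo-spectrum for a uniform linear array.
    import numpy as np

    def music_spectrum(snapshots, n_sources, spacing_wavelengths=0.5,
                       grid_deg=np.linspace(-90, 90, 1801)):
        """snapshots: (n_sensors, n_snapshots) complex array data."""
        n_sensors = snapshots.shape[0]
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]    # sample covariance
        _, vecs = np.linalg.eigh(R)                                # ascending eigenvalues
        En = vecs[:, : n_sensors - n_sources]                      # noise subspace
        k = np.arange(n_sensors)[:, None]
        A = np.exp(-2j * np.pi * spacing_wavelengths * k
                   * np.sin(np.deg2rad(grid_deg)))                 # steering matrix
        denom = np.linalg.norm(En.conj().T @ A, axis=0) ** 2
        return grid_deg, 1.0 / denom                               # peaks near the DOAs
    ```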

  18. Unusual cortical bone features in a patient with gorlin-goltz syndrome: a case report.

    PubMed

    Tarnoki, Adam Domonkos; Tarnoki, David Laszlo; Klara Kiss, Katalin; Bata, Pal; Karlinger, Kinga; Banvolgyi, Andras; Wikonkal, Norbert; Berczi, Viktor

    2014-12-01

    Gorlin-Goltz syndrome (GGS) consists of ectodermal and mesodermal abnormalities. In this case report we investigate lower extremity lesions of GGS. A 52-year-old man with GGS underwent skull and lower extremity computed tomography. Radiographic findings included cervical spondylosis, transparent areas with slurred margins, and cerebral falx calcification. Tibial and fibular specific cortical lesions (thin cortical and subcortical cystic lesions) were seen on radiography and confirmed by computed tomography. To our knowledge, this is the first report of such a long lesion of the tibia and fibula. Specific lower extremity cortical lesions (thin cortical and subcortical cystic lesions) may occur, and these abnormalities can be found on radiography or CT; they are most probably attributable to retinoid treatment.

  19. Unusual Cortical Bone Features in a Patient with Gorlin-Goltz Syndrome: A Case Report

    PubMed Central

    Tarnoki, Adam Domonkos; Tarnoki, David Laszlo; Klara Kiss, Katalin; Bata, Pal; Karlinger, Kinga; Banvolgyi, Andras; Wikonkal, Norbert; Berczi, Viktor

    2014-01-01

    Gorlin-Goltz syndrome (GGS) consists of ectodermal and mesodermal abnormalities. In this case report we investigate lower extremity lesions of GGS. A 52-year-old man with GGS underwent skull and lower extremity computed tomography. Radiographic findings included cervical spondylosis, transparent areas with slurred margins, and cerebral falx calcification. Tibial and fibular specific cortical lesions (thin cortical and subcortical cystic lesions) were seen on radiography and confirmed by computed tomography. To our knowledge, this is the first report of such a long lesion of the tibia and fibula. Specific lower extremity cortical lesions (thin cortical and subcortical cystic lesions) may occur, and these abnormalities can be found on radiography or CT; they are most probably attributable to retinoid treatment. PMID:25780550

  20. Computational dynamic approaches for temporal omics data with applications to systems medicine.

    PubMed

    Liang, Yulan; Kelemen, Arpad

    2017-01-01

    Modeling and predicting biological dynamic systems while simultaneously estimating the kinetic structural and functional parameters are extremely important in systems and computational biology. This is key for understanding the complexity of human health, drug response, disease susceptibility and pathogenesis for systems medicine. Temporal omics data used to measure dynamic biological systems are essential for discovering complex biological interactions and clinical mechanisms and causation. However, delineating the possible associations and causalities of genes, proteins, metabolites, cells and other biological entities from high throughput time course omics data is challenging, and conventional experimental techniques are not suited to it in the big omics era. In this paper, we present various recently developed dynamic trajectory and causal network approaches for temporal omics data, which are extremely useful for researchers who want to start working in this challenging research area. Moreover, applications to various biological systems, health conditions and disease statuses are presented, with examples that summarize state-of-the-art performance on different specific mining tasks. We critically discuss the merits, drawbacks and limitations of the approaches, and the associated main challenges for the years ahead. The most recent computing tools and software for analyzing specific problem types, associated platform resources, and other potential uses of the dynamic trajectory and interaction methods are also presented and discussed in detail.

  1. Note: Is it symmetric or not?

    PubMed

    Stanton, John F

    2013-07-28

    The reasons why a molecule might distort from an idealized high-symmetry configuration (for example, D3h for the nitrate radical) in a quantum-chemical computation are well known, but are briefly reviewed here in light of considerable recent debate on the BNB molecule. The role of the pseudo-Jahn-Teller effect in such cases is emphasized, as is the ultimate relevance and proper interpretation of the title question in cases where the adiabatic potential energy surface is extremely flat.

  2. The Concurrent Implementation of Radio Frequency Identification and Unique Item Identification at Naval Surface Warfare Center, Crane, IN as a Model for a Navy Supply Chain Application

    DTIC Science & Technology

    2007-12-01

    Only fragmentary indexing text is available for this record. It cites work on electromagnetic theory related to RFID ("Field measurements using active scatterers" and "Theory of loaded scatterers") and preserves part of the report's acronym list, including Business Case Analysis, BRE (Bangor Radio Frequency Evaluation), C4ISR (Command, Control, Communications, Computers, Intelligence, Surveillance...), EEDSKs (Early Entry Deployment Support Kits), EHF (Extremely High Frequency), EUCOM (European Command), and FCC (Federal Communications...).

  3. B-2 Extremely High Frequency SATCOM and Computer Increment 1 (B-2 EHF Inc 1)

    DTIC Science & Technology

    2013-12-01

    Only fragmentary indexing text is available for this record: milestone dates for a final DIOT&E flight (JUL 2012) and for RAA (MAR 2015, with one estimate of MAR 2016); a note that RAA is defined as eight assigned aircraft modified, sufficient aircrews and maintenance personnel trained, sufficient aircrew and...incremental upgrade; and the abbreviations DIOT&E (Dedicated Initial Operational Test and Evaluation) and RAA (Required Assets Available) for the B-2 EHF Inc 1 program.

  4. Equivalent reduced model technique development for nonlinear system dynamic response

    NASA Astrophysics Data System (ADS)

    Thibault, Louis; Avitabile, Peter; Foley, Jason; Wolfson, Janet

    2013-04-01

    The dynamic response of structural systems commonly involves nonlinear effects. Often, structural systems are made up of several components whose individual behavior is essentially linear compared to the total assembled system. However, the assembly of linear components using highly nonlinear connection elements or contact regions causes the entire system to become nonlinear. Conventional transient nonlinear integration of the equations of motion can be extremely computationally intensive, especially when the finite element models describing the components are very large and detailed. In this work, the equivalent reduced model technique (ERMT) is developed to address complicated nonlinear contact problems. ERMT utilizes a highly accurate model reduction scheme, the system equivalent reduction expansion process (SEREP). Extremely reduced-order models that provide the dynamic characteristics of linear components, interconnected with highly nonlinear connection elements, are formulated with SEREP for dynamic response evaluation using direct integration techniques. The full-space solution is compared to the response obtained using drastically reduced models to demonstrate the usefulness of the technique for a variety of analytical cases.

  5. Interaction Entropy: A New Paradigm for Highly Efficient and Reliable Computation of Protein-Ligand Binding Free Energy.

    PubMed

    Duan, Lili; Liu, Xiao; Zhang, John Z H

    2016-05-04

    Efficient and reliable calculation of protein-ligand binding free energy is a grand challenge in computational biology and is of critical importance in drug design and many other molecular recognition problems. The main challenge lies in the calculation of entropic contribution to protein-ligand binding or interaction systems. In this report, we present a new interaction entropy method which is theoretically rigorous, computationally efficient, and numerically reliable for calculating entropic contribution to free energy in protein-ligand binding and other interaction processes. Drastically different from the widely employed but extremely expensive normal mode method for calculating entropy change in protein-ligand binding, the new method calculates the entropic component (interaction entropy or -TΔS) of the binding free energy directly from molecular dynamics simulation without any extra computational cost. Extensive study of over a dozen randomly selected protein-ligand binding systems demonstrated that this interaction entropy method is both computationally efficient and numerically reliable and is vastly superior to the standard normal mode approach. This interaction entropy paradigm introduces a novel and intuitive conceptual understanding of the entropic effect in protein-ligand binding and other general interaction systems as well as a practical method for highly efficient calculation of this effect.
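
    The estimator described above can be written as −TΔS = kT ln⟨exp(βΔE_int)⟩, with ΔE_int the instantaneous fluctuation of the protein-ligand interaction energy about its trajectory mean. The sketch below is a minimal illustration of that formula under assumed units; it is not the authors' code.

    ```python
    # Interaction-entropy estimate, -T*dS = kT * ln< exp(beta * dE_int) >,
    # computed from interaction energies sampled along an MD trajectory.
    import numpy as np
    from scipy.special import logsumexp

    K_B = 0.0019872041   # Boltzmann constant, kcal/(mol*K)

    def interaction_entropy(e_int_kcal, temperature=300.0):
        """e_int_kcal: 1-D array of protein-ligand interaction energies (kcal/mol)."""
        e = np.asarray(e_int_kcal, dtype=float)
        beta = 1.0 / (K_B * temperature)
        fluct = e - e.mean()                       # dE_int for each frame
        # ln(mean(exp(x))) evaluated stably as logsumexp(x) - ln(N)
        return K_B * temperature * (logsumexp(beta * fluct) - np.log(e.size))
    ```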

  6. Variation in the Gross Tumor Volume and Clinical Target Volume for Preoperative Radiotherapy of Primary Large High-Grade Soft Tissue Sarcoma of the Extremity Among RTOG Sarcoma Radiation Oncologists

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Dian, E-mail: dwang@mcw.edu; Bosch, Walter; Kirsch, David G.

    Purpose: To evaluate variability in the definition of preoperative radiotherapy gross tumor volume (GTV) and clinical target volume (CTV) delineated by sarcoma radiation oncologists. Methods and Materials: Extremity sarcoma planning CT images along with the corresponding diagnostic MRI from two patients were distributed to 10 Radiation Therapy Oncology Group sarcoma radiation oncologists with instructions to define GTV and CTV using standardized guidelines. The CT data with contours were then returned for central analysis. Contours representing statistically corrected 95% (V95) and 100% (V100) agreement were computed for each structure. Results: For the GTV, the minimum, maximum, and mean (SD) volumes (mL) were 674, 798, and 752 ± 35 for the lower extremity case and 383, 543, and 447 ± 46 for the upper extremity case. The volumes (cc) of the union, V95, and V100 were 882, 761, and 752 for the lower, and 587, 461, and 455 for the upper extremity, respectively. The overall GTV agreement was judged to be almost perfect in both lower and upper extremity cases (kappa = 0.9 [p < 0.0001] and kappa = 0.86 [p < 0.0001]). For the CTV, the minimum, maximum, and mean (SD) volumes (mL) were 1145, 1911, and 1605 ± 211 for the lower extremity case and 637, 1246, and 1006 ± 180 for the upper extremity case. The volumes (cc) of the union, V95, and V100 were 2094, 1609, and 1593 for the lower, and 1533, 1020, and 965 for the upper extremity cases, respectively. The overall CTV agreement was judged to be almost perfect in the lower extremity case (kappa = 0.85 [p < 0.0001]) but only substantial in the upper extremity case (kappa = 0.77 [p < 0.0001]). Conclusions: Almost perfect agreement existed in the GTV of these two representative cases. There was no significant disagreement in the CTV of the lower extremity, but variation in the CTV of the upper extremity was seen, perhaps related to positional differences between the planning CT and the diagnostic MRI.

  7. Method for high-precision multi-layered thin film deposition for deep and extreme ultraviolet mirrors

    DOEpatents

    Ruffner, Judith Alison

    1999-01-01

    A method for coating (flat or non-flat) optical substrates with high-reflectivity multi-layer coatings for use at Deep Ultra-Violet ("DUV") and Extreme Ultra-Violet ("EUV") wavelengths. The method results in a product with minimum feature sizes of less than 0.10 μm for the shortest wavelength (13.4 nm). The present invention employs a computer-based modeling and deposition method to enable lateral and vertical thickness control by scanning the position of the substrate with respect to the sputter target during deposition. The thickness profile of the sputter targets is modeled before deposition and then an appropriate scanning algorithm is implemented to produce any desired, radially-symmetric thickness profile. The present invention offers the ability to predict and achieve a wide range of thickness profiles on flat or figured substrates, i.e., account for a 1/R^2 factor in a model, and the ability to predict and accommodate changes in deposition rate as a result of plasma geometry, i.e., over figured substrates.

  8. New digital capacitive measurement system for blade clearances

    NASA Astrophysics Data System (ADS)

    Moenich, Marcel; Bailleul, Gilles

    This paper presents a totally new concept for tip blade clearance evaluation in turbine engines. The system is able to detect exact 'measurands' even under high temperature and severe conditions such as ionization. It is based on a heavy-duty probe head, a miniaturized thick-film hybrid electronic circuit, and a signal processing unit for real-time computing. The high-frequency individual measurement values are digitally filtered and linearized in real time. The electronics are built in hybrid technology and can therefore be kept extremely small and robust, so that the system can be used on actual flights.

  9. "Extreme Programming" in a Bioinformatics Class

    ERIC Educational Resources Information Center

    Kelley, Scott; Alger, Christianna; Deutschman, Douglas

    2009-01-01

    The importance of Bioinformatics tools and methodology in modern biological research underscores the need for robust and effective courses at the college level. This paper describes such a course designed on the principles of cooperative learning based on a computer software industry production model called "Extreme Programming" (EP).…

  10. Computation of high-resolution SAR distributions in a head due to a radiating dipole antenna representing a hand-held mobile phone.

    PubMed

    Van de Kamer, J B; Lagendijk, J J W

    2002-05-21

    SAR distributions in a healthy female adult head as a result of a radiating vertical dipole antenna (frequency 915 MHz) representing a hand-held mobile phone have been computed for three different resolutions: 2 mm, 1 mm and 0.4 mm. The extremely high resolution of 0.4 mm was obtained with our quasistatic zooming technique, which is briefly described in this paper. For an effectively transmitted power of 0.25 W, the maximum averaged SAR values in both cubic- and arbitrary-shaped volumes are, respectively, about 1.72 and 2.55 W kg(-1) for 1 g and 0.98 and 1.73 W kg(-1) for 10 g of tissue. These numbers do not vary much (<8%) for the different resolutions, indicating that SAR computations at a resolution of 2 mm are sufficiently accurate to describe the large-scale distribution. However, considering the detailed SAR pattern in the head, large differences may occur if high-resolution computations are performed rather than low-resolution ones. These deviations are caused by both increased modelling accuracy and improved anatomical description in higher resolution simulations. For example, the SAR profile across a boundary between tissues with high dielectric contrast is much more accurately described at higher resolutions. Furthermore, low-resolution dielectric geometries may suffer from loss of anatomical detail, which greatly affects small-scale SAR distributions. Thus, for strongly inhomogeneous regions high-resolution SAR modelling is an absolute necessity.

  11. Dynamic remapping decisions in multi-phase parallel computations

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.; Reynolds, P. F., Jr.

    1986-01-01

    The effectiveness of any given mapping of workload to processors in a parallel system is dependent on the stochastic behavior of the workload. Program behavior is often characterized by a sequence of phases, with phase changes occurring unpredictably. During a phase, the behavior is fairly stable, but may become quite different during the next phase. Thus a workload assignment generated for one phase may hinder performance during the next phase. We consider the problem of deciding whether to remap a parallel computation in the face of uncertainty in remapping's utility. Fundamentally, it is necessary to balance the expected remapping performance gain against the delay cost of remapping. This paper treats this problem formally by constructing a probabilistic model of a computation with at most two phases. We use stochastic dynamic programming to show that the remapping decision policy which minimizes the expected running time of the computation has an extremely simple structure: the optimal decision at any step is made by comparing the probability of remapping gain against a threshold. This theoretical result stresses the importance of detecting a phase change and assessing the possibility of gain from remapping. We also empirically study the sensitivity of optimal performance to an imprecise decision threshold. Under a wide range of model parameter values, we find nearly optimal performance if remapping is chosen simply when the gain probability is high. These results strongly suggest that, except in extreme cases, the remapping decision problem is essentially that of dynamically determining whether gain can be achieved by remapping after a phase change; precise quantification of the decision model parameters is not necessary.
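
    The resulting policy is simple enough to sketch: remap exactly when the estimated probability of a remapping gain exceeds a threshold. In the paper the threshold comes out of a stochastic dynamic program; the break-even formula below is only an assumed stand-in to show the shape of the rule.

    ```python
    # Illustrative threshold policy (the paper derives the actual threshold
    # via stochastic dynamic programming; this break-even rule is assumed).
    def should_remap(p_gain, expected_gain, remap_delay):
        """p_gain: estimated probability that remapping would help;
        expected_gain: expected run time saved if it does;
        remap_delay: fixed cost (time) of performing the remap."""
        threshold = remap_delay / expected_gain
        return p_gain > threshold

    # Example: remapping costs 2 s and would save ~10 s after a phase change
    print(should_remap(p_gain=0.3, expected_gain=10.0, remap_delay=2.0))  # True
    ```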

  12. The Evolution of Biological Complexity in Digital Organisms

    NASA Astrophysics Data System (ADS)

    Ofria, Charles

    2013-03-01

    When Darwin first proposed his theory of evolution by natural selection, he realized that it had a problem explaining the origins of traits of ``extreme perfection and complication'' such as the vertebrate eye. Critics of Darwin's theory have latched onto this perceived flaw as a proof that Darwinian evolution is impossible. In anticipation of this issue, Darwin described the perfect data needed to understand this process, but lamented that such data are ``scarcely ever possible'' to obtain. In this talk, I will discuss research where we use populations of digital organisms (self-replicating and evolving computer programs) to elucidate the genetic and evolutionary processes by which new, highly complex traits arise, drawing inspiration directly from Darwin's wistful thinking and hypotheses. During the process of evolution in these fully transparent computational environments we can measure the incorporation of new information into the genome, a process akin to a natural Maxwell's demon, and identify the original source of any such information. We show that, as Darwin predicted, much of the information used to encode a complex trait was already in the genome as part of simpler evolved traits, and that many routes must be possible for a new complex trait to have a high probability of successfully evolving. In even more extreme examples of the evolution of complexity, we are now using these same principles to examine the evolutionary dynamics that drive major transitions in evolution, that is, transitions to higher levels of organization, which are some of the most complex evolutionary events to occur in nature. Finally, I will explore some of the implications of this research for other aspects of evolutionary biology, as well as ways that these evolutionary principles can be applied toward solving computational and engineering problems.

  13. Verifying a computational method for predicting extreme ground motion

    USGS Publications Warehouse

    Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, Brad T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.

    2011-01-01

    In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.

  14. ELTs adaptive optics for multi-objects 3D spectroscopy: key parameters and design rules

    NASA Astrophysics Data System (ADS)

    Neichel, B.; Conan, J.-M.; Fusco, T.; Gendron, E.; Puech, M.; Rousset, G.; Hammer, F.

    2006-06-01

    In the last few years, new Adaptive Optics [AO] techniques have emerged to answer new astronomical challenges: Ground-Layer AO [GLAO] and Multi-Conjugate AO [MCAO] to access a wider Field of View [FoV], Multi-Object AO [MOAO] for the simultaneous observation of several faint galaxies, and eXtreme AO [XAO] for the detection of faint companions. In this paper, we focus our study on one of these applications: high red-shift galaxy observations using MOAO techniques in the framework of Extremely Large Telescopes [ELTs]. We present the high-level specifications of a dedicated instrument. We choose to describe the scientific requirements with the following criteria: 40% of Ensquared Energy [EE] in H band (1.65 μm) and in an aperture size from 25 to 150 mas. Considering these specifications, we investigate different AO solutions by means of Fourier-based simulations. Sky Coverage [SC] is computed for Natural and Laser Guide Star [NGS, LGS] systems. We show that specifications are met for NGS-based systems at the cost of an extremely low SC. For the LGS approach, the option of low-order correction with a faint NGS is discussed. We demonstrate that this last solution allows the scientific requirements to be met together with quasi-full SC.

  15. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE PAGES

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...

    2017-09-21

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  16. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  17. High spatial resolution time-resolved magnetic resonance angiography of lower extremity tumors at 3T

    PubMed Central

    Wu, Gang; Jin, Teng; Li, Ting; Morelli, John; Li, Xiaoming

    2016-01-01

    The aim of this study was to compare the diagnostic value of high spatial resolution time-resolved magnetic resonance angiography with interleaved stochastic trajectory (TWIST) using Gadobutrol to computed tomography angiography (CTA) for preoperative evaluation of lower extremity tumors. This prospective study was approved by the institutional review board. Fifty consecutive patients (31 men, 19 women, age range 18–80 years, average age 42.7 years) with lower extremity tumors underwent TWIST magnetic resonance angiography (MRA) and CTA. Digital subtraction angiography was available for 8 patients. Image quality of MRA was compared with CTA by 2 radiologists according to a 4-point Likert scale. Arterial involvement by tumor was compared between MRA and CTA using the kappa test. The ability to identify feeding arteries and arterio-venous fistulae (AVF) was compared using the Wilcoxon signed rank test and McNemar test, respectively. Image quality of MRA and CTA was rated without a statistically significant difference (3.88 ± 0.37 vs. 3.97 ± 0.16, P = 0.135). Intermodality agreement was high for the identification of arterial invasion (kappa = 0.806 ± 0.073 for Reader 1, kappa = 0.805 ± 0.073 for Reader 2). Readers found AVF in 27 of 50 MRA cases and 14 of 50 CTA cases (P < 0.001). The mean number of feeding arteries identified with MRA was significantly higher than with CTA (2.08 ± 1.72 vs. 1.62 ± 1.52, P = 0.02). TWIST MRA is a reliable imaging modality for the assessment of lower extremity tumors and is comparable to CTA for the identification of AVF and feeding arteries. PMID:27631262

  18. Fully Convolutional Architecture for Low-Dose CT Image Noise Reduction

    NASA Astrophysics Data System (ADS)

    Badretale, S.; Shaker, F.; Babyn, P.; Alirezaie, J.

    2017-10-01

    One of the critical topics in medical low-dose Computed Tomography (CT) imaging is how best to maintain image quality. As the quality of images decreases with lowering of the X-ray radiation dose, improving image quality is extremely important and challenging. We have proposed a novel approach to denoise low-dose CT images. Our algorithm directly learns an end-to-end mapping from low-dose CT images to denoised images of normal-dose quality. Our method is based on a deep convolutional neural network with rectified linear units. By learning various low-level to high-level features from a low-dose image, the proposed algorithm is capable of creating a high-quality denoised image. We demonstrate the superiority of our technique by comparing the results with two other state-of-the-art methods in terms of peak signal-to-noise ratio, root mean square error, and a structural similarity index.
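
    A minimal sketch of the kind of fully convolutional denoiser described above is given below in PyTorch. The depth, channel width, and residual (noise-predicting) formulation are illustrative assumptions, not the authors' exact architecture.

    ```python
    # Small fully convolutional denoiser: maps a low-dose CT slice to an
    # estimate of its normal-dose counterpart by predicting the noise.
    import torch
    import torch.nn as nn

    class SimpleCTDenoiser(nn.Module):
        def __init__(self, channels=64, depth=5):
            super().__init__()
            layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
            for _ in range(depth - 2):
                layers += [nn.Conv2d(channels, channels, 3, padding=1),
                           nn.ReLU(inplace=True)]
            layers.append(nn.Conv2d(channels, 1, 3, padding=1))
            self.body = nn.Sequential(*layers)

        def forward(self, x):              # x: (batch, 1, H, W) low-dose slice
            return x - self.body(x)        # subtract the predicted noise

    model = SimpleCTDenoiser()
    criterion = nn.MSELoss()               # trained against normal-dose slices
    ```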

  19. Foundational Tools for Petascale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Barton

    2014-05-19

    The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging Petascale systems. The overall goal for this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) The analysis, instrumentation and control of binary programs. Work in this area falls under the general framework of the Dyninst API tool kits. (2) Infrastructure for building tools and applications at extreme scale. Work in this area falls under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, “High-Performance Energy Applications and Systems”, SC0004061/FG02-10ER25972, UW PRJ36WV.

  20. Integrity management of offshore structures and its implication on computation of structural action effects and resistance

    NASA Astrophysics Data System (ADS)

    Moan, T.

    2017-12-01

    An overview of integrity management of offshore structures, with emphasis on the oil and gas energy sector, is given. Based on relevant accident experience and the means to control the associated risks, accidents are categorized from a technical-physical as well as a human and organizational point of view. Structural risk relates to extreme actions as well as structural degradation. Risk mitigation measures, including adequate design criteria, inspection, repair and maintenance as well as quality assurance and control of engineering processes, are briefly outlined. The current status of risk and reliability methodology to aid decisions in integrity management is briefly reviewed. Finally, the need to balance the uncertainties in data, methods and computational effort is emphasized, together with the cautious use, quality assurance and control of high-fidelity methods to avoid human errors, and a plea is made to develop both high-fidelity and efficient simplified methods for design.

  1. Final report for “Extreme-scale Algorithms and Solver Resilience”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gropp, William Douglas

    2017-06-30

    This is a joint project with principal investigators at Oak Ridge National Laboratory, Sandia National Laboratories, the University of California at Berkeley, and the University of Tennessee. Our part of the project involves developing performance models for highly scalable algorithms and the development of latency tolerant iterative methods. During this project, we extended our performance models for the Multigrid method for solving large systems of linear equations and conducted experiments with highly scalable variants of conjugate gradient methods that avoid blocking synchronization. In addition, we worked with the other members of the project on alternative techniques for resilience and reproducibility. We also presented an alternative approach for reproducible dot-products in parallel computations that performs almost as well as the conventional approach by separating the order of computation from the details of the decomposition of vectors across the processes.
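
    The report only states the idea — make the reduction order independent of how the vectors are decomposed across processes — so the sketch below is merely one illustration of that principle (a fixed global blocking with left-to-right accumulation), not the project's implementation.

    ```python
    # Reduction order fixed by a global block size, independent of how many
    # processes own the data, so the result is bitwise repeatable.
    import numpy as np

    GLOBAL_BLOCK = 1024   # chosen once, never tied to the process decomposition

    def reproducible_dot(x, y):
        partials = [float(np.dot(x[i:i + GLOBAL_BLOCK], y[i:i + GLOBAL_BLOCK]))
                    for i in range(0, len(x), GLOBAL_BLOCK)]
        total = 0.0
        for p in partials:      # fixed left-to-right accumulation of block sums
            total += p
        return total
    ```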

  2. Computed tomography and magnetic resonance imaging in diagnosing hepatocellular carcinoma.

    PubMed

    Dalla Palma, L; Pozzi-Mucelli, R S

    1992-02-01

    The evaluation of hepatocellular carcinoma (HCC) is based upon ultrasonography (US), which has proved to have high sensitivity and is also extremely useful in guiding percutaneous needle biopsy. The main role of computed tomography (CT) and magnetic resonance imaging (MRI) is to supplement US in evaluating the extent of HCC. The authors discuss the different techniques for examination of the liver with both CT and MRI as far as the modalities of contrast enhancement, site of injection, and type of contrast agents are concerned. The differences between low-field and high-field magnets are also discussed. The main CT and MRI findings are illustrated, depending upon the technique of examination. Finally, the role of these techniques is discussed. Based upon personal experience and the data in the CT literature, CT, if performed with up-to-date technology and intraarterial injection (lipiodol), is the method of choice to supplement US in the evaluation of HCC.

  3. [Computer mediated discussion and attitude polarization].

    PubMed

    Shiraishi, Takashi; Endo, Kimihisa; Yoshida, Fujio

    2002-10-01

    This study examined the hypothesis that computer mediated discussions lead to more extreme decisions than face-to-face (FTF) meetings. Kiesler, Siegel, & McGuire (1984) claimed that computer mediated communication (CMC) tended to be relatively uninhibited, as seen in 'flaming', and that group decisions under CMC using the Choice Dilemma Questionnaire tended to be more extreme and riskier than in FTF meetings. However, for the same reason, CMC discussions on controversial social issues, for which participants initially hold strongly opposing views, might be less likely to reach a consensus, and no polarization should occur. Fifteen 4-member groups discussed a controversial social issue under one of three conditions: FTF, CMC, and partition. After discussion, participants rated their position as a group on a 9-point bipolar scale ranging from strong disagreement to strong agreement. A stronger polarization effect was observed for FTF groups than for those whose members were separated by partitions. However, no extreme shift from their original, individual positions was found for CMC participants. These results were discussed in terms of 'expertise and status equalization' and 'absence of social context cues' under CMC.

  4. Evolution of precipitation extremes in two large ensembles of climate simulations

    NASA Astrophysics Data System (ADS)

    Martel, Jean-Luc; Mailhot, Alain; Talbot, Guillaume; Brissette, François; Ludwig, Ralf; Frigon, Anne; Leduc, Martin; Turcotte, Richard

    2017-04-01

    Recent studies project significant changes in the future distribution of precipitation extremes due to global warming. It is likely that extreme precipitation intensity will increase in a future climate and that extreme events will be more frequent. In this work, annual maximum daily precipitation series from the Canadian Earth System Model (CanESM2) 50-member large ensemble (spatial resolution of 2.8°x2.8°) and the Community Earth System Model (CESM1) 40-member large ensemble (spatial resolution of 1°x1°) are used to investigate extreme precipitation over the historical (1980-2010) and future (2070-2100) periods. The use of these ensembles results in respectively 1500 (30 years x 50 members) and 1200 (30 years x 40 members) simulated years over both the historical and future periods. These large datasets allow the computation of empirical daily extreme precipitation quantiles for large return periods. Using the CanESM2 and CESM1 large ensembles, extreme daily precipitation with return periods ranging from 2 to 100 years is computed in the historical and future periods to assess the impact of climate change. Results indicate that daily precipitation extremes generally increase in the future over most land grid points and that these increases will also affect the 100-year extreme daily precipitation. Considering that many public infrastructures have lifespans exceeding 75 years, the increase in extremes has important implications for the service levels of water infrastructure and for public safety. Estimated increases in precipitation associated with very extreme events (e.g., 100-year return periods) will drastically change the likelihood of flooding and its extent in a future climate. These results, although interesting, need to be extended to sub-daily durations, which are relevant for urban flood protection and urban infrastructure design (e.g., sewer networks, culverts). Models and simulations at finer spatial and temporal resolution are therefore needed.
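
    To make the pooled-ensemble calculation above concrete, the sketch below estimates the T-year daily return level at one grid point as the empirical (1 − 1/T) quantile of all members' annual maxima. The synthetic Gumbel data are only a placeholder for the ensemble output.

    ```python
    # Empirical T-year return level from pooled annual maxima
    # (e.g., ~1500 values per grid point for the 50-member CanESM2 ensemble).
    import numpy as np

    def empirical_return_level(annual_maxima, return_period_years):
        prob = 1.0 - 1.0 / return_period_years           # e.g. 0.99 for 100 years
        return np.quantile(np.asarray(annual_maxima, dtype=float), prob)

    # Placeholder data: 50 members x 30 years of annual maximum daily precipitation
    rng = np.random.default_rng(1)
    pooled = rng.gumbel(loc=40.0, scale=10.0, size=50 * 30)    # mm/day, synthetic
    print(empirical_return_level(pooled, 100))                  # ~100-year level
    ```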

  5. Numerical computation of spherical harmonics of arbitrary degree and order by extending exponent of floating point numbers

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2012-04-01

    By extending the exponent of floating point numbers with an additional integer as the power index of a large radix, we compute fully normalized associated Legendre functions (ALF) by recursion without underflow problems. The new method enables us to evaluate ALFs of extremely high degree such as 2^32 = 4,294,967,296, which corresponds to around 1 cm resolution on the Earth's surface. By limiting the application of exponent extension to a few working variables in the recursion, choosing a suitable large power of 2 as the radix, and embedding the contents of the basic arithmetic procedures for exponent-extended floating point numbers directly in the program computing the recurrence formulas, we achieve the evaluation of ALFs in the double-precision environment at the cost of around a 10% increase in computational time per single ALF. This formulation realizes meaningful execution of spherical harmonic synthesis and/or analysis of arbitrary degree and order.
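
    To make the exponent-extension idea concrete, the sketch below stores a value as a pair (f, i) meaning f · BIG**i with BIG a large power of two, renormalizing the significand into a safe band after each multiplication. The radix, band, and function names are illustrative choices, not the paper's implementation.

    ```python
    # Toy "extended-exponent" numbers: value = f * BIG**i, with f kept in a
    # band wide enough that products of two normalized f's never overflow.
    BIG = 2.0 ** 960        # large power-of-two radix
    BIG_INV = 2.0 ** -960
    BAND = 2.0 ** 480       # keep |f| in [1/BAND, BAND)

    def xnorm(f, i):
        """Renormalize so the significand stays in the safe dynamic range."""
        if f == 0.0:
            return 0.0, 0
        while abs(f) >= BAND:
            f *= BIG_INV
            i += 1
        while abs(f) < 1.0 / BAND:
            f *= BIG
            i -= 1
        return f, i

    def xmul(a, b):
        """Multiply two extended-exponent numbers (fa, ia) and (fb, ib)."""
        (fa, ia), (fb, ib) = a, b
        return xnorm(fa * fb, ia + ib)

    # Example: a product that would underflow an ordinary double
    tiny = xnorm(1.0e-300, 0)
    print(xmul(tiny, tiny))   # representable as (f, i) even though 1e-600 is not
    ```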

  6. Computer-Aided Nodule Assessment and Risk Yield Risk Management of Adenocarcinoma: The Future of Imaging?

    PubMed

    Foley, Finbar; Rajagopalan, Srinivasan; Raghunath, Sushravya M; Boland, Jennifer M; Karwoski, Ronald A; Maldonado, Fabien; Bartholmai, Brian J; Peikert, Tobias

    2016-01-01

    Increased clinical use of chest high-resolution computed tomography results in increased identification of lung adenocarcinomas and persistent subsolid opacities. However, these lesions range from very indolent to extremely aggressive tumors. Clinically relevant diagnostic tools to noninvasively risk stratify and guide individualized management of these lesions are lacking. Research efforts investigating semiquantitative measures to decrease interrater and intrarater variability are emerging, and in some cases steps have been taken to automate this process. However, many such methods are currently still suboptimal, require validation, and are not yet clinically applicable. The computer-aided nodule assessment and risk yield software application is a validated tool for automated, quantitative, and noninvasive risk stratification of adenocarcinoma lung nodules. Computer-aided nodule assessment and risk yield correlates well with consensus histology and postsurgical patient outcomes, and therefore may help to guide individualized patient management, for example, in the identification of nodules amenable to radiological surveillance or in need of adjunctive therapy. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Computational analysis of drop formation before and after the first singularity: the fate of free and satellite drops during simple dripping and DOD drop formation

    NASA Astrophysics Data System (ADS)

    Chen, Alvin U.; Basaran, Osman A.

    2000-11-01

    Drop formation from a capillary --- dripping mode --- or an ink jet nozzle --- drop-on-demand (DOD) mode --- falls into a class of scientifically challenging yet practically useful free surface flows that exhibit a finite time singularity, i.e. the breakup of an initially single liquid mass into two or more fragments. While computational tools to model such problems have been developed recently, they lack the accuracy needed to quantitatively predict all the dynamics observed in experiments. Here we present a new finite element method (FEM) based on a robust algorithm for elliptic mesh generation and remeshing to handle extremely large interface deformations. The new algorithm allows continuation of computations beyond the first singularity to track fates of both primary and any satellite drops. The accuracy of the computations is demonstrated by comparison of simulations with experimental measurements made possible with an ultra high-speed digital imager capable of recording 100 million frames per second.

  8. A Unified Data-Driven Approach for Programming In Situ Analysis and Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aiken, Alex

    The placement and movement of data is becoming the key limiting factor on both performance and energy efficiency of high performance computations. As systems generate more data, it is becoming increasingly difficult to actually move that data elsewhere for post-processing, as the rate of improvements in supporting I/O infrastructure is not keeping pace. Together, these trends are creating a shift in how we think about exascale computations, from a viewpoint that focuses on FLOPS to one that focuses on data and data-centric operations as fundamental to the reasoning about, and optimization of, scientific workflows on extreme-scale architectures. The overarching goal of our effort was the study of a unified data-driven approach for programming applications and in situ analysis and visualization. Our work was to understand the interplay between data-centric programming model requirements at extreme-scale and the overall impact of those requirements on the design, capabilities, flexibility, and implementation details for both applications and the supporting in situ infrastructure. In this context, we made many improvements to the Legion programming system (one of the leading data-centric models today) and demonstrated in situ analyses on real application codes using these improvements.

  9. A multiply-add engine with monolithically integrated 3D memristor crossbar/CMOS hybrid circuit.

    PubMed

    Chakrabarti, B; Lastras-Montaño, M A; Adam, G; Prezioso, M; Hoskins, B; Payvand, M; Madhavan, A; Ghofrani, A; Theogarajan, L; Cheng, K-T; Strukov, D B

    2017-02-14

    Silicon (Si) based complementary metal-oxide semiconductor (CMOS) technology has been the driving force of the information-technology revolution. However, scaling of CMOS technology as per Moore's law has reached a serious bottleneck. Among the emerging technologies, memristive devices are promising for both memory and computing applications. Hybrid CMOS/memristor circuits with CMOL (CMOS + "Molecular") architecture have been proposed to combine the extremely high density of the memristive devices with the robustness of CMOS technology, leading to terabit-scale memory and an extremely efficient computing paradigm. In this work, we demonstrate a hybrid 3D CMOL circuit with 2 layers of memristive crossbars monolithically integrated on a pre-fabricated CMOS substrate. The integrated crossbars can be fully operated through the underlying CMOS circuitry. The memristive devices in both layers exhibit analog switching behavior with controlled tunability and stable multi-level operation. We perform dot-product operations with the 2D and 3D memristive crossbars to demonstrate the applicability of such 3D CMOL hybrid circuits as a multiply-add engine. To the best of our knowledge, this is the first demonstration of a functional 3D CMOL hybrid circuit.
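
    For intuition, the dot-product (multiply-add) operation maps onto a crossbar through Ohm's law and Kirchhoff's current law: applying a vector of read voltages to the rows of a conductance matrix produces column currents equal to the corresponding vector-matrix product. The Python sketch below illustrates only this arithmetic identity with hypothetical conductance and voltage values; it does not model the fabricated devices or peripheral circuitry described in the paper.

        import numpy as np

        # Illustrative sketch (not the authors' hardware): a memristive crossbar
        # computes a vector-matrix product in the analog domain.  Applying input
        # voltages V to the rows of a crossbar whose device conductances form a
        # matrix G yields column currents I = G^T V, i.e. one multiply-add per
        # device, all evaluated in parallel.
        rng = np.random.default_rng(0)

        n_rows, n_cols = 8, 4                            # hypothetical crossbar size
        G = rng.uniform(1e-6, 1e-4, (n_rows, n_cols))    # device conductances (S)
        V = rng.uniform(0.0, 0.2, n_rows)                # read voltages (V)

        I = G.T @ V                                      # column currents = dot products
        print("column currents (A):", I)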

  10. A multiply-add engine with monolithically integrated 3D memristor crossbar/CMOS hybrid circuit

    PubMed Central

    Chakrabarti, B.; Lastras-Montaño, M. A.; Adam, G.; Prezioso, M.; Hoskins, B.; Cheng, K.-T.; Strukov, D. B.

    2017-01-01

    Silicon (Si) based complementary metal-oxide semiconductor (CMOS) technology has been the driving force of the information-technology revolution. However, scaling of CMOS technology as per Moore’s law has reached a serious bottleneck. Among the emerging technologies, memristive devices are promising for both memory and computing applications. Hybrid CMOS/memristor circuits with CMOL (CMOS + “Molecular”) architecture have been proposed to combine the extremely high density of the memristive devices with the robustness of CMOS technology, leading to terabit-scale memory and an extremely efficient computing paradigm. In this work, we demonstrate a hybrid 3D CMOL circuit with 2 layers of memristive crossbars monolithically integrated on a pre-fabricated CMOS substrate. The integrated crossbars can be fully operated through the underlying CMOS circuitry. The memristive devices in both layers exhibit analog switching behavior with controlled tunability and stable multi-level operation. We perform dot-product operations with the 2D and 3D memristive crossbars to demonstrate the applicability of such 3D CMOL hybrid circuits as a multiply-add engine. To the best of our knowledge, this is the first demonstration of a functional 3D CMOL hybrid circuit. PMID:28195239

  11. High-Fidelity Piezoelectric Audio Device

    NASA Technical Reports Server (NTRS)

    Woodward, Stanley E.; Fox, Robert L.; Bryant, Robert G.

    2003-01-01

    ModalMax is a very innovative means of harnessing the vibration of a piezoelectric actuator to produce an energy-efficient, low-profile device with high-bandwidth, high-fidelity audio response. The piezoelectric audio device outperforms many commercially available speakers made using speaker cones. The piezoelectric device weighs substantially less (4 g) than the speaker cones which use magnets (10 g). ModalMax devices have extreme fabrication simplicity. The entire audio device is fabricated by lamination. The simplicity of the design lends itself to lower cost. The piezoelectric audio device can be used without its acoustic chambers, resulting in a very low thickness of 0.023 in. (0.58 mm). The piezoelectric audio device can be completely encapsulated, which makes it very attractive for use in wet environments. Encapsulation does not significantly alter the audio response. Its small size makes it applicable to many consumer electronic products, such as pagers, portable radios, headphones, laptop computers, computer monitors, toys, and electronic games. The audio device can also be used in automobile or aircraft sound systems.

  12. Dilepton production from the quark-gluon plasma using (3 +1 )-dimensional anisotropic dissipative hydrodynamics

    NASA Astrophysics Data System (ADS)

    Ryblewski, Radoslaw; Strickland, Michael

    2015-07-01

    We compute dilepton production from the deconfined phase of the quark-gluon plasma using leading-order (3 +1 )-dimensional anisotropic hydrodynamics. The anisotropic hydrodynamics equations employed describe the full spatiotemporal evolution of the transverse temperature, spheroidal momentum-space anisotropy parameter, and the associated three-dimensional collective flow of the matter. The momentum-space anisotropy is also taken into account in the computation of the dilepton production rate, allowing for a self-consistent description of dilepton production from the quark-gluon plasma. For our final results, we present predictions for high-energy dilepton yields as a function of invariant mass, transverse momentum, and pair rapidity. We demonstrate that high-energy dilepton production is extremely sensitive to the assumed level of initial momentum-space anisotropy of the quark-gluon plasma. As a result, it may be possible to experimentally constrain the early-time momentum-space anisotropy of the quark-gluon plasma generated in relativistic heavy-ion collisions using high-energy dilepton yields.

  13. Improving the performance of extreme learning machine for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Li, Jiaojiao; Du, Qian; Li, Wei; Li, Yunsong

    2015-05-01

    Extreme learning machine (ELM) and kernel ELM (KELM) can offer performance comparable to that of the standard powerful classifier, the support vector machine (SVM), but with much lower computational cost due to an extremely simple training step. However, their performance may be sensitive to several parameters, such as the number of hidden neurons. An empirical linear relationship between the number of training samples and the number of hidden neurons is proposed. Such a relationship can be easily estimated with two small training sets and extended to large training sets so as to greatly reduce computational cost. Other parameters, such as the steepness parameter in the sigmoidal activation function and the regularization parameter in the KELM, are also investigated. The experimental results show that classification performance is sensitive to these parameters; fortunately, simple selections result in satisfactory performance.
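
    The sketch below illustrates the general idea of such a sample-to-neuron relationship in Python: a bare-bones ELM (random sigmoid hidden layer with a least-squares readout) is tuned on two small training subsets, a line is fitted through the two best hidden-neuron counts, and the line is extrapolated to a larger training set. All data, candidate values, and function names are hypothetical; this is not the authors' code or their exact estimation procedure.

        import numpy as np

        rng = np.random.default_rng(1)

        def elm_accuracy(Xtr, ytr, Xte, yte, n_hidden):
            """Train a minimal ELM (random sigmoid features + least-squares readout)."""
            d = Xtr.shape[1]
            W = rng.normal(size=(d, n_hidden))                   # random input weights
            b = rng.normal(size=n_hidden)                        # random biases
            H = lambda X: 1.0 / (1.0 + np.exp(-(X @ W + b)))     # hidden activations
            T = np.eye(ytr.max() + 1)[ytr]                       # one-hot targets
            beta, *_ = np.linalg.lstsq(H(Xtr), T, rcond=None)    # output weights
            return np.mean(np.argmax(H(Xte) @ beta, axis=1) == yte)

        def best_hidden(X, y, candidates):
            """Pick the hidden-neuron count with the best hold-out accuracy."""
            idx = rng.permutation(len(y))
            tr, te = idx[: len(y) // 2], idx[len(y) // 2:]
            scores = [elm_accuracy(X[tr], y[tr], X[te], y[te], L) for L in candidates]
            return candidates[int(np.argmax(scores))]

        # Hypothetical data standing in for hyperspectral pixels (3 classes).
        X = rng.normal(size=(2000, 30))
        y = rng.integers(0, 3, size=2000)

        candidates = [20, 40, 80, 160, 320]
        n1, n2 = 200, 400                         # two small training sets
        L1 = best_hidden(X[:n1], y[:n1], candidates)
        L2 = best_hidden(X[:n2], y[:n2], candidates)

        # Fit L = a*n + b through the two points and extrapolate to a large set
        # (a real application would clip/round the result to a sensible grid).
        a = (L2 - L1) / (n2 - n1)
        b = L1 - a * n1
        n_large = 2000
        print("suggested hidden neurons for n =", n_large, ":", int(a * n_large + b))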

  14. A Parallel Numerical Algorithm To Solve Linear Systems Of Equations Emerging From 3D Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Wichert, Viktoria; Arkenberg, Mario; Hauschildt, Peter H.

    2016-10-01

    Highly resolved state-of-the-art 3D atmosphere simulations will remain computationally extremely expensive for years to come. In addition to the need for more computing power, rethinking coding practices is necessary. We take a dual approach by introducing especially adapted, parallel numerical methods and correspondingly parallelizing critical code passages. In the following, we present our respective work on PHOENIX/3D. With new parallel numerical algorithms, there is a big opportunity for improvement when iteratively solving the system of equations emerging from the operator splitting of the radiative transfer equation J = ΛS. The narrow-banded approximate Λ-operator Λ*, which is used in PHOENIX/3D, occurs in each iteration step. By implementing a numerical algorithm which takes advantage of its characteristic traits, the parallel code's efficiency is further increased and a speed-up in computational time can be achieved.
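
    For context, a common way such an approximate operator enters the iteration is the operator-split (accelerated Lambda) form written generically below; the precise scheme and notation used in PHOENIX/3D may differ. Because Λ* is narrow-banded, the implicit term leads to a cheap, banded linear system at each step, which is the kind of system targeted by the parallel algorithm.

        % Generic accelerated Lambda iteration for J = \Lambda S with an
        % approximate, narrow-banded operator \Lambda^* (assumed standard form,
        % not necessarily the exact PHOENIX/3D discretization):
        J^{(n+1)} \;=\; \Lambda^{*}\!\left[S^{(n+1)}\right]
                    \;+\; \left(\Lambda - \Lambda^{*}\right)\!\left[S^{(n)}\right]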

  15. A hybrid solution using computational prediction and measured data to accurately determine process corrections with reduced overlay sampling

    NASA Astrophysics Data System (ADS)

    Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen

    2017-03-01

    Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter number is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for the throughput of said system, as new lots have to wait until the previous lot is measured. One solution is to use a less dense overlay sampling scheme and computationally up-sample the data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper discusses a hybrid system that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.

  16. A Low Complexity System Based on Multiple Weighted Decision Trees for Indoor Localization

    PubMed Central

    Sánchez-Rodríguez, David; Hernández-Morera, Pablo; Quinteiro, José Ma.; Alonso-González, Itziar

    2015-01-01

    Indoor position estimation has become an attractive research topic due to growing interest in location-aware services. Nevertheless, satisfactory solutions that consider both accuracy and system complexity have not been found. From the perspective of lightweight mobile devices, these are extremely important characteristics, because both the processor power and the energy availability are limited. Hence, an indoor localization system with high computational complexity can cause complete battery drain within a few hours. In our research, we use a data mining technique named boosting to develop a localization system based on multiple weighted decision trees to predict the device location, since it offers high accuracy and low computational complexity. The localization system is built using a dataset from sensor fusion, which combines the strength of radio signals from different wireless local area network access points and device orientation information from a digital compass built into the mobile device, so that extra sensors are unnecessary. Experimental results indicate that the proposed system leads to substantial improvements in computational complexity over the widely-used traditional fingerprinting methods, and achieves better accuracy. PMID:26110413
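
    A minimal sketch of the underlying technique (boosting shallow decision trees over fingerprint features and predicting a location class by weighted vote) is given below, using scikit-learn's AdaBoostClassifier on synthetic RSSI-plus-compass data. All sizes, features, and labels are hypothetical; this generic ensemble stands in for, rather than reproduces, the authors' system.

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier

        rng = np.random.default_rng(2)

        # Hypothetical fingerprint data: RSSI from 6 access points plus a compass
        # heading (degrees), labelled with one of 5 indoor zones.
        n, n_ap, n_zones = 500, 6, 5
        rssi = rng.uniform(-90, -30, size=(n, n_ap))
        heading = rng.uniform(0, 360, size=(n, 1))
        X = np.hstack([rssi, heading])
        y = rng.integers(0, n_zones, size=n)

        # Boosted ensemble of shallow trees: each tree gets a weight, and the
        # prediction is a weighted vote, which is cheap to evaluate on a phone.
        clf = AdaBoostClassifier(n_estimators=25, random_state=0).fit(X, y)

        print("predicted zones:", clf.predict(X[:3]))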

  17. Rapid Automated Aircraft Simulation Model Updating from Flight Data

    NASA Technical Reports Server (NTRS)

    Brian, Geoff; Morelli, Eugene A.

    2011-01-01

    Techniques to identify aircraft aerodynamic characteristics from flight measurements and compute corrections to an existing simulation model of a research aircraft were investigated. The purpose of the research was to develop a process enabling rapid automated updating of aircraft simulation models using flight data and apply this capability to all flight regimes, including flight envelope extremes. The process presented has the potential to improve the efficiency of envelope expansion flight testing, revision of control system properties, and the development of high-fidelity simulators for pilot training.

  18. Towards timelike singularity via AdS dual

    NASA Astrophysics Data System (ADS)

    Bhowmick, Samrat; Chatterjee, Soumyabrata

    2017-07-01

    It is well known that Kasner geometry with spacelike singularity can be extended to bulk AdS-like geometry, furthermore, one can study field theory on this Kasner space via its gravity dual. In this paper, we show that there exists a Kasner-like geometry with timelike singularity for which one can construct a dual gravity description. We then study various extremal surfaces including spacelike geodesics in the dual gravity description. Finally, we compute correlators of highly massive operators in the boundary field theory with a geodesic approximation.
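
    For reference, the standard (spacelike-singularity) Kasner form and the geodesic approximation invoked for correlators of heavy, highly massive operators can be written as below; these are textbook expressions quoted only for orientation, not the specific timelike-singularity metric or regularization constructed in the paper.

        % Kasner metric and exponent constraints (standard vacuum form):
        ds^{2} = -dt^{2} + \sum_{i=1}^{3} t^{2p_{i}}\, dx_{i}^{2},
        \qquad \sum_{i} p_{i} = \sum_{i} p_{i}^{2} = 1 .
        % Geodesic approximation for boundary operators dual to a bulk field of
        % large mass m, with L_{\mathrm{geo}} the regularized length of the bulk
        % spacelike geodesic joining the insertion points:
        \langle \mathcal{O}(x)\, \mathcal{O}(x') \rangle
        \;\sim\; e^{-m\, L_{\mathrm{geo}}(x,\,x')} .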

  19. Analysis of Thickness and Quality factor of a Double Paddle Oscillator at Room Temperature.

    PubMed

    Shakeel, Hamza; Metcalf, Thomas H; Pomeroy, J M

    2016-01-01

    In this paper, we evaluate the quality (Q) factor and the resonance frequency of a double paddle oscillator (DPO) with different thickness using analytical, computational and experimental methods. The study is carried out for the 2nd anti-symmetric resonance mode that provides extremely high experimental Q factors on the order of 10^5. The results show that both the Q factor and the resonance frequency of a DPO increase with the thickness at room temperature.

  20. Are X-rays the key to integrated computational materials engineering?

    DOE PAGES

    Ice, Gene E.

    2015-11-01

    The ultimate dream of materials science is to predict materials behavior from composition and processing history. Owing to the growing power of computers, this long-time dream has recently found expression through worldwide excitement in a number of computation-based thrusts: integrated computational materials engineering, materials by design, computational materials design, three-dimensional materials physics and mesoscale physics. However, real materials have important crystallographic structures at multiple length scales, which evolve during processing and in service. Moreover, real materials properties can depend on the extreme tails in their structural and chemical distributions. This makes it critical to map structural distributions with sufficient resolution to resolve small structures and with sufficient statistics to capture the tails of distributions. For two-dimensional materials, there are high-resolution nondestructive probes of surface and near-surface structures with atomic or near-atomic resolution that can provide detailed structural, chemical and functional distributions over important length scales. However, there are no nondestructive three-dimensional probes with atomic resolution over the multiple length scales needed to understand most materials.

  1. Use of a Computer Program for Advance Care Planning with African American Participants.

    PubMed

    Markham, Sarah A; Levi, Benjamin H; Green, Michael J; Schubart, Jane R

    2015-02-01

    The authors wish to acknowledge the support and assistance of Dr. William Lawrence for his contribution to the M.A.UT model used in the decision aid, Making Your Wishes Known: Planning Your Medical Future (MYWK), Dr. Cheryl Dellasega for her leadership in focus group activities, Charles Sabatino for his review of legal aspects of MYWK, Dr. Robert Pearlman and his collaborative team for use of the advance care planning booklet "Your Life, Your Choices," Megan Whitehead for assistance in grant preparation and project organization, and the Instructional Media Development Center at the University of Wisconsin as well as JPL Integrated Communications for production and programming of MYWK. For various cultural and historical reasons, African Americans are less likely than Caucasians to engage in advance care planning (ACP) for healthcare decisions. This pilot study tested whether an interactive computer program could help overcome barriers to effective ACP among African Americans. African American adults were recruited from traditionally Black churches to complete an interactive computer program on ACP, pre-/post-questionnaires, and a follow-up phone interview. Eighteen adults (mean age =53.2 years, 83% female) completed the program without any problems. Knowledge about ACP significantly increased following the computer intervention (44.9% → 61.3%, p=0.0004), as did individuals' sense of self-determination. Participants were highly satisfied with the ACP process (9.4; 1 = not at all satisfied, 10 = extremely satisfied), and reported that the computer-generated advance directive accurately reflected their wishes (6.4; 1 = not at all accurate, 7 = extremely accurate). Follow-up phone interviews found that >80% of participants reported having shared their advance directives with family members and spokespeople. Preliminary evidence suggests that an interactive computer program can help African Americans engage in effective advance care planning, including creating an accurate advance directive document that will be shared with loved ones. © 2015 National Medical Association. Published by Elsevier Inc. All rights reserved.

  2. Development of a SaaS application probe to the physical properties of the Earth's interior: An attempt at moving HPC to the cloud

    NASA Astrophysics Data System (ADS)

    Huang, Qian

    2014-09-01

    Scientific computing often requires the availability of a massive number of computers for performing large-scale simulations, and computing in mineral physics is no exception. In order to investigate physical properties of minerals at extreme conditions in computational mineral physics, parallel computing technology is used to speed up the performance by utilizing multiple computer resources to process a computational task simultaneously, thereby greatly reducing computation time. Traditionally, parallel computing has been addressed by using High Performance Computing (HPC) solutions and installed facilities such as clusters and supercomputers. Today, there has been tremendous growth in cloud computing. Infrastructure as a Service (IaaS), the on-demand and pay-as-you-go model, creates a flexible and cost-effective means of accessing computing resources. In this paper, a feasibility report of HPC on a cloud infrastructure is presented. It is found that current cloud services in the IaaS layer still need to improve performance to be useful to research projects. On the other hand, Software as a Service (SaaS), another type of cloud computing, is introduced into an HPC system for computing in mineral physics, and an application of it is developed. In this paper, an overall description of this SaaS application is presented. This contribution can promote cloud application development in computational mineral physics, and cross-disciplinary studies.

  3. An Example of Economic Value in Rapid Prototyping

    NASA Technical Reports Server (NTRS)

    Hauer, R. L.; Braunscheidel, E. P.

    2001-01-01

    Today's machining projects increasingly involve complicated and intricate structures, due in part to the ability to computer-model complex surfaces and forms. The cost of producing these forms can be extremely high, not only in dollars but also in time to complete, and changes are even more difficult to incorporate. The subject blade is an excellent example. Its complex form would have required hundreds of hours of fabrication for just a simple prototype. The procurement alone would have taken in the neighborhood of six weeks, and the actual fabrication an equal amount of time. An alternative to this process would have been a wood model. Although cheaper than a metal fabrication, it would be extremely time intensive and require in the neighborhood of a month to produce in-house.

  4. The dual-state theory of prefrontal cortex dopamine function with relevance to catechol-o-methyltransferase genotypes and schizophrenia.

    PubMed

    Durstewitz, Daniel; Seamans, Jeremy K

    2008-11-01

    There is now general consensus that at least some of the cognitive deficits in schizophrenia are related to dysfunctions in the prefrontal cortex (PFC) dopamine (DA) system. At the cellular and synaptic level, the effects of DA in PFC via D1- and D2-class receptors are highly complex, often apparently opposing, and hence difficult to understand with regard to their functional implications. Biophysically realistic computational models have provided valuable insights into how the effects of DA on PFC neurons and synaptic currents as measured in vitro link up to the neural network and cognitive levels. They suggest the existence of two discrete dynamical regimes, a D1-dominated state characterized by a high energy barrier among different network patterns that favors robust online maintenance of information and a D2-dominated state characterized by a low energy barrier that is beneficial for flexible and fast switching among representational states. These predictions are consistent with a variety of electrophysiological, neuroimaging, and behavioral results in humans and nonhuman species. Moreover, these biophysically based models predict that imbalanced D1:D2 receptor activation causing extremely low or extremely high energy barriers among activity states could lead to the emergence of cognitive, positive, and negative symptoms observed in schizophrenia. Thus, combined experimental and computational approaches hold the promise of allowing a detailed mechanistic understanding of how DA alters information processing in normal and pathological conditions, thereby potentially providing new routes for the development of pharmacological treatments for schizophrenia.

  5. Data Provenance Hybridization Supporting Extreme-Scale Scientific WorkflowApplications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elsethagen, Todd O.; Stephan, Eric G.; Raju, Bibi

    As high performance computing (HPC) infrastructures continue to grow in capability and complexity, so do the applications that they serve. HPC and distributed-area computing (DAC) (e.g. grid and cloud) users are looking increasingly toward workflow solutions to orchestrate their complex application coupling, pre- and post-processing needs. To gain insight and a more quantitative understanding of a workflow’s performance, our method includes not only the capture of traditional provenance information, but also the capture and integration of system environment metrics, helping to give context and explanation for a workflow’s execution. In this paper, we describe IPPD’s provenance management solution (ProvEn) and its hybrid data store combining both of these data provenance perspectives.

  6. Astronomers as Software Developers

    NASA Astrophysics Data System (ADS)

    Pildis, Rachel A.

    2016-01-01

    Astronomers know that their research requires writing, adapting, and documenting computer software. Furthermore, they often have to learn new computer languages and figure out how existing programs work without much documentation or guidance and with extreme time pressure. These are all skills that can lead to a software development job, but recruiters and employers probably won't know that. I will discuss all the highly useful experience that astronomers may not know that they already have, and how to explain that knowledge to others when looking for non-academic software positions. I will also talk about some of the pitfalls I have run into while interviewing for jobs and working as a developer, and encourage you to embrace the curiosity employers might have about your non-standard background.

  7. Using Discrete Event Simulation for Programming Model Exploration at Extreme-Scale: Macroscale Components for the Structural Simulation Toolkit (SST).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilke, Jeremiah J; Kenny, Joseph P.

    2015-02-01

    Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debug, run, and wait for results on an actual system, design can first iterate through a simulator. This is particularly useful when test beds cannot be used, i.e. to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the structural simulation toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics like call graphs to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.
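
    To make the core mechanism concrete, the sketch below implements a minimal discrete-event engine in Python: events sit in a priority queue ordered by virtual time, and handlers may schedule further events. This toy illustrates the general technique only; it is not SST code, and the node/network model and all timings are made up.

        import heapq

        # Minimal discrete-event core: events are (time, sequence, handler, payload)
        # tuples kept in a priority queue ordered by virtual time.
        class Simulator:
            def __init__(self):
                self.now = 0.0
                self._queue = []
                self._seq = 0              # tie-breaker for events at equal times

            def schedule(self, delay, handler, payload=None):
                heapq.heappush(self._queue, (self.now + delay, self._seq, handler, payload))
                self._seq += 1

            def run(self, until=float("inf")):
                while self._queue and self._queue[0][0] <= until:
                    self.now, _, handler, payload = heapq.heappop(self._queue)
                    handler(self, payload)

        # Toy model: a "node" sends a message every 5 time units; the "network"
        # delivers it 2 time units later.
        def send(sim, node_id):
            print(f"t={sim.now:5.1f}  node {node_id} sends")
            sim.schedule(2.0, deliver, node_id)
            sim.schedule(5.0, send, node_id)

        def deliver(sim, node_id):
            print(f"t={sim.now:5.1f}  message from node {node_id} delivered")

        sim = Simulator()
        sim.schedule(0.0, send, 0)
        sim.run(until=12.0)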

  8. Design and evaluation of a toroidal wheel for planetary rovers

    NASA Technical Reports Server (NTRS)

    Koskol, J.; Yerazunis, S. W.

    1977-01-01

    The inverted toroidal wheel concept was perceived, mathematically quantified, and experimentally verified. The wheel design has a number of important characteristics, namely; (1) the low footprint pressures required for Mars exploration (0.5 to 1.0 psi); (2) high vehicle weight to wheel weight ratios capable of exceeding 10:1; (3) extremely long cyclic endurances tending towards infinite life; and (4) simplicity of design. The concept, in combination with appropriate materials such as titanium or composites, provides a planetary roving vehicle with a very high degree of exploratory mobility, a substantial savings in weight and a high assurity of mission success. Design equations and computation procedures necessary to formulate an inverted wheel are described in detail.

  9. An efficient three-dimensional Poisson solver for SIMD high-performance-computing architectures

    NASA Technical Reports Server (NTRS)

    Cohl, H.

    1994-01-01

    We present an algorithm that solves the three-dimensional Poisson equation on a cylindrical grid. The technique uses a finite-difference scheme with operator splitting. This splitting maps the banded structure of the operator matrix into a two-dimensional set of tridiagonal matrices, which are then solved in parallel. Our algorithm couples FFT techniques with the well-known ADI (Alternating Direction Implicit) method for solving elliptic PDEs, and the implementation is extremely well suited for a massively parallel environment like the SIMD architecture of the MasPar MP-1. Due to the highly recursive nature of our problem, we believe that our method is highly efficient, as it avoids excessive interprocessor communication.
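
    The FFT-plus-tridiagonal structure can be illustrated on a simplified 2D analogue (Cartesian grid, periodic in x, homogeneous Dirichlet in y): a Fourier transform in the periodic direction decouples the problem into independent tridiagonal systems, one per mode, which is where the parallelism lives. The Python sketch below shows that structure only; it is not the paper's cylindrical 3D solver, and the function names and discretization choices are assumptions.

        import numpy as np

        def thomas(a, b, c, d):
            """Solve a tridiagonal system (a = sub-, b = main-, c = super-diagonal)."""
            n = len(b)
            cp, dp = np.empty(n, dtype=complex), np.empty(n, dtype=complex)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = np.empty(n, dtype=complex)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        def poisson_fft_tridiag(f, dx, dy):
            """Solve u_xx + u_yy = f, periodic in x, u = 0 on the y boundaries.
            f has shape (nx, ny_interior); returns u on the same interior points."""
            nx, ny = f.shape
            fhat = np.fft.fft(f, axis=0)          # decouple the periodic x direction
            lam = -(2 - 2 * np.cos(2 * np.pi * np.arange(nx) / nx)) / dx**2
            uhat = np.empty_like(fhat)
            for m in range(nx):                   # independent systems -> parallelizable
                a = np.full(ny, 1 / dy**2, dtype=complex)
                b = np.full(ny, -2 / dy**2 + lam[m], dtype=complex)
                c = np.full(ny, 1 / dy**2, dtype=complex)
                uhat[m] = thomas(a, b, c, fhat[m])
            return np.fft.ifft(uhat, axis=0).real

        # Example usage: u = poisson_fft_tridiag(f, dx, dy)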

  10. Modal analysis of the ultrahigh finesse Haroche QED cavity

    NASA Astrophysics Data System (ADS)

    Marsic, Nicolas; De Gersem, Herbert; Demésy, Guillaume; Nicolet, André; Geuzaine, Christophe

    2018-04-01

    In this paper, we study a high-order finite element approach to simulate an ultrahigh finesse Fabry–Pérot superconducting open resonator for cavity quantum electrodynamics. Because of its high quality factor, finding a numerically converged value of the damping time requires an extremely high spatial resolution. Therefore, the use of high-order simulation techniques appears appropriate. This paper considers idealized mirrors (no surface roughness and perfect geometry, just to cite a few hypotheses), and shows that under these assumptions, a damping time much higher than what is available in experimental measurements could be achieved. In addition, this work shows that both high-order discretizations of the governing equations and high-order representations of the curved geometry are mandatory for the computation of the damping time of such cavities.

  11. Wireless Internet Gateways (WINGS)

    DTIC Science & Technology

    1997-01-01

    WIRELESS INTERNET GATEWAYS (WINGS). J.J. Garcia-Luna-Aceves, Chane L. Fullmer, Ewerton Madruga, Computer Engineering Department, University of...rooftop.com. Abstract: Today’s internetwork technology has been extremely successful in linking huge numbers of computers and users. However, to date this technology has been oriented to computer interconnection in relatively stable operational environments, and thus cannot adequately support many of

  12. Children's Narrative Development through Computer Game Authoring

    ERIC Educational Resources Information Center

    Robertson, Judy; Good, Judith

    2005-01-01

    Playing computer games is an extremely popular leisure activity for children. In fact, the computer games market in the UK is now double that of the video rental market, and substantially larger than cinema box office sales, and people under the age of 18 make up 38% of these game players. Based on the popularity and strong motivational…

  13. Introductory Computer Programming Course Teaching Improvement Using Immersion Language, Extreme Programming, and Education Theories

    ERIC Educational Resources Information Center

    Velez-Rubio, Miguel

    2013-01-01

    Teaching computer programming to freshmen students in Computer Sciences and other Information Technology areas has been identified as a complex activity. Different approaches have been studied looking for the best one that could help to improve this teaching process. A proposed approach was implemented which is based on the language immersion…

  14. Programed asynchronous serial data interrogation in a two-computer system

    NASA Technical Reports Server (NTRS)

    Schneberger, N. A.

    1975-01-01

    Technique permits redundant computers, with one unit in control mode and one in MONITOR mode, to interrogate the same serial data source. Its use for program-controlled serial data transfer results in extremely simple hardware and software mechanization.

  15. Simulations of horizontal roll vortex development above lines of extreme surface heating

    Treesearch

    W.E. Heilman; J.D. Fast

    1992-01-01

    A two-dimensional, nonhydrostatic, coupled, earth/atmospheric model has been used to simulate mean and turbulent atmospheric characteristics near lines of extreme surface heating. Prognostic equations are used to solve for the horizontal and vertical wind components, potential temperature, and turbulent kinetic energy (TKE). The model computes nonhydrostatic pressure...

  16. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    NASA Astrophysics Data System (ADS)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K.; Porter, D.; O'Neill, B. J.; Nolting, C.; Edmon, P.; Donnert, J. M. F.; Jones, T. W.

    2017-02-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.

  17. CORDIC-based digital signal processing (DSP) element for adaptive signal processing

    NASA Astrophysics Data System (ADS)

    Bolstad, Gregory D.; Neeld, Kenneth B.

    1995-04-01

    The High Performance Adaptive Weight Computation (HAWC) processing element is a CORDIC based application specific DSP element that, when connected in a linear array, can perform extremely high throughput (100s of GFLOPS) matrix arithmetic operations on linear systems of equations in real time. In particular, it very efficiently performs the numerically intense computation of optimal least squares solutions for large, over-determined linear systems. Most techniques for computing solutions to these types of problems have used either a hard-wired, non-programmable systolic array approach, or more commonly, programmable DSP or microprocessor approaches. The custom logic methods can be efficient, but are generally inflexible. Approaches using multiple programmable generic DSP devices are very flexible, but suffer from poor efficiency and high computation latencies, primarily due to the large number of DSP devices that must be utilized to achieve the necessary arithmetic throughput. The HAWC processor is implemented as a highly optimized systolic array, yet retains some of the flexibility of a programmable data-flow system, allowing efficient implementation of algorithm variations. This provides flexible matrix processing capabilities that are one to three orders of magnitude less expensive and more dense than the current state of the art, and more importantly, allows a realizable solution to matrix processing problems that were previously considered impractical to physically implement. HAWC has direct applications in RADAR, SONAR, communications, and image processing, as well as in many other types of systems.
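
    The arithmetic primitive behind such elements is the CORDIC iteration: rotations by arctangent angles implemented with shifts and adds. The plain-Python sketch below (floating point, rotation mode, computing sine and cosine) is a generic textbook illustration of that kernel only, not the HAWC design, which targets least-squares matrix computations in a systolic array.

        import math

        def cordic_sin_cos(theta, n_iters=32):
            """Rotation-mode CORDIC: return (cos(theta), sin(theta)) for |theta| <= pi/2
            using only additions and scalings by powers of two (plus an arctan table)."""
            angles = [math.atan(2.0 ** -i) for i in range(n_iters)]     # arctan table
            K = 1.0
            for i in range(n_iters):
                K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))             # rotation gain
            x, y, z = K, 0.0, theta                                     # pre-scale by K
            for i in range(n_iters):
                d = 1.0 if z >= 0 else -1.0
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                z -= d * angles[i]
            return x, y                                 # (cos(theta), sin(theta))

        print(cordic_sin_cos(math.pi / 6))              # approx (0.8660, 0.5000)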

  18. Integration of tools for the Design and Assessment of High-Performance, Highly Reliable Computing Systems (DAHPHRS), phase 1

    NASA Technical Reports Server (NTRS)

    Scheper, C.; Baker, R.; Frank, G.; Yalamanchili, S.; Gray, G.

    1992-01-01

    Systems for Space Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures using candidate SDI weapons-to-target assignment algorithms as workloads were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed and capabilities that will be required for both individual tools and an integrated toolset were identified.

  19. dV/dt - Accelerating the Rate of Progress towards Extreme Scale Collaborative Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livny, Miron

    This report introduces publications that report the results of a project that aimed to design a computational framework enabling computational experimentation at scale while supporting the model of “submit locally, compute globally”. The project focused on estimating application resource needs, finding the appropriate computing resources, acquiring those resources, deploying the applications and data on the resources, and managing applications and resources during the run.

  20. Computed tomographic venography for varicose veins of the lower extremities: prospective comparison of 80-kVp and conventional 120-kVp protocols.

    PubMed

    Cho, Eun-Suk; Kim, Joo Hee; Kim, Sungjun; Yu, Jeong-Sik; Chung, Jae-Joon; Yoon, Choon-Sik; Lee, Hyeon-Kyeong; Lee, Kyung Hee

    2012-01-01

    To prospectively investigate the feasibility of an 80-kilovolt (peak) (kVp) protocol in computed tomographic venography for varicose veins of the lower extremities by comparison with conventional 120-kVp protocol. Attenuation values and signal-to-noise ratio of iodine contrast medium (CM) were determined in a water phantom for 2 tube voltages (80 kVp and 120 kVp). Among 100 patients, 50 patients were scanned with 120 kVp and 150 effective milliampere second (mAs(eff)), and the other 50 patients were scanned with 80 kVp and 390 mAs(eff) after the administration of 1.7-mL/kg CM (370 mg of iodine per milliliter). The 2 groups were compared for venous attenuation, contrast-to-noise ratio, and subjective degree of venous enhancement, image noise, and overall diagnostic image quality. In the phantom, the attenuation value and signal-to-noise ratio value for iodine CM at 80 kVp were 63.8% and 33.0% higher, respectively, than those obtained at 120 kVp. The mean attenuation of the measured veins of the lower extremities was 148.3 Hounsfield units (HU) for the 80-kVp protocol and 94.8 HU for the 120-kVp protocol. Contrast-to-noise ratio was also significantly higher with the 80-kVp protocol. The overall diagnostic image quality of the 3-dimensional volume-rendered images was good with both protocols. The subjective score for venous enhancement was higher at the 80-kVp protocol. The mean volume computed tomography dose index of the 80-kVp (5.6 mGy) protocol was 23.3% lower than that of the 120-kVp (7.3 mGy) protocol. The use of the 80-kVp protocol improved overall venous attenuation, especially in perforating vein, and provided similarly high diagnostic image quality with a lower radiation dose when compared to the conventional 120-kVp protocol.

  1. Recent inner ear specialization for high-speed hunting in cheetahs.

    PubMed

    Grohé, Camille; Lee, Beatrice; Flynn, John J

    2018-02-02

    The cheetah, Acinonyx jubatus, is the fastest living land mammal. Because of its specialized hunting strategy, this species evolved a series of specialized morphological and functional body features to increase its exceptional predatory performance during high-speed hunting. Using high-resolution X-ray computed micro-tomography (μCT), we provide the first analyses of the size and shape of the vestibular system of the inner ear in cats, an organ essential for maintaining body balance and adapting head posture and gaze direction during movement in most vertebrates. We demonstrate that the vestibular system of modern cheetahs is extremely different in shape and proportions relative to other cats analysed (12 modern and two fossil felid species), including a closely-related fossil cheetah species. These distinctive attributes (i.e., one of the greatest volumes of the vestibular system, dorsal extension of the anterior and posterior semicircular canals) correlate with a greater afferent sensitivity of the inner ear to head motions, facilitating postural and visual stability during high-speed prey pursuit and capture. These features are not present in the fossil cheetah A. pardinensis, that went extinct about 126,000 years ago, demonstrating that the unique and highly specialized inner ear of the sole living species of cheetah likely evolved extremely recently, possibly later than the middle Pleistocene.

  2. Numerical tools to predict the environmental loads for offshore structures under extreme weather conditions

    NASA Astrophysics Data System (ADS)

    Wu, Yanling

    2018-05-01

    In this paper, extreme waves were generated with the open-source computational fluid dynamics (CFD) tools OpenFOAM and Waves2FOAM, using linear and nonlinear NewWave input, and were then used for numerical simulation of the wave impact process. Numerical tools based on first-order NewWave (with and without stretching) and second-order NewWave are investigated. Simulations to predict the force loading on the offshore platform under extreme weather conditions are implemented and compared.
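
    For orientation, the linear NewWave input is commonly written as a focused wave group whose surface elevation equals the target crest amplitude times the normalized autocorrelation of the sea-surface spectrum. The discretized form below is the standard textbook expression and is quoted as an assumption, since the paper's exact implementation (and its second-order correction) may differ.

        % Linear NewWave surface elevation focused at time t_0 (standard form):
        \eta(t) \;=\; \frac{A}{\sigma^{2}} \sum_{n} S(\omega_{n})\,\Delta\omega\,
                      \cos\!\big(\omega_{n}(t - t_{0})\big),
        \qquad \sigma^{2} \;=\; \sum_{n} S(\omega_{n})\,\Delta\omega ,
        % with A the linear crest amplitude and S(\omega) the wave spectrum.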

  3. Assessing the impact of future climate extremes on the US corn and soybean production

    NASA Astrophysics Data System (ADS)

    Jin, Z.

    2015-12-01

    Future climate change will pose big challenges to the US agricultural system; increasing heat stress and precipitation variability are two major concerns. Reliable prediction of crop production in response to increasingly frequent and severe climate extremes is a prerequisite for developing adaptive strategies for agricultural risk management. However, progress has been slow in quantifying the uncertainty of computational predictions at high spatial resolutions. Here we assessed the risks of future climate extremes to US corn and soybean production using the Agricultural Production System sIMulator (APSIM) model under different climate scenarios. To quantify the uncertainty due to conceptual representations of heat, drought and flooding stress in crop models, we proposed a new strategy of algorithm ensemble in which different methods for simulating crop responses to those extreme climatic events were incorporated into APSIM. This strategy allowed us to set aside irrelevant structural differences among existing crop models and focus only on the process of interest. Future climate inputs were derived from high-spatial-resolution (12 km × 12 km) Weather Research and Forecasting (WRF) simulations under Representative Concentration Pathways 4.5 (RCP 4.5) and 8.5 (RCP 8.5). Based on crop model simulations, we analyzed the magnitude and frequency of heat, drought and flooding stress for the 21st century. We also evaluated the water use efficiency and water deficit on regional scales if farmers were to boost their yield by applying more fertilizer. Finally, we proposed spatially explicit adaptation strategies of irrigation and fertilizing for different management zones.

  4. Risk factors for generally reduced productivity--a prospective cohort study of young adults with neck or upper-extremity musculoskeletal symptoms.

    PubMed

    Boström, Maria; Dellve, Lotta; Thomée, Sara; Hagberg, Mats

    2008-04-01

    This study prospectively assessed the importance of individual conditions and computer use during school or work and leisure time as risk factors for self-reported generally reduced productivity due to musculoskeletal complaints among young adults with musculoskeletal symptoms in the neck or upper extremities. A cohort of 2914 young adults (18-25 years, vocational school and college or university students) responded to an internet-based questionnaire concerning musculoskeletal symptoms related to individual conditions and computer use during school or work and leisure time that possibly affected general productivity. Prevalence ratios (PR) were used to assess prospective risk factors for generally reduced productivity. The selected study sample (N=1051) had reported neck or upper-extremity symptoms. At baseline, 280 of them reported reduced productivity. A follow-up of the 771 who reported no reduced productivity was carried out after 1 year. Risk factors for self-reported generally reduced productivity for those followed-up were symptoms in two or three locations or dimensions for the upper back or neck and the shoulders, arms, wrists, or hands [PR 2.30, 95% confidence interval (95% CI) 1.40-3.78], symptoms persisting longer than 90 days in the shoulders, arms, wrists, or hands (PR 2.50, 95% CI 1.12-5.58), current symptoms in the shoulders, arms, wrists, or hands (PR 1.78, 95% CI 1.10-2.90) and computer use 8-14 hours/week during leisure time (PR 2.32, 95% CI 1.20-4.47). A stronger relationship was found if three or four risk factors were present. For women, a relationship was found between generally reduced productivity and widespread and current symptoms in the upper extremities. The main risk factors for generally reduced productivity due to musculoskeletal symptoms among young adults in this study were chronic symptoms in the upper extremities and widespread symptoms in the neck and upper extremities.
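
    As a reminder of how the reported effect sizes are read, a prevalence ratio compares the proportion reporting the outcome in an exposed group with that in an unexposed group; the Python sketch below computes one with a Wald-type confidence interval on the log scale. The counts are hypothetical and the simple formula is a generic illustration, not the estimation procedure used in this study.

        import math

        def prevalence_ratio(a, n1, c, n0, z=1.96):
            """Prevalence ratio of exposed (a events / n1) vs unexposed (c events / n0)
            with a Wald confidence interval on the log scale (hypothetical counts)."""
            pr = (a / n1) / (c / n0)
            se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)
            lo, hi = pr * math.exp(-z * se), pr * math.exp(z * se)
            return pr, (lo, hi)

        # Hypothetical example: 30/150 exposed vs 40/600 unexposed report the outcome.
        print(prevalence_ratio(30, 150, 40, 600))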

  5. Comparison of different statistical methods for estimation of extreme sea levels with wave set-up contribution

    NASA Astrophysics Data System (ADS)

    Kergadallan, Xavier; Bernardara, Pietro; Benoit, Michel; Andreewsky, Marc; Weiss, Jérôme

    2013-04-01

    Estimating the probability of occurrence of extreme sea levels is a central issue for the protection of the coast. Return periods of sea level with wave set-up contribution are estimated here at one site: Cherbourg, France, in the English Channel. The methodology follows two steps: the first is the computation of the joint probability of simultaneous wave height and still sea level; the second is the interpretation of those joint probabilities to assess a sea level for a given return period. Two different approaches were evaluated to compute the joint probability of simultaneous wave height and still sea level: the first is a multivariate extreme value distribution of logistic type, in which all components of the variables become large simultaneously; the second is a conditional approach for multivariate extreme values, in which only one component of the variables has to be large. Two different methods were applied to estimate the sea level with wave set-up contribution for a given return period: Monte-Carlo simulation, in which the estimation is more accurate but needs more calculation time, and classical ocean-engineering design contours of inverse-FORM type, which is simpler and allows more complex estimation of the wave set-up part (wave propagation to the coast, for example). We compare results from the two different approaches with the two different methods. To be able to use both the Monte-Carlo simulation and design contour methods, the wave set-up is estimated with a simple empirical formula. We show the advantages of the conditional approach compared to the multivariate extreme value approach when extreme sea levels occur when either the surge or the wave height is large. We discuss the validity of the ocean-engineering design contour method, which is an alternative when the computation of sea levels is too complex to use the Monte-Carlo simulation method.
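
    For a concrete, univariate illustration of the return-period idea (deliberately simpler than the bivariate wave/sea-level analysis above), the sketch below fits a generalized extreme value distribution to hypothetical annual-maximum sea levels with SciPy and reads off return levels; the data are synthetic stand-ins, not observations from Cherbourg.

        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(3)

        # Hypothetical annual-maximum sea levels (m), standing in for observations.
        annual_max = 4.0 + 0.3 * rng.gumbel(size=60)

        # Fit a generalized extreme value distribution and read off return levels:
        # the T-year return level is the quantile exceeded with probability 1/T per year.
        c, loc, scale = genextreme.fit(annual_max)
        for T in (10, 100, 1000):
            level = genextreme.isf(1.0 / T, c, loc=loc, scale=scale)
            print(f"{T:5d}-year return level: {level:.2f} m")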

  6. On exact correlation functions of chiral ring operators in 2 d N=(2, 2) SCFTs via localization

    NASA Astrophysics Data System (ADS)

    Chen, Jin

    2018-03-01

    We study the extremal correlation functions of (twisted) chiral ring operators via superlocalization in N=(2, 2) superconformal field theories (SCFTs) with central charge c ≥ 3, especially for SCFTs with Calabi-Yau geometric phases. We extend the method in arXiv: 1602.05971 with mild modifications, so that it is applicable to disentangle operators mixing on S 2 in nilpotent (twisted) chiral rings of 2 d SCFTs. With the extended algorithm and technique of localization, we compute exactly the extremal correlators in 2 d N=(2, 2) (twisted) chiral rings as non-holomorphic functions of marginal parameters of the theories. Especially in the context of Calabi-Yau geometries, we give an explicit geometric interpretation to our algorithm as the Griffiths transversality with projection on the Hodge bundle over Calabi-Yau complex moduli. We also apply the method to compute extremal correlators in Kähler moduli, or say twisted chiral rings, of several interesting Calabi-Yau manifolds. In the case of complete intersections in toric varieties, we provide an alternative formalism for extremal correlators via localization onto Higgs branch. In addition, as a spinoff we find that, from the extremal correlators of the top element in twisted chiral rings, one can extract chiral correlators in A-twisted topological theories.

  7. Physics and Entrepreneurship: A Small Business Perspective

    NASA Astrophysics Data System (ADS)

    Cleveland, Jason

    2013-03-01

    DARPA's Microsystems Technology Office, MTO, conceives and develops a wide range of technologies to benefit the US warfighter, from exotic GaN transistors to high-power fiber lasers, highly efficient embedded computer systems to synthetic biology. MTO has world class electrical and mechanical engineers, but we also have a cadre of extremely capable physicists, whose complementary skillset has been absolutely essential to identifying promising technological avenues for the office and for the agency. In this talk I will explain the DARPA model of technology development, using real examples from MTO, highlighting programs where physics-based insights have led to important new capabilities for the Dept of Defense.

  8. The PyRosetta Toolkit: a graphical user interface for the Rosetta software suite.

    PubMed

    Adolf-Bryfogle, Jared; Dunbrack, Roland L

    2013-01-01

    The Rosetta Molecular Modeling suite is a command-line-only collection of applications that enable high-resolution modeling and design of proteins and other molecules. Although extremely useful, Rosetta can be difficult to learn for scientists with little computational or programming experience. To that end, we have created a Graphical User Interface (GUI) for Rosetta, called the PyRosetta Toolkit, for creating and running protocols in Rosetta for common molecular modeling and protein design tasks and for analyzing the results of Rosetta calculations. The program is highly extensible so that developers can add new protocols and analysis tools to the PyRosetta Toolkit GUI.

  9. The ELF in Your Library.

    ERIC Educational Resources Information Center

    McKimmie, Tim; Smith, Jeanette

    1994-01-01

    Presents an overview of the issues related to extremely low frequency (ELF) radiation from computer video display terminals. Highlights include electromagnetic fields; measuring ELF; computer use in libraries; possible health effects; electromagnetic radiation; litigation and legislation; standards and safety; and what libraries can do. (Contains…

  10. PGAS in-memory data processing for the Processing Unit of the Upgraded Electronics of the Tile Calorimeter of the ATLAS Detector

    NASA Astrophysics Data System (ADS)

    Ohene-Kwofie, Daniel; Otoo, Ekow

    2015-10-01

    The ATLAS detector, operated at the Large Hadron Collider (LHC), records proton-proton collisions at CERN every 50 ns, resulting in a sustained data flow up to PB/s. The upgraded Tile Calorimeter of the ATLAS experiment will sustain about 5 PB/s of digital throughput. These massive data rates require extremely fast data capture and processing. Although there has been a steady increase in the processing speed of CPU/GPGPU assembled for high performance computing, the rate of data input and output, even under parallel I/O, has not kept up with the general increase in computing speeds. The problem then is whether one can implement an I/O subsystem infrastructure capable of meeting the computational speeds of the advanced computing systems at the petascale and exascale level. We propose a system architecture that leverages the Partitioned Global Address Space (PGAS) model of computing to maintain an in-memory data-store for the Processing Unit (PU) of the upgraded electronics of the Tile Calorimeter, which is proposed to be used as a high throughput general purpose co-processor to the sROD of the upgraded Tile Calorimeter. The physical memory of the PUs is aggregated into a large global logical address space using RDMA-capable interconnects such as PCI-Express to enhance data processing throughput.

  11. Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy

    NASA Astrophysics Data System (ADS)

    Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li

    2018-03-01

    In the process of dendritic growth simulation, computational efficiency and problem scale have an extremely important influence on the simulation efficiency of a three-dimensional phase-field model. Thus, seeking a high performance calculation method to improve the computational efficiency and to expand the problem scales is of great significance to research on the microstructure of the material. A high performance calculation method based on the MPI+CUDA hybrid programming model is introduced. Multi-GPU is used to implement quantitative numerical simulations of a three-dimensional phase-field model in binary alloy under the condition of multi-physical process coupling. The acceleration effect of different GPU nodes on different calculation scales is explored. On the foundation of the multi-GPU calculation model that has been introduced, two optimization schemes, non-blocking communication optimization and overlap of MPI and GPU computing optimization, are proposed. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculation results show that the use of the multi-GPU calculation model can improve the computational efficiency of the three-dimensional phase-field model obviously, which is 13 times that of a single GPU, and the problem scale has been expanded to 8193. The feasibility of the two optimization schemes is shown, and the overlap of MPI and GPU computing optimization has better performance, which is 1.7 times that of the basic multi-GPU model when 21 GPUs are used.

  12. Higher-order ice-sheet modelling accelerated by multigrid on graphics cards

    NASA Astrophysics Data System (ADS)

    Brædstrup, Christian; Egholm, David

    2013-04-01

    Higher-order ice flow modelling is a very computer-intensive process, owing primarily to the nonlinear influence of the horizontal stress coupling. When applied to simulating long-term glacial landscape evolution, ice-sheet models must consider very long time series, while both high temporal and spatial resolution is needed to resolve small effects. Higher-order and full-Stokes models have therefore seen very limited use in this field. However, recent advances in graphics card (GPU) technology for high performance computing have proven extremely efficient in accelerating many large-scale scientific computations. The general purpose GPU (GPGPU) technology is cheap, has a low power consumption and fits into a normal desktop computer. It could therefore provide a powerful tool for many glaciologists working on ice flow models. Our current research focuses on utilising the GPU as a tool in ice-sheet and glacier modelling. To this end, we have implemented the Integrated Second-Order Shallow Ice Approximation (iSOSIA) equations on the device using the finite difference method. To accelerate the computations, the GPU solver uses a non-linear Red-Black Gauss-Seidel iterator coupled with a Full Approximation Scheme (FAS) multigrid setup to further aid convergence. The GPU finite difference implementation provides the inherent parallelization that scales from hundreds to several thousands of cores on newer cards. We demonstrate the efficiency of the GPU multigrid solver using benchmark experiments.
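
    To show why red-black colouring maps so well onto a GPU, the sketch below applies red-black Gauss-Seidel sweeps to a model 2D Poisson problem in Python/NumPy: within each colour, every point depends only on neighbours of the other colour, so a whole colour can be updated at once. This is a generic smoother for a toy problem with made-up grid sizes, not the iSOSIA solver itself; inside a FAS multigrid cycle it would play the role of the smoother on each level.

        import numpy as np

        def red_black_gauss_seidel(u, f, h, sweeps=1):
            """Red-black Gauss-Seidel sweeps for -laplace(u) = f on a square grid,
            with Dirichlet values held fixed in u's border cells."""
            ny, nx = u.shape
            jj, ii = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
            for _ in range(sweeps):
                for colour in (0, 1):
                    mask = ((ii + jj) % 2 == colour)
                    mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = False
                    nb = np.zeros_like(u)
                    nb[1:-1, 1:-1] = (u[:-2, 1:-1] + u[2:, 1:-1] +
                                      u[1:-1, :-2] + u[1:-1, 2:])
                    # All points of one colour depend only on the other colour,
                    # so this whole-colour update is a valid Gauss-Seidel step.
                    u[mask] = 0.25 * (nb[mask] + h * h * f[mask])
            return u

        # Toy usage: smooth a zero initial guess on a 65x65 grid with f = 1.
        n = 65
        u = np.zeros((n, n)); f = np.ones((n, n)); h = 1.0 / (n - 1)
        u = red_black_gauss_seidel(u, f, h, sweeps=50)
        print("centre value after smoothing:", u[n // 2, n // 2])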

  13. [Multi-temporal scale analysis of impacts of extreme high temperature on net carbon uptake in subtropical coniferous plantation].

    PubMed

    Zhang, Mi; Wen, Xue Fa; Zhang, Lei Ming; Wang, Hui Min; Guo, Yi Wen; Yu, Gui Rui

    2018-02-01

    Extreme high temperature is one of the important extreme weather events that impact the forest ecosystem carbon cycle. In this study, applying CO2 flux and routine meteorological data measured during 2003-2012, we examined the impacts of extreme high temperature and extreme high temperature events on the net carbon uptake of a subtropical coniferous plantation in Qianyanzhou. Combining with wavelet analysis, we analyzed environmental controls on net carbon uptake at different temporal scales when extreme high temperature and extreme high temperature events happened. The results showed that mean daily cumulative NEE decreased by 51% in days with a daily maximum air temperature between 35 ℃ and 40 ℃, compared with days in the 30-34 ℃ range. The effects of extreme high temperature and extreme high temperature events on monthly NEE and annual NEE were related to the strength and duration of the event. In 2003, when a strong extreme high temperature event happened, the sum of monthly cumulative NEE in July and August was only -11.64 g C·m^-2·(2 month)^-1, a decrease of 90% compared with the multi-year average value, and the relative variation of annual NEE reached -6.7%. In July and August, when extreme high temperature and extreme high temperature events occurred, air temperature (Ta) and vapor pressure deficit (VPD) were the dominant controllers of the daily variation of NEE; the coherency between NEE and Ta and between NEE and VPD was 0.97 and 0.95, respectively. At 8-, 16-, and 32-day periods, Ta, VPD, soil water content at 5 cm depth (SWC), and precipitation (P) controlled NEE; the coherency between NEE and SWC and between NEE and P was higher than 0.8 at the monthly scale. The results indicated that atmospheric water deficit impacted NEE at short temporal scales when extreme high temperature and extreme high temperature events occurred, while both atmospheric water deficit and soil drought stress impacted NEE at longer temporal scales in this ecosystem.

  14. Back and upper extremity disorders among enlisted U.S. Marines: burden and individual risk factors.

    PubMed

    Huang, G D; Feuerstein, M; Arroyo, F

    2001-11-01

    Although musculoskeletal disorders of the low back and upper extremities can affect military readiness, little is known about their extent and risk factors in the U.S. Marine Corps. Using the Defense Medical Epidemiology and Defense Medical Surveillance System databases, back and upper extremity diagnostic categories were found to be among the top four sources of outpatient visits and duty limitation among enlisted Marines. Back disorders were also found to be the fifth most common cause of lost time. Subsequently, high-risk occupations were identified, age-related trends in clinic visit rates were determined, and rate ratios were computed for the top 15 low back and upper extremity diagnoses among enlisted Marines from 1997 through 1998. Occupational categories with the highest rates of musculoskeletal-related outpatient visits included image interpretation, auditing and accounting, disbursing, surveillance/target acquisition, and aircraft launch equipment. Significantly increasing linear trends in rates across age groups were found for most diagnoses. For 1998, age-specific rate ratios indicated significantly higher rates of most low back and upper extremity disorders for females; lower rank (i.e., E1-E4) was also a risk factor, but for fewer diagnoses. The findings emphasize the need to identify modifiable (e.g., work-related, individual) risk factors and to develop focused primary and secondary prevention programs for musculoskeletal disorders in the Marine Corps. Such efforts can assist in reducing associated effects, maximizing resource utilization, and enhancing operational readiness.
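
    For readers unfamiliar with the measure, the sketch below shows how an age-specific rate ratio of the kind reported above is computed from visit counts and person-years; the numbers are invented for illustration and are not the Marine Corps data.

        # Rate ratio = (rate in one group) / (rate in a reference group),
        # with rates expressed per 1,000 person-years.
        def rate(cases, person_years, per=1000.0):
            return per * cases / person_years

        female_rate = rate(cases=120, person_years=4500.0)    # hypothetical counts
        male_rate = rate(cases=310, person_years=21000.0)     # hypothetical counts
        rate_ratio = female_rate / male_rate
        print(round(female_rate, 1), round(male_rate, 1), round(rate_ratio, 2))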

  15. ASC ATDM Level 2 Milestone #5325: Asynchronous Many-Task Runtime System Analysis and Assessment for Next Generation Platforms.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Gavin Matthew; Bettencourt, Matthew Tyler; Bova, Steven W.

    2015-09-01

    This report provides in-depth information and analysis to help create a technical road map for developing next-generation programming models and runtime systems that support Advanced Simulation and Computing (ASC) workload requirements. The focus herein is on asynchronous many-task (AMT) models and runtime systems, which are of great interest in the context of exascale computing, as they hold the promise to address key issues associated with future extreme-scale computer architectures. This report includes a thorough qualitative and quantitative examination of three best-of-class AMT runtime systems: Charm++, Legion, and Uintah, all of which are in use as part of the Predictive Science Academic Alliance Program II (PSAAP-II) Centers. The studies focus on each of the runtimes' programmability, performance, and mutability. Through the experiments and analysis presented, several overarching findings emerge. From a performance perspective, AMT runtimes show tremendous potential for addressing extreme-scale challenges. Empirical studies show an AMT runtime can mitigate performance heterogeneity inherent to the machine itself and that Message Passing Interface (MPI) and AMT runtimes perform comparably under balanced conditions. From a programmability and mutability perspective, however, none of the runtimes in this study are currently ready for use in developing production-ready Sandia ASC applications. The report concludes by recommending a co-design path forward, wherein application, programming model, and runtime system developers work together to define requirements and solutions. Such a requirements-driven co-design approach benefits the community as a whole, with widespread community engagement mitigating risk for both application developers and high-performance computing runtime system developers.

  16. [Upper extremities, neck and back symptoms in office employees working at computer stations].

    PubMed

    Zejda, Jan E; Bugajska, Joanna; Kowalska, Małgorzata; Krzych, Lukasz; Mieszkowska, Marzena; Brozek, Grzegorz; Braczkowska, Bogumiła

    2009-01-01

    To obtain current data on the occurrence of work-related symptoms among office computer users in Poland, we implemented a questionnaire survey. Its goal was to assess the prevalence and intensity of symptoms of the upper extremities, neck and back in office workers who use computers on a regular basis, and to find out whether the occurrence of symptoms depends on the duration of computer use and other work-related factors. Office workers in two towns (Warszawa and Katowice), employed in large social services companies, were invited to fill in the Polish version of the Nordic Questionnaire. The questions covered work history and last-week symptoms of pain of the hand/wrist, elbow, arm, neck and upper and lower back (occurrence and intensity measured by a visual scale). Altogether 477 men and women returned completed questionnaires. Between-group symptom differences (chi-square test) were verified by multivariate analysis (GLM). The prevalence of symptoms in individual body parts was as follows: neck, 55.6%; arm, 26.9%; elbow, 13.3%; wrist/hand, 29.9%; upper back, 49.6%; and lower back, 50.1%. Multivariate analysis confirmed the effect of gender, age and years of computer use on the occurrence of symptoms. Among other determinants, forearm support explained wrist/hand pain, wrist support elbow pain, and chair adjustment arm pain. An association was also found between low back pain and chair adjustment and keyboard position. The findings revealed frequent occurrence of pain symptoms in the upper extremities and neck in office workers who use computers on a regular basis. Seating position could also contribute to the frequent occurrence of back pain in the examined population.

  17. On Shaft Data Acquisition System (OSDAS)

    NASA Technical Reports Server (NTRS)

    Pedings, Marc; DeHart, Shawn; Formby, Jason; Naumann, Charles

    2012-01-01

    On Shaft Data Acquisition System (OSDAS) is a rugged, compact, multiple-channel data acquisition computer system designed to record data from instrumentation while operating under extreme rotational centrifugal or gravitational acceleration forces. The system, developed for the Heritage Fuel Air Turbine Test (HFATT) program, addresses the problem of recording multiple channels of high-sample-rate data on almost any rotating test article by mounting the entire acquisition computer onboard with the turbine test article. With the limited availability of slip ring wires for power and communication, OSDAS utilizes its own resources to provide independent power and amplification for each instrument. Since OSDAS utilizes standard PC technology as well as shared code interfaces with the next-generation, real-time health monitoring system SPARTAA (Scalable Parallel Architecture for Real Time Analysis and Acquisition), the system could be expanded beyond its current capabilities, such as providing advanced health monitoring for the test article. High-conductor-count slip rings are expensive to purchase and maintain, yet provide only a limited number of conductors for routing instrumentation off the article and to a stationary data acquisition system. In addition to being limited to a small number of instruments, slip rings are prone to wear quickly and introduce noise and other undesirable characteristics into the signal data. This led to the development of a system capable of recording high-density instrumentation, at high sample rates, on the test article itself, all while under extreme rotational stress. OSDAS is a fully functional PC-based system with 48 phase-synchronized, 24-bit, high-sample-rate input channels and an onboard storage capacity of over 1/2 terabyte of solid-state storage. This recording system takes a novel approach to the problem of recording multiple channels of instrumentation, integrated with the test article itself, packaged in a compact, rugged form factor, consuming limited power, all while rotating at high turbine speeds.

  18. Efficient universal blind quantum computation.

    PubMed

    Giovannetti, Vittorio; Maccone, Lorenzo; Morimae, Tomoyuki; Rudolph, Terry G

    2013-12-06

    We give a cheat-sensitive protocol for blind universal quantum computation that is efficient in terms of computational and communication resources: it allows one party to perform an arbitrary computation on a second party's quantum computer without revealing either which computation is performed, or its input and output. The first party's computational capabilities can be extremely limited: she must only be able to create and measure single-qubit superposition states. The second party is not required to use measurement-based quantum computation. The protocol requires the (optimal) exchange of O(J log2(N)) single-qubit states, where J is the computational depth and N is the number of qubits needed for the computation.

  19. Is the GUI approach to Computer Development (For Example, Mac, and Windows Technology) a Threat to Computer Users Who Are Blind?

    ERIC Educational Resources Information Center

    Melrose, S.; And Others

    1995-01-01

    In this point/counterpoint feature, S. Melrose contends that complex graphical user interfaces (GUIs) threaten the independence and equal employment of individuals with blindness. D. Wakefield then points out that access to the Windows software program for blind computer users is extremely unpredictable, and J. Gill describes a major European…

  20. Summary of Meteorological Observations, Surface (SMOS), Patuxent River, Maryland.

    DTIC Science & Technology

    1983-11-01

    annual (all months). The extremes for a month are not printed nor used in computations if one or more observations are missing. NOTE: Snow depth was... [garbled tabular extremes from the scanned record omitted] ...month, the extreme is selected and printed. These values are then used to compute means and standard deviations for the entire period. Every month of a...

  1. Towards a framework for testing general relativity with extreme-mass-ratio-inspiral observations

    NASA Astrophysics Data System (ADS)

    Chua, A. J. K.; Hee, S.; Handley, W. J.; Higson, E.; Moore, C. J.; Gair, J. R.; Hobson, M. P.; Lasenby, A. N.

    2018-07-01

    Extreme-mass-ratio-inspiral observations from future space-based gravitational-wave detectors such as LISA will enable strong-field tests of general relativity with unprecedented precision, but at prohibitive computational cost if existing statistical techniques are used. In one such test that is currently employed for LIGO black hole binary mergers, generic deviations from relativity are represented by N deformation parameters in a generalized waveform model; the Bayesian evidence for each of its 2^N combinatorial submodels is then combined into a posterior odds ratio for modified gravity over relativity in a null-hypothesis test. We adapt and apply this test to a generalized model for extreme-mass-ratio inspirals constructed on deformed black hole spacetimes, and focus our investigation on how computational efficiency can be increased through an evidence-free method of model selection. This method is akin to the algorithm known as product-space Markov chain Monte Carlo, but uses nested sampling and improved error estimates from a rethreading technique. We perform benchmarking and robustness checks for the method, and find order-of-magnitude computational gains over regular nested sampling in the case of synthetic data generated from the null model.
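
    A hedged sketch of the evidence-combination step described above: with N deformation parameters there are 2^N on/off submodels, and their (nested-sampling) evidences are combined into a posterior odds ratio against the null (relativity) model. The evidence values and the prior convention below are placeholders, not the paper's method in detail.

        # Combine per-submodel log-evidences into a log posterior odds ratio.
        import itertools
        import numpy as np

        N = 3                                                    # deformation parameters
        submodels = list(itertools.product([0, 1], repeat=N))    # 2**N on/off patterns

        rng = np.random.default_rng(1)
        # Placeholder log-evidences ln Z; (0, ..., 0) is the relativity (null) model.
        log_Z = {m: -0.5 * sum(m) + rng.normal(0, 0.1) for m in submodels}

        null = (0,) * N
        # Average evidence of the non-null submodels (uniform prior within the
        # "modified gravity" hypothesis), compared against the null evidence.
        alt_evidence = np.logaddexp.reduce([log_Z[m] for m in submodels if m != null])
        alt_evidence -= np.log(len(submodels) - 1)
        posterior_log_odds = alt_evidence - log_Z[null]
        print(posterior_log_odds)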

  2. Towards a framework for testing general relativity with extreme-mass-ratio-inspiral observations

    NASA Astrophysics Data System (ADS)

    Chua, A. J. K.; Hee, S.; Handley, W. J.; Higson, E.; Moore, C. J.; Gair, J. R.; Hobson, M. P.; Lasenby, A. N.

    2018-04-01

    Extreme-mass-ratio-inspiral observations from future space-based gravitational-wave detectors such as LISA will enable strong-field tests of general relativity with unprecedented precision, but at prohibitive computational cost if existing statistical techniques are used. In one such test that is currently employed for LIGO black-hole binary mergers, generic deviations from relativity are represented by N deformation parameters in a generalised waveform model; the Bayesian evidence for each of its 2^N combinatorial submodels is then combined into a posterior odds ratio for modified gravity over relativity in a null-hypothesis test. We adapt and apply this test to a generalised model for extreme-mass-ratio inspirals constructed on deformed black-hole spacetimes, and focus our investigation on how computational efficiency can be increased through an evidence-free method of model selection. This method is akin to the algorithm known as product-space Markov chain Monte Carlo, but uses nested sampling and improved error estimates from a rethreading technique. We perform benchmarking and robustness checks for the method, and find order-of-magnitude computational gains over regular nested sampling in the case of synthetic data generated from the null model.

  3. Topological data analyses and machine learning for detection, classification and characterization of atmospheric rivers

    NASA Astrophysics Data System (ADS)

    Muszynski, G.; Kashinath, K.; Wehner, M. F.; Prabhat, M.; Kurlin, V.

    2017-12-01

    We investigate novel approaches to detecting, classifying and characterizing extreme weather events, such as atmospheric rivers (ARs), in large high-dimensional climate datasets. ARs are narrow filaments of concentrated water vapour in the atmosphere that bring much of the precipitation in many mid-latitude regions. The precipitation associated with ARs is also responsible for major flooding events in many coastal regions of the world, including the west coast of the United States and western Europe. In this study we combine ideas from Topological Data Analysis (TDA) with Machine Learning (ML) for detecting, classifying and characterizing extreme weather events, like ARs. TDA is a new field that sits at the interface between topology and computer science, that studies "shape" - hidden topological structure - in raw data. It has been applied successfully in many areas of applied sciences, including complex networks, signal processing and image recognition. Using TDA we provide ARs with a shape characteristic as a new feature descriptor for the task of AR classification. In particular, we track the change in topology in precipitable water (integrated water vapour) fields using the Union-Find algorithm. We use the generated feature descriptors with ML classifiers to establish reliability and classification performance of our approach. We utilize the parallel toolkit for extreme climate events analysis (TECA: Petascale Pattern Recognition for Climate Science, Prabhat et al., Computer Analysis of Images and Patterns, 2015) for comparison (it is assumed that events identified by TECA are ground truth). Preliminary results indicate that our approach brings new insight into the study of ARs and provides quantitative information about the relevance of topological feature descriptors in analyses of large climate datasets. We illustrate this method on climate model output and NCEP reanalysis datasets. Further, our method outperforms existing methods on detection and classification of ARs. This work illustrates that TDA combined with ML may provide a uniquely powerful approach for detection, classification and characterization of extreme weather phenomena.
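
    The component-tracking step rests on a standard union-find (disjoint-set) structure; a minimal version is sketched below, applied to a toy thresholded field as an assumption-laden stand-in for the precipitable-water data.

        # Minimal union-find with path halving and union by rank.
        class UnionFind:
            def __init__(self, n):
                self.parent = list(range(n))
                self.rank = [0] * n

            def find(self, x):
                while self.parent[x] != x:
                    self.parent[x] = self.parent[self.parent[x]]   # path halving
                    x = self.parent[x]
                return x

            def union(self, a, b):
                ra, rb = self.find(a), self.find(b)
                if ra == rb:
                    return
                if self.rank[ra] < self.rank[rb]:
                    ra, rb = rb, ra
                self.parent[rb] = ra
                if self.rank[ra] == self.rank[rb]:
                    self.rank[ra] += 1

        # Example: merge grid cells above a threshold with their right/down
        # neighbours and count the resulting connected components.
        import numpy as np
        field = np.random.default_rng(0).random((20, 20))   # toy "water vapour" field
        mask = field > 0.7
        uf = UnionFind(field.size)
        idx = lambda i, j: i * field.shape[1] + j
        for i in range(field.shape[0]):
            for j in range(field.shape[1]):
                if not mask[i, j]:
                    continue
                if i + 1 < field.shape[0] and mask[i + 1, j]:
                    uf.union(idx(i, j), idx(i + 1, j))
                if j + 1 < field.shape[1] and mask[i, j + 1]:
                    uf.union(idx(i, j), idx(i, j + 1))
        components = {uf.find(idx(i, j))
                      for i in range(field.shape[0])
                      for j in range(field.shape[1]) if mask[i, j]}
        print(len(components))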

  4. Extreme Physics

    NASA Astrophysics Data System (ADS)

    Colvin, Jeff; Larsen, Jon

    2013-11-01

    Acknowledgements; 1. Extreme environments: what, where, how; 2. Properties of dense and classical plasmas; 3. Laser energy absorption in matter; 4. Hydrodynamic motion; 5. Shocks; 6. Equation of state; 7. Ionization; 8. Thermal energy transport; 9. Radiation energy transport; 10. Magnetohydrodynamics; 11. Considerations for constructing radiation-hydrodynamics computer codes; 12. Numerical simulations; Appendix: units and constants, glossary of symbols; References; Bibliography; Index.

  5. Dynamic compression of dense oxide (Gd3Ga5O12) from 0.4 to 2.6 TPa: Universal Hugoniot of fluid metals

    PubMed Central

    Ozaki, N.; Nellis, W. J.; Mashimo, T.; Ramzan, M.; Ahuja, R.; Kaewmaraya, T.; Kimura, T.; Knudson, M.; Miyanishi, K.; Sakawa, Y.; Sano, T.; Kodama, R.

    2016-01-01

    Materials at high pressures and temperatures are of great current interest for warm dense matter physics, planetary sciences, and inertial fusion energy research. Shock-compression equation-of-state data and optical reflectivities of the fluid dense oxide, Gd3Ga5O12 (GGG), were measured at extremely high pressures up to 2.6 TPa (26 Mbar) generated by high-power laser irradiation and magnetically-driven hypervelocity impacts. Above 0.75 TPa, the GGG Hugoniot data approach/reach a universal linear line of fluid metals, and the optical reflectivity most likely reaches a constant value indicating that GGG undergoes a crossover from fluid semiconductor to poor metal with minimum metallic conductivity (MMC). These results suggest that most fluid compounds, e.g., strong planetary oxides, reach a common state on the universal Hugoniot of fluid metals (UHFM) with MMC at sufficiently extreme pressures and temperatures. The systematic behaviors of warm dense fluids would provide useful benchmarks for developing theoretical equation-of-state and transport models, and for constraining computational predictions, in the warm dense matter regime. PMID:27193942

  6. Dynamic compression of dense oxide (Gd3Ga5O12) from 0.4 to 2.6 TPa: Universal Hugoniot of fluid metals

    DOE PAGES

    Ozaki, N.; Nellis, W. J.; Mashimo, T.; ...

    2016-05-19

    Materials at high pressures and temperatures are of great current interest for warm dense matter physics, planetary sciences, and inertial fusion energy research. Shock-compression equation-of-state data and optical reflectivities of the fluid dense oxide, Gd3Ga5O12 (GGG), were measured at extremely high pressures up to 2.6 TPa (26 Mbar) generated by high-power laser irradiation and magnetically-driven hypervelocity impacts. Above 0.75 TPa, the GGG Hugoniot data approach/reach a universal linear line of fluid metals, and the optical reflectivity most likely reaches a constant value indicating that GGG undergoes a crossover from fluid semiconductor to poor metal with minimum metallic conductivity (MMC). These results suggest that most fluid compounds, e.g., strong planetary oxides, reach a common state on the universal Hugoniot of fluid metals (UHFM) with MMC at sufficiently extreme pressures and temperatures. Lastly, the systematic behaviors of warm dense fluids would provide useful benchmarks for developing theoretical equation-of-state and transport models, and for constraining computational predictions, in the warm dense matter regime.

  7. Dynamic compression of dense oxide (Gd3Ga5O12) from 0.4 to 2.6 TPa: Universal Hugoniot of fluid metals.

    PubMed

    Ozaki, N; Nellis, W J; Mashimo, T; Ramzan, M; Ahuja, R; Kaewmaraya, T; Kimura, T; Knudson, M; Miyanishi, K; Sakawa, Y; Sano, T; Kodama, R

    2016-05-19

    Materials at high pressures and temperatures are of great current interest for warm dense matter physics, planetary sciences, and inertial fusion energy research. Shock-compression equation-of-state data and optical reflectivities of the fluid dense oxide, Gd3Ga5O12 (GGG), were measured at extremely high pressures up to 2.6 TPa (26 Mbar) generated by high-power laser irradiation and magnetically-driven hypervelocity impacts. Above 0.75 TPa, the GGG Hugoniot data approach/reach a universal linear line of fluid metals, and the optical reflectivity most likely reaches a constant value indicating that GGG undergoes a crossover from fluid semiconductor to poor metal with minimum metallic conductivity (MMC). These results suggest that most fluid compounds, e.g., strong planetary oxides, reach a common state on the universal Hugoniot of fluid metals (UHFM) with MMC at sufficiently extreme pressures and temperatures. The systematic behaviors of warm dense fluids would provide useful benchmarks for developing theoretical equation-of-state and transport models, and for constraining computational predictions, in the warm dense matter regime.

  8. Towards a Competency Model for Teaching Computer Science

    ERIC Educational Resources Information Center

    Bender, Elena; Hubwieser, Peter; Schaper, Niclas; Margaritis, Melanie; Berges, Marc; Ohrndorf, Laura; Magenheim, Johannes; Schubert, Sigrid

    2015-01-01

    To address the special challenges of teaching computer science, adequate development of teachers' competencies during their education is extremely important. In particular, pedagogical content knowledge and teachers' beliefs and motivational orientations play an important role in effective teaching. This research field has been sparsely…

  9. Observation of gravity waves during the extreme tornado outbreak of 3 April 1974

    NASA Technical Reports Server (NTRS)

    Hung, R. J.; Phan, T.; Smith, R. E.

    1978-01-01

    A continuous wave-spectrum high-frequency radiowave Doppler sounder array was used to observe upper-atmospheric disturbances during an extreme tornado outbreak. The observations indicated that gravity waves with two harmonic wave periods were detected at the F-region ionospheric height. Using a group ray path computational technique, the observed gravity waves were traced in order to locate potential sources. The signals were apparently excited 1-3 hours before tornado touchdown. Reverse ray tracing indicated that the wave source was located at the aurora zone with a Kp index of 6 at the time of wave excitation. The summation of the 24-hour Kp index for the day was 36. The results agree with existing theories (Testud, 1970; Titheridge, 1971; Kato, 1976) for the excitation of large-scale traveling ionospheric disturbances associated with geomagnetic activity in the aurora zone.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katti, Amogh; Di Fatta, Giuseppe; Naughton III, Thomas J

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a fault tolerant failure detection and consensus algorithm. This paper presents and compares two novel failure detection and consensus algorithms. The proposed algorithms are based on Gossip protocols and are inherently fault-tolerant and scalable. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in both algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus.
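
    A toy push-gossip simulation (not the proposed algorithms themselves) illustrating why the number of gossip cycles needed to spread a failure notification grows roughly logarithmically with system size; the fanout of one peer per cycle and the single initial detector are assumptions for illustration.

        # Push-gossip dissemination: each informed process tells one random peer
        # per cycle; count cycles until every process knows about the failure.
        import math
        import random

        def gossip_cycles(n_processes, seed=0):
            random.seed(seed)
            informed = {0}                       # process 0 detects the failure
            cycles = 0
            while len(informed) < n_processes:
                cycles += 1
                for p in list(informed):
                    informed.add(random.randrange(n_processes))   # push to a random peer
            return cycles

        for n in (64, 256, 1024, 4096):
            print(n, gossip_cycles(n), round(math.log2(n), 1))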

  11. eXascale PRogramming Environment and System Software (XPRESS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, Barbara; Gabriel, Edgar

    Exascale systems, with a thousand times the compute capacity of today’s leading edge petascale computers, are expected to emerge during the next decade. Their software systems will need to facilitate the exploitation of exceptional amounts of concurrency in applications, and ensure that jobs continue to run despite the occurrence of system failures and other kinds of hard and soft errors. Adapting computations at runtime to cope with changes in the execution environment, as well as to improve power and performance characteristics, is likely to become the norm. As a result, considerable innovation is required to develop system support to meet the needs of future computing platforms. The XPRESS project aims to develop and prototype a revolutionary software system for extreme-scale computing for both exascale and strong-scaled problems. The XPRESS collaborative research project will advance the state-of-the-art in high performance computing and enable exascale computing for current and future DOE mission-critical applications and supporting systems. The goals of the XPRESS research project are to: A. enable exascale performance capability for DOE applications, both current and future, B. develop and deliver a practical computing system software X-stack, OpenX, for future practical DOE exascale computing systems, and C. provide programming methods and environments for effective means of expressing application and system software for portable exascale system execution.

  12. Adding Data Management Services to Parallel File Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandt, Scott

    2015-03-04

    The objective of this project, called DAMASC for “Data Management in Scientific Computing”, is to coalesce data management with parallel file system management to present a declarative interface to scientists for managing, querying, and analyzing extremely large data sets efficiently and predictably. Managing extremely large data sets is a key challenge of exascale computing. The overhead, energy, and cost of moving massive volumes of data demand designs where computation is close to storage. In current architectures, compute/analysis clusters access data in a physically separate parallel file system and largely leave it to the scientist to reduce data movement. Over the past decades, the high-end computing community has adopted middleware with multiple layers of abstractions and specialized file formats such as NetCDF-4 and HDF5. These abstractions provide a limited set of high-level data processing functions, but have inherent functionality and performance limitations: middleware that provides access to the highly structured contents of scientific data files stored in the (unstructured) file systems can only optimize to the extent that file system interfaces permit; the highly structured formats of these files often impede native file system performance optimizations. We are developing Damasc, an enhanced high-performance file system with native rich data management services. Damasc will enable efficient queries and updates over files stored in their native byte-stream format while retaining the inherent performance of file system data storage via declarative queries and updates over views of underlying files. Damasc has four key benefits for the development of data-intensive scientific code: (1) applications can use important data-management services, such as declarative queries, views, and provenance tracking, that are currently available only within database systems; (2) the use of these services becomes easier, as they are provided within a familiar file-based ecosystem; (3) common optimizations, e.g., indexing and caching, are readily supported across several file formats, avoiding effort duplication; and (4) performance improves significantly, as data processing is integrated more tightly with data storage. Our key contributions are: SciHadoop, which explores changes to MapReduce assumptions by taking advantage of the semantics of structured data while preserving MapReduce’s failure and resource management; DataMods, which extends common abstractions of parallel file systems so that they become programmable, can be extended to natively support a variety of data models, and can be hooked into emerging distributed runtimes such as Stanford’s Legion; and Miso, which combines Hadoop and relational data warehousing to minimize time to insight, taking into account the overhead of ingesting data into data warehousing.

  13. Reducing the Time and Cost of Testing Engines

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Producing a new aircraft engine currently costs approximately $1 billion, with 3 years of development time for a commercial engine and 10 years for a military engine. The high development time and cost make it extremely difficult to transition advanced technologies for cleaner, quieter, and more efficient new engines. To reduce this time and cost, NASA created a vision for the future where designers would use high-fidelity computer simulations early in the design process in order to resolve critical design issues before building the expensive engine hardware. To accomplish this vision, NASA's Glenn Research Center initiated a collaborative effort with the aerospace industry and academia to develop its Numerical Propulsion System Simulation (NPSS), an advanced engineering environment for the analysis and design of aerospace propulsion systems and components. Partners estimate that using NPSS has the potential to dramatically reduce the time, effort, and expense necessary to design and test jet engines by generating sophisticated computer simulations of an aerospace object or system. These simulations will permit an engineer to test various design options without having to conduct costly and time-consuming real-life tests. By accelerating and streamlining the engine system design analysis and test phases, NPSS facilitates bringing the final product to market faster. NASA's NPSS Version (V)1.X effort was a task within the Agency's Computational Aerospace Sciences project of the High Performance Computing and Communication program, which had a mission to accelerate the availability of high-performance computing hardware and software to the U.S. aerospace community for its use in design processes. The technology brings value back to NASA by improving methods of analyzing and testing space transportation components.

  14. Study on Temperature and Synthetic Compensation of Piezo-Resistive Differential Pressure Sensors by Coupled Simulated Annealing and Simplex Optimized Kernel Extreme Learning Machine

    PubMed Central

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam SM, Jahangir

    2017-01-01

    As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to modify the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems. PMID:28422080
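
    A simplified sketch of the parameter search: an RBF-kernel extreme learning machine whose regularization parameter C and kernel width gamma are tuned by Nelder-Mead on a validation error. The coupled-simulated-annealing stage and the real sensor data are omitted; the synthetic data and the closed-form KELM solution shown here are assumptions for illustration.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, (200, 2))                  # toy inputs, e.g. (temperature, static pressure)
        y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.05, 200)
        Xtr, ytr, Xva, yva = X[:150], y[:150], X[150:], y[150:]

        def rbf(A, B, gamma):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def kelm_val_error(params):
            logC, loggamma = params
            C, gamma = np.exp(logC), np.exp(loggamma)
            K = rbf(Xtr, Xtr, gamma)
            beta = np.linalg.solve(K + np.eye(len(Xtr)) / C, ytr)   # KELM output weights
            pred = rbf(Xva, Xtr, gamma) @ beta
            return np.mean((pred - yva) ** 2)

        # Nelder-Mead search over (log C, log gamma); CSA would normally supply
        # several good starting points instead of the single guess used here.
        res = minimize(kelm_val_error, x0=[0.0, 0.0], method="Nelder-Mead")
        print(res.x, res.fun)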

  15. Study on Temperature and Synthetic Compensation of Piezo-Resistive Differential Pressure Sensors by Coupled Simulated Annealing and Simplex Optimized Kernel Extreme Learning Machine.

    PubMed

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2017-04-19

    As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to modify the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems.

  16. Explicit integration with GPU acceleration for large kinetic networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brock, Benjamin; Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37830; Belt, Andrew

    2015-12-01

    We demonstrate the first implementation of recently-developed fast explicit kinetic integration algorithms on modern graphics processing unit (GPU) accelerators. Taking as a generic test case a Type Ia supernova explosion with an extremely stiff thermonuclear network having 150 isotopic species and 1604 reactions coupled to hydrodynamics using operator splitting, we demonstrate the capability to solve of order 100 realistic kinetic networks in parallel in the same time that standard implicit methods can solve a single such network on a CPU. This orders-of-magnitude decrease in computation time for solving systems of realistic kinetic networks implies that important coupled, multiphysics problems in various scientific and technical fields that were intractable, or could be simulated only with highly schematic kinetic networks, are now computationally feasible.

  17. Colonic obstruction secondary to incarcerated Spigelian hernia in a severely obese patient.

    PubMed

    Salemis, Nikolaos S; Kontoravdis, Nikolaos; Gourgiotis, Stavros; Panagiotopoulos, Nikolaos; Gakis, Christos; Dimitrakopoulos, Georgios

    2010-01-01

    Spigelian hernia is a rare hernia of the ventral abdominal wall accounting for 1-2% of all hernias. Incarceration of a Spigelian hernia has been reported in 17-24% of the cases. We herein describe an extremely rare case of a colonic obstruction secondary to an incarcerated Spigelian hernia in a severely obese patient. Physical examination was inconclusive and diagnosis was established by computed tomography scans. The patient underwent an open intraperitoneal mesh repair. A high level of suspicion and awareness is required as clinical findings of a Spigelian hernia are often nonspecific especially in obese patients. Computed tomography scan provides detailed information for the surgical planning. Open mesh repair is safe in the emergent surgical intervention of a complicated Spigelian hernia in severely obese patients.

  18. Colonic obstruction secondary to incarcerated Spigelian hernia in a severely obese patient

    PubMed Central

    Salemis, Nikolaos S.; Kontoravdis, Nikolaos; Gourgiotis, Stavros; Panagiotopoulos, Nikolaos; Gakis, Christos; Dimitrakopoulos, Georgios

    2010-01-01

    Spigelian hernia is a rare hernia of the ventral abdominal wall accounting for 1–2% of all hernias. Incarceration of a Spigelian hernia has been reported in 17–24% of the cases. We herein describe an extremely rare case of a colonic obstruction secondary to an incarcerated Spigelian hernia in a severely obese patient. Physical examination was inconclusive and diagnosis was established by computed tomography scans. The patient underwent an open intraperitoneal mesh repair. A high level of suspicion and awareness is required as clinical findings of a Spigelian hernia are often nonspecific especially in obese patients. Computed tomography scan provides detailed information for the surgical planning. Open mesh repair is safe in the emergent surgical intervention of a complicated Spigelian hernia in severely obese patients. PMID:22096670

  19. An analytical study of electric vehicle handling dynamics

    NASA Technical Reports Server (NTRS)

    Greene, J. E.; Segal, D. J.

    1979-01-01

    Hypothetical electric vehicle configurations were studied by applying available analytical methods. Elementary linearized models were used in addition to a highly sophisticated vehicle dynamics computer simulation technique. Physical properties of specific EV's were defined for various battery and powertrain packaging approaches applied to a range of weight distribution and inertial properties which characterize a generic class of EV's. Computer simulations of structured maneuvers were performed for predicting handling qualities in the normal driving range and during various extreme conditions related to accident avoidance. Results indicate that an EV with forward weight bias will possess handling qualities superior to a comparable EV that is rear-heavy or equally balanced. The importance of properly matching tires, suspension systems, and brake system front/rear torque proportioning to a given EV configuration during the design stage is demonstrated.

  20. Layered synthetic microstructures as Bragg diffractors for X rays and extreme ultraviolet - Theory and predicted performance

    NASA Technical Reports Server (NTRS)

    Underwood, J. H.; Barbee, T. W., Jr.

    1981-01-01

    The theory of X-ray diffraction by periodic structures is applied to the layered synthetic microstructures (LSMs) made possible by recent developments in thin film technology, and approximate formulas for estimating their performance are presented. A more complete computation scheme based on optical multilayer theory is also described, and it is shown that the diffracting properties may be tailored to specific applications by adjusting the refractive indices and thicknesses of the component layers. The theory may be modified to take account of imperfections in the LSM structure, and the properties of nonperiodic structures thereby computed. Structures with high integrated reflectivity constructed according to the methods defined have potential application in many areas of X-ray or EUV research and instrumentation.
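
    For orientation, the refraction-corrected Bragg condition commonly used for such multilayer mirrors can be written (quoted here in its standard textbook form, which may differ in detail from the expressions of the cited work) as

        m\lambda = 2 d \sin\theta \,\sqrt{1 - \frac{2\bar{\delta}}{\sin^{2}\theta}}

    where m is the diffraction order, \lambda the wavelength, d the multilayer period, and n = 1 - \bar{\delta} the mean (real) refractive index of the stack; for vanishing \bar{\delta} this reduces to the ordinary Bragg law m\lambda = 2 d \sin\theta.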

  1. Life-threatening emphysematous liver abscess associated with poorly controlled diabetes mellitus: a case report.

    PubMed

    Takano, Yuichi; Hayashi, Masafumi; Niiya, Fumitaka; Nakanishi, Toru; Hanamura, Shotaro; Asonuma, Kunio; Yamamura, Eiichi; Gomi, Kuniyo; Kuroki, Yuichiro; Maruoka, Naotaka; Inoue, Kazuaki; Nagahama, Masatsugu

    2017-03-06

    Emphysematous liver abscesses are defined as liver abscesses accompanied by gas formation. The fatality rate is extremely high at 27%, necessitating prompt intensive care. The patient was a 69-year-old Japanese man with type 2 diabetes. He visited the emergency outpatient department for fever and general malaise that had been ongoing for 2 weeks. Abdominal computed tomography revealed an abscess 5 cm in diameter accompanied by gas formation in the right hepatic lobe. Markedly impaired glucose tolerance was observed with a blood sugar level of 571 mg/dL and a glycated hemoglobin level of 14.6%. The patient underwent emergency percutaneous abscess drainage, and intensive care was subsequently initiated. Klebsiella pneumoniae was detected in both the abscess cavity and blood cultures. The drain was removed 3 weeks later, and the patient was discharged. Emphysematous liver abscesses are often observed in patients with poorly controlled diabetes, and the fatality rate is extremely high. Fever and malaise occasionally mask life-threatening infections in diabetic patients, necessitating careful examination.

  2. Method for high-precision multi-layered thin film deposition for deep and extreme ultraviolet mirrors

    DOEpatents

    Ruffner, J.A.

    1999-06-15

    A method for coating (flat or non-flat) optical substrates with high-reflectivity multi-layer coatings for use at Deep Ultra-Violet (DUV) and Extreme Ultra-Violet (EUV) wavelengths. The method results in a product with minimum feature sizes of less than 0.10 µm for the shortest wavelength (13.4 nm). The present invention employs a computer-based modeling and deposition method to enable lateral and vertical thickness control by scanning the position of the substrate with respect to the sputter target during deposition. The thickness profile of the sputter targets is modeled before deposition and then an appropriate scanning algorithm is implemented to produce any desired, radially-symmetric thickness profile. The present invention offers the ability to predict and achieve a wide range of thickness profiles on flat or figured substrates, i.e., account for the 1/R² factor in a model, and the ability to predict and accommodate changes in deposition rate as a result of plasma geometry, i.e., over figured substrates. 15 figs.

  3. High-resolution x-ray computed tomography to understand ruminant phylogeny

    NASA Astrophysics Data System (ADS)

    Costeur, Loic; Schulz, Georg; Müller, Bert

    2014-09-01

    High-resolution X-ray computed tomography has become a vital technique for studying fossils down to the true micrometer level. Paleontological research requires the non-destructive analysis of internal structures of fossil specimens. We show how X-ray computed tomography enables us to visualize the inner ear of extinct and extant ruminants without destroying the skull. The inner ear, a sensory organ for hearing and balance, has a rather complex three-dimensional morphology and thus provides relevant phylogenetic information, as has to date been shown essentially in primates. We visualized the inner ears of a set of living and fossil ruminants using the phoenix x-ray nanotom® m (GE Sensing and Inspection Technologies GmbH). Because of the highly absorbing objects, a tungsten target was used and the experiments were performed with a maximum accelerating voltage of 180 kV and a beam current of 30 μA. Possible stem ruminants of the living families are known from the fossil record, but extreme morphological convergences in external structures such as teeth are a strong limitation to our understanding of the evolutionary history of this economically important group of animals. We thus investigate the inner ear to assess its phylogenetic potential for ruminants, and our first results show strong family-level morphological differences.

  4. Novel Scalable 3-D MT Inverse Solver

    NASA Astrophysics Data System (ADS)

    Kuvshinov, A. V.; Kruglyakov, M.; Geraskin, A.

    2016-12-01

    We present a new, robust and fast, three-dimensional (3-D) magnetotelluric (MT) inverse solver. As a forward modelling engine, the highly scalable solver extrEMe [1] is used. The (regularized) inversion is based on an iterative gradient-type optimization (quasi-Newton method) and exploits an adjoint-sources approach for fast calculation of the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT (single-site and/or inter-site) responses, and supports massive parallelization. Different parallelization strategies implemented in the code allow for optimal usage of available computational resources for a given problem set-up. To parameterize the inverse domain a mask approach is implemented, which means that one can merge any subset of forward modelling cells in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments carried out at different platforms ranging from modern laptops to high-performance clusters demonstrate practically linear scalability of the code up to thousands of nodes. 1. Kruglyakov, M., A. Geraskin, A. Kuvshinov, 2016. Novel accurate and scalable 3-D MT forward solver based on a contracting integral equation method, Computers and Geosciences, in press.
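
    A minimal sketch of the regularized, gradient-based inversion loop, with a toy linear forward operator standing in for the 3-D MT solver and an explicit adjoint-style gradient; the operator, data, and regularization weight are illustrative assumptions, not the extrEMe-based workflow.

        # Regularized least-squares inversion: minimize 0.5*||G m - d||^2 + 0.5*lam*||m||^2
        # with an L-BFGS quasi-Newton optimizer and an adjoint-style gradient.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        G = rng.normal(size=(40, 100))                  # toy forward operator
        m_true = np.zeros(100); m_true[40:60] = 1.0     # toy conductivity anomaly
        d_obs = G @ m_true + rng.normal(0, 0.01, 40)
        lam = 1e-2                                      # regularization weight

        def misfit_and_grad(m):
            r = G @ m - d_obs
            phi = 0.5 * r @ r + 0.5 * lam * m @ m
            grad = G.T @ r + lam * m                    # adjoint of the linear forward map
            return phi, grad

        res = minimize(misfit_and_grad, np.zeros(100), jac=True, method="L-BFGS-B")
        print(res.fun, np.linalg.norm(res.x - m_true))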

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sauter, Nicholas K., E-mail: nksauter@lbl.gov; Hattne, Johan; Grosse-Kunstleve, Ralf W.

    The Computational Crystallography Toolbox (cctbx) is a flexible software platform that has been used to develop high-throughput crystal-screening tools for both synchrotron sources and X-ray free-electron lasers. Plans for data-processing and visualization applications are discussed, and the benefits and limitations of using graphics-processing units are evaluated. Current pixel-array detectors produce diffraction images at extreme data rates (of up to 2 TB h⁻¹) that make severe demands on computational resources. New multiprocessing frameworks are required to achieve rapid data analysis, as it is important to be able to inspect the data quickly in order to guide the experiment in real time. By utilizing readily available web-serving tools that interact with the Python scripting language, it was possible to implement a high-throughput Bragg-spot analyzer (cctbx.spotfinder) that is presently in use at numerous synchrotron-radiation beamlines. Similarly, Python interoperability enabled the production of a new data-reduction package (cctbx.xfel) for serial femtosecond crystallography experiments at the Linac Coherent Light Source (LCLS). Future data-reduction efforts will need to focus on specialized problems such as the treatment of diffraction spots on interleaved lattices arising from multi-crystal specimens. In these challenging cases, accurate modeling of close-lying Bragg spots could benefit from the high-performance computing capabilities of graphics-processing units.

  6. Amorphization of nanocrystalline monoclinic ZrO2 by swift heavy ion irradiation.

    PubMed

    Lu, Fengyuan; Wang, Jianwei; Lang, Maik; Toulemonde, Marcel; Namavar, Fereydoon; Trautmann, Christina; Zhang, Jiaming; Ewing, Rodney C; Lian, Jie

    2012-09-21

    Bulk ZrO2 polymorphs generally have an extremely high amorphization tolerance upon low-energy ion and swift heavy ion irradiation, in which ballistic interaction and ionization radiation dominate the ion-solid interaction, respectively. However, under very high-energy irradiation by 1.33 GeV U-238 ions, nanocrystalline (40-50 nm) monoclinic ZrO2 can be amorphized. A computational simulation based on a thermal spike model reveals that the strong ionizing radiation from swift heavy ions with a very high electronic energy loss of 52.2 keV nm⁻¹ can induce transient zones with temperatures well above the ZrO2 melting point. The extreme electronic energy loss, coupled with the high energy state of the nanostructured materials and a high thermal confinement due to the less effective heat transport within the transient hot zone, may eventually be responsible for the ionizing radiation-induced amorphization without transformation to the tetragonal polymorph. The amorphization of nanocrystalline zirconia was also confirmed by 1.69 GeV Au ion irradiation with an electronic energy loss of 40 keV nm⁻¹. These results suggest that highly radiation-tolerant materials in bulk form, such as ZrO2, may become radiation sensitive when the length scale is reduced to the nanometre regime upon irradiation above a threshold value of electronic energy loss.

  7. The Exceptionally High Life Expectancy of Costa Rican Nonagenarians

    PubMed Central

    ROSERO-BIXBY, LUIS

    2008-01-01

    Robust data from a voter registry show that Costa Rican nonagenarians have an exceptionally high life expectancy. Mortality at age 90 in Costa Rica is at least 14% lower than the average of 13 high-income countries. This advantage increases with age by 1% per year. Males have an additional 12% advantage. Age-90 life expectancy for males is 4.4 years, one-half year more than in any other country in the world. These estimates do not use problematic data on reported ages, but ages are computed from birth dates in the Costa Rican birth-registration ledgers. Census data confirm the exceptionally high survival of elderly Costa Ricans, especially males. Comparisons with the United States and Sweden show that the Costa Rican advantage comes mostly from reduced incidence of cardiovascular diseases, coupled with a low prevalence of obesity, as the only available explanatory risk factor. Costa Rican nonagenarians are survivors of cohorts that underwent extremely harsh health conditions when young, and their advantage might be just a heterogeneity in frailty effect that might disappear in more recent cohorts. The availability of reliable estimates for the oldest-old in low-income populations is extremely rare. These results may enlighten the debate over how harsh early-life health conditions affect older-age mortality. PMID:18939667

  8. Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release Version 1.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vigil, Benny Manuel; Ballance, Robert; Haskell, Karen

    Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale (ACES), a partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL). The primary Cielo compute platform is physically located at Los Alamos National Laboratory. This Cielo Computational Environment Usage Model documents the capabilities and the environment to be provided for the Q1 FY12 Level 2 Cielo Capability Computing (CCC) Platform Production Readiness Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, or Sandia National Laboratories, but also addresses the needs of users working in the unclassified environment. The Cielo Computational Environment Usage Model maps the provided capabilities to the tri-Lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment capabilities of the ASC community. A description of ACE requirements met, and those requirements that are not met, are included in each section of this document. The Cielo Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the tri-Lab community.

  9. Gas-fired duplex free-piston Stirling refrigerator

    NASA Astrophysics Data System (ADS)

    Urieli, L.

    1984-03-01

    The duplex free-piston Stirling refrigerator is a potentially high efficiency, high reliability device which is ideally suited to the home appliance field, in particular as a gas-fired refrigerator. It has significant advantages over other equivalent devices including freedom from halogenated hydrocarbons, extremely low temperatures available at a high efficiency, integrated water heating, and simple burner system control. The design and development of a portable working demonstration gas-fired duplex Stirling refrigeration unit is described. A unique combination of computer aided development and experimental development was used, enabling a continued interaction between the theoretical analysis and practical testing and evaluation. A universal test rig was developed in order to separately test and evaluate major subunits, enabling a smooth system integration phase.

  10. Suitability of holographic beam scanning in high resolution applications

    NASA Astrophysics Data System (ADS)

    Kalita, Ranjan; Goutam Buddha, S. S.; Boruah, Bosanta R.

    2018-02-01

    High-resolution applications of a laser scanning imaging system demand accurate positioning of the illumination beam. Galvanometer-scanner-based beam scanning imaging systems, on the other hand, suffer from both short-term and long-term beam instability issues. Computer-generated-holography-based beam scanning, by contrast, offers extremely accurate beam steering, which can be very useful for imaging in high-resolution applications in confocal microscopy. The holographic beam scanning can be achieved by writing a sequence of holograms onto a spatial light modulator and utilizing one of the diffracted orders as the illumination beam. This paper highlights the relative advantages of such a holographic beam scanning based confocal system and presents some preliminary experimental results.
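
    A hedged sketch of the underlying hologram computation: a blazed-grating phase pattern written to the spatial light modulator steers the first diffracted order by an amount set by the grating frequencies, so scanning reduces to updating (fx, fy) frame by frame. SLM dimensions and frequencies below are illustrative assumptions.

        # Compute a wrapped blazed-grating phase hologram for beam steering.
        import numpy as np

        def blazed_grating_phase(shape, fx, fy):
            """Phase hologram (radians, wrapped to [0, 2*pi)) steering the first order."""
            ny, nx = shape
            y, x = np.mgrid[0:ny, 0:nx]
            return np.mod(2 * np.pi * (fx * x + fy * y), 2 * np.pi)

        phase = blazed_grating_phase((1080, 1920), fx=0.05, fy=0.02)   # cycles per pixel
        # Changing (fx, fy) frame-to-frame scans the illumination spot without
        # any moving parts, which underlies the steering accuracy discussed above.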

  11. Dynamic power scheduling system for JPEG2000 delivery over wireless networks

    NASA Astrophysics Data System (ADS)

    Martina, Maurizio; Vacca, Fabrizio

    2003-06-01

    The diffusion of third-generation mobile terminals is encouraging the development of new multimedia-based applications. The reliable transmission of audiovisual content will gain major interest, being one of the most valuable services. Nevertheless, the mobile scenario is severely power-constrained: high compression ratios and refined energy management strategies are highly advisable. JPEG2000 as the source encoding stage assures excellent performance with extremely good visual quality. However, the limited power budget imposes a limit on the computational effort in order to save as much power as possible. Since the wireless environment is error-prone, strong error-resilience features also need to be employed. This paper investigates the trade-off between quality and power in such a challenging environment.

  12. Computational approach on PEB process in EUV resist: multi-scale simulation

    NASA Astrophysics Data System (ADS)

    Kim, Muyoung; Moon, Junghwan; Choi, Joonmyung; Lee, Byunghoon; Jeong, Changyoung; Kim, Heebom; Cho, Maenghyo

    2017-03-01

    For decades, downsizing has been a key issue for achieving high performance and low cost in semiconductors, and extreme ultraviolet lithography is one of the promising candidates to achieve this goal. As the predominant process in extreme ultraviolet lithography for determining resolution and sensitivity, post exposure bake has mainly been studied experimentally, but the development of its photoresists is reaching a bottleneck because the underlying mechanisms of the process remain unclear. Herein, we provide a theoretical approach to investigate the underlying mechanism of the post exposure bake process in a chemically amplified resist, covering three important reactions during the process: acid generation by photo-acid generator dissociation, acid diffusion, and deprotection. Density functional theory calculation (quantum mechanical simulation) was conducted to quantitatively predict the activation energy and probability of the chemical reactions, and these were applied to molecular dynamics simulation to construct a reliable computational model. Then, the overall chemical reactions were simulated in the molecular dynamics unit cell, and the final configuration of the photoresist was used to predict the line edge roughness. The presented multiscale model unifies phenomena at both the quantum and atomic scales during the post exposure bake process, and it will be helpful for understanding the critical factors affecting the performance of the resulting photoresist and for designing next-generation materials.
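
    As a small worked example of how a computed activation energy can feed the reaction probabilities of the molecular-dynamics stage, the sketch below evaluates an Arrhenius-type rate; the prefactor, activation energy and bake temperature are placeholders, not values from the study.

        # Arrhenius rate: k = A * exp(-Ea / (kB * T)).
        import math

        def arrhenius_rate(prefactor_hz, activation_ev, temperature_k):
            k_b = 8.617333262e-5          # Boltzmann constant in eV/K
            return prefactor_hz * math.exp(-activation_ev / (k_b * temperature_k))

        # Hypothetical deprotection step at a ~110 C post exposure bake.
        rate = arrhenius_rate(prefactor_hz=1e13, activation_ev=0.9, temperature_k=383.0)
        print(rate, "events per second per reactive site")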

  13. Detecting Silent Data Corruption for Extreme-Scale Applications through Data Mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bautista-Gomez, Leonardo; Cappello, Franck

    Supercomputers allow scientists to study natural phenomena by means of computer simulations. Next-generation machines are expected to have more components and, at the same time, consume several times less energy per operation. These trends are pushing supercomputer construction to the limits of miniaturization and energy-saving strategies. Consequently, the number of soft errors is expected to increase dramatically in the coming years. While mechanisms are in place to correct or at least detect some soft errors, a significant percentage of those errors pass unnoticed by the hardware. Such silent errors are extremely damaging because they can make applications silently produce wrong results. In this work we propose a technique that leverages certain properties of high-performance computing applications in order to detect silent errors at the application level. Our technique detects corruption solely based on the behavior of the application datasets and is completely application-agnostic. We propose multiple corruption detectors, and we couple them to work together in a fashion transparent to the user. We demonstrate that this strategy can detect the majority of the corruptions, while incurring negligible overhead. We show that with the help of these detectors, applications can have up to 80% of coverage against data corruption.
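
    A minimal, application-agnostic flavour of such a detector (an illustrative sketch only, not the authors' detectors) is to compare consecutive snapshots of an application array and flag updates that are extreme outliers relative to the rest of the dataset's changes:

        import numpy as np

        def detect_silent_corruption(prev, curr, k=6.0):
            """Flag entries whose change between consecutive time steps is an extreme
            outlier relative to all other changes. `k` is a tolerance expressed in
            units of a robust spread estimate (an illustrative choice)."""
            delta = curr - prev
            median = np.median(delta)
            mad = np.median(np.abs(delta - median)) + 1e-30   # robust to a few bad entries
            score = np.abs(delta - median) / mad
            return np.flatnonzero(score > k)                  # indices of suspicious points

        # Toy usage: a smooth field evolving slowly, with one silently flipped value
        prev = np.sin(np.linspace(0, 3, 1000))
        curr = prev + 1e-3
        curr[412] += 0.5                                      # simulated silent error
        print(detect_silent_corruption(prev, curr))

    The appeal of this style of check is that it only inspects the data the application already produces, so it can run transparently alongside the simulation with very little overhead.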

  14. Predicting crystal structures and properties of matter under extreme conditions via quantum mechanics: The pressure is on

    DOE PAGES

    Zurek, Eva; Grochala, Wojciech

    2014-11-27

    Experimental studies of compressed matter are now routinely conducted at pressures exceeding 1 million atm (100 GPa) and occasionally they even surpass 10 million atm (1 TPa). The structure and properties of solids that have been so significantly squeezed differ considerably from those known at ambient pressures (1 atm), oftentimes leading to new and unexpected physics. Chemical reactivity is also substantially altered in the extreme pressure regime. In this feature paper we describe how synergy between theory and experiment can pave the road towards new experimental discoveries. Because chemical rules-of-thumb established at 1 atm often fail to predict the structures of solids under high pressure, automated crystal structure prediction (CSP) methods have been increasingly employed. After outlining the most important CSP techniques, we showcase a few examples from the recent literature that exemplify just how useful theory can be as an aid in the interpretation of experimental data, describe exciting theoretical predictions that are guiding experiment, and discuss when the computational methods that are currently routinely employed fail. Lastly, we forecast important problems that will be targeted by theory as theoretical methods undergo rapid development, along with the simultaneous increase of computational power.

  15. Low Cost Desktop Image Analysis Workstation With Enhanced Interactive User Interface

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Huang, H. K.

    1989-05-01

    A multimodality picture archiving and communication system (PACS) is in routine clinical use in the UCLA Radiology Department. Several types of workstations are currently implemented for this PACS. Among them, the Apple Macintosh II personal computer was recently chosen to serve as a desktop workstation for display and analysis of radiological images. This personal computer was selected mainly because of its extremely friendly user interface, its popularity among the academic and medical community, and its low cost. In comparison to other microcomputer-based systems, the Macintosh II offers the following advantages: the extreme standardization of its user interface, file system and networking, and the availability of a very large variety of commercial software packages. In the current configuration the Macintosh II operates as a stand-alone workstation where images are imported from a centralized PACS server through an Ethernet network using the standard TCP/IP protocol, and stored locally on magnetic disk. The use of high resolution screens (1024x768 pixels x 8 bits) offers sufficient performance for image display and analysis. We focused our project on the design and implementation of a variety of image analysis algorithms ranging from automated structure and edge detection to sophisticated dynamic analysis of sequential images. Specific analysis programs were developed for ultrasound images, digitized angiograms, MRI and CT tomographic images and scintigraphic images.

  16. Development of a low-cost virtual reality workstation for training and education

    NASA Technical Reports Server (NTRS)

    Phillips, James A.

    1996-01-01

    Virtual Reality (VR) is a set of breakthrough technologies that allow a human being to enter and fully experience a 3-dimensional, computer simulated environment. A true virtual reality experience meets three criteria: (1) it involves 3-dimensional computer graphics; (2) it includes real-time feedback and response to user actions; and (3) it must provide a sense of immersion. Good examples of a virtual reality simulator are the flight simulators used by all branches of the military to train pilots for combat in high performance jet fighters. The fidelity of such simulators is extremely high -- but so is the price tag, typically millions of dollars. Virtual reality teaching and training methods are manifestly effective, but the high cost of VR technology has limited its practical application to fields with big budgets, such as military combat simulation, commercial pilot training, and certain projects within the space program. However, in the last year there has been a revolution in the cost of VR technology. The speed of inexpensive personal computers has increased dramatically, especially with the introduction of the Pentium processor and the PCI bus for IBM-compatibles, and the cost of high-quality virtual reality peripherals has plummeted. The result is that many public schools, colleges, and universities can afford a PC-based workstation capable of running immersive virtual reality applications. My goal this summer was to assemble and evaluate such a system.

  17. B-2 Extremely High Frequency SATCOM and Computer Increment 1 (B-2 EHF Inc 1)

    DTIC Science & Technology

    2015-12-01

    Confidence Level of cost estimate for current APB: 55%. This APB reflects cost and funding data based on the B-2 EHF Increment I SCP; the cost estimate was quantified at the Mean (~55%) confidence level. The record also contains a SAR total-quantity summary and a cost-variance breakdown (categories: Econ, Qty, Sch, Eng, Est, Oth, Spt, Total) from the SAR Baseline Production Estimate to the Current APB and Current SAR Baseline.

  18. Methods of mathematical modeling using polynomials of algebra of sets

    NASA Astrophysics Data System (ADS)

    Kazanskiy, Alexandr; Kochetkov, Ivan

    2018-03-01

    The article deals with the construction of discrete mathematical models for solving applied problems arising from the operation of building structures. Security issues in modern high-rise buildings are extremely serious and relevant, and there is no doubt that interest in them will only increase. The territory of the building is divided into zones that must be monitored. Zones can overlap and have different priorities. Such situations can be described using formulas of the algebra of sets. The formulas can be programmed, which makes it possible to work with them using computer models.
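
    As a minimal illustration of this set-algebra view (the zone names, grid cells, and priorities below are invented for the example, not taken from the article), overlapping surveillance zones can be modelled as sets of grid cells, and formulas such as unions, intersections, and differences can then be evaluated directly:

        # Hypothetical zones, each a set of (row, col) grid cells of the building plan
        zone_entrance = {(0, 0), (0, 1), (1, 0), (1, 1)}
        zone_lobby    = {(1, 1), (1, 2), (2, 1), (2, 2)}
        zone_vault    = {(2, 2), (2, 3)}

        priority = {"entrance": 2, "lobby": 1, "vault": 3}   # higher = more critical

        # Set-algebra "formulas": cells watched by more than one zone, and cells
        # that belong to the vault but not to the public lobby
        overlap         = zone_entrance & zone_lobby
        vault_exclusive = zone_vault - zone_lobby

        def cell_priority(cell):
            """A cell inherits the highest priority among the zones covering it."""
            zones = {"entrance": zone_entrance, "lobby": zone_lobby, "vault": zone_vault}
            return max((priority[name] for name, z in zones.items() if cell in z), default=0)

        print(overlap, vault_exclusive, cell_priority((2, 2)))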

  19. Computation of elementary modes: a unifying framework and the new binary approach

    PubMed Central

    Gagneur, Julien; Klamt, Steffen

    2004-01-01

    Background Metabolic pathway analysis has been recognized as a central approach to the structural analysis of metabolic networks. The concept of elementary (flux) modes provides a rigorous formalism to describe and assess pathways and has proven to be valuable for many applications. However, computing elementary modes is a hard computational task, and recent years have seen a proliferation of algorithms dedicated to it, calling for a unifying point of view and continued improvement of the current methods. Results We show that computing the set of elementary modes is equivalent to computing the set of extreme rays of a convex cone. This standard mathematical representation provides a unified framework that encompasses the most prominent algorithmic methods that compute elementary modes and allows a clear comparison between them. Taking lessons from this benchmark, we here introduce a new method, the binary approach, which computes the elementary modes as binary patterns of participating reactions from which the respective stoichiometric coefficients can be computed in a post-processing step. We implemented the binary approach in FluxAnalyzer 5.1, a software that is free for academics. The binary approach decreases the memory demand by up to 96% without loss of speed, giving the most efficient method available for computing elementary modes to date. Conclusions The equivalence between elementary modes and extreme ray computations offers opportunities for employing tools from polyhedral computation for metabolic pathway analysis. The new binary approach introduced herein was derived from this general theoretical framework and facilitates the computation of elementary modes in considerably larger networks. PMID:15527509
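
    The post-processing step mentioned above can be pictured on a toy network (the stoichiometric matrix below is invented for illustration and this is not the FluxAnalyzer implementation): given a binary pattern of participating reactions, the stoichiometric coefficients of the mode are recovered from the null space of the stoichiometric matrix restricted to those reactions, which is one-dimensional exactly when the pattern is elementary.

        import numpy as np
        from scipy.linalg import null_space

        # Toy stoichiometric matrix (rows: internal metabolites, columns: reactions R1..R4)
        S = np.array([
            [ 1, -1,  0,  0],
            [ 0,  1, -1, -1],
        ], dtype=float)

        def expand_binary_mode(S, pattern):
            """Recover the flux coefficients of a mode from its binary pattern of
            participating reactions, if the restricted null space is one-dimensional."""
            idx = np.flatnonzero(pattern)
            kernel = null_space(S[:, idx])
            if kernel.shape[1] != 1:
                return None                       # pattern does not define an elementary mode
            flux = np.zeros(S.shape[1])
            flux[idx] = kernel[:, 0]
            if flux[idx][0] < 0:                  # normalise the sign for readability
                flux = -flux
            return flux

        print(expand_binary_mode(S, [1, 1, 1, 0]))   # route R1 -> R2 -> R3
        print(expand_binary_mode(S, [1, 1, 0, 1]))   # route R1 -> R2 -> R4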

  20. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K.

    2017-02-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.

  1. Progress in fast, accurate multi-scale climate simulations

    DOE PAGES

    Collins, W. D.; Johansen, H.; Evans, K. J.; ...

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  2. A projected preconditioned conjugate gradient algorithm for computing many extreme eigenpairs of a Hermitian matrix [A projected preconditioned conjugate gradient algorithm for computing a large eigenspace of a Hermitian matrix

    DOE PAGES

    Vecharynski, Eugene; Yang, Chao; Pask, John E.

    2015-02-25

    Here, we present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
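
    For readers who want a hands-on baseline, the sketch below uses SciPy's implementation of the LOBPCG method mentioned above (it is not the authors' projected preconditioned algorithm) to compute a small block of algebraically smallest eigenpairs of a sparse Hermitian matrix; the test matrix, block size, and Jacobi preconditioner are chosen only for illustration.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import lobpcg

        n, k = 2000, 20                         # matrix size and number of eigenpairs (toy values)
        rng = np.random.default_rng(0)

        # Sparse symmetric test matrix: 1-D Laplacian plus a random diagonal shift
        main = 2.0 + rng.random(n)
        A = sp.diags([main, -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1], format="csr")

        # Simple (Jacobi) preconditioner: inverse of the diagonal
        M = sp.diags(1.0 / A.diagonal())

        X0 = rng.standard_normal((n, k))        # random initial block
        eigvals, eigvecs = lobpcg(A, X0, M=M, largest=False, tol=1e-8, maxiter=500)
        print(np.sort(eigvals)[:5])

    Block methods like this map well onto BLAS3 kernels, which is the same property the abstract highlights for implementations with multiple levels of concurrency.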

  3. Computational imaging of light in flight

    NASA Astrophysics Data System (ADS)

    Hullin, Matthias B.

    2014-10-01

    Many computer vision tasks are hindered by image formation itself, a process that is governed by the so-called plenoptic integral. By averaging light falling into the lens over space, angle, wavelength and time, a great deal of information is irreversibly lost. The emerging idea of transient imaging operates on a time resolution fast enough to resolve non-stationary light distributions in real-world scenes. It enables the discrimination of light contributions by the optical path length from light source to receiver, a dimension unavailable in mainstream imaging to date. Until recently, such measurements used to require high-end optical equipment and could only be acquired under extremely restricted lab conditions. To address this challenge, we introduced a family of computational imaging techniques operating on standard time-of-flight image sensors, for the first time allowing the user to "film" light in flight in an affordable, practical and portable way. Just as impulse responses have proven a valuable tool in almost every branch of science and engineering, we expect light-in-flight analysis to impact a wide variety of applications in computer vision and beyond.

  4. Vertebral deformities and fractures are associated with MRI and pQCT measures obtained at the distal tibia and radius of postmenopausal women

    PubMed Central

    Rajapakse, C. S.; Phillips, E. A.; Sun, W.; Wald, M. J.; Magland, J. F.; Snyder, P. J.; Wehrli, F. W.

    2016-01-01

    Summary We investigated the association of postmenopausal vertebral deformities and fractures with bone parameters derived from distal extremities using MRI and pQCT. Distal extremity measures showed variable degrees of association with vertebral deformities and fractures, highlighting the systemic nature of postmenopausal bone loss. Introduction Prevalent vertebral deformities and fractures are known to predict incident further fractures. However, the association of distal extremity measures and vertebral deformities in postmenopausal women has not been fully established. Methods This study involved 98 postmenopausal women (age range 60–88 years, mean 70 years) with DXA BMD T-scores at either the hip or spine in the range of −1.5 to −3.5. Wedge, biconcavity, and crush deformities were computed on the basis of spine MRI. Vertebral fractures were assessed using Eastell's criterion. Distal tibia and radius stiffness was computed using MRI-based finite element analysis. BMD at the distal extremities were obtained using pQCT. Results Several distal extremity MRI and pQCT measures showed negative association with vertebral deformity on the basis of single parameter correlation (r up to 0.67) and two-parameter regression (r up to 0.76) models involving MRI stiffness and pQCT BMD. Subjects who had at least one prevalent vertebral fracture showed decreased MRI stiffness (up to 17.9 %) and pQCT density (up to 34.2 %) at the distal extremities compared to the non-fracture group. DXA lumbar spine BMD T-score was not associated with vertebral deformities. Conclusions The association between vertebral deformities and distal extremity measures supports the notion of postmenopausal osteoporosis as a systemic phenomenon. PMID:24221453

  5. "Attention on the flight deck": what ambulatory care providers can learn from pilots about complex coordinated actions.

    PubMed

    Frankel, Richard M; Saleem, Jason J

    2013-12-01

    Technical and interpersonal challenges of using electronic health records (EHRs) in ambulatory care persist. We use cockpit communication as an example of highly coordinated complex activity during flight and compare it with providers' communication when computers are used in the exam room. Maximum variation sampling was used to identify two videotapes from a parent study of primary care physicians' exam room computer demonstrating the greatest variation. We then produced and analyzed visualizations of the time providers spent looking at the computer and looking at the patient. Unlike the cockpit which is engineered to optimize joint attention on complex coordinated activities, we found polar extremes in the use of joint focus of attention to manage the medical encounter. We conclude that there is a great deal of room for improving the balance of interpersonal and technical attention that occurs in routine ambulatory visits in which computers are present in the exam room. Using well-known aviation practices can help primary care providers become more aware of the opportunities and challenges for enhancing the physician patient relationship in an era of exam room computing. Published by Elsevier Ireland Ltd.

  6. Notebook computer use on a desk, lap and lap support: effects on posture, performance and comfort.

    PubMed

    Asundi, Krishna; Odell, Dan; Luce, Adam; Dennerlein, Jack T

    2010-01-01

    This study quantified postures of users working on a notebook computer situated in their lap and tested the effect of using a device designed to increase the height of the notebook when placed on the lap. A motion analysis system measured head, neck and upper extremity postures of 15 adults as they worked on a notebook computer placed on a desk (DESK), the lap (LAP) and a commercially available lapdesk (LAPDESK). Compared with the DESK, the LAP increased downwards head tilt 6 degrees and wrist extension 8 degrees. Shoulder flexion and ulnar deviation decreased 13 degrees and 9 degrees, respectively. Compared with the LAP, the LAPDESK decreased downwards head tilt 4 degrees, neck flexion 2 degrees, and wrist extension 9 degrees. Users reported less discomfort and difficulty in the DESK configuration. Use of the lapdesk improved postures compared with the lap; however, all configurations resulted in high values of wrist extension, wrist deviation and downwards head tilt. STATEMENT OF RELEVANCE: This study quantifies postures of users working with a notebook computer in typical portable configurations. A better understanding of the postures assumed during notebook computer use can improve usage guidelines to reduce the risk of musculoskeletal injuries.

  7. Neurophysiological substrates of stroke patients with motor imagery-based Brain-Computer Interface training.

    PubMed

    Li, Mingfen; Liu, Ye; Wu, Yi; Liu, Sirao; Jia, Jie; Zhang, Liqing

    2014-06-01

    We investigated the efficacy of motor imagery-based Brain Computer Interface (MI-based BCI) training for eight stroke patients with severe upper extremity paralysis using longitudinal clinical assessments. The results were compared with those of a control group (n = 7) that received only FES (Functional Electrical Stimulation) treatment in addition to conventional therapies. During rehabilitation training, changes in the motor function of the upper extremity and in neurophysiological electroencephalographic (EEG) measures were observed for the two groups. After 8 weeks of training, a significant improvement in the motor function of the upper extremity for the BCI group was confirmed (p < 0.05 for ARAT), simultaneously with the activation of bilateral cerebral hemispheres. Additionally, event-related desynchronization (ERD) of the affected sensorimotor cortexes (SMCs) was significantly enhanced when compared to the pretraining course, which was only observed in the BCI group (p < 0.05). Furthermore, the activation of the affected SMC and parietal lobe was determined to contribute to motor function recovery (p < 0.05). In brief, our findings demonstrate that MI-based BCI training can enhance the motor function of the upper extremity for stroke patients by inducing optimal cerebral motor functional reorganization.

  8. Lattice Thermal Conductivity of Ultra High Temperature Ceramics (UHTC) ZrB2 and HfB2 from Atomistic Simulations

    NASA Technical Reports Server (NTRS)

    Lawson, John W.; Daw, Murray S.; Bauschlicher, Charles W.

    2012-01-01

    Ultra high temperature ceramics (UHTC) including ZrB2 and HfB2 have a number of properties that make them attractive for applications in extreme environments. One such property is their high thermal conductivity. Computational modeling of these materials will facilitate understanding of fundamental mechanisms, elucidate structure-property relationships, and ultimately accelerate the materials design cycle. Progress in computational modeling of UHTCs however has been limited in part due to the absence of suitable interatomic potentials. Recently, we developed Tersoff style parameterizations of such potentials for both ZrB2 and HfB2 appropriate for atomistic simulations. As an application, Green-Kubo molecular dynamics simulations were performed to evaluate the lattice thermal conductivity for single crystals of ZrB2 and HfB2. The atomic mass difference in these binary compounds leads to oscillations in the time correlation function of the heat current, in contrast to the more typical monotonic decay seen in monoatomic materials such as Silicon, for example. Results at room temperature and at elevated temperatures will be reported.
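
    For orientation, the Green-Kubo relation used in such calculations is kappa = 1/(3 V k_B T^2) * integral over time of <J(0)·J(t)>, where J is the total heat current. The sketch below is a minimal post-processing example on synthetic data (illustrative units and values, not the ZrB2/HfB2 production workflow):

        import numpy as np

        KB = 1.380649e-23  # Boltzmann constant, J/K

        def green_kubo_kappa(J, dt, volume, temperature, t_max_steps):
            """Running Green-Kubo estimate of the lattice thermal conductivity.
            J: array of shape (n_steps, 3), total heat current (energy*velocity, W*m);
            dt: timestep (s); volume: simulation cell volume (m^3)."""
            acf = np.zeros(t_max_steps)
            for lag in range(t_max_steps):
                # <J(0) . J(t)> averaged over time origins and Cartesian components
                acf[lag] = np.mean(np.sum(J[: len(J) - lag] * J[lag:], axis=1))
            # Trapezoidal running integral of the autocorrelation function
            integral = np.cumsum((acf[:-1] + acf[1:]) * 0.5 * dt)
            return integral / (3.0 * volume * KB * temperature**2)

        # Toy usage with synthetic noise standing in for the MD heat-current time series
        rng = np.random.default_rng(1)
        J = rng.standard_normal((20000, 3)) * 1e-10
        kappa_running = green_kubo_kappa(J, dt=1e-15, volume=1e-26, temperature=300.0, t_max_steps=2000)
        print(kappa_running[-1])

    In a real analysis the oscillatory autocorrelation function mentioned in the abstract would appear in `acf`, and the plateau of the running integral gives the reported conductivity.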

  9. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation.

    PubMed

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions may no longer be valid. To overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, which uses a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, the proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. An experiment on a designed two-dimensional benchmark dataset shows that the proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but can also identify clusters that are adjacent, overlapping, or under background noise. Finally, we compared the LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records containing demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it.

  10. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation

    PubMed Central

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions may no longer be valid. To overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, which uses a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, the proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. An experiment on a designed two-dimensional benchmark dataset shows that the proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but can also identify clusters that are adjacent, overlapping, or under background noise. Finally, we compared the LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records containing demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it. PMID:26221133

  11. Making Advanced Scientific Algorithms and Big Scientific Data Management More Accessible

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venkatakrishnan, S. V.; Mohan, K. Aditya; Beattie, Keith

    2016-02-14

    Synchrotrons such as the Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory are known as user facilities. They are sources of extremely bright X-ray beams, and scientists come from all over the world to perform experiments that require these beams. As the complexity of experiments has increased, and the size and rates of data sets have exploded, managing, analyzing and presenting the data collected at synchrotrons has been an increasing challenge. The ALS has partnered with high performance computing, fast networking, and applied mathematics groups to create a "super-facility", giving users simultaneous access to the experimental, computational, and algorithmic resources to overcome this challenge. This combination forms an efficient closed loop, where data, despite its high rate and volume, is transferred and processed, in many cases immediately and automatically, on appropriate compute resources, and results are extracted, visualized, and presented to users or to the experimental control system, both to provide immediate insight and to guide decisions about subsequent experiments during beam-time. In this paper, we will present work done on advanced tomographic reconstruction algorithms to support users of the 3D micron-scale imaging instrument (Beamline 8.3.2, hard X-ray micro-tomography).

  12. Application of multidetector-row computed tomography in propeller flap planning.

    PubMed

    Ono, Shimpei; Chung, Kevin C; Hayashi, Hiromitsu; Ogawa, Rei; Takami, Yoshihiro; Hyakusoku, Hiko

    2011-02-01

    The propeller flap is defined as (1) being island-shaped, (2) having an axis that includes the perforators, and (3) having the ability to be rotated around an axis. The advantage of the propeller flap is that it is a pedicle flap that can be applied to cover defects located at the distal ends of the extremities. The specific aims of the authors' study were (1) to evaluate the usefulness of multidetector-row computed tomography in the planning of propeller flaps and (2) to present a clinical case series of propeller flap reconstructions that were planned preoperatively using multidetector-row computed tomography. The authors retrospectively analyzed all cases between April of 2007 and April of 2010 at Nippon Medical School Hospital in Tokyo, where multidetector-row computed tomography was used preoperatively to plan surgical reconstructions using propeller flaps. Thirteen patients underwent 16 flaps using the propeller flap technique. The perforators were identified accurately by multidetector-row computed tomography preoperatively in all cases. This is the first report describing the application of multidetector-row computed tomography in the planning of propeller flaps. Multidetector-row computed tomography is superior to other imaging methods because it demonstrates more precisely the perforator's position and subcutaneous course using high-resolution three-dimensional images. By using multidetector-row computed tomography to preoperatively identify a flap's perforators, the surgeon can better plan the flap design to efficiently conduct the flap surgery.

  13. A computer model for predicting grapevine cold hardiness

    USDA-ARS?s Scientific Manuscript database

    We developed a robust computer model of grapevine bud cold hardiness that will aid in the anticipation of and response to potential injury from fluctuations in winter temperature and from extreme cold events. The model uses time steps of 1 day along with the measured daily mean air temperature to ca...

  14. Computer-Communications Networks and Teletraffic.

    ERIC Educational Resources Information Center

    Switzer, I.

    Bi-directional cable TV (CATV) systems that are being installed today may not be well suited for computer communications. Older CATV systems are being modified to bi-directional transmission and most new systems are being built with bi-directional capability included. The extreme bandwidth requirement for carrying 20 or more TV channels on a…

  15. Downscaling wind and wavefields for 21st century coastal flood hazard projections in a region of complex terrain

    USGS Publications Warehouse

    O'Neill, Andrea; Erikson, Li; Barnard, Patrick

    2017-01-01

    While global climate models (GCMs) provide useful projections of near-surface wind vectors into the 21st century, resolution is not sufficient enough for use in regional wave modeling. Statistically downscaled GCM projections from Multivariate Adaptive Constructed Analogues provide daily averaged near-surface winds at an appropriate spatial resolution for wave modeling within the orographically complex region of San Francisco Bay, but greater resolution in time is needed to capture the peak of storm events. Short-duration high wind speeds, on the order of hours, are usually excluded in statistically downscaled climate models and are of key importance in wave and subsequent coastal flood modeling. Here we present a temporal downscaling approach, similar to constructed analogues, for near-surface winds suitable for use in local wave models and evaluate changes in wind and wave conditions for the 21st century. Reconstructed hindcast winds (1975–2004) recreate important extreme wind values within San Francisco Bay. A computationally efficient method for simulating wave heights over long time periods was used to screen for extreme events. Wave hindcasts show that maximum wave heights of 2.2 m are possible within the Bay. Changes in extreme over-water wind speeds suggest contrasting trends within the different regions of San Francisco Bay, but 21st century projections show little change in the overall magnitude of extreme winds and locally generated waves.

  16. Massively parallel de novo protein design for targeted therapeutics.

    PubMed

    Chevalier, Aaron; Silva, Daniel-Adriano; Rocklin, Gabriel J; Hicks, Derrick R; Vergara, Renan; Murapa, Patience; Bernard, Steffen M; Zhang, Lu; Lam, Kwok-Ho; Yao, Guorui; Bahl, Christopher D; Miyashita, Shin-Ichiro; Goreshnik, Inna; Fuller, James T; Koday, Merika T; Jenkins, Cody M; Colvin, Tom; Carter, Lauren; Bohn, Alan; Bryan, Cassie M; Fernández-Velasco, D Alejandro; Stewart, Lance; Dong, Min; Huang, Xuhui; Jin, Rongsheng; Wilson, Ian A; Fuller, Deborah H; Baker, David

    2017-10-05

    De novo protein design holds promise for creating small stable proteins with shapes customized to bind therapeutic targets. We describe a massively parallel approach for designing, manufacturing and screening mini-protein binders, integrating large-scale computational design, oligonucleotide synthesis, yeast display screening and next-generation sequencing. We designed and tested 22,660 mini-proteins of 37-43 residues that target influenza haemagglutinin and botulinum neurotoxin B, along with 6,286 control sequences to probe contributions to folding and binding, and identified 2,618 high-affinity binders. Comparison of the binding and non-binding design sets, which are two orders of magnitude larger than any previously investigated, enabled the evaluation and improvement of the computational model. Biophysical characterization of a subset of the binder designs showed that they are extremely stable and, unlike antibodies, do not lose activity after exposure to high temperatures. The designs elicit little or no immune response and provide potent prophylactic and therapeutic protection against influenza, even after extensive repeated dosing.

  17. Massively parallel de novo protein design for targeted therapeutics

    NASA Astrophysics Data System (ADS)

    Chevalier, Aaron; Silva, Daniel-Adriano; Rocklin, Gabriel J.; Hicks, Derrick R.; Vergara, Renan; Murapa, Patience; Bernard, Steffen M.; Zhang, Lu; Lam, Kwok-Ho; Yao, Guorui; Bahl, Christopher D.; Miyashita, Shin-Ichiro; Goreshnik, Inna; Fuller, James T.; Koday, Merika T.; Jenkins, Cody M.; Colvin, Tom; Carter, Lauren; Bohn, Alan; Bryan, Cassie M.; Fernández-Velasco, D. Alejandro; Stewart, Lance; Dong, Min; Huang, Xuhui; Jin, Rongsheng; Wilson, Ian A.; Fuller, Deborah H.; Baker, David

    2017-10-01

    De novo protein design holds promise for creating small stable proteins with shapes customized to bind therapeutic targets. We describe a massively parallel approach for designing, manufacturing and screening mini-protein binders, integrating large-scale computational design, oligonucleotide synthesis, yeast display screening and next-generation sequencing. We designed and tested 22,660 mini-proteins of 37-43 residues that target influenza haemagglutinin and botulinum neurotoxin B, along with 6,286 control sequences to probe contributions to folding and binding, and identified 2,618 high-affinity binders. Comparison of the binding and non-binding design sets, which are two orders of magnitude larger than any previously investigated, enabled the evaluation and improvement of the computational model. Biophysical characterization of a subset of the binder designs showed that they are extremely stable and, unlike antibodies, do not lose activity after exposure to high temperatures. The designs elicit little or no immune response and provide potent prophylactic and therapeutic protection against influenza, even after extensive repeated dosing.

  18. New Python-based methods for data processing

    PubMed Central

    Sauter, Nicholas K.; Hattne, Johan; Grosse-Kunstleve, Ralf W.; Echols, Nathaniel

    2013-01-01

    Current pixel-array detectors produce diffraction images at extreme data rates (of up to 2 TB h⁻¹) that make severe demands on computational resources. New multiprocessing frameworks are required to achieve rapid data analysis, as it is important to be able to inspect the data quickly in order to guide the experiment in real time. By utilizing readily available web-serving tools that interact with the Python scripting language, it was possible to implement a high-throughput Bragg-spot analyzer (cctbx.spotfinder) that is presently in use at numerous synchrotron-radiation beamlines. Similarly, Python interoperability enabled the production of a new data-reduction package (cctbx.xfel) for serial femtosecond crystallography experiments at the Linac Coherent Light Source (LCLS). Future data-reduction efforts will need to focus on specialized problems such as the treatment of diffraction spots on interleaved lattices arising from multi-crystal specimens. In these challenging cases, accurate modeling of close-lying Bragg spots could benefit from the high-performance computing capabilities of graphics-processing units. PMID:23793153
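
    A generic flavour of the multiprocessing pattern described above (purely illustrative; it does not use the cctbx.spotfinder or cctbx.xfel APIs) is to fan a stream of detector frames out to a pool of worker processes and collect per-image results as soon as each worker finishes:

        import multiprocessing as mp
        import numpy as np

        def count_bright_pixels(args):
            """Toy per-image analysis: count pixels above a threshold.
            A real spotfinder would do connected-component and profile analysis."""
            frame_id, image, threshold = args
            return frame_id, int(np.count_nonzero(image > threshold))

        def analyze_stream(images, threshold=0.999, workers=4):
            tasks = [(i, img, threshold) for i, img in enumerate(images)]
            with mp.Pool(processes=workers) as pool:
                # imap_unordered yields results as they complete, which helps keep up
                # with a continuous stream of incoming frames
                return dict(pool.imap_unordered(count_bright_pixels, tasks))

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            frames = [rng.random((256, 256)) for _ in range(32)]   # stand-in detector images
            print(analyze_stream(frames))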

  19. Massively parallel de novo protein design for targeted therapeutics

    PubMed Central

    Chevalier, Aaron; Silva, Daniel-Adriano; Rocklin, Gabriel J.; Hicks, Derrick R.; Vergara, Renan; Murapa, Patience; Bernard, Steffen M.; Zhang, Lu; Lam, Kwok-Ho; Yao, Guorui; Bahl, Christopher D.; Miyashita, Shin-Ichiro; Goreshnik, Inna; Fuller, James T.; Koday, Merika T.; Jenkins, Cody M.; Colvin, Tom; Carter, Lauren; Bohn, Alan; Bryan, Cassie M.; Fernández-Velasco, D. Alejandro; Stewart, Lance; Dong, Min; Huang, Xuhui; Jin, Rongsheng; Wilson, Ian A.; Fuller, Deborah H.; Baker, David

    2018-01-01

    De novo protein design holds promise for creating small stable proteins with shapes customized to bind therapeutic targets. We describe a massively parallel approach for designing, manufacturing and screening mini-protein binders, integrating large-scale computational design, oligonucleotide synthesis, yeast display screening and next-generation sequencing. We designed and tested 22,660 mini-proteins of 37–43 residues that target influenza haemagglutinin and botulinum neurotoxin B, along with 6,286 control sequences to probe contributions to folding and binding, and identified 2,618 high-affinity binders. Comparison of the binding and non-binding design sets, which are two orders of magnitude larger than any previously investigated, enabled the evaluation and improvement of the computational model. Biophysical characterization of a subset of the binder designs showed that they are extremely stable and, unlike antibodies, do not lose activity after exposure to high temperatures. The designs elicit little or no immune response and provide potent prophylactic and therapeutic protection against influenza, even after extensive repeated dosing. PMID:28953867

  20. Quantum information processing with long-wavelength radiation

    NASA Astrophysics Data System (ADS)

    Murgia, David; Weidt, Sebastian; Randall, Joseph; Lekitsch, Bjoern; Webster, Simon; Navickas, Tomas; Grounds, Anton; Rodriguez, Andrea; Webb, Anna; Standing, Eamon; Pearce, Stuart; Sari, Ibrahim; Kiang, Kian; Rattanasonti, Hwanjit; Kraft, Michael; Hensinger, Winfried

    To this point, the entanglement of ions has predominantly been performed using lasers. Using long wavelength radiation with static magnetic field gradients provides an architecture to simplify construction of a large scale quantum computer. The use of microwave-dressed states protects against decoherence from fluctuating magnetic fields, with radio-frequency fields used for qubit manipulation. I will report the realisation of spin-motion entanglement using long-wavelength radiation, and a new method to efficiently prepare dressed-state qubits and qutrits, reducing experimental complexity of gate operations. I will also report demonstration of ground state cooling using long wavelength radiation, which may increase two-qubit entanglement fidelity. I will then report demonstration of a high-fidelity long-wavelength two-ion quantum gate using dressed states. Combining these results with microfabricated ion traps allows for scaling towards a large scale ion trap quantum computer, and provides a platform for quantum simulations of fundamental physics. I will report progress towards the operation of microchip ion traps with extremely high magnetic field gradients for multi-ion quantum gates.

  1. Consumption of meat in relation to physical functioning in the Seniors-ENRICA cohort.

    PubMed

    Struijk, Ellen A; Banegas, José R; Rodríguez-Artalejo, Fernando; Lopez-Garcia, Esther

    2018-04-05

    Meat is an important source of high-quality protein and vitamin B but also has a relatively high content of saturated and trans fatty acids. Although protein and vitamin B intake seems to protect people from functional limitations, little is known about the effect of habitual meat consumption on physical function. The objective of this study was to examine the prospective association between the intake of meat (processed meat, red meat, and poultry) and physical function impairment in older adults. Data were collected for 2982 participants in the Seniors-ENRICA cohort, who were aged ≥60 years and free of physical function impairment. In 2008-2010, their habitual diet was assessed through a validated computer-assisted face-to-face diet history. Study participants were followed up through 2015 to assess self-reported incident impairment in agility, mobility, and performance-based lower-extremity function. Over a median follow-up of 5.2 years, we identified 625 participants with impaired agility, 455 with impaired mobility, and 446 with impaired lower-extremity function. After adjustment for potential confounders, processed meat intake was associated with a higher risk of impaired agility (hazard ratio [HR] for highest vs. lowest tertile: 1.33; 95% confidence interval [CI]: 1.08-1.64; p trend = 0.01) and of impaired lower-extremity function (HR for highest vs. lowest tertile: 1.31; 95% CI: 1.02-1.68; p trend = 0.04). No significant associations were found for red meat and poultry. Replacing one serving per day of processed meat with one serving per day of red meat, poultry, or with other important protein sources (fish, legumes, dairy, and nuts) was associated with lower risk of impaired agility and lower-extremity function. A higher consumption of processed meat was associated with a higher risk of impairment in agility and lower-extremity function. Replacing processed meat by other protein sources may slow the decline in physical functioning in older adults.

  2. The study on servo-control system in the large aperture telescope

    NASA Astrophysics Data System (ADS)

    Hu, Wei; Zhenchao, Zhang; Daxing, Wang

    2008-08-01

    Servo tracking will be one of the crucial technologies that must be solved in the research and manufacture of large and extremely large astronomical telescopes. To address the control requirements of such telescopes, this paper designs a servo tracking control system for a large astronomical telescope. The system is organized as a master-slave distributed control system: the host computer sends steering instructions and receives the slave computer's operating status, while the slave computer runs the control algorithm and performs real-time control. The servo control uses a direct-drive machine and adopts DSP technology to implement the direct torque control algorithm. Such a design not only improves control system performance but also greatly reduces the volume and cost of the control system, which is significant. The design scheme is shown to be reasonable by calculation and simulation. This system can be applied to large astronomical telescopes.

  3. Quantifying the effect of Tmax extreme events on local adaptation to climate change of maize crop in Andalusia for the 21st century

    NASA Astrophysics Data System (ADS)

    Gabaldon, Clara; Lorite, Ignacio J.; Ines Minguez, M.; Lizaso, Jon; Dosio, Alessandro; Sanchez, Enrique; Ruiz-Ramos, Margarita

    2015-04-01

    Extreme events of Tmax can threaten maize production in Andalusia (Ruiz-Ramos et al., 2011). The objective of this work is to attempt a quantification of the effects of Tmax extreme events on the previously identified (Gabaldón et al., 2013) local adaptation strategies to climate change of irrigated maize crop in Andalusia for the first half of the 21st century. This study is focused on five locations in Andalusia. The local adaptation strategies identified consisted of combinations of changes in sowing dates and choice of cultivar (Gabaldón et al., 2013). Modified cultivar features were the duration of phenological phases and the grain filling rate. The phenological and yield simulations with the adaptive changes were obtained from a modelling chain: current simulated climate and future climate scenarios (2013-2050) were taken from a group of regional climate models at high resolution (25 km) from the European Project ENSEMBLES (http://www.ensembles-eu.org/). After bias correcting these data for temperature and precipitation (Dosio and Paruolo, 2011; Dosio et al., 2012), crop simulations were generated by the CERES-maize model (Jones and Kiniry, 1986) under the DSSAT platform, previously calibrated and validated. Quantification of the effects of extreme Tmax on maize yield was computed for different phenological stages following Teixeira et al. (2013). A heat stress index was computed; this index assumes that yield-damage intensity due to heat stress increases linearly from 0.0 at a critical temperature to a maximum of 1.0 at a limit temperature. The decrease of crop yield is then computed by a normalized production damage index which combines attainable yield and heat stress index for each location. Selection of the most suitable adaptation strategy will be reviewed and discussed in light of the quantified effect on crop yield of the projected change of Tmax extreme events. This study will contribute to the MACSUR knowledge Hub within the Joint Programming Initiative on Agriculture, Food Security and Climate Change (FACCE - JPI) of the EU and is financed by the MULCLIVAR project (CGL2012-38923-C02-02) and IFAPA project AGR6126 from Junta de Andalucía, Spain. References Dosio A. and Paruolo P., 2011. Bias correction of the ENSEMBLES high-resolution climate change projections for use by impact models: Evaluation on the present climate. Journal of Geophysical Research, Vol. 116, D16106, doi:10.1029/2011JD015934. Dosio A., Paruolo P. and Rojas R., 2012. Bias correction of the ENSEMBLES high resolution climate change projections for use by impact models: Analysis of the climate change signal. Journal of Geophysical Research, Vol. 117, D17, doi:10.1029/2012JD017968. Gabaldón C, Lorite IJ, Mínguez MI, Dosio A, Sánchez-Sánchez E and Ruiz-Ramos M, 2013. Evaluation of local adaptation strategies to climate change of maize crop in Andalusia for the first half of 21st century. Geophysical Research Abstracts, Vol. 15, EGU2013-13625, EGU General Assembly 2013, April 2013, Vienna, Austria. Jones C.A. and J.R. Kiniry. 1986. CERES-Maize: A simulation model of maize growth and development. Texas A&M Univ. Press, College Station. Ruiz-Ramos M., E. Sanchez, C. Gallardo, and M.I. Minguez. 2011. Impacts of projected maximum temperature extremes for C21 by an ensemble of regional climate models on cereal cropping systems in the Iberian Peninsula. Natural Hazards and Earth System Science 11: 3275-3291. Teixeira EI, Fischer G, van Velthuizen H, Walter C, Ewert F. Global hotspots of heat stress on agricultural crops due to climate change. Agric For Meteorol. 2013;170(15):206-215.
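
    The linear heat-stress formulation described above (following Teixeira et al., 2013) is straightforward to express in code; the sketch below is a minimal illustration with made-up critical and limit temperatures rather than the calibrated values used in the study:

        def heat_stress_index(tmax, t_crit, t_limit):
            """Linear heat-stress index: 0 below the critical temperature,
            1 above the limit temperature, linear in between."""
            if tmax <= t_crit:
                return 0.0
            if tmax >= t_limit:
                return 1.0
            return (tmax - t_crit) / (t_limit - t_crit)

        # Illustrative only: hypothetical thresholds (deg C) for maize around a sensitive stage
        daily_tmax = [31.0, 36.5, 39.2, 42.0]
        print([round(heat_stress_index(t, t_crit=35.0, t_limit=45.0), 2) for t in daily_tmax])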

  4. A numerical tool for the calculation of non-equilibrium ionisation states in the solar corona and other astrophysical plasma environments

    NASA Astrophysics Data System (ADS)

    Bradshaw, S. J.

    2009-07-01

    Context: The effects of non-equilibrium processes on the ionisation state of strongly emitting elements in the solar corona can be extremely difficult to assess and yet they are critically important. For example, there is much interest in dynamic heating events localised in the solar corona because they are believed to be responsible for its high temperature and yet recent work has shown that the hottest (≥10⁷ K) emission predicted to be associated with these events can be observationally elusive due to the difficulty of creating the highly ionised states from which the expected emission arises. This leads to the possibility of observing instruments missing such heating events entirely. Aims: The equations describing the evolution of the ionisation state are a very stiff system of coupled, partial differential equations whose solution can be numerically challenging and time-consuming. Without access to specialised codes and significant computational resources it is extremely difficult to avoid the assumption of an equilibrium ionisation state even when it clearly cannot be justified. The aim of the current work is to develop a computational tool to allow straightforward calculation of the time-dependent ionisation state for a wide variety of physical circumstances. Methods: A numerical model comprising the system of time-dependent ionisation equations for a particular element and tabulated values of plasma temperature as a function of time is developed. The tabulated values can be the solutions of an analytical model, the output from a numerical code or a set of observational measurements. An efficient numerical method to solve the ionisation equations is implemented. Results: A suite of tests is designed and run to demonstrate that the code provides reliable and accurate solutions for a number of scenarios including equilibration of the ion population and rapid heating followed by thermal conductive cooling. It is found that the solver can evolve the ionisation state to recover exactly the equilibrium state found by an independent, steady-state solver for all temperatures, resolve the extremely small ionisation/recombination timescales associated with rapid temperature changes at high densities, and provide stable and accurate solutions for both dominant and minor ion population fractions. Rapid heating and cooling of low to moderate density plasma is characterised by significant non-equilibrium ionisation conditions. The effective ionisation temperatures are significantly lower than the electron temperature and the values found are in close agreement with the previous work of others. At the very highest densities included in the present study an assumption of equilibrium ionisation is found to be robust. Conclusions: The computational tool presented here provides a straightforward and reliable way to calculate ionisation states for a wide variety of physical circumstances. The numerical code gives results that are accurate and consistent with previous studies, has relatively undemanding computational requirements and is freely available from the author.
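
    To make the computational setting concrete, the sketch below integrates a toy two-state ionisation/recombination balance driven by a tabulated, time-dependent temperature using a stiff implicit solver. The rate coefficients, densities, and temperature history are invented for illustration; this is not the author's code or atomic data.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Tabulated temperature history (s, K): rapid heating followed by cooling (toy values)
        t_tab = np.array([0.0, 10.0, 20.0, 200.0])
        T_tab = np.array([1e4, 1e6, 1e6, 1e5])
        temperature = lambda t: np.interp(t, t_tab, T_tab)

        n_e = 1e9            # electron density, cm^-3 (assumed constant here)

        def rates(T):
            """Illustrative ionisation/recombination rate coefficients (cm^3 s^-1).
            A real application would use tabulated atomic data."""
            ion = 1e-8 * np.exp(-2e5 / T)
            rec = 1e-12 * (1e6 / T) ** 0.7
            return ion, rec

        def rhs(t, y):
            f0, f1 = y                        # neutral and ionised population fractions
            ion, rec = rates(temperature(t))
            flow = n_e * (ion * f0 - rec * f1)
            return [-flow, flow]

        # Implicit BDF method handles the stiffness of the ionisation equations
        sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 0.0], method="BDF", rtol=1e-8, atol=1e-12)
        print(sol.y[:, -1])                   # final non-equilibrium ionisation state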

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCaskey, Alexander J.

    Hybrid programming models for beyond-CMOS technologies will prove critical for integrating new computing technologies alongside our existing infrastructure. Unfortunately the software infrastructure required to enable this is lacking or not available. XACC is a programming framework for extreme-scale, post-exascale accelerator architectures that integrates alongside existing conventional applications. It is a pluggable framework for programming languages developed for next-gen computing hardware architectures like quantum and neuromorphic computing. It lets computational scientists efficiently off-load classically intractable work to attached accelerators through user-friendly Kernel definitions. XACC makes post-exascale hybrid programming approachable for domain computational scientists.

  6. Spectral photometry of extreme helium stars: Ultraviolet fluxes and effective temperature

    NASA Technical Reports Server (NTRS)

    Drilling, J. S.; Schoenberner, D.; Heber, U.; Lynas-Gray, A. E.

    1982-01-01

    Ultraviolet flux distributions are presented for the extremely helium rich stars BD +10 deg 2179, HD 124448, LSS 3378, BD -9 deg 4395, LSE 78, HD 160641, LSIV -1 deg 2, BD 1 deg 3438, HD 168476, MV Sgr, LS IV-14 deg 109 (CD -35 deg 11760), LSII +33 deg 5 and BD +1 deg 4381 (LSIV +2 deg 13) obtained with the International Ultraviolet Explorer (IUE). Broad band photometry and a newly computed grid of line blanketed model atmospheres were used to determine accurate angular diameters and total stellar fluxes. The resultant effective temperatures are in most cases in satisfactory agreement with those based on broad band photometry and/or high resolution spectroscopy in the visible. For two objects, LSII +33 deg 5 and LSE 78, disagreement was found between the IUE observations and broadband photometry: the colors predict temperatures around 20,000 K, whereas the UV spectra indicate much lower photospheric temperatures of 14,000 to 15,000 K. The new temperature scale for extreme helium stars extends to lower effective temperatures than that of Heber and Schoenberner (1981) and covers the range from 8,500 K to 32,000 K.

  7. The Extreme Climate Index: a novel and multi-hazard index for extreme weather events.

    NASA Astrophysics Data System (ADS)

    Cucchi, Marco; Petitta, Marcello; Calmanti, Sandro

    2017-04-01

    In this presentation we introduce the Extreme Climate Index (ECI): an objective, multi-hazard index capable of tracking changes in the frequency or magnitude of extreme weather events in African countries, thus indicating that a shift to a new climate regime is underway in a particular area. This index has been developed in the context of the XCF (eXtreme Climate Facilities) project led by ARC (African Risk Capacity, a specialised agency of the African Union), and will be used in the payout-triggering mechanism of an insurance programme against risks related to the increase in frequency and magnitude of extreme weather events due to changes in climate regimes. The main hazards covered by the ECI will be extreme dry, wet and heat events, with the possibility of adding region-specific risk events such as tropical cyclones for the most vulnerable areas. It will be based on data coming from consistent, sufficiently long, high quality historical records and will be standardized across broad geographical regions, so that extreme events occurring under different climatic regimes in Africa are comparable. The first step in constructing such an index is to define single-hazard indicators. In this first study we focused on extreme dry/wet and heat events, described respectively by the well-known SPI (Standardized Precipitation Index) and by an index developed by us, called SHI (Standardized Heat-waves Index). The second step consists of the development of a computational strategy to combine these, and possibly other, indices, so that the ECI can describe, by means of a single indicator, different types of climatic extremes. According to the methodology proposed in this paper, the ECI is defined by two statistical components: the ECI intensity, which indicates whether an event is extreme or not; and the angular component, which represents the contribution of each hazard to the overall intensity of the index. The ECI can thus be used to identify "extremes" after defining a suitable threshold above which events are considered extreme. In this presentation, after describing the methodology we used for the construction of the ECI, we present results obtained for different African regions, using the NCEP Reanalysis dataset for air temperature at the sig995 level and the CHIRP dataset for precipitation. Particular attention will be devoted to the 2015/2016 Malawi drought, which received some media attention due to the failure of the risk assessment model used to trigger due payouts: it will be shown how, on the contrary, the combination of hydrological and temperature data used in the ECI succeeds in evaluating the extremeness of this event.
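
    One simple way to realise the two-component construction described above is to treat the standardized hazard indicators as a vector, take its norm as the intensity, and its direction as the hazard mix. The combination rule below (Euclidean norm and angle, with an invented threshold) is an assumption for illustration, not the authors' exact formulation:

        import numpy as np

        def extreme_climate_index(spi, shi, threshold=2.0):
            """Combine two standardized hazard indicators into an intensity and an
            angular component. The norm/angle choice and threshold are illustrative."""
            intensity = np.hypot(spi, shi)                         # overall "extremeness"
            angle = np.degrees(np.arctan2(abs(shi), abs(spi)))     # 0 deg = all dry/wet, 90 deg = all heat
            return intensity, angle, intensity > threshold

        # Toy monthly values: strong drought signal (negative SPI) plus a heat-wave signal
        print(extreme_climate_index(spi=-2.1, shi=1.8))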

  8. Quantifying the effect of interannual ocean variability on the attribution of extreme climate events to human influence

    NASA Astrophysics Data System (ADS)

    Risser, Mark D.; Stone, Dáithí A.; Paciorek, Christopher J.; Wehner, Michael F.; Angélil, Oliver

    2017-11-01

    In recent years, the climate change research community has become highly interested in describing the anthropogenic influence on extreme weather events, commonly termed "event attribution." Limitations in the observational record and in computational resources motivate the use of uncoupled, atmosphere/land-only climate models with prescribed ocean conditions run over a short period, leading up to and including an event of interest. In this approach, large ensembles of high-resolution simulations can be generated under factual observed conditions and counterfactual conditions that might have been observed in the absence of human interference; these can be used to estimate the change in probability of the given event due to anthropogenic influence. However, using a prescribed ocean state ignores the possibility that estimates of attributable risk might be a function of the ocean state. Thus, the uncertainty in attributable risk is likely underestimated, implying an over-confidence in anthropogenic influence. In this work, we estimate the year-to-year variability in calculations of the anthropogenic contribution to extreme weather based on large ensembles of atmospheric model simulations. Our results both quantify the magnitude of year-to-year variability and categorize the degree to which conclusions of attributable risk are qualitatively affected. The methodology is illustrated by exploring extreme temperature and precipitation events for the northwest coast of South America and northern-central Siberia; we also provide results for regions around the globe. While it remains preferable to perform a full multi-year analysis, the results presented here can serve as an indication of where and when attribution researchers should be concerned about the use of atmosphere-only simulations.
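
    Concretely, with large factual and counterfactual ensembles the change in probability reduces to comparing event frequencies between the two sets of simulations. The sketch below (hypothetical ensemble counts, simple percentile bootstrap) estimates the risk ratio and an uncertainty range:

        import numpy as np

        def risk_ratio(n_factual_exceed, n_factual, n_counter_exceed, n_counter,
                       n_boot=10000, seed=0):
            """Risk ratio RR = P(event | factual) / P(event | counterfactual), with a
            percentile bootstrap interval. Inputs are per-ensemble exceedance counts."""
            rng = np.random.default_rng(seed)
            p1, p0 = n_factual_exceed / n_factual, n_counter_exceed / n_counter
            boot1 = rng.binomial(n_factual, p1, n_boot) / n_factual
            boot0 = rng.binomial(n_counter, p0, n_boot) / n_counter
            rr = boot1 / np.maximum(boot0, 1.0 / n_counter)   # guard against zero counts
            return p1 / p0, np.percentile(rr, [2.5, 97.5])

        # Hypothetical counts: 400-member ensembles, threshold exceeded in 60 vs 20 members
        print(risk_ratio(60, 400, 20, 400))

    Repeating such an estimate for ensembles driven by different years' ocean states is one way to expose the year-to-year variability that the abstract argues is otherwise hidden by a single prescribed ocean state.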

  9. Numerical simulation of the flow about the F-18 HARV at high angle of attack

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.

    1994-01-01

    As part of NASA's High Alpha Technology Program, research has been aimed at developing and extending numerical methods to accurately predict the high Reynolds number flow about the NASA F-18 High Alpha Research Vehicle (HARV) at large angles of attack. The HARV aircraft is equipped with a bidirectional thrust vectoring unit which enables stable, controlled flight through 70 deg angle of attack. Currently, high-fidelity numerical solutions for the flow about the HARV have been obtained at alpha = 30 deg, and validated against flight-test data. It is planned to simulate the flow about the HARV through alpha = 60 deg, and obtain solutions of the same quality as those at the lower angles of attack. This report presents the status of work aimed at extending the HARV computations to the extreme angle of attack range.

  10. Computing the qg → qg cross section using the BCFW recursion and introduction to jet tomography in heavy ion collisions via MHV techniques

    NASA Astrophysics Data System (ADS)

    Rabemananajara, Tanjona R.; Horowitz, W. A.

    2017-09-01

    To make predictions for particle physics processes, one has to compute the cross section of the specific process, as this is what is measured in a modern collider experiment such as the Large Hadron Collider (LHC) at CERN. Theoretically, it has proven extremely difficult to compute scattering amplitudes using conventional Feynman-diagram methods. Calculations with Feynman diagrams realize a perturbative expansion: for a given process, one has to set up all topologically different diagrams up to a given order in the coupling of the theory. This quickly makes the calculation of scattering amplitudes intractable. Fortunately, one can simplify the calculations by considering the Maximally Helicity Violating (MHV) helicity amplitudes. This can be extended to the formalism of on-shell recursion, which is able to derive, in a much simpler way, the expression for a higher-order scattering amplitude from lower-order ones.
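
    For reference, the kind of compact result such helicity methods yield is the tree-level Parke-Taylor formula for a colour-ordered MHV gluon amplitude, quoted here in spinor-helicity notation (up to coupling and overall phase conventions):

```latex
% Colour-ordered tree-level MHV amplitude with negative-helicity gluons i and j;
% angle brackets denote spinor products, and overall factors depend on conventions.
A_n\bigl(1^+,\dots,i^-,\dots,j^-,\dots,n^+\bigr)
  = \frac{\langle i\,j\rangle^{4}}
         {\langle 1\,2\rangle\,\langle 2\,3\rangle\cdots\langle n\,1\rangle}
```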

  11. Rovibrational bound states of SO2 isotopologues. I: Total angular momentum J = 0-10

    NASA Astrophysics Data System (ADS)

    Kumar, Praveen; Ellis, Joseph; Poirier, Bill

    2015-04-01

    Isotopic variation of the rovibrational bound states of SO2 for the four stable sulfur isotopes 32-34,36S is investigated in comprehensive detail. In a two-part series, we compute the low-lying energy levels for all values of total angular momentum in the range J = 0-20. All rovibrational levels are computed to an extremely high level of numerical convergence. The calculations have been carried out using the ScalIT suite of parallel codes. The present study (Paper I) examines the J = 0-10 rovibrational levels, providing unambiguous symmetry and rovibrational label assignments for each computed state. The calculated vibrational energy levels exhibit very good agreement with previously reported experimental and theoretical data. Rovibrational energy levels, calculated without any Coriolis approximations, are reported here for the first time. Among other potential ramifications, these data will facilitate understanding of the origin of mass-independent fractionation of sulfur isotopes in the Archean rock record, of great relevance for understanding the "oxygen revolution".

  12. Signal and noise extraction from analog memory elements for neuromorphic computing.

    PubMed

    Gong, N; Idé, T; Kim, S; Boybat, I; Sebastian, A; Narayanan, V; Ando, T

    2018-05-29

    Dense crossbar arrays of non-volatile memory (NVM) can potentially enable massively parallel and highly energy-efficient neuromorphic computing systems. The key requirements for the NVM elements are continuous (analog-like) conductance tuning capability and switching symmetry with acceptable noise levels. However, most NVM devices show non-linear and asymmetric switching behaviors. Such non-linear behaviors render the separation of signal and noise extremely difficult with conventional characterization techniques. In this study, we establish a practical methodology based on Gaussian process regression to address this issue. The methodology is agnostic to switching mechanisms and applicable to various NVM devices. We show the tradeoff between switching symmetry and signal-to-noise ratio for HfO2-based resistive random access memory. Then, we characterize 1000 phase-change memory devices based on Ge2Sb2Te5 and separate the total variability into device-to-device variability and inherent randomness from individual devices. These results highlight the usefulness of our methodology for realizing ideal NVM devices for neuromorphic computing.
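
    A minimal sketch of the kind of Gaussian-process separation described above is given below, assuming synthetic conductance-update data and an RBF-plus-white-noise kernel; the variable names and kernel choice are illustrative, not the authors' exact setup.

```python
# Hedged sketch: use Gaussian process regression to separate a smooth "signal"
# (mean conductance response to programming pulses) from noise, in the spirit
# of the methodology described. Data and kernel parameters are toy values.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
pulse_number = np.arange(100).reshape(-1, 1)              # programming pulses
true_signal = 1.0 - np.exp(-pulse_number.ravel() / 30.0)  # nonlinear response
conductance = true_signal + rng.normal(0.0, 0.05, 100)    # measured, noisy

# RBF kernel captures the smooth switching trend; WhiteKernel absorbs the noise.
kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(
    pulse_number, conductance)

signal_estimate, signal_std = gpr.predict(pulse_number, return_std=True)
noise_estimate = conductance - signal_estimate
print("estimated noise std:", noise_estimate.std().round(3))
```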

  13. Generation of multivariate near shore extreme wave conditions based on an extreme value copula for offshore boundary conditions.

    NASA Astrophysics Data System (ADS)

    Leyssen, Gert; Mercelis, Peter; De Schoesitter, Philippe; Blanckaert, Joris

    2013-04-01

    Near shore extreme wave conditions, used as input for numerical wave agitation simulations and for the dimensioning of coastal defense structures, need to be determined at a harbour entrance on the French North Sea coast. To obtain significant wave heights, the numerical wave model SWAN has been used. A multivariate approach was used to account for joint probabilities. The variables considered are wind velocity and direction, water level, and significant offshore wave height and wave period. In a first step, a univariate extreme value distribution was determined for the main variables. By means of a technique based on the mean excess function, an appropriate member of the generalized Pareto distribution (GPD) family is selected. An optimal threshold for peak-over-threshold selection is determined by maximum likelihood optimization. Next, the joint dependency structure of the primary random variables is modeled by an extreme value copula. Eventually, the multivariate domain of variables was stratified into classes, each representing a combination of variable quantiles with a joint probability, which are used for model simulation. The main variable is the wind velocity, as extreme wave conditions in the area of concern are wind driven. The analysis is repeated for 9 different wind directions. The secondary variable is the water level. In shallow waters extreme waves are directly affected by water depth, so the joint probability of occurrence of water level and wave height is of major importance for the design of coastal defense structures. Wind velocity and water level are only dependent for some wind directions (wind-induced setup). Dependent directions were detected using Kendall and Spearman tests and turned out to be those with the longest fetch. For these directions, the wind velocity and water level extreme value distributions are linked multivariately through a Gumbel copula. These distributions are stratified into classes whose frequency of occurrence can be calculated. For the remaining directions the univariate extreme wind velocity distribution is stratified, each class combined with 5 high water levels. The wave height at the model boundaries was taken into account by a regression against the extreme wind velocity at the offshore location. The regression line and the 95% confidence limits were combined with each class. Finally, the wave period is computed by a further regression on the significant wave height. In this way 1103 synthetic events were selected and simulated with the SWAN wave model, for each of which a frequency of occurrence is calculated. Hence near shore significant wave heights are obtained with corresponding frequencies. The statistical distribution of the near shore wave heights is determined by sorting the model results in descending order and accumulating the corresponding frequencies. This approach allows the determination of conditional return periods. For example, for the imposed univariate design return periods of 100 years for significant wave height and 30 years for water level, the joint return period for a simultaneous exceedance of both conditions can be computed as 4000 years. Hence, this methodology allows for a probabilistic design of coastal defense structures.
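
    The peaks-over-threshold step summarized above can be sketched as follows; the toy wind-speed record, the 98th-percentile threshold and the use of scipy's maximum-likelihood fit are assumptions for illustration rather than the study's actual configuration.

```python
# Illustrative peaks-over-threshold (POT) analysis: fit a generalized Pareto
# distribution (GPD) to exceedances over a threshold and compute a return level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
wind_speed = rng.weibull(2.0, size=20 * 365) * 12.0   # ~20 years of daily maxima (toy)

threshold = np.quantile(wind_speed, 0.98)             # candidate POT threshold
excesses = wind_speed[wind_speed > threshold] - threshold

# Fit the GPD to the excesses (location fixed at 0 for a POT analysis).
shape, loc, scale = stats.genpareto.fit(excesses, floc=0.0)
events_per_year = len(excesses) / 20.0

def return_level(n_years):
    """Value exceeded on average once every n_years."""
    p = 1.0 / (n_years * events_per_year)             # exceedance prob. per peak
    return threshold + stats.genpareto.ppf(1.0 - p, shape, loc=0.0, scale=scale)

print("100-year wind speed:", round(return_level(100.0), 2))
```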

  14. Challenges of Particle Flow reconstruction in the CMS High-Granularity Calorimeter at the High-Luminosity LHC

    NASA Astrophysics Data System (ADS)

    Chlebana, Frank; CMS Collaboration

    2017-11-01

    The challenges of the High-Luminosity LHC (HL-LHC) are driven by the large number of overlapping proton-proton collisions (pileup) in each bunch-crossing and the extreme radiation dose to detectors at high pseudorapidity. To overcome this challenge CMS is developing an endcap electromagnetic+hadronic sampling calorimeter employing silicon sensors in the electromagnetic and front hadronic sections, comprising over 6 million channels, and highly-segmented plastic scintillators in the rear part of the hadronic section. This High-Granularity Calorimeter (HGCAL) will be the first of its kind used in a colliding beam experiment. Clustering deposits of energy over many cells and layers is a complex and challenging computational task, particularly in the high-pileup environment of HL-LHC. Baseline detector performance results are presented for electromagnetic and hadronic objects, and studies demonstrating the advantages of fine longitudinal and transverse segmentation are explored.

  15. Error driven remeshing strategy in an elastic-plastic shakedown problem

    NASA Astrophysics Data System (ADS)

    Pazdanowski, Michał J.

    2018-01-01

    A shakedown-based approach has for many years been successfully used to calculate the distributions of residual stresses in bodies made of elastic-plastic materials and subjected to cyclic loads exceeding their bearing capacity. The calculations performed indicated the existence of areas in the sought stress field characterized by extremely high gradients and rapid changes of sign over small regions. In order to resolve these sign changes, relatively dense nodal meshes had to be used in disproportionately large parts of the considered bodies, resulting in unnecessary expenditure of computer resources. Therefore, an effort was undertaken to limit the areas of high mesh density and to drive the mesh regeneration algorithm with selected error indicators.

  16. Streptococcus pneumoniae necrotizing fasciitis in systemic lupus erythematosus.

    PubMed

    Sánchez, A; Robaina, R; Pérez, G; Cairoli, E

    2016-04-01

    Necrotizing fasciitis is a rapidly progressive, destructive soft tissue infection with high mortality. Streptococcus pneumoniae as the etiologic agent of necrotizing fasciitis is extremely unusual. The increased susceptibility to Streptococcus pneumoniae infection in patients with systemic lupus erythematosus is probably a multifactorial phenomenon. We report the case of a 36-year-old Caucasian female with an 8-year history of systemic lupus erythematosus who presented with fatal Streptococcus pneumoniae necrotizing fasciitis. The role of computed tomography and the high yield of blood cultures for isolation of the causative microorganism are emphasized. Once the diagnosis is suspected, empiric antibiotic treatment must be prescribed and prompt surgical exploration is mandatory. © The Author(s) 2015.

  17. Model reductions using a projection formulation

    NASA Technical Reports Server (NTRS)

    De Villemagne, Christian; Skelton, Robert E.

    1987-01-01

    A new methodology for model reduction of MIMO systems exploits the notion of an oblique projection. A reduced model is uniquely defined by a projector whose range space, and the orthogonal complement of whose null space, are chosen among the ranges of generalized controllability and observability matrices. The reduced order models match various combinations (chosen by the designer) of four types of parameters of the full order system associated with (1) low frequency response, (2) high frequency response, (3) low frequency power spectral density, and (4) high frequency power spectral density. Thus, the proposed method is a computationally simple substitute for many existing methods, offers extreme flexibility to embrace combinations of existing methods, and provides some new features.
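
    A minimal sketch of reduction by an oblique (Petrov-Galerkin-type) projection is given below; the Krylov-style choice of the bases V and W and the toy system are illustrative assumptions, not the paper's specific construction.

```python
# Minimal numpy sketch of model reduction by an oblique projection: given a
# full-order system (A, B, C) and bases V, W, project onto range(V) "along"
# the complement of range(W). The Krylov-type choice of V and W is schematic.
import numpy as np

def oblique_reduce(A, B, C, V, W):
    """Reduced (Ar, Br, Cr) using the projector P = V (W^T V)^{-1} W^T."""
    M = np.linalg.inv(W.T @ V)          # assumes W^T V is nonsingular
    Ar = M @ (W.T @ A @ V)
    Br = M @ (W.T @ B)
    Cr = C @ V
    return Ar, Br, Cr

rng = np.random.default_rng(3)
n, r = 10, 3
A = rng.standard_normal((n, n)) - 5 * np.eye(n)   # stable-ish full-order system
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
# Schematic bases built from controllability/observability-type blocks.
V = np.column_stack([np.linalg.matrix_power(A, k) @ B for k in range(r)])
W = np.column_stack([np.linalg.matrix_power(A.T, k) @ C.T for k in range(r)])
Ar, Br, Cr = oblique_reduce(A, B, C, V, W)
print(Ar.shape, Br.shape, Cr.shape)
```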

  18. VLSI neuroprocessors

    NASA Technical Reports Server (NTRS)

    Kemeny, Sabrina E.

    1994-01-01

    Electronic and optoelectronic hardware implementations of highly parallel computing architectures address several ill-defined and/or computation-intensive problems not easily solved by conventional computing techniques. The concurrent processing architectures developed are derived from a variety of advanced computing paradigms including neural network models, fuzzy logic, and cellular automata. Hardware implementation technologies range from state-of-the-art digital/analog custom VLSI to advanced optoelectronic devices such as computer-generated holograms and e-beam-fabricated Dammann gratings. JPL's concurrent processing devices group has developed a broad technology base in hardware-implementable parallel algorithms, low-power and high-speed VLSI designs and building-block VLSI chips, leading to application-specific high-performance embeddable processors. Application areas include high-throughput map-data classification using feedforward neural networks, a terrain-based tactical movement planner using cellular automata, resource optimization (weapon-target assignment) using a multidimensional feedback network with lateral inhibition, and classification of rocks using an inner-product scheme on thematic mapper data. In addition to addressing specific functional needs of DOD and NASA, the JPL-developed concurrent processing device technology is also being customized for a variety of commercial applications (in collaboration with industrial partners), and is being transferred to U.S. industries. This viewgraph presentation focuses on two application-specific processors which solve the computation-intensive tasks of resource allocation (weapon-target assignment) and terrain-based tactical movement planning using two very different topologies. Resource allocation is implemented as an asynchronous analog competitive assignment architecture inspired by the Hopfield network. Hardware realization leads to a two to four order of magnitude speed-up over conventional techniques and enables multiple (many-to-many) assignments not achievable with standard statistical approaches. Tactical movement planning (finding the best path from A to B) is accomplished with a digital two-dimensional concurrent processor array. By exploiting the natural parallel decomposition of the problem in silicon, a four order of magnitude speed-up over optimized software approaches has been demonstrated.

  19. Computational data sciences for assessment and prediction of climate extremes

    NASA Astrophysics Data System (ADS)

    Ganguly, A. R.

    2011-12-01

    Climate extremes may be defined inclusively as severe weather events or large shifts in global or regional weather patterns which may be caused or exacerbated by natural climate variability or climate change. This area of research arguably represents one of the largest knowledge gaps in climate science relevant for informing resource managers and policy makers. While physics-based climate models are essential in view of non-stationary and nonlinear dynamical processes, their current pace of uncertainty reduction may not be adequate for urgent stakeholder needs. The structure of the models may in some cases preclude reduction of uncertainty for critical processes at the scales or for the extremes of interest. On the other hand, methods based on complex networks, extreme value statistics, machine learning, and space-time data mining have demonstrated significant promise to improve scientific understanding and generate enhanced predictions. When combined with conceptual process understanding at multiple spatiotemporal scales and designed to handle massive data, interdisciplinary data science methods and algorithms may complement or supplement physics-based models. Specific examples from the prior literature and our ongoing work suggest how data-guided improvements may be possible, for example, in the context of ocean meteorology, climate oscillators, teleconnections, and atmospheric process understanding, which in turn can improve projections of regional climate, precipitation extremes and tropical cyclones in a useful and interpretable fashion. A community-wide effort is motivated to develop and adapt computational data science tools for translating climate model simulations to information relevant for adaptation and policy, as well as for improving our scientific understanding of climate extremes from both observed and model-simulated data.

  20. Structural Design of a Horizontal-Axis Tidal Current Turbine Composite Blade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bir, G. S.; Lawson, M. J.; Li, Y.

    2011-10-01

    This paper describes the structural design of a tidal composite blade. The structural design is preceded by two steps: hydrodynamic design and determination of extreme loads. The hydrodynamic design provides the chord and twist distributions along the blade length that result in optimal performance of the tidal turbine over its lifetime. The extreme loads, i.e. the extreme flap and edgewise loads that the blade would likely encounter over its lifetime, are associated with extreme tidal flow conditions and are obtained using computational fluid dynamics (CFD) software. Given the blade external shape and the extreme loads, we use a laminate-theory-based structural design to determine the optimal layout of composite laminas such that the ultimate-strength and buckling-resistance criteria are satisfied at all points in the blade. The structural design approach allows for arbitrary specification of the chord, twist, and airfoil geometry along the blade and an arbitrary number of shear webs. In addition, certain fabrication criteria are imposed, for example, each composite laminate must be an integral multiple of its constituent ply thickness. In the present effort, the structural design uses only static extreme loads; dynamic-loads-based fatigue design will be addressed in the future. Following the blade design, we compute the distributed structural properties, i.e. flap stiffness, edgewise stiffness, torsion stiffness, mass, moments of inertia, elastic-axis offset, and center-of-mass offset along the blade. Such properties are required by hydro-elastic codes to model the tidal current turbine and to perform modal, stability, loads, and response analyses.

  1. Performance Analysis, Design Considerations, and Applications of Extreme-Scale In Situ Infrastructures

    DOE PAGES

    Ayachit, Utkarsh; Bauer, Andrew; Duque, Earl P. N.; ...

    2016-11-01

    A key trend facing extreme-scale computational science is the widening gap between computational and I/O rates, and the challenge that follows is how to best gain insight from simulation data when it is increasingly impractical to save it to persistent storage for subsequent visual exploration and analysis. One approach to this challenge is centered around the idea of in situ processing, where visualization and analysis processing is performed while data is still resident in memory. Our paper examines several key design and performance issues related to the idea of in situ processing at extreme scale on modern platforms: scalability, overhead, performance measurement and analysis, comparison and contrast with a traditional post hoc approach, and interfacing with simulation codes. We illustrate these principles in practice with studies, conducted on large-scale HPC platforms, that include a miniapplication and multiple science application codes, one of which demonstrates in situ methods in use at greater than 1M-way concurrency.

  2. The results of bone deformity correction using a spider frame with web-based software for lower extremity long bone deformities.

    PubMed

    Tekin, Ali Çağrı; Çabuk, Haluk; Dedeoğlu, Süleyman Semih; Saygılı, Mehmet Selçuk; Adaş, Müjdat; Esenyel, Cem Zeki; Büyükkurt, Cem Dinçay; Tonbul, Murat

    2016-03-22

    The aim was to present the functional and radiological results and to evaluate the effectiveness of a computer-assisted external fixator (spider frame) in patients with lower extremity shortening and deformity. The study comprised 17 patients (14 male, 3 female) who were treated for lower extremity long bone deformity and shortening between 2012 and 2015 using a spider frame. The procedure's level of difficulty was determined preoperatively using the Paley scale. Postoperatively, the results for the patients who underwent tibial operations were evaluated using the Paley criteria modified by ASAMI, and the results for the patients who underwent femoral operations were evaluated according to the Paley scoring system. The evaluations were made by calculating the External Fixator and Distraction Indexes. The mean age of the patients was 24.58 years (range, 5-51 years). The spider frame was applied to the femur in 10 patients and to the tibia in seven. The mean follow-up period was 15 months (range, 6-31 months) from the operation day, and the mean amount of lengthening was 3.0 cm (range, 1-6 cm). The mean duration of fixator application was 202.7 days (range, 104-300 days). The mean External Fixator Index was 98 days/cm (range, 42-265 days/cm). The mean Distraction Index was 10.49 days/cm (range, 10-14 days/cm). The computer-assisted external fixator system (spider frame) achieves single-stage correction in cases of both deformity and shortening. The system can be applied easily, and because of its high-tech software, it offers the possibility of postoperative correction of the deformity.

  3. Enhancing coherence in molecular spin qubits via atomic clock transitions

    NASA Astrophysics Data System (ADS)

    Shiddiq, Muhandis; Komijani, Dorsa; Duan, Yan; Gaita-Ariño, Alejandro; Coronado, Eugenio; Hill, Stephen

    2016-03-01

    Quantum computing is an emerging area within the information sciences revolving around the concept of quantum bits (qubits). A major obstacle is the extreme fragility of these qubits due to interactions with their environment that destroy their quantumness. This phenomenon, known as decoherence, is of fundamental interest. There are many competing candidates for qubits, including superconducting circuits, quantum optical cavities, ultracold atoms and spin qubits, and each has its strengths and weaknesses. When dealing with spin qubits, the strongest source of decoherence is the magnetic dipolar interaction. To minimize it, spins are typically diluted in a diamagnetic matrix. For example, this dilution can be taken to the extreme of a single phosphorus atom in silicon, whereas in molecular matrices a typical ratio is one magnetic molecule per 10,000 matrix molecules. However, there is a fundamental contradiction between reducing decoherence by dilution and allowing quantum operations via the interaction between spin qubits. To resolve this contradiction, the design and engineering of quantum hardware can benefit from a ‘bottom-up’ approach whereby the electronic structure of magnetic molecules is chemically tailored to give the desired physical behaviour. Here we present a way of enhancing coherence in solid-state molecular spin qubits without resorting to extreme dilution. It is based on the design of molecular structures with crystal field ground states possessing large tunnelling gaps that give rise to optimal operating points, or atomic clock transitions, at which the quantum spin dynamics become protected against dipolar decoherence. This approach is illustrated with a holmium molecular nanomagnet in which long coherence times (up to 8.4 microseconds at 5 kelvin) are obtained at unusually high concentrations. This finding opens new avenues for quantum computing based on molecular spin qubits.

  4. Computational methods using weighed-extreme learning machine to predict protein self-interactions with protein evolutionary information.

    PubMed

    An, Ji-Yong; Zhang, Lei; Zhou, Yong; Zhao, Yu-Jun; Wang, Da-Fu

    2017-08-18

    Self-interacting proteins (SIPs) are important for biological activity owing to the inherent interactions amongst their secondary structures or domains. However, given the limitations of experimental self-interaction detection, a major challenge in predicting SIPs is how to exploit computational approaches based on the evolutionary information contained in protein sequences. In this work, we present a novel computational approach named WELM-LAG, which combines the Weighed-Extreme Learning Machine (WELM) classifier with Local Average Group (LAG) features to predict SIPs from protein sequence. The major improvement of our method lies in an effective feature extraction method used to represent candidate self-interacting proteins by exploring the evolutionary information embedded in the PSI-BLAST-constructed position-specific scoring matrix (PSSM), followed by a reliable and robust WELM classifier to carry out classification. In addition, Principal Component Analysis (PCA) is used to reduce the impact of noise. The WELM-LAG method gave very high average accuracies of 92.94% and 96.74% on yeast and human datasets, respectively. Meanwhile, we compared it with the state-of-the-art support vector machine (SVM) classifier and other existing methods on the human and yeast datasets. Comparative results indicated that our approach is very promising and may provide a cost-effective alternative for predicting SIPs. In addition, we developed a freely available web server called WELM-LAG-SIPs to predict SIPs. The web server is available at http://219.219.62.123:8888/WELMLAG/ .
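
    A hedged sketch of a weighted extreme learning machine classifier in the spirit of the method described is shown below; the paper's LAG/PSSM feature extraction is not reproduced, and the class weighting, activation and regularization choices are assumptions.

```python
# Minimal weighted extreme learning machine (WELM) binary classifier: a random
# hidden layer followed by class-weighted, regularized least squares for the
# output weights. Inputs are generic feature vectors; all names are illustrative.
import numpy as np

class WELM:
    def __init__(self, n_hidden=200, C=1.0, seed=0):
        self.n_hidden, self.C = n_hidden, C
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n, d = X.shape
        self.W_in = self.rng.standard_normal((d, self.n_hidden))
        self.bias = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W_in + self.bias)        # random hidden features
        w = 1.0 / np.bincount(y)[y]                   # inverse class frequency
        t = np.where(y == 1, 1.0, -1.0)               # +/-1 targets
        # beta = (H^T W H + I/C)^{-1} H^T W t   (weighted ridge regression)
        HtWH = H.T @ (H * w[:, None])
        HtWt = H.T @ (w * t)
        self.beta = np.linalg.solve(HtWH + np.eye(self.n_hidden) / self.C, HtWt)
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W_in + self.bias) @ self.beta > 0).astype(int)

# Toy imbalanced problem: 10-dimensional features, minority positive class.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 10))
y = (X[:, 0] + 0.3 * rng.standard_normal(500) > 1.3).astype(int)
print("training accuracy:", (WELM().fit(X, y).predict(X) == y).mean())
```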

  5. A New Volumetric Radiologic Method to Assess Indirect Decompression After Extreme Lateral Interbody Fusion Using High-Resolution Intraoperative Computed Tomography.

    PubMed

    Navarro-Ramirez, Rodrigo; Berlin, Connor; Lang, Gernot; Hussain, Ibrahim; Janssen, Insa; Sloan, Stephen; Askin, Gulce; Avila, Mauricio J; Zubkov, Micaella; Härtl, Roger

    2018-01-01

    Two-dimensional radiographic methods have been proposed to evaluate the radiographic outcome after indirect decompression through extreme lateral interbody fusion (XLIF). However, the assessment of neural decompression in a single plane may underestimate the effect of indirect decompression on central canal and foraminal volumes. The present study aimed to assess the reliability and consistency of a novel three-dimensional radiographic method that assesses neural decompression by volumetric analysis, using a new generation of intraoperative fan-beam computed tomography scanner, in patients undergoing XLIF. Prospectively collected data from 7 patients (9 levels) undergoing XLIF were retrospectively analyzed. Three independent, blinded raters using imaging analysis software performed volumetric measurements pre- and postoperatively to determine central canal and foraminal volumes. Intra- and interrater reliability tests were performed to assess the reliability of this novel volumetric method. The interrater reliability between the three raters ranged from 0.800 to 0.952, P < 0.0001. The test-retest analysis on a randomly selected subset of three patients showed good to excellent internal reliability (range 0.78-1.00) for all three raters. There was a significant increase in mean volume of approximately 20% for the right foramen, left foramen, and central canal postoperatively (P = 0.0472; P = 0.0066; P = 0.0003, respectively). Here we demonstrate a new volumetric analysis technique that is feasible, reliable, and reproducible amongst independent raters for central canal and foraminal volumes in the lumbar spine using an intraoperative computed tomography scanner. Copyright © 2017. Published by Elsevier Inc.

  6. Using modified fruit fly optimisation algorithm to perform the function test and case studies

    NASA Astrophysics Data System (ADS)

    Pan, Wen-Tsao

    2013-06-01

    Evolutionary computation is a computing paradigm established by simulating natural evolutionary processes based on Darwinian theory, and it is a common research method. The main contribution of this paper is to strengthen the search for the optimal solution in the fruit fly optimization algorithm (FOA), so as to avoid becoming trapped in local extrema. Evolutionary computation has grown to include the concepts of animal foraging behaviour and group behaviour. This study discussed three common evolutionary computation methods and compared them with the modified fruit fly optimization algorithm (MFOA). It further investigated the ability of the algorithms to compute the extreme values of three mathematical functions, as well as their execution speed and the forecast ability of the forecasting model built using the optimized general regression neural network (GRNN) parameters. The findings indicated that there was no obvious difference between particle swarm optimization and the MFOA with regard to the ability to compute extreme values; however, both were better than the artificial fish swarm algorithm and the FOA. In addition, the MFOA performed better than particle swarm optimization with regard to execution speed, and the forecast ability of the forecasting model built using the MFOA's GRNN parameters was better than that of the other three forecasting models.
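
    The baseline search loop that the modified algorithm builds on can be sketched as follows; the test function, swarm size and search radius are illustrative assumptions, and the paper's specific modifications are not reproduced.

```python
# Minimal sketch of the basic fruit fly optimization algorithm (FOA) for a
# one-dimensional test function; only the baseline smell-based search loop
# is shown, not the modified (MFOA) variant of the paper.
import numpy as np

def foa_minimize(fitness, n_flies=30, n_iter=200, seed=4):
    rng = np.random.default_rng(seed)
    x_axis, y_axis = rng.uniform(-1, 1, 2)           # initial swarm location
    best_s, best_val = None, np.inf
    for _ in range(n_iter):
        # Each fly searches randomly around the swarm using its sense of smell.
        x = x_axis + rng.uniform(-1, 1, n_flies)
        y = y_axis + rng.uniform(-1, 1, n_flies)
        dist = np.sqrt(x**2 + y**2)
        s = 1.0 / dist                                # smell concentration value
        smell = np.array([fitness(si) for si in s])
        i = smell.argmin()
        if smell[i] < best_val:                       # keep the best smell found
            best_val, best_s = smell[i], s[i]
            x_axis, y_axis = x[i], y[i]               # swarm flies to that spot
    return best_s, best_val

# Example: minimize f(s) = (s - 3)^2, whose extremum is at s = 3.
print(foa_minimize(lambda s: (s - 3.0) ** 2))
```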

  7. Optimal dynamic remapping of parallel computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Reynolds, Paul F., Jr.

    1987-01-01

    A large class of computations is characterized by a sequence of phases, with phase changes occurring unpredictably. The decision problem of remapping workload to processors in a parallel computation is considered when the utility of remapping and the future behavior of the workload are uncertain; execution requirements are stable during a given phase but may change radically between phases. For such problems, a workload assignment generated for one phase may hinder performance during the next phase. This problem is treated formally for a probabilistic model of computation with at most two phases. The fundamental problem of balancing the expected remapping performance gain against the delay cost is addressed. Stochastic dynamic programming is used to show that the remapping decision policy minimizing the expected running time of the computation has an extremely simple structure. Because the gain may not be predictable, the performance of a heuristic policy that does not require estimation of the gain is examined. The heuristic method's feasibility is demonstrated by its use on an adaptive fluid dynamics code on a multiprocessor. The results suggest that, except in extreme cases, the remapping decision problem is essentially that of dynamically determining whether gain can be achieved by remapping after a phase change. The results also suggest that this heuristic is applicable to computations with more than two phases.

  8. An approximate, maximum terminal velocity descent to a point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eisler, G.R.; Hull, D.G.

    1987-01-01

    No closed form control solution exists for maximizing the terminal velocity of a hypersonic glider at an arbitrary point. As an alternative, this study uses neighboring extremal theory to provide a sampled data feedback law to guide the vehicle to a constrained ground range and altitude. The guidance algorithm is divided into two parts: 1) computation of a nominal, approximate, maximum terminal velocity trajectory to a constrained final altitude and computation of the resulting unconstrained groundrange, and 2) computation of the neighboring extremal control perturbation at the sample value of flight path angle to compensate for changes in the approximate physical model and enable the vehicle to reach the on-board computed groundrange. The trajectories are characterized by glide and dive flight to the target to minimize the time spent in the denser parts of the atmosphere. The proposed on-line scheme successfully brings the final altitude and range constraints together, as well as compensates for differences in flight model, atmosphere, and aerodynamics at the expense of guidance update computation time. Comparison with an independent, parameter optimization solution for the terminal velocity is excellent. 6 refs., 3 figs.

  9. New Developments and Geoscience Applications of Synchrotron Computed Microtomography (Invited)

    NASA Astrophysics Data System (ADS)

    Rivers, M. L.; Wang, Y.; Newville, M.; Sutton, S. R.; Yu, T.; Lanzirotti, A.

    2013-12-01

    Computed microtomography is the extension to micron spatial resolution of the CAT scanning technique developed for medical imaging. Synchrotron sources are ideal for the method, since they provide a monochromatic, parallel beam with high intensity. High energy storage rings such as the Advanced Photon Source at Argonne National Laboratory produce x-rays with high energy, high brilliance, and high coherence. All of these factors combine to produce an extremely powerful imaging tool for earth science research. Techniques that have been developed include: - Absorption and phase contrast computed tomography with spatial resolution below one micron. - Differential contrast computed tomography, imaging above and below the absorption edge of a particular element. - High-pressure tomography, imaging inside a pressure cell at pressures above 10GPa. - High speed radiography and tomography, with 100 microsecond temporal resolution. - Fluorescence tomography, imaging the 3-D distribution of elements present at ppm concentrations. - Radiographic strain measurements during deformation at high confining pressure, combined with precise x-ray diffraction measurements to determine stress. These techniques have been applied to important problems in earth and environmental sciences, including: - The 3-D distribution of aqueous and organic liquids in porous media, with applications in contaminated groundwater and petroleum recovery. - The kinetics of bubble formation in magma chambers, which control explosive volcanism. - Studies of the evolution of the early solar system from 3-D textures in meteorites - Accurate crystal size distributions in volcanic systems, important for understanding the evolution of magma chambers. - The equation-of-state of amorphous materials at high pressure using both direct measurements of volume as a function of pressure and also by measuring the change x-ray absorption coefficient as a function of pressure. - The location and chemical speciation of toxic elements such as arsenic and nickel in soils and in plant tissues in contaminated Superfund sites. - The strength of earth materials under the pressure and temperature conditions of the Earth's mantle, providing insights into plate tectonics and the generation of earthquakes.

  10. Muscle atrophy in chronic inflammatory demyelinating polyneuropathy: a computed tomography assessment.

    PubMed

    Ohyama, K; Koike, H; Katsuno, M; Takahashi, M; Hashimoto, R; Kawagashira, Y; Iijima, M; Adachi, H; Watanabe, H; Sobue, G

    2014-07-01

    Muscle atrophy is generally mild in patients with chronic inflammatory demyelinating polyneuropathy (CIDP) compared with the severity and duration of the muscle weakness. Muscle atrophy was evaluated using computed tomography (CT) in patients with CIDP. Thirty-one patients with typical CIDP who satisfied the diagnostic criteria for the definite CIDP classification proposed by the European Federation of Neurological Societies and the Peripheral Nerve Society were assessed. The clinicopathological findings in patients with muscle atrophy were also compared with those in patients without atrophy. Computed tomography revealed marked muscle atrophy, with findings suggestive of fatty degeneration, in 11 of the 31 patients with CIDP. CT-assessed muscle atrophy was found in the lower extremities, particularly in the ankle plantarflexor muscles. Muscle weakness, which reflects the presence of muscle atrophy, tended to be more pronounced in the lower extremities than in the upper extremities in patients with muscle atrophy, whereas the upper and lower limbs tended to be equally affected in patients without muscle atrophy. Nerve conduction examinations revealed significantly greater reductions in compound muscle action potential amplitudes in the tibial nerves of patients with muscle atrophy. Sural nerve biopsy findings were similar in both groups. The functional prognoses after immunomodulatory therapies were significantly poorer amongst patients with muscle atrophy. Muscle atrophy was present in a subgroup of patients with CIDP, including patients with a typical form of the disease. These patients tended to demonstrate predominant motor impairments of the lower extremities and poorer functional prognoses. © 2014 The Author(s) European Journal of Neurology © 2014 EFNS.

  11. Biomechanical loading on the upper extremity increases from single key tapping to directional tapping.

    PubMed

    Qin, Jin; Trudeau, Matthieu; Katz, Jeffrey N; Buchholz, Bryan; Dennerlein, Jack T

    2011-08-01

    Musculoskeletal disorders associated with computer use span the joints of the upper extremity. Computing typically involves tapping in multiple directions. Thus, we sought to describe the loading on the finger, wrist, elbow and shoulder joints in terms of kinematic and kinetic differences from single key switch tapping to directional tapping on multiple keys. An experiment with a repeated-measures design was conducted. Six subjects tapped with their right index finger on a stand-alone number keypad placed horizontally under three conditions: (1) on a single key switch (the number key 5); (2) left and right on number keys 4 and 6; (3) top and bottom on number keys 8 and 2. A force-torque transducer underneath the keypad measured the fingertip force. An active-marker infrared motion analysis system measured the kinematics of the fingertip, hand, forearm, upper arm and torso. Joint moments for the metacarpophalangeal, wrist, elbow, and shoulder joints were estimated using inverse dynamics. Tapping in the top-bottom orientation introduced the largest biomechanical loading on the upper extremity, especially for the proximal joints, followed by tapping in the left-right orientation; the lowest loading was observed during single key switch tapping. Directional tapping on average increased the fingertip force, joint excursion, and peak-to-peak joint torque by 45%, 190% and 55%, respectively. Identifying the biomechanical loading patterns associated with these fundamental movements of keying improves the understanding of the risks of upper extremity musculoskeletal disorders for computer keyboard users. Copyright © 2010 Elsevier Ltd. All rights reserved.

  12. Integrating surrogate models into subsurface simulation framework allows computation of complex reactive transport scenarios

    NASA Astrophysics Data System (ADS)

    De Lucia, Marco; Kempka, Thomas; Jatnieks, Janis; Kühn, Michael

    2017-04-01

    Reactive transport simulations - where geochemical reactions are coupled with hydrodynamic transport of reactants - are extremely time consuming and suffer from significant numerical issues. Given the high uncertainties inherently associated with the geochemical models, which also constitute the major computational bottleneck, such requirements may seem inappropriate and probably constitute the main limitation to their wide application. A promising way to ease and speed up such coupled simulations is to employ statistical surrogates instead of "full-physics" geochemical models [1]. Data-driven surrogates are reduced models obtained from a set of pre-calculated "full physics" simulations, capturing their principal features while being extremely fast to compute. Model reduction of course comes at the price of a precision loss; however, this appears justified in the presence of large uncertainties in the parametrization of geochemical processes. This contribution illustrates the integration of surrogates into the flexible simulation framework currently being developed by the authors' research group [2]. The high-level language of choice for obtaining and dealing with surrogate models is R, which profits from state-of-the-art methods for the statistical analysis of large simulation ensembles. A stand-alone advective mass transport module was furthermore developed in order to add this capability to any multiphase finite volume hydrodynamic simulator within the simulation framework. We present 2D and 3D case studies benchmarking the performance of surrogates and "full physics" chemistry in scenarios pertaining to the assessment of geological subsurface utilization. [1] Jatnieks, J., De Lucia, M., Dransch, D., Sips, M.: "Data-driven surrogate model approach for improving the performance of reactive transport simulations.", Energy Procedia 97, 2016, p. 447-453. [2] Kempka, T., Nakaten, B., De Lucia, M., Nakaten, N., Otto, C., Pohl, M., Chabab [Tillner], E., Kühn, M.: "Flexible Simulation Framework to Couple Processes in Complex 3D Models for Subsurface Utilization Assessment.", Energy Procedia, 97, 2016, p. 494-501.
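
    The surrogate idea can be illustrated with a small sketch in which a regressor trained on pre-computed runs stands in for the expensive geochemical solver; the toy response function, the random-forest choice and all names are assumptions (the authors work in R within their own framework).

```python
# Hedged illustration of a data-driven surrogate: replace an expensive
# geochemical solver with a regressor trained on an ensemble of pre-computed
# "full physics" runs. The toy response below stands in for a real reaction step.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)

def full_physics_chemistry(inputs):
    """Placeholder for the expensive geochemical model (one solver call)."""
    c, t = inputs[:, 0], inputs[:, 1]             # concentration, temperature
    return np.tanh(2.0 * c) * (1.0 + 0.01 * t)    # toy equilibrium response

# Design of experiments: sample inputs, run the expensive model once, offline.
X_train = rng.uniform([0.0, 10.0], [1.0, 90.0], size=(500, 2))
y_train = full_physics_chemistry(X_train)

surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_train, y_train)

# Inside the transport loop the surrogate replaces the solver call.
X_new = rng.uniform([0.0, 10.0], [1.0, 90.0], size=(5, 2))
print(np.abs(surrogate.predict(X_new) - full_physics_chemistry(X_new)).max())
```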

  13. The Nike Laser Facility and its Capabilities

    NASA Astrophysics Data System (ADS)

    Serlin, V.; Aglitskiy, Y.; Chan, L. Y.; Karasik, M.; Kehne, D. M.; Oh, J.; Obenschain, S. P.; Weaver, J. L.

    2013-10-01

    The Nike laser is a 56-beam krypton fluoride (KrF) system that provides 3 to 4 kJ of laser energy on target. The laser uses induced spatial incoherence to achieve highly uniform focal distributions. 44 beams are overlapped onto the target with peak intensities up to 10^16 W/cm2. The effective time-averaged illumination nonuniformity is <0.2%. Nike produces highly uniform ablation pressures on target, allowing well-controlled experiments at pressures up to 20 Mbar. The other 12 laser beams are used to generate diagnostic x-rays for radiographing the primary laser-illuminated target. The facility includes a front end that generates the desired temporal and spatial laser profiles, two electron-beam-pumped KrF amplifiers, a computer-controlled optical system, and a vacuum target chamber for experiments. Nike is used to study the physics and technology issues of direct-drive laser fusion, such as hydrodynamic and laser-plasma instabilities, the response of materials to extreme pressures, and the generation of x-rays from laser-heated targets. Nike features a computer-controlled data acquisition system, high-speed, high-resolution x-ray and visible imaging systems, x-ray and visible spectrometers, and cryogenic target capability. Work supported by DOE/NNSA.

  14. Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biros, George

    Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model and the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for the construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is the construction of likelihood functions that capture unmodeled couplings between observations and parameters. We created parallel algorithms for non-parametric density estimation using high-dimensional N-body methods and combined them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a central challenge in UQ, especially for large-scale models. We developed the mathematical tools to address these challenges in the context of extreme-scale problems. 4. Parallel scalable algorithms for Bayesian optimal experimental design (OED). Bayesian inversion yields quantified uncertainties in the model parameters, which can be propagated forward through the model to yield uncertainty in outputs of interest. This opens the way for designing new experiments to reduce the uncertainties in the model parameters and model predictions. Such experimental design problems have been intractable for large-scale problems using conventional methods; we created OED algorithms that exploit the structure of the PDE model and the parameter-to-output map to overcome these challenges. Parallel algorithms for these four problems were created, analyzed, prototyped, implemented, tuned, and scaled up for leading-edge supercomputers, including UT-Austin's own 10 petaflops Stampede system, ANL's Mira system, and ORNL's Titan system. While our focus is on fundamental mathematical/computational methods and algorithms, we assessed our methods on model problems derived from several DOE mission applications, including multiscale mechanics and ice sheet dynamics.

  15. High-resolution downscaling for hydrological management

    NASA Astrophysics Data System (ADS)

    Ulbrich, Uwe; Rust, Henning; Meredith, Edmund; Kpogo-Nuwoklo, Komlan; Vagenas, Christos

    2017-04-01

    Hydrological modellers and water managers require high-resolution climate data to model regional hydrologies and how these may respond to future changes in the large-scale climate. The ability to successfully model such changes, and by extension critical infrastructure planning, is often impeded by a lack of suitable climate data. This typically takes the form of too-coarse data from climate models, which are not sufficiently detailed in either space or time to support water management decisions and hydrological research. BINGO (Bringing INnovation in onGOing water management) aims to bridge the gap between the needs of hydrological modellers and planners and the currently available range of climate data, with the overarching aim of providing adaptation strategies for climate change-related challenges. Producing the kilometre- and sub-daily-scale climate data needed by hydrologists through continuous simulations is generally computationally infeasible. To circumvent this hurdle, we adopt a two-pronged approach involving (1) selective dynamical downscaling and (2) conditional stochastic weather generators, with the former presented here. We take an event-based approach to downscaling in order to achieve the kilometre-scale input needed by hydrological modellers. Computational expenses are minimized by identifying extremal weather patterns for each BINGO research site in lower-resolution simulations and then downscaling to the kilometre (convection-permitting) scale only those events during which such patterns occur. Here we (1) outline the methodology behind the selection of the events, and (2) compare the modelled precipitation distribution and variability (preconditioned on the extremal weather patterns) with that found in observations.

  16. A Comparative Distributed Evaluation of the NWS-RDHM using Shape Matching and Traditional Measures with In Situ and Remotely Sensed Information

    NASA Astrophysics Data System (ADS)

    KIM, J.; Bastidas, L. A.

    2011-12-01

    We evaluate, calibrate and diagnose the performance of the National Weather Service RDHM distributed model over the Durango River Basin in Colorado, simultaneously using in situ and remotely sensed information: discharge from several USGS gaging stations, in situ snow cover (SCV) and snow water equivalent (SWE) from several SNOTEL sites, and snow information distributed over the catchment from remote sensing (NOAA-NASA). In the process of evaluation we attempt to establish, by calibration, the optimal degree of parameter distribution over the catchment. A multi-criteria approach based on traditional measures (RMSE) and similarity-based pattern comparisons using the Hausdorff distance and Earth Mover's Distance (EMD) is used for the overall evaluation of model performance. These pattern-based (shape matching) approaches are found to be extremely relevant for accounting for the relatively large inaccuracy of the remotely sensed SWE (judged inaccurate in terms of value but reliable in terms of distribution pattern) and the high reliability of the SCV (a yes/no situation), while at the same time allowing for an evaluation that quantifies the accuracy of the model over the entire catchment considering the different types of observations. The Hausdorff norm, due to its intrinsically multi-dimensional nature, allows for the incorporation of variables such as terrain elevation as one of the evaluation variables. The EMD, because of its extremely high computational cost, requires mapping the set of evaluation variables into a two-dimensional matrix for computation.
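
    As an illustration of the shape-matching measures named above, the following sketch computes a symmetric Hausdorff distance between two binary patterns represented as point sets; the toy "snow cover" masks are purely illustrative.

```python
# Sketch of a shape-matching comparison: the symmetric Hausdorff distance
# between an observed and a simulated snow-cover pattern, each represented as
# a set of grid-cell coordinates. Data are invented for illustration.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (n x d arrays)."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

# Toy "snow covered" cells (row, col) from observations and from the model.
observed = np.argwhere(np.triu(np.ones((20, 20)), k=5) > 0).astype(float)
simulated = np.argwhere(np.triu(np.ones((20, 20)), k=7) > 0).astype(float)
print("Hausdorff distance:", hausdorff(observed, simulated))
```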

  17. Homogeneous spectroscopic parameters for bright planet host stars from the northern hemisphere . The impact on stellar and planetary mass

    NASA Astrophysics Data System (ADS)

    Sousa, S. G.; Santos, N. C.; Mortier, A.; Tsantaki, M.; Adibekyan, V.; Delgado Mena, E.; Israelian, G.; Rojas-Ayala, B.; Neves, V.

    2015-04-01

    Aims: In this work we derive new precise and homogeneous parameters for 37 stars with planets. For this purpose, we analyze high resolution spectra obtained by the NARVAL spectrograph for a sample composed of bright planet host stars in the northern hemisphere. The new parameters are included in the SWEET-Cat online catalogue. Methods: To ensure that the catalogue is homogeneous, we use our standard spectroscopic analysis procedure, ARES+MOOG, to derive effective temperatures, surface gravities, and metallicities. These spectroscopic stellar parameters are then used as input to compute the stellar mass and radius, which are fundamental for the derivation of the planetary mass and radius. Results: We show that the spectroscopic parameters, masses, and radii are generally in good agreement with the values available in online databases of exoplanets. There are some exceptions, especially for the evolved stars. These are analyzed in detail focusing on the effect of the stellar mass on the derived planetary mass. Conclusions: We conclude that the stellar mass estimations for giant stars should be managed with extreme caution when using them to compute the planetary masses. We report examples within this sample where the differences in planetary mass can be as high as 100% in the most extreme cases. Based on observations obtained at the Telescope Bernard Lyot (USR5026) operated by the Observatoire Midi-Pyrénées and the Institut National des Science de l'Univers of the Centre National de la Recherche Scientifique of France (Run ID L131N11 - OPTICON_2013A_027).

  18. The climatic characteristics of extreme precipitations for short-term intervals in the watershed of Lake Maggiore

    NASA Astrophysics Data System (ADS)

    Saidi, Helmi; Ciampittiello, Marzia; Dresti, Claudia; Ghiglieri, Giorgio

    2013-07-01

    Alpine and Mediterranean areas are undergoing a profound change in the typology and distribution of rainfall. In particular, there has been an increase in consecutive non-rainy days and an escalation of extreme rainy events. The climatic characteristics of extreme precipitation over short time intervals are studied in the watershed of Lake Maggiore, the second largest freshwater basin in Italy (located in the north-west of the country) and an important resource for tourism, fishing and commercial flower growing. Historical extreme rainfall series with high resolution, from 5 to 45 min and above (1, 2, 3, 6, 12 and 24 h), collected at gauges located at representative sites in the watershed of Lake Maggiore, have been used to perform a regional frequency analysis of annual maximum precipitation based on the L-moments approach and to produce growth curves for rainfall events with different return periods. Because of different rainfall-generating mechanisms in the watershed of Lake Maggiore, related among other factors to elevation, no single parent distribution could be found for the entire study area. This paper gives a first view of the temporal change and evolution of annual maximum precipitation, focusing particularly on both heavy and extreme events recorded at time intervals ranging from a few minutes to 24 h, and also describes the creation of an extreme storm precipitation database starting from historical sub-daily precipitation series distributed over the territory. Changes in the occurrence of extreme rainfall events over the last 23 years, from 1987 to 2009, are twofold: little change is observed in the 720-min and 24-h precipitation, but the change seen in the 5, 10, 15, 20, 30, 45, 60, 120, 180 and 360 min events is significant. In fact, during the 2000s, growth curves have flattened and annual maxima have decreased.
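
    A companion sketch of the frequency-analysis step is given below, fitting a GEV distribution to synthetic annual maxima and deriving growth-curve-style return levels; note that scipy's maximum-likelihood fit is used here in place of the paper's regional L-moments estimator, and the data are invented.

```python
# Illustrative annual-maxima frequency analysis: fit a GEV distribution and
# compute return levels, normalized to a dimensionless growth curve.
import numpy as np
from scipy import stats

# Toy series of 60 annual maxima of 30-min rainfall (mm).
annual_max_30min = stats.genextreme.rvs(c=-0.1, loc=20.0, scale=6.0,
                                        size=60, random_state=42)

c, loc, scale = stats.genextreme.fit(annual_max_30min)   # MLE fit (not L-moments)
return_periods = np.array([2, 10, 50, 100])
return_levels = stats.genextreme.ppf(1.0 - 1.0 / return_periods, c,
                                     loc=loc, scale=scale)
growth_curve = return_levels / np.median(annual_max_30min)  # dimensionless
print(dict(zip(return_periods.tolist(), growth_curve.round(2))))
```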

  19. Mechanistic experimental pain assessment in computer users with and without chronic musculoskeletal pain.

    PubMed

    Ge, Hong-You; Vangsgaard, Steffen; Omland, Øyvind; Madeleine, Pascal; Arendt-Nielsen, Lars

    2014-12-06

    Musculoskeletal pain from the upper extremity and shoulder region is commonly reported by computer users. However, the functional status of central pain mechanisms, i.e., central sensitization and conditioned pain modulation (CPM), has not been investigated in this population. The aim was to evaluate sensitization and CPM in computer users with and without chronic musculoskeletal pain. Pressure pain threshold (PPT) mapping in the neck-shoulder (15 points) and the elbow (12 points) was assessed together with PPT measurement at the mid-point of the tibialis anterior (TA) muscle among 47 computer users with chronic pain in the upper extremity and/or neck-shoulder region (pain group) and 17 pain-free computer users (control group). Induced pain intensities and profiles over time were recorded using a 0-10 cm electronic visual analogue scale (VAS) in response to different levels of pressure stimuli on the forearm with a new technique of dynamic pressure algometry. The efficiency of CPM was assessed using cuff-induced pain as the conditioning pain stimulus and PPT at TA as the test stimulus. The demographics, job seniority and number of working hours/week using a computer were similar between groups. The PPTs measured at all 15 points in the neck-shoulder region were not significantly different between groups. There were no significant differences between groups in either the PPTs or the pain intensity induced by dynamic pressure algometry. No significant difference in PPT was observed in TA between groups. During CPM, a significant increase in PPT at TA was observed in both groups (P < 0.05) without significant differences between groups. For the chronic pain group, higher clinical pain intensity, lower PPT values from the neck-shoulder region and higher pain intensity evoked by the roller were all correlated with less efficient descending pain modulation (P < 0.05). This suggests that the excitability of the central pain system is normal in a large group of computer users with low-intensity chronic upper extremity and/or neck-shoulder pain and that increased excitability of the pain system cannot explain the reported pain. However, computer users with higher pain intensity and lower PPTs were found to have decreased efficiency in descending pain modulation.

  20. The Relationships Between the Trends of Mean and Extreme Precipitation

    NASA Technical Reports Server (NTRS)

    Zhou, Yaping; Lau, William K.-M.

    2017-01-01

    This study provides a better understanding of the relationships between the trends of mean and extreme precipitation in two observed precipitation data sets: the Climate Prediction Center Unified daily precipitation data set and the Global Precipitation Climatology Project (GPCP) pentad data set. The study employs three definitions of extreme precipitation based on local statistics: (1) percentile thresholds, (2) standard deviation and (3) generalized extreme value (GEV) distribution analysis of extreme events. The relationship between trends in the mean and extreme precipitation is identified with a novel metric, the area-aggregated matching ratio (AAMR), computed on regional and global scales. Generally, more (fewer) extreme events are likely to occur in regions with a positive (negative) mean trend. The match between the mean and extreme trends deteriorates for increasingly heavy precipitation events. The AAMR is higher in regions with negative mean trends than in regions with positive mean trends, suggesting a higher likelihood of severe dry events, compared with heavy rain events, in a warming climate. AAMR is found to be higher in the tropics and over the oceans than in the extratropics and over land, reflecting a higher degree of randomness and more important dynamical, rather than thermodynamical, contributions to extreme events in the latter regions.
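
    The abstract does not spell out the AAMR formula, so the sketch below only illustrates the general idea under stated assumptions: per-grid-cell linear trends are computed for the annual mean and for counts of days above a high percentile, and the "matching ratio" is taken as the area-weighted fraction of cells where the two trends share a sign. The data here are synthetic.

```python
import numpy as np

def trend(series):
    """Least-squares linear trend (slope per time step)."""
    t = np.arange(series.shape[0])
    return np.polyfit(t, series, 1)[0]

def matching_ratio(daily_precip, area_weights, pct=95.0):
    """Area-weighted fraction of grid cells where the trend of annual-mean
    precipitation and the trend of extreme-day counts have the same sign.
    daily_precip has shape (years, days_per_year, cells); this is an
    assumed reconstruction of the AAMR idea, not the paper's exact metric."""
    annual_mean = daily_precip.mean(axis=1)                   # (years, cells)
    thresh = np.percentile(daily_precip, pct, axis=(0, 1))    # per-cell threshold
    extreme_days = (daily_precip > thresh).sum(axis=1)        # (years, cells)
    mean_tr = np.apply_along_axis(trend, 0, annual_mean)
    ext_tr = np.apply_along_axis(trend, 0, extreme_days)
    match = np.sign(mean_tr) == np.sign(ext_tr)
    return np.sum(area_weights * match) / np.sum(area_weights)

# synthetic example: 30 years x 365 days x 50 grid cells
rng = np.random.default_rng(1)
precip = rng.gamma(shape=0.5, scale=4.0, size=(30, 365, 50))
weights = np.ones(50)
print(round(matching_ratio(precip, weights), 2))
```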

  1. New Developments of Computational Fluid Dynamics and Their Applications to Practical Engineering Problems

    NASA Astrophysics Data System (ADS)

    Chen, Hudong

    2001-06-01

    There have been considerable advances in Lattice Boltzmann (LB) based methods in the last decade. By now, the fundamental concept of using the approach as an alternative tool for computational fluid dynamics (CFD) has been substantially appreciated and validated in mainstream scientific research and in industrial engineering communities. Lattice Boltzmann based methods possess several major advantages: a) less numerical dissipation due to the linear Lagrange type advection operator in the Boltzmann equation; b) local dynamic interactions suitable for highly parallel processing; c) physical handling of boundary conditions for complicated geometries and accurate control of fluxes; d) microscopically consistent modeling of thermodynamics and of interface properties in complex multiphase flows. It provides a great opportunity to apply the method to practical engineering problems encountered in a wide range of industries from automotive, aerospace to chemical, biomedical, petroleum, nuclear, and others. One of the key challenges is to extend the applicability of this alternative approach to regimes of highly turbulent flows commonly encountered in practical engineering situations involving high Reynolds numbers. Over the past ten years, significant efforts have been made on this front at Exa Corporation in developing a lattice Boltzmann based commercial CFD software, PowerFLOW. It has become a useful computational tool for the simulation of turbulent aerodynamics in practical engineering problems involving extremely complex geometries and flow situations, such as in new automotive vehicle designs world wide. In this talk, we present an overall LB based algorithm concept along with certain key extensions in order to accurately handle turbulent flows involving extremely complex geometries. To demonstrate the accuracy of turbulent flow simulations, we provide a set of validation results for some well known academic benchmarks. These include straight channels, backward-facing steps, flows over a curved hill and typical NACA airfoils at various angles of attack including prediction of stall angle. We further provide numerous engineering cases, ranging from external aerodynamics around various car bodies to internal flows involved in various industrial devices. We conclude with a discussion of certain future extensions for complex fluids.
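
    For readers unfamiliar with the approach, a textbook D2Q9 BGK collide-and-stream update is sketched below. It is only a minimal illustration of the lattice Boltzmann idea on a periodic grid, not the PowerFLOW algorithm, which adds turbulence modelling, boundary treatment and much more; the grid size, relaxation time and initial perturbation are arbitrary.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights (textbook values)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Second-order equilibrium distribution for each lattice direction."""
    cu = np.einsum('id,xyd->ixy', c, u)           # c_i . u at every node
    usq = np.einsum('xyd,xyd->xy', u, u)
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau):
    """One BGK collide-and-stream update with periodic boundaries."""
    rho = f.sum(axis=0)
    u = np.einsum('ixy,id->xyd', f, c) / rho[..., None]
    f += -(f - equilibrium(rho, u)) / tau         # BGK collision
    for i, (cx, cy) in enumerate(c):              # streaming by periodic shifts
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f

# toy initialisation: uniform density, small sinusoidal velocity perturbation
nx, ny, tau = 64, 64, 0.6
rho0 = np.ones((nx, ny))
u0 = np.zeros((nx, ny, 2))
u0[..., 0] = 0.01 * np.sin(2 * np.pi * np.arange(ny) / ny)[None, :]
f = equilibrium(rho0, u0)
for _ in range(100):
    f = lbm_step(f, tau)
```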

  2. The COSIMA experiments and their verification, a data base for the validation of two phase flow computer codes

    NASA Astrophysics Data System (ADS)

    Class, G.; Meyder, R.; Stratmanns, E.

    1985-12-01

    The large data base for validation and development of computer codes for two-phase flow, generated at the COSIMA facility, is reviewed. The aim of COSIMA is to simulate the hydraulic, thermal, and mechanical conditions in the subchannel and the cladding of fuel rods in pressurized water reactors during the blowout phase of a loss of coolant accident. In terms of fuel rod behavior, it is found that during blowout under realistic conditions only small strains are reached. For cladding rupture extremely high rod internal pressures are necessary. The behavior of fuel rod simulators and the effect of thermocouples attached to the cladding outer surface are clarified. Calculations performed with the codes RELAP and DRUFAN show satisfactory agreement with experiments. This can be improved by updating the phase separation models in the codes.

  3. Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method.

    PubMed

    Matsushima, Kyoji; Nakahara, Sumio

    2009-12-01

    A large-scale full-parallax computer-generated hologram (CGH) with four billion (2(16) x 2(16)) pixels is created to reconstruct a fine true 3D image of a scene, with occlusions. The polygon-based method numerically generates the object field of a surface object, whose shape is provided by a set of vertex data of polygonal facets, while the silhouette method makes it possible to reconstruct the occluded scene. A novel technique using the segmented frame buffer is presented for handling and propagating large wave fields even in the case where the whole wave field cannot be stored in memory. We demonstrate that the full-parallax CGH, calculated by the proposed method and fabricated by a laser lithography system, reconstructs a fine 3D image accompanied by a strong sensation of depth.

  4. [Violent computergames: distribution via and discussion on the Internet].

    PubMed

    Nagenborg, Michael

    2005-11-01

    The spread and use of computer-games including (interactive) depictions of violence are considered a moral problem, particularly if played by children and youths. This essay expresses an opinion on H. Volper's (2004) demand of condemning certain contents by media ethics. At the same time, an overview on the spread and use of "violent games" by children and youths is offered. As a matter of fact, the share of these titles in the complete range must not be estimated too high, certain titles on the other hand are extremely wide-spread. Finally, Fritz's and Fehr's thesis of the cultural conflict "computer game" (2004) is discussed, demonstrated at the example of the discussion on the Internet, and on the basis of this thesis a mediating position between the two cultures including audience ethics (Funiok 1999) is presented.

  5. Explicit integration with GPU acceleration for large kinetic networks

    DOE PAGES

    Brock, Benjamin; Belt, Andrew; Billings, Jay Jay; ...

    2015-09-15

    In this study, we demonstrate the first implementation of recently-developed fast explicit kinetic integration algorithms on modern graphics processing unit (GPU) accelerators. Taking as a generic test case a Type Ia supernova explosion with an extremely stiff thermonuclear network having 150 isotopic species and 1604 reactions coupled to hydrodynamics using operator splitting, we demonstrate the capability to solve of order 100 realistic kinetic networks in parallel in the same time that standard implicit methods can solve a single such network on a CPU. In addition, this orders-of-magnitude decrease in computation time for solving systems of realistic kinetic networks implies that important coupled, multiphysics problems in various scientific and technical fields that were intractable, or could be simulated only with highly schematic kinetic networks, are now computationally feasible.
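
    As a rough illustration of why placing many networks in a data-parallel layout pays off, the sketch below advances a batch of independent toy networks with a plain forward-Euler step vectorised across the batch. The paper's fast explicit integrators handle stiffness in ways this toy step does not, and the decay-rate "network" here is entirely hypothetical.

```python
import numpy as np

def batched_explicit_step(y, rates_fn, dt):
    """Advance many independent kinetic networks by one explicit step at once.
    y has shape (n_networks, n_species); rates_fn returns dY/dt with the same
    shape. Forward Euler is used purely for illustration of the batching idea."""
    return y + dt * rates_fn(y)

# toy linear decay network, vectorised over 100 independent "networks"
k = np.logspace(-2, 1, 8)                 # hypothetical per-species decay constants
y = np.ones((100, 8))
for _ in range(1000):
    y = batched_explicit_step(y, lambda s: -k * s, dt=1e-3)
print(y[0])
```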

  6. Three-dimensional near-field MIMO array imaging using range migration techniques.

    PubMed

    Zhuge, Xiaodong; Yarovoy, Alexander G

    2012-06-01

    This paper presents a 3-D near-field imaging algorithm that is formulated for 2-D wideband multiple-input-multiple-output (MIMO) imaging array topology. The proposed MIMO range migration technique performs the image reconstruction procedure in the frequency-wavenumber domain. The algorithm is able to completely compensate the curvature of the wavefront in the near-field through a specifically defined interpolation process and provides extremely high computational efficiency by the application of the fast Fourier transform. The implementation aspects of the algorithm and the sampling criteria of a MIMO aperture are discussed. The image reconstruction performance and computational efficiency of the algorithm are demonstrated both with numerical simulations and measurements using 2-D MIMO arrays. Real-time 3-D near-field imaging can be achieved with a real-aperture array by applying the proposed MIMO range migration techniques.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakosi, Jozsef; Christon, Mark A.; Francois, Marianne M.

    This report describes the work carried out for completion of the Thermal Hydraulics Methods (THM) Level 3 Milestone THM.CFD.P5.05 for the Consortium for Advanced Simulation of Light Water Reactors (CASL). A series of body-fitted computational meshes has been generated by Numeca's Hexpress/Hybrid, a.k.a. 'Spider', meshing technology for the V5H 3x3 and 5x5 rod bundle geometries and used to compute the fluid dynamics of grid-to-rod fretting (GTRF). Spider is easy to use, fast, and automatically generates high-quality meshes for extremely complex geometries, as required for the GTRF problem. Hydra-TH has been used to carry out large-eddy simulations on both 3x3 and 5x5 geometries, using different mesh resolutions. The results analyzed show good agreement with Star-CCM+ simulations and experimental data.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakosi, Jozsef; Christon, Mark A.; Francois, Marianne M.

    This report describes the work carried out for completion of the Thermal Hydraulics Methods (THM) Level 3 Milestone THM.CFD.P5.05 for the Consortium for Advanced Simulation of Light Water Reactors (CASL). A series of body-fitted computational meshes have been generated by Numeca's Hexpress/Hybrid, a.k.a. 'Spider', meshing technology for the V5H 3 x 3 and 5 x 5 rod bundle geometries and subsequently used to compute the fluid dynamics of grid-to-rod fretting (GTRF). Spider is easy to use, fast, and automatically generates high-quality meshes for extremely complex geometries, required for the GTRF problem. Hydra-TH has been used to carry out large-eddy simulations on both 3 x 3 and 5 x 5 geometries, using different mesh resolutions. The results analyzed show good agreement with Star-CCM+ simulations and experimental data.

  9. Multiscale Computational Analysis of Nitrogen and Oxygen Gas-Phase Thermochemistry in Hypersonic Flows

    NASA Astrophysics Data System (ADS)

    Bender, Jason D.

    Understanding hypersonic aerodynamics is important for the design of next-generation aerospace vehicles for space exploration, national security, and other applications. Ground-level experimental studies of hypersonic flows are difficult and expensive; thus, computational science plays a crucial role in this field. Computational fluid dynamics (CFD) simulations of extremely high-speed flows require models of chemical and thermal nonequilibrium processes, such as dissociation of diatomic molecules and vibrational energy relaxation. Current models are outdated and inadequate for advanced applications. We describe a multiscale computational study of gas-phase thermochemical processes in hypersonic flows, starting at the atomic scale and building systematically up to the continuum scale. The project was part of a larger effort centered on collaborations between aerospace scientists and computational chemists. We discuss the construction of potential energy surfaces for the N4, N2O2, and O4 systems, focusing especially on the multi-dimensional fitting problem. A new local fitting method named L-IMLS-G2 is presented and compared with a global fitting method. Then, we describe the theory of the quasiclassical trajectory (QCT) approach for modeling molecular collisions. We explain how we implemented the approach in a new parallel code for high-performance computing platforms. Results from billions of QCT simulations of high-energy N2 + N2, N2 + N, and N2 + O2 collisions are reported and analyzed. Reaction rate constants are calculated and sets of reactive trajectories are characterized at both thermal equilibrium and nonequilibrium conditions. The data shed light on fundamental mechanisms of dissociation and exchange reactions -- and their coupling to internal energy transfer processes -- in thermal environments typical of hypersonic flows. We discuss how the outcomes of this investigation and other related studies lay a rigorous foundation for new macroscopic models for hypersonic CFD. This research was supported by the Department of Energy Computational Science Graduate Fellowship and by the Air Force Office of Scientific Research Multidisciplinary University Research Initiative.

  10. SymPy: Symbolic computing in python

    DOE PAGES

    Meurer, Aaron; Smith, Christopher P.; Paprocki, Mateusz; ...

    2017-01-02

    Here, SymPy is a full featured computer algebra system (CAS) written in the Python programming language. It is open source, being licensed under the extremely permissive 3-clause BSD license. SymPy was started by Ondrej Certik in 2005, and it has since grown into a large open source project, with over 500 contributors.
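
    A few representative SymPy calls (standard, documented API) give a flavour of the symbolic manipulation the system provides:

```python
import sympy as sp

x, t = sp.symbols('x t')

# symbolic integration, equation solving and series expansion
gaussian_integral = sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo))   # sqrt(pi)
roots = sp.solve(sp.Eq(x**2 - 2, 0), x)                               # [-sqrt(2), sqrt(2)]
sin_series = sp.series(sp.sin(t), t, 0, 6)                            # t - t**3/6 + t**5/120 + O(t**6)

print(gaussian_integral, roots, sin_series)
```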

  11. Mathematical String Sculptures: A Case Study in Computationally-Enhanced Mathematical Crafts

    ERIC Educational Resources Information Center

    Eisenberg, Michael

    2007-01-01

    Mathematical string sculptures constitute an extremely beautiful realm of mathematical crafts. This snapshot begins with a description of a marvelous (and no longer manufactured) toy called Space Spider, which provided a framework with which children could experiment with string sculptures. Using a computer-controlled laser cutter to create frames…

  12. Abernethy malformation with portal vein aneurysm in a child.

    PubMed

    Chandrashekhara, Sheragaru H; Bhalla, Ashu Seith; Gupta, Arun Kumar; Vikash, C S; Kabra, Susheel Kumar

    2011-01-01

    Abernethy malformation is an extremely rare anomaly of the splanchnic venous system. We describe the multidetector computed tomography findings of an incidentally detected Abernethy malformation with portal vein aneurysm in a two-and-a-half-year-old child. The computed tomography scan was performed for the evaluation of respiratory distress, poor growth, and loss of appetite.

  13. System Expertise Training Courses in Private Sector: Can They Be Given Online?

    ERIC Educational Resources Information Center

    Balci Demirci, Birim

    2014-01-01

    It is widely known that there are many schools in the private sector offering courses in Computer Technology, Computer Engineering, Information Systems and similar disciplines in addition to Universities presenting such courses. The private sector programs are extremely popular with students already studying at university as well as being of great…

  14. Architectures for Device Aware Network

    DTIC Science & Technology

    2005-03-01

    Figure titles recovered from the report front matter: "PDA in DAN Mode - Reduced Resolution Image"; "Cell Phone in DAN Mode - No Image". Abstract fragment: …computer, notebook computer, cell phone and a host of networked embedded systems) may have extremely differing capabilities and resources to retrieve and…

  15. Evaluation of a focussed protocol for hand-held echocardiography and computer-assisted auscultation in detecting latent rheumatic heart disease in scholars.

    PubMed

    Zühlke, Liesl J; Engel, Mark E; Nkepu, Simpiwe; Mayosi, Bongani M

    2016-08-01

    Introduction Echocardiography is the diagnostic test of choice for latent rheumatic heart disease. The utility of echocardiography for large-scale screening is limited by high cost, complex diagnostic protocols, and time to acquire multiple images. We evaluated the performance of a brief hand-held echocardiography protocol and computer-assisted auscultation in detecting latent rheumatic heart disease with or without pathological murmur. A total of 27 asymptomatic patients with latent rheumatic heart disease based on the World Heart Federation criteria and 66 healthy controls were examined by standard cardiac auscultation to detect pathological murmur. Hand-held echocardiography using a focussed protocol that utilises one view - that is, the parasternal long-axis view - and one measurement - that is, mitral regurgitant jet - and a computer-assisted auscultation utilising an automated decision tool were performed on all patients. The sensitivity and specificity of computer-assisted auscultation in latent rheumatic heart disease were 4% (95% CI 1.0-20.4%) and 93.7% (95% CI 84.5-98.3%), respectively. The sensitivity and specificity of the focussed hand-held echocardiography protocol for definite rheumatic heart disease were 92.3% (95% CI 63.9-99.8%) and 100%, respectively. The test reliability of hand-held echocardiography was 98.7% for definite and 94.7% for borderline disease, and the adjusted diagnostic odds ratios were 1041 and 263.9 for definite and borderline disease, respectively. Computer-assisted auscultation has extremely low sensitivity but high specificity for pathological murmur in latent rheumatic heart disease. Focussed hand-held echocardiography has fair sensitivity but high specificity and diagnostic utility for definite or borderline rheumatic heart disease in asymptomatic patients.
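
    The headline numbers above (sensitivity, specificity, diagnostic odds ratio) follow directly from a 2x2 table; the small helper below, with hypothetical counts, shows that arithmetic for readers who want to reproduce it. The continuity correction is an assumption, not necessarily what the study used.

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity and diagnostic odds ratio from a 2x2 table.
    A 0.5 continuity correction keeps the odds ratio finite when a cell is
    zero (e.g. 100% specificity); this correction is an assumption."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    dor = ((tp + 0.5) * (tn + 0.5)) / ((fp + 0.5) * (fn + 0.5))
    return sensitivity, specificity, dor

# hypothetical counts, for illustration only
print(diagnostic_metrics(tp=12, fn=1, fp=0, tn=66))
```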

  16. Comprehensive Materials and Morphologies Study of Ion Traps (COMMIT) for Scalable Quantum Computation

    DTIC Science & Technology

    2012-04-21

    Abstract fragments: …the photoelectric effect. The typical shortest wavelengths needed for ion traps range from 194 nm for Hg+ to 493 nm for Ba+, corresponding to 6.4-2.5… Trapped ion systems are extremely promising for large-scale quantum computation, but face a vexing problem with motional quantum…

  17. Extremely Low Frequency Electromagnetic Field from Convective Air Warming System on Temperature Selection and Distance.

    PubMed

    Cho, Kwang Rae; Kim, Myoung-Hun; Ko, Myoung Jin; Jung, Jae Wook; Lee, Ki Hwa; Park, Yei-Heum; Kim, Yong Han; Kim, Ki Hoon; Kim, Jin Soo

    2014-12-01

    Hypothermia generates potentially severe complications in the operating or recovery room, and a forced-air warmer is effective in maintaining body temperature. However, the extremely low frequency electromagnetic field (ELF-EMF) is harmful to the human body and is mainly produced by electronic equipment, including convective air warming systems. We investigated the ELF-EMF from a convective air warming device at various temperature settings and distances, as a guideline to protect medical personnel and patients. The intensity of the ELF-EMF was measured at two-second intervals for five minutes at various distances (0.1, 0.2, 0.3, 0.5 and 1 meter) and temperature settings (high, medium, low and ambient). All other electrical devices, including the lamp, computer and air conditioner, were off. Groups were compared using one-way ANOVA; P<0.05 was considered significant. Mean values of the ELF-EMF at a distance of 30 cm were 18.63, 18.44, 18.23 and 17.92 milligauss (mG) for the high, medium, low and ambient temperature settings, respectively. The ELF-EMF at the high temperature setting was higher than that at the medium, low and ambient settings at all distances. The ELF-EMF from the convective air warming system is higher at closer distances and higher temperature settings, and within thirty centimeters it exceeds the 2 mG recommended by the Swedish TCO guideline.

  18. A new deadlock resolution protocol and message matching algorithm for the extreme-scale simulator

    DOE PAGES

    Engelmann, Christian; Naughton, III, Thomas J.

    2016-03-22

    Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different HPC architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement. The simulation overhead for running the NAS Parallel Benchmark suite was reduced from 102% to 0% for the embarrassingly parallel (EP) benchmark and from 1,020% to 238% for the conjugate gradient (CG) benchmark. xSim offers a highly accurate simulation mode for better tracking of injected MPI process failures. Furthermore, with highly accurate simulation, the overhead was reduced from 3,332% to 204% for EP and from 37,511% to 13,808% for CG.
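
    The overhead percentages quoted above are most naturally read as extra wall-clock time relative to a native (non-simulated) run; the abstract does not give the exact formula, so the one-liner below is only a plausible reconstruction with made-up timings.

```python
def simulation_overhead_pct(simulated_runtime_s, native_runtime_s):
    """Overhead as percentage of extra wall-clock time over the native run
    (assumed definition; the paper's exact metric is not stated in the abstract)."""
    return 100.0 * (simulated_runtime_s - native_runtime_s) / native_runtime_s

# e.g. a benchmark taking 3.38 s under the simulator vs 1.00 s natively -> 238%
print(simulation_overhead_pct(3.38, 1.00))
```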

  19. A standard set of upper extremity tasks for evaluating rehabilitation interventions for individuals with complete arm paralysis

    PubMed Central

    Cornwell, Andrew S.; Liao, James Y.; Bryden, Anne M.; Kirsch, Robert F.

    2013-01-01

    We have developed a set of upper extremity functional tasks to guide the design and test the performance of rehabilitation technologies that restore arm motion in people with high tetraplegia. Our goal was to develop a short set of tasks that would be representative of a much larger set of activities of daily living while also being feasible for a unilateral user of an implanted Functional Electrical Stimulation (FES) system. To compile this list of tasks, we reviewed existing clinical outcome measures related to arm and hand function, and were further informed by surveys of patient desires. We ultimately selected a set of five tasks that captured the most common components of movement seen in these tasks, making them highly relevant for assessing FES-restored unilateral arm function in individuals with high cervical spinal cord injury (SCI). The tasks are intended to be used when setting design specifications and for evaluation and standardization of rehabilitation technologies under development. While not unique, this set of tasks will provide a common basis for comparing different interventions (e.g., FES, powered orthoses, robotic assistants) and testing different user command interfaces (e.g., sip-and-puff, head joysticks, brain-computer interfaces). PMID:22773199

  20. [Body proportions of healthy and short stature adolescent girls].

    PubMed

    Milde, Katarzyna; Tomaszewski, Paweł; Majcher, Anna; Pyrżak, Beata; Stupnicki, Romuald

    2011-01-01

    Regularly conducted assessment of body proportions is important, as early detection of possible growth disorders and immediate prevention may allow a child to attain the optimum, genetically determined level of development. The aim was to assess the body proportions of adolescent girls, healthy or with growth deficiency. Three groups were studied: 104 healthy short-statured girls (body height below the 10th percentile), 84 girls with Turner syndrome and 263 healthy girls of normal stature (between the 25th and 75th percentiles), all aged 11-15 years. The following measurements were conducted according to common anthropometric standards: body height, sitting body height, shoulder width, upper extremity length and lower extremity length, the last computed as the difference between standing and sitting body heights. All measurements were converted to logarithms and allometric linear regressions vs. log body height were computed. The girls with Turner syndrome proved to have allometrically shorter legs (p<0.001) and wider shoulders (p<0.001) compared with both groups of healthy girls, and longer upper extremities (p<0.001) compared with the girls of normal stature. Healthy short-statured girls had longer lower extremities (p<0.001) compared with the other groups; they also had wider shoulders (p<0.001) and longer upper extremities (p<0.001) compared with healthy girls of normal height. Allometric relations of anthropometric measurements enable a deeper insight into body proportions, especially during the growth period. The presented discrimination of girls with Turner syndrome may serve as a screening test and a recommendation for further clinical treatment.
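
    The allometric regressions described above are ordinary least-squares fits in log-log space; a minimal sketch with hypothetical measurements is shown below. The real study compares group-specific regression lines, which is omitted here.

```python
import numpy as np

def allometric_fit(body_height_cm, segment_length_cm):
    """Fit log(segment) = a + b*log(height); b is the allometric exponent."""
    x = np.log10(np.asarray(body_height_cm, dtype=float))
    y = np.log10(np.asarray(segment_length_cm, dtype=float))
    b, a = np.polyfit(x, y, 1)      # slope first, then intercept
    return a, b

# hypothetical height and lower-extremity-length data, purely for illustration
heights = [145, 150, 152, 158, 160, 163]
legs    = [74, 77, 78, 82, 83, 85]
a, b = allometric_fit(heights, legs)
print(f"intercept={a:.3f}, allometric exponent={b:.2f}")
```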

  1. Algorithm for fuel conservative horizontal capture trajectories

    NASA Technical Reports Server (NTRS)

    Neuman, F.; Erzberger, H.

    1981-01-01

    A real-time algorithm for computing constant-altitude, fuel-conservative approach trajectories for aircraft is described. The characteristics of the computed trajectory were chosen to approximate the extremal trajectories obtained from the optimal control solution to the problem; the real-time algorithm showed a fuel difference of only 0.5 to 2 percent in favor of the extremals. The trajectories may start at any initial position, heading, and speed and end at any other final position, heading, and speed. They consist of straight lines and a series of circular arcs of varying radius to approximate constant bank-angle decelerating turns. Throttle control is maximum thrust, nominal thrust, or zero thrust. Bank-angle control is either zero or approximately 30 deg.

  2. [Parallel virtual reality visualization of extremely large medical datasets].

    PubMed

    Tang, Min

    2010-04-01

    On the basis of a brief description of grid computing, the essence and critical techniques of parallel visualization of extremely large medical datasets are discussed in connection with the Intranet and commonly configured computers of hospitals. In this paper, several kernel techniques are introduced, including the hardware structure, software framework, load balancing and virtual reality visualization. The Maximum Intensity Projection algorithm is realized in parallel using a common PC cluster. In the virtual reality world, three-dimensional models can be rotated, zoomed, translated and cut interactively and conveniently through a control panel built on the Virtual Reality Modeling Language (VRML). Experimental results demonstrate that this method provides promising, real-time results and can play the role of a good assistant in making clinical diagnoses.
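
    The Maximum Intensity Projection step itself is a simple reduction, which is why it parallelises so naturally across a PC cluster. The sketch below shows the serial projection and an in-process stand-in for a slab-wise parallel decomposition; the actual cluster partitioning in the paper may differ.

```python
import numpy as np

def mip(volume, axis=2):
    """Maximum Intensity Projection: brightest voxel along the viewing axis."""
    return volume.max(axis=axis)

def parallel_mip(volume, axis=2, n_chunks=4):
    """Split the volume into slabs, project each slab, then combine.
    Mimics (in one process) a cluster-style decomposition; the slab
    partitioning scheme here is an assumption."""
    slabs = np.array_split(volume, n_chunks, axis=axis)
    partial = [s.max(axis=axis) for s in slabs]     # each slab could run on a node
    return np.maximum.reduce(partial)

vol = np.random.rand(64, 64, 64).astype(np.float32)  # stand-in for CT/MR data
assert np.allclose(mip(vol), parallel_mip(vol))       # max is associative, so results match
```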

  3. Numerical Analysis of Flood modeling of upper Citarum River under Extreme Flood Condition

    NASA Astrophysics Data System (ADS)

    Siregar, R. I.

    2018-02-01

    This paper focuses on numerical methods and computation for analysing flood parameters. Water level and flood discharge are the flood parameters solved for by the numerical approach. The numerical methods used in this paper for unsteady flow conditions have strengths and weaknesses; among other advantages, they are easily applied to cases with irregular flow boundaries. The study area is the upper Citarum watershed, Bandung, West Java. The paper uses a computational approach with the Force2 program and HEC-RAS to solve the flow problem in the upper Citarum River and to investigate and forecast extreme flood conditions. The numerical analysis is based on extreme flood events that have occurred in the upper Citarum watershed. The modelled water levels and extreme flood discharges are compared with measured data for validation. The inundation area of the flood that occurred in 2010 is about 75.26 square kilometres. Comparison of the two methods shows that the FEM analysis with the Force2 program agrees best with the validation data, with a Nash index of 0.84 for water level, against 0.76 for HEC-RAS. For discharge, the Nash index is 0.80 with Force2 and 0.79 with HEC-RAS.
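
    The "Nash index" reported above is commonly the Nash-Sutcliffe efficiency; assuming that definition, it can be computed as follows (the stage values in the example are hypothetical).

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 = perfect fit, 0 = no better than the observed mean."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# hypothetical observed and simulated stage hydrographs (m), purely illustrative
obs = np.array([1.2, 1.8, 2.9, 3.6, 3.1, 2.2, 1.5])
sim = np.array([1.1, 1.9, 2.7, 3.8, 3.0, 2.4, 1.6])
print(round(nash_sutcliffe(obs, sim), 2))
```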

  4. Catheter-Directed Thrombolysis of Acute Deep Vein Thrombosis in the Lower Extremity of a Child with Interrupted Inferior Vena Cava

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oguzkurt, Levent, E-mail: loguzkurt@yahoo.com; Ozkan, Ugur; Tercan, Fahri

    2007-04-15

    We present the case of a 14-year-old girl who developed acute deep vein thrombosis (DVT) in her right lower extremity. Laboratory testing revealed protein S deficiency, and the patient's father also had this abnormality with a history of lower extremity DVT. Manual thromboaspiration followed by catheter-directed thrombolysis resulted in total clearance of all thrombi. Computed tomography and later venography revealed an interrupted inferior vena cava. Catheter-directed thrombolysis is an established treatment for adults with acute DVT. To the best of our knowledge, this report is the first to describe catheter-directed thrombolysis in a pediatric patient with lower extremity DVT. Our results suggest that catheter-directed thrombolysis is safe and effective for use in selected older children and adolescents with acute DVT in the lower extremity.

  5. [Imaging of diabetic osteopathy].

    PubMed

    Patsch, J; Pietschmann, P; Schueller-Weidekamm, C

    2015-04-01

    Diabetic bone diseases are more than just osteoporosis in patients with diabetes mellitus (DM): a relatively high bone mineral density is paired with a paradoxically high risk of fragility fractures. Diabetics exhibit low bone turnover, osteocyte dysfunction, relative hypoparathyroidism and an accumulation of advanced glycation end products in the bone matrix. Besides typical insufficiency fractures, diabetics show a high risk for peripheral fractures of the lower extremities (e.g. metatarsal fractures). The correct interdisciplinary assessment of fracture risks in patients with DM is therefore a clinical challenge. There are two state of the art imaging methods for the quantification of fracture risks: dual energy X-ray absorptiometry (DXA) and quantitative computed tomography (QCT). Radiography, multidetector computed tomography (MDCT) and magnetic resonance imaging (MRI) are suitable for the detection of insufficiency fractures. Novel research imaging techniques, such as high-resolution peripheral quantitative computed tomography (HR-pQCT) provide non-invasive insights into bone microarchitecture of the peripheral skeleton. Using MR spectroscopy, bone marrow composition can be studied. Both methods have been shown to be capable of discriminating between type 2 diabetic patients with and without prevalent fragility fractures and thus bear the potential of improving the current standard of care. Currently both methods remain limited to clinical research applications. DXA and HR-pQCT are valid tools for the quantification of bone mineral density and assessment of fracture risk in patients with DM, especially if interpreted in the context of clinical risk factors. Radiography, CT and MRI are suitable for the detection of insufficiency fractures.

  6. Preliminary analysis of aircraft fuel systems for use with broadened specification jet fuels

    NASA Technical Reports Server (NTRS)

    Pasion, A. J.; Thomas, I.

    1977-01-01

    An analytical study was conducted on the use of broadened specification hydrocarbon fuels in present day aircraft. A short range Boeing 727 mission and three long range Boeing 747 missions were used as basis of calculation for one-day-per-year extreme values of fuel loading, airport ambient and altitude ambient temperatures with various seasonal and climatic conditions. Four hypothetical fuels were selected; two high-vapor-pressure fuels with 35 kPa and 70 kPa RVP and two high-freezing-point fuels with -29 C and -18 C freezing points. In-flight fuel temperatures were predicted by Boeing's aircraft fuel tank thermal analyzer computer program. Boil-off rates were calculated for the high vapor pressure fuels and heating/insulation requirements for the high freezing point fuels were established. Possible minor and major heating system modifications were investigated with respect to heat output, performance and economic penalties for the high freezing point fuels.

  7. Effect of Fuel Injection and Mixing Characteristics on Pulse-Combustor Performance at High-Pressure

    NASA Technical Reports Server (NTRS)

    Yungster, Shaye; Paxson, Daniel E.; Perkins, Hugh D.

    2014-01-01

    Recent calculations of pulse-combustors operating at high-pressure conditions produced pressure gains significantly lower than those observed experimentally and computationally at atmospheric conditions. The factors limiting the pressure-gain at high-pressure conditions are identified, and the effects of fuel injection and air mixing characteristics on performance are investigated. New pulse-combustor configurations were developed, and the results show that by suitable changes to the combustor geometry, fuel injection scheme and valve dynamics the performance of the pulse-combustor operating at high-pressure conditions can be increased to levels comparable to those observed at atmospheric conditions. In addition, the new configurations can significantly reduce the levels of NOx emissions. One particular configuration resulted in extremely low levels of NO, producing an emission index much less than one, although at a lower pressure-gain. Calculations at representative cruise conditions demonstrated that pulse-combustors can achieve a high level of performance at such conditions.

  8. Precise attitude control of the Stanford relativity satellite.

    NASA Technical Reports Server (NTRS)

    Bull, J. S.; Debra, D. B.

    1973-01-01

    A satellite being designed by the Stanford University to measure (with extremely high precision) the effect of General Relativity is described. Specifically, the satellite will measure two relativistic precessions predicted by the theory: the geodetic effect (6.9 arcsec/yr), due solely to motion about the earth, and the motional effect (0.05 arcsec/yr), due to rotation of the earth. The gyro design requirements, including the requirement for precise attitude control and a dynamic model for attitude control synthesis, are discussed. Closed loop simulation of the satellite's natural dynamics on an analog computer is described.

  9. A fast isogeometric BEM for the three dimensional Laplace- and Helmholtz problems

    NASA Astrophysics Data System (ADS)

    Dölz, Jürgen; Harbrecht, Helmut; Kurz, Stefan; Schöps, Sebastian; Wolf, Felix

    2018-03-01

    We present an indirect higher order boundary element method utilising NURBS mappings for exact geometry representation and an interpolation-based fast multipole method for compression and reduction of computational complexity, to counteract the problems arising due to the dense matrices produced by boundary element methods. By solving Laplace and Helmholtz problems via a single layer approach we show, through a series of numerical examples suitable for easy comparison with other numerical schemes, that one can indeed achieve extremely high rates of convergence of the pointwise potential through the utilisation of higher order B-spline-based ansatz functions.

  10. High Temperature Near-Field NanoThermoMechanical Rectification

    PubMed Central

    Elzouka, Mahmoud; Ndao, Sidy

    2017-01-01

    Limited performance and reliability of electronic devices at extreme temperatures, intensive electromagnetic fields, and radiation found in space exploration missions (i.e., Venus & Jupiter planetary exploration, and heliophysics missions) and earth-based applications requires the development of alternative computing technologies. In the pursuit of alternative technologies, research efforts have looked into developing thermal memory and logic devices that use heat instead of electricity to perform computations. However, most of the proposed technologies operate at room or cryogenic temperatures, due to their dependence on material’s temperature-dependent properties. Here in this research, we show experimentally—for the first time—the use of near-field thermal radiation (NFTR) to achieve thermal rectification at high temperatures, which can be used to build high-temperature thermal diodes for performing logic operations in harsh environments. We achieved rectification through the coupling between NFTR and the size of a micro/nano gap separating two terminals, engineered to be a function of heat flow direction. We fabricated and tested a proof-of-concept NanoThermoMechanical device that has shown a maximum rectification of 10.9% at terminals’ temperatures of 375 and 530 K. Experimentally, we operated the microdevice in temperatures as high as about 600 K, demonstrating this technology’s suitability to operate at high temperatures. PMID:28322324

  11. High Temperature Near-Field NanoThermoMechanical Rectification

    NASA Astrophysics Data System (ADS)

    Elzouka, Mahmoud; Ndao, Sidy

    2017-03-01

    Limited performance and reliability of electronic devices at extreme temperatures, intensive electromagnetic fields, and radiation found in space exploration missions (i.e., Venus & Jupiter planetary exploration, and heliophysics missions) and earth-based applications requires the development of alternative computing technologies. In the pursuit of alternative technologies, research efforts have looked into developing thermal memory and logic devices that use heat instead of electricity to perform computations. However, most of the proposed technologies operate at room or cryogenic temperatures, due to their dependence on material’s temperature-dependent properties. Here in this research, we show experimentally—for the first time—the use of near-field thermal radiation (NFTR) to achieve thermal rectification at high temperatures, which can be used to build high-temperature thermal diodes for performing logic operations in harsh environments. We achieved rectification through the coupling between NFTR and the size of a micro/nano gap separating two terminals, engineered to be a function of heat flow direction. We fabricated and tested a proof-of-concept NanoThermoMechanical device that has shown a maximum rectification of 10.9% at terminals’ temperatures of 375 and 530 K. Experimentally, we operated the microdevice in temperatures as high as about 600 K, demonstrating this technology’s suitability to operate at high temperatures.

  12. Large-scale protein-protein interactions detection by integrating big biosensing data with computational model.

    PubMed

    You, Zhu-Hong; Li, Shuai; Gao, Xin; Luo, Xin; Ji, Zhen

    2014-01-01

    Protein-protein interactions (PPIs) are the basis of biological functions, and studying these interactions on a molecular level is of crucial importance for understanding the functionality of a living cell. During the past decade, biosensors have emerged as an important tool for the high-throughput identification of proteins and their interactions. However, high-throughput experimental methods for identifying PPIs are both time-consuming and expensive, and high-throughput PPI data are often associated with high false-positive and false-negative rates. Targeting these problems, we propose a method for PPI detection by integrating biosensor-based PPI data with a novel computational model. The method was developed based on the extreme learning machine algorithm combined with a novel representation of protein sequence descriptors. When performed on the large-scale human protein interaction dataset, the proposed method achieved 84.8% prediction accuracy with 84.08% sensitivity at a specificity of 85.53%. We conducted more extensive experiments to compare the proposed method with a state-of-the-art technique, the support vector machine. The results demonstrate that our approach is very promising for detecting new PPIs and can be a helpful supplement to biosensor-based PPI detection.
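
    The extreme learning machine at the core of the proposed method has a particularly compact formulation: hidden-layer weights are random and fixed, and only the output weights are solved in closed form. The sketch below shows that structure on synthetic data; the protein sequence descriptor used in the paper is not reproduced here, and the hyperparameters are arbitrary.

```python
import numpy as np

class ELM:
    """Minimal single-hidden-layer extreme learning machine:
    random hidden weights, least-squares (pseudoinverse) output weights."""
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))  # fixed random weights
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ y                            # closed-form output weights
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# hypothetical protein-pair descriptors and interaction labels, for illustration only
X = np.random.default_rng(1).normal(size=(500, 40))
y = (X[:, 0] * X[:, 1] > 0).astype(float)
model = ELM(n_hidden=100).fit(X, y)
acc = ((model.predict(X) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```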

  13. Numerical aspects in modeling high Deborah number flow and elastic instability

    NASA Astrophysics Data System (ADS)

    Kwon, Youngdon

    2014-05-01

    Investigating highly nonlinear viscoelastic flow in a 2D domain, we explore problems as well as properties possibly inherent in the streamline upwinding technique (SUPG) and then present various results on elastic instability. The mathematically stable Leonov model written in tensor-logarithmic formulation is employed in the framework of the finite element method for spatial discretization of several representative problem domains. To enhance computation speed, a decoupled integration scheme is applied for shear-thinning and Boger-type fluids. From the analysis of 4:1 contraction flow at low and moderate values of the Deborah number (De), the solution with the SUPG method does not show a noticeable difference from the one obtained without upwinding. On the other hand, in the flow regime of high De, especially in the state of elastic instability, the SUPG significantly distorts the flow field and the result differs considerably from the solution acquired straightforwardly. When the strength of elastic flow, and thus the nonlinearity, further increases, the computational scheme with upwinding fails to converge and an evolutionary solution is no longer available. All these results suggest that extreme care has to be taken when upwinding is applied, and one has first of all to prove the validity of this algorithm in cases of high nonlinearity. On the contrary, straightforward computation with no upwinding can efficiently model representative phenomena of elastic instability in such benchmark problems as 4:1 contraction flow, flow over a circular cylinder and flow over an asymmetric array of cylinders. Asymmetry of the flow field occurring in the symmetric domain, enhanced spatial and temporal fluctuation of dynamic variables and flow effects caused by extension hardening are properly described in this study.

  14. shinyheatmap: Ultra fast low memory heatmap web interface for big data genomics.

    PubMed

    Khomtchouk, Bohdan B; Hennessy, James R; Wahlestedt, Claes

    2017-01-01

    Transcriptomics, metabolomics, metagenomics, and other various next-generation sequencing (-omics) fields are known for their production of large datasets, especially across single-cell sequencing studies. Visualizing such big data has posed technical challenges in biology, both in terms of available computational resources as well as programming acumen. Since heatmaps are used to depict high-dimensional numerical data as a colored grid of cells, efficiency and speed have often proven to be critical considerations in the process of successfully converting data into graphics. For example, rendering interactive heatmaps from large input datasets (e.g., 100k+ rows) has been computationally infeasible on both desktop computers and web browsers. In addition to memory requirements, programming skills and knowledge have frequently been barriers-to-entry for creating highly customizable heatmaps. We propose shinyheatmap: an advanced user-friendly heatmap software suite capable of efficiently creating highly customizable static and interactive biological heatmaps in a web browser. shinyheatmap is a low memory footprint program, making it particularly well-suited for the interactive visualization of extremely large datasets that cannot typically be computed in-memory due to size restrictions. Also, shinyheatmap features a built-in high performance web plug-in, fastheatmap, for rapidly plotting interactive heatmaps of datasets as large as 10^5-10^7 rows within seconds, effectively shattering previous performance benchmarks of heatmap rendering speed. shinyheatmap is hosted online as a freely available web server with an intuitive graphical user interface: http://shinyheatmap.com. The methods are implemented in R, and are available as part of the shinyheatmap project at: https://github.com/Bohdan-Khomtchouk/shinyheatmap. Users can access fastheatmap directly from within the shinyheatmap web interface, and all source code has been made publicly available on Github: https://github.com/Bohdan-Khomtchouk/fastheatmap.

  15. Arctic Boreal Vulnerability Experiment (ABoVE) Science Cloud

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Schnase, J. L.; McInerney, M.; Webster, W. P.; Sinno, S.; Thompson, J. H.; Griffith, P. C.; Hoy, E.; Carroll, M.

    2014-12-01

    The effects of climate change are being revealed at alarming rates in the Arctic and Boreal regions of the planet. NASA's Terrestrial Ecology Program has launched a major field campaign to study these effects over the next 5 to 8 years. The Arctic Boreal Vulnerability Experiment (ABoVE) will challenge scientists to take measurements in the field, study remote observations, and even run models to better understand the impacts of a rapidly changing climate for areas of Alaska and western Canada. The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center (GSFC) has partnered with the Terrestrial Ecology Program to create a science cloud designed for this field campaign - the ABoVE Science Cloud. The cloud combines traditional high performance computing with emerging technologies to create an environment specifically designed for large-scale climate analytics. The ABoVE Science Cloud utilizes (1) virtualized high-speed InfiniBand networks, (2) a combination of high-performance file systems and object storage, and (3) virtual system environments tailored for data intensive, science applications. At the center of the architecture is a large object storage environment, much like a traditional high-performance file system, that supports data proximal processing using technologies like MapReduce on a Hadoop Distributed File System (HDFS). Surrounding the storage is a cloud of high performance compute resources with many processing cores and large memory coupled to the storage through an InfiniBand network. Virtual systems can be tailored to a specific scientist and provisioned on the compute resources with extremely high-speed network connectivity to the storage and to other virtual systems. In this talk, we will present the architectural components of the science cloud and examples of how it is being used to meet the needs of the ABoVE campaign. In our experience, the science cloud approach significantly lowers the barriers and risks to organizations that require high performance computing solutions and provides the NCCS with the agility required to meet our customers' rapidly increasing and evolving requirements.

  16. The Number Density of Quiescent Compact Galaxies at Intermediate Redshift

    NASA Astrophysics Data System (ADS)

    Damjanov, Ivana; Hwang, Ho Seong; Geller, Margaret J.; Chilingarian, Igor

    2014-09-01

    Massive compact systems at 0.2 < z < 0.6 are the missing link between the predominantly compact population of massive quiescent galaxies at high redshift and their analogs and relics in the local volume. The evolution in number density of these extreme objects over cosmic time is the crucial constraining factor for the models of massive galaxy assembly. We select a large sample of ~200 intermediate-redshift massive compacts from the Baryon Oscillation Spectroscopic Survey (BOSS) spectroscopy by identifying point-like Sloan Digital Sky Survey photometric sources with spectroscopic signatures of evolved redshifted galaxies. A subset of our targets have publicly available high-resolution ground-based images that we use to augment the dynamical and stellar population properties of these systems by their structural parameters. We confirm that all BOSS compact candidates are as compact as their high-redshift massive counterparts and less than half the size of similarly massive systems at z ~ 0. We use the completeness-corrected numbers of BOSS compacts to compute lower limits on their number densities in narrow redshift bins spanning the range of our sample. The abundance of extremely dense quiescent galaxies at 0.2 < z < 0.6 is in excellent agreement with the number densities of these systems at high redshift. Our lower limits support the models of massive galaxy assembly through a series of minor mergers over the redshift range 0 < z < 2.

  17. X-ray Crystallographic Structure of Thermophilic Rhodopsin

    PubMed Central

    Tsukamoto, Takashi; Mizutani, Kenji; Hasegawa, Taisuke; Takahashi, Megumi; Honda, Naoya; Hashimoto, Naoki; Shimono, Kazumi; Yamashita, Keitaro; Yamamoto, Masaki; Miyauchi, Seiji; Takagi, Shin; Hayashi, Shigehiko; Murata, Takeshi; Sudo, Yuki

    2016-01-01

    Thermophilic rhodopsin (TR) is a photoreceptor protein with an extremely high thermal stability and the first characterized light-driven electrogenic proton pump derived from the extreme thermophile Thermus thermophilus JL-18. In this study, we confirmed its high thermal stability compared with other microbial rhodopsins and also report the potential availability of TR for optogenetics as a light-induced neural silencer. The x-ray crystal structure of TR revealed that its overall structure is quite similar to that of xanthorhodopsin, including the presence of a putative binding site for a carotenoid antenna; but several distinct structural characteristics of TR, including a decreased surface charge and a larger number of hydrophobic residues and aromatic-aromatic interactions, were also clarified. Based on the crystal structure, the structural changes of TR upon thermal stimulation were investigated by molecular dynamics simulations. The simulations revealed the presence of a thermally induced structural substate in which an increase of hydrophobic interactions in the extracellular domain, the movement of extracellular domains, the formation of a hydrogen bond, and the tilting of transmembrane helices were observed. From the computational and mutational analysis, we propose that an extracellular LPGG motif between helices F and G plays an important role in the thermal stability, acting as a “thermal sensor.” These findings will be valuable for understanding retinal proteins with regard to high protein stability and high optogenetic performance. PMID:27129243

  18. Designed protein reveals structural determinants of extreme kinetic stability

    PubMed Central

    Broom, Aron; Ma, S. Martha; Xia, Ke; Rafalia, Hitesh; Trainor, Kyle; Colón, Wilfredo; Gosavi, Shachi; Meiering, Elizabeth M.

    2015-01-01

    The design of stable, functional proteins is difficult. Improved design requires a deeper knowledge of the molecular basis for design outcomes and properties. We previously used a bioinformatics and energy function method to design a symmetric superfold protein composed of repeating structural elements with multivalent carbohydrate-binding function, called ThreeFoil. This and similar methods have produced a notably high yield of stable proteins. Using a battery of experimental and computational analyses we show that despite its small size and lack of disulfide bonds, ThreeFoil has remarkably high kinetic stability and its folding is specifically chaperoned by carbohydrate binding. It is also extremely stable against thermal and chemical denaturation and proteolytic degradation. We demonstrate that the kinetic stability can be predicted and modeled using absolute contact order (ACO) and long-range order (LRO), as well as coarse-grained simulations; the stability arises from a topology that includes many long-range contacts which create a large and highly cooperative energy barrier for unfolding and folding. Extensive data from proteomic screens and other experiments reveal that a high ACO/LRO is a general feature of proteins with strong resistances to denaturation and degradation. These results provide tractable approaches for predicting resistance and designing proteins with sufficient topological complexity and long-range interactions to accommodate destabilizing functional features as well as withstand chemical and proteolytic challenge. PMID:26554002
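
    Absolute contact order, one of the two topological measures used above, is essentially the mean sequence separation of residue contacts. A simple version based on C-alpha distances is sketched below; published definitions typically use all heavy atoms and a specific cutoff, so the parameters here should be treated as assumptions.

```python
import numpy as np

def absolute_contact_order(coords, cutoff=6.0, min_separation=1):
    """Absolute contact order: mean sequence separation |i - j| over all residue
    pairs whose representative atoms lie within `cutoff` angstroms.
    Using C-alpha coordinates and a 6 A cutoff is an assumption of this sketch."""
    coords = np.asarray(coords, dtype=float)     # shape (n_residues, 3)
    n = len(coords)
    separations = []
    for i in range(n):
        for j in range(i + min_separation + 1, n):   # skip nearest sequence neighbours
            if np.linalg.norm(coords[i] - coords[j]) < cutoff:
                separations.append(j - i)
    return float(np.mean(separations)) if separations else 0.0

# toy coordinates (a noisy helix-like curve), purely for illustration
t = np.linspace(0, 20, 80)
ca = np.column_stack([np.cos(t), np.sin(t), 0.5 * t])
print(round(absolute_contact_order(ca), 2))
```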

  19. Can Concentration - Discharge Relationships Diagnose Material Source During Extreme Events?

    NASA Astrophysics Data System (ADS)

    Karwan, D. L.; Godsey, S.; Rose, L.

    2017-12-01

    Floods can carry >90% of the basin material exported in a given year as well as alter flow pathways and material sources. In turn, sediment and solute fluxes can increase flood damages, negatively impact water quality, and integrate physical and chemical weathering of landscapes and channels. Concentration-discharge (C-Q) relationships are used both to describe export patterns and to compute them. Metrics for describing C-Q patterns and inferring their controls are vulnerable to infrequent sampling, which affects how C-Q relationships are interpolated and interpreted. C-Q relationships are typically evaluated from multiple samples, but because hydrological extremes are rare, data are often unavailable for extreme events. Because solute and sediment C-Q relationships likely respond to changes in hydrologic extremes in different ways, there is a pressing need to define their behavior under extreme conditions, including how to properly sample to capture these patterns. In the absence of such knowledge, improving load estimates in extreme floods will likely remain difficult. Here we explore the use of C-Q relationships to determine when an event alters a watershed system such that it enters a new material source/transport regime. We focus on watersheds with sediment and discharge time series that include low-frequency and/or extreme events. For example, we compare solute and sediment patterns in White Clay Creek in southeastern Pennsylvania across a range of flows, including multiple hurricanes, for which we have ample ancillary hydrochemical data. TSS is consistently mobilized during high-flow events, even during extreme floods associated with hurricanes, and sediment fingerprinting indicates that different sediment sources, including in-channel remobilization and landscape erosion, are active at different times. In other words, TSS mobilization in C-Q space is not sensitive to the source of material being mobilized. Unlike sediments, weathering solutes in this watershed tend to exhibit a relatively chemostatic C-Q pattern, except during the runoff-dominated Hurricane Irene, when they exhibit a diluting C-Q pattern. Finally, we summarize the vulnerability of these observations to shifts in sampling effort to highlight the utility and limitations of C-Q-derived export patterns.
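
    Concentration-discharge behaviour is conventionally summarised by the power law C = aQ^b, with the exponent b distinguishing roughly chemostatic (b near 0), mobilising (b > 0) and diluting (b < 0) patterns. A minimal fit with hypothetical TSS samples is sketched below; it is not the authors' analysis.

```python
import numpy as np

def cq_exponent(concentration, discharge):
    """Fit log C = log a + b log Q; b near 0 suggests chemostatic behaviour,
    b > 0 mobilisation, b < 0 dilution (standard C-Q interpretation)."""
    logq = np.log10(np.asarray(discharge, dtype=float))
    logc = np.log10(np.asarray(concentration, dtype=float))
    b, loga = np.polyfit(logq, logc, 1)
    return 10 ** loga, b

# hypothetical TSS samples (mg/L) across a range of flows (m^3/s)
q   = [0.5, 1.0, 3.0, 8.0, 20.0, 60.0]
tss = [5.0, 9.0, 30.0, 70.0, 160.0, 480.0]
a, b = cq_exponent(tss, q)
print(f"C = {a:.1f} * Q^{b:.2f}")
```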

  20. Surface radiation dose comparison of a dedicated extremity cone beam computed tomography (CBCT) device and a multidetector computed tomography (MDCT) machine in pediatric ankle and wrist phantoms

    PubMed Central

    Nagy, Eszter; Apfaltrer, Georg; Riccabona, Michael; Singer, Georg; Stücklschweiger, Georg; Guss, Helmuth; Sorantin, Erich

    2017-01-01

    Objectives To evaluate and compare surface doses of a cone beam computed tomography (CBCT) and a multidetector computed tomography (MDCT) device in pediatric ankle and wrist phantoms. Methods Thermoluminescent dosimeters (TLD) were used to measure and compare surface doses between CBCT and MDCT in a left ankle and a right wrist pediatric phantom. In both modalities adapted pediatric dose protocols were utilized to achieve realistic imaging conditions. All measurements were repeated three times to prove test-retest reliability. Additionally, objective and subjective image quality parameters were assessed. Results Average surface doses were 3.8 ±2.1 mGy for the ankle, and 2.2 ±1.3 mGy for the wrist in CBCT. The corresponding surface doses in optimized MDCT were 4.5 ±1.3 mGy for the ankle, and 3.4 ±0.7 mGy for the wrist. Overall, mean surface dose was significantly lower in CBCT (3.0 ±1.9 mGy vs. 3.9 ±1.2 mGy, p<0.001). Subjectively rated general image quality was not significantly different between the study protocols (p = 0.421), whereas objectively measured image quality parameters were in favor of CBCT (p<0.001). Conclusions Adapted extremity CBCT imaging protocols have the potential to fall below optimized pediatric ankle and wrist MDCT doses at comparable image qualities. These possible dose savings warrant further development and research in pediatric extremity CBCT applications. PMID:28570626

  1. Explicit Computations of Instantons and Large Deviations in Beta-Plane Turbulence

    NASA Astrophysics Data System (ADS)

    Laurie, J.; Bouchet, F.; Zaboronski, O.

    2012-12-01

    We use a path integral formalism and instanton theory in order to make explicit analytical predictions about large deviations and rare events in beta-plane turbulence. The path integral formalism is a concise way to obtain large deviation results in dynamical systems forced by random noise. In the simplest cases, it leads to the same results as the Freidlin-Wentzell theory, but it has a wider range of applicability. This approach is, however, usually severely limited by the complexity of the theoretical problems. As a consequence, it provides explicit results in a fairly limited number of models, often extremely simple ones with only a few degrees of freedom. Few exceptions exist outside the realm of equilibrium statistical physics. We will show that the barotropic model of beta-plane turbulence is one of these non-equilibrium exceptions. We describe sets of explicit solutions to the instanton equation and precise derivations of the action functional (or large deviation rate function). The reason such exact computations are possible is related to the existence of hidden symmetries and conservation laws for the instanton dynamics. We outline several applications of this approach. For instance, we compute explicitly the very low probability of observing flows with an energy much larger or smaller than the typical one. Moreover, we consider regimes for which the system has multiple attractors (corresponding to different numbers of alternating jets) and discuss the computation of transition probabilities between two such attractors. These extremely rare events are of the utmost importance, as the dynamics undergo qualitative macroscopic changes during such transitions.
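
    For readers unfamiliar with the formalism, the large deviation rate function referred to above is, in the Freidlin-Wentzell setting, the minimum of an action functional over paths connecting two states, and the instanton is the minimizing path. A generic statement for additive noise with identity covariance is given below; this is the standard textbook form, not the specific beta-plane action derived in the work:

    ```latex
    % Freidlin-Wentzell action for dX = b(X)\,dt + \sqrt{\epsilon}\,dW
    S_T[x] = \frac{1}{2} \int_0^T \left\| \dot{x}(t) - b\big(x(t)\big) \right\|^2 dt,
    \qquad
    \mathbb{P}\big(X \approx x\big) \asymp \exp\!\left(-\frac{S_T[x]}{\epsilon}\right),
    \qquad
    x^{\ast} = \arg\min_{x(0)=x_A,\; x(T)=x_B} S_T[x].
    ```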

  2. Facilitating mathematics learning for students with upper extremity disabilities using touch-input system.

    PubMed

    Choi, Kup-Sze; Chan, Tak-Yin

    2015-03-01

    To investigate the feasibility of using a tablet device as a user interface for students with upper extremity disabilities to input mathematics efficiently into a computer. A touch-input system using a tablet device as the user interface was proposed to assist these students in writing mathematics. User-switchable and context-specific keyboard layouts were designed to streamline the input process. The system could be integrated with conventional computer systems with only minor software setup. A two-week pre-post test study involving five participants was conducted to evaluate the performance of the system and collect user feedback. The mathematics input efficiency of the participants was found to improve during the experiment sessions. In particular, their performance in entering trigonometric expressions using the touch-input system was significantly better than that using conventional mathematics editing software with keyboard and mouse. The participants rated the touch-input system positively and were confident that they could operate it with ease given more practice. The proposed touch-input system provides a convenient way for students with hand impairment to write mathematics and has the potential to facilitate their mathematics learning. Implications for Rehabilitation: Students with upper extremity disabilities often face barriers to learning mathematics, which is largely based on handwriting. Conventional computer user interfaces are inefficient for them to input mathematics into a computer. A touch-input system with context-specific and user-switchable keyboard layouts was designed to improve the efficiency of mathematics input. Experimental results and user feedback suggested that the system has the potential to facilitate mathematics learning for the students.
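
    The core design idea, context-specific and user-switchable keyboard layouts, can be illustrated with a small sketch. The layout names and symbol sets below are hypothetical examples for illustration only, not the layouts actually implemented in the study:

    ```python
    # Minimal sketch of context-specific, user-switchable keyboard layouts
    LAYOUTS = {
        "arithmetic":   ["+", "-", "×", "÷", "=", "(", ")"],
        "trigonometry": ["sin", "cos", "tan", "θ", "°", "^", "√"],
        "fractions":    ["/", "^", "_", "(", ")"],
    }

    class MathKeyboard:
        def __init__(self, layout="arithmetic"):
            self.layout = layout
            self.buffer = []

        def switch_layout(self, layout):
            # User-switchable: pick the key set that fits the current context
            if layout not in LAYOUTS:
                raise ValueError(f"unknown layout: {layout}")
            self.layout = layout

        def press(self, key):
            # Only keys present in the active layout are accepted
            if key in LAYOUTS[self.layout]:
                self.buffer.append(key)
            return "".join(self.buffer)

    kb = MathKeyboard()
    kb.switch_layout("trigonometry")
    print(kb.press("sin"), kb.press("θ"))
    ```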

  3. Computing the Free Energy Barriers for Less by Sampling with a Coarse Reference Potential while Retaining Accuracy of the Target Fine Model.

    PubMed

    Plotnikov, Nikolay V

    2014-08-12

    Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at the reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed, but at a lower level of accuracy, from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and free-energy perturbation methods, which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of the high-accuracy free-energy surface are computed locally at the selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with the multistep linear response approximation method. This method is analytically shown to reproduce the results of the thermodynamic integration and free-energy interpolation methods, while being extremely simple to implement. Incorporating metadynamics sampling into the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy as computing the full potential of mean force.
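
    For context, the linear response approximation (LRA) estimator that underlies the free-energy positioning step has the standard form shown below; the multistep variant mentioned in the abstract applies this average over a chain of intermediate potentials between the coarse and fine models. The notation is generic and not taken from the paper:

    ```latex
    % LRA estimate of the free-energy change for moving from potential U_A to U_B,
    % where <.>_A denotes an ensemble average over configurations sampled on U_A
    \Delta G_{A \to B} \;\approx\; \tfrac{1}{2}\Big( \langle U_B - U_A \rangle_A \;+\; \langle U_B - U_A \rangle_B \Big)
    ```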

  4. Computing the Free Energy Barriers for Less by Sampling with a Coarse Reference Potential while Retaining Accuracy of the Target Fine Model

    PubMed Central

    2015-01-01

    Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at the reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed, but at a lower level of accuracy, from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and free-energy perturbation methods, which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of the high-accuracy free-energy surface are computed locally at the selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with the multistep linear response approximation method. This method is analytically shown to reproduce the results of the thermodynamic integration and free-energy interpolation methods, while being extremely simple to implement. Incorporating metadynamics sampling into the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy as computing the full potential of mean force. PMID:25136268

  5. Flexible and Secure Computer-Based Assessment Using a Single Zip Disk

    ERIC Educational Resources Information Center

    Ko, C. C.; Cheng, C. D.

    2008-01-01

    Electronic examination systems, which include Internet-based systems, require extremely complicated installation, configuration and maintenance of software as well as hardware. In this paper, we present the design and development of a flexible, easy-to-use and secure examination system (e-Test), in which any commonly used computer can be used as a…

  6. Abernethy malformation with portal vein aneurysm in a child

    PubMed Central

    Chandrashekhara, Sheragaru H.; Bhalla, Ashu Seith; Gupta, Arun Kumar; Vikash, C. S.; Kabra, Susheel Kumar

    2011-01-01

    Abernethy malformation is an extremely rare anomaly of the splanchnic venous system. We describe multidetector computed tomography findings of an incidentally detected Abernethy malformation with portal vein aneurysm in a two-and-a-half-year-old child. The computed tomography scan was performed for the evaluation of respiratory distress, poor growth, and loss of appetite. PMID:21430844

  7. The Effects of Routing and Scoring within a Computer Adaptive Multi-Stage Framework

    ERIC Educational Resources Information Center

    Dallas, Andrew

    2014-01-01

    This dissertation examined the overall effects of routing and scoring within a computer adaptive multi-stage framework (ca-MST). Testing in a ca-MST environment has become extremely popular in the testing industry. Testing companies enjoy its efficiency benefits compared to traditional linear testing and its quality-control features over…

  8. A centennial tribute to G.K. Gilbert's Hydraulic Mining Débris in the Sierra Nevada

    NASA Astrophysics Data System (ADS)

    James, L. A.; Phillips, J. D.; Lecce, S. A.

    2017-10-01

    G.K. Gilbert's (1917) classic monograph, Hydraulic-Mining Débris in the Sierra Nevada, is described and put into the context of modern geomorphic knowledge. The emphasis here is on large-scale applied fluvial geomorphology, but other key elements (e.g., coastal geomorphology) are also briefly covered. A brief synopsis outlines key elements of the monograph, followed by discussions of highly influential aspects including the integrated watershed perspective, the extreme example of anthropogenic sedimentation, the computation of a quantitative, semidistributed sediment budget, and the advent of sediment-wave theory. Although Gilbert did not address concepts of equilibrium and grade in much detail, the rivers of the northwestern Sierra Nevada were highly disrupted and thrown into a condition of nonequilibrium. Therefore, concepts of equilibrium and grade, for which Gilbert's early work is often cited, are discussed. Gilbert's work is put into the context of complex nonlinear dynamics in geomorphic systems and how these concepts can be used to interpret the nonequilibrium systems described by Gilbert. Broad, basin-scale studies were common in the period, but few were as quantitative and empirically rigorous or employed such a range of methodologies as PP105. None demonstrated such an extreme case of anthropogeomorphic change.

  9. Effects of packet retransmission with finite packet lifetime on traffic capacity in scale-free networks

    NASA Astrophysics Data System (ADS)

    Jiang, Zhong-Yuan; Ma, Jian-Feng

    Existing routing strategies such as the global dynamic routing [X. Ling, M. B. Hu, R. Jiang and Q. S. Wu, Phys. Rev. E 81, 016113 (2010)] can achieve very high traffic capacity at the cost of extremely long packet traveling delay. In many real complex networks, especially for real-time applications such as instant communication software, extremely long packet traveling times are unacceptable. In this work, we propose to assign a finite Time-to-Live (TTL) parameter to each packet. To guarantee that every packet arrives at its destination within its TTL, we assume that a packet is retransmitted by its source once its TTL expires. We employ source routing mechanisms in the traffic model to avoid the routing flaps induced by global dynamic routing. We perform extensive simulations to verify the proposed mechanisms. With small TTL, the effects of packet retransmission on network traffic capacity are obvious, and a phase transition from the free-flow state to the congested state occurs. To reduce the computation frequency of the routing table, we employ a computing cycle Tc within which the routing table is recomputed once. The simulation results show that the traffic capacity decreases with increasing Tc. Our work provides insight into the effects of packet retransmission with finite packet lifetime on traffic capacity in scale-free networks.
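
    A stripped-down sketch of the retransmission rule described above is shown below; the network topology, source routing, and queueing details of the actual traffic model are omitted, and all names and the toy next-hop rule are illustrative assumptions:

    ```python
    import random

    class Packet:
        def __init__(self, source, destination, ttl):
            self.source = source
            self.destination = destination
            self.ttl = ttl
            self.position = source

    def step(packet, next_hop, max_ttl):
        """Advance a packet one hop; retransmit from the source when its TTL expires."""
        packet.ttl -= 1
        if packet.ttl <= 0:
            # TTL expired: the source retransmits a fresh copy of the packet
            packet.position = packet.source
            packet.ttl = max_ttl
            return "retransmitted"
        packet.position = next_hop(packet.position, packet.destination)
        return "delivered" if packet.position == packet.destination else "in transit"

    # Toy usage on a ring of 10 nodes with a random chance of reaching the destination
    def random_next_hop(current, destination):
        return destination if random.random() < 0.2 else (current + 1) % 10

    pkt = Packet(source=0, destination=7, ttl=5)
    while step(pkt, random_next_hop, max_ttl=5) != "delivered":
        pass
    print("delivered to node", pkt.position)
    ```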

  10. A Computer-Adaptive Disability Instrument for Lower Extremity Osteoarthritis Research Demonstrated Promising Breadth, Precision and Reliability

    PubMed Central

    Jette, Alan M.; McDonough, Christine M.; Haley, Stephen M.; Ni, Pengsheng; Olarsch, Sippy; Latham, Nancy; Hambleton, Ronald K.; Felson, David; Kim, Young-jo; Hunter, David

    2012-01-01

    Objective: To develop and evaluate a prototype measure (OA-DISABILITY-CAT) for osteoarthritis research using Item Response Theory (IRT) and Computer Adaptive Test (CAT) methodologies. Study Design and Setting: We constructed an item bank consisting of 33 activities commonly affected by lower extremity (LE) osteoarthritis. A sample of 323 adults with LE osteoarthritis reported their degree of limitation in performing everyday activities and completed the Health Assessment Questionnaire-II (HAQ-II). We used confirmatory factor analyses to assess scale unidimensionality and IRT methods to calibrate the items and examine the fit of the data. Using CAT simulation analyses, we examined the performance of OA-DISABILITY-CATs of different lengths compared to the full item bank and the HAQ-II. Results: One distinct disability domain was identified. The 10-item OA-DISABILITY-CAT demonstrated a high degree of accuracy compared with the full item bank (r=0.99). The item bank and the HAQ-II scales covered a similar estimated scoring range. In terms of reliability, 95% of OA-DISABILITY reliability estimates were over 0.83, versus 0.60 for the HAQ-II. Except at the highest scores, the 10-item OA-DISABILITY-CAT demonstrated superior precision to the HAQ-II. Conclusion: The prototype OA-DISABILITY-CAT demonstrated promising measurement properties compared to the HAQ-II and is recommended for use in LE osteoarthritis research. PMID:19216052
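
    As background, a computer adaptive test of the kind described selects, at each step, the unadministered item that is most informative at the respondent's current ability estimate. The sketch below shows that selection rule for two-parameter logistic (2PL) IRT items; the item parameters are invented placeholders, not the OA-DISABILITY-CAT item bank:

    ```python
    import numpy as np

    def item_information(theta, a, b):
        """Fisher information of a 2PL item at ability level theta."""
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return a**2 * p * (1.0 - p)

    def select_next_item(theta, items, administered):
        """Pick the unadministered item with maximum information at theta."""
        best, best_info = None, -np.inf
        for idx, (a, b) in enumerate(items):
            if idx in administered:
                continue
            info = item_information(theta, a, b)
            if info > best_info:
                best, best_info = idx, info
        return best

    # Placeholder item bank: (discrimination a, difficulty b) pairs
    items = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.5)]
    print(select_next_item(theta=0.3, items=items, administered={0}))
    ```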

  11. PROMIS Physical Function Computer Adaptive Test Compared With Other Upper Extremity Outcome Measures in the Evaluation of Proximal Humerus Fractures in Patients Older Than 60 Years.

    PubMed

    Morgan, Jordan H; Kallen, Michael A; Okike, Kanu; Lee, Olivia C; Vrahas, Mark S

    2015-06-01

    To compare the PROMIS Physical Function Computer Adaptive Test (PROMIS PF CAT) with commonly used traditional PF measures for the evaluation of patients with proximal humerus fractures. Prospective. Two Level I trauma centers. Forty-seven patients older than 60 years with displaced proximal humerus fractures treated between 2006 and 2009. Evaluation included completion of the PROMIS PF CAT, the Constant Shoulder Score, the Disabilities of the Arm, Shoulder, and Hand (DASH) and the Short Musculoskeletal Functional Assessment (SMFA). The main outcome was the observed correlations among the administered PF outcome measures. On average, patients responded to 86 outcome-related items for this study: 4 for the PROMIS PF CAT (range: 4-8 items), 6 for the Constant Shoulder Score, 30 for the DASH, and 46 for the SMFA. Time to complete the PROMIS PF CAT (median completion time = 98 seconds) was significantly less than that for the DASH (median completion time = 336 seconds, P < 0.001) and for the SMFA (median completion time = 482 seconds, P < 0.001). PROMIS PF CAT scores correlated significantly, and with moderate-to-high magnitude, with all other PF outcome measure scores administered. This study suggests that using the PROMIS PF CAT as the sole PF outcome measure can yield an assessment of upper extremity function similar to those provided by traditional PF measures, while substantially reducing patient assessment time.

  12. Synoptic and meteorological drivers of extreme ozone concentrations over Europe

    NASA Astrophysics Data System (ADS)

    Otero, Noelia Felipe; Sillmann, Jana; Schnell, Jordan L.; Rust, Henning W.; Butler, Tim

    2016-04-01

    The present work assesses the relationship between local and synoptic meteorological conditions and surface ozone concentration over Europe in spring and summer months during the period 1998-2012, using a new interpolated data set of observed surface ozone concentrations over the European domain. Along with local meteorological conditions, the influence of large-scale atmospheric circulation on surface ozone is addressed through a set of airflow indices computed with a novel implementation of a grid-by-grid weather type classification across Europe. Drivers of surface ozone over the full distribution of maximum daily 8-hour average values are investigated, along with drivers of the extreme high percentiles and exceedances of air quality guideline thresholds. Three different regression techniques are applied: multiple linear regression to assess the drivers of maximum daily ozone, logistic regression to assess the probability of threshold exceedances, and quantile regression to estimate the meteorological influence on extreme values, as represented by the 95th percentile. The relative importance of the input parameters (predictors) is assessed by a backward stepwise regression procedure that allows the identification of the most important predictors in each model. Spatial patterns of model performance exhibit distinct variations between regions. The inclusion of ozone persistence is particularly relevant over Southern Europe. In general, the best model performance is found over Central Europe, where maximum temperature plays an important role as a driver of maximum daily ozone as well as its extreme values, especially during warmer months.
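
    Of the three regression techniques listed, quantile regression of the 95th percentile is perhaps the least familiar; a minimal sketch using statsmodels is given below. The synthetic data and the predictor names (`tmax`, `ozone_lag1`) are generic placeholders, not the exact meteorological covariates used in the study:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Placeholder daily data: maximum temperature, ozone persistence, and MDA8 ozone
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "tmax": rng.normal(25, 5, 500),
        "ozone_lag1": rng.normal(60, 15, 500),
    })
    df["mda8_ozone"] = 20 + 1.5 * df["tmax"] + 0.3 * df["ozone_lag1"] + rng.normal(0, 10, 500)

    # Quantile regression at the 95th percentile of daily maximum ozone
    X = sm.add_constant(df[["tmax", "ozone_lag1"]])
    res = sm.QuantReg(df["mda8_ozone"], X).fit(q=0.95)
    print(res.summary())
    ```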

  13. Has the Temperature Climate of the United States Become More Extreme?

    NASA Astrophysics Data System (ADS)

    Stevens, L. E.; Kunkel, K.; Vose, R. S.; Knight, R. W.

    2014-12-01

    Extreme heat has affected parts of the United States during recent summers, particularly 2011 and 2012. Severe cold has also occurred in recent years. This has created a perception that the temperature climate of the U.S. has become more extreme. Is this the case? We address this question by computing probability distribution functions (PDFs) for each season and evaluating temporal changes for the 20th and early 21st centuries using a new gridded monthly temperature data set. We examine changes in the mean, width, and shape of the PDFs for seven U.S. regions, as defined in the third National Climate Assessment. During the past 2-3 decades, there has been a shift toward more frequent very warm months, but this has been accompanied by a decrease in the occurrence of very cold months. Thus, overall we determine that the temperature climate of the U.S. has not become more extreme. The 1930s were an earlier period of frequent very warm months, but this was primarily a result of very warm daytime temperatures, while the occurrence of months with very high nighttime temperatures was not unusually large during that period. There are important regional variations in these results. In particular, the shift to more frequent very warm months is not predominant in the southeast U.S. annually or in parts of the central U.S. in the summer. This lack of warming is a feature of daytime maximum temperature, not nighttime minimum temperature.
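
    One way to make the "mean, width, and shape" comparison concrete is to compute the first three standardized moments of the seasonal temperature distribution for different periods, as sketched below. The input arrays are random placeholders standing in for the gridded monthly temperature data set described above:

    ```python
    import numpy as np
    from scipy import stats

    def pdf_summary(temps):
        """Mean, width (standard deviation), and shape (skewness) of a temperature sample."""
        temps = np.asarray(temps, dtype=float)
        return {
            "mean": temps.mean(),
            "width": temps.std(ddof=1),
            "skewness": stats.skew(temps),
        }

    # Placeholder: summer-mean temperature anomalies (degrees C) for two periods
    early_period = np.random.default_rng(1).normal(0.0, 1.0, 300)
    recent_period = np.random.default_rng(2).normal(0.4, 1.0, 300)
    print(pdf_summary(early_period))
    print(pdf_summary(recent_period))
    ```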

  14. Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duque, Earl P.N.; Whitlock, Brad J.

    High performance computers have for many years been on a trajectory that gives them extraordinary compute power through the addition of more and more compute cores. At the same time, other system parameters such as the amount of memory per core and bandwidth to storage have remained constant or have barely increased. This creates an imbalance in the computer, giving it the ability to compute far more data than it can reasonably save out due to time and storage constraints. While technologies have been invented to mitigate this problem (burst buffers, etc.), software has been adapting to employ in situ libraries which perform data analysis and visualization on simulation data while it is still resident in memory. This avoids ever having to pay the cost of writing many terabytes of data files. Instead, in situ processing enables the creation of more concentrated data products such as statistics, plots, and data extracts, which are all far smaller than the full-sized volume data. With the increasing popularity of in situ analysis, multiple in situ infrastructures have been created, each with its own mechanism for integrating with a simulation. To make it easier to instrument a simulation with multiple in situ infrastructures and include custom analysis algorithms, this project created the SENSEI framework.
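
    To illustrate the general in situ pattern the paragraph describes (analysis routines invoked on data still resident in the simulation's memory, producing only small data products), a generic, hypothetical callback sketch is shown below. It is not the SENSEI API; the names `register_analysis` and `on_timestep` are invented for illustration:

    ```python
    import numpy as np

    class InSituPipeline:
        """Hypothetical in situ hook: run analyses on in-memory fields, keep only small products."""

        def __init__(self):
            self.analyses = []

        def register_analysis(self, fn):
            self.analyses.append(fn)

        def on_timestep(self, step, fields):
            # fields: dict of field name -> ndarray living in simulation memory
            return {fn.__name__: fn(step, fields) for fn in self.analyses}

    def mean_pressure(step, fields):
        # Concentrated data product: a single statistic instead of the full volume
        return float(np.mean(fields["pressure"]))

    pipeline = InSituPipeline()
    pipeline.register_analysis(mean_pressure)
    products = pipeline.on_timestep(0, {"pressure": np.random.rand(64, 64, 64)})
    print(products)
    ```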

  15. First-Order SPICE Modeling of Extreme-Temperature 4H-SiC JFET Integrated Circuits

    NASA Technical Reports Server (NTRS)

    Neudeck, Philip G.; Spry, David J.; Chen, Liang-Yu

    2016-01-01

    A separate submission to this conference reports that 4H-SiC Junction Field Effect Transistor (JFET) digital and analog Integrated Circuits (ICs) with two levels of metal interconnect have reproducibly demonstrated electrical operation at 500 °C in excess of 1000 hours. While this progress expands the complexity and durability envelope of high temperature ICs, one important area for further technology maturation is the development of reasonably accurate and accessible computer-aided modeling and simulation tools for circuit design of these ICs. Towards this end, we report on the development and verification of 25 °C to 500 °C SPICE simulation models of first-order accuracy for this extreme-temperature durable 4H-SiC JFET IC technology. For maximum availability, the JFET IC modeling is implemented using the baseline-version SPICE NMOS LEVEL 1 model that is common to other variations of SPICE software and, importantly, includes the body-bias effect. The first-order accuracy of these device models is verified by direct comparison with measured experimental device characteristics.
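
    For reference, the SPICE LEVEL 1 model mentioned above is the square-law MOSFET model; in saturation the drain current and the threshold voltage (including the body-bias effect the authors rely on) take the standard textbook form below, with KP, VT0, lambda, gamma, and phi (the surface-potential parameter PHI) the usual LEVEL 1 parameters. This is the generic model form, not the fitted 4H-SiC JFET parameter set reported in the paper:

    ```latex
    % SPICE LEVEL 1 (square-law) model, saturation region
    I_D = \frac{KP}{2}\,\frac{W}{L}\,\big(V_{GS} - V_{T}\big)^{2}\,\big(1 + \lambda V_{DS}\big),
    \qquad
    V_{T} = V_{T0} + \gamma\left(\sqrt{\phi + V_{SB}} - \sqrt{\phi}\right)
    ```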

  16. Brain Tumour Segmentation based on Extremely Randomized Forest with high-level features.

    PubMed

    Pinto, Adriano; Pereira, Sergio; Correia, Higino; Oliveira, J; Rasteiro, Deolinda M L D; Silva, Carlos A

    2015-08-01

    Gliomas are among the most common and aggressive brain tumours. Segmentation of these tumours is important for surgery and treatment planning, but also for follow-up evaluations. However, it is a difficult task, given that their size and location are variable and the delineation of all tumour tissue is not trivial, even with all the different modalities of Magnetic Resonance Imaging (MRI). We propose a discriminative and fully automatic method for the segmentation of gliomas, using appearance- and context-based features to feed an Extremely Randomized Forest (Extra-Trees). Some of these features are computed over a non-linear transformation of the image. The proposed method was evaluated using the publicly available Challenge database from BraTS 2013, obtaining Dice scores of 0.83, 0.78 and 0.73 for the complete tumour, core and enhanced regions, respectively. Our results are competitive with other results reported using the same database.
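
    The classifier underlying the method, an Extremely Randomized Forest (Extra-Trees), is available in scikit-learn; the sketch below shows the generic voxel-wise training and prediction pattern with placeholder feature matrices, not the appearance- and context-based features or the BraTS data used by the authors:

    ```python
    import numpy as np
    from sklearn.ensemble import ExtraTreesClassifier

    # Placeholder voxel-wise features and labels
    # X: (n_voxels, n_features) appearance/context features; y: tissue class per voxel
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(5000, 20))
    y_train = rng.integers(0, 4, size=5000)   # e.g., 0 = background, 1..3 = tumour classes
    X_test = rng.normal(size=(1000, 20))

    clf = ExtraTreesClassifier(n_estimators=100, n_jobs=-1, random_state=0)
    clf.fit(X_train, y_train)
    predicted_labels = clf.predict(X_test)    # per-voxel class map, reshaped back to the image grid
    ```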

  17. Accretion and Outflow from a Magnetized, Neutrino Cooled Torus around the Gamma Ray Burst Central Engine

    NASA Astrophysics Data System (ADS)

    Janiuk, Agnieszka; Moscibrodzka, Monika

    Gamma Ray Bursts (GRBs) are extremely energetic transient events, visible from the most distant parts of the Universe. They are most likely powered by accretion at hyper-Eddington rates onto a newly born stellar-mass black hole. This central engine gives rise to powerful, high-Lorentz-factor jets that are responsible for the energetic gamma ray emission. We investigate the accretion flow evolution in the GRB central engine using 2D MHD simulations in general relativity. We compute the structure and evolution of the extremely hot and dense torus accreting onto the fast-spinning black hole, which launches the magnetized jets. We calculate the chemical structure of the disk and account for neutrino cooling. Our preliminary runs apply to the short GRB case (a remnant torus accreted after an NS-NS or NS-BH merger). We estimate the neutrino luminosity of such an event for a chosen disk and central BH mass.

  18. Real-time turbulence profiling with a pair of laser guide star Shack-Hartmann wavefront sensors for wide-field adaptive optics systems on large to extremely large telescopes.

    PubMed

    Gilles, L; Ellerbroek, B L

    2010-11-01

    Real-time turbulence profiling is necessary to tune tomographic wavefront reconstruction algorithms for wide-field adaptive optics (AO) systems on large to extremely large telescopes, and to perform a variety of image post-processing tasks involving point-spread function reconstruction. This paper describes a computationally efficient and accurate numerical technique inspired by the slope detection and ranging (SLODAR) method to perform this task in real time from properly selected Shack-Hartmann wavefront sensor measurements accumulated over a few hundred frames from a pair of laser guide stars, thus eliminating the need for an additional instrument. The algorithm is introduced, followed by a theoretical influence function analysis illustrating its impulse response to high-resolution turbulence profiles. Finally, its performance is assessed in the context of the Thirty Meter Telescope multi-conjugate adaptive optics system via end-to-end wave optics Monte Carlo simulations.
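
    The SLODAR-style idea referenced above recovers a turbulence profile from the spatial cross-correlation of slope measurements from two guide stars: turbulence at different altitudes appears as correlation peaks at different subaperture offsets. The sketch below computes such a slope cross-correlation with placeholder arrays; it is only a schematic of the principle, not the paper's algorithm:

    ```python
    import numpy as np
    from scipy.signal import correlate2d

    def slope_cross_correlation(slopes_a, slopes_b):
        """Normalized 2-D cross-correlation of x-slope maps from two Shack-Hartmann WFSs.

        Peaks at nonzero offsets correspond to turbulent layers whose altitude scales
        with the subaperture offset divided by the guide-star separation.
        """
        a = slopes_a - slopes_a.mean()
        b = slopes_b - slopes_b.mean()
        corr = correlate2d(a, b, mode="full")
        return corr / (np.std(a) * np.std(b) * a.size)

    # Placeholder 16x16 subaperture slope maps, averaged over a few hundred frames
    rng = np.random.default_rng(0)
    wfs1 = rng.normal(size=(16, 16))
    wfs2 = np.roll(wfs1, shift=3, axis=0) + 0.5 * rng.normal(size=(16, 16))
    corr_map = slope_cross_correlation(wfs1, wfs2)
    print(np.unravel_index(np.argmax(corr_map), corr_map.shape))
    ```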

  19. Dynamic inundation mapping of Hurricane Harvey flooding in the Houston metro area using hyper-resolution modeling and quantitative image reanalysis

    NASA Astrophysics Data System (ADS)

    Noh, S. J.; Lee, J. H.; Lee, S.; Zhang, Y.; Seo, D. J.

    2017-12-01

    Hurricane Harvey was one of the most extreme weather events in Texas history and left significant damage in Houston and the adjoining coastal areas. To better understand the relative impacts on urban flooding of the extreme amount and spatial extent of rainfall, the unique geography, land use, and storm surge, high-resolution water modeling is needed so that natural and man-made components are fully resolved. In this presentation, we reconstruct the spatiotemporal evolution of inundation during Hurricane Harvey using hyper-resolution modeling and quantitative image reanalysis. The two-dimensional urban flood model used is based on the dynamic wave approximation and 10 m-resolution terrain data, and is forced by radar-based multisensor quantitative precipitation estimates. The model domain includes Buffalo, Brays, Greens and White Oak Bayous in Houston. The model is run using hybrid parallel computing. To evaluate the dynamic inundation mapping, we combine various qualitative crowdsourced images and video footage with LiDAR-based terrain data.
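
    For reference, the "dynamic wave approximation" named above refers to retaining the full shallow-water (Saint-Venant) momentum terms rather than the kinematic or diffusive simplifications. A generic two-dimensional statement with a rainfall source term is given below; this is the textbook form, not the specific formulation of the model used in this work:

    ```latex
    % 2-D shallow-water continuity with rainfall R and infiltration I as source terms
    \frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x} + \frac{\partial (hv)}{\partial y} = R - I
    % x-momentum: the dynamic wave keeps all acceleration and pressure-gradient terms
    \frac{\partial (hu)}{\partial t} + \frac{\partial (hu^{2})}{\partial x} + \frac{\partial (huv)}{\partial y}
      = -\,g h \frac{\partial (h + z_b)}{\partial x} - g h S_{fx}
    ```

    Here h is water depth, (u, v) are depth-averaged velocities, z_b is bed elevation, and S_fx is the friction slope in x; the y-momentum equation is analogous.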

  20. Comparing the Consumption of CPU Hours with Scientific Output for the Extreme Science and Engineering Discovery Environment (XSEDE).

    PubMed

    Knepper, Richard; Börner, Katy

    2016-01-01

    This paper presents the results of a study that compares resource usage with publication output using data about the consumption of CPU cycles from the Extreme Science and Engineering Discovery Environment (XSEDE) and resulting scientific publications for 2,691 institutions/teams. Specifically, the datasets comprise a total of 5,374,032,696 central processing unit (CPU) hours run in XSEDE during July 1, 2011 to August 18, 2015 and 2,882 publications that cite the XSEDE resource. Three types of studies were conducted: a geospatial analysis of XSEDE providers and consumers, co-authorship network analysis of XSEDE publications, and bi-modal network analysis of how XSEDE resources are used by different research fields. Resulting visualizations show that a diverse set of consumers make use of XSEDE resources, that users of XSEDE publish together frequently, and that the users of XSEDE with the highest resource usage tend to be "traditional" high-performance computing (HPC) community members from astronomy, atmospheric science, physics, chemistry, and biology.
