Sample records for computing core activities

  1. The importance of actions and the worth of an object: dissociable neural systems representing core value and economic value.

    PubMed

    Brosch, Tobias; Coppin, Géraldine; Schwartz, Sophie; Sander, David

    2012-06-01

    Neuroeconomic research has delineated neural regions involved in the computation of value, referring to a currency for concrete choices and decisions ('economic value'). Research in psychology and sociology, on the other hand, uses the term 'value' to describe motivational constructs that guide choices and behaviors across situations ('core value'). As a first step towards an integration of these literatures, we compared the neural regions computing economic value and core value. Replicating previous work, economic value computations activated a network centered on the medial orbitofrontal cortex. Core value computations activated the medial prefrontal cortex, a region involved in the processing of self-relevant information, and the dorsal striatum, a region involved in action selection. Core value ratings correlated with activity in the precuneus and anterior prefrontal cortex, potentially reflecting the degree to which a core value is perceived as an internalized part of one's self-concept. Distributed activation patterns in the insula and ACC allowed individual core value types to be differentiated. These patterns may represent evaluation profiles reflecting the prototypical fundamental concerns expressed in the core value types. Our findings suggest mechanisms by which core values, as motivationally important long-term goals anchored in the self-schema, may have the behavioral power to drive decisions and behaviors in the absence of immediately rewarding behavioral options.

  2. Is scaffold hopping a reliable indicator for the ability of computational methods to identify structurally diverse active compounds?

    NASA Astrophysics Data System (ADS)

    Dimova, Dilyana; Bajorath, Jürgen

    2017-07-01

    Computational scaffold hopping aims to identify core structure replacements in active compounds. To evaluate scaffold hopping potential in principle, regardless of the computational methods that are applied, a global analysis of conventional scaffolds in analog series from compound activity classes was carried out. The majority of analog series were found to contain multiple scaffolds, thus enabling the detection of intra-series scaffold hops among closely related compounds. More than 1000 activity classes were found to contain increasing proportions of multi-scaffold analog series. Using such activity classes for scaffold hopping analysis is therefore likely to overestimate the scaffold hopping (core structure replacement) potential of computational methods, owing to an abundance of artificial scaffold hops that are possible within analog series.
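
    A minimal sketch of the bookkeeping this analysis implies (not the authors' actual protocol): given analog series with one precomputed scaffold per compound, count how many series contain more than one distinct scaffold and would therefore permit intra-series scaffold hops. All identifiers and scaffold strings below are hypothetical placeholders.

```python
# Hypothetical input: analog series id -> list of (compound id, scaffold).
# Scaffolds would normally be computed with a cheminformatics toolkit; here
# they are placeholder strings so the counting logic stays self-contained.
analog_series = {
    "series_A": [("cmpd1", "scaffold_X"), ("cmpd2", "scaffold_X"), ("cmpd3", "scaffold_Y")],
    "series_B": [("cmpd4", "scaffold_Z"), ("cmpd5", "scaffold_Z")],
}

multi_scaffold = {}
for series, members in analog_series.items():
    scaffolds = {scaffold for _, scaffold in members}
    if len(scaffolds) > 1:
        multi_scaffold[series] = scaffolds   # intra-series scaffold hops are possible here

print(f"{len(multi_scaffold)} of {len(analog_series)} analog series contain multiple scaffolds")
for series, scaffolds in multi_scaffold.items():
    print(f"  {series}: {sorted(scaffolds)}")
```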

  3. Determination of the neutron activation profile of core drill samples by gamma-ray spectrometry.

    PubMed

    Gurau, D; Boden, S; Sima, O; Stanga, D

    2018-04-01

    This paper provides guidance for determining the neutron activation profile of core drill samples taken from the biological shield of nuclear reactors using gamma-ray spectrometry measurements. In particular, it covers selecting a model of appropriate form to fit the data and using least-squares methods for model fitting. The activity profiles of two core samples taken from the biological shield of a nuclear reactor were determined. The effective activation depth and the total activity of the core samples, along with their uncertainties, were computed by Monte Carlo simulation.
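
    A hedged illustration of the fitting-plus-uncertainty workflow described above (not the authors' actual model or data): fit a simple exponential activation-depth profile by least squares, then propagate measurement uncertainty by Monte Carlo resampling. The numbers, and the choice of an exponential form, are assumptions for the example.

```python
# Hedged sketch: fit an exponential activation profile A(x) = A0 * exp(-x/L)
# to specific-activity data along a core sample, then propagate measurement
# uncertainty by Monte Carlo resampling. Data and model form are assumptions.
import numpy as np
from scipy.optimize import curve_fit

depth = np.array([2.0, 6.0, 10.0, 15.0, 20.0, 30.0])      # cm (hypothetical)
activity = np.array([51.0, 38.0, 27.0, 17.0, 11.0, 4.5])  # Bq/g (hypothetical)
sigma = 0.05 * activity                                    # assumed 5 % uncertainty

def profile(x, a0, relax_len):
    return a0 * np.exp(-x / relax_len)

popt, _ = curve_fit(profile, depth, activity, p0=(50.0, 10.0), sigma=sigma)

rng = np.random.default_rng(0)
integrals, eff_depths = [], []
for _ in range(2000):
    resampled = rng.normal(activity, sigma)
    (a0, relax_len), _ = curve_fit(profile, depth, resampled, p0=popt)
    integrals.append(a0 * relax_len)              # integral of A(x) over depth
    eff_depths.append(relax_len * np.log(10))     # depth where A drops to 10 % of A0

print(f"integrated activity : {np.mean(integrals):.0f} +/- {np.std(integrals):.0f} (arb. units)")
print(f"effective depth     : {np.mean(eff_depths):.1f} +/- {np.std(eff_depths):.1f} cm")
```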

  4. CANopen Controller IP Core

    NASA Astrophysics Data System (ADS)

    Caramia, Maurizio; Montagna, Mario; Furano, Gianluca; Winton, Alistair

    2010-08-01

    This paper describes the activities performed by Thales Alenia Space Italia, supported by the European Space Agency, in the definition of a CAN bus interface to be used on ExoMars. The final goal of this activity is the development of an IP core, to be used in a slave node, able to manage both the CAN bus data link and application layers entirely in hardware. The activity has focused on the needs of the ExoMars mission, where devices with different computational capabilities are all managed by the onboard computer through the CAN bus.

  5. Active Flash: Out-of-core Data Analytics on Flash Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boboila, Simona; Kim, Youngjae; Vazhkudai, Sudharshan S

    2012-01-01

    Next generation science will increasingly come to rely on the ability to perform efficient, on-the-fly analytics of data generated by high-performance computing (HPC) simulations, modeling complex physical phenomena. Scientific computing workflows are stymied by the traditional chaining of simulation and data analysis, creating multiple rounds of redundant reads and writes to the storage system, which grows in cost with the ever-increasing gap between compute and storage speeds in HPC clusters. Recent HPC acquisitions have introduced compute node-local flash storage as a means to alleviate this I/O bottleneck. We propose a novel approach, Active Flash, to expedite data analysis pipelines by migrating to the location of the data, the flash device itself. We argue that Active Flash has the potential to enable true out-of-core data analytics by freeing up both the compute core and the associated main memory. By performing analysis locally, dependence on limited bandwidth to a central storage system is reduced, while allowing this analysis to proceed in parallel with the main application. In addition, offloading work from the host to the more power-efficient controller reduces peak system power usage, which is already in the megawatt range and poses a major barrier to HPC system scalability. We propose an architecture for Active Flash, explore energy and performance trade-offs in moving computation from host to storage, demonstrate the ability of appropriate embedded controllers to perform data analysis and reduction tasks at speeds sufficient for this application, and present a simulation study of Active Flash scheduling policies. These results show the viability of the Active Flash model, and its capability to potentially have a transformative impact on scientific data analysis.

  6. Méthodes d'optimisation des paramètres 2D du réflecteur dans un réacteur à eau pressurisée (Methods for optimizing the 2D reflector parameters in a pressurized water reactor)

    NASA Astrophysics Data System (ADS)

    Clerc, Thomas

    With a third of the reactors in operation, the Pressurized Water Reactor (PWR) is today the most widely used reactor design in the world. This technology equips all 19 EDF power plants. PWRs fall into the category of thermal reactors, because it is mainly thermal neutrons that contribute to the fission reaction. Pressurized light water is used both as the moderator of the reaction and as the coolant. The active part of the core is composed of uranium slightly enriched in uranium-235. The reflector is a region surrounding the active core, containing mostly water and stainless steel. The purpose of the reflector is to protect the vessel from radiation and to slow down the neutrons and reflect them back into the core. Given that neutrons drive the fission reaction, the study of their behavior within the core is essential to understanding how the reactor works. The behavior of neutrons is governed by the transport equation, which is very complex to solve numerically and requires very long calculations. This is why the core codes used in this study solve simplified equations to approximate the neutron behavior in the core within an acceptable calculation time. In particular, we focus our study on the diffusion equation and approximate transport equations, such as the SPN or SN equations. The physical properties of the reflector are radically different from those of the fissile core, and this structural change causes a significant tilt in the neutron flux at the core/reflector interface. This is why it is very important to accurately model the reflector, in order to precisely recover the neutron behavior over the whole core. Existing reflector calculation techniques are based on the Lefebvre-Lebigot method. This method is only valid if the energy continuum of the neutrons is discretized into two energy groups and if the diffusion equation is used, and it leads to the calculation of a homogeneous reflector. The aim of this study is to create a computational scheme able to compute the parameters of heterogeneous, multi-group reflectors, with both diffusion and SPN/SN operators. For this purpose, two computational schemes are designed to perform such a reflector calculation. The strategy used in both schemes is to minimize the discrepancies between a power distribution computed with a core code and a reference distribution obtained with an APOLLO2 calculation based on the Method Of Characteristics (MOC). In both computational schemes, the optimization parameters, also called control variables, are the diffusion coefficients in each zone of the reflector for diffusion calculations, and the P1-corrected macroscopic total cross-sections in each zone of the reflector for SPN/SN calculations (or correction factors on these parameters). After a first validation of our computational schemes, the results are computed, always by optimizing the fast diffusion coefficient for each zone of the reflector. The tools of data assimilation have been used to reflect the different behavior of the solvers in the different parts of the core. Moreover, the reflector is refined into six separate zones, corresponding to its physical structure, giving six control variables for the optimization algorithms. Our computational schemes are thus able to compute heterogeneous, 2-group or multi-group reflectors, using diffusion or SPN/SN operators. The optimization performed reduces the discrepancies between the power distribution computed with the core codes and the reference power. However, there are two main limitations to this study: first, the homogeneous modeling of the reflector assemblies does not properly describe the physical structure near the core/reflector interface; second, the fissile assemblies are modeled in an infinite medium, and this model reaches its limit at the core/reflector interface. These two problems should be tackled in future studies. (Abstract shortened by UMI.)
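
    The optimization strategy described above can be illustrated with a small sketch: six control variables (one fast diffusion coefficient per reflector zone) are adjusted so that a computed power distribution matches a reference distribution. The "core code" below is a toy linear surrogate, not a real diffusion or SPN/SN solver, and all numbers are hypothetical.

```python
# Minimal sketch of the optimization strategy described above: adjust one fast
# diffusion coefficient per reflector zone (six control variables) so that a
# core-code power distribution matches a reference (APOLLO2/MOC-like) one.
# The "core code" here is a toy surrogate model, not a real solver.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
n_zones, n_assemblies = 6, 40

# Hypothetical sensitivity of assembly powers to the reflector coefficients.
sensitivity = rng.normal(0.0, 0.02, size=(n_assemblies, n_zones))
nominal_power = np.full(n_assemblies, 1.0)

def core_code_power(diff_coeffs):
    """Toy stand-in for a deterministic core solver (diffusion or SPN/SN)."""
    return nominal_power + sensitivity @ (diff_coeffs - 1.0)

# Reference distribution, playing the role of the APOLLO2/MOC calculation.
true_coeffs = np.array([1.10, 0.95, 1.05, 0.90, 1.20, 1.00])
reference_power = core_code_power(true_coeffs)

def residuals(diff_coeffs):
    return core_code_power(diff_coeffs) - reference_power

fit = least_squares(residuals, x0=np.ones(n_zones), bounds=(0.5, 2.0))
print("optimized fast diffusion coefficients per reflector zone:", fit.x.round(3))
```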

  7. Performance evaluation of throughput computing workloads using multi-core processors and graphics processors

    NASA Astrophysics Data System (ADS)

    Dave, Gaurav P.; Sureshkumar, N.; Blessy Trencia Lincy, S. S.

    2017-11-01

    The current trend in processor manufacturing focuses on multi-core architectures rather than increasing clock speed for performance improvement. Graphics processors have become commodity hardware for providing fast co-processing in computer systems. Developments in IoT, social networking web applications, and big data have created a huge demand for data processing, and such throughput-intensive applications inherently contain data-level parallelism, which is well suited to SIMD-based GPU architectures. This paper reviews the architectural aspects of multi-/many-core processors and graphics processors. Several case studies compare the performance of throughput computing applications using shared-memory programming in OpenMP and CUDA-based programming.

  8. Using Haptic and Auditory Interaction Tools to Engage Students with Visual Impairments in Robot Programming Activities

    ERIC Educational Resources Information Center

    Howard, A. M.; Park, Chung Hyuk; Remy, S.

    2012-01-01

    The robotics field represents the integration of multiple facets of computer science and engineering. Robotics-based activities have been shown to encourage K-12 students to consider careers in computing and have even been adopted as part of core computer-science curriculum at a number of universities. Unfortunately, for students with visual…

  9. Bypass flow computations on the LOFA transient in a VHTR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tung, Yu-Hsin; Johnson, Richard W.; Ferng, Yuh-Ming

    2014-01-01

    Bypass flow in the prismatic gas-cooled very high temperature reactor (VHTR) is not intentionally designed to occur, but is present in the gaps between graphite blocks. Previous studies of bypass flow in the core indicated that the cooling provided by flow in the bypass gaps had a significant effect on temperature and flow distributions under normal operating conditions. However, the flow and heat transport in the core change significantly after a Loss of Flow Accident (LOFA). This study examines the effect and role of the bypass flow after a LOFA in terms of the temperature and flow distributions and the heat transport out of the core by natural convection of the coolant, for a 1/12 symmetric section of the active core which is composed of images and mirror images of two sub-region models. The two sub-region models, 9 x 1/12 and 15 x 1/12 symmetric sectors of the active core, are employed as the CFD flow models, using computational grid systems of 70.2 million and 117 million nodes, respectively. It is concluded that the effect of bypass flow is significant for the initial conditions and the beginning of the LOFA, but the bypass flow has little effect after a long period of time in the transient computation of natural circulation.

  10. Identifying Dynamic Protein Complexes Based on Gene Expression Profiles and PPI Networks

    PubMed Central

    Li, Min; Chen, Weijie; Wang, Jianxin; Pan, Yi

    2014-01-01

    Identification of protein complexes from protein-protein interaction (PPI) networks has become a key problem for understanding cellular life in the postgenomic era. Many computational methods have been proposed for identifying protein complexes, but to date most of them have been applied to static PPI networks. However, proteins and their interactions are dynamic in reality, and identifying dynamic protein complexes is more meaningful and challenging. In this paper, a novel algorithm, named DPC, is proposed to identify dynamic protein complexes by integrating PPI data and gene expression profiles. According to the core-attachment assumption, proteins that are always active across the molecular cycle are regarded as core proteins. The protein-complex cores are identified from these always-active proteins by detecting dense subgraphs. Final protein complexes are extended from the protein-complex cores by adding attachments based on a topological measure of “closeness” and dynamic meaning. The protein complexes produced by our algorithm DPC contain two parts: a static core expressed throughout the molecular cycle and short-lived dynamic attachments. The proposed algorithm DPC was applied to data from Saccharomyces cerevisiae, and the experimental results show that DPC outperforms CMC, MCL, SPICi, HC-PIN, COACH, and Core-Attachment based on the validation of matching with known complexes and hF-measures. PMID:24963481
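
    A hedged sketch of the core-attachment idea summarized above (not the published DPC implementation): proteins active at every time point form candidate cores via dense subgraphs, and neighbours strongly connected to a core are added as attachments. The graph, activity sets and thresholds below are hypothetical.

```python
# Illustrative sketch (not the published DPC algorithm) of core-attachment
# complex detection: take proteins that are active at every time point, find
# dense subgraphs among them as cores, then attach strongly connected neighbours.
import networkx as nx

def detect_complexes(ppi_edges, active_sets, density_cut=0.7, attach_frac=0.5):
    graph = nx.Graph(ppi_edges)
    # "Always active" proteins: present in the active set of every time point.
    always_active = set.intersection(*active_sets)

    cores = []
    for component in nx.connected_components(graph.subgraph(always_active)):
        sub = graph.subgraph(component)
        if len(sub) >= 3 and nx.density(sub) >= density_cut:
            cores.append(set(component))

    complexes = []
    for core in cores:
        attachments = {
            v for v in set(graph) - core
            if sum(1 for u in graph.neighbors(v) if u in core) >= attach_frac * len(core)
        }
        complexes.append(core | attachments)
    return complexes

# Hypothetical toy input: PPI edges and per-time-point active protein sets.
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d"), ("d", "e")]
actives = [{"a", "b", "c", "d"}, {"a", "b", "c", "e"}, {"a", "b", "c"}]
print(detect_complexes(edges, actives))
```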

  11. [Towards computer-aided catalyst design: Three effective core potential studies of C-H activation]. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-12-31

    Research in the initial grant period focused on computational studies relevant to the selective activation of methane, the prime component of natural gas. Reaction coordinates for methane activation by experimental models were delineated, as well as the bonding and structure of complexes that effect this important reaction. This research, highlighted in the following sections, also provided the impetus for further development and application of methods for modeling metal-containing catalysts. Sections of the report describe the following: methane activation by multiple-bonded transition metal complexes; computational lanthanide chemistry; and methane activation by non-imido, multiple-bonded ligands.

  12. Free energy change of a dislocation due to a Cottrell atmosphere

    DOE PAGES

    Sills, R. B.; Cai, W.

    2018-03-07

    The free energy reduction of a dislocation due to a Cottrell atmosphere of solutes is computed using a continuum model. In this work, we show that the free energy change is composed of near-core and far-field components. The far-field component can be computed analytically using the linearized theory of solid solutions. Near the core the linearized theory is inaccurate, and the near-core component must be computed numerically. The influence of interactions between solutes in neighbouring lattice sites is also examined using the continuum model. We show that this model is able to reproduce atomistic calculations of the nickel–hydrogen system, predicting hydride formation on dislocations. The formation of these hydrides leads to dramatic reductions in the free energy. Lastly, the influence of the free energy change on a dislocation’s line tension is examined by computing the equilibrium shape of a dislocation shear loop and the activation stress for a Frank–Read source using discrete dislocation dynamics.

  13. Free energy change of a dislocation due to a Cottrell atmosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sills, R. B.; Cai, W.

    The free energy reduction of a dislocation due to a Cottrell atmosphere of solutes is computed using a continuum model. In this work, we show that the free energy change is composed of near-core and far-field components. The far-field component can be computed analytically using the linearized theory of solid solutions. Near the core the linearized theory is inaccurate, and the near-core component must be computed numerically. The influence of interactions between solutes in neighbouring lattice sites is also examined using the continuum model. We show that this model is able to reproduce atomistic calculations of the nickel–hydrogen system, predicting hydride formation on dislocations. The formation of these hydrides leads to dramatic reductions in the free energy. Lastly, the influence of the free energy change on a dislocation’s line tension is examined by computing the equilibrium shape of a dislocation shear loop and the activation stress for a Frank–Read source using discrete dislocation dynamics.

  14. Free energy change of a dislocation due to a Cottrell atmosphere

    NASA Astrophysics Data System (ADS)

    Sills, R. B.; Cai, W.

    2018-06-01

    The free energy reduction of a dislocation due to a Cottrell atmosphere of solutes is computed using a continuum model. We show that the free energy change is composed of near-core and far-field components. The far-field component can be computed analytically using the linearized theory of solid solutions. Near the core the linearized theory is inaccurate, and the near-core component must be computed numerically. The influence of interactions between solutes in neighbouring lattice sites is also examined using the continuum model. We show that this model is able to reproduce atomistic calculations of the nickel-hydrogen system, predicting hydride formation on dislocations. The formation of these hydrides leads to dramatic reductions in the free energy. Finally, the influence of the free energy change on a dislocation's line tension is examined by computing the equilibrium shape of a dislocation shear loop and the activation stress for a Frank-Read source using discrete dislocation dynamics.

  15. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
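
    A minimal single-process sketch of the scheme described in this record: cores are grouped into logical rings (one core per node in each ring), a global allreduce is performed within each ring, and each node then combines its cores' ring results locally. A real implementation would use MPI or the machine's network hardware; the cluster layout and data here are hypothetical.

```python
# Single-process simulation of the ring-based allreduce described above.
nodes = {                       # hypothetical cluster: node -> per-core contributions
    "node0": [1, 10],
    "node1": [2, 20],
    "node2": [3, 30],
}
n_cores = 2

# Logical rings: ring k contains core k of every node.
rings = [[(node, k) for node in nodes] for k in range(n_cores)]

# Global allreduce (sum) within each ring.
ring_results = {}
for ring in rings:
    total = sum(nodes[node][core] for node, core in ring)
    for member in ring:
        ring_results[member] = total        # every core in the ring gets the ring sum

# Local allreduce on each node combines the ring results of its cores.
final = {node: sum(ring_results[(node, k)] for k in range(n_cores)) for node in nodes}
print(final)    # every node ends with the sum over all cores of all nodes: 66
```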

  16. Active Flash: Performance-Energy Tradeoffs for Out-of-Core Processing on Non-Volatile Memory Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boboila, Simona; Kim, Youngjae; Vazhkudai, Sudharshan S

    2012-01-01

    In this abstract, we study the performance and energy tradeoffs involved in migrating data analysis into the flash device, a process we refer to as Active Flash. The Active Flash paradigm is similar to 'active disks', which have received considerable attention. Active Flash allows us to move processing closer to data, thereby minimizing data movement costs and reducing power consumption. It enables true out-of-core computation. The conventional definition of out-of-core solvers refers to an approach to processing data that is too large to fit in main memory and, consequently, requires access to disk. However, in Active Flash, processing outside the host CPU literally frees the core and achieves real 'out-of-core' analysis. Moving analysis to data has long been desirable, not just at this level, but at all levels of the system hierarchy. However, this requires a detailed study of the tradeoffs involved in achieving analysis turnaround under an acceptable energy envelope. To this end, we first need to evaluate whether there is enough computing power on the flash device to warrant such an exploration. Flash processors require decent computing power to run the internal logic pertaining to the Flash Translation Layer (FTL), which is responsible for operations such as address translation, garbage collection (GC) and wear-leveling. Modern SSDs are composed of multiple packages and several flash chips within a package. The packages are connected using multiple I/O channels to offer high I/O bandwidth. SSD computing power is also expected to be high enough to exploit such inherent internal parallelism within the drive to increase bandwidth and to handle fast I/O requests. More recently, SSD devices are being equipped with powerful processing units and are even embedded with multicore CPUs (e.g. the ARM Cortex-A9 embedded processor is advertised to reach 2 GHz frequency and deliver 5000 DMIPS; the OCZ RevoDrive X2 SSD has 4 SandForce controllers, each with a 780 MHz max frequency Tensilica core). Efforts that take advantage of the available computing cycles on the processors of SSDs to run auxiliary tasks other than actual I/O requests are beginning to emerge. Kim et al. investigate database scan operations in the context of processing on SSDs, and propose dedicated hardware logic to speed up scans. Also, cluster architectures have been explored which consist of low-power embedded CPUs coupled with small local flash to achieve fast, parallel access to data. Processor utilization on the SSD is highly dependent on workloads and, therefore, the processors can be idle during periods with no I/O accesses. We propose to use the available processing capability on the SSD to run tasks that can be offloaded from the host. This paper makes the following contributions: (1) we have investigated Active Flash and its potential to optimize the total energy cost, including power consumption on the host and the flash device; (2) we have developed analytical models to analyze the performance-energy tradeoffs for Active Flash, treating the SSD as a black box, which is particularly valuable due to the proprietary nature of the SSD internal hardware; and (3) we have enhanced a well-known SSD simulator (from MSR) to implement 'on-the-fly' data compression using Active Flash. Our results provide a window into striking a balance between energy consumption and application performance.
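
    A back-of-the-envelope sketch of the performance-energy tradeoff discussed above (the paper's analytical models are more detailed): compare running an analysis kernel on the host CPU with offloading it to the SSD controller, using assumed throughput and power figures.

```python
# Back-of-the-envelope comparison (hypothetical numbers): process 64 GB of
# simulation output either on a host CPU core or on the SSD's embedded
# controller (Active Flash style offload).
def run(data_gb, throughput_gb_per_s, watts):
    seconds = data_gb / throughput_gb_per_s
    return seconds, watts * seconds          # (time, energy in joules)

data_gb = 64.0
t_host, e_host = run(data_gb, throughput_gb_per_s=2.0, watts=95.0)   # fast, power-hungry
t_ssd, e_ssd = run(data_gb, throughput_gb_per_s=0.5, watts=4.0)      # slower, efficient
e_ssd += 10.0 * t_ssd   # assumed 10 W of host idle power while the SSD works

print(f"host analysis : {t_host:6.1f} s, {e_host:7.1f} J")
print(f"Active Flash  : {t_ssd:6.1f} s, {e_ssd:7.1f} J")
```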

  17. Parameter Sensitivity Study of the Unreacted-Core Shrinking Model: A Computer Activity for Chemical Reaction Engineering Courses

    ERIC Educational Resources Information Center

    Tudela, Ignacio; Bonete, Pedro; Fullana, Andres; Conesa, Juan Antonio

    2011-01-01

    The unreacted-core shrinking (UCS) model is employed to characterize fluid-particle reactions that are important in industry and research. An approach to understanding the UCS model by numerical methods is presented, which helps visualize the influence of the variables that control the overall heterogeneous process. Use of this approach in…
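
    The classical UCS relations for a spherical particle of unchanging size (Levenspiel's time-conversion expressions for gas-film-diffusion, ash-layer-diffusion and reaction control) lend themselves to exactly this kind of numerical activity. The sketch below inverts t/τ = f(X) numerically to tabulate conversion versus dimensionless time for the three controlling regimes; it is an illustration in the spirit of the paper, not the authors' code.

```python
# Conversion-time behaviour of the unreacted-core shrinking model for a
# spherical particle of unchanging size, under the three classical controlling
# resistances: gas-film diffusion, ash-layer diffusion, chemical reaction.
import numpy as np

def conversion_vs_time(t_over_tau, regime):
    """Solve t/tau = f(X) for conversion X on a grid (bisection; f is monotone)."""
    def f(x):
        if regime == "film":
            return x
        if regime == "ash":
            return 1 - 3 * (1 - x) ** (2 / 3) + 2 * (1 - x)
        if regime == "reaction":
            return 1 - (1 - x) ** (1 / 3)
        raise ValueError(regime)
    xs = []
    for target in t_over_tau:
        lo, hi = 0.0, 1.0
        for _ in range(60):                      # bisection on [0, 1]
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(mid) < target else (lo, mid)
        xs.append(0.5 * (lo + hi))
    return np.array(xs)

t = np.linspace(0.0, 1.0, 6)                      # dimensionless time t/tau
for regime in ("film", "ash", "reaction"):
    print(regime.ljust(9), conversion_vs_time(t, regime).round(3))
```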

  18. 21 CFR 1271.160 - Establishment and maintenance of a quality program.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... perform for management review a quality audit, as defined in § 1271.3(gg), of activities related to core CGTP requirements. (d) Computers. You must validate the performance of computer software for the intended use, and the performance of any changes to that software for the intended use, if you rely upon...

  19. 21 CFR 1271.160 - Establishment and maintenance of a quality program.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... perform for management review a quality audit, as defined in § 1271.3(gg), of activities related to core CGTP requirements. (d) Computers. You must validate the performance of computer software for the intended use, and the performance of any changes to that software for the intended use, if you rely upon...

  20. 21 CFR 1271.160 - Establishment and maintenance of a quality program.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... perform for management review a quality audit, as defined in § 1271.3(gg), of activities related to core CGTP requirements. (d) Computers. You must validate the performance of computer software for the intended use, and the performance of any changes to that software for the intended use, if you rely upon...

  21. 21 CFR 1271.160 - Establishment and maintenance of a quality program.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... perform for management review a quality audit, as defined in § 1271.3(gg), of activities related to core CGTP requirements. (d) Computers. You must validate the performance of computer software for the intended use, and the performance of any changes to that software for the intended use, if you rely upon...

  22. Computer-Animated Instruction and Students' Conceptual Change in Electrochemistry: Preliminary Qualitative Analysis

    ERIC Educational Resources Information Center

    Talib, Othman; Matthews, Robert; Secombe, Margaret

    2005-01-01

    This paper discusses the potential of applying computer-animated instruction (CAnI) as an effective conceptual change strategy in teaching electrochemistry in comparison to conventional lecture-based instruction (CLI). The core assumption in this study is that conceptual change in learners is an active, constructive process that is enhanced by the…

  23. Real science at the petascale.

    PubMed

    Saksena, Radhika S; Boghosian, Bruce; Fazendeiro, Luis; Kenway, Owain A; Manos, Steven; Mazzeo, Marco D; Sadiq, S Kashif; Suter, James L; Wright, David; Coveney, Peter V

    2009-06-28

    We describe computational science research that uses petascale resources to achieve scientific results at unprecedented scales and resolution. The applications span a wide range of domains, from investigation of fundamental problems in turbulence through computational materials science research to biomedical applications at the forefront of HIV/AIDS research and cerebrovascular haemodynamics. This work was mainly performed on the US TeraGrid 'petascale' resource, Ranger, at Texas Advanced Computing Center, in the first half of 2008 when it was the largest computing system in the world available for open scientific research. We have sought to use this petascale supercomputer optimally across application domains and scales, exploiting the excellent parallel scaling performance found on up to at least 32 768 cores for certain of our codes in the so-called 'capability computing' category as well as high-throughput intermediate-scale jobs for ensemble simulations in the 32-512 core range. Furthermore, this activity provides evidence that conventional parallel programming with MPI should be successful at the petascale in the short to medium term. We also report on the parallel performance of some of our codes on up to 65 536 cores on the IBM Blue Gene/P system at the Argonne Leadership Computing Facility, which has recently been named the fastest supercomputer in the world for open science.

  24. Computer-assisted learning in anatomy at the international medical school in Debrecen, Hungary: a preliminary report.

    PubMed

    Kish, Gary; Cook, Samuel A; Kis, Gréta

    2013-01-01

    The University of Debrecen's Faculty of Medicine has an international, multilingual student population with anatomy courses taught in English to all but Hungarian students. An elective computer-assisted gross anatomy course, the Computer Human Anatomy (CHA), has been taught in English at the Anatomy Department since 2008. This course focuses on an introduction to anatomical digital images along with clinical cases. This low-budget course has a large visual component using images from magnetic resonance imaging and computer axial tomogram scans, ultrasound clinical studies, and readily available anatomy software that presents topics which run in parallel to the university's core anatomy curriculum. From the combined computer images and CHA lecture information, students are asked to solve computer-based clinical anatomy problems in the CHA computer laboratory. A statistical comparison was undertaken of core anatomy oral examination performances of English program first-year medical students who took the elective CHA course and those who did not in the three academic years 2007-2008, 2008-2009, and 2009-2010. The results of this study indicate that the CHA-enrolled students improved their performance on required anatomy core curriculum oral examinations (P < 0.001), suggesting that computer-assisted learning may play an active role in anatomy curriculum improvement. These preliminary results have prompted ongoing evaluation of what specific aspects of CHA are valuable and which students benefit from computer-assisted learning in a multilingual and diverse cultural environment.

  25. APPLICATION OF COMPUTER-AIDED TOMOGRAPHY (CAT) AS A POTENTIAL INDICATOR OF MARINE MACRO BENTHIC ACTIVITY ALONG POLLUTION GRADIENTS

    EPA Science Inventory

    Sediment cores were imaged using a local hospital CAT scanner. These image data were transferred to a personal computer at our laboratory using specially developed software. Previously, we reported an inverse correlation (r2 = 0.98, P<0.01) between the average sediment x-ray atte...

  26. Parallelized computation for computer simulation of electrocardiograms using personal computers with multi-core CPU and general-purpose GPU.

    PubMed

    Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong

    2010-10-01

    Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel Core 2 Quad Q6600 CPU and a GeForce 8800GT GPU, with software support from OpenMP and CUDA. It was tested in three parallelization device setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus one CPU core, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setup (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setup (a), 16.8 in setup (b), and 20.0 in setup (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies.
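
    An illustrative sketch of load-prediction dynamic scheduling in the spirit of setup (c) above (the paper's scheduler and its prediction model are more sophisticated): each chunk of work is assigned to whichever device, CPU core or GPU, is predicted to finish it earliest, based on assumed throughputs.

```python
# Earliest-predicted-finish scheduling of work chunks across CPU cores and a GPU.
import heapq

def schedule(chunks, devices):
    """chunks: list of work sizes; devices: dict name -> predicted items/second."""
    free_at = [(0.0, name) for name in devices]   # (time device becomes free, name)
    heapq.heapify(free_at)
    assignment = {name: [] for name in devices}
    for i, size in enumerate(chunks):
        t_free, name = heapq.heappop(free_at)
        t_done = t_free + size / devices[name]    # predicted completion time
        assignment[name].append(i)
        heapq.heappush(free_at, (t_done, name))
    makespan = max(t for t, _ in free_at)
    return assignment, makespan

# Hypothetical predictions: the GPU is ~5x faster per chunk than one CPU core.
devices = {"cpu-0": 1.0, "cpu-1": 1.0, "cpu-2": 1.0, "cpu-3": 1.0, "gpu": 5.0}
assignment, makespan = schedule([100] * 40, devices)
print({name: len(ids) for name, ids in assignment.items()}, "makespan:", makespan)
```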

  27. Validation of the MCNP computational model for neutron flux distribution with the neutron activation analysis measurement

    NASA Astrophysics Data System (ADS)

    Tiyapun, K.; Chimtin, M.; Munsorn, S.; Somchit, S.

    2015-05-01

    The objective of this work is to demonstrate a method for validating the predicted neutron flux distribution in the irradiation tubes of the TRIGA research reactor (TRR-1/M1) using an MCNP computer code model. The reaction rates used in the experiment include the 27Al(n, α)24Na and 197Au(n, γ)198Au reactions. Aluminium (99.9 wt%) and gold (0.1 wt%) foils, as well as gold foils covered with cadmium, were irradiated in 9 locations in the core, referred to as CT, C8, C12, F3, F12, F22, F29, G5, and G33. The experimental results were compared to calculations performed using MCNP with a detailed geometrical model of the reactor core. The experimental and calculated normalized reaction rates in the reactor core are in good agreement for both reactions, showing that the material and geometrical properties of the reactor core are modelled very well. The results indicated that the difference between the experimental measurements and the calculation using the MCNP geometrical model of the reactor core was below 10%. In conclusion, the MCNP computational model used to calculate the neutron flux and reaction rate distributions in the reactor core can be used in the future for other reactor core parameters, including neutron spectrum calculations, dose rate calculations, power peaking factor calculations and optimization of research reactor utilization, with confidence in the accuracy and reliability of the calculation.
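
    The comparison metric in such validation studies can be expressed as calculated-to-experimental (C/E) ratios of the normalized reaction rates per irradiation position; the sketch below uses hypothetical placeholder numbers, not the measured TRR-1/M1 data.

```python
# C/E ratio comparison of normalized reaction rates per irradiation position.
import numpy as np

positions = ["CT", "C8", "C12", "F3", "F12", "F22", "F29", "G5", "G33"]
measured = np.array([1.00, 0.91, 0.87, 0.64, 0.55, 0.52, 0.49, 0.58, 0.44])  # hypothetical
mcnp = np.array([1.00, 0.94, 0.85, 0.61, 0.57, 0.50, 0.51, 0.56, 0.46])      # hypothetical

ce = mcnp / measured
for pos, ratio in zip(positions, ce):
    print(f"{pos:>3}: C/E = {ratio:5.3f}  ({100 * (ratio - 1):+.1f} %)")
print("max |difference|:", f"{100 * np.max(np.abs(ce - 1)):.1f} %")
```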

  28. 12 CFR 567.12 - Purchased credit card relationships, servicing assets, intangible assets (other than purchased...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and core capital. (b) Computation of core and tangible capital. (1) Purchased credit card relationships may be included (that is, not deducted) in computing core capital in accordance with the... restrictions in this section, mortgage servicing assets may be included in computing core and tangible capital...

  29. Synthetic Core Promoters as Universal Parts for Fine-Tuning Expression in Different Yeast Species

    PubMed Central

    2016-01-01

    Synthetic biology and metabolic engineering experiments frequently require the fine-tuning of gene expression to balance and optimize protein levels of regulators or metabolic enzymes. A key concept of synthetic biology is the development of modular parts that can be used in different contexts. Here, we have applied a computational multifactor design approach to generate de novo synthetic core promoters and 5′ untranslated regions (UTRs) for yeast cells. In contrast to upstream cis-regulatory modules (CRMs), core promoters are typically not subject to specific regulation, making them ideal engineering targets for gene expression fine-tuning. 112 synthetic core promoter sequences were designed on the basis of the sequence/function relationship of natural core promoters, nucleosome occupancy and the presence of short motifs. The synthetic core promoters were fused to the Pichia pastoris AOX1 CRM, and the resulting activity spanned more than a 200-fold range (0.3% to 70.6% of the wild type AOX1 level). The top-ten synthetic core promoters with highest activity were fused to six additional CRMs (three in P. pastoris and three in Saccharomyces cerevisiae). Inducible CRM constructs showed significantly higher activity than constitutive CRMs, reaching up to 176% of natural core promoters. Comparing the activity of the same synthetic core promoters fused to different CRMs revealed high correlations only for CRMs within the same organism. These data suggest that modularity is maintained to some extent but only within the same organism. Due to the conserved role of eukaryotic core promoters, this rational design concept may be transferred to other organisms as a generic engineering tool. PMID:27973777

  30. Estimating the spatial distribution of soil organic matter density and geochemical properties in a polygonal shaped Arctic Tundra using core sample analysis and X-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Soom, F.; Ulrich, C.; Dafflon, B.; Wu, Y.; Kneafsey, T. J.; López, R. D.; Peterson, J.; Hubbard, S. S.

    2016-12-01

    The Arctic tundra, with its permafrost-dominated soils, is one of the regions most affected by global climate change and, in turn, can also influence the changing climate through biogeochemical processes, including greenhouse gas release or storage. Characterization of shallow permafrost distribution and characteristics is required for predicting ecosystem feedbacks to a changing climate over decadal to century timescales, because they can drive active layer deepening and land surface deformation, which in turn can significantly affect hydrological and biogeochemical responses, including greenhouse gas dynamics. In this study, part of the Next-Generation Ecosystem Experiment (NGEE-Arctic), we use X-ray computed tomography (CT) to estimate the wet bulk density of cores extracted from a field site near Barrow, AK, which extend 2-3 m through the active layer into the permafrost. We use multi-dimensional relationships inferred from destructive core sample analysis to estimate organic matter density, dry bulk density, ice content and some geochemical properties from nondestructive CT scans along the entire length of the cores, coverage that was not obtained by the spatially limited destructive laboratory analysis. Multi-parameter cross-correlations showed good agreement between soil properties estimated from CT scans and properties obtained through destructive sampling. Soil properties estimated from cores located in different types of polygons provide valuable information about the vertical distribution of soil and permafrost properties as a function of geomorphology.

  31. Performing a local reduction operation on a parallel computer

    DOEpatents

    Blocksome, Michael A; Faraj, Daniel A

    2013-06-04

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.

  32. Performing a local reduction operation on a parallel computer

    DOEpatents

    Blocksome, Michael A.; Faraj, Daniel A.

    2012-12-11

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.

  33. Ensemble-Based Computational Approach Discriminates Functional Activity of p53 Cancer and Rescue Mutants

    PubMed Central

    Demir, Özlem; Baronio, Roberta; Salehi, Faezeh; Wassman, Christopher D.; Hall, Linda; Hatfield, G. Wesley; Chamberlin, Richard; Kaiser, Peter; Lathrop, Richard H.; Amaro, Rommie E.

    2011-01-01

    The tumor suppressor protein p53 can lose its function upon single-point missense mutations in the core DNA-binding domain (“cancer mutants”). Activity can be restored by second-site suppressor mutations (“rescue mutants”). This paper relates the functional activity of p53 cancer and rescue mutants to their overall molecular dynamics (MD), without focusing on local structural details. A novel global measure of protein flexibility for the p53 core DNA-binding domain, the number of clusters at a certain RMSD cutoff, was computed by clustering over 0.7 µs of explicitly solvated all-atom MD simulations. For wild-type p53 and a sample of p53 cancer or rescue mutants, the number of clusters was a good predictor of in vivo p53 functional activity in cell-based assays. This number-of-clusters (NOC) metric was strongly correlated (r2 = 0.77) with reported values of experimentally measured ΔΔG protein thermodynamic stability. Interpreting the number of clusters as a measure of protein flexibility: (i) p53 cancer mutants were more flexible than wild-type protein, (ii) second-site rescue mutations decreased the flexibility of cancer mutants, and (iii) negative controls of non-rescue second-site mutants did not. This new method reflects the overall stability of the p53 core domain and can discriminate which second-site mutations restore activity to p53 cancer mutants. PMID:22028641
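
    A hedged illustration of a "number of clusters at a given RMSD cutoff" flexibility metric (the paper's exact clustering protocol may differ): a simple leader-style clustering over a pairwise RMSD matrix, where more clusters at a fixed cutoff indicates a more flexible ensemble. The toy "frames" are random points, purely for demonstration.

```python
# Count clusters of MD frames at a fixed RMSD cutoff (leader-style clustering).
import numpy as np

def number_of_clusters(rmsd, cutoff):
    """rmsd: symmetric (n_frames x n_frames) matrix; returns the cluster count."""
    unassigned = set(range(rmsd.shape[0]))
    clusters = 0
    while unassigned:
        idx = list(unassigned)
        # pick the frame with the most neighbours within the cutoff as the leader
        _, leader = max((np.sum(rmsd[i, idx] <= cutoff), i) for i in idx)
        members = {i for i in idx if rmsd[leader, i] <= cutoff}
        unassigned -= members
        clusters += 1
    return clusters

# Hypothetical toy ensemble: random "frames" embedded in 3D, RMSD ~ distance.
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 3))
rmsd = np.linalg.norm(frames[:, None, :] - frames[None, :, :], axis=-1)
print("clusters at cutoff 1.0:", number_of_clusters(rmsd, 1.0))
print("clusters at cutoff 2.0:", number_of_clusters(rmsd, 2.0))
```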

  34. ElemeNT: a computational tool for detecting core promoter elements.

    PubMed

    Sloutskin, Anna; Danino, Yehuda M; Orenstein, Yaron; Zehavi, Yonathan; Doniger, Tirza; Shamir, Ron; Juven-Gershon, Tamar

    2015-01-01

    Core promoter elements play a pivotal role in the transcriptional output, yet they are often detected manually within sequences of interest. Here, we present 2 contributions to the detection and curation of core promoter elements within given sequences. First, the Elements Navigation Tool (ElemeNT) is a user-friendly web-based, interactive tool for prediction and display of putative core promoter elements and their biologically-relevant combinations. Second, the CORE database summarizes ElemeNT-predicted core promoter elements near CAGE and RNA-seq-defined Drosophila melanogaster transcription start sites (TSSs). ElemeNT's predictions are based on biologically-functional core promoter elements, and can be used to infer core promoter compositions. ElemeNT does not assume prior knowledge of the actual TSS position, and can therefore assist in annotation of any given sequence. These resources, freely accessible at http://lifefaculty.biu.ac.il/gershon-tamar/index.php/resources, facilitate the identification of core promoter elements as active contributors to gene expression.
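
    A toy motif scan in the spirit of core promoter element detection is shown below; it is not the ElemeNT algorithm or its element models, and the consensus strings are common textbook approximations used purely for illustration.

```python
# Toy core-promoter-element scan: match IUPAC consensus strings with regex.
import re

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "[AG]", "Y": "[CT]", "W": "[AT]", "S": "[CG]",
         "K": "[GT]", "M": "[AC]", "B": "[CGT]", "D": "[AGT]",
         "H": "[ACT]", "V": "[ACG]", "N": "[ACGT]"}

def consensus_to_regex(consensus):
    return re.compile("".join(IUPAC[c] for c in consensus))

ELEMENTS = {
    # Approximate textbook consensus sequences, for illustration only.
    "TATA box (approx.)": consensus_to_regex("TATAWAAR"),
    "Initiator (approx.)": consensus_to_regex("TCAGTY"),
}

def scan(sequence):
    hits = []
    for name, pattern in ELEMENTS.items():
        for match in pattern.finditer(sequence.upper()):
            hits.append((name, match.start(), match.group()))
    return hits

print(scan("ggctataaaaggcgcgttcagtcgca"))
```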

  35. A highly efficient multi-core algorithm for clustering extremely large datasets

    PubMed Central

    2010-01-01

    Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities of current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray type data and categorical SNP data. Our new shared-memory parallel algorithms are shown to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. The computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy, compared to single-core implementations and a recently published network-based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that, using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer. PMID:20370922
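
    A minimal sketch of spreading the k-means assignment step across cores (the published algorithms are Java-based and built on transactional-memory design principles; this Python sketch only illustrates the chunk-per-core parallelization of the distance computations, on synthetic data).

```python
# Multi-core k-means assignment step: split the data into chunks, one per worker.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def assign_chunk(args):
    chunk, centers = args
    d = np.linalg.norm(chunk[:, None, :] - centers[None, :, :], axis=-1)
    return np.argmin(d, axis=1)

def kmeans(data, k, iters=10, workers=4):
    rng = np.random.default_rng(0)
    centers = data[rng.choice(len(data), size=k, replace=False)]
    chunks = np.array_split(data, workers)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for _ in range(iters):
            labels = np.concatenate(list(pool.map(assign_chunk, [(c, centers) for c in chunks])))
            centers = np.array([data[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
    return centers, labels

if __name__ == "__main__":
    data = np.random.default_rng(1).normal(size=(100_000, 10))
    centers, labels = kmeans(data, k=5)
    print("cluster sizes:", np.bincount(labels))
```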

  36. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad

    2013-07-09

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer, each node including at least two processing cores, that include: establishing, for each node, a plurality of logical rings, each ring including a different set of at least one core on that node, each ring including the cores on at least two of the nodes; iteratively for each node: assigning each core of that node to one of the rings established for that node to which the core has not previously been assigned, and performing, for each ring for that node, a global allreduce operation using contribution data for the cores assigned to that ring or any global allreduce results from previous global allreduce operations, yielding current global allreduce results for each core; and performing, for each node, a local allreduce operation using the global allreduce results.

  37. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad

    2013-02-12

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer, each node including at least two processing cores, that include: performing, for each node, a local reduction operation using allreduce contribution data for the cores of that node, yielding, for each node, a local reduction result for one or more representative cores for that node; establishing one or more logical rings among the nodes, each logical ring including only one of the representative cores from each node; performing, for each logical ring, a global allreduce operation using the local reduction result for the representative cores included in that logical ring, yielding a global allreduce result for each representative core included in that logical ring; and performing, for each node, a local broadcast operation using the global allreduce results for each representative core on that node.

  38. Performance Prediction Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chennupati, Gopinath; Santhi, Nanadakishore; Eidenbenz, Stephen

    The Performance Prediction Toolkit (PPT) is a scalable co-design tool that contains the hardware and middleware models, which accept proxy applications as input for runtime prediction. PPT relies on Simian, a parallel discrete event simulation engine in Python or Lua, that uses the process concept, where each computing unit (host, node, core) is a Simian entity. Processes perform their tasks through message exchanges to remain active, sleep, wake up, begin and end. The PPT hardware model of a compute core (such as a Haswell core) consists of a set of parameters, such as clock speed, memory hierarchy levels, their respective sizes, cache lines, access times for different cache levels, average cycle counts of ALU operations, etc. These parameters are ideally read off a spec sheet or are learned using regression models trained on hardware counter (PAPI) data. The compute core model offers an API to the software model, a function called time_compute(), which takes as input a tasklist. A tasklist is an unordered set of ALU and other CPU-type operations (in particular virtual memory loads and stores). The PPT application model mimics the loop structure of the application and replaces the computational kernels with a call to the hardware model's time_compute() function, giving as input tasklists that model the compute kernel. A PPT application model thus consists of tasklists representing kernels and the higher-level loop structure that we like to think of as pseudo code. The key challenge for the hardware model's time_compute() function is to translate virtual memory accesses into actual cache hierarchy level hits and misses. PPT also contains another CPU-core-level hardware model, the Analytical Memory Model (AMM). The AMM solves this challenge soundly, whereas our previous alternatives explicitly included the L1, L2 and L3 hit rates as inputs to the tasklists. Explicit hit rates inevitably only reflect the application modeler's best guess, perhaps informed by a few small test problems using hardware counters; also, hard-coded hit rates make the hardware model insensitive to changes in cache sizes. Alternatively, we use reuse distance distributions in the tasklists. In general, reuse profiles require the application modeler to run a very expensive trace analysis on the real code, which realistically can be done at best for small examples.
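
    A highly simplified, hypothetical sketch of the time_compute(tasklist) idea described above: a toy core model converts operation counts and memory accesses, with an assumed cache hit rate, into a predicted execution time. The real PPT hardware models, tasklist format and reuse-distance treatment are considerably richer.

```python
# Toy core model in the spirit of PPT's time_compute(); all parameters are assumed.
class ToyCoreModel:
    def __init__(self, clock_hz=2.4e9, cycles_per_alu=1.0,
                 l1_hit=0.95, l1_cycles=4, dram_cycles=200):
        self.clock_hz = clock_hz
        self.cycles_per_alu = cycles_per_alu
        self.l1_hit = l1_hit            # assumed hit rate (PPT instead derives this
        self.l1_cycles = l1_cycles      # from reuse-distance distributions)
        self.dram_cycles = dram_cycles

    def time_compute(self, tasklist):
        """tasklist: dict of operation counts, e.g. {'alu': 1e9, 'mem': 2e8}."""
        cycles = tasklist.get("alu", 0) * self.cycles_per_alu
        mem = tasklist.get("mem", 0)
        cycles += mem * (self.l1_hit * self.l1_cycles +
                         (1 - self.l1_hit) * self.dram_cycles)
        return cycles / self.clock_hz

# Application model: mimic a kernel's loop structure and replace its body with
# a tasklist describing the work done per call (hypothetical per-call costs).
core = ToyCoreModel()
n = 1_000_000
kernel_tasklist = {"alu": 8 * n, "mem": 3 * n}
predicted = sum(core.time_compute(kernel_tasklist) for _ in range(10))  # 10 timesteps
print(f"predicted runtime: {predicted:.3f} s")
```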

  39. 2007 international meeting on Reduced Enrichment for Research and Test Reactors (RERTR). Abstracts and available papers presented at the meeting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2008-07-15

    The Meeting papers discuss research and test reactor fuel performance, manufacturing and testing. Some of the main topics are: conversion from HEU to LEU in different reactors and corresponding problems and activities; flux performance and core lifetime analysis with HEU and LEU fuels; physics and safety characteristics; measurement of gamma field parameters in core with LEU fuel; nondestructive analysis of RERTR fuel; thermal hydraulic analysis; fuel interactions; transient analyses and thermal hydraulics for HEU and LEU cores; microstructure research reactor fuels; post irradiation analysis and performance; computer codes and other related problems.

  40. Remote file inquiry (RFI) system

    NASA Technical Reports Server (NTRS)

    1975-01-01

    System interrogates and maintains user-definable data files from remote terminals, using an English-like, free-form query language easily learned by persons not proficient in computer programming. System operates in asynchronous mode, allowing any number of inquiries, within the limits of available core memory, to be active concurrently.

  41. Design of inductive sensors for tongue control system for computers and assistive devices.

    PubMed

    Lontis, Eugen R; Struijk, Lotte N S A

    2010-07-01

    The paper introduces a novel design of air-core inductive sensors in printed circuit board (PCB) technology for a tongue control system. The tongue control system provides a quadriplegic person with a keyboard and a joystick type of mouse for interaction with a computer or for control of an assistive device. Activation of the inductive sensors was performed with a cylindrical, soft ferromagnetic material (activation unit). Comparative analysis of the inductive sensors in PCB technology with existing hand-made inductive sensors was performed with respect to inductance, resistance, and sensitivity to activation when the activation unit was placed in the center of the sensor. Optimisation of the activation unit was performed in a finite element model. PCBs with air-core inductive sensors were manufactured in a 10-layer technology with 100 μm and 120 μm line widths. These sensors provided quality signals that could drive the electronics of the hand-made sensors. Furthermore, changing the geometry of the sensors allowed generation of variable signals correlated with the 2D movement of the activation unit at the sensors' surface. PCB technology for inductive sensors allows flexibility in design, automation of production and ease of possible integration with supplying electronics. The basic switch function of the inductive sensor can be extended to two-dimensional movement detection for pointing devices.

  42. Transfluxor circuit amplifies sensing current for computer memories

    NASA Technical Reports Server (NTRS)

    Milligan, G. C.

    1964-01-01

    To transfer data from the magnetic memory core to an independent core, a reliable sensing amplifier has been developed. Later the data in the independent core is transferred to the arithmetical section of the computer.

  43. Atomistic tight-binding theory of excitonic splitting energies in CdX(X = Se, S and Te)/ZnS core/shell nanocrystals

    NASA Astrophysics Data System (ADS)

    Sukkabot, Worasak; Pinsook, Udomsilp

    2017-01-01

    Using the atomistic tight-binding theory (TB) and a configuration interaction (CI) description, we numerically compute the excitonic splitting of CdX(X = Se, S and Te)/ZnS core/shell nanocrystals, with the objective of explaining how the choice of core material and the growth-shell thickness allow detailed manipulation of the dark-dark (DD), dark-bright (DB) and bright-bright (BB) excitonic splittings, which is beneficial for active applications in quantum information. To analyze the splitting of the excitonic states, the optical band gaps, ground-state wave function overlaps and atomistic electron-hole interactions are computed. Based on the atomistic computations, the single-particle and excitonic gaps are mainly reduced with increasing ZnS shell thickness owing to the quantum confinement. From higher to lower energies, the order of the single-particle gaps is CdSe/ZnS, CdS/ZnS and CdTe/ZnS core/shell nanocrystals, while that of the excitonic gaps is CdS/ZnS, CdSe/ZnS and CdTe/ZnS core/shell nanocrystals, because of the atomistic electron-hole interaction. The strongest electron-hole interactions are observed in CdSe/ZnS core/shell nanocrystals. In addition, the computational results underline that the energies of the dark-dark (DD), dark-bright (DB) and bright-bright (BB) excitonic splittings are generally reduced with increasing ZnS growth-shell thickness, as described by the trend of the electron-hole exchange interaction. The high-to-low ordering of the excitonic splittings is CdSe/ZnS, CdTe/ZnS and CdS/ZnS core/shell nanocrystals, following the trends in the electron-hole exchange interaction and the overlaps of the electron-hole wave functions. Based on these calculations, CdS/ZnS core/shell nanocrystals are expected to be the best candidates as a source of entangled photons. Finally, this comprehensive information on the excitonic splitting can guide the selection of suitable core/shell nanocrystals for entangled-photon generation in quantum information applications.

  44. Nanocrystal Core Lipoprotein Biomimetics for Imaging of Lipoproteins and Associated Diseases.

    PubMed

    Fay, Francois; Sanchez-Gaytan, Brenda L; Cormode, David P; Skajaa, Torjus; Fisher, Edward A; Fayad, Zahi A; Mulder, Willem J M

    2013-02-01

    Lipoproteins are natural nanoparticles composed of phospholipids and apolipoproteins that transport lipids throughout the body. As key effectors of lipid homeostasis, the functions of lipoproteins have been demonstrated to be crucial during the development of cardiovascular diseases. Therefore various strategies have been used to study their biology and detect them in vivo. A recent approach has been the production of lipoprotein biomimetic particles loaded with diagnostically active nanocrystals in their core. These include, but are not limited to: quantum dots, iron oxide or gold nanocrystals. Inclusion of these nanocrystals enables the utilization of lipoproteins as probes for a variety of imaging modalities (computed tomography, magnetic resonance imaging, fluorescence) while preserving their biological activity. Furthermore as some lipoproteins naturally accumulate in atherosclerotic plaque or specific tumor tissues, nanocrystal core lipoprotein biomimetics have been developed as contrast agents for early diagnosis of these diseases.

  5. Nanocrystal Core Lipoprotein Biomimetics for Imaging of Lipoproteins and Associated Diseases

    PubMed Central

    Fay, Francois; Sanchez-Gaytan, Brenda L.; Cormode, David P.; Skajaa, Torjus; Fisher, Edward A.; Fayad, Zahi A.

    2013-01-01

    Lipoproteins are natural nanoparticles composed of phospholipids and apolipoproteins that transport lipids throughout the body. As key effectors of lipid homeostasis, the functions of lipoproteins have been demonstrated to be crucial during the development of cardiovascular diseases. Therefore various strategies have been used to study their biology and detect them in vivo. A recent approach has been the production of lipoprotein biomimetic particles loaded with diagnostically active nanocrystals in their core. These include, but are not limited to: quantum dots, iron oxide or gold nanocrystals. Inclusion of these nanocrystals enables the utilization of lipoproteins as probes for a variety of imaging modalities (computed tomography, magnetic resonance imaging, fluorescence) while preserving their biological activity. Furthermore as some lipoproteins naturally accumulate in atherosclerotic plaque or specific tumor tissues, nanocrystal core lipoprotein biomimetics have been developed as contrast agents for early diagnosis of these diseases. PMID:23687557

  6. Generating unstructured nuclear reactor core meshes in parallel

    DOE PAGES

    Jain, Rajeev; Tautges, Timothy J.

    2014-10-24

    Recent advances in supercomputers and parallel solver techniques have enabled users to run large simulation problems using millions of processors. Techniques for multiphysics nuclear reactor core simulations are under active development in several countries. Most of these techniques require large unstructured meshes that can be hard to generate on standalone desktop computers because of high memory requirements, limited processing power, and other complexities. We have previously reported on a hierarchical lattice-based approach for generating reactor core meshes. Here, we describe efforts to exploit coarse-grained parallelism during the reactor assembly and reactor core mesh generation processes. We highlight several reactor core examples including a very high temperature reactor, a full-core model of the Korean MONJU reactor, a ¼ pressurized water reactor core, the fast reactor Experimental Breeder Reactor-II core with a XX09 assembly, and an advanced breeder test reactor core. The times required to generate large mesh models, along with speedups obtained from running these problems in parallel, are reported. A graphical user interface to the tools described here has also been developed.

  7. Dissociation of MgSiO3 in the cores of gas giants and terrestrial exoplanets.

    PubMed

    Umemoto, Koichiro; Wentzcovitch, Renata M; Allen, Philip B

    2006-02-17

    CaIrO3-type MgSiO3 is the planet-forming silicate stable at pressures and temperatures beyond those of Earth's core-mantle boundary. First-principles quasiharmonic free-energy computations show that this mineral should dissociate into CsCl-type MgO and cotunnite-type SiO2 at pressures and temperatures expected to occur in the cores of the gas giants and in terrestrial exoplanets. At approximately 10 megabars and approximately 10,000 kelvin, cotunnite-type SiO2 should have thermally activated electron carriers and thus electrical conductivity close to metallic values. Electrons will give a large contribution to thermal conductivity, and electronic damping will suppress radiative heat transport.

  8. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    NASA Astrophysics Data System (ADS)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.

  9. Performing an allreduce operation using shared memory

    DOEpatents

    Archer, Charles J [Rochester, MN; Dozsa, Gabor [Ardsley, NY; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation using shared memory that include: receiving, by at least one of a plurality of processing cores on a compute node, an instruction to perform an allreduce operation; establishing, by the core that received the instruction, a job status object for specifying a plurality of shared memory allreduce work units, the plurality of shared memory allreduce work units together performing the allreduce operation on the compute node; determining, by an available core on the compute node, a next shared memory allreduce work unit in the job status object; and performing, by that available core on the compute node, that next shared memory allreduce work unit.
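
    The work-unit scheme described above lends itself to a simple illustration. The sketch below is a minimal, hypothetical rendering of the idea (not the patented implementation): a reduction over a shared buffer is split into chunk-sized work units recorded in a shared job object, and whichever worker is available claims the next unit; names such as JobStatus and CHUNK are invented for the example.

    ```python
    import threading

    # Minimal sketch (not the patented implementation): an allreduce over a
    # shared in-memory buffer is broken into "work units" (chunks), and each
    # available worker thread claims the next unit from a shared job object.
    CHUNK = 1000

    class JobStatus:
        """Tracks which shared-memory work unit should be processed next."""
        def __init__(self, n_elems):
            self.next_offset = 0
            self.n_elems = n_elems
            self.lock = threading.Lock()
            self.partials = []          # per-work-unit partial sums

        def claim(self):
            """Return the half-open range of the next unclaimed work unit."""
            with self.lock:
                if self.next_offset >= self.n_elems:
                    return None
                lo = self.next_offset
                self.next_offset = min(lo + CHUNK, self.n_elems)
                return lo, self.next_offset

    def worker(job, data):
        while True:
            unit = job.claim()
            if unit is None:
                return
            lo, hi = unit
            s = sum(data[lo:hi])        # perform this work unit (a partial reduce)
            with job.lock:
                job.partials.append(s)

    data = list(range(100_000))         # stands in for the node's shared buffer
    job = JobStatus(len(data))
    threads = [threading.Thread(target=worker, args=(job, data)) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(sum(job.partials) == sum(data))   # combined result equals the full reduction
    ```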

  10. Performing an allreduce operation using shared memory

    DOEpatents

    Archer, Charles J; Dozsa, Gabor; Ratterman, Joseph D; Smith, Brian E

    2014-06-10

    Methods, apparatus, and products are disclosed for performing an allreduce operation using shared memory that include: receiving, by at least one of a plurality of processing cores on a compute node, an instruction to perform an allreduce operation; establishing, by the core that received the instruction, a job status object for specifying a plurality of shared memory allreduce work units, the plurality of shared memory allreduce work units together performing the allreduce operation on the compute node; determining, by an available core on the compute node, a next shared memory allreduce work unit in the job status object; and performing, by that available core on the compute node, that next shared memory allreduce work unit.

  11. [Three-dimensional computer aided design for individualized post-and-core restoration].

    PubMed

    Gu, Xiao-yu; Wang, Ya-ping; Wang, Yong; Lü, Pei-jun

    2009-10-01

    To develop a method of three-dimensional computer aided design (CAD) of post-and-core restorations. Two plaster casts with extracted natural teeth were used in this study. The extracted teeth were prepared and scanned using a tomography method to obtain three-dimensional digitalized models. According to the basic rules of post-and-core design, the posts, cores and cavity surfaces of the teeth were designed using the tools for processing point clouds, curves and surfaces in the forward engineering software of the Tanglong prosthodontic system. The three-dimensional figures of the final restorations were then corrected according to the configurations of anterior teeth, premolars and molars respectively. Computer aided design of 14 post-and-core restorations was completed, and good fit between the restorations and the three-dimensional digital models was obtained. Appropriate retention forms and enough space for the full crown restorations can be obtained through this method. The CAD of three-dimensional figures of post-and-core restorations can fulfill clinical requirements; therefore it can be used in the computer-aided manufacture (CAM) of post-and-core restorations.

  12. Seismic, side-scan survey, diving, and coring data analyzed by a Macintosh II™ computer and inexpensive software provide answers to a possible offshore extension of landslides at Palos Verdes Peninsula, California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dill, R.F.; Slosson, J.E.; McEachen, D.B.

    1990-05-01

    A Macintosh II™ computer and commercially available software were used to analyze and depict the topography, construct an isopach sediment thickness map, plot core positions, and locate the geology of an offshore area facing an active landslide on the southern side of Palos Verdes Peninsula, California. Profile data from side scan sonar, 3.5 kHz and Boomer subbottom high-resolution seismic, diving, echo sounder traverses, and cores - all controlled with a mini Ranger II navigation system - were placed in the MacGridzo™ and WingZ™ software programs. The computer-plotted data from seven sources were used to construct maps with overlays for evaluating the possibility of a shoreside landslide extending offshore. The poster session describes the offshore survey system and demonstrates the development of the computer data base, its placement into the MacGridzo™ gridding program, and the transfer of gridded navigational locations to the WingZ™ data base and graphics program. Data will be manipulated to show how sea-floor features are enhanced and how isopach data were used to interpret the possibility of landslide displacement and Holocene sea level rise. The software permits rapid assessment of data using computerized overlays and provides a simple, inexpensive means of constructing and evaluating information in map form and preparing final written reports. This system could be useful in many other areas where seismic profiles, precision navigational locations, soundings, diver observations, and cores provide a great volume of information that must be compared on regional plots to develop field maps for geological evaluation and reports.

  13. Biological computational approaches: new hopes to improve (re)programming robustness, regenerative medicine and cancer therapeutics.

    PubMed

    Ebrahimi, Behnam

    2016-01-01

    Hundreds of transcription factors (TFs) are expressed and work in each cell type, but the identity of the cells is defined and maintained through the activity of a small number of core TFs. Existing reprogramming strategies predominantly focus on the ectopic expression of core TFs of an intended fate in a given cell type, regardless of the state of native/somatic gene regulatory networks (GRNs) of the starting cells. An important open question is how closely the products of reprogramming, transdifferentiation and differentiation (programming) resemble their in vivo counterparts. There is evidence that direct fate conversions of somatic cells are not complete, with target cell identity not fully achieved. Manipulation of core TFs provides a powerful tool for engineering cell fate in terms of extinguishing native GRNs, establishing a new GRN, and preventing the installation of aberrant GRNs. Conventionally, core TFs are selected to convert one cell type into another mostly based on the literature and the experimental identification of genes that are differentially expressed in one cell type compared to other cell types. Currently, there is no universal standard strategy for identifying candidate core TFs. Remarkably, several biological computational platforms have been developed that are capable of evaluating the fidelity of reprogramming methods and refining existing protocols. The current review discusses some deficiencies of reprogramming technologies in the production of a pure population of authentic target cells. Furthermore, it reviews the role of computational approaches (e.g. CellNet, KeyGenes, Mogrify, etc.) in improving (re)programming methods and consequently in regenerative medicine and cancer therapeutics. Copyright © 2016 International Society of Differentiation. Published by Elsevier B.V. All rights reserved.

  14. Active non-volatile memory post-processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kannan, Sudarsun; Milojicic, Dejan S.; Talwar, Vanish

    A computing node includes an active Non-Volatile Random Access Memory (NVRAM) component which includes memory and a sub-processor component. The memory is to store data chunks received from a processor core, the data chunks comprising metadata indicating a type of post-processing to be performed on data within the data chunks. The sub-processor component is to perform post-processing of said data chunks based on said metadata.

  15. On efficiency of fire simulation realization: parallelization with greater number of computational meshes

    NASA Astrophysics Data System (ADS)

    Valasek, Lukas; Glasa, Jan

    2017-12-01

    Current fire simulation systems are capable of utilizing the advantages of available high-performance computing (HPC) platforms and of modeling fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC computer cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used for testing the efficiency of selected strategies for allocating the computational resources of the cluster using a greater number of computational cores. Simulation results indicate that if the number of cores used is not equal to a multiple of the total number of cores per cluster node, there are allocation strategies which provide more efficient calculations.

  16. Final Report: A Broad Research Project on the Sciences of Complexity, September 15, 1994 - November 15, 1999

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2000-02-01

    DOE support for a broad research program in the sciences of complexity permitted the Santa Fe Institute to initiate new collaborative research within its integrative core activities as well as to host visitors to participate in research on specific topics that serve as motivation and testing ground for the study of the general principles of complex systems. Results are presented on computational biology, biodiversity and ecosystem research, and advanced computing and simulation.

  17. Investigating the Naval Logistics Role in Humanitarian Assistance Activities

    DTIC Science & Technology

    2015-03-01

    transportation means. E. BASE CASE RESULTS: The computations were executed on a MacBook Pro, 3 GHz Intel Core i7-4578U processor with 8 GB. The … MacBook Pro was partitioned to also contain a Windows 7, 64-bit operating system. The computations were run in the Windows 7 operating system using the … it impacts the types of metamodels that can be developed as a result of data farming (Lucas et al., 2015). Using a metamodel, one can closely…

  18. Computational model for living nematic

    NASA Astrophysics Data System (ADS)

    Genkin, Mikhail; Sokolov, Andrey; Lavrentovich, Oleg; Aranson, Igor

    A realization of an active system has been conceived by combining swimming bacteria and a lyotropic nematic liquid crystal. Here, by coupling the well-established and validated model of nematic liquid crystals with bacterial dynamics, we developed a computational model describing the intricate properties of such a living nematic. In faithful agreement with experiment, the model reproduces the onset of periodic undulation of the nematic director and the consequent proliferation of topological defects with increasing bacterial concentration. It yields a testable prediction on the accumulation and transport of bacteria in the cores of +1/2 topological defects and the depletion of bacteria in the cores of -1/2 defects. Our new experiment on motile bacteria suspended in a free-standing liquid crystalline film fully confirmed this prediction. This effect can be used to capture and manipulate small amounts of bacteria.

  19. FDTD computation of temperature elevation in the elderly for far-field RF exposures.

    PubMed

    Nomura, Tomoki; Laakso, Ilkka; Hirata, Akimasa

    2014-03-01

    Core temperature elevation and perspiration in younger and older adults are investigated for plane-wave exposure at a whole-body averaged specific absorption rate of 0.4 W kg(-1). A numeric Japanese male model is considered together with a thermoregulatory response formula proposed in the authors' previous study. The frequencies considered were 65 MHz and 2 GHz, where the total power absorption in humans becomes maximal for the allowable power density prescribed in the international guidelines. According to the computational results, the core temperature elevation in the older adult model was larger than that in the younger one at both frequencies. The reason for this difference is attributable to the difference in sweating, which originates from the difference in the threshold activating the sweating and the decline in sweating in the legs.

  20. Investigations into Gravitational Wave Emission from Compact Body Inspiral Into Massive Black Holes

    NASA Technical Reports Server (NTRS)

    Hughes, Scott A.

    2004-01-01

    Much of the grant's support (and associated time) was used in developmental activity, building infrastructure for the core of the work that the grant supports. Though infrastructure development was the bulk of the activity supported this year, important progress was made in research as well. The two most important "infrastructure" items were in computing hardware and personnel. Research activities were primarily focused on improving and extending Hughes' Teukolsky-equation-based gravitational-wave generator. Several improvements have been incorporated into this generator.

  1. Testing the Test: A Study of PARCC Field Trials in Two School Districts. Policy Brief

    ERIC Educational Resources Information Center

    Rennie Center for Education Research & Policy, 2015

    2015-01-01

    The potential use of computer-based assessments has raised concerns from educators, policymakers, and parents about information technology infrastructure in school districts, the preparation of staff and students to use new technologies for assessment purposes, and the potential impact of testing activities on core school functions,…

  2. Teachers' Invisible Presence in Net-Based Distance Education

    ERIC Educational Resources Information Center

    Hult, Agneta; Dahlgren, Ethel; Hamilton, David; Soderstrom, Tor

    2005-01-01

    Conferencing--or dialogue--has always been a core activity in liberal adult education. More recently, attempts have been made to transfer such conversations online in the form of computer-mediated conferencing. This transfer has raised a range of pedagogical questions, most notably Can established practices be continued? Or must new forms of…

  3. ACFIS: a web server for fragment-based drug discovery

    PubMed Central

    Hao, Ge-Fei; Jiang, Wen; Ye, Yuan-Nong; Wu, Feng-Xu; Zhu, Xiao-Lei; Guo, Feng-Biao; Yang, Guang-Fu

    2016-01-01

    In order to foster innovation and improve the effectiveness of drug discovery, there is a considerable interest in exploring unknown ‘chemical space’ to identify new bioactive compounds with novel and diverse scaffolds. Hence, fragment-based drug discovery (FBDD) was developed rapidly due to its advanced expansive search for ‘chemical space’, which can lead to a higher hit rate and ligand efficiency (LE). However, computational screening of fragments is always hampered by the promiscuous binding model. In this study, we developed a new web server Auto Core Fragment in silico Screening (ACFIS). It includes three computational modules, PARA_GEN, CORE_GEN and CAND_GEN. ACFIS can generate core fragment structure from the active molecule using fragment deconstruction analysis and perform in silico screening by growing fragments to the junction of core fragment structure. An integrated energy calculation rapidly identifies which fragments fit the binding site of a protein. We constructed a simple interface to enable users to view top-ranking molecules in 2D and the binding mode in 3D for further experimental exploration. This makes the ACFIS a highly valuable tool for drug discovery. The ACFIS web server is free and open to all users at http://chemyang.ccnu.edu.cn/ccb/server/ACFIS/. PMID:27150808
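
    As a rough illustration of the fragment-deconstruction step that CORE_GEN performs, the hypothetical sketch below uses RDKit's BRICS rules as a stand-in for breaking an active molecule into candidate core fragments; it is not ACFIS code, and the example molecule is arbitrary.

    ```python
    # Hypothetical illustration only: this is not ACFIS's CORE_GEN module.
    # RDKit's BRICS rules are used here as a stand-in for fragment
    # deconstruction of an active molecule into candidate core fragments.
    from rdkit import Chem
    from rdkit.Chem import BRICS

    mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")   # aspirin, as an example ligand
    fragments = sorted(BRICS.BRICSDecompose(mol))        # candidate fragments as SMILES
    for smi in fragments:
        print(smi)
    ```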

  4. ACFIS: a web server for fragment-based drug discovery.

    PubMed

    Hao, Ge-Fei; Jiang, Wen; Ye, Yuan-Nong; Wu, Feng-Xu; Zhu, Xiao-Lei; Guo, Feng-Biao; Yang, Guang-Fu

    2016-07-08

    In order to foster innovation and improve the effectiveness of drug discovery, there is a considerable interest in exploring unknown 'chemical space' to identify new bioactive compounds with novel and diverse scaffolds. Hence, fragment-based drug discovery (FBDD) was developed rapidly due to its advanced expansive search for 'chemical space', which can lead to a higher hit rate and ligand efficiency (LE). However, computational screening of fragments is always hampered by the promiscuous binding model. In this study, we developed a new web server Auto Core Fragment in silico Screening (ACFIS). It includes three computational modules, PARA_GEN, CORE_GEN and CAND_GEN. ACFIS can generate core fragment structure from the active molecule using fragment deconstruction analysis and perform in silico screening by growing fragments to the junction of core fragment structure. An integrated energy calculation rapidly identifies which fragments fit the binding site of a protein. We constructed a simple interface to enable users to view top-ranking molecules in 2D and the binding mode in 3D for further experimental exploration. This makes the ACFIS a highly valuable tool for drug discovery. The ACFIS web server is free and open to all users at http://chemyang.ccnu.edu.cn/ccb/server/ACFIS/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. Computational Thinking Concepts for Grade School

    ERIC Educational Resources Information Center

    Sanford, John F.; Naidu, Jaideep T.

    2016-01-01

    Early education has classically introduced reading, writing, and mathematics. Recent literature discusses the importance of adding "computational thinking" as a core ability that every child must learn. The goal is to develop students by making them equally comfortable with computational thinking as they are with other core areas of…

  6. Featured Image: The Simulated Collapse of a Core

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-11-01

    This stunning snapshot (click for a closer look!) is from a simulation of a core-collapse supernova. Despite having been studied for many decades, the mechanism driving the explosions of core-collapse supernovae is still an area of active research. Extremely complex simulations such as this one represent best efforts to include as many realistic physical processes as is currently computationally feasible. In this study led by Luke Roberts (a NASA Einstein Postdoctoral Fellow at Caltech at the time), a core-collapse supernova is modeled long-term in fully 3D simulations that include the effects of general relativity, radiation hydrodynamics, and even neutrino physics. The authors use these simulations to examine the evolution of a supernova after its core bounce. To read more about the team's findings (and see more awesome images from their simulations), check out the paper below! Citation: Luke F. Roberts et al 2016 ApJ 831 98. doi:10.3847/0004-637X/831/1/98

  7. Neutronics calculation of RTP core

    NASA Astrophysics Data System (ADS)

    Rabir, Mohamad Hairie B.; Zin, Muhammad Rawi B. Mohamed; Karim, Julia Bt. Abdul; Bayar, Abi Muttaqin B. Jalal; Usang, Mark Dennis Anak; Mustafa, Muhammad Khairul Ariff B.; Hamzah, Na'im Syauqi B.; Said, Norfarizan Bt. Mohd; Jalil, Muhammad Husamuddin B.

    2017-01-01

    Reactor calculation and simulation are significantly important to ensure safety and better utilization of a research reactor. The Malaysian PUSPATI TRIGA Reactor (RTP) achieved initial criticality on June 28, 1982. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes. Since the early 90s, neutronics modelling has been used as part of its routine in-core fuel management activities. Several computer codes have been used in RTP since then, based on 1D neutron diffusion, 2D neutron diffusion and 3D Monte Carlo neutron transport methods. This paper describes current progress and gives an overview of neutronics modelling development in RTP. Several important parameters were analysed, such as keff, reactivity, neutron flux, power distribution and fission product build-up for the latest core configuration. The developed core neutronics model was validated by means of comparison with experimental and measurement data. Along with the RTP core model, the calculation procedure was also developed to establish better prediction capability of RTP's behaviour.

  8. High Level Analysis, Design and Validation of Distributed Mobile Systems with CoreASM

    NASA Astrophysics Data System (ADS)

    Farahbod, R.; Glässer, U.; Jackson, P. J.; Vajihollahi, M.

    System design is a creative activity calling for abstract models that facilitate reasoning about the key system attributes (desired requirements and resulting properties) so as to ensure these attributes are properly established prior to actually building a system. We explore here the practical side of using the abstract state machine (ASM) formalism in combination with the CoreASM open source tool environment for high-level design and experimental validation of complex distributed systems. Emphasizing the early phases of the design process, a guiding principle is to support freedom of experimentation by minimizing the need for encoding. CoreASM has been developed and tested building on a broad scope of applications, spanning computational criminology, maritime surveillance and situation analysis. We critically reexamine here the CoreASM project in light of three different application scenarios.

  9. Sequoia Messaging Rate Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedley, Andrew

    2008-01-22

    The purpose of this benchmark is to measure the maximal message rate of a single compute node. The first num_cores ranks are expected to reside on the 'core' compute node for which message rate is being tested. After that, the next num_nbors ranks are neighbors for the first core rank, the next set of num_nbors ranks are neighbors for the second core rank, and so on. For example, testing an 8-core node (num_cores = 8) with 4 neighbors (num_nbors = 4) requires 8 + 8 * 4 = 40 ranks. The first 8 of those 40 ranks are expected to be on the 'core' node being benchmarked, while the rest of the ranks are on separate nodes.
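
    The rank layout described above can be written down directly; the sketch below reproduces the arithmetic (the helper name is illustrative, not part of the benchmark).

    ```python
    # Sketch of the rank layout described above (the helper name is illustrative).
    def rank_layout(num_cores, num_nbors):
        total = num_cores + num_cores * num_nbors
        core_ranks = list(range(num_cores))            # ranks residing on the 'core' node
        neighbors = {
            c: list(range(num_cores + c * num_nbors,
                          num_cores + (c + 1) * num_nbors))
            for c in core_ranks                         # each core rank's partner ranks
        }
        return total, core_ranks, neighbors

    total, cores, nbors = rank_layout(num_cores=8, num_nbors=4)
    print(total)        # 40 ranks in total
    print(nbors[0])     # ranks 8..11 are neighbors of core rank 0
    print(nbors[1])     # ranks 12..15 are neighbors of core rank 1
    ```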

  10. Active-learning strategies in computer-assisted drug discovery.

    PubMed

    Reker, Daniel; Schneider, Gisbert

    2015-04-01

    High-throughput compound screening is time and resource consuming, and considerable effort is invested into screening compound libraries, profiling, and selecting the most promising candidates for further testing. Active-learning methods assist the selection process by focusing on areas of chemical space that have the greatest chance of success while considering structural novelty. The core feature of these algorithms is their ability to adapt the structure-activity landscapes through feedback. Instead of full-deck screening, only focused subsets of compounds are tested, and the experimental readout is used to refine molecule selection for subsequent screening cycles. Once implemented, these techniques have the potential to reduce costs and save precious materials. Here, we provide a comprehensive overview of the various computational active-learning approaches and outline their potential for drug discovery. Copyright © 2014 Elsevier Ltd. All rights reserved.
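
    A minimal sketch of one such selection cycle is given below, using a random-forest model whose per-tree spread serves as the uncertainty estimate; the descriptors, data and acquisition rule are illustrative assumptions rather than the methods of any particular study surveyed in the review.

    ```python
    # Minimal sketch of one active-learning selection cycle (illustrative only;
    # the descriptors, data and acquisition rule are assumptions).
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X_labeled = rng.normal(size=(50, 16))      # descriptors of already-tested compounds
    y_labeled = rng.normal(size=50)            # their measured activities
    X_pool = rng.normal(size=(1000, 16))       # untested screening library

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_labeled, y_labeled)

    # Uncertainty = spread of the individual trees' predictions for each pool compound.
    per_tree = np.stack([t.predict(X_pool) for t in model.estimators_])
    uncertainty = per_tree.std(axis=0)

    # Select the most uncertain compounds for the next experimental batch; their
    # measured activities would then be fed back to refine the model.
    batch = np.argsort(uncertainty)[-10:]
    print("next compounds to test:", batch)
    ```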

  11. THE ROLE OF THE MAGNETOROTATIONAL INSTABILITY IN MASSIVE STARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wheeler, J. Craig; Kagan, Daniel; Chatzopoulos, Emmanouil, E-mail: wheel@astro.as.utexas.edu

    2015-01-20

    The magnetorotational instability (MRI) is key to physics in accretion disks and is widely considered to play some role in massive star core collapse. Models of rotating massive stars naturally develop very strong shear at composition boundaries, a necessary condition for MRI instability, and the MRI is subject to triply diffusive destabilizing effects in radiative regions. We have used the MESA stellar evolution code to compute magnetic effects due to the Spruit-Tayler (ST) mechanism and the MRI, separately and together, in a sample of massive star models. We find that the MRI can be active in the later stages of massive star evolution, leading to mixing effects that are not captured in models that neglect the MRI. The MRI and related magnetorotational effects can move models of given zero-age main sequence mass across "boundaries" from degenerate CO cores to degenerate O/Ne/Mg cores and from degenerate O/Ne/Mg cores to iron cores, thus affecting the final evolution and the physics of core collapse. The MRI acting alone can slow the rotation of the inner core in general agreement with the observed "initial" rotation rates of pulsars. The MRI analysis suggests that localized fields ∼10^12 G may exist at the boundary of the iron core. With both the ST and MRI mechanisms active in the 20 M☉ model, we find that the helium shell mixes entirely out into the envelope. Enhanced mixing could yield a population of yellow or even blue supergiant supernova progenitors that would not be standard SN IIP.

  12. Asymmetric Core Computing for U.S. Army High-Performance Computing Applications

    DTIC Science & Technology

    2009-04-01

    Playstation 4 (should one be announced). 4.2 FPGAs: Reconfigurable computing refers to performing computations using Field Programmable Gate Arrays (FPGAs)… Contents include: Introduction; Relevant Technologies; Technical Approach; Research and Development Highlights (Cell; FPGAs).

  13. Multiphysics Computational Analysis of a Solid-Core Nuclear Thermal Engine Thrust Chamber

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Canabal, Francisco; Cheng, Gary; Chen, Yen-Sen

    2007-01-01

    The objective of this effort is to develop an efficient and accurate computational heat transfer methodology to predict thermal, fluid, and hydrogen environments for a hypothetical solid-core, nuclear thermal engine - the Small Engine. In addition, the effects of power profile and hydrogen conversion on heat transfer efficiency and thrust performance were also investigated. The computational methodology is based on an unstructured-grid, pressure-based, all speeds, chemically reacting, computational fluid dynamics platform, while formulations of conjugate heat transfer were implemented to describe the heat transfer from solid to hydrogen inside the solid-core reactor. The computational domain covers the entire thrust chamber so that the afore-mentioned heat transfer effects impact the thrust performance directly. The result shows that the computed core-exit gas temperature, specific impulse, and core pressure drop agree well with those of design data for the Small Engine. Finite-rate chemistry is very important in predicting the proper energy balance as naturally occurring hydrogen decomposition is endothermic. Locally strong hydrogen conversion associated with centralized power profile gives poor heat transfer efficiency and lower thrust performance. On the other hand, uniform hydrogen conversion associated with a more uniform radial power profile achieves higher heat transfer efficiency, and higher thrust performance.

  14. A computer method for schedule processing and quick-time updating.

    NASA Technical Reports Server (NTRS)

    Mccoy, W. H.

    1972-01-01

    A schedule analysis program is presented which can be used to process any schedule with continuous flow and with no loops. Although generally thought of as a management tool, it has applicability to such extremes as music composition and computer program efficiency analysis. Other possibilities for its use include the determination of electrical power usage during some operation such as spacecraft checkout, and the determination of impact envelopes for the purpose of scheduling payloads in launch processing. At the core of the described computer method is an algorithm which computes the position of each activity bar on the output waterfall chart. The algorithm is basically a maximal-path computation which gives to each node in the schedule network the maximal path from the initial node to the given node.
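
    The maximal-path computation mentioned above is, for a loop-free schedule network, a longest-path calculation over a directed acyclic graph; the sketch below illustrates it on a toy network (the graph and durations are invented for the example).

    ```python
    # Minimal sketch of the maximal-path (critical-path) computation described
    # above: for a loop-free schedule network, the start position of each
    # activity is the longest path from the initial node. The graph and
    # durations are illustrative.
    from collections import defaultdict

    edges = {            # node -> list of (successor, activity duration)
        "start": [("a", 3), ("b", 2)],
        "a": [("c", 4)],
        "b": [("c", 1)],
        "c": [],
    }

    def longest_paths(edges, source="start"):
        # topological order via depth-first search (valid because there are no loops)
        order, seen = [], set()
        def visit(u):
            if u in seen:
                return
            seen.add(u)
            for v, _ in edges[u]:
                visit(v)
            order.append(u)
        visit(source)

        dist = defaultdict(lambda: float("-inf"))
        dist[source] = 0
        for u in reversed(order):                 # process nodes in topological order
            for v, w in edges[u]:
                dist[v] = max(dist[v], dist[u] + w)
        return dict(dist)

    print(longest_paths(edges))   # {'start': 0, 'a': 3, 'b': 2, 'c': 7}
    ```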

  15. VERAIn

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, Srdjan

    2015-02-16

    CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ) and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete a LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts VERA Input into an XML file that is used as input to different VERA codes.

  16. Utilizing Human Patient Simulators (HPS) to Meet Learning Objectives across Concurrent Core Nursing Courses: A Pilot Study

    ERIC Educational Resources Information Center

    Miller, Charman L.; Leadingham, Camille; Vance, Ronald

    2010-01-01

    Associate Degree Nursing (ADN) faculty are challenged by the monumental responsibility of preparing students to function as safe, professional nurses in a two year course of study. Advances in computer technology and emphasis on integrating technology and active learning strategies into existing course structures have prompted many nurse educators…

  17. The Effects of Different Computer-Supported Collaboration Scripts on Students' Learning Processes and Outcome in a Simulation-Based Collaborative Learning Environment

    ERIC Educational Resources Information Center

    Wieland, Kristina

    2010-01-01

    Students benefit from collaborative learning activities, but they do not automatically reach desired learning outcomes when working together (Fischer, Kollar, Mandl, & Haake, 2007; King, 2007). Learners need instructional support to increase the quality of collaborative processes and individual learning outcomes. The core challenge is to find…

  18. Competition and cooperation among similar representations: toward a unified account of facilitative and inhibitory effects of lexical neighbors.

    PubMed

    Chen, Qi; Mirman, Daniel

    2012-04-01

    One of the core principles of how the mind works is the graded, parallel activation of multiple related or similar representations. Parallel activation of multiple representations has been particularly important in the development of theories and models of language processing, where coactivated representations (neighbors) have been shown to exhibit both facilitative and inhibitory effects on word recognition and production. Researchers generally ascribe these effects to interactive activation and competition, but there is no unified explanation for why the effects are facilitative in some cases and inhibitory in others. We present a series of simulations of a simple domain-general interactive activation and competition model that is broadly consistent with more specialized domain-specific models of lexical processing. The results showed that interactive activation and competition can indeed account for the complex pattern of reversals. Critically, the simulations revealed a core computational principle that determines whether neighbor effects are facilitative or inhibitory: strongly active neighbors exert a net inhibitory effect, and weakly active neighbors exert a net facilitative effect.
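
    The sign-flipping principle can be illustrated with a toy calculation in which a neighbor sends the target both saturating excitation (via shared sub-units) and activation-proportional lateral inhibition; the functional forms and weights below are assumptions chosen for illustration, not the parameters of the reported simulations.

    ```python
    import math

    # Toy illustration (not the authors' simulations): a neighbor word feeds the
    # target both excitation (indirectly, through shared feature units, modeled
    # here as a saturating sqrt pathway) and lateral inhibition (modeled as
    # linear in its activation). All weights and functional forms are assumptions.
    W_EXCITE = 0.5   # gain of the shared-feature (facilitative) pathway
    W_INHIBIT = 1.0  # gain of the lateral (competitive) pathway

    def net_effect_on_target(neighbor_activation):
        excitation = W_EXCITE * math.sqrt(neighbor_activation)
        inhibition = W_INHIBIT * neighbor_activation
        return excitation - inhibition

    for a in (0.05, 0.25, 0.9):
        print(f"neighbor activation {a:.2f} -> net effect {net_effect_on_target(a):+.3f}")
    # Weakly active neighbors yield a net positive (facilitative) effect,
    # strongly active neighbors a net negative (inhibitory) effect.
    ```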

  19. Test Anxiety, Computer-Adaptive Testing and the Common Core

    ERIC Educational Resources Information Center

    Colwell, Nicole Makas

    2013-01-01

    This paper highlights the current findings and issues regarding the role of computer-adaptive testing in test anxiety. The computer-adaptive test (CAT) proposed by one of the Common Core consortia brings these issues to the forefront. Research has long indicated that test anxiety impairs student performance. More recent research indicates that…

  20. Using E-mail in a Math/Computer Core Course.

    ERIC Educational Resources Information Center

    Gurwitz, Chaya

    This paper notes the advantages of using e-mail in computer literacy classes, and discusses the results of incorporating an e-mail assignment in the "Introduction to Mathematical Reasoning and Computer Programming" core course at Brooklyn College (New York). The assignment consisted of several steps. The students first read and responded…

  1. Luminescence and efficiency optimization of InGaN/GaN core-shell nanowire LEDs by numerical modelling

    NASA Astrophysics Data System (ADS)

    Römer, Friedhard; Deppner, Marcus; Andreev, Zhelio; Kölper, Christopher; Sabathil, Matthias; Strassburg, Martin; Ledig, Johannes; Li, Shunfeng; Waag, Andreas; Witzigmann, Bernd

    2012-02-01

    We present a computational study on the anisotropic luminescence and the efficiency of a core-shell type nanowire LED based on GaN with InGaN active quantum wells. The physical simulator used for analyzing this device integrates a multidimensional drift-diffusion transport solver and a k·p Schrödinger problem solver for quantization effects and luminescence. The solution of both problems is coupled to achieve self-consistency. Using this solver we investigate the effect of dimensions, design of quantum wells, and current injection on the efficiency and luminescence of the core-shell nanowire LED. The anisotropy of the luminescence and re-absorption is analyzed with respect to the external efficiency of the LED. From the results we derive strategies for design optimization.

  2. Comparisons for ESTA-Task3: ASTEC, CESAM and CLÉS

    NASA Astrophysics Data System (ADS)

    Christensen-Dalsgaard, J.

    The ESTA activity under the CoRoT project aims at testing the tools for computing stellar models and oscillation frequencies that will be used in the analysis of asteroseismic data from CoRoT and other large-scale upcoming asteroseismic projects. Here I report results of comparisons between calculations using the Aarhus code (ASTEC) and two other codes, for models that include diffusion and settling. It is found that there are likely deficiencies, requiring further study, in the ASTEC computation of models including convective cores.

  3. Computer-Assisted Structure Elucidation of Black Chokeberry (Aronia melanocarpa) Fruit Juice Isolates with a New Fused Pentacyclic Flavonoid Skeleton.

    PubMed

    Naman, C Benjamin; Li, Jie; Moser, Arvin; Hendrycks, Jeffery M; Benatrehina, P Annécie; Chai, Heebyung; Yuan, Chunhua; Keller, William J; Kinghorn, A Douglas

    2015-06-19

    Melanodiol 4″-O-protocatechuate (1) and melanodiol (2) represent novel flavonoid derivatives isolated from a botanical dietary supplement ingredient, dried black chokeberry (Aronia melanocarpa) fruit juice. These noncrystalline compounds possess an unprecedented fused pentacyclic core with two contiguous hemiketals. Due to having significant hydrogen deficiency indices, their structures were determined using computer-assisted structure elucidation software. The in vitro hydroxyl radical-scavenging and quinone reductase-inducing activity of each compound are reported, and a plausible biogenetic scheme is proposed.

  4. Computer-Assisted Structure Elucidation of Black Chokeberry (Aronia melanocarpa) Fruit Juice Isolates with a New Fused Pentacyclic Flavonoid Skeleton

    PubMed Central

    Naman, C. Benjamin; Li, Jie; Moser, Arvin; Hendrycks, Jeffery M.; Benatrehina, P. Annécie; Chai, Heebyung; Yuan, Chunhua; Keller, William J.; Kinghorn, A. Douglas

    2015-01-01

    Melanodiol 4″-O-protocatechuate (1) and melanodiol (2) represent novel flavonoid derivatives isolated from a botanical dietary supplement ingredient, dried black chokeberry (Aronia melanocarpa) fruit juice. These non-crystalline compounds possess an unprecedented fused pentacyclic core with two contiguous hemiketals. Due to having significant hydrogen deficiency indices, their structures were determined using computer-assisted structure elucidation software. The in vitro hydroxyl radical-scavenging and quinone reductase-inducing activity of each compound are reported, and a plausible biogenetic scheme is proposed PMID:26030740

  5. Computer-assisted design of flux-cored wires

    NASA Astrophysics Data System (ADS)

    Dubtsov, Yu N.; Zorin, I. V.; Sokolov, G. N.; Antonov, A. A.; Artem'ev, A. A.; Lysak, V. I.

    2017-02-01

    The algorithm and description of the AlMe-WireLaB software for the computer-assisted design of flux-cored wires are introduced. The software functionality is illustrated with the selection of components for a flux-cored wire that yields deposited metal of the Fe-Cr-C-Mo-Ni-Ti-B system. It is demonstrated that the developed software enables technologically reliable flux-cored wires to be designed for surfacing, producing deposited metal of the specified composition.

  6. Multiple core computer processor with globally-accessible local memories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shalf, John; Donofrio, David; Oliker, Leonid

    A multi-core computer processor including a plurality of processor cores interconnected in a Network-on-Chip (NoC) architecture, a plurality of caches, each of the plurality of caches being associated with one and only one of the plurality of processor cores, and a plurality of memories, each of the plurality of memories being associated with a different set of at least one of the plurality of processor cores and each of the plurality of memories being configured to be visible in a global memory address space such that the plurality of memories are visible to two or more of the plurality of processor cores.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huml, O.

    The objective of this work was to determine the neutron flux density distribution in various places of the training reactor VR-1 Sparrow. This experiment was performed on the new core design C1, composed of the new low-enriched uranium fuel cells IRT-4M (19.7 %). This fuel replaced the old high-enriched uranium fuel IRT-3M (36 %) within the framework of the RERTR Program in September 2005. The measurement used the neutron activation analysis method with gold wires. The principle of this method consists in neutron capture in a nucleus of the material forming the activation detector. This capture can change the nucleus into a radioisotope, whose activity can be measured. The absorption cross-section values were evaluated by the MCNP computer code. The gold wires were irradiated in seven different positions in the core C1. All irradiations were performed at reactor power level 1E8 (1 kW thermal). The activity of segments of irradiated wires was measured by a special automatic device called 'Drat' ('Wire' in English). (author)

  8. Substrate tunnels in enzymes: structure-function relationships and computational methodology.

    PubMed

    Kingsley, Laura J; Lill, Markus A

    2015-04-01

    In enzymes, the active site is the location where incoming substrates are chemically converted to products. In some enzymes, this site is deeply buried within the core of the protein, and, in order to access the active site, substrates must pass through the body of the protein via a tunnel. In many systems, these tunnels act as filters and have been found to influence both substrate specificity and catalytic mechanism. Identifying and understanding how these tunnels exert such control has been of growing interest over the past several years because of implications in fields such as protein engineering and drug design. This growing interest has spurred the development of several computational methods to identify and analyze tunnels and how ligands migrate through these tunnels. The goal of this review is to outline how tunnels influence substrate specificity and catalytic efficiency in enzymes with buried active sites and to provide a brief summary of the computational tools used to identify and evaluate these tunnels. © 2015 Wiley Periodicals, Inc.

  9. Computational multicore on two-layer 1D shallow water equations for erodible dambreak

    NASA Astrophysics Data System (ADS)

    Simanjuntak, C. A.; Bagustara, B. A. R. H.; Gunawan, P. H.

    2018-03-01

    The simulation of erodible dambreak using two-layer shallow water equations and the SCHR scheme is elaborated in this paper. The results show that the two-layer SWE model is in good agreement with the experimental data obtained by the Université Catholique de Louvain in Louvain-la-Neuve. Moreover, results for the parallel algorithm on multicore architectures are given. They show that Computer I, with an Intel(R) Core(TM) i5-2500 CPU Quad-Core processor, has the best performance in accelerating the computational time, while Computer III, with an AMD A6-5200 APU Quad-Core processor, is observed to have higher speedup and efficiency. The speedup and efficiency of Computer III with 3200 grid points are 3.716050530 times and 92.9%, respectively.
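
    For reference, the quoted figures follow from the usual definitions, speedup = T_serial / T_parallel and efficiency = speedup / number of cores, as the short check below shows (the timing values themselves are not needed, only their ratio).

    ```python
    # The quoted figures follow from the standard definitions:
    #   speedup    = T_serial / T_parallel
    #   efficiency = speedup / number_of_cores
    def efficiency(speedup, n_cores):
        return speedup / n_cores

    reported_speedup = 3.716050530        # Computer III, 3200 grid points
    print(f"efficiency = {efficiency(reported_speedup, 4):.1%}")   # ~92.9%
    ```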

  10. VORCOR: A computer program for calculating characteristics of wings with edge vortex separation by using a vortex-filament and-core model

    NASA Technical Reports Server (NTRS)

    Pao, J. L.; Mehrotra, S. C.; Lan, C. E.

    1982-01-01

    A computer code based on an improved vortex filament/vortex core method for predicting the aerodynamic characteristics of slender wings with edge vortex separation is developed. The code is applicable to cambered wings, straked wings, or wings with leading edge vortex flaps at subsonic speeds. The prediction of the lifting pressure distribution and the computer time are improved by using a pair of concentrated vortex cores above the wing surface. The main features of this computer program are: (1) arbitrary camber shape may be defined and an option for exactly defining leading edge flap geometry is also provided; (2) the side edge vortex system is incorporated.

  11. Acoustic Source Localization via Time Difference of Arrival Estimation for Distributed Sensor Networks Using Tera-Scale Optical Core Devices

    DOE PAGES

    Imam, Neena; Barhen, Jacob

    2009-01-01

    For real-time acoustic source localization applications, one of the primary challenges is the considerable growth in computational complexity associated with the emergence of ever larger, active or passive, distributed sensor networks. These sensors rely heavily on battery-operated system components to achieve highly functional automation in signal and information processing. In order to keep communication requirements minimal, it is desirable to perform as much processing on the receiver platforms as possible. However, the complexity of the calculations needed to achieve accurate source localization increases dramatically with the size of sensor arrays, resulting in substantial growth of computational requirements that cannot be readily met with standard hardware. One option to meet this challenge builds upon the emergence of digital optical-core devices. The objective of this work was to explore the implementation of key building block algorithms used in underwater source localization on the optical-core digital processing platform recently introduced by Lenslet Inc. This demonstration of considerably faster signal processing capability should be of substantial significance to the design and innovation of future generations of distributed sensor networks.

  12. Case for a field-programmable gate array multicore hybrid machine for an image-processing application

    NASA Astrophysics Data System (ADS)

    Rakvic, Ryan N.; Ives, Robert W.; Lira, Javier; Molina, Carlos

    2011-01-01

    General purpose computer designers have recently begun adding cores to their processors in order to increase performance. For example, Intel has adopted a homogeneous quad-core processor as a base for general purpose computing. PlayStation3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms at a high level. Can modern image-processing algorithms utilize these additional cores? On the other hand, modern advancements in configurable hardware, most notably field-programmable gate arrays (FPGAs) have created an interesting question for general purpose computer designers. Is there a reason to combine FPGAs with multicore processors to create an FPGA multicore hybrid general purpose computer? Iris matching, a repeatedly executed portion of a modern iris-recognition algorithm, is parallelized on an Intel-based homogeneous multicore Xeon system, a heterogeneous multicore Cell system, and an FPGA multicore hybrid system. Surprisingly, the cheaper PS3 slightly outperforms the Intel-based multicore on a core-for-core basis. However, both multicore systems are beaten by the FPGA multicore hybrid system by >50%.

  13. The Shock and Vibration Digest. Volume 14, Number 11

    DTIC Science & Technology

    1982-11-01

    A computer program for analyzing a high-temperature gas-cooled reactor (HTGR) core under seismic excitation has been developed (N82-18644, in French). The computer program can be used to predict the behavior of the HTGR core under seismic excitation. Key Words: Computer programs, Modal analysis, Beams, Undamped structures. A computation method is… Dale and Cohen [22] extended the method of McMunn and Plunkett [20] to continuous systems.

  14. Confirmation of a realistic reactor model for BNCT dosimetry at the TRIGA Mainz

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ziegner, Markus, E-mail: Markus.Ziegner.fl@ait.ac.at; Schmitz, Tobias; Hampel, Gabriele

    2014-11-01

    Purpose: In order to build up a reliable dose monitoring system for boron neutron capture therapy (BNCT) applications at the TRIGA reactor in Mainz, a computer model for the entire reactor was established, simulating the radiation field by means of the Monte Carlo method. The impact of different source definition techniques was compared and the model was validated by experimental fluence and dose determinations. Methods: The depletion calculation code ORIGEN2 was used to compute the burn-up and relevant material composition of each burned fuel element from the day of first reactor operation to its current core. The material composition of the current core was used in a MCNP5 model of the initial core developed earlier. To perform calculations for the region outside the reactor core, the model was expanded to include the thermal column and compared with the previously established ATTILA model. Subsequently, the computational model is simplified in order to reduce the calculation time. Both simulation models are validated by experiments with different setups using alanine dosimetry and gold activation measurements with two different types of phantoms. Results: The MCNP5 simulated neutron spectrum and source strength are found to be in good agreement with the previous ATTILA model whereas the photon production is much lower. Both MCNP5 simulation models predict all experimental dose values with an accuracy of about 5%. The simulations reveal that a Teflon environment favorably reduces the gamma dose component as compared to a polymethyl methacrylate phantom. Conclusions: A computer model for BNCT dosimetry was established, allowing the prediction of dosimetric quantities without further calibration and within a reasonable computation time for clinical applications. The good agreement between the MCNP5 simulations and experiments demonstrates that the ATTILA model overestimates the gamma dose contribution. The detailed model can be used for the planning of structural modifications in the thermal column irradiation channel or the use of different irradiation sites than the thermal column, e.g., the beam tubes.

  15. The NTeQ ISD Model: A Tech-Driven Model for Digital Natives (DNs)

    ERIC Educational Resources Information Center

    Williams, C.; Anekwe, J. U.

    2017-01-01

    The Integrating Technology for Enquiry (NTeQ) instructional development (ISD) model is believed to be a technology-driven model. The authors x-rayed the ten-step model to reaffirm the ICT knowledge demands on the learner and the educator; hence computer-based activities at various stages of the model are core elements. The model is also conscious of…

  16. Building an organic computing device with multiple interconnected brains

    PubMed Central

    Pais-Vieira, Miguel; Chiuffa, Gabriela; Lebedev, Mikhail; Yadav, Amol; Nicolelis, Miguel A. L.

    2015-01-01

    Recently, we proposed that Brainets, i.e. networks formed by multiple animal brains, cooperating and exchanging information in real time through direct brain-to-brain interfaces, could provide the core of a new type of computing device: an organic computer. Here, we describe the first experimental demonstration of such a Brainet, built by interconnecting four adult rat brains. Brainets worked by concurrently recording the extracellular electrical activity generated by populations of cortical neurons distributed across multiple rats chronically implanted with multi-electrode arrays. Cortical neuronal activity was recorded and analyzed in real time, and then delivered to the somatosensory cortices of other animals that participated in the Brainet using intracortical microstimulation (ICMS). Using this approach, different Brainet architectures solved a number of useful computational problems, such as discrete classification, image processing, storage and retrieval of tactile information, and even weather forecasting. Brainets consistently performed at the same or higher levels than single rats in these tasks. Based on these findings, we propose that Brainets could be used to investigate animal social behaviors as well as a test bed for exploring the properties and potential applications of organic computers. PMID:26158615

  17. Neural simulations on multi-core architectures.

    PubMed

    Eichner, Hubert; Klug, Tobias; Borst, Alexander

    2009-01-01

    Neuroscience is witnessing increasing knowledge about the anatomy and electrophysiological properties of neurons and their connectivity, leading to an ever increasing computational complexity of neural simulations. At the same time, a rather radical change in personal computer technology emerges with the establishment of multi-cores: high-density, explicitly parallel processor architectures for both high performance as well as standard desktop computers. This work introduces strategies for the parallelization of biophysically realistic neural simulations based on the compartmental modeling technique and results of such an implementation, with a strong focus on multi-core architectures and automation, i.e. user-transparent load balancing.

  18. Neural Simulations on Multi-Core Architectures

    PubMed Central

    Eichner, Hubert; Klug, Tobias; Borst, Alexander

    2009-01-01

    Neuroscience is witnessing increasing knowledge about the anatomy and electrophysiological properties of neurons and their connectivity, leading to an ever increasing computational complexity of neural simulations. At the same time, a rather radical change in personal computer technology emerges with the establishment of multi-cores: high-density, explicitly parallel processor architectures for both high performance as well as standard desktop computers. This work introduces strategies for the parallelization of biophysically realistic neural simulations based on the compartmental modeling technique and results of such an implementation, with a strong focus on multi-core architectures and automation, i.e. user-transparent load balancing. PMID:19636393

  19. Seed robustness of oriented relative fuzzy connectedness: core computation and its applications

    NASA Astrophysics Data System (ADS)

    Tavares, Anderson C. M.; Bejar, Hans H. C.; Miranda, Paulo A. V.

    2017-02-01

    In this work, we present a formal definition and an efficient algorithm to compute the cores of Oriented Relative Fuzzy Connectedness (ORFC), a recent seed-based segmentation technique. The core is a region where the seed can be moved without altering the segmentation, an important aspect for robust techniques and reduction of user effort. We show how ORFC cores can be used to build a powerful hybrid image segmentation approach. We also provide some new theoretical relations between ORFC and Oriented Image Foresting Transform (OIFT), as well as their cores. Experimental results among several methods show that the hybrid approach conserves high accuracy, avoids the shrinking problem and provides robustness to seed placement inside the desired object due to the cores properties.

  20. Kr-85m activity as burnup measurement indicator in a pebble bed reactor based on ORIGEN2.1 Computer Simulation

    NASA Astrophysics Data System (ADS)

    Husnayani, I.; Udiyani, P. M.; Bakhri, S.; Sunaryo, G. R.

    2018-02-01

    Pebble Bed Reactor (PBR) is a high temperature gas-cooled reactor which employs graphite as a moderator and helium as a coolant. In a multi-pass PBR, the burnup of each fuel pebble must be measured online in each cycle in order to determine whether the fuel pebble should be reloaded into the core for another cycle or moved out of the core into spent fuel storage. One of the well-known methods for measuring burnup is based on the activity of radionuclides decaying inside the fuel pebble. In this work, the activity and gamma emission of Kr-85m were studied in order to investigate the feasibility of Kr-85m as a burnup measurement indicator in a PBR. The activity and gamma emission of Kr-85m were estimated using the ORIGEN2.1 computer code. The parameters of HTR-10 were taken as a case study in performing the ORIGEN2.1 simulation. The results show that the evolution of the Kr-85m activity correlates well with the burnup of the fuel pebble in each cycle. The Kr-85m activity reduction in each burnup step, in the range of 12% to 4%, is considered sufficient to indicate the burnup level in each cycle. The gamma emission of Kr-85m is also sufficiently high, on the order of 10^10 photons per second. From these results, it can be concluded that Kr-85m is suitable for use as a burnup measurement indicator in a pebble bed reactor.
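
    As a rough illustration of how such a per-cycle indicator could be evaluated, the sketch below computes the percentage drop in Kr-85m activity between successive passes from a list of activity values; the numbers are placeholders, not ORIGEN2.1 output.

        # Sketch: percentage change of Kr-85m activity between successive burnup steps.
        # The activity values are placeholders, not ORIGEN2.1 results.
        activities_bq = [2.40e13, 2.11e13, 1.92e13, 1.80e13, 1.73e13]  # one value per pass

        for cycle, (a_prev, a_next) in enumerate(zip(activities_bq, activities_bq[1:]), start=1):
            reduction_pct = 100.0 * (a_prev - a_next) / a_prev
            print(f"pass {cycle} -> pass {cycle + 1}: activity drops by {reduction_pct:.1f}%")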

  1. 1001 Ways to run AutoDock Vina for virtual screening

    NASA Astrophysics Data System (ADS)

    Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D.

    2016-03-01

    Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.
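
    A minimal Python sketch of point (1), assuming AutoDock Vina is installed and on the PATH: each ligand is docked by its own single-CPU Vina process with a fixed, recorded seed, and a process pool keeps all cores of one machine busy. File names and search-box parameters are hypothetical.

        # Sketch: one single-core Vina process per ligand, filling all local cores.
        import glob
        import subprocess
        from multiprocessing import Pool

        RECEPTOR = "receptor.pdbqt"            # hypothetical input files
        BOX = ["--center_x", "10", "--center_y", "12", "--center_z", "-5",
               "--size_x", "20", "--size_y", "20", "--size_z", "20"]

        def dock(ligand):
            out = ligand.replace(".pdbqt", "_out.pdbqt")
            cmd = ["vina", "--receptor", RECEPTOR, "--ligand", ligand, "--out", out,
                   "--cpu", "1", "--exhaustiveness", "8", "--seed", "42"] + BOX
            subprocess.run(cmd, check=True)    # fixed seed recorded for reproducibility
            return out

        if __name__ == "__main__":
            ligands = sorted(glob.glob("ligands/*.pdbqt"))
            with Pool() as pool:               # defaults to one worker per available core
                pool.map(dock, ligands)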

  2. 1001 Ways to run AutoDock Vina for virtual screening.

    PubMed

    Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D

    2016-03-01

    Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.

  3. The development and reliability of a simple field based screening tool to assess core stability in athletes.

    PubMed

    O'Connor, S; McCaffrey, N; Whyte, E; Moran, K

    2016-07-01

    To adapt the trunk stability test to facilitate further sub-classification of higher levels of core stability in athletes for use as a screening tool, and to establish the inter-tester and intra-tester reliability of this adapted core stability test. Reliability study. Collegiate athletic therapy facilities. Fifteen physically active male subjects (mean age 19.46 ± 0.63 years), free from any orthopaedic or neurological disorders, were recruited from a convenience sample of collegiate students. Intraclass correlation coefficients (ICC) and 95% confidence intervals (CI) were computed to establish inter-tester and intra-tester reliability. Excellent ICC values were observed for the adapted core stability test for inter-tester reliability (0.97), and good to excellent intra-tester reliability (0.73-0.90). While the 95% CIs were narrow for inter-tester reliability, the 95% CIs for Testers A and C were widely distributed compared to those for Tester B. The adapted core stability test developed in this study is a quick and simple field-based test to administer that can further subdivide athletes with high levels of core stability. The test demonstrated high inter-tester and intra-tester reliability. Copyright © 2015 Elsevier Ltd. All rights reserved.
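
    The paper does not state which ICC form was used; as a generic illustration, the sketch below computes ICC(2,1) (two-way random effects, absolute agreement, single rater) from a hypothetical subjects-by-raters score matrix.

        # Sketch: ICC(2,1), two-way random effects, absolute agreement, single rater.
        import numpy as np

        def icc_2_1(x):
            """x: (n subjects, k raters) matrix of scores."""
            n, k = x.shape
            grand = x.mean()
            ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
            ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # raters
            ss_err = ((x - x.mean(axis=1, keepdims=True)
                         - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()
            ms_err = ss_err / ((n - 1) * (k - 1))
            return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

        scores = np.array([[3, 3, 2], [4, 4, 4], [2, 3, 2], [5, 5, 4], [3, 4, 3]], dtype=float)  # hypothetical
        print(f"ICC(2,1) = {icc_2_1(scores):.2f}")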

  4. Interactions between core and matrix thalamocortical projections in human sleep spindle synchronization

    PubMed Central

    Bonjean, Maxime; Baker, Tanya; Bazhenov, Maxim; Cash, Sydney; Halgren, Eric; Sejnowski, Terrence

    2012-01-01

    Sleep spindles, which are bursts of 11–15 Hz that occur during non-REM sleep, are highly synchronous across the scalp when measured with EEG, but have low spatial coherence and exhibit low correlation with EEG signals when simultaneously measured with MEG spindles in humans. We developed a computational model to explore the hypothesis that the spatial coherence of the EEG spindle is a consequence of diffuse matrix projections of the thalamus to layer 1 compared to the focal projections of the core pathway to layer 4 recorded by the MEG. Increasing the fanout of thalamocortical connectivity in the matrix pathway while keeping the core pathway fixed led to increased synchrony of the spindle activity in the superficial cortical layers in the model. In agreement with cortical recordings, the latency for spindles to spread from the core to the matrix was independent of the thalamocortical fanout but highly dependent on the probability of connections between cortical areas. PMID:22496571

  5. SedCT: MATLAB™ tools for standardized and quantitative processing of sediment core computed tomography (CT) data collected using a medical CT scanner

    NASA Astrophysics Data System (ADS)

    Reilly, B. T.; Stoner, J. S.; Wiest, J.

    2017-08-01

    Computed tomography (CT) of sediment cores allows for high-resolution images, three-dimensional volumes, and down core profiles. These quantitative data are generated through the attenuation of X-rays, which are sensitive to sediment density and atomic number, and are stored in pixels as relative gray scale values or Hounsfield units (HU). We present a suite of MATLAB™ tools specifically designed for routine sediment core analysis as a means to standardize and better quantify the products of CT data collected on medical CT scanners. SedCT uses a graphical interface to process Digital Imaging and Communications in Medicine (DICOM) files, stitch overlapping scanned intervals, and create down core HU profiles in a manner robust to normal coring imperfections. Utilizing a random sampling technique, SedCT reduces data size and allows for quick processing on typical laptop computers. SedCTimage uses a graphical interface to create quality tiff files of CT slices that are scaled to a user-defined HU range, preserving the quantitative nature of CT images and easily allowing for comparison between sediment cores with different HU means and variance. These tools are presented along with examples from lacustrine and marine sediment cores to highlight the robustness and quantitative nature of this method.
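
    SedCT itself is a MATLAB™ tool; as a rough Python illustration of the underlying steps (convert stored CT values to Hounsfield units via the DICOM rescale tags, sample pixels away from the liner, and take a robust per-slice statistic), a minimal sketch with a hypothetical file layout follows.

        # Sketch: build a down-core Hounsfield-unit profile from a stack of CT slices.
        # Directory layout and sampling choices are illustrative, not SedCT's actual logic;
        # it assumes the RescaleSlope/RescaleIntercept tags are present in each slice.
        import glob
        import numpy as np
        import pydicom

        rng = np.random.default_rng(1)
        profile = []
        for path in sorted(glob.glob("core_scan/*.dcm")):
            ds = pydicom.dcmread(path)
            hu = ds.pixel_array * float(ds.RescaleSlope) + float(ds.RescaleIntercept)
            center = hu[hu.shape[0] // 4 : 3 * hu.shape[0] // 4,
                        hu.shape[1] // 4 : 3 * hu.shape[1] // 4]   # stay inside the core liner
            sample = rng.choice(center.ravel(), size=min(2000, center.size), replace=False)
            profile.append(np.median(sample))                      # robust to cracks and voids
        profile = np.asarray(profile)
        print("down-core HU profile length:", profile.size)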

  6. Dynamic Analysis Method for Electromagnetic Artificial Muscle Actuator under PID Control

    NASA Astrophysics Data System (ADS)

    Nakata, Yoshihiro; Ishiguro, Hiroshi; Hirata, Katsuhiro

    We have been studying an interior permanent magnet linear actuator for an artificial muscle. This actuator mainly consists of a mover and a stator. The mover is composed of permanent magnets, magnetic cores and a non-magnetic shaft. The stator is composed of 3-phase coils and a back yoke. In this paper, a dynamic analysis method under PID control is proposed, employing the 3-D finite element method (3-D FEM) to compute the dynamic response and current response when the positioning control is active. In conclusion, the computed results show good agreement with measurements on a prototype.
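
    As a generic illustration of the control loop involved (not the paper's FEM-coupled model), the sketch below runs a discrete PID position controller against a toy mass-damper plant; all gains and plant parameters are invented.

        # Sketch: discrete PID position control of a toy first-order actuator model.
        # Gains and plant dynamics are placeholders, not the paper's FEM-coupled model.
        kp, ki, kd = 40.0, 15.0, 1.2
        dt, target = 1e-3, 5.0e-3          # 1 ms step, 5 mm setpoint

        position, velocity = 0.0, 0.0
        integral, prev_err = 0.0, 0.0
        for step in range(2000):
            err = target - position
            integral += err * dt
            derivative = (err - prev_err) / dt
            force = kp * err + ki * integral + kd * derivative
            prev_err = err
            # toy plant: mass-damper instead of the coupled magnetic FEM model
            accel = (force - 8.0 * velocity) / 0.05
            velocity += accel * dt
            position += velocity * dt
        print(f"final position: {position * 1e3:.2f} mm")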

  7. Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Khalik, Hany S.; Turinsky, Paul J.

    2005-07-15

    Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. A meaningful adaption will result in high-fidelity and robust adapted core simulator models. To perform adaption, we propose an inverse theory approach in which the multitudes of input data to core simulators, i.e., reactor physics and thermal-hydraulic data, are to be adjusted to improve agreement with measured observables while keeping core simulator models unadapted. At first glance, devising such adaption for typical core simulators with millions of input and observables data would spawn not only several prohibitive challenges but also numerous disparaging concerns. The challenges include the computational burdens of the sensitivity-type calculations required to construct Jacobian operators for the core simulator models. Also, the computational burdens of the uncertainty-type calculations required to estimate the uncertainty information of core simulator input data present a demanding challenge. The concerns however are mainly related to the reliability of the adjusted input data. The methodologies of adaptive simulation are well established in the literature of data adjustment. We adopt the same general framework for data adjustment; however, we refrain from solving the fundamental adjustment equations in a conventional manner. We demonstrate the use of our so-called Efficient Subspace Methods (ESMs) to overcome the computational and storage burdens associated with the core adaption problem. We illustrate the successful use of ESM-based adaptive techniques for a typical boiling water reactor core simulator adaption problem.
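
    The general data-adjustment step described here can be written, for a tiny dense stand-in problem, as a generalized least-squares update of the simulator input data; the Jacobian, covariances, and residuals below are placeholders and none of the ESM machinery for realistically sized problems is reproduced.

        # Sketch: generalized least-squares data adjustment (small dense stand-in for ESM).
        # delta_p minimizes ||residual - J delta_p||^2 weighted by C_d^-1, plus a prior
        # penalty ||delta_p||^2 weighted by C_p^-1.
        import numpy as np

        rng = np.random.default_rng(0)
        J = rng.normal(size=(6, 4))                          # sensitivities of observables to inputs
        C_d_inv = np.diag(1.0 / np.array([0.02] * 6) ** 2)   # measurement uncertainty weights
        C_p_inv = np.diag(1.0 / np.array([0.05] * 4) ** 2)   # prior input-data uncertainty weights
        residual = rng.normal(scale=0.02, size=6)            # measured minus calculated observables

        lhs = J.T @ C_d_inv @ J + C_p_inv
        rhs = J.T @ C_d_inv @ residual
        delta_p = np.linalg.solve(lhs, rhs)                  # adjustment to core simulator inputs
        print("input-data adjustments:", np.round(delta_p, 4))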

  8. Node Resource Manager: A Distributed Computing Software Framework Used for Solving Geophysical Problems

    NASA Astrophysics Data System (ADS)

    Lawry, B. J.; Encarnacao, A.; Hipp, J. R.; Chang, M.; Young, C. J.

    2011-12-01

    With the rapid growth of multi-core computing hardware, it is now possible for scientific researchers to run complex, computationally intensive software on affordable, in-house commodity hardware. Multi-core CPUs (Central Processing Unit) and GPUs (Graphics Processing Unit) are now commonplace in desktops and servers. Developers today have access to extremely powerful hardware that enables the execution of software that could previously only be run on expensive, massively-parallel systems. It is no longer cost-prohibitive for an institution to build a parallel computing cluster consisting of commodity multi-core servers. In recent years, our research team has developed a distributed, multi-core computing system and used it to construct global 3D earth models using seismic tomography. Traditionally, computational limitations forced certain assumptions and shortcuts in the calculation of tomographic models; however, with the recent rapid growth in computational hardware including faster CPU's, increased RAM, and the development of multi-core computers, we are now able to perform seismic tomography, 3D ray tracing and seismic event location using distributed parallel algorithms running on commodity hardware, thereby eliminating the need for many of these shortcuts. We describe Node Resource Manager (NRM), a system we developed that leverages the capabilities of a parallel computing cluster. NRM is a software-based parallel computing management framework that works in tandem with the Java Parallel Processing Framework (JPPF, http://www.jppf.org/), a third party library that provides a flexible and innovative way to take advantage of modern multi-core hardware. NRM enables multiple applications to use and share a common set of networked computers, regardless of their hardware platform or operating system. Using NRM, algorithms can be parallelized to run on multiple processing cores of a distributed computing cluster of servers and desktops, which results in a dramatic speedup in execution time. NRM is sufficiently generic to support applications in any domain, as long as the application is parallelizable (i.e., can be subdivided into multiple individual processing tasks). At present, NRM has been effective in decreasing the overall runtime of several algorithms: 1) the generation of a global 3D model of the compressional velocity distribution in the Earth using tomographic inversion, 2) the calculation of the model resolution matrix, model covariance matrix, and travel time uncertainty for the aforementioned velocity model, and 3) the correlation of waveforms with archival data on a massive scale for seismic event detection. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
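
    NRM itself builds on the Java Parallel Processing Framework; as a language-neutral sketch of the underlying task-farming pattern (subdivide a parallelizable job into independent tasks and fan them out over the available cores), a minimal Python example with a placeholder work unit follows.

        # Sketch: fan independent tasks out over local cores (a stand-in for one NRM/JPPF node).
        from multiprocessing import Pool

        def trace_ray_bundle(task_id):
            # placeholder for one unit of work, e.g. 3D ray tracing for a block of events
            return task_id, sum(i * i for i in range(50_000))

        if __name__ == "__main__":
            tasks = range(64)                  # a job subdivided into 64 independent tasks
            with Pool() as pool:               # one worker per available core
                for task_id, result in pool.imap_unordered(trace_ray_bundle, tasks):
                    pass                       # results would be gathered and merged here
            print("all tasks completed")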

  9. Large Scale Flutter Data for Design of Rotating Blades Using Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2012-01-01

    A procedure to compute flutter boundaries of rotating blades is presented, based on (a) the Navier-Stokes equations and (b) a frequency-domain method compatible with industry practice. The procedure is first validated against (a) unsteady loads from a flapping-wing experiment and (b) the flutter boundary of a fixed-wing experiment. Large-scale flutter computation is then demonstrated for a rotating blade: (a) single job submission script; (b) flutter boundary obtained in 24 hours of wall-clock time with 100 cores; (c) linear scalability with the number of cores, tested with 1000 cores, which produced data for 10 flutter boundaries in 25 hours. Further wall-clock speed-up is possible by performing parallel computations within each case.

  10. The AGN-Starburst connection in COLA galaxies

    NASA Astrophysics Data System (ADS)

    Hurley, Rossa; Phillips, Chris; Norris, Ray; Appleton, Phil; Conway, John; Parra, Rodrigo

    2007-10-01

    We propose to observe the COLA-S sample of 107 galaxies to test the hypothesis that VLBI-detectable AGN cores in IR-luminous sources are accompanied by intense compact star-formation activity. To maximise our sensitivity with available resources, we propose single-baseline 1 Gbit/s VLBI observations between Narrabri and Parkes, and will correlate the data in near-real time using the CPSR-II computer at Parkes.

  11. A programmable computational image sensor for high-speed vision

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian

    2013-08-01

    In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array and a RISC core. The pixel-parallel PE array transfers, stores and processes raw image data in a SIMD fashion with its own programming language. The RPs form a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE array and RP array can complete a great amount of computation within a few instruction cycles and therefore satisfy the requirements of low- and mid-level high-speed image processing. The RISC core controls the operation of the whole system and executes some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect the major components. A programming language and corresponding tool chain for this computational image sensor have also been developed.

  12. A method for modeling finite-core vortices in wake-flow calculations

    NASA Technical Reports Server (NTRS)

    Stremel, P. M.

    1984-01-01

    A numerical method for computing nonplanar vortex wakes represented by finite-core vortices is presented. The approach solves for the velocity on an Eulerian grid, using standard finite-difference techniques; the vortex wake is tracked by Lagrangian methods. In this method, the distribution of continuous vorticity in the wake is replaced by a group of discrete vortices. An axially symmetric distribution of vorticity about the center of each discrete vortex is used to represent the finite-core model. Two distributions of vorticity, or core models, are investigated: a finite distribution of vorticity represented by a third-order polynomial, and a continuous distribution of vorticity throughout the wake. The method provides for a vortex-core model that is insensitive to the mesh spacing. Results for a simplified case are presented. Computed results for the roll-up of a vortex wake generated by wings with different spanwise load distributions are presented; contour plots of the flow-field velocities are included; and comparisons are made of the computed flow-field velocities with experimentally measured velocities.
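
    The essential difference between a singular point vortex and a finite-core vortex can be seen in the tangential velocity profile; the sketch below contrasts the two using a smooth exponential core for illustration (the paper itself uses a third-order polynomial vorticity distribution and a fully continuous model).

        # Sketch: tangential velocity of a point vortex vs. a finite-core vortex model.
        import numpy as np

        gamma, r_core = 1.0, 0.1                  # circulation and core radius (arbitrary units)
        r = np.linspace(1e-3, 0.5, 200)

        v_point = gamma / (2.0 * np.pi * r)                                      # singular as r -> 0
        v_core = gamma / (2.0 * np.pi * r) * (1.0 - np.exp(-(r / r_core) ** 2))  # finite at r = 0

        print("peak point-vortex velocity :", v_point.max())
        print("peak finite-core velocity  :", v_core.max())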

  13. Reconfigurable Hardware Adapts to Changing Mission Demands

    NASA Technical Reports Server (NTRS)

    2003-01-01

    A new class of computing architectures and processing systems, which use reconfigurable hardware, is creating a revolutionary approach to implementing future spacecraft systems. With the increasing complexity of electronic components, engineers must design next-generation spacecraft systems with new technologies in both hardware and software. Derivation Systems, Inc., of Carlsbad, California, has been working through NASA's Small Business Innovation Research (SBIR) program to develop key technologies in reconfigurable computing and Intellectual Property (IP) soft cores. Founded in 1993, Derivation Systems has received several SBIR contracts from NASA's Langley Research Center and the U.S. Department of Defense Air Force Research Laboratories in support of its mission to develop hardware and software for high-assurance systems. Through these contracts, Derivation Systems began developing leading-edge technology in formal verification, embedded Java, and reconfigurable computing for its PF3100, Derivational Reasoning System (DRS), FormalCORE IP, FormalCORE PCI/32, FormalCORE DES, and LavaCORE Configurable Java Processor, which are designed for greater flexibility and security on all space missions.

  14. The Effect of Birthrate Granularity on the Release- to- Birth Ratio for the AGR-1 In-core Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawn Scates; John Walter

    The AGR-1 Advanced Gas Reactor (AGR) tristructural-isotropic-particle fuel experiment underwent 13 irradiation intervals from December 2006 until November 2009 within the Idaho National Laboratory Advanced Test Reactor in support of the Next Generation Nuclear Power Plant program. During this multi-year experiment, release-to-birth rate ratios were computed at the end of each operating interval to provide information about fuel performance. Fission products released during irradiation were tracked daily by the Fission Product Monitoring System using 8-hour measurements. Birth rates calculated by MCNP with ORIGEN for as-run conditions were computed at the end of each irradiation interval. Each time step in MCNP provided neutron flux, reaction rates and AGR-1 compact composition, which were used to determine birth rates using ORIGEN. The initial birth-rate data, consisting of four values for each irradiation interval at the beginning, end, and two intermediate times, were interpolated to obtain values for each 8-hour activity. The problem with this method is that any daily changes in heat rates or perturbations, such as shim control movement or core/lobe power fluctuations, would not be reflected in the interpolated data and a true picture of the system would not be presented. At the conclusion of the AGR-1 experiment, great efforts were put forth to compute daily birthrates, which were reprocessed with the 8-hour release activity. The results of this study are presented in this paper.
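
    A small numpy sketch of the point being made (release-to-birth ratios built from four interpolated birthrate support points per interval versus daily birthrates) is given below; all values are placeholders, not AGR-1 data.

        # Sketch: release-to-birth (R/B) ratios from interpolated vs. daily birthrates.
        # All values are placeholders, not AGR-1 measurements.
        import numpy as np

        days = np.arange(0.0, 60.0, 1.0)
        release = 1.0e5 * (1.0 + 0.05 * np.sin(days / 5.0))        # measured release rate (at/s)
        daily_birth = 1.0e9 * (1.0 + 0.10 * np.sin(days / 7.0))    # daily MCNP/ORIGEN birthrates

        # birthrates at only four support points per interval, as in the original processing
        support_days = np.array([0.0, 20.0, 40.0, 59.0])
        support_birth = np.interp(support_days, days, daily_birth)
        interp_birth = np.interp(days, support_days, support_birth)

        rb_daily = release / daily_birth
        rb_interp = release / interp_birth
        print("max relative difference in R/B:",
              float(np.max(np.abs(rb_daily - rb_interp) / rb_daily)))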

  15. The effect of birthrate granularity on the release-to-birth ratio for the AGR-1 in-core experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D. M. Scates; J. B. Walter; J. T. Maki

    The AGR-1 Advanced Gas Reactor (AGR) tristructural-isotropic-particle fuel experiment underwent 13 irradiation intervals from December 2006 until November 2009 within the Idaho National Laboratory Advanced Test Reactor in support of the Next Generation Nuclear Power Plant program. During this multi-year experiment, release-to-birth rate ratios were computed at the end of each operating interval to provide information about fuel performance. Fission products released during irradiation were tracked daily by the Fission Product Monitoring System using 8-h measurements. Birth rates calculated by MCNP with ORIGEN for as-run conditions were computed at the end of each irradiation interval. Each time step in MCNP provided neutron flux, reaction rates and AGR-1 compact composition, which were used to determine birth rates using ORIGEN. The initial birth-rate data, consisting of four values for each irradiation interval at the beginning, end, and two intermediate times, were interpolated to obtain values for each 8-h activity. The problem with this method is that any daily changes in heat rates or perturbations, such as shim control movement or core/lobe power fluctuations, would not be reflected in the interpolated data and a true picture of the system would not be presented. At the conclusion of the AGR-1 experiment, great efforts were put forth to compute daily birthrates, which were reprocessed with the 8-h release activity. The results of this study are presented in this paper.

  16. Geocomputation over Hybrid Computer Architecture and Systems: Prior Works and On-going Initiatives at UARK

    NASA Astrophysics Data System (ADS)

    Shi, X.

    2015-12-01

    As NSF indicated - "Theory and experimentation have for centuries been regarded as two fundamental pillars of science. It is now widely recognized that computational and data-enabled science forms a critical third pillar." Geocomputation is the third pillar of GIScience and geosciences. With the exponential growth of geodata, the challenge of scalable and high performance computing for big data analytics becomes urgent because many research activities are constrained by software or tools that simply cannot complete the computation process. Heterogeneous geodata integration and analytics obviously magnify the complexity and operational time frame. Many large-scale geospatial problems may not be processable at all if the computer system does not have sufficient memory or computational power. Emerging computer architectures, such as Intel's Many Integrated Core (MIC) Architecture and Graphics Processing Unit (GPU), and advanced computing technologies provide promising solutions to employ massive parallelism and hardware resources to achieve scalability and high performance for data-intensive computing over large spatiotemporal and social media data. Exploring novel algorithms and deploying the solutions in massively parallel computing environments to achieve the capability for scalable data processing and analytics over large-scale, complex, and heterogeneous geodata with consistent quality and high performance has been the central theme of our research team in the Department of Geosciences at the University of Arkansas (UARK). New multi-core architectures combined with application accelerators hold the promise to achieve scalability and high performance by exploiting task and data levels of parallelism that are not supported by the conventional computing systems. Such a parallel or distributed computing environment is particularly suitable for large-scale geocomputation over big data as proved by our prior works, while the potential of such advanced infrastructure remains unexplored in this domain. Within this presentation, our prior and on-going initiatives will be summarized to exemplify how we exploit multicore CPUs, GPUs, and MICs, and clusters of CPUs, GPUs and MICs, to accelerate geocomputation in different applications.

  17. The markup is the model: reasoning about systems biology models in the Semantic Web era.

    PubMed

    Kell, Douglas B; Mendes, Pedro

    2008-06-07

    Metabolic control analysis, co-invented by Reinhart Heinrich, is a formalism for the analysis of biochemical networks, and is a highly important intellectual forerunner of modern systems biology. Exchanging ideas and exchanging models are part of the international activities of science and scientists, and the Systems Biology Markup Language (SBML) allows one to perform the latter with great facility. Encoding such models in SBML allows their distributed analysis using loosely coupled workflows, and with the advent of the Internet the various software modules that one might use to analyze biochemical models can reside on entirely different computers and even on different continents. Optimization is at the core of many scientific and biotechnological activities, and Reinhart made many major contributions in this area, stimulating our own activities in the use of the methods of evolutionary computing for optimization.

  18. Spaceborne Processor Array

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.; Schatzel, Donald V.; Whitaker, William D.; Sterling, Thomas

    2008-01-01

    A Spaceborne Processor Array in Multifunctional Structure (SPAMS) can lower the total mass of the electronic and structural overhead of spacecraft, resulting in reduced launch costs, while increasing the science return through dynamic onboard computing. SPAMS integrates the multifunctional structure (MFS) and the Gilgamesh Memory, Intelligence, and Network Device (MIND) multi-core in-memory computer architecture into a single-system super-architecture. This transforms every inch of a spacecraft into a sharable, interconnected, smart computing element to increase computing performance while simultaneously reducing mass. The MIND in-memory architecture provides a foundation for high-performance, low-power, and fault-tolerant computing. The MIND chip has an internal structure that includes memory, processing, and communication functionality. The Gilgamesh is a scalable system comprising multiple MIND chips interconnected to operate as a single, tightly coupled, parallel computer. The array of MIND components shares a global, virtual name space for program variables and tasks that are allocated at run time to the distributed physical memory and processing resources. Individual processor- memory nodes can be activated or powered down at run time to provide active power management and to configure around faults. A SPAMS system is comprised of a distributed Gilgamesh array built into MFS, interfaces into instrument and communication subsystems, a mass storage interface, and a radiation-hardened flight computer.

  19. Assembly of large metagenome data sets using a Convey HC-1 hybrid core computer (7th Annual SFAF Meeting, 2012)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Copeland, Alex

    2012-06-01

    Alex Copeland on "Assembly of large metagenome data sets using a Convey HC-1 hybrid core computer" at the 2012 Sequencing, Finishing, Analysis in the Future Meeting held June 5-7, 2012 in Santa Fe, New Mexico.

  20. Assembly of large metagenome data sets using a Convey HC-1 hybrid core computer (7th Annual SFAF Meeting, 2012)

    ScienceCinema

    Copeland, Alex [DOE JGI

    2017-12-09

    Alex Copeland on "Assembly of large metagenome data sets using a Convey HC-1 hybrid core computer" at the 2012 Sequencing, Finishing, Analysis in the Future Meeting held June 5-7, 2012 in Santa Fe, New Mexico.

  1. A novel computer-aided method to fabricate a custom one-piece glass fiber dowel-and-core based on digitized impression and crown preparation data.

    PubMed

    Chen, Zhiyu; Li, Ya; Deng, Xuliang; Wang, Xinzhi

    2014-06-01

    Fiber-reinforced composite dowels have been widely used for their superior biomechanical properties; however, their preformed shape cannot fit irregularly shaped root canals. This study aimed to describe a novel computer-aided method to create a custom-made one-piece dowel-and-core based on the digitization of impressions and clinical standard crown preparations. A standard maxillary die stone model containing three prepared teeth each (maxillary lateral incisor, canine, premolar) requiring dowel restorations was made. It was then mounted on an average value articulator with the mandibular stone model to simulate natural occlusion. Impressions for each tooth were obtained using vinylpolysiloxane with a sectional dual-arch tray and digitized with an optical scanner. The dowel-and-core virtual model was created by slicing 3D dowel data from impression digitization with core data selected from a standard crown preparation database of 107 records collected from clinics and digitized. The position of the chosen digital core was manually regulated to coordinate with the adjacent teeth to fulfill the crown restorative requirements. Based on virtual models, one-piece custom dowel-and-cores for three experimental teeth were milled from a glass fiber block with computer-aided manufacturing techniques. Furthermore, two patients were treated to evaluate the practicality of this new method. The one-piece glass fiber dowel-and-core made for experimental teeth fulfilled the clinical requirements for dowel restorations. Moreover, two patients were treated to validate the technique. This novel computer-aided method to create a custom one-piece glass fiber dowel-and-core proved to be practical and efficient. © 2013 by the American College of Prosthodontists.

  2. Replication of Space-Shuttle Computers in FPGAs and ASICs

    NASA Technical Reports Server (NTRS)

    Ferguson, Roscoe C.

    2008-01-01

    A document discusses the replication of the functionality of the onboard space-shuttle general-purpose computers (GPCs) in field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs). The purpose of the replication effort is to enable utilization of proven space-shuttle flight software and software-development facilities to the extent possible during development of software for flight computers for a new generation of launch vehicles derived from the space shuttles. The replication involves specifying the instruction set of the central processing unit and the input/output processor (IOP) of the space-shuttle GPC in a hardware description language (HDL). The HDL is synthesized to form a "core" processor in an FPGA or, less preferably, in an ASIC. The core processor can be used to create a flight-control card to be inserted into a new avionics computer. The IOP of the GPC as implemented in the core processor could be designed to support data-bus protocols other than that of a multiplexer interface adapter (MIA) used in the space shuttle. Hence, a computer containing the core processor could be tailored to communicate via the space-shuttle GPC bus and/or one or more other buses.

  3. Energy consumption estimation of an OMAP-based Android operating system

    NASA Astrophysics Data System (ADS)

    González, Gabriel; Juárez, Eduardo; Castro, Juan José; Sanz, César

    2011-05-01

    System-level energy optimization of battery-powered multimedia embedded systems has recently become a design goal. The poor operational time of multimedia terminals makes computationally demanding applications impractical in real scenarios. For instance, the so-called smart-phones are currently unable to remain in operation longer than several hours. The OMAP3530 processor basically consists of two processing cores, a General Purpose Processor (GPP) and a Digital Signal Processor (DSP). The former, an ARM Cortex-A8 processor, is intended to run a generic Operating System (OS), while the latter, a DSP core based on the C64x+, has an architecture optimized for video processing. The BeagleBoard, a commercial prototyping board based on the OMAP processor, has been used to test the Android Operating System and measure its performance. The board has 128 MB of SDRAM external memory, 256 MB of Flash external memory and several interfaces. Note that the clock frequency of the ARM and DSP OMAP cores is 600 MHz and 430 MHz, respectively. This paper describes the energy consumption estimation of the processes and multimedia applications of an Android v1.6 (Donut) OS on the OMAP3530-based BeagleBoard. In addition, tools to communicate between the two processing cores have been employed. A test-bench to profile the OS resource usage has been developed. As far as the energy estimates are concerned, the OMAP processor energy consumption model provided by the manufacturer has been used. The model is basically divided into two energy components. The former, the baseline core energy, describes the energy consumption that is independent of any chip activity. The latter, the module active energy, describes the energy consumed by the active modules depending on resource usage.
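
    The two-component model described (a baseline energy independent of chip activity plus per-module active energy scaled by usage) amounts to E_total = P_baseline·T + Σ_i P_active,i·u_i·T; the sketch below evaluates it with invented numbers purely for illustration.

        # Sketch: E_total = P_baseline * T + sum_i(P_active_i * u_i * T), placeholder values.
        def energy_j(t_s, p_baseline_w, modules):
            """modules: list of (active_power_w, usage_fraction) pairs."""
            return p_baseline_w * t_s + sum(p * u * t_s for p, u in modules)

        # hypothetical numbers: ARM core, DSP core and SDRAM over a 10 s video decode
        print(energy_j(10.0, 0.12, [(0.35, 0.60), (0.40, 0.30), (0.10, 0.45)]), "J")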

  4. Theoretical surface core-level shifts for Be(0001)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feibelman, P.J.

    1994-05-15

    Core-ionization potentials (CIP's) are computed for Be(0001). Three core features are observed in corresponding photoelectron spectra, with CIP's shifted relative to the bulk core level by −0.825, −0.570, and −0.265 eV. The computed CIP shifts for the outer and subsurface layers, −0.60 and −0.29 eV, respectively, agree with the latter two of these. It is surmised that the −0.825-eV shift is associated with a surface defect. The negative signs of the Be(0001) surface core-level shifts do not fit into the thermochemical picture widely used to explain CIP shifts. The reason is that a core-ionized Be atom is too small to bond effectively to the remainder of the unrelaxed Be lattice.

  5. Engineering of lipid-coated PLGA nanoparticles with a tunable payload of diagnostically active nanocrystals for medical imaging.

    PubMed

    Mieszawska, Aneta J; Gianella, Anita; Cormode, David P; Zhao, Yiming; Meijerink, Andries; Langer, Robert; Farokhzad, Omid C; Fayad, Zahi A; Mulder, Willem J M

    2012-06-14

    Polylactic-co-glycolic acid (PLGA) based nanoparticles are biocompatible and biodegradable and therefore have been extensively investigated as therapeutic carriers. Here, we engineered diagnostically active PLGA nanoparticles that incorporate high payloads of nanocrystals into their core for tunable bioimaging features. We accomplished this through esterification reactions of PLGA to generate polymers modified with nanocrystals. The PLGA nanoparticles formed from modified PLGA polymers that were functionalized with either gold nanocrystals or quantum dots exhibited favorable features for computed tomography and optical imaging, respectively.

  6. Oak Ridge National Laboratory Core Competencies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberto, J.B.; Anderson, T.D.; Berven, B.A.

    1994-12-01

    A core competency is a distinguishing integration of capabilities which enables an organization to deliver mission results. Core competencies represent the collective learning of an organization and provide the capacity to perform present and future missions. Core competencies are distinguishing characteristics which offer comparative advantage and are difficult to reproduce. They exhibit customer focus, mission relevance, and vertical integration from research through applications. They are demonstrable by metrics such as level of investment, uniqueness of facilities and expertise, and national impact. The Oak Ridge National Laboratory (ORNL) has identified four core competencies which satisfy the above criteria. Each core competency represents an annual investment of at least $100M and is characterized by an integration of Laboratory technical foundations in physical, chemical, and materials sciences; biological, environmental, and social sciences; engineering sciences; and computational sciences and informatics. The ability to integrate broad technical foundations to develop and sustain core competencies in support of national R&D goals is a distinguishing strength of the national laboratories. The ORNL core competencies are: Energy Production and End-Use Technologies; Biological and Environmental Sciences and Technology; Advanced Materials Synthesis, Processing, and Characterization; and Neutron-Based Science and Technology. The distinguishing characteristics of each ORNL core competency are described. In addition, written material is provided for two emerging competencies: Manufacturing Technologies and Computational Science and Advanced Computing. Distinguishing institutional competencies in the Development and Operation of National Research Facilities, R&D Integration and Partnerships, Technology Transfer, and Science Education are also described. Finally, financial data for the ORNL core competencies are summarized in the appendices.

  7. Parallel-vector out-of-core equation solver for computational mechanics

    NASA Technical Reports Server (NTRS)

    Qin, J.; Agarwal, T. K.; Storaasli, O. O.; Nguyen, D. T.; Baddourah, M. A.

    1993-01-01

    A parallel/vector out-of-core equation solver is developed for shared-memory computers, such as the Cray Y-MP machine. The input/output (I/O) time is reduced by using the asynchronous BUFFER IN and BUFFER OUT, which can be executed simultaneously with the CPU instructions. The parallel and vector capability provided by the supercomputers is also exploited to enhance the performance. Numerical applications in large-scale structural analysis are given to demonstrate the efficiency of the present out-of-core solver.
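
    The overlap of I/O with computation that asynchronous BUFFER IN/BUFFER OUT provides can be illustrated, in a platform-neutral way, by a double-buffered loop that prefetches the next block on a background thread while the current block is being processed; the block reader and compute kernel below are placeholders.

        # Sketch: double-buffered out-of-core processing; the read of block i+1
        # overlaps with the computation on block i (a generic analogue of
        # asynchronous BUFFER IN / BUFFER OUT).
        import threading
        import numpy as np

        def read_block(i):
            # stand-in for reading block i of the equation system from disk
            return np.full((512, 512), float(i))

        def process(block):
            # stand-in for the factorization/solve work on the in-memory block
            return float(np.linalg.norm(block))

        def run(n_blocks):
            current = read_block(0)
            for i in range(n_blocks):
                prefetched = {}
                t = None
                if i + 1 < n_blocks:
                    t = threading.Thread(
                        target=lambda j=i + 1: prefetched.update(block=read_block(j)))
                    t.start()              # start reading the next block...
                process(current)           # ...while computing on the current one
                if t is not None:
                    t.join()
                    current = prefetched["block"]
            print("processed", n_blocks, "blocks")

        run(4)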

  8. Self-Aware Computing

    DTIC Science & Technology

    2009-06-01

    …to floating point, to multi-level logic. Self-aware computation can be distinguished from existing computational models which are… systems have advanced to the point that the time is ripe to realize such a system. To illustrate, let us examine each of the key aspects of self-… servers for each service, there are no single points of failure in the system. If an OS or user core has a failure, one of several introspection cores…

  9. Computational Investigation of Shock-Mitigation Efficacy of Polyurea When Used in a Combat Helmet

    DTIC Science & Technology

    2012-01-01

    Emerald article: "Computational investigation of shock-mitigation efficacy of polyurea when used in a combat helmet: A core sample analysis", Multidiscipline Modeling in Materials and Structures, Vol. 8 (2012).

  10. Density functional description of size-dependent effects at nucleation on neutral and charged nanoparticles

    NASA Astrophysics Data System (ADS)

    Shchekin, Alexander K.; Lebedeva, Tatiana S.

    2017-03-01

    A numerical study of size-dependent effects in the thermodynamics of a small droplet formed around a solid nanoparticle has been performed within the square-gradient density functional theory. The Lennard-Jones fluid with the Carnahan-Starling model for the hard-sphere contribution to intermolecular interaction in liquid and vapor phases and interfaces has been used for description of the condensate. The intermolecular forces between the solid core and condensate molecules have been taken into account with the help of the Lennard-Jones part of the total molecular potential of the core. The influence of the electric charge of the particle has been considered under the assumption of a central Coulomb potential in a medium with dielectric permittivity depending on the local condensate density. The condensate density profiles and equimolecular radii for equilibrium droplets at different values of the condensate chemical potential have been computed in the cases of an uncharged solid core with the molecular potential, a charged core without the molecular potential, and a core with the joint action of the Coulomb and molecular potentials. The appearance of stable equilibrium droplets even in the absence of an electric charge is also discussed. As a next step, the capillary, disjoining pressure, and electrostatic contributions to the condensate chemical potential have been considered and compared with the predictions of classical thermodynamics in a wide range of values of the droplet and particle equimolecular radii. With the help of the found dependence of the condensate chemical potential on the droplet size, the activation barrier for nucleation on uncharged and charged particles has been computed as a function of the vapor supersaturation. Finally, the work of droplet formation and the work of wetting the particle have been found as functions of the droplet size.
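
    For reference, the Carnahan-Starling hard-sphere contribution mentioned above is commonly written in terms of the packing fraction η (with d the hard-sphere diameter and ρ the number density):

        Z_{\mathrm{hs}} \;=\; \frac{p}{\rho k_{B} T}
        \;=\; \frac{1 + \eta + \eta^{2} - \eta^{3}}{(1 - \eta)^{3}},
        \qquad \eta = \frac{\pi}{6}\,\rho\, d^{3}.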

  11. An approach to model reactor core nodalization for deterministic safety analysis

    NASA Astrophysics Data System (ADS)

    Salim, Mohd Faiz; Samsudin, Mohd Rafie; Mamat @ Ibrahim, Mohd Rizal; Roslan, Ridha; Sadri, Abd Aziz; Farid, Mohd Fairus Abd

    2016-01-01

    Adopting a good nodalization strategy is essential to produce an accurate and high quality input model for Deterministic Safety Analysis (DSA) using a System Thermal-Hydraulic (SYS-TH) computer code. The purpose of such analysis is to demonstrate compliance against regulatory requirements and to verify the behavior of the reactor during normal and accident conditions as it was originally designed. Numerous studies in the past have been devoted to the development of nodalization strategies for small research reactors (e.g., 250 kW) up to bigger research reactors (e.g., 30 MW). As such, this paper aims to discuss the state-of-the-art thermal-hydraulic channels to be employed in the nodalization of the RTP-TRIGA Research Reactor, specifically for the reactor core. At present, the required thermal-hydraulic parameters for the reactor core, such as core geometrical data (length, coolant flow area, hydraulic diameters, and axial power profile) and material properties (including the UZrH1.6, stainless steel clad, graphite reflector), have been collected, analyzed and consolidated in the Reference Database of RTP using a standardized methodology, mainly derived from the available technical documentation. Based on the available information in the database, assumptions made on the nodalization approach and calculations performed will be discussed and presented. The development and identification of the thermal hydraulics channel for the reactor core will be implemented during the SYS-TH calculation using the RELAP5-3D® computer code. The activity presented in this paper is part of the development of the overall nodalization description for the RTP-TRIGA Research Reactor under the IAEA Norwegian Extra-Budgetary Programme (NOKEBP) mentoring project on Expertise Development through the Analysis of Reactor Thermal-Hydraulics for Malaysia, denoted as EARTH-M.

  12. An approach to model reactor core nodalization for deterministic safety analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salim, Mohd Faiz, E-mail: mohdfaizs@tnb.com.my; Samsudin, Mohd Rafie, E-mail: rafies@tnb.com.my; Mamat Ibrahim, Mohd Rizal, E-mail: m-rizal@nuclearmalaysia.gov.my

    Adopting a good nodalization strategy is essential to produce an accurate and high quality input model for Deterministic Safety Analysis (DSA) using a System Thermal-Hydraulic (SYS-TH) computer code. The purpose of such analysis is to demonstrate compliance against regulatory requirements and to verify the behavior of the reactor during normal and accident conditions as it was originally designed. Numerous studies in the past have been devoted to the development of nodalization strategies for small research reactors (e.g., 250 kW) up to bigger research reactors (e.g., 30 MW). As such, this paper aims to discuss the state-of-the-art thermal-hydraulic channels to be employed in the nodalization of the RTP-TRIGA Research Reactor, specifically for the reactor core. At present, the required thermal-hydraulic parameters for the reactor core, such as core geometrical data (length, coolant flow area, hydraulic diameters, and axial power profile) and material properties (including the UZrH1.6, stainless steel clad, graphite reflector), have been collected, analyzed and consolidated in the Reference Database of RTP using a standardized methodology, mainly derived from the available technical documentation. Based on the available information in the database, assumptions made on the nodalization approach and calculations performed will be discussed and presented. The development and identification of the thermal hydraulics channel for the reactor core will be implemented during the SYS-TH calculation using the RELAP5-3D® computer code. The activity presented in this paper is part of the development of the overall nodalization description for the RTP-TRIGA Research Reactor under the IAEA Norwegian Extra-Budgetary Programme (NOKEBP) mentoring project on Expertise Development through the Analysis of Reactor Thermal-Hydraulics for Malaysia, denoted as EARTH-M.

  13. Scaling up Planetary Dynamo Modeling to Massively Parallel Computing Systems: The Rayleigh Code at ALCF

    NASA Astrophysics Data System (ADS)

    Featherstone, N. A.; Aurnou, J. M.; Yadav, R. K.; Heimpel, M. H.; Soderlund, K. M.; Matsui, H.; Stanley, S.; Brown, B. P.; Glatzmaier, G.; Olson, P.; Buffett, B. A.; Hwang, L.; Kellogg, L. H.

    2017-12-01

    In the past three years, CIG's Dynamo Working Group has successfully ported the Rayleigh code to the Argonne Leadership Computing Facility's Mira BG/Q device. In this poster, we present some of our first results, showing simulations of 1) convection in the solar convection zone; 2) dynamo action in Earth's core and 3) convection in the jovian deep atmosphere. These simulations have made efficient use of 131 thousand cores, 131 thousand cores and 232 thousand cores, respectively, on Mira. In addition to our novel results, the joys and logistical challenges of carrying out such large runs will also be discussed.

  14. Fast data reconstructed method of Fourier transform imaging spectrometer based on multi-core CPU

    NASA Astrophysics Data System (ADS)

    Yu, Chunchao; Du, Debiao; Xia, Zongze; Song, Li; Zheng, Weijian; Yan, Min; Lei, Zhenggang

    2017-10-01

    An imaging spectrometer can acquire a two-dimensional spatial image and a one-dimensional spectrum at the same time, which makes it highly useful for color and spectral measurements, true-color image synthesis, military reconnaissance and so on. In order to realize fast reconstruction of Fourier transform imaging spectrometer data, this paper presents an optimized reconstruction algorithm based on OpenMP parallel computing, which was further applied to the optimization process for the HyperSpectral Imager of the Chinese `HJ-1' satellite. The results show that the method based on multi-core parallel computing technology can manage the multi-core CPU hardware resources competently and significantly enhance the efficiency of spectrum reconstruction processing. If the technology is applied to workstations with more cores for parallel computing, it will be possible to complete real-time Fourier transform imaging spectrometer data processing with a single computer.
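
    The original work uses OpenMP; as a language-neutral illustration of the same idea (independent interferogram rows apodized and Fourier-transformed in parallel across cores), a small Python sketch with placeholder data follows.

        # Sketch: reconstruct spectra from interferogram rows in parallel across cores.
        import numpy as np
        from multiprocessing import Pool

        def reconstruct_row(row):
            window = np.hanning(row.size)                 # placeholder apodization
            return np.abs(np.fft.rfft(row * window))      # magnitude spectrum of one row

        if __name__ == "__main__":
            interferograms = np.random.default_rng(3).normal(size=(512, 2048))  # placeholder data
            with Pool() as pool:
                spectra = pool.map(reconstruct_row, list(interferograms))
            print("reconstructed", len(spectra), "spectra of length", spectra[0].size)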

  15. Experimental and computational studies on the femoral fracture risk for advanced core decompression.

    PubMed

    Tran, T N; Warwas, S; Haversath, M; Classen, T; Hohn, H P; Jäger, M; Kowalczyk, W; Landgraeber, S

    2014-04-01

    Two questions are often raised by orthopedists regarding the core decompression procedure: 1) Is the core decompression procedure associated with a considerable lack of structural support of the bone? and 2) Is there an optimal region for the surgical entrance point for which the fracture risk would be lowest? As bioresorbable bone substitutes become more and more common and core decompression has been described in combination with them, the current study takes this into account. A finite element model of a femur treated by core decompression with bone substitute was simulated and analyzed. In-vitro compression testing of femora was used to confirm the finite element results. The results showed that for core decompression with standard drilling in combination with artificial bone substitute refilling, daily activities (normal walking and walking downstairs) do not pose a risk of femoral fracture. The femoral fracture risk increased successively as the entrance point was located further distally. The critical value of the deviation of the entrance point to a more distal position is about 20 mm. The study findings demonstrate that the optimal entrance point should be located in the proximal subtrochanteric region in order to reduce the subtrochanteric fracture risk. Furthermore, the consistent results of the finite element and in-vitro testing imply that the simulations are sufficient. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Computational design of variants for cephalosporin C acylase from Pseudomonas strain N176 with improved stability and activity.

    PubMed

    Tian, Ye; Huang, Xiaoqiang; Li, Qing; Zhu, Yushan

    2017-01-01

    In this report, redesigning cephalosporin C acylase from the Pseudomonas strain N176 revealed that the loss of stability owing to the introduced mutations at the active site can be recovered by repacking the nearby hydrophobic core regions. Starting from a quadruple mutant M31βF/H57βS/V68βA/H70βS, whose decrease in stability is largely owing to the mutation V68βA at the active site, we employed a computational enzyme design strategy that integrated design both at hydrophobic core regions for stability enhancement and at the active site for activity improvement. Single-point mutations L154βF, Y167βF, L180βF and their combinations L154βF/L180βF and L154βF/Y167βF/L180βF were found to display improved stability and activity. The two-point mutant L154βF/L180βF increased the protein melting temperature (Tm) by 11.7 °C and the catalytic efficiency Vmax/Km by 57% compared with the values of the starting quadruple mutant. The catalytic efficiency of the resulting sixfold mutant M31βF/H57βS/V68βA/H70βS/L154βF/L180βF is recovered to become comparable to that of the triple mutant M31βF/H57βS/H70βS, but with a higher Tm. Further experiments showed that single-point mutations L154βF, L180βF, and their combination contribute no stability enhancement to the triple mutant M31βF/H57βS/H70βS. These results verify that the lost stability because of mutation V68βA at the active site was recovered by introducing mutations L154βF and L180βF at hydrophobic core regions. Importantly, mutation V68βA in the six-residue mutant provides more space to accommodate the bulky side chain of cephalosporin C, which could help in designing cephalosporin C acylase mutants with higher activities and the practical one-step enzymatic route to prepare 7-aminocephalosporanic acid at industrial-scale levels.

  17. DoE Early Career Research Program: Final Report: Model-Independent Dark-Matter Searches at the ATLAS Experiment and Applications of Many-core Computing to High Energy Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farbin, Amir

    2015-07-15

    This is the final report of for DoE Early Career Research Program Grant Titled "Model-Independent Dark-Matter Searches at the ATLAS Experiment and Applications of Many-core Computing to High Energy Physics".

  18. An FPGA computing demo core for space charge simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Jinyuan; Huang, Yifei; /Fermilab

    2009-01-01

    In accelerator physics, space charge simulation requires large amount of computing power. In a particle system, each calculation requires time/resource consuming operations such as multiplications, divisions, and square roots. Because of the flexibility of field programmable gate arrays (FPGAs), we implemented this task with efficient use of the available computing resources and completely eliminated non-calculating operations that are indispensable in regular micro-processors (e.g. instruction fetch, instruction decoding, etc.). We designed and tested a 16-bit demo core for computing Coulomb's force in an Altera Cyclone II FPGA device. To save resources, the inverse square-root cube operation in our design is computed using a memory look-up table addressed with nine to ten most significant non-zero bits. At 200 MHz internal clock, our demo core reaches a throughput of 200 M pairs/s/core, faster than a typical 2 GHz micro-processor by about a factor of 10. Temperature and power consumption of FPGAs were also lower than those of micro-processors. Fast and convenient, FPGAs can serve as alternatives to time-consuming micro-processors for space charge simulation.
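
    The table-lookup idea (obtain 1/r³ from a precomputed table addressed by the leading bits of r², instead of computing square roots and divisions per particle pair) can be emulated in ordinary software; the sketch below is an illustration of the approach, not the 16-bit fixed-point FPGA design.

        # Sketch: Coulomb force component using a lookup table for 1/r^3 addressed by r^2.
        import numpy as np

        R2_MIN, R2_MAX, TABLE_BITS = 1e-4, 1.0, 10
        table_r2 = np.linspace(R2_MIN, R2_MAX, 2 ** TABLE_BITS)
        table_inv_r3 = table_r2 ** -1.5                      # precomputed 1/r^3 values

        def inv_r3_lut(r2):
            idx = int((r2 - R2_MIN) / (R2_MAX - R2_MIN) * (table_r2.size - 1))
            return table_inv_r3[min(max(idx, 0), table_r2.size - 1)]

        dx, dy, dz, q1q2 = 0.3, 0.1, -0.2, 1.0
        r2 = dx * dx + dy * dy + dz * dz
        fx = q1q2 * dx * inv_r3_lut(r2)                      # F_x = q1*q2 * dx / r^3
        print("lookup:", fx, " exact:", q1q2 * dx * r2 ** -1.5)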

  19. The change of radial power factor distribution due to RCCA insertion at the first cycle core of AP1000

    NASA Astrophysics Data System (ADS)

    Susilo, J.; Suparlina, L.; Deswandri; Sunaryo, G. R.

    2018-02-01

    The use of computer programs for the analysis of PWR-type core neutronic design parameters has been carried out in several previous studies. These studies included validation of the computer codes against neutronic parameter values obtained from measurements and benchmark calculations. In this study, the AP1000 first-cycle core radial power peaking factor validation and analysis were performed using the CITATION module of the SRAC2006 computer code. The code has also been validated, with good results, against the criticality values of the VERA benchmark core. The AP1000 core power distribution calculation was done in two-dimensional X-Y geometry through ¼-section modeling. The purpose of this research is to determine the accuracy of the SRAC2006 code, and also the safety performance of the AP1000 core during its first operating cycle. The core calculations were carried out for several conditions: without Rod Cluster Control Assembly (RCCA), with insertion of a single RCCA (AO, M1, M2, MA, MB, MC, MD), and with multiple RCCA insertion (MA + MB, MA + MB + MC, MA + MB + MC + MD, and MA + MB + MC + MD + M1). The maximum fuel rod power factor in the fuel assembly was approximately 1.406. The analysis of the calculation results showed that the two-dimensional CITATION module of the SRAC2006 code is accurate for AP1000 power distribution calculations without RCCA and with MA + MB RCCA insertion. The power peaking factors for the first operating cycle of the AP1000 core without RCCA, as well as with single and multiple RCCA insertion, are still below the safety limit value (less than about 1.798). In terms of the thermal power generated by the fuel assemblies, the AP1000 core in its first operating cycle can therefore be considered safe.

  20. A systems approach to hemostasis: 3. Thrombus consolidation regulates intrathrombus solute transport and local thrombin activity

    PubMed Central

    Welsh, John D.; Tomaiuolo, Maurizio; Wu, Jie; Colace, Thomas V.; Diamond, Scott L.

    2014-01-01

    Hemostatic thrombi formed after a penetrating injury have a distinctive structure in which a core of highly activated, closely packed platelets is covered by a shell of less-activated, loosely packed platelets. We have shown that differences in intrathrombus molecular transport emerge in parallel with regional differences in platelet packing density and predicted that these differences affect thrombus growth and stability. Here we test that prediction in a mouse vascular injury model. The studies use a novel method for measuring thrombus contraction in vivo and a previously characterized mouse line with a defect in integrin αIIbβ3 outside-in signaling that affects clot retraction ex vivo. The results show that the mutant mice have a defect in thrombus consolidation following vascular injury, resulting in an increase in intrathrombus transport rates and, as predicted by computational modeling, a decrease in thrombin activity and platelet activation in the thrombus core. Collectively, these data (1) demonstrate that, in addition to the activation state of individual platelets, the physical properties of the accumulated mass of adherent platelets are critical in determining intrathrombus agonist distribution and platelet activation and (2) define a novel role for integrin signaling in the regulation of intrathrombus transport rates and localization of thrombin activity. PMID:24951426

  1. Substrate Tunnels in Enzymes: Structure-Function Relationships and Computational Methodology

    PubMed Central

    Kingsley, Laura J.; Lill, Markus A.

    2015-01-01

    In enzymes, the active site is the location where incoming substrates are chemically converted to products. In some enzymes, this site is deeply buried within the core of the protein and in order to access the active site, substrates must pass through the body of the protein via a tunnel. In many systems, these tunnels act as filters and have been found to influence both substrate specificity and catalytic mechanism. Identifying and understanding how these tunnels exert such control has been of growing interest over the past several years due to implications in fields such as protein engineering and drug design. This growing interest has spurred the development of several computational methods to identify and analyze tunnels and how ligands migrate through these tunnels. The goal of this review is to outline how tunnels influence substrate specificity and catalytic efficiency in enzymes with tunnels and to provide a brief summary of the computational tools used to identify and evaluate these tunnels. PMID:25663659

  2. Running climate model on a commercial cloud computing environment: A case study using Community Earth System Model (CESM) on Amazon AWS

    NASA Astrophysics Data System (ADS)

    Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock

    2017-01-01

    The suites of numerical models used for simulating climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach, i.e. carrying out climate model simulations on commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Service (AWS) EC2, the cloud computing environment by Amazon.com, Inc. StarCluster is used to create virtual computing cluster on the AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent for the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation can be efficiently scaled with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50% and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup becomes nearly unchanged.
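    The scaling figures reported above (a greater-than-50% reduction in wall-clock time from 16 to 64 cores, with nearly linear scaling) can be summarized with the usual speedup and parallel-efficiency formulas. The snippet below is a minimal illustration with hypothetical wall-clock times, not the study's measured values.

```python
def scaling_summary(t_base, t_scaled, cores_base, cores_scaled):
    """Return speedup and parallel efficiency between two core counts."""
    speedup = t_base / t_scaled
    efficiency = speedup / (cores_scaled / cores_base)
    return speedup, efficiency

# Hypothetical wall-clock hours per simulated year, for illustration only.
t16, t64 = 10.0, 2.8
s, e = scaling_summary(t16, t64, 16, 64)
print(f"16 -> 64 cores: speedup {s:.2f}x, parallel efficiency {e:.0%}")
# A speedup above 2x corresponds to the >50% wall-clock reduction described above.
```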

  3. Modeling Cardiac Electrophysiology at the Organ Level in the Peta FLOPS Computing Age

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Lawrence; Bishop, Martin; Hoetzl, Elena

    2010-09-30

    Despite a steep increase in available compute power, in-silico experimentation with highly detailed models of the heart remains challenging due to the high computational cost involved. It is hoped that next-generation high performance computing (HPC) resources will lead to significant reductions in execution times and enable a new class of in-silico applications. However, performance gains with these new platforms can only be achieved by engaging a much larger number of compute cores, necessitating strongly scalable numerical techniques. So far, strong scalability has been demonstrated only for a moderate number of cores, orders of magnitude below the range required to achieve the desired performance boost. In this study, the strong scalability of currently used techniques to solve the bidomain equations is investigated. Benchmark results suggest that scalability is limited to 512-4096 cores within the range of relevant problem sizes, even when systems are carefully load-balanced and advanced IO strategies are employed.

  4. Computational reconstruction and fluid dynamics of in vivo thrombi from the microcirculation

    NASA Astrophysics Data System (ADS)

    Mirramezani, Mehran; Tomaiuolo, Maurizio; Stalker, Timothy; Shadden, Shawn

    2016-11-01

    Blood flow and mass transfer can have significant effects on clot growth, composition and stability during the hemostatic response. We integrate in vivo data with CFD to better understand transport processes during clot formation. By utilizing electron microscopy, we reconstructed the 3D thrombus structure formed after a penetrating laser injury in a mouse cremaster muscle. Random jammed packing is used to reconstruct the microenvironment of the platelet aggregate, with platelets modeled as ellipsoids. In our 3D model, Stokes flow is simulated to obtain the velocity field in the explicitly meshed gaps between platelets and in the lumen surrounding the thrombus. Based on in vivo data, a clot is composed of a core of highly activated platelets covered by a shell of loosely adherent platelets. We studied the effects of clot size (thrombus growth), gap distribution (consolidation), and vessel blood flow rate on mean intrathrombus velocity. The results show that velocity is smaller in the core than in the shell, potentially enabling higher concentrations of agonists in the core and contributing to its activation. In addition, our results do not appear to be sensitive to the geometry of the platelets; rather, gap size plays a more important role in intrathrombus velocity and transport.

  5. Polytopol computing for multi-core and distributed systems

    NASA Astrophysics Data System (ADS)

    Spaanenburg, Henk; Spaanenburg, Lambert; Ranefors, Johan

    2009-05-01

    Multi-core computing provides new challenges to software engineering. The paper addresses such issues in the general setting of polytopol computing, which takes multi-core problems in such widely differing areas as ambient intelligence sensor networks and cloud computing into account. It argues that the essence lies in a suitable allocation of free-moving tasks. Where hardware is ubiquitous and pervasive, the network is virtualized into a connection of software snippets judiciously injected into the hardware so that a system function appears as a single whole again. The concept of polytopol computing provides a further formalization in terms of the partitioning of labor between collector and sensor nodes. Collectors provide functions such as a knowledge integrator, awareness collector, situation displayer/reporter, communicator of clues and an inquiry-interface provider. Sensors provide functions such as anomaly detection (only communicating singularities, not continuous observation); they are generally powered or self-powered, amorphous (not on a grid) with generation-and-attrition, field re-programmable, and sensor plug-and-play-able. Together the collector and the sensor are part of the skeleton injector mechanism, added to every node, which gives the network the ability to organize itself into some of many topologies. Finally we discuss a number of applications and indicate how a multi-core architecture supports the security aspects of the skeleton injector.

  6. 12 CFR 1265.3 - Core mission activities.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 7 2011-01-01 2011-01-01 false Core mission activities. 1265.3 Section 1265.3 Banks and Banking FEDERAL HOUSING FINANCE AGENCY FEDERAL HOME LOAN BANKS CORE MISSION ACTIVITIES § 1265.3 Core mission activities. The following Bank activities qualify as core mission activities: (a...

  7. 12 CFR 940.3 - Core mission activities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Core mission activities. 940.3 Section 940.3 Banks and Banking FEDERAL HOUSING FINANCE BOARD FEDERAL HOME LOAN BANK MISSION CORE MISSION ACTIVITIES § 940.3 Core mission activities. The following Bank activities qualify as core mission activities: (a...

  8. Cores Of Recurrent Events (CORE) | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    CORE is a statistically supported computational method for finding recurrently targeted regions in massive collections of genomic intervals, such as those arising from DNA copy number analysis of single tumor cells or bulk tumor tissues.

  9. Computer-Assisted Instruction for Teaching Academic Skills to Students with Autism Spectrum Disorders: A Review of Literature

    ERIC Educational Resources Information Center

    Pennington, Robert C.

    2010-01-01

    Although legislation mandates that students with autism receive instruction linked to the general education core content, there is limited research supporting the effectiveness of interventions for teaching core content to these students. In this study, the author reviewed research conducted between the years 1997 and 2008 using computer-assisted…

  10. Characterizing Facesheet/Core Disbonding in Honeycomb Core Sandwich Structure

    NASA Technical Reports Server (NTRS)

    Rinker, Martin; Ratcliffe, James G.; Adams, Daniel O.; Krueger, Ronald

    2013-01-01

    Results are presented from an experimental investigation into facesheet/core disbonding in carbon fiber reinforced plastic/Nomex honeycomb sandwich structures using a Single Cantilever Beam test. Specimens with three-, six- and twelve-ply facesheets were tested. Specimens with different honeycomb cores consisting of four different cell sizes were also tested, in addition to specimens with three different widths. Three different data reduction methods were employed for computing apparent fracture toughness values from the test data, namely an area method, a compliance calibration technique and a modified beam theory method. The compliance calibration and modified beam theory approaches yielded comparable apparent fracture toughness values, which were generally lower than those computed using the area method. Disbonding in the three-ply facesheet specimens took place at the facesheet/core interface and yielded the lowest apparent fracture toughness values. Disbonding in the six- and twelve-ply facesheet specimens took place within the core, near the facesheet/core interface. Specimen width was not found to have a significant effect on apparent fracture toughness. The amount of scatter in the apparent fracture toughness data was found to increase with honeycomb core cell size.
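    Of the three data-reduction methods mentioned, the area method is the simplest to state: the apparent fracture toughness is the strain energy released per unit of new disbond area, G = ΔU / (b·Δa), where ΔU is the energy enclosed by the load-displacement loop, b the specimen width, and Δa the disbond growth. The following is a minimal sketch under that assumption, with hypothetical load-displacement values rather than the paper's data.

```python
import numpy as np

def area_method_toughness(load, disp, width, delta_a):
    """Apparent fracture toughness from a load/unload cycle (area method).

    load, disp : arrays tracing the loading/unloading path (N, m)
    width      : specimen width b (m)
    delta_a    : disbond growth during the cycle (m)
    """
    # Energy enclosed by the load-displacement loop (trapezoidal path integral).
    delta_u = np.trapz(load, disp)
    return delta_u / (width * delta_a)

# Hypothetical single load/unload loop, for illustration only.
disp = np.array([0.0, 1.0e-3, 2.0e-3, 1.0e-3, 0.0])   # m
load = np.array([0.0, 40.0,   60.0,  25.0,   0.0])    # N
print(area_method_toughness(load, disp, width=25e-3, delta_a=5e-3), "J/m^2")
```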

  11. Simple Procedure to Compute the Inductance of a Toroidal Ferrite Core from the Linear to the Saturation Regions

    PubMed Central

    Salas, Rosa Ana; Pleite, Jorge

    2013-01-01

    We propose a specific procedure to compute the inductance of a toroidal ferrite core as a function of the excitation current. The study includes the linear, intermediate and saturation regions. The procedure combines the use of Finite Element Analysis in 2D and experimental measurements. Through the two dimensional (2D) procedure we are able to achieve convergence, a reduction of computational cost and equivalent results to those computed by three dimensional (3D) simulations. The validation is carried out by comparing 2D, 3D and experimental results. PMID:28809283
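    The quantity being extracted here, inductance as a function of excitation current, follows from the flux linkage computed by the field solver: L(i) = N·Φ(i)/i, with N the number of turns and Φ the flux through the core cross-section. The following is a minimal post-processing sketch under that assumption, using hypothetical simulation output rather than the paper's data; the saturation-like roll-off of the flux values is only illustrative.

```python
import numpy as np

def inductance_from_flux(current, flux, turns):
    """Apparent inductance L(i) = N * Phi(i) / i from field-solver flux results."""
    current = np.asarray(current, dtype=float)
    flux = np.asarray(flux, dtype=float)
    return turns * flux / current

# Hypothetical 2D FEA output: flux saturates at higher excitation currents.
i = np.array([0.01, 0.05, 0.1, 0.5, 1.0, 2.0])                      # A
phi = np.array([2.0e-6, 9.8e-6, 1.9e-5, 6.5e-5, 8.0e-5, 8.6e-5])    # Wb
for ii, ll in zip(i, inductance_from_flux(i, phi, turns=60)):
    print(f"i = {ii:5.2f} A  ->  L = {ll*1e3:6.2f} mH")              # linear region, then roll-off
```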

  12. Segregating the core computational faculty of human language from working memory.

    PubMed

    Makuuchi, Michiru; Bahlmann, Jörg; Anwander, Alfred; Friederici, Angela D

    2009-05-19

    In contrast to simple structures in animal vocal behavior, hierarchical structures such as center-embedded sentences manifest the core computational faculty of human language. Previous artificial grammar learning studies found that the left pars opercularis (LPO) subserves the processing of hierarchical structures. However, it is not clear whether this area is activated by the structural complexity per se or by the increased memory load entailed in processing hierarchical structures. To dissociate the effect of structural complexity from the effect of memory cost, we conducted a functional magnetic resonance imaging study of German sentence processing with a 2-way factorial design tapping structural complexity (with/without hierarchical structure, i.e., center-embedding of clauses) and working memory load (long/short distance between syntactically dependent elements; i.e., subject nouns and their respective verbs). Functional imaging data revealed that the processes for structure and memory operate separately but co-operatively in the left inferior frontal gyrus; activities in the LPO increased as a function of structural complexity, whereas activities in the left inferior frontal sulcus (LIFS) were modulated by the distance over which the syntactic information had to be transferred. Diffusion tensor imaging showed that these 2 regions were interconnected through white matter fibers. Moreover, functional coupling between the 2 regions was found to increase during the processing of complex, hierarchically structured sentences. These results suggest a neuroanatomical segregation of syntax-related aspects represented in the LPO from memory-related aspects reflected in the LIFS, which are, however, highly interconnected functionally and anatomically.

  13. Why we interact: on the functional role of the striatum in the subjective experience of social interaction.

    PubMed

    Pfeiffer, Ulrich J; Schilbach, Leonhard; Timmermans, Bert; Kuzmanovic, Bojana; Georgescu, Alexandra L; Bente, Gary; Vogeley, Kai

    2014-11-01

    There is ample evidence that human primates strive for social contact and experience interactions with conspecifics as intrinsically rewarding. Focusing on gaze behavior as a crucial means of human interaction, this study employed a unique combination of neuroimaging, eye-tracking, and computer-animated virtual agents to assess the neural mechanisms underlying this component of behavior. In the interaction task, participants believed that during each interaction the agent's gaze behavior could either be controlled by another participant or by a computer program. Their task was to indicate whether they experienced a given interaction as an interaction with another human participant or the computer program based on the agent's reaction. Unbeknownst to them, the agent was always controlled by a computer to enable a systematic manipulation of gaze reactions by varying the degree to which the agent engaged in joint attention. This allowed creating a tool to distinguish neural activity underlying the subjective experience of being engaged in social and non-social interaction. In contrast to previous research, this allows measuring neural activity while participants experience active engagement in real-time social interactions. Results demonstrate that gaze-based interactions with a perceived human partner are associated with activity in the ventral striatum, a core component of reward-related neurocircuitry. In contrast, interactions with a computer-driven agent activate attention networks. Comparisons of neural activity during interaction with behaviorally naïve and explicitly cooperative partners demonstrate different temporal dynamics of the reward system and indicate that the mere experience of engagement in social interaction is sufficient to recruit this system. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Real-time depth processing for embedded platforms

    NASA Astrophysics Data System (ADS)

    Rahnama, Oscar; Makarov, Aleksej; Torr, Philip

    2017-05-01

    Obtaining depth information of a scene is an important requirement in many computer-vision and robotics applications. For embedded platforms, passive stereo systems have many advantages over their active counterparts (i.e. LiDAR, Infrared). They are power efficient, cheap, robust to lighting conditions and inherently synchronized to the RGB images of the scene. However, stereo depth estimation is a computationally expensive task that operates over large amounts of data. For embedded applications which are often constrained by power consumption, obtaining accurate results in real-time is a challenge. We demonstrate a computationally and memory efficient implementation of a stereo block-matching algorithm in FPGA. The computational core achieves a throughput of 577 fps at standard VGA resolution whilst consuming less than 3 Watts of power. The data is processed using an in-stream approach that minimizes memory-access bottlenecks and best matches the raster scan readout of modern digital image sensors.
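    The core operation of a stereo block-matching algorithm, independent of the FPGA implementation details above, is a per-pixel search for the disparity that minimizes a block dissimilarity cost such as the sum of absolute differences (SAD). The following is a simplified Python sketch of that idea (a naive reference, not the paper's in-stream FPGA pipeline); image sizes and parameters are arbitrary.

```python
import numpy as np

def sad_disparity(left, right, max_disp=32, block=5):
    """Naive SAD block matching: disparity map from rectified grayscale images."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
            costs = [
                np.abs(patch - right[y - half:y + half + 1,
                                     x - d - half:x - d + half + 1].astype(np.int32)).sum()
                for d in range(max_disp)
            ]
            disp[y, x] = int(np.argmin(costs))     # disparity with the lowest block cost
    return disp

# Example on a synthetic pair (real use: a rectified camera pair).
rng = np.random.default_rng(0)
L_img = rng.integers(0, 256, (48, 64), dtype=np.uint8)
R_img = np.roll(L_img, -4, axis=1)                 # synthetic horizontal shift of 4 pixels
print(sad_disparity(L_img, R_img, max_disp=8)[24, 40])   # expect ~4
```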

  15. Towards energy-efficient photonic interconnects

    NASA Astrophysics Data System (ADS)

    Demir, Yigit; Hardavellas, Nikos

    2015-03-01

    Silicon photonics have emerged as a promising solution to meet the growing demand for high-bandwidth, low-latency, and energy-efficient on-chip and off-chip communication in many-core processors. However, current silicon-photonic interconnect designs for many-core processors waste a significant amount of power because (a) lasers are always on, even during periods of interconnect inactivity, and (b) microring resonators employ heaters which consume a significant amount of power just to overcome thermal variations and maintain communication on the photonic links, especially in a 3D-stacked design. The problem of high laser power consumption is particularly important as lasers typically have very low energy efficiency, and photonic interconnects often remain underutilized both in scientific computing (compute-intensive execution phases underutilize the interconnect), and in server computing (servers in Google-scale datacenters have a typical utilization of less than 30%). We address the high laser power consumption by proposing EcoLaser+, which is a laser control scheme that saves energy by predicting the interconnect activity and opportunistically turning the on-chip laser off when possible, and also by scaling the width of the communication link based on a runtime prediction of the expected message length. Our laser control scheme can save up to 62 - 92% of the laser energy, and improve the energy efficiency of a manycore processor with negligible performance penalty. We address the high trimming (heating) power consumption of the microrings by proposing insulation methods that reduce the impact of localized heating induced by highly-active components on the 3D-stacked logic die.

  16. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    NASA Technical Reports Server (NTRS)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.

  17. Results on the neutron energy distribution measurements at the RECH-1 Chilean nuclear reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilera, P., E-mail: paguilera87@gmail.com; Romero-Barrientos, J.; Universidad de Chile, Dpto. de Física, Facultad de Ciencias, Las Palmeras 3425, Nuñoa, Santiago

    2016-07-07

    Neutron activation experiments have been performed at the RECH-1 Chilean Nuclear Reactor to measure its neutron flux energy distribution. Samples of pure elements were activated to obtain the saturation activities for each reaction. Using gamma-ray spectroscopy, we identified and measured the activity of the reaction product nuclei, obtaining the saturation activities of 20 reactions. GEANT4 and MCNP were used to compute the self-shielding factors that correct the cross section for each element. With the Expectation-Maximization (EM) algorithm we were able to unfold the neutron flux energy distribution at the dry tube position, near the RECH-1 core. In this work, we present the unfolding results obtained with the EM algorithm.
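    Expectation-Maximization unfolding in this setting is commonly written as the iterative MLEM update: given measured saturation activities A_i and a response matrix R_ij (response of reaction i to energy bin j), the flux estimate is refined as φ_j ← φ_j · Σ_i [R_ij A_i / (Rφ)_i] / Σ_i R_ij. The sketch below illustrates that generic update with a made-up response matrix; it is an assumption about the standard algorithm, not the RECH-1 data or the authors' implementation.

```python
import numpy as np

def em_unfold(activities, response, n_iter=200):
    """MLEM-style unfolding of a flux spectrum from saturation activities.

    activities : measured values A_i (one per reaction)
    response   : matrix R[i, j] ~ response of reaction i to energy bin j
    """
    n_bins = response.shape[1]
    flux = np.ones(n_bins)                       # flat starting spectrum
    norm = response.sum(axis=0)                  # sum_i R_ij
    for _ in range(n_iter):
        predicted = response @ flux              # (R phi)_i
        ratio = activities / predicted
        flux *= (response.T @ ratio) / norm      # multiplicative EM update
    return flux

# Tiny synthetic example: 4 reactions, 3 energy bins (noiseless).
R = np.array([[1.0, 0.4, 0.1],
              [0.3, 1.0, 0.5],
              [0.1, 0.6, 1.0],
              [0.0, 0.2, 0.9]])
true_flux = np.array([2.0, 1.0, 0.3])
A = R @ true_flux
print(em_unfold(A, R))   # converges toward a spectrum consistent with A (close to true_flux)
```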

  18. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    NASA Astrophysics Data System (ADS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost- efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  19. Application of Intel Many Integrated Core (MIC) architecture to the Yonsei University planetary boundary layer scheme in Weather Research and Forecasting model

    NASA Astrophysics Data System (ADS)

    Huang, Melin; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Weather Research and Forecasting (WRF) model provides operational services worldwide in many areas and is linked to our daily activities, in particular during severe weather events. The Yonsei University (YSU) scheme is one of the planetary boundary layer (PBL) schemes in WRF. The PBL scheme is responsible for vertical sub-grid-scale fluxes due to eddy transports in the whole atmospheric column, determines the flux profiles within the well-mixed boundary layer and the stable layer, and thus provides atmospheric tendencies of temperature, moisture (including clouds), and horizontal momentum in the entire atmospheric column. The YSU scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. To accelerate the computation of the YSU scheme, we employ the Intel Many Integrated Core (MIC) architecture, a many-core processor architecture whose merits are efficient parallelization and vectorization. Our results show that the MIC-based optimization improved the performance of the first version of the multi-threaded code on the Xeon Phi 5110P by a factor of 2.4x. Furthermore, the same CPU-based optimizations improved the performance on an Intel Xeon E5-2603 by a factor of 1.6x compared to the first version of the multi-threaded code.

  20. A New Dynamical Core Based on the Prediction of the Curl of the Horizontal Vorticity

    NASA Astrophysics Data System (ADS)

    Konor, C. S.; Randall, D. A.; Heikes, R. P.

    2015-12-01

    The Vector-Vorticity Dynamical core (VVM) developed by Jung and Arakawa (2008) has important advantages for use with the anelastic and unified systems of equations. The VVM predicts the horizontal vorticity vector (HVV) at each interface and the vertical vorticity at the top layer of the model. To guarantee that the three-dimensional vorticity is nondivergent, the vertical vorticity at the interior layers is diagnosed from the horizontal divergence of the HVV through a vertical integral from the top downward. To our knowledge, this is the only dynamical core that guarantees the nondivergence of the three-dimensional vorticity. The VVM uses a C-type horizontal grid, which admits a computational mode. While the computational mode does not seem to be serious in Cartesian grid applications, it may be serious in icosahedral grid applications because of the extra degree of freedom in such grids. Although there are special filters to minimize the effects of this computational mode, we prefer to eliminate it altogether. We have developed a new dynamical core, which uses a Z-grid to avoid the computational mode mentioned above. The dynamical core predicts the curl of the HVV and diagnoses the horizontal divergence of the HVV from the predicted vertical vorticity. The three-dimensional vorticity is guaranteed to be nondivergent as in the VVM. In this presentation, we will introduce the new dynamical core and show results obtained by using Cartesian and hexagonal grids. We will also compare the solutions to those obtained by the VVM.

  1. Computer Science (CS) Education in Indian Schools: Situation Analysis Using Darmstadt Model

    ERIC Educational Resources Information Center

    Raman, Raghu; Venkatasubramanian, Smrithi; Achuthan, Krishnashree; Nedungadi, Prema

    2015-01-01

    Computer science (CS) and its enabling technologies are at the heart of this information age, yet its adoption as a core subject by senior secondary students in Indian schools is low and has not reached critical mass. Though there have been efforts to create core curriculum standards for subjects like Physics, Chemistry, Biology, and Math, CS…

  2. Autoblocker: a system for detecting and blocking of network scanning based on analysis of netflow data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bobyshev, A.; Lamore, D.; Demar, P.

    2004-12-01

    In a large campus network, such as at Fermilab, with tens of thousands of nodes, scanning initiated from either outside of or within the campus network raises security concerns. This scanning may have a very serious impact on network performance, and can even disrupt the normal operation of many services. In this paper we introduce a system for detecting and automatically blocking excessive traffic from different kinds of scanning, DoS attacks, and virus-infected computers. The system, called AutoBlocker, is a distributed computing system based on quasi-real-time analysis of network flow data collected from the border router and core switches. AutoBlocker also has an interface to accept alerts from IDS systems (e.g., BRO, SNORT) that are based on other technologies. The system has multiple configurable alert levels for the detection of anomalous behavior and configurable trigger criteria for automated blocking of scans at the core or border routers. It has been in use at Fermilab for about 2 years, and has become a very valuable tool to curtail scan activity within the Fermilab campus network.

  3. Tracking the NGS revolution: managing life science research on shared high-performance computing clusters.

    PubMed

    Dahlö, Martin; Scofield, Douglas G; Schaal, Wesley; Spjuth, Ola

    2018-05-01

    Next-generation sequencing (NGS) has transformed the life sciences, and many research groups are newly dependent upon computer clusters to store and analyze large datasets. This creates challenges for e-infrastructures accustomed to hosting computationally mature research in other sciences. Using data gathered from our own clusters at UPPMAX computing center at Uppsala University, Sweden, where core hour usage of ∼800 NGS and ∼200 non-NGS projects is now similar, we compare and contrast the growth, administrative burden, and cluster usage of NGS projects with projects from other sciences. The number of NGS projects has grown rapidly since 2010, with growth driven by entry of new research groups. Storage used by NGS projects has grown more rapidly since 2013 and is now limited by disk capacity. NGS users submit nearly twice as many support tickets per user, and 11 more tools are installed each month for NGS projects than for non-NGS projects. We developed usage and efficiency metrics and show that computing jobs for NGS projects use more RAM than non-NGS projects, are more variable in core usage, and rarely span multiple nodes. NGS jobs use booked resources less efficiently for a variety of reasons. Active monitoring can improve this somewhat. Hosting NGS projects imposes a large administrative burden at UPPMAX due to large numbers of inexperienced users and diverse and rapidly evolving research areas. We provide a set of recommendations for e-infrastructures that host NGS research projects. We provide anonymized versions of our storage, job, and efficiency databases.

  4. Tracking the NGS revolution: managing life science research on shared high-performance computing clusters

    PubMed Central

    2018-01-01

    Abstract Background Next-generation sequencing (NGS) has transformed the life sciences, and many research groups are newly dependent upon computer clusters to store and analyze large datasets. This creates challenges for e-infrastructures accustomed to hosting computationally mature research in other sciences. Using data gathered from our own clusters at UPPMAX computing center at Uppsala University, Sweden, where core hour usage of ∼800 NGS and ∼200 non-NGS projects is now similar, we compare and contrast the growth, administrative burden, and cluster usage of NGS projects with projects from other sciences. Results The number of NGS projects has grown rapidly since 2010, with growth driven by entry of new research groups. Storage used by NGS projects has grown more rapidly since 2013 and is now limited by disk capacity. NGS users submit nearly twice as many support tickets per user, and 11 more tools are installed each month for NGS projects than for non-NGS projects. We developed usage and efficiency metrics and show that computing jobs for NGS projects use more RAM than non-NGS projects, are more variable in core usage, and rarely span multiple nodes. NGS jobs use booked resources less efficiently for a variety of reasons. Active monitoring can improve this somewhat. Conclusions Hosting NGS projects imposes a large administrative burden at UPPMAX due to large numbers of inexperienced users and diverse and rapidly evolving research areas. We provide a set of recommendations for e-infrastructures that host NGS research projects. We provide anonymized versions of our storage, job, and efficiency databases. PMID:29659792

  5. Thermal Hydraulics Design and Analysis Methodology for a Solid-Core Nuclear Thermal Rocket Engine Thrust Chamber

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Canabal, Francisco; Chen, Yen-Sen; Cheng, Gary; Ito, Yasushi

    2013-01-01

    Nuclear thermal propulsion is a leading candidate for in-space propulsion for human Mars missions. This chapter describes a thermal hydraulics design and analysis methodology developed at the NASA Marshall Space Flight Center in support of the nuclear thermal propulsion development effort. The objective of this campaign is to bridge the design methods of the Rover/NERVA era with a modern computational fluid dynamics and heat transfer methodology, to predict the thermal, fluid, and hydrogen environments of a hypothetical solid-core nuclear thermal engine, the Small Engine, designed in the 1960s. The computational methodology is based on an unstructured-grid, pressure-based, all-speeds, chemically reacting, computational fluid dynamics and heat transfer platform, while formulations of flow and heat transfer through porous and solid media were implemented to describe those of the hydrogen flow channels inside the solid core. Design analyses of a single flow element and the entire solid-core thrust chamber of the Small Engine were performed and the results are presented herein.

  6. Application Performance Analysis and Efficient Execution on Systems with multi-core CPUs, GPUs and MICs: A Case Study with Microscopy Image Analysis

    PubMed Central

    Teodoro, George; Kurc, Tahsin; Andrade, Guilherme; Kong, Jun; Ferreira, Renato; Saltz, Joel

    2015-01-01

    We carry out a comparative performance study of multi-core CPUs, GPUs and Intel Xeon Phi (Many Integrated Core-MIC) with a microscopy image analysis application. We experimentally evaluate the performance of computing devices on core operations of the application. We correlate the observed performance with the characteristics of computing devices and data access patterns, computation complexities, and parallelization forms of the operations. The results show a significant variability in the performance of operations with respect to the device used. The performances of operations with regular data access are comparable or sometimes better on a MIC than that on a GPU. GPUs are more efficient than MICs for operations that access data irregularly, because of the lower bandwidth of the MIC for random data accesses. We propose new performance-aware scheduling strategies that consider variabilities in operation speedups. Our scheduling strategies significantly improve application performance compared to classic strategies in hybrid configurations. PMID:28239253
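    A performance-aware scheduler of the kind described can be sketched as a greedy assignment that weights each operation's expected speedup on each device type. The toy model below is illustrative only; the operation names, speedup values, and costs are hypothetical, not measurements from the paper.

```python
# Toy performance-aware scheduler: assign each operation to the device that
# finishes it earliest, given per-device speedups relative to a CPU core.
speedup = {                      # hypothetical expected speedups per device
    "segmentation":    {"cpu": 1.0, "gpu": 6.0, "mic": 5.5},   # regular data access
    "feature_compute": {"cpu": 1.0, "gpu": 7.0, "mic": 2.0},   # irregular data access
    "color_normalize": {"cpu": 1.0, "gpu": 4.0, "mic": 4.2},
}
cpu_cost = {"segmentation": 12.0, "feature_compute": 8.0, "color_normalize": 5.0}

ready_time = {"cpu": 0.0, "gpu": 0.0, "mic": 0.0}   # when each device becomes free
plan = []
for op, base in sorted(cpu_cost.items(), key=lambda kv: -kv[1]):  # longest first
    # pick the device with the earliest finish time for this operation
    dev = min(ready_time, key=lambda d: ready_time[d] + base / speedup[op][d])
    ready_time[dev] += base / speedup[op][dev]
    plan.append((op, dev))
print(plan, ready_time)
```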

  7. Development of an extensible dual-core wireless sensing node for cyber-physical systems

    NASA Astrophysics Data System (ADS)

    Kane, Michael; Zhu, Dapeng; Hirose, Mitsuhito; Dong, Xinjun; Winter, Benjamin; Häckell, Mortiz; Lynch, Jerome P.; Wang, Yang; Swartz, A.

    2014-04-01

    The introduction of wireless telemetry into the design of monitoring and control systems has been shown to reduce system costs while simplifying installations. To date, wireless nodes proposed for sensing and actuation in cyberphysical systems have been designed using microcontrollers with one computational pipeline (i.e., single-core microcontrollers). While concurrent code execution can be implemented on single-core microcontrollers, concurrency is emulated by splitting the pipeline's resources to support multiple threads of code execution. For many applications, this approach to multi-threading is acceptable in terms of speed and function. However, some applications such as feedback controls demand deterministic timing of code execution and maximum computational throughput. For these applications, the adoption of multi-core processor architectures represents one effective solution. Multi-core microcontrollers have multiple computational pipelines that can execute embedded code in parallel and can be interrupted independent of one another. In this study, a new wireless platform named Martlet is introduced with a dual-core microcontroller adopted in its design. The dual-core microcontroller design allows Martlet to dedicate one core to standard wireless sensor operations while the other core is reserved for embedded data processing and real-time feedback control law execution. Another distinct feature of Martlet is a standardized hardware interface that allows specialized daughter boards (termed wing boards) to be interfaced to the Martlet baseboard. This extensibility opens opportunity to encapsulate specialized sensing and actuation functions in a wing board without altering the design of Martlet. In addition to describing the design of Martlet, a few example wings are detailed, along with experiments showing the Martlet's ability to monitor and control physical systems such as wind turbines and buildings.

  8. Partnership For Edge Physics Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parashar, Manish

    In this effort, we will extend our prior work as part of CPES (i.e., DART and DataSpaces) to support in-situ tight coupling between application codes that exploits data locality and core-level parallelism to maximize on-chip data exchange and reuse. This will be accomplished by mapping coupled simulations so that the data exchanges are more localized within the nodes. Coupled simulation workflows can more effectively utilize the resources available on emerging HEC platforms if they can be mapped and executed to exploit data locality as well as the communication patterns between application components. Scheduling and running such workflows requires an extended framework that should provide a unified hybrid abstraction to enable coordination and data sharing across computation tasks that run on heterogeneous multi-core-based systems, and a data-locality-based dynamic task scheduling approach to increase on-chip or intra-node data exchanges and in-situ execution. This effort will extend our prior work as part of CPES (i.e., DART and DataSpaces), which provided a simple virtual shared-space abstraction hosted at the staging nodes, to support application coordination, data sharing and active data processing services. Moreover, it will transparently manage the low-level operations associated with the inter-application data exchange, such as data redistributions, and will enable running coupled simulation workflows on multi-core computing platforms.

  9. Preparing for Exascale: Towards convection-permitting, global atmospheric simulations with the Model for Prediction Across Scales (MPAS)

    NASA Astrophysics Data System (ADS)

    Heinzeller, Dominikus; Duda, Michael G.; Kunstmann, Harald

    2017-04-01

    With strong financial and political support from national and international initiatives, exascale computing is projected for the end of this decade. Energy requirements and physical limitations imply the use of accelerators and scaling out to orders-of-magnitude larger numbers of cores than today to achieve this milestone. In order to fully exploit the capabilities of these exascale computing systems, existing applications need to undergo significant development. The Model for Prediction Across Scales (MPAS) is a novel set of Earth system simulation components and consists of an atmospheric core, an ocean core, a land-ice core and a sea-ice core. Its distinct features are the use of unstructured Voronoi meshes and C-grid discretisation to address shortcomings of global models on regular grids and of limited-area models nested in a forcing data set, with respect to parallel scalability, numerical accuracy and physical consistency. Here, we present work towards the application of the atmospheric core (MPAS-A) on current and future high performance computing systems for problems at extreme scale. In particular, we address the issue of massively parallel I/O by extending the model to support the highly scalable SIONlib library. Using global uniform meshes with a convection-permitting resolution of 2-3 km, we demonstrate the ability of MPAS-A to scale out to half a million cores while maintaining a high parallel efficiency. We also demonstrate the potential benefit of a hybrid parallelisation of the code (MPI/OpenMP) on the latest generation of Intel's Many Integrated Core Architecture, the Intel Xeon Phi Knights Landing.

  10. CoreTSAR: Core Task-Size Adapting Runtime

    DOE PAGES

    Scogland, Thomas R. W.; Feng, Wu-chun; Rountree, Barry; ...

    2014-10-27

    Heterogeneity continues to increase at all levels of computing, with the rise of accelerators such as GPUs, FPGAs, and other co-processors into everything from desktops to supercomputers. As a consequence, efficiently managing such disparate resources has become increasingly complex. CoreTSAR seeks to reduce this complexity by adaptively worksharing parallel-loop regions across compute resources without requiring any transformation of the code within the loop. Our results show performance improvements of up to three-fold over a current state-of-the-art heterogeneous task scheduler, as well as linear performance scaling from a single GPU to four GPUs for many codes. In addition, CoreTSAR demonstrates a robust ability to adapt to both a variety of workloads and underlying system configurations.

  11. Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi

    DOE PAGES

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; ...

    2015-05-22

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  12. Functionally dissociable influences on learning rate in a dynamic environment

    PubMed Central

    McGuire, Joseph T.; Nassar, Matthew R.; Gold, Joshua I.; Kable, Joseph W.

    2015-01-01

    Summary Maintaining accurate beliefs in a changing environment requires dynamically adapting the rate at which one learns from new experiences. Beliefs should be stable in the face of noisy data, but malleable in periods of change or uncertainty. Here we used computational modeling, psychophysics and fMRI to show that adaptive learning is not a unitary phenomenon in the brain. Rather, it can be decomposed into three computationally and neuroanatomically distinct factors that were evident in human subjects performing a spatial-prediction task: (1) surprise-driven belief updating, related to BOLD activity in visual cortex; (2) uncertainty-driven belief updating, related to anterior prefrontal and parietal activity; and (3) reward-driven belief updating, a context-inappropriate behavioral tendency related to activity in ventral striatum. These distinct factors converged in a core system governing adaptive learning. This system, which included dorsomedial frontal cortex, responded to all three factors and predicted belief updating both across trials and across individuals. PMID:25459409
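    The notion of belief updating with a dynamically adapted learning rate can be written as a delta rule, B(t+1) = B(t) + α(t)·(x(t) − B(t)), in which the learning rate α(t) rises with surprise or uncertainty. The sketch below is a generic illustration of that idea on synthetic change-point data; it is not the authors' fitted model, and the surprise index and parameters are hypothetical.

```python
import numpy as np

def adaptive_delta_rule(observations, base_lr=0.1, surprise_gain=0.8, noise_sd=10.0):
    """Track a drifting quantity with a learning rate that grows with surprise."""
    belief = observations[0]
    beliefs, lrs = [belief], [base_lr]
    for x in observations[1:]:
        error = x - belief
        surprise = min(abs(error) / (3.0 * noise_sd), 1.0)   # crude 0..1 surprise index
        lr = base_lr + surprise_gain * surprise              # higher rate after change points
        belief += lr * error                                 # delta-rule update
        beliefs.append(belief)
        lrs.append(lr)
    return np.array(beliefs), np.array(lrs)

# Synthetic change-point data: the generative mean jumps from 100 to 180 halfway through.
rng = np.random.default_rng(1)
obs = np.concatenate([rng.normal(100, 10, 50), rng.normal(180, 10, 50)])
beliefs, lr = adaptive_delta_rule(obs)
print(f"mean learning rate before jump ~{lr[:50].mean():.2f}, right after jump {lr[50]:.2f}")
```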

  13. Genten: Software for Generalized Tensor Decompositions v. 1.0.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phipps, Eric T.; Kolda, Tamara G.; Dunlavy, Daniel

    Tensors, or multidimensional arrays, are a powerful mathematical means of describing multiway data. This software provides computational means for decomposing or approximating a given tensor in terms of smaller tensors of lower dimension, focusing on decomposition of large, sparse tensors. These techniques have applications in many scientific areas, including signal processing, linear algebra, computer vision, numerical analysis, data mining, graph analysis, neuroscience and more. The software is designed to take advantage of the parallelism present in emerging computer architectures such as multi-core CPUs, many-core accelerators such as the Intel Xeon Phi, and computation-oriented GPUs to enable efficient processing of large tensors.

  14. A Fault Oblivious Extreme-Scale Execution Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKie, Jim

    The FOX project, funded under the ASCR X-stack I program, developed systems software and runtime libraries for a new approach to the data and work distribution for massively parallel, fault oblivious application execution. Our work was motivated by the premise that exascale computing systems will provide a thousand-fold increase in parallelism and a proportional increase in failure rate relative to today’s machines. To deliver the capability of exascale hardware, the systems software must provide the infrastructure to support existing applications while simultaneously enabling efficient execution of new programming models that naturally express dynamic, adaptive, irregular computation; coupled simulations; and massive data analysis in a highly unreliable hardware environment with billions of threads of execution. Our OS research has prototyped new methods to provide efficient resource sharing, synchronization, and protection in a many-core compute node. We have experimented with alternative task/dataflow programming models and shown scalability in some cases to hundreds of thousands of cores. Much of our software is in active development through open source projects. Concepts from FOX are being pursued in next generation exascale operating systems. Our OS work focused on adaptive, application-tailored OS services optimized for multi → many core processors. We developed a new operating system NIX that supports role-based allocation of cores to processes, which was released to open source. We contributed to the IBM FusedOS project, which promoted the concept of latency-optimized and throughput-optimized cores. We built a task queue library based on a distributed, fault tolerant key-value store and identified scaling issues. A second fault tolerant task parallel library was developed, based on the Linda tuple space model, that used low-level interconnect primitives for optimized communication. We designed fault tolerance mechanisms for task parallel computations employing work stealing for load balancing that scaled to the largest existing supercomputers. Finally, we implemented the Elastic Building Blocks runtime, a library to manage object-oriented distributed software components. To support the research, we won two INCITE awards for time on Intrepid (BG/P) and Mira (BG/Q). Much of our work has had impact in the OS and runtime community through the ASCR Exascale OS/R workshop and report, leading to the research agenda of the Exascale OS/R program. Our project was, however, also affected by attrition of multiple PIs. While the PIs continued to participate and offer guidance as time permitted, losing these key individuals was unfortunate both for the project and for the DOE HPC community.

  15. Development INTERDATA 8/32 computer system

    NASA Technical Reports Server (NTRS)

    Sonett, C. P.

    1983-01-01

    The capabilities of the Interdata 8/32 minicomputer were examined regarding data and word processing, editing, retrieval, and budgeting as well as data management demands of the user groups in the network. Based on four projected needs: (1) a hands on (open shop) computer for data analysis with large core and disc capability; (2) the expected requirements of the NASA data networks; (3) the need for intermittent large core capacity for theoretical modeling; (4) the ability to access data rapidly either directly from tape or from core onto hard copy, the system proved useful and adequate for the planned requirements.

  16. Stability and Scalability of the CMS Global Pool: Pushing HTCondor and GlideinWMS to New Limits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balcas, J.; Bockelman, B.; Hufnagel, D.

    The CMS Global Pool, based on HTCondor and glideinWMS, is the main computing resource provisioning system for all CMS workflows, including analysis, Monte Carlo production, and detector data reprocessing activities. The total resources at Tier-1 and Tier-2 grid sites pledged to CMS exceed 100,000 CPU cores, while another 50,000 to 100,000 CPU cores are available opportunistically, pushing the needs of the Global Pool to higher scales each year. These resources are becoming more diverse in their accessibility and configuration over time. Furthermore, the challenge of stably running at higher and higher scales while introducing new modes of operation such as multi-core pilots, as well as the chaotic nature of physics analysis workflows, places huge strains on the submission infrastructure. This paper details some of the most important challenges to scalability and stability that the CMS Global Pool has faced since the beginning of the LHC Run II and how they were overcome.

  17. Stability and scalability of the CMS Global Pool: Pushing HTCondor and glideinWMS to new limits

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Bockelman, B.; Hufnagel, D.; Hurtado Anampa, K.; Aftab Khan, F.; Larson, K.; Letts, J.; Marra da Silva, J.; Mascheroni, M.; Mason, D.; Perez-Calero Yzquierdo, A.; Tiradani, A.

    2017-10-01

    The CMS Global Pool, based on HTCondor and glideinWMS, is the main computing resource provisioning system for all CMS workflows, including analysis, Monte Carlo production, and detector data reprocessing activities. The total resources at Tier-1 and Tier-2 grid sites pledged to CMS exceed 100,000 CPU cores, while another 50,000 to 100,000 CPU cores are available opportunistically, pushing the needs of the Global Pool to higher scales each year. These resources are becoming more diverse in their accessibility and configuration over time. Furthermore, the challenge of stably running at higher and higher scales while introducing new modes of operation such as multi-core pilots, as well as the chaotic nature of physics analysis workflows, places huge strains on the submission infrastructure. This paper details some of the most important challenges to scalability and stability that the CMS Global Pool has faced since the beginning of the LHC Run II and how they were overcome.

  18. Parallelization of combinatorial search when solving knapsack optimization problem on computing systems based on multicore processors

    NASA Astrophysics Data System (ADS)

    Rahman, P. A.

    2018-05-01

    This scientific paper deals with the model of the knapsack optimization problem and a method for solving it based on directed combinatorial search in the boolean space. The author's specialized mathematical model for decomposing the search-zone into separate search-spheres, and the algorithm for distributing the search-spheres to the different cores of a multi-core processor, are also discussed. The paper also provides an example of decomposing the search-zone into several search-spheres and distributing them to the different cores of a quad-core processor. Finally, a formula offered by the author for estimating the theoretical maximum of the computational acceleration that can be achieved by parallelizing the search-zone into search-spheres over an unlimited number of processor cores is also given.
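    The decomposition described, splitting the boolean search space into independent search-spheres handed to different cores, can be illustrated by fixing the first k item bits of each candidate vector: each of the 2^k prefixes defines one sphere that a worker searches exhaustively. The toy Python sketch below uses brute force and hypothetical weights, values and capacity; it illustrates the decomposition only, not the author's directed-search algorithm.

```python
from itertools import product
from multiprocessing import Pool

WEIGHTS = [4, 7, 3, 9, 5, 6, 2, 8]   # hypothetical instance
VALUES  = [5, 9, 4, 11, 6, 7, 2, 10]
CAPACITY = 20
PREFIX_BITS = 2                      # 2^2 = 4 search-spheres, e.g. one per core

def search_sphere(prefix):
    """Exhaustively search all item vectors starting with the given fixed prefix."""
    best_value, best_x = -1, None
    n_free = len(WEIGHTS) - len(prefix)
    for tail in product((0, 1), repeat=n_free):
        x = prefix + tail
        w = sum(wi for wi, xi in zip(WEIGHTS, x) if xi)
        if w <= CAPACITY:
            v = sum(vi for vi, xi in zip(VALUES, x) if xi)
            if v > best_value:
                best_value, best_x = v, x
    return best_value, best_x

if __name__ == "__main__":
    prefixes = list(product((0, 1), repeat=PREFIX_BITS))
    with Pool(processes=len(prefixes)) as pool:        # one sphere per worker process
        results = pool.map(search_sphere, prefixes)
    print(max(results))                                # best value and item vector overall
```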

  19. Electronic Structure Calculations and Adaptation Scheme in Multi-core Computing Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seshagiri, Lakshminarasimhan; Sosonkina, Masha; Zhang, Zhao

    2009-05-20

    Multi-core processing environments have become the norm in generic computing and are being considered for adding an extra dimension to the execution of any application. The T2 Niagara processor is a unique environment, consisting of eight cores, each capable of running eight threads simultaneously. Applications like the General Atomic and Molecular Electronic Structure System (GAMESS), used for ab-initio molecular quantum chemistry calculations, can be good indicators of the performance of such machines and can serve as a guideline for both hardware designers and application programmers. In this paper we benchmark the GAMESS performance on a T2 Niagara processor for a couple of molecules. We also show the suitability of using a middleware-based adaptation algorithm with GAMESS in such a multi-core environment.

  20. An Assessment of Magnetic Conditions for Strong Coronal Heating in Solar Active Regions by Comparing Observed Loops with Computed Potential Field Lines

    NASA Technical Reports Server (NTRS)

    Gary, G. A.; Moore, R. L.; Porter, J. G.; Falconer, D. A.

    1999-01-01

    We report further results on the magnetic origins of coronal heating found from registering coronal images with photospheric vector magnetograms. For two complementary active regions, we use computed potential field lines to examine the global non-potentiality of bright extended coronal loops and the three-dimensional structure of the magnetic field at their feet, and assess the role of these magnetic conditions in the strong coronal heating in these loops. The two active regions are complementary, in that one is globally potential and the other is globally nonpotential, while each is predominantly bipolar, and each has an island of included polarity in its trailing polarity domain. We find the following: (1) The brightest main-arch loops of the globally potential active region are brighter than the brightest main- arch loops of the globally strongly nonpotential active region. (2) In each active region, only a few of the mainarch magnetic loops are strongly heated, and these are all rooted near the island. (3) The end of each main-arch bright loop apparently bifurcates above the island, so that it embraces the island and the magnetic null above the island. (4) At any one time, there are other main-arch magnetic loops that embrace the island in the same manner as do the bright loops but that are not selected for strong coronal heating. (5) There is continual microflaring in sheared core fields around the island, but the main-arch bright loops show little response to these microflares. From these observational and modeling results we draw the following conclusions: (1) The heating of the main-arch bright loops arises mainly from conditions at the island end of these loops and not from their global non-potentiality. (2) There is, at most, only a loose coupling between the coronal heating in the bright loops of the main arch and the coronal heating in the sheared core fields at their feet, although in both the heating is driven by conditions/events in and around the island. (3) The main-arch bright loops are likely to be heated via reconnection driven at the magnetic null over the island. The details of how and where (along the null line) the reconnection is driven determine which of the split-end loops are selected for strong heating. (4) The null does not appear to be directly involved in the heating of the sheared core fields or in the heating of an extended loop rooted in the island. Rather, these all appear to be heated by microflares in the sheared core field.

  1. A map of protein dynamics during cell-cycle progression and cell-cycle exit

    PubMed Central

    Gookin, Sara; Min, Mingwei; Phadke, Harsha; Chung, Mingyu; Moser, Justin; Miller, Iain; Carter, Dylan

    2017-01-01

    The cell-cycle field has identified the core regulators that drive the cell cycle, but we do not have a clear map of the dynamics of these regulators during cell-cycle progression versus cell-cycle exit. Here we use single-cell time-lapse microscopy of Cyclin-Dependent Kinase 2 (CDK2) activity followed by endpoint immunofluorescence and computational cell synchronization to determine the temporal dynamics of key cell-cycle proteins in asynchronously cycling human cells. We identify several unexpected patterns for core cell-cycle proteins in actively proliferating (CDK2-increasing) versus spontaneously quiescent (CDK2-low) cells, including Cyclin D1, the levels of which we find to be higher in spontaneously quiescent versus proliferating cells. We also identify proteins with concentrations that steadily increase or decrease the longer cells are in quiescence, suggesting the existence of a continuum of quiescence depths. Our single-cell measurements thus provide a rich resource for the field by characterizing protein dynamics during proliferation versus quiescence. PMID:28892491

  2. Composite Cores

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Spang & Company's new configuration of converter transformer cores is a composite of gapped and ungapped cores assembled together in concentric relationship. The net effect of the composite design is to combine the protection from saturation offered by the gapped core with the lower magnetizing requirement of the ungapped core. The uncut core functions under normal operating conditions and the cut core takes over during abnormal operation to prevent power surges and their potentially destructive effect on transistors. Principal customers are aerospace and defense manufacturers. Cores also have applicability in commercial products where precise power regulation is required, as in the power supplies for large mainframe computers.

  3. Terascale Cluster for Advanced Turbulent Combustion Simulations

    DTIC Science & Technology

    2008-07-25

    We have given the name CATS (for Combustion And Turbulence Simulator) to the terascale system that was obtained through this grant. CATS ... InfiniBand interconnect. CATS includes an interactive login node and a file server, each holding in excess of 1 terabyte of file storage. The 35 active ... compute nodes of CATS enable us to run up to 140-core parallel MPI batch jobs; one node is reserved to run the scheduler. CATS is operated and

  4. NPS-NRL-Rice-UIUC Collaboration on Navy Atmosphere-Ocean Coupled Models on Many-Core Computer Architectures Annual Report

    DTIC Science & Technology

    2015-09-30

    DISTRIBUTION STATEMENT A: Distribution approved for public release; distribution is unlimited. NPS-NRL-Rice-UIUC Collaboration on Navy Atmosphere...portability. There is still a gap in the OCCA support for Fortran programmers who do not have accelerator experience. Activities at Rice/Virginia Tech are...for automated data movement and for kernel optimization using source code analysis and run-time detective work. In this quarter the Rice/Virginia

  5. [Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].

    PubMed

    Furuta, Takuya; Sato, Tatsuhiko

    2015-01-01

    Time-consuming Monte Carlo dose calculation has become feasible owing to the development of computer technology. However, the recent gains come largely from the emergence of multi-core high-performance computers, so parallel computing is key to achieving good software performance. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the message passing interface (MPI) protocol and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions with their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high-performance workstation.
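    As a hedged illustration of the distributed-memory approach described above (not PHITS source code; a sketch in Python with mpi4py, with the history count and toy tally invented for the example), the fragment below splits a fixed number of Monte Carlo histories across MPI ranks and reduces the partial results on rank 0:

        # Sketch: distributing Monte Carlo histories over MPI ranks (illustrative, not PHITS).
        import random
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        size = comm.Get_size()

        TOTAL_HISTORIES = 1_000_000                 # assumed total number of histories
        local_histories = TOTAL_HISTORIES // size

        random.seed(1234 + rank)                    # simplistic independent stream per rank

        # Toy tally: each history deposits a random amount of "dose".
        local_tally = sum(random.random() for _ in range(local_histories))

        # Distributed-memory reduction of the partial tallies onto rank 0.
        total_tally = comm.reduce(local_tally, op=MPI.SUM, root=0)

        if rank == 0:
            print(f"mean deposit per history: {total_tally / (local_histories * size):.6f}")

    Launched with, for example, mpiexec -n 4 python mc_mpi.py, each rank works on its own share of histories; the OpenMP path in PHITS plays the analogous shared-memory role within a single node.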

  6. Underwater Threat Source Localization: Processing Sensor Network TDOAs with a Terascale Optical Core Device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barhen, Jacob; Imam, Neena

    2007-01-01

    Revolutionary computing technologies are defined in terms of technological breakthroughs, which leapfrog over near-term projected advances in conventional hardware and software to produce paradigm shifts in computational science. For underwater threat source localization using information provided by a dynamical sensor network, one of the most promising computational advances builds upon the emergence of digital optical-core devices. In this article, we present initial results of sensor network calculations that focus on the concept of signal wavefront time-difference-of-arrival (TDOA). The corresponding algorithms are implemented on the EnLight processing platform recently introduced by Lenslet Laboratories. This tera-scale digital optical core processor is optimized for array operations, which it performs in a fixed-point-arithmetic architecture. Our results (i) illustrate the ability to reach the required accuracy in the TDOA computation, and (ii) demonstrate that a considerable speed-up can be achieved when using the EnLight 64a prototype processor as compared to a dual Intel Xeon processor.
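    To make the TDOA concept concrete, the following NumPy sketch (an illustrative assumption, unrelated to the EnLight fixed-point implementation; the sampling rate, pulse shape and noise level are invented) estimates the time difference of arrival between two sensor records from the peak of their cross-correlation:

        # Sketch: TDOA estimate from the cross-correlation peak of two sensor records.
        import numpy as np

        fs = 10_000.0                               # assumed sampling rate [Hz]
        t = np.arange(0, 0.1, 1.0 / fs)
        pulse = np.exp(-((t - 0.02) ** 2) / 1e-6)   # synthetic source waveform

        true_delay = 37                             # delay of sensor 2 relative to sensor 1 [samples]
        x1 = pulse + 0.05 * np.random.randn(t.size)
        x2 = np.roll(pulse, true_delay) + 0.05 * np.random.randn(t.size)

        corr = np.correlate(x2, x1, mode="full")
        lag = np.argmax(corr) - (x1.size - 1)       # lag of the correlation peak [samples]

        print(f"estimated TDOA: {lag / fs * 1e3:.3f} ms (true: {true_delay / fs * 1e3:.3f} ms)")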

  7. Efficient Multicriteria Protein Structure Comparison on Modern Processor Architectures

    PubMed Central

    Manolakos, Elias S.

    2015-01-01

    Fast increasing computational demand for all-to-all protein structures comparison (PSC) is a result of three confounding factors: rapidly expanding structural proteomics databases, high computational complexity of pairwise protein comparison algorithms, and the trend in the domain towards using multiple criteria for protein structures comparison (MCPSC) and combining results. We have developed a software framework that exploits many-core and multicore CPUs to implement efficient parallel MCPSC in modern processors based on three popular PSC methods, namely, TMalign, CE, and USM. We evaluate and compare the performance and efficiency of the two parallel MCPSC implementations using Intel's experimental many-core Single-Chip Cloud Computer (SCC) as well as Intel's Core i7 multicore processor. We show that the 48-core SCC is more efficient than the latest generation Core i7, achieving a speedup factor of 42 (efficiency of 0.9), making many-core processors an exciting emerging technology for large-scale structural proteomics. We compare and contrast the performance of the two processors on several datasets and also show that MCPSC outperforms its component methods in grouping related domains, achieving a high F-measure of 0.91 on the benchmark CK34 dataset. The software implementation for protein structure comparison using the three methods and combined MCPSC, along with the developed underlying rckskel algorithmic skeletons library, is available via GitHub. PMID:26605332
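    The all-to-all workload described here is naturally parallel over domain pairs. The sketch below is a generic Python illustration (not the rckskel skeletons or the SCC port; the placeholder scoring functions and toy domains are assumptions) of farming pairwise comparisons out to a process pool and combining per-method scores into one consensus value:

        # Sketch: parallel all-to-all multi-criteria comparison using a process pool.
        from itertools import combinations
        from multiprocessing import Pool

        DOMAINS = {"d1": "ACDEFGHIK", "d2": "ACDEFGHIKL", "d3": "MNPQRSTVWY"}

        def score_a(x, y):
            # Placeholder for a TMalign-like similarity score (assumption).
            return 1.0 - abs(len(x) - len(y)) / max(len(x), len(y))

        def score_b(x, y):
            # Placeholder for a USM-like similarity score (assumption).
            return len(set(x) & set(y)) / len(set(x) | set(y))

        def mcpsc_pair(pair):
            i, j = pair
            x, y = DOMAINS[i], DOMAINS[j]
            # A simple average stands in for the real multi-criteria consensus rule.
            return i, j, 0.5 * score_a(x, y) + 0.5 * score_b(x, y)

        if __name__ == "__main__":
            with Pool() as pool:                    # one worker per available core by default
                for i, j, s in pool.map(mcpsc_pair, combinations(DOMAINS, 2)):
                    print(f"{i} vs {j}: combined score {s:.3f}")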

  8. Efficient Multicriteria Protein Structure Comparison on Modern Processor Architectures.

    PubMed

    Sharma, Anuj; Manolakos, Elias S

    2015-01-01

    Fast increasing computational demand for all-to-all protein structures comparison (PSC) is a result of three confounding factors: rapidly expanding structural proteomics databases, high computational complexity of pairwise protein comparison algorithms, and the trend in the domain towards using multiple criteria for protein structures comparison (MCPSC) and combining results. We have developed a software framework that exploits many-core and multicore CPUs to implement efficient parallel MCPSC in modern processors based on three popular PSC methods, namely, TMalign, CE, and USM. We evaluate and compare the performance and efficiency of the two parallel MCPSC implementations using Intel's experimental many-core Single-Chip Cloud Computer (SCC) as well as Intel's Core i7 multicore processor. We show that the 48-core SCC is more efficient than the latest generation Core i7, achieving a speedup factor of 42 (efficiency of 0.9), making many-core processors an exciting emerging technology for large-scale structural proteomics. We compare and contrast the performance of the two processors on several datasets and also show that MCPSC outperforms its component methods in grouping related domains, achieving a high F-measure of 0.91 on the benchmark CK34 dataset. The software implementation for protein structure comparison using the three methods and combined MCPSC, along with the developed underlying rckskel algorithmic skeletons library, is available via GitHub.

  9. Systematic optimization model and algorithm for binding sequence selection in computational enzyme design

    PubMed Central

    Huang, Xiaoqiang; Han, Kehang; Zhu, Yushan

    2013-01-01

    A systematic optimization model for binding sequence selection in computational enzyme design was developed based on the transition state theory of enzyme catalysis and graph-theoretical modeling. The saddle point on the free energy surface of the reaction system was represented by catalytic geometrical constraints, and the binding energy between the active site and transition state was minimized to reduce the activation energy barrier. The resulting hyperscale combinatorial optimization problem was tackled using a novel heuristic global optimization algorithm, which was inspired and tested by the protein core sequence selection problem. The sequence recapitulation tests on native active sites for two enzyme catalyzed hydrolytic reactions were applied to evaluate the predictive power of the design methodology. The results of the calculation show that most of the native binding sites can be successfully identified if the catalytic geometrical constraints and the structural motifs of the substrate are taken into account. Reliably predicting active site sequences may have significant implications for the creation of novel enzymes that are capable of catalyzing targeted chemical reactions. PMID:23649589
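    The combinatorial search over binding-site sequences can be pictured with a generic simulated-annealing sketch; this is not the authors' heuristic global optimization algorithm, and the placeholder energy function, alphabet and number of designable positions are assumptions for illustration only.

        # Sketch: simulated annealing over a discrete sequence space (generic illustration).
        import math
        import random

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
        N_SITES = 8                                  # assumed number of designable positions

        def binding_energy(seq):
            # Placeholder energy that favours a fixed target pattern
            # (stands in for a physics-based binding/activation score).
            target = "HDSGAGAG"
            return sum(0.0 if s == t else 1.0 for s, t in zip(seq, target))

        def anneal(steps=20000, t_start=2.0, t_end=0.01):
            random.seed(0)
            seq = [random.choice(AMINO_ACIDS) for _ in range(N_SITES)]
            energy = binding_energy(seq)
            best = (energy, seq[:])
            for k in range(steps):
                temp = t_start * (t_end / t_start) ** (k / steps)   # geometric cooling
                trial = seq[:]
                trial[random.randrange(N_SITES)] = random.choice(AMINO_ACIDS)
                e_new = binding_energy(trial)
                # Accept downhill moves always, uphill moves with Boltzmann probability.
                if e_new <= energy or random.random() < math.exp((energy - e_new) / temp):
                    seq, energy = trial, e_new
                    if energy < best[0]:
                        best = (energy, seq[:])
            return best

        if __name__ == "__main__":
            energy, seq = anneal()
            print("best energy:", energy, "sequence:", "".join(seq))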

  10. Many-core computing for space-based stereoscopic imaging

    NASA Astrophysics Data System (ADS)

    McCall, Paul; Torres, Gildo; LeGrand, Keith; Adjouadi, Malek; Liu, Chen; Darling, Jacob; Pernicka, Henry

    The potential benefits of using parallel computing in real-time visual-based satellite proximity operations missions are investigated. Improvements in performance and relative navigation solutions over single thread systems can be achieved through multi- and many-core computing. Stochastic relative orbit determination methods benefit from the higher measurement frequencies, allowing them to more accurately determine the associated statistical properties of the relative orbital elements. More accurate orbit determination can lead to reduced fuel consumption and extended mission capabilities and duration. Inherent to the process of stereoscopic image processing is the difficulty of loading, managing, parsing, and evaluating large amounts of data efficiently, which may result in delays or highly time consuming processes for single (or few) processor systems or platforms. In this research we utilize the Single-Chip Cloud Computer (SCC), a fully programmable 48-core experimental processor, created by Intel Labs as a platform for many-core software research, provided with a high-speed on-chip network for sharing information along with advanced power management technologies and support for message-passing. The results from utilizing the SCC platform for the stereoscopic image processing application are presented in the form of Performance, Power, Energy, and Energy-Delay-Product (EDP) metrics. Also, a comparison between the SCC results and those obtained from executing the same application on a commercial PC are presented, showing the potential benefits of utilizing the SCC in particular, and any many-core platforms in general for real-time processing of visual-based satellite proximity operations missions.

  11. Exploring biorthonormal transformations of pair-correlation functions in atomic structure variational calculations

    NASA Astrophysics Data System (ADS)

    Verdebout, S.; Jönsson, P.; Gaigalas, G.; Godefroid, M.; Froese Fischer, C.

    2010-04-01

    Multiconfiguration expansions frequently target valence correlation and correlation between valence electrons and the outermost core electrons. Correlation within the core is often neglected. A large orbital basis is needed to saturate both the valence and core-valence correlation effects. This in turn leads to huge numbers of configuration state functions (CSFs), many of which are unimportant. To avoid the problems inherent to the use of a single common orthonormal orbital basis for all correlation effects in the multiconfiguration Hartree-Fock (MCHF) method, we propose to optimize independent MCHF pair-correlation functions (PCFs), bringing their own orthonormal one-electron basis. Each PCF is generated by allowing single- and double-excitations from a multireference (MR) function. This computational scheme has the advantage of using targeted and optimally localized orbital sets for each PCF. These pair-correlation functions are coupled together and with each component of the MR space through a low dimension generalized eigenvalue problem. Nonorthogonal orbital sets being involved, the interaction and overlap matrices are built using biorthonormal transformation of the coupled basis sets followed by a counter-transformation of the PCF expansions. Applied to the ground state of beryllium, the new method gives total energies that are lower than the ones from traditional complete active space (CAS)-MCHF calculations using large orbital active sets. It is fair to say that we now have the possibility to account for, in a balanced way, correlation deep down in the atomic core in variational calculations.
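    The low-dimensional generalized eigenvalue problem that couples the PCFs with the MR components can be illustrated with SciPy; the small interaction and overlap matrices below are made-up numbers, not atomic data.

        # Sketch: solving a small generalized eigenvalue problem H c = E S c.
        import numpy as np
        from scipy.linalg import eigh

        # Toy interaction (H) and overlap (S) matrices for three coupled components (assumed values).
        H = np.array([[-14.62, -0.05, -0.03],
                      [ -0.05, -14.58, -0.02],
                      [ -0.03,  -0.02, -14.55]])
        S = np.array([[1.00, 0.10, 0.05],
                      [0.10, 1.00, 0.08],
                      [0.05, 0.08, 1.00]])

        energies, coeffs = eigh(H, S)               # generalized symmetric eigensolver
        print("lowest total energy:", energies[0])
        print("mixing coefficients:", coeffs[:, 0])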

  12. Gas hydrates and active mud volcanism on the South Shetland continental margin, Antarctic Peninsula

    NASA Astrophysics Data System (ADS)

    Tinivella, U.; Accaino, F.; Della Vedova, B.

    2008-04-01

    During the Antarctic summer of 2003-2004, new geophysical data were acquired from aboard the R/V OGS Explora in the BSR-rich area discovered in 1996-1997 along the South Shetland continental margin off the Antarctic Peninsula. The objective of the research program, supported by the Italian National Antarctic Program (PNRA), was to verify the existence of a potential gas hydrate reservoir and to reconstruct the tectonic setting of the margin, which probably controls the extent and character of the diffused and discontinuous bottom simulating reflections. The new dataset, i.e. multibeam bathymetry, seismic profiles (airgun and chirp), and two gravity cores analysed by computer-aided tomography as well as for gas composition and content, clearly shows active mud volcanism sustained by hydrocarbon venting in the region: several vents, located mainly close to mud volcanoes, were imaged during the cruise and their occurrence identified in the sediment samples. Mud volcanoes, vents and recent slides border the gas hydrate reservoir discovered in 1996-1997. The cores are composed of stiff silty mud. In core GC01, collected in the proximity of a mud volcano ridge, the following gases were identified (maximum contents in brackets): methane (46 μg/kg), pentane (45), ethane (35), propane (34), hexane (29) and butane (28). In core GC02, collected on the flank of the Vualt mud volcano, the corresponding data are methane (0 μg/kg), pentane (45), ethane (22), propane (0), hexane (27) and butane (25).

  13. 2nd Generation QUATARA Flight Computer Project

    NASA Technical Reports Server (NTRS)

    Falker, Jay; Keys, Andrew; Fraticelli, Jose Molina; Capo-Iugo, Pedro; Peeples, Steven

    2015-01-01

    Single core flight computer boards have been designed, developed, and tested (DD&T) to be flown in small satellites for the last few years. In this project, a prototype flight computer will be designed as a distributed multi-core system containing four microprocessors running code in parallel. This flight computer will be capable of performing multiple computationally intensive tasks such as processing digital and/or analog data, controlling actuator systems, managing cameras, operating robotic manipulators and transmitting/receiving from/to a ground station. In addition, this flight computer will be designed to be fault tolerant both by creating a robust physical hardware connection and by using a software voting scheme to determine the processors' performance. This voting scheme will leverage the work done for the Space Launch System (SLS) flight software. The prototype flight computer will be constructed with Commercial Off-The-Shelf (COTS) components which are estimated to survive for two years in a low-Earth orbit.
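    A software voting scheme of the kind mentioned above can be sketched as a simple majority vote over redundant outputs; the fragment below is an illustrative Python stand-in, not the SLS flight software or the QUATARA design.

        # Sketch: majority voting across redundant processor outputs (illustrative only).
        from collections import Counter

        def majority_vote(outputs, tolerance=0.0):
            """Return the agreed value and the indices of dissenting processors."""
            counts = Counter()
            for value in outputs:
                # Group numerically close values under one representative key.
                key = next((k for k in counts if abs(k - value) <= tolerance), value)
                counts[key] += 1
            winner, votes = counts.most_common(1)[0]
            if votes <= len(outputs) // 2:
                raise RuntimeError("no majority: possible multi-core fault")
            dissenters = [i for i, v in enumerate(outputs) if abs(v - winner) > tolerance]
            return winner, dissenters

        if __name__ == "__main__":
            # Core 2 disagrees with the other three and is flagged.
            print(majority_vote([101.2, 101.2, 250.0, 101.2], tolerance=0.1))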

  14. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets.

    PubMed

    Scharfe, Michael; Pielot, Rainer; Schreiber, Falk

    2010-01-11

    Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. The results demonstrate that the CBE processor in a PlayStation 3 accelerates computational intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics.

  15. The new landscape of parallel computer architecture

    NASA Astrophysics Data System (ADS)

    Shalf, John

    2007-07-01

    The past few years have seen a sea change in computer architecture that will impact every facet of our society as every electronic device from cell phone to supercomputer will need to confront parallelism of unprecedented scale. Whereas the conventional multicore approach (2, 4, and even 8 cores) adopted by the computing industry will eventually hit a performance plateau, the highest performance per watt and per chip area is achieved using manycore technology (hundreds or even thousands of cores). However, fully unleashing the potential of the manycore approach to ensure future advances in sustained computational performance will require fundamental advances in computer architecture and programming models that are nothing short of reinventing computing. In this paper we examine the reasons behind the movement to exponentially increasing parallelism, and its ramifications for system design, applications and programming models.

  16. Accurate and efficient calculation of excitation energies with the active-space particle-particle random phase approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Du; Yang, Weitao

    An efficient method for calculating excitation energies based on the particle-particle random phase approximation (ppRPA) is presented. Neglecting the contributions from the high-lying virtual states and the low-lying core states leads to the significantly smaller active-space ppRPA matrix while keeping the error to within 0.05 eV from the corresponding full ppRPA excitation energies. The resulting computational cost is significantly reduced and becomes less than the construction of the non-local Fock exchange potential matrix in the self-consistent-field (SCF) procedure. With only a modest number of active orbitals, the original ppRPA singlet-triplet (ST) gaps as well as the low-lying single and double excitation energies can be accurately reproduced at much reduced computational costs, up to 100 times faster than the iterative Davidson diagonalization of the original full ppRPA matrix. For high-lying Rydberg excitations where the Davidson algorithm fails, the computational savings of active-space ppRPA with respect to the direct diagonalization is even more dramatic. The virtues of the underlying full ppRPA combined with the significantly lower computational cost of the active-space approach will significantly expand the applicability of the ppRPA method to calculate excitation energies at a cost of O(K^{4}), with a prefactor much smaller than a single SCF Hartree-Fock (HF)/hybrid functional calculation, thus opening up new possibilities for the quantum mechanical study of excited state electronic structure of large systems.

  17. Accurate and efficient calculation of excitation energies with the active-space particle-particle random phase approximation

    DOE PAGES

    Zhang, Du; Yang, Weitao

    2016-10-13

    An efficient method for calculating excitation energies based on the particle-particle random phase approximation (ppRPA) is presented. Neglecting the contributions from the high-lying virtual states and the low-lying core states leads to the significantly smaller active-space ppRPA matrix while keeping the error to within 0.05 eV from the corresponding full ppRPA excitation energies. The resulting computational cost is significantly reduced and becomes less than the construction of the non-local Fock exchange potential matrix in the self-consistent-field (SCF) procedure. With only a modest number of active orbitals, the original ppRPA singlet-triplet (ST) gaps as well as the low-lying single and double excitation energies can be accurately reproduced at much reduced computational costs, up to 100 times faster than the iterative Davidson diagonalization of the original full ppRPA matrix. For high-lying Rydberg excitations where the Davidson algorithm fails, the computational savings of active-space ppRPA with respect to the direct diagonalization is even more dramatic. The virtues of the underlying full ppRPA combined with the significantly lower computational cost of the active-space approach will significantly expand the applicability of the ppRPA method to calculate excitation energies at a cost of O(K^{4}), with a prefactor much smaller than a single SCF Hartree-Fock (HF)/hybrid functional calculation, thus opening up new possibilities for the quantum mechanical study of excited state electronic structure of large systems.

  18. Segregating the core computational faculty of human language from working memory

    PubMed Central

    Makuuchi, Michiru; Bahlmann, Jörg; Anwander, Alfred; Friederici, Angela D.

    2009-01-01

    In contrast to simple structures in animal vocal behavior, hierarchical structures such as center-embedded sentences manifest the core computational faculty of human language. Previous artificial grammar learning studies found that the left pars opercularis (LPO) subserves the processing of hierarchical structures. However, it is not clear whether this area is activated by the structural complexity per se or by the increased memory load entailed in processing hierarchical structures. To dissociate the effect of structural complexity from the effect of memory cost, we conducted a functional magnetic resonance imaging study of German sentence processing with a 2-way factorial design tapping structural complexity (with/without hierarchical structure, i.e., center-embedding of clauses) and working memory load (long/short distance between syntactically dependent elements; i.e., subject nouns and their respective verbs). Functional imaging data revealed that the processes for structure and memory operate separately but co-operatively in the left inferior frontal gyrus; activities in the LPO increased as a function of structural complexity, whereas activities in the left inferior frontal sulcus (LIFS) were modulated by the distance over which the syntactic information had to be transferred. Diffusion tensor imaging showed that these 2 regions were interconnected through white matter fibers. Moreover, functional coupling between the 2 regions was found to increase during the processing of complex, hierarchically structured sentences. These results suggest a neuroanatomical segregation of syntax-related aspects represented in the LPO from memory-related aspects reflected in the LIFS, which are, however, highly interconnected functionally and anatomically. PMID:19416819

  19. 98. View of IBM digital computer model 7090 magnet core ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    98. View of IBM digital computer model 7090 magnet core installation. ITT Artic Services, Inc., Official photograph BMEWS Site II, Clear, AK, by unknown photographer, 17 September 1965. BMEWS, clear as negative no. A-6606. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  20. CT Scans of Cores Metadata, Barrow, Alaska 2015

    DOE Data Explorer

    Katie McKnight; Tim Kneafsey; Craig Ulrich

    2015-03-11

    Individual ice cores were collected from Barrow Environmental Observatory in Barrow, Alaska, throughout 2013 and 2014. Cores were drilled along different transects to sample polygonal features (i.e. the trough, center and rim of high, transitional and low center polygons). Most cores were drilled around 1 meter in depth and a few deep cores were drilled around 3 meters in depth. Three-dimensional images of the frozen cores were constructed using a medical X-ray computed tomography (CT) scanner. TIFF files can be uploaded to ImageJ (an open-source imaging software) to examine soil structure and densities within each core.
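    As a hedged example of the kind of analysis the scans support, the Python sketch below reads one reconstructed core as a TIFF stack and plots a depth profile of mean CT intensity (a rough proxy for density). The file name and the tifffile/matplotlib toolchain are assumptions for illustration, not part of the dataset documentation.

        # Sketch: depth profile of mean CT intensity from a reconstructed core image stack.
        import tifffile
        import matplotlib.pyplot as plt

        volume = tifffile.imread("core_scan.tif")    # placeholder path; expected shape (slices, rows, cols)

        profile = volume.reshape(volume.shape[0], -1).mean(axis=1)   # mean intensity per slice

        plt.plot(profile)
        plt.xlabel("slice index (top of core to bottom)")
        plt.ylabel("mean CT intensity (arbitrary units)")
        plt.title("Approximate density profile along the core")
        plt.savefig("core_profile.png", dpi=150)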

  1. Brown Adipose Tissue Is Linked to a Distinct Thermoregulatory Response to Mild Cold in People

    PubMed Central

    Chondronikola, Maria; Volpi, Elena; Børsheim, Elisabet; Chao, Tony; Porter, Craig; Annamalai, Palam; Yfanti, Christina; Labbe, Sebastien M.; Hurren, Nicholas M.; Malagaris, Ioannis; Cesani, Fernardo; Sidossis, Labros S.

    2016-01-01

    Brown adipose tissue (BAT) plays an important role in thermoregulation in rodents. Its role in temperature homeostasis in people is less studied. To this end, we recruited 18 men [8 subjects with no/minimal BAT activity (BAT−) and 10 with pronounced BAT activity (BAT+)]. Each volunteer participated in a 6 h, individualized, non-shivering cold exposure protocol. BAT was quantified using positron emission tomography/computed tomography. Body core and skin temperatures were measured using a telemetric pill and wireless thermistors, respectively. Core body temperature decreased during cold exposure in the BAT− group only (−0.34°C, 95% CI: −0.6 to −0.1, p = 0.03), while the cold-induced change in core temperature was significantly different between BAT+ and BAT− subjects (BAT+ vs. BAT−, 0.43°C, 95% CI: 0.20–0.65, p = 0.0014). BAT volume was associated with the cold-induced change in core temperature (p = 0.01) even after adjustment for age and adiposity. Compared to the BAT− group, BAT+ subjects tolerated a lower ambient temperature (BAT−: 20.6 ± 0.3°C vs. BAT+: 19.8 ± 0.3°C, p = 0.035) without shivering. The cold-induced change in core temperature (r = 0.79, p = 0.001) and supraclavicular temperature (r = 0.58, p = 0.014) correlated with BAT volume, suggesting that these non-invasive measures can be potentially used as surrogate markers of BAT when other methods to detect BAT are not available or their use is not warranted. These results demonstrate a physiologically significant role for BAT in thermoregulation in people. This trial has been registered with ClinicalTrials.gov: NCT01791114 (https://clinicaltrials.gov/ct2/show/NCT01791114). PMID:27148068

  2. Dynamics of Hydrophobic Core Phenylalanine Residues Probed by Solid-State Deuteron NMR

    PubMed Central

    Vugmeyster, Liliya; Ostrovsky, Dmitry; Villafranca, Toni; Sharp, Janelle; Xu, Wei; Lipton, Andrew S.; Hoatson, Gina L.; Vold, Robert L.

    2016-01-01

    We conducted a detailed investigation of the dynamics of two phenylalanine side chains in the hydrophobic core of the villin headpiece subdomain protein (HP36) in the hydrated powder state over the 298–80 K temperature range. Our main tools were static deuteron NMR measurements of longitudinal relaxation and line shapes supplemented with computational modeling. The temperature dependence of the relaxation times reveals the presence of two main mechanisms that can be attributed to the ring-flips, dominating at high temperatures, and small-angle fluctuations, dominating at low temperatures. The relaxation is non-exponential at all temperatures with the extent of non-exponentiality increasing from higher to lower temperatures. This behavior suggests a distribution of conformers with unique values of activation energies. The central values of the activation energies for the ring-flipping motions are among the smallest reported for aromatic residues in peptides and proteins and point to a very mobile hydrophobic core. The analysis of the widths of the distributions, in combination with the earlier results on the dynamics of flanking methyl groups (Vugmeyster et al., J. Phys. Chem. B 2013, 117, 6129–6137), suggests that the hydrophobic core undergoes slow concerted fluctuations. There is a pronounced effect of dehydration on the ring-flipping motions, which shifts the distribution toward more rigid conformers. The cross-over temperature between the regions of dominance of the small-angle fluctuations and ring-flips shifts from 195 K in the hydrated protein to 278 K in the dry one. This result points to the role of solvent in softening the core and highlights aromatic residues as markers of the protein dynamical transitions. PMID:26529128

  3. Dynamics of Hydrophobic Core Phenylalanine Residues Probed by Solid-State Deuteron NMR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vugmeyster, Liliya; Ostrovsky, Dmitry; Villafranca, Toni

    We conducted a detailed investigation of the dynamics of two phenylalanine side chains in the hydrophobic core of the villin headpiece subdomain protein (HP36) in the hydrated powder state over the 298–80 K temperature range. We utilized static deuteron NMR measurements of longitudinal relaxation and line shapes supplemented with computational modeling. The temperature dependence of the relaxation times reveals the presence of two main mechanisms that can be attributed to the ring-flips, dominating at high temperatures, and small-angle fluctuations, dominating at low temperatures. The relaxation is non-exponential at all temperatures with the extent of non-exponentiality increasing from higher to lower temperatures. This behavior suggests a distribution of conformers with unique values of activation energies. The central values of the activation energies for the ring-flipping motions are among the smallest reported for aromatic residues in peptides and proteins and point to a very mobile hydrophobic core. The analysis of the widths of the distributions, in combination with the earlier results on the dynamics of flanking methyl groups (Vugmeyster et al., J. Phys. Chem. 2013, 117, 6129–6137), suggests that the hydrophobic core undergoes concerted fluctuations. There is a pronounced effect of dehydration on the ring-flipping motions, which shifts the distribution toward more rigid conformers. The cross-over temperature between the regions of dominance of the small-angle fluctuations and ring-flips shifts from 195 K in the hydrated protein to 278 K in the dry one. This result points to the role of solvent in the onset of the concerted fluctuations of the core and highlights aromatic residues as markers of the protein dynamical transitions.

  4. Peregrine Job Queues and Scheduling Policies | High-Performance Computing |

    Science.gov Websites

    Truncated snippet of the Peregrine queue-configuration table. The recoverable information lists the batch, batch-h, long, bigmem, data-transfer, and feature queues together with their maximum wall times (ranging from 1 hour to 10 days), the maximum number of nodes per job, and the per-queue counts of 24-core 64 GB Haswell nodes, 24-core 32 GB nodes, and 16-core 32 GB nodes; the column-by-column values are garbled in the extract.

  5. Toward Connecting Core-Collapse Supernova Theory with Observations: Nucleosynthetic Yields and Distribution of Elements in a 15 M⊙ Blue Supergiant Progenitor with SN 1987A Energetics

    NASA Astrophysics Data System (ADS)

    Plewa, Tomasz; Handy, Timothy; Odrzywolek, Andrzej

    2014-09-01

    We compute and discuss the process of nucleosynthesis in a series of core-collapse explosion models of a 15 solar mass, blue supergiant progenitor. We obtain nucleosynthetic yields and study the evolution of the chemical element distribution from the moment of core bounce until young supernova remnant phase. Our models show how the process of energy deposition due to radioactive decay modifies the dynamics and the core ejecta structure on small and intermediate scales. The results are compared against observations of young supernova remnants including Cas A and the recent data obtained for SN 1987A. The work has been supported by the NSF grant AST-1109113 and DOE grant DE-FG52-09NA29548. This research used resources of the National Energy Research Scientific Computing Center, which is supported by the U.S. DoE under Contract No. DE-AC02-05CH11231.

  6. Optical fiber sensor having an active core

    NASA Technical Reports Server (NTRS)

    Egalon, Claudio Oliveira (Inventor); Rogowski, Robert S. (Inventor)

    1993-01-01

    An optical fiber is provided. The fiber is comprised of an active fiber core which produces waves of light upon excitation. A factor ka is identified and increased until a desired improvement in power efficiency is obtained. The variable a is the radius of the active fiber core and k is defined as 2π/λ, where λ is the wavelength of the light produced by the active fiber core. In one embodiment, the factor ka is increased until the power efficiency stabilizes. In addition to a bare fiber core embodiment, a two-stage fluorescent fiber is provided wherein an active cladding surrounds a portion of the active fiber core having an improved ka factor. The power efficiency of the embodiment is further improved by increasing a difference between the respective indices of refraction of the active cladding and the active fiber core.
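    A small numerical illustration of the factor ka (the example wavelength and radius below are assumptions, not values from the patent):

        # Sketch: computing the ka factor for an active fiber core (example numbers only).
        import math

        wavelength = 550e-9          # assumed emission wavelength of the active core [m]
        core_radius = 25e-6          # assumed core radius a [m]

        k = 2.0 * math.pi / wavelength
        ka = k * core_radius
        print(f"k = {k:.3e} 1/m, ka = {ka:.1f} (dimensionless)")
        # Increasing the core radius (or shortening the wavelength) raises ka, which the
        # description above associates with improved power efficiency up to a stabilization point.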

  7. Virtualizing Super-Computation On-Board Uas

    NASA Astrophysics Data System (ADS)

    Salami, E.; Soler, J. A.; Cuadrado, R.; Barrado, C.; Pastor, E.

    2015-04-01

    Unmanned aerial systems (UAS, also known as UAV, RPAS or drones) have a great potential to support a wide variety of aerial remote sensing applications. Most UAS work by acquiring data using on-board sensors for later post-processing. Some require the data gathered to be downlinked to the ground in real-time. However, depending on the volume of data and the cost of the communications, this latter option is not sustainable in the long term. This paper develops the concept of virtualizing super-computation on-board UAS, as a method to ease the operation by facilitating the downlink of high-level information products instead of raw data. Exploiting recent developments in miniaturized multi-core devices is the way to speed-up on-board computation. This hardware shall satisfy size, power and weight constraints. Several technologies are appearing with promising results for high performance computing on unmanned platforms, such as the 36 cores of the TILE-Gx36 by Tilera (now EZchip) or the 64 cores of the Epiphany-IV by Adapteva. The strategy for virtualizing super-computation on-board includes the benchmarking for hardware selection, the software architecture and the communications-aware design. A parallelization strategy is given for the 36-core TILE-Gx36 for a UAS in a fire mission or in similar target-detection applications. The results are obtained for payload image processing algorithms and determine in real-time the data snapshot to gather and transfer to ground according to the needs of the mission, the processing time, and consumed watts.

  8. Machine Learning in Intrusion Detection

    DTIC Science & Technology

    2005-07-01

    machine learning tasks. Anomaly detection provides the core technology for a broad spectrum of security-centric applications. In this dissertation, we examine various aspects of anomaly based intrusion detection in computer security. First, we present a new approach to learn program behavior for intrusion detection. Text categorization techniques are adopted to convert each process to a vector and calculate the similarity between two program activities. Then the k-nearest neighbor classifier is employed to classify program behavior as normal or intrusive. We demonstrate
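    The pipeline described above (convert each process trace to a vector with text-categorization techniques, then classify with a k-nearest neighbour classifier) can be sketched as follows; scikit-learn, the toy system-call "documents" and the labels are assumptions for illustration, not the dissertation's code.

        # Sketch: TF-IDF vectors over system-call traces + kNN normal/intrusive classification.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.neighbors import KNeighborsClassifier

        # Toy training traces: each "document" is the sequence of system calls issued by a process.
        traces = [
            "open read read write close",
            "open read write close",
            "open mmap read close",
            "socket connect send recv close",    # assumed-unusual behaviour for this program
        ]
        labels = ["normal", "normal", "normal", "intrusive"]

        vectorizer = TfidfVectorizer()
        X = vectorizer.fit_transform(traces)

        clf = KNeighborsClassifier(n_neighbors=1)
        clf.fit(X, labels)

        new_trace = ["open read read write write close"]
        print(clf.predict(vectorizer.transform(new_trace)))   # expected: ['normal']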

  9. Hepatitis C virus core protein potentiates proangiogenic activity of hepatocellular carcinoma cells.

    PubMed

    Shao, Yu-Yun; Hsieh, Min-Shu; Wang, Han-Yu; Li, Yong-Shi; Lin, Hang; Hsu, Hung-Wei; Huang, Chung-Yi; Hsu, Chih-Hung; Cheng, Ann-Lii

    2017-10-17

    Increased angiogenic activity has been demonstrated in hepatitis C virus (HCV)-related hepatocellular carcinoma (HCC), but the mechanism was unclear. To study the role of HCV core protein, we used tube formation and Matrigel plug assays to assess the proangiogenic activity of an HCC cell line, HuH7, and 2 of its stable clones-HuH7-core-high and HuH7-core-low, with high and low HCV core protein expression, respectively. In both assays, HuH7-core-high and HuH7-core-low cells dose-dependently induced stronger angiogenesis than control cells. HuH7 cells with HCV core protein expression showed increased mRNA and protein expression of vascular endothelial growth factor (VEGF). VEGF inhibition by bevacizumab reduced the proangiogenic activity of HuH7-core-high cells. The promoter region of VEGF contains the binding site of activator protein-1 (AP-1). Compared with controls, HuH7-core-high cells had an increased AP-1 activity and nuclear localization of phospho-c-jun. AP-1 inhibition using either RNA knockdown or AP-1 inhibitors reduced the VEGF mRNA expression and the proangiogenic activity of HuH7-core-high cells. Among 131 tissue samples from HCC patients, HCV-related HCC revealed stronger VEGF expression than did hepatitis B virus-related HCC. In conclusion, increased VEGF expression through AP-1 activation is a crucial mechanism underlying the proangiogenic activity of the HCV core protein in HCC cells.

  10. Hepatitis C virus core protein potentiates proangiogenic activity of hepatocellular carcinoma cells

    PubMed Central

    Shao, Yu-Yun; Hsieh, Min-Shu; Wang, Han-Yu; Li, Yong-Shi; Lin, Hang; Hsu, Hung-Wei; Huang, Chung-Yi; Hsu, Chih-Hung; Cheng, Ann-Lii

    2017-01-01

    Increased angiogenic activity has been demonstrated in hepatitis C virus (HCV)-related hepatocellular carcinoma (HCC), but the mechanism was unclear. To study the role of HCV core protein, we used tube formation and Matrigel plug assays to assess the proangiogenic activity of an HCC cell line, HuH7, and 2 of its stable clones—HuH7-core-high and HuH7-core-low, with high and low HCV core protein expression, respectively. In both assays, HuH7-core-high and HuH7-core-low cells dose-dependently induced stronger angiogenesis than control cells. HuH7 cells with HCV core protein expression showed increased mRNA and protein expression of vascular endothelial growth factor (VEGF). VEGF inhibition by bevacizumab reduced the proangiogenic activity of HuH7-core-high cells. The promoter region of VEGF contains the binding site of activator protein-1 (AP-1). Compared with controls, HuH7-core-high cells had an increased AP-1 activity and nuclear localization of phospho-c-jun. AP-1 inhibition using either RNA knockdown or AP-1 inhibitors reduced the VEGF mRNA expression and the proangiogenic activity of HuH7-core-high cells. Among 131 tissue samples from HCC patients, HCV-related HCC revealed stronger VEGF expression than did hepatitis B virus-related HCC. In conclusion, increased VEGF expression through AP-1 activation is a crucial mechanism underlying the proangiogenic activity of the HCV core protein in HCC cells. PMID:29156827

  11. BOLD VENTURE COMPUTATION SYSTEM for nuclear reactor core analysis, Version III

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W. III.

    1981-06-01

    This report is a condensed documentation for VERSION III of the BOLD VENTURE COMPUTATION SYSTEM for nuclear reactor core analysis. An experienced analyst should be able to use this system routinely for solving problems by referring to this document. Individual reports must be referenced for details. This report covers basic input instructions and describes recent extensions to the modules as well as to the interface data file specifications. Some application considerations are discussed and an elaborate sample problem is used as an instruction aid. Instructions for creating the system on IBM computers are also given.

  12. Addressing capability computing challenges of high-resolution global climate modelling at the Oak Ridge Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Anantharaj, Valentine; Norman, Matthew; Evans, Katherine; Taylor, Mark; Worley, Patrick; Hack, James; Mayer, Benjamin

    2014-05-01

    During 2013, high-resolution climate model simulations accounted for over 100 million "core hours" using Titan at the Oak Ridge Leadership Computing Facility (OLCF). The suite of climate modeling experiments, primarily using the Community Earth System Model (CESM) at nearly 0.25 degree horizontal resolution, generated over a petabyte of data and nearly 100,000 files, ranging in sizes from 20 MB to over 100 GB. Effective utilization of leadership class resources requires careful planning and preparation. Application software, such as CESM, needs to be ported, optimized and benchmarked for the target platform in order to meet the computational readiness requirements. The model configuration needs to be "tuned and balanced" for the experiments. This can be a complicated and resource intensive process, especially for high-resolution configurations using complex physics. The volume of I/O also increases with resolution, and new strategies may be required to manage I/O, especially for large checkpoint and restart files that may require more frequent output for resiliency. It is also essential to monitor the application performance during the course of the simulation exercises. Finally, the large volume of data needs to be analyzed to derive the scientific results, and appropriate data and information delivered to the stakeholders. Titan is currently the largest supercomputer available for open science. The computational resources, in terms of "Titan core hours", are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) and ASCR Leadership Computing Challenge (ALCC) programs, both sponsored by the U.S. Department of Energy (DOE) Office of Science. Titan is a Cray XK7 system, capable of a theoretical peak performance of over 27 PFlop/s, and consists of 18,688 compute nodes, with an NVIDIA Kepler K20 GPU and a 16-core AMD Opteron CPU in every node, for a total of 299,008 Opteron cores and 18,688 GPUs offering a cumulative 560,640 equivalent cores. Scientific applications, such as CESM, are also required to demonstrate a "computational readiness capability" to efficiently scale across and utilize 20% of the entire system. The 0.25 degree configuration of the spectral element dynamical core of the Community Atmosphere Model (CAM-SE), the atmospheric component of CESM, has been demonstrated to scale efficiently across more than 5,000 nodes (80,000 CPU cores) on Titan. The tracer transport routines of CAM-SE have also been ported to take advantage of the hybrid many-core architecture of Titan using GPUs [see EGU2014-4233], yielding over 2X speedup when transporting over 100 tracers. The high throughput I/O in CESM, based on the Parallel IO Library (PIO), is being further augmented to support even higher resolutions and enhance resiliency. The application performance of the individual runs is archived in a database and routinely analyzed to identify and rectify performance degradation during the course of the experiments. The various resources available at the OLCF now support a scientific workflow to facilitate high-resolution climate modelling. A high-speed center-wide parallel file system, called ATLAS, capable of 1 TB/s, is available on Titan as well as on the clusters used for analysis (Rhea) and visualization (Lens/EVEREST). Long-term archive is facilitated by the HPSS storage system. The Earth System Grid (ESG), featuring search & discovery, is also used to deliver data. The end-to-end workflow allows OLCF users to efficiently share data and publish results in a timely manner.

  13. CMS Readiness for Multi-Core Workload Scheduling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perez-Calero Yzquierdo, A.; Balcas, J.; Hernandez, J.

    In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016 is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.

  14. CMS readiness for multi-core workload scheduling

    NASA Astrophysics Data System (ADS)

    Perez-Calero Yzquierdo, A.; Balcas, J.; Hernandez, J.; Aftab Khan, F.; Letts, J.; Mason, D.; Verguilov, V.

    2017-10-01

    In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016 is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.

  15. 12 CFR 615.5330 - Minimum surplus ratios.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) and weighted on the basis of risk in accordance with § 615.5210. (b) Core surplus. (1) Each institution shall achieve and at all times maintain a ratio of core surplus to the risk-adjusted asset base of... otherwise includible pursuant to § 615.5301(b). (2) Each association shall compute its core surplus ratio by...

  16. Sputnik: ad hoc distributed computation.

    PubMed

    Völkel, Gunnar; Lausser, Ludwig; Schmid, Florian; Kraus, Johann M; Kestler, Hans A

    2015-04-15

    In bioinformatic applications, computationally demanding algorithms are often parallelized to speed up computation. Nevertheless, setting up computational environments for distributed computation is often tedious. The aims of this project were lightweight ad hoc setup and fault-tolerant computation requiring only a Java runtime and no administrator rights, while utilizing all CPU cores most effectively. The Sputnik framework provides ad hoc distributed computation on the Java Virtual Machine which uses all supplied CPU cores fully. It provides a graphical user interface for deployment setup and a web user interface displaying the current status of computation jobs. Neither a permanent setup nor administrator privileges are required. We demonstrate the utility of our approach on feature selection of microarray data. The Sputnik framework is available on Github http://github.com/sysbio-bioinf/sputnik under the Eclipse Public License. hkestler@fli-leibniz.de or hans.kestler@uni-ulm.de Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  17. Efficient computation of the phylogenetic likelihood function on multi-gene alignments and multi-core architectures.

    PubMed

    Stamatakis, Alexandros; Ott, Michael

    2008-12-27

    The continuous accumulation of sequence data, for example, due to novel wet-laboratory techniques such as pyrosequencing, coupled with the increasing popularity of multi-gene phylogenies and emerging multi-core processor architectures that face problems of cache congestion, poses new challenges with respect to the efficient computation of the phylogenetic maximum-likelihood (ML) function. Here, we propose two approaches that can significantly speed up likelihood computations that typically represent over 95 per cent of the computational effort conducted by current ML or Bayesian inference programs. Initially, we present a method and an appropriate data structure to efficiently compute the likelihood score on 'gappy' multi-gene alignments. By 'gappy' we denote sampling-induced gaps owing to missing sequences in individual genes (partitions), i.e. not real alignment gaps. A first proof-of-concept implementation in RAXML indicates that this approach can accelerate inferences on large and gappy alignments by approximately one order of magnitude. Moreover, we present insights and initial performance results on multi-core architectures obtained during the transition from an OpenMP-based to a Pthreads-based fine-grained parallelization of the ML function.
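    The "gappy" data structure idea, storing and evaluating per-partition data only for the taxa actually sampled in that partition, can be caricatured in a few lines of Python; the toy per-site score and the process-pool parallelism below are placeholders, not the RAXML kernels or its Pthreads code.

        # Sketch: per-partition scores over only the taxa present in each partition,
        # with partitions evaluated in parallel (illustrative stand-in for the ML kernels).
        import math
        from multiprocessing import Pool

        # Partition -> {taxon: aligned sequence}; unsampled taxa are simply absent,
        # so no all-gap rows are stored or visited.
        PARTITIONS = {
            "gene1": {"t1": "ACGT", "t2": "ACGA", "t3": "ACGT"},
            "gene2": {"t1": "TTGC", "t3": "TTGA"},              # t2 not sampled for gene2
        }

        def site_score(column):
            # Placeholder per-site score: log of the majority-state frequency (not a real likelihood).
            best = max(column.count(c) for c in set(column))
            return math.log(best / len(column))

        def partition_score(name):
            seqs = list(PARTITIONS[name].values())
            columns = ["".join(s[i] for s in seqs) for i in range(len(seqs[0]))]
            return name, sum(site_score(col) for col in columns)

        if __name__ == "__main__":
            with Pool() as pool:
                results = dict(pool.map(partition_score, PARTITIONS))
            print(results, "total:", sum(results.values()))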

  18. Sustaining a Community Computing Infrastructure for Online Teacher Professional Development: A Case Study of Designing Tapped In

    NASA Astrophysics Data System (ADS)

    Farooq, Umer; Schank, Patricia; Harris, Alexandra; Fusco, Judith; Schlager, Mark

    Community computing has recently grown to become a major research area in human-computer interaction. One of the objectives of community computing is to support computer-supported cooperative work among distributed collaborators working toward shared professional goals in online communities of practice. A core issue in designing and developing community computing infrastructures — the underlying sociotechnical layer that supports communitarian activities — is sustainability. Many community computing initiatives fail because the underlying infrastructure does not meet end user requirements; the community is unable to maintain a critical mass of users consistently over time; it generates insufficient social capital to support significant contributions by members of the community; or, as typically happens with funded initiatives, financial and human capital resource become unavailable to further maintain the infrastructure. On the basis of more than 9 years of design experience with Tapped In-an online community of practice for education professionals — we present a case study that discusses four design interventions that have sustained the Tapped In infrastructure and its community to date. These interventions represent broader design strategies for developing online environments for professional communities of practice.

  19. Parallelization of a spatial random field characterization process using the Method of Anchored Distributions and the HTCondor high throughput computing system

    NASA Astrophysics Data System (ADS)

    Osorio-Murillo, C. A.; Over, M. W.; Frystacky, H.; Ames, D. P.; Rubin, Y.

    2013-12-01

    A new software application called MAD# has been coupled with the HTCondor high-throughput computing system to aid scientists and educators with the characterization of spatial random fields and to enable understanding of the spatial distribution of parameters used in hydrogeologic and related modeling. MAD# is an open-source desktop application that characterizes spatial random fields using direct and indirect information through a Bayesian inverse modeling technique called the Method of Anchored Distributions (MAD). MAD relates indirect information to a target spatial random field via a forward simulation model. MAD# executes the inverse process by running the forward model multiple times to transfer information from the indirect data to the target variable. MAD# uses two parallelization profiles according to the computational resources available: one computer with multiple cores, or multiple computers with multiple cores through HTCondor. HTCondor is a system that manages a cluster of desktop computers and submits serial or parallel jobs using scheduling policies, resource monitoring, and a job queuing mechanism. This poster will show how MAD# reduces the execution time of random field characterization using these two parallel approaches in different case studies. A test of the approach was conducted using a 1D problem with 400 cells to characterize the saturated conductivity, residual water content, and shape parameters of the Mualem-van Genuchten model in four materials via the HYDRUS model. The number of simulations evaluated in the inversion was 10 million. Using the one-computer approach (eight cores), 100,000 simulations were evaluated in 12 hours (approximately 1200 hours for all 10 million). In the evaluation on HTCondor, 32 desktop computers (132 cores) were used, with a non-continuous processing time of 60 hours over five days. HTCondor thus reduced the processing time for uncertainty characterization by a factor of 20 (1200 hours reduced to 60 hours).

  20. Progress in a novel architecture for high performance processing

    NASA Astrophysics Data System (ADS)

    Zhang, Zhiwei; Liu, Meng; Liu, Zijun; Du, Xueliang; Xie, Shaolin; Ma, Hong; Ding, Guangxin; Ren, Weili; Zhou, Fabiao; Sun, Wenqin; Wang, Huijuan; Wang, Donglin

    2018-04-01

    The high performance processing (HPP) is an innovative architecture which targets high-performance computing with excellent power efficiency and computing performance. It is suitable for data intensive applications like supercomputing, machine learning and wireless communication. An example chip with four application-specific integrated circuit (ASIC) cores, which are the first generation of HPP cores, has been taped out successfully in the Taiwan Semiconductor Manufacturing Company (TSMC) 40 nm low-power process. The innovative architecture shows great energy efficiency over the traditional central processing unit (CPU) and general-purpose computing on graphics processing units (GPGPU). Compared with MaPU, HPP has made great improvements in architecture. A chip with 32 HPP cores is being developed in the TSMC 16 nm FinFET compact (FFC) process and is planned for commercial use. The peak performance of this chip can reach 4.3 teraFLOPS (TFLOPS) and its power efficiency is up to 89.5 gigaFLOPS per watt (GFLOPS/W).
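    A quick back-of-the-envelope check on the figures quoted above (an inference from the stated numbers, not a reported measurement) gives the implied full-chip power at peak:

        P \approx \frac{4.3\ \mathrm{TFLOPS}}{89.5\ \mathrm{GFLOPS/W}} \approx 48\ \mathrm{W}.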

  1. MATCHED-INDEX-OF-REFRACTION FLOW FACILITY FOR FUNDAMENTAL AND APPLIED RESEARCH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piyush Sabharwall; Carl Stoots; Donald M. McEligot

    2014-11-01

    Significant challenges face reactor designers with regard to thermal hydraulic design and associated modeling for advanced reactor concepts. Computational thermal hydraulic codes solve only a piece of the core; there is a need for a whole-core dynamics system code with local resolution to investigate and understand flow behavior with all the relevant physics and thermo-mechanics. The matched index of refraction (MIR) flow facility at Idaho National Laboratory (INL) has a unique capability to contribute to the development of validated computational fluid dynamics (CFD) codes through the use of state-of-the-art optical measurement techniques, such as Laser Doppler Velocimetry (LDV) and Particle Image Velocimetry (PIV). PIV is a non-intrusive velocity measurement technique that tracks flow by imaging the movement of small tracer particles within a fluid. At the heart of a PIV calculation is the cross-correlation algorithm, which estimates the displacement of particles in some small part of the image over the time span between two images; the displacement is generally indicated by the location of the largest correlation peak. To quantify these measurements accurately, sophisticated processing algorithms correlate the locations of particles within the image to estimate the velocity (Ref. 1). Prior to use in reactor design, CFD codes have to be experimentally validated, which requires rigorous experimental measurements to produce high-quality, multi-dimensional flow field data with error quantification methodologies. Computational techniques with supporting test data may be needed to address the heat transfer from the fuel to the coolant during the transition from turbulent to laminar flow, including the possibility of an early laminarization of the flow (Refs. 2 and 3); laminarization occurs when the coolant velocity is nominally in the turbulent regime but the heat transfer properties are indicative of laminar flow. Such studies are complicated enough that CFD models may not converge to the same conclusion. Thus, experimentally scaled thermal hydraulic data with uncertainties should be developed to support modeling and simulation for verification and validation activities. The fluid/solid index-of-refraction matching technique allows optical access in and around geometries that would otherwise be impossible, while the large test section of the INL system provides better spatial and temporal resolution than comparable facilities. Benchmark data for assessing CFD can be acquired for external flows, internal flows, and coupled internal/external flows for better understanding of the physical phenomena of interest. The core objective of this study is to describe the MIR facility and its capabilities, and to outline current development areas for uncertainty quantification, mainly the uncertainty surface method and the cross-correlation method. Using these methods, it is anticipated that a suitable approach can be established to quantify PIV uncertainty for experiments performed in the MIR.
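    As a minimal illustration of the cross-correlation step described above (a generic FFT-based sketch, not the facility's actual processing chain), the displacement of an interrogation window can be estimated from the location of the correlation peak:

        import numpy as np

        def piv_displacement(window_a, window_b):
            # Estimate the integer-pixel particle displacement between two
            # interrogation windows via FFT-based cross-correlation; the
            # displacement is taken from the location of the largest peak.
            a = window_a - window_a.mean()
            b = window_b - window_b.mean()
            corr = np.fft.irfft2(np.fft.rfft2(a) * np.conj(np.fft.rfft2(b)),
                                 s=a.shape)
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Wrap indices so shifts are reported relative to zero.
            return tuple(p if p <= s // 2 else p - s
                         for p, s in zip(peak, a.shape))

        # Synthetic check: a particle image shifted by (3, -2) pixels.
        rng = np.random.default_rng(1)
        frame = rng.random((64, 64))
        shifted = np.roll(frame, shift=(3, -2), axis=(0, 1))
        print(piv_displacement(shifted, frame))   # expected (3, -2)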

  2. [Core muscle chains activation during core exercises determined by EMG-a systematic review].

    PubMed

    Rogan, Slavko; Riesen, Jan; Taeymans, Jan

    2014-10-15

    Good core muscle strength is essential for daily life and sports activities. However, the mechanism by which core muscles can be effectively activated by exercise is not yet precisely described in the literature. The aim of this systematic review was to evaluate the rate of activation, as measured by electromyography, of the ventral, lateral and dorsal core muscle chains during core (trunk) muscle exercises. A total of 16 studies were included. Exercises with a vertical starting position, such as the deadlift or squat, activated significantly more core muscles than exercises with a horizontal starting position.

  3. A Computational and Experimental Investigation of a Delta Wing with Vertical Tails

    NASA Technical Reports Server (NTRS)

    Krist, Sherrie L.; Washburn, Anthony E.; Visser, Kenneth D.

    2004-01-01

    The flow over an aspect ratio 1 delta wing with twin vertical tails is studied in a combined computational and experimental investigation. This research is conducted in an effort to understand the vortex and fin interaction process. The computational algorithm used solves both the thin-layer Navier-Stokes and the inviscid Euler equations and utilizes a chimera grid-overlapping technique. The results are compared with data obtained from a detailed experimental investigation. The laminar case presented is for an angle of attack of 20 deg and a Reynolds number of 500,000. Good agreement is observed for the physics of the flow field, as evidenced by comparisons of computational pressure contours with experimental flow-visualization images, as well as by comparisons of vortex-core trajectories. While comparisons of the vorticity magnitudes indicate that the computations underpredict the magnitude in the wing primary-vortex-core region, grid embedding improves the computational prediction.

  4. A computational and experimental investigation of a delta wing with vertical tails

    NASA Technical Reports Server (NTRS)

    Krist, Sherrie L.; Washburn, Anthony E.; Visser, Kenneth D.

    1993-01-01

    The flow over an aspect ratio 1 delta wing with twin vertical tails is studied in a combined computational and experimental investigation. This research is conducted in an effort to understand the vortex and fin interaction process. The computational algorithm used solves both the thin-layer Navier-Stokes and the inviscid Euler equations and utilizes a chimera grid-overlapping technique. The results are compared with data obtained from a detailed experimental investigation. The laminar case presented is for an angle of attack of 20 deg and a Reynolds number of 500,000. Good agreement is observed for the physics of the flow field, as evidenced by comparisons of computational pressure contours with experimental flow-visualization images, as well as by comparisons of vortex-core trajectories. While comparisons of the vorticity magnitudes indicate that the computations underpredict the magnitude in the wing primary-vortex-core region, grid embedding improves the computational prediction.

  5. Thalamocortical and intracortical laminar connectivity determines sleep spindle properties.

    PubMed

    Krishnan, Giri P; Rosen, Burke Q; Chen, Jen-Yung; Muller, Lyle; Sejnowski, Terrence J; Cash, Sydney S; Halgren, Eric; Bazhenov, Maxim

    2018-06-27

    Sleep spindles are brief oscillatory events during non-rapid eye movement (NREM) sleep. Spindle density and synchronization properties are different in MEG versus EEG recordings in humans and also vary with learning performance, suggesting spindle involvement in memory consolidation. Here, using computational models, we identified network mechanisms that may explain differences in spindle properties across cortical structures. First, we report that differences in spindle occurrence between MEG and EEG data may arise from the contrasting properties of the core and matrix thalamocortical systems. The matrix system, projecting superficially, has wider thalamocortical fanout compared to the core system, which projects to middle layers, and requires the recruitment of a larger population of neurons to initiate a spindle. This property was sufficient to explain lower spindle density and higher spatial synchrony of spindles in the superficial cortical layers, as observed in the EEG signal. In contrast, spindles in the core system occurred more frequently but less synchronously, as observed in the MEG recordings. Furthermore, consistent with human recordings, in the model, spindles occurred independently in the core system but the matrix system spindles commonly co-occurred with core spindles. We also found that the intracortical excitatory connections from layer III/IV to layer V promote spindle propagation from the core to the matrix system, leading to widespread spindle activity. Our study predicts that plasticity of intra- and inter-cortical connectivity can potentially be a mechanism for increased spindle density as has been observed during learning.

  6. Ultrasound phase rotation beamforming on multi-core DSP.

    PubMed

    Ma, Jieming; Karadayi, Kerem; Ali, Murtaza; Kim, Yongmin

    2014-01-01

    Phase rotation beamforming (PRBF) is a commonly-used digital receive beamforming technique. However, due to its high computational requirement, it has traditionally been supported by hardwired architectures, e.g., application-specific integrated circuits (ASICs) or more recently field-programmable gate arrays (FPGAs). In this study, we investigated the feasibility of supporting software-based PRBF on a multi-core DSP. To alleviate the high computing requirement, the analog front-end (AFE) chips integrating quadrature demodulation in addition to analog-to-digital conversion were defined and used. With these new AFE chips, only delay alignment and phase rotation need to be performed by DSP, substantially reducing the computational load. We implemented the delay alignment and phase rotation modules on a Texas Instruments C6678 DSP with 8 cores. We found it takes 200 μs to beamform 2048 samples from 64 channels using 2 cores. With 4 cores, 20 million samples can be beamformed in one second. Therefore, ADC frequencies up to 40 MHz with 2:1 decimation in AFE chips or up to 20 MHz with no decimation can be supported as long as the ADC-to-DSP I/O requirement can be met. The remaining 4 cores can work on back-end processing tasks and applications, e.g., color Doppler or ultrasound elastography. One DSP being able to handle both beamforming and back-end processing could lead to low-power and low-cost ultrasound machines, benefiting ultrasound imaging in general, particularly portable ultrasound machines. Copyright © 2013 Elsevier B.V. All rights reserved.
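    A minimal numpy sketch of the receive-side arithmetic described above (delay alignment of baseband I/Q samples followed by phase rotation and summation); the channel count, delays, and frequencies are illustrative assumptions, not the parameters of the C6678 implementation.

        import numpy as np

        def phase_rotation_beamform(iq, delays, fs, fc):
            # iq     : (channels, samples) complex baseband data from the AFE chips
            # delays : per-channel focusing delays in seconds
            # fs     : baseband sampling rate in Hz
            # fc     : carrier (demodulation) frequency in Hz
            n_ch, n_samp = iq.shape
            out = np.zeros(n_samp, dtype=complex)
            for ch in range(n_ch):
                coarse = int(round(delays[ch] * fs))   # whole-sample delay alignment
                aligned = np.roll(iq[ch], coarse)      # toy circular shift
                # Phase rotation restores the carrier phase associated with the
                # focusing delay on the demodulated signal; whether the full or
                # only the residual (sub-sample) delay is rotated depends on the
                # demodulation convention.
                out += aligned * np.exp(2j * np.pi * fc * delays[ch])
            return out / n_ch

        # Toy usage: 64 channels, 2048 samples, arbitrary focusing delays.
        rng = np.random.default_rng(0)
        iq = rng.standard_normal((64, 2048)) + 1j * rng.standard_normal((64, 2048))
        delays = np.linspace(0.0, 1.0e-6, 64)          # seconds
        beam = phase_rotation_beamform(iq, delays, fs=20e6, fc=5e6)
        print(beam.shape)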

  7. Near-realtime simulations of bioelectric activity in small mammalian hearts using graphical processing units

    PubMed Central

    Vigmond, Edward J.; Boyle, Patrick M.; Leon, L. Joshua; Plank, Gernot

    2014-01-01

    Simulations of cardiac bioelectric phenomena remain a significant challenge despite continual advancements in computational machinery. Spanning large temporal and spatial ranges demands millions of nodes to accurately depict geometry, and a comparable number of timesteps to capture dynamics. This study explores a new hardware computing paradigm, the graphics processing unit (GPU), to accelerate cardiac models, and analyzes results in the context of simulating a small mammalian heart in real time. The ODEs associated with membrane ionic flow were computed on traditional CPU and compared to GPU performance, for one to four parallel processing units. The scalability of solving the PDE responsible for tissue coupling was examined on a cluster using up to 128 cores. Results indicate that the GPU implementation was between 9 and 17 times faster than the CPU implementation and scaled similarly. Solving the PDE was still 160 times slower than real time. PMID:19964295

  8. Design and Development of ChemInfoCloud: An Integrated Cloud Enabled Platform for Virtual Screening.

    PubMed

    Karthikeyan, Muthukumarasamy; Pandit, Deepak; Bhavasar, Arvind; Vyas, Renu

    2015-01-01

    The power of cloud computing and distributed computing has been harnessed to handle the vast and heterogeneous data required to be processed in any virtual screening protocol. A cloud computing platform, ChemInfoCloud, was built and integrated with several chemoinformatics and bioinformatics tools. The robust engine performs the core chemoinformatics tasks of lead generation, lead optimisation and property prediction in a fast and efficient manner. It has also been provided with some bioinformatics functionalities, including sequence alignment, active site pose prediction and protein-ligand docking. Text mining, NMR chemical shift (¹H, ¹³C) prediction and reaction fingerprint generation modules for efficient lead discovery are also implemented in this platform. We have developed an integrated problem-solving cloud environment for virtual screening studies that also provides workflow management, better usability and interaction with end users using container-based virtualization (OpenVZ).

  9. Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2011

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David W. Nigg; Devin A. Steuhm

    2011-09-01

    Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance and, to some extent, experiment management are obsolete, inconsistent with the state of modern nuclear engineering practice, and are becoming increasingly difficult to properly verify and validate (V&V). Furthermore, the legacy staff knowledge required for application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In 2009 the Idaho National Laboratory (INL) initiated a focused effort to address this situation through the introduction of modern high-fidelity computational software and protocols, with appropriate V&V, within the next 3-4 years via the ATR Core Modeling and Simulation and V&V Update (or 'Core Modeling Update') Project. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF). The ATR Core Modeling Update Project, targeted for full implementation in phase with the anticipated ATR Core Internals Changeout (CIC) in the 2014 time frame, began during the last quarter of Fiscal Year 2009, and has just completed its first full year. Key accomplishments so far have encompassed both computational and experimental work. A new suite of stochastic and deterministic transport-theory-based reactor physics codes and their supporting nuclear data libraries (SCALE, KENO-6, HELIOS, NEWT, and ATTILA) have been installed at the INL under various permanent sitewide license agreements, and corresponding baseline models of the ATR and ATRC are now operational, demonstrating the basic feasibility of these code packages for their intended purpose. Furthermore, a capability for rigorous sensitivity analysis and uncertainty quantification based on the TSUNAMI system is being implemented and initial computational results have been obtained. This capability will have many applications in 2011 and beyond as a tool for understanding the margins of uncertainty in the new models as well as for validation experiment design and interpretation. Finally, we note that although full implementation of the new computational models and protocols will extend over a period of 3-4 years as noted above, interim applications in the much nearer term have already been demonstrated. In particular, these demonstrations included an analysis that was useful for understanding the cause of some issues in December 2009 that were triggered by a larger than acceptable discrepancy between the measured excess core reactivity and a calculated value that was based on the legacy computational methods. As the Modeling Update Project proceeds, we anticipate further such interim, informal applications in parallel with formal qualification of the system under the applicable INL Quality Assurance procedures and standards.

  10. Mission: Define Computer Literacy. The Illinois-Wisconsin ISACS Computer Coordinators' Committee on Computer Literacy Report (May 1985).

    ERIC Educational Resources Information Center

    Computing Teacher, 1985

    1985-01-01

    Defines computer literacy and describes a computer literacy course which stresses ethics, hardware, and disk operating systems throughout. Core units on keyboarding, word processing, graphics, database management, problem solving, algorithmic thinking, and programing are outlined, together with additional units on spreadsheets, simulations,…

  11. Girls and Computing: Female Participation in Computing in Schools

    ERIC Educational Resources Information Center

    Zagami, Jason; Boden, Marie; Keane, Therese; Moreton, Bronwyn; Schulz, Karsten

    2015-01-01

    Computer education, with a focus on Computer Science, has become a core subject in the Australian Curriculum and the focus of national innovation initiatives. Equal participation by girls, however, remains unlikely based on their engagement with computing in recent decades. In seeking to understand why this may be the case, a Delphi consensus…

  12. Damage Tolerance of Sandwich Plates With Debonded Face Sheets

    NASA Technical Reports Server (NTRS)

    Sankar, Bhavani V.

    2001-01-01

    A nonlinear finite element analysis was performed to simulate axial compression of sandwich beams with debonded face sheets. Load versus end-shortening diagrams were generated for a variety of specimens used in a previous experimental study. The energy release rate at the crack tip was computed using the J-integral, and plotted as a function of the load. A detailed stress analysis was performed and the critical stresses in the face sheet and the core were computed. The core was also modeled as an isotropic elastic-perfectly plastic material and a nonlinear post-buckling analysis was performed. A Graeco-Latin factorial plan was used to study the effects of debond length, face sheet and core thicknesses, and core density on the load-carrying capacity of the sandwich composite. It has been found that a linear buckling analysis is inadequate in determining the maximum load a debonded sandwich beam can carry. A nonlinear post-buckling analysis combined with an elastoplastic model of the core is required to predict the compression behavior of debonded sandwich beams.
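    For reference, the energy release rate mentioned above comes from the J-integral, whose standard path-independent contour form is (textbook definition; the finite element evaluation in the study is a discrete version of it):

        J = \int_{\Gamma} \left( W\,\mathrm{d}y \;-\; T_i\,\frac{\partial u_i}{\partial x}\,\mathrm{d}s \right)

    where Γ is a contour surrounding the crack tip, W the strain energy density, T_i the traction components on Γ, u_i the displacements, x the direction of crack advance, and s the arc length along the contour.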

  13. Defining Computational Thinking for Mathematics and Science Classrooms

    ERIC Educational Resources Information Center

    Weintrop, David; Beheshti, Elham; Horn, Michael; Orton, Kai; Jona, Kemi; Trouille, Laura; Wilensky, Uri

    2016-01-01

    Science and mathematics are becoming computational endeavors. This fact is reflected in the recently released Next Generation Science Standards and the decision to include "computational thinking" as a core scientific practice. With this addition, and the increased presence of computation in mathematics and scientific contexts, a new…

  14. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets

    PubMed Central

    2010-01-01

    Background Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. Results We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. Conclusions The results demonstrate that the CBE processor in a PlayStation 3 accelerates computational intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics. PMID:20064262

  15. Frozen-Orbital and Downfolding Calculations with Auxiliary-Field Quantum Monte Carlo.

    PubMed

    Purwanto, Wirawan; Zhang, Shiwei; Krakauer, Henry

    2013-11-12

    We describe the implementation of the frozen-orbital and downfolding approximations in the auxiliary-field quantum Monte Carlo (AFQMC) method. These approaches can provide significant computational savings, compared to fully correlating all of the electrons. While the many-body wave function is never explicit in AFQMC, its random walkers are Slater determinants, whose orbitals may be expressed in terms of any one-particle orbital basis. It is therefore straightforward to partition the full N-particle Hilbert space into active and inactive parts to implement the frozen-orbital method. In the frozen-core approximation, for example, the core electrons can be eliminated in the correlated part of the calculations, greatly increasing the computational efficiency, especially for heavy atoms. Scalar relativistic effects are easily included using the Douglas-Kroll-Hess theory. Using this method, we obtain a way to effectively eliminate the error due to single-projector, norm-conserving pseudopotentials in AFQMC. We also illustrate a generalization of the frozen-orbital approach that downfolds high-energy basis states to a physically relevant low-energy sector, which allows a systematic approach to produce realistic model Hamiltonians to further increase efficiency for extended systems.

  16. Preliminary validation of computational model for neutron flux prediction of Thai Research Reactor (TRR-1/M1)

    NASA Astrophysics Data System (ADS)

    Sabaibang, S.; Lekchaum, S.; Tipayakul, C.

    2015-05-01

    This study is part of on-going work to develop a computational model of the Thai Research Reactor (TRR-1/M1) capable of accurately predicting the neutron flux level and spectrum. The computational model was created with the MCNPX program, and the CT (Central Thimble) in-core irradiation facility was selected as the location for validation. The comparison was performed with the flux measurement method routinely practiced at TRR-1/M1, namely the foil activation technique. In this technique, a gold foil is irradiated for a certain period of time and the activity of the irradiated target is measured to derive the thermal neutron flux. Additionally, a flux measurement with an SPND (self-powered neutron detector) was performed for comparison. The thermal neutron flux from the MCNPX simulation was found to be 1.79×10¹³ neutron/cm²·s, while that from the foil activation measurement was 4.68×10¹³ neutron/cm²·s. The thermal neutron flux from the SPND measurement was 2.47×10¹³ neutron/cm²·s. An assessment of the differences among the three methods was performed. The difference between MCNPX and the foil activation technique was found to be 67.8%, and the difference between MCNPX and the SPND was found to be 27.8%.
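    For context, the foil activation technique infers the thermal flux from the measured foil activity via the standard activation relation (a textbook expression, not necessarily the exact formulation used at TRR-1/M1):

        \phi = \frac{A}{N\,\sigma_{\mathrm{act}}\,\bigl(1 - e^{-\lambda t_{\mathrm{irr}}}\bigr)\, e^{-\lambda t_{\mathrm{d}}}}

    where A is the measured activity of the gold foil, N the number of target nuclei, σ_act the activation cross section, λ the decay constant of the activation product, t_irr the irradiation time, and t_d the decay time between the end of irradiation and counting.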

  17. More reliable forecasts with less precise computations: a fast-track route to cloud-resolved weather and climate simulators?

    PubMed Central

    Palmer, T. N.

    2014-01-01

    This paper sets out a new methodological approach to solving the equations for simulating and predicting weather and climate. In this approach, the conventionally hard boundary between the dynamical core and the sub-grid parametrizations is blurred. This approach is motivated by the relatively shallow power-law spectrum for atmospheric energy on scales of hundreds of kilometres and less. It is first argued that, because of this, the closure schemes for weather and climate simulators should be based on stochastic–dynamic systems rather than deterministic formulae. Second, as high-wavenumber elements of the dynamical core will necessarily inherit this stochasticity during time integration, it is argued that the dynamical core will be significantly over-engineered if all computations, regardless of scale, are performed completely deterministically and if all variables are represented with maximum numerical precision (in practice using double-precision floating-point numbers). As the era of exascale computing is approached, an energy- and computationally efficient approach to cloud-resolved weather and climate simulation is described where determinism and numerical precision are focused on the largest scales only. PMID:24842038

  18. More reliable forecasts with less precise computations: a fast-track route to cloud-resolved weather and climate simulators?

    PubMed

    Palmer, T N

    2014-06-28

    This paper sets out a new methodological approach to solving the equations for simulating and predicting weather and climate. In this approach, the conventionally hard boundary between the dynamical core and the sub-grid parametrizations is blurred. This approach is motivated by the relatively shallow power-law spectrum for atmospheric energy on scales of hundreds of kilometres and less. It is first argued that, because of this, the closure schemes for weather and climate simulators should be based on stochastic-dynamic systems rather than deterministic formulae. Second, as high-wavenumber elements of the dynamical core will necessarily inherit this stochasticity during time integration, it is argued that the dynamical core will be significantly over-engineered if all computations, regardless of scale, are performed completely deterministically and if all variables are represented with maximum numerical precision (in practice using double-precision floating-point numbers). As the era of exascale computing is approached, an energy- and computationally efficient approach to cloud-resolved weather and climate simulation is described where determinism and numerical precision are focused on the largest scales only.
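    The precision argument above can be illustrated with a toy numpy comparison of time integration at single, half, and double precision (a generic illustration, not the paper's experiment):

        import numpy as np

        def euler_decay(dtype, steps=100_000, dt=1e-4):
            # Explicit Euler integration of dx/dt = -x carried out entirely
            # at the requested floating-point precision.
            x = dtype(1.0)
            h = dtype(dt)
            for _ in range(steps):
                x = x - h * x
            return float(x)

        reference = euler_decay(np.float64)
        for dtype in (np.float32, np.float16):
            value = euler_decay(dtype)
            # Rounding errors accumulate over the time integration; the lower
            # the precision, the further the trajectory drifts from the
            # double-precision reference.
            print(dtype.__name__, value,
                  "relative drift:", abs(value - reference) / reference)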

  19. Neurocognitive mechanisms underlying value-based decision-making: from core values to economic value

    PubMed Central

    Brosch, Tobias; Sander, David

    2013-01-01

    Value plays a central role in practically every aspect of human life that requires a decision: whether we choose between different consumer goods, whether we decide which person we marry or which political candidate gets our vote, we choose the option that has more value to us. Over the last decade, neuroeconomic research has mapped the neural substrates of economic value, revealing that activation in brain regions such as ventromedial prefrontal cortex (VMPFC), ventral striatum or posterior cingulate cortex reflects how much an individual values an option and which of several options he/she will choose. However, while great progress has been made exploring the mechanisms underlying concrete decisions, neuroeconomic research has been less concerned with the questions of why people value what they value, and why different people value different things. Social psychologists and sociologists have long been interested in core values, motivational constructs that are intrinsically linked to the self-schema and are used to guide actions and decisions across different situations and different time points. Core value may thus be an important determinant of individual differences in economic value computation and decision-making. Based on a review of recent neuroimaging studies investigating the neural representation of core values and their interactions with neural systems representing economic value, we outline a common framework that integrates the core value concept and neuroeconomic research on value-based decision-making. PMID:23898252

  20. Efficient provisioning for multi-core applications with LSF

    NASA Astrophysics Data System (ADS)

    Dal Pra, Stefano

    2015-12-01

    Tier-1 sites providing computing power for HEP experiments are usually tightly designed for high-throughput performance. This is pursued by reducing the variety of supported use cases and tuning for the most important ones, which has historically meant single-core jobs. Moreover, the usual workload is saturation: each available core in the farm is in use and queued jobs are waiting for their turn to run. Enabling multi-core jobs thus requires dedicating a number of hosts on which to run them and waiting for those hosts to free the needed number of cores. This drain time introduces a loss of computing power driven by the number of unusable empty cores. As demand for multi-core capable resources has increased, a Task Force has been constituted in WLCG with the goal of defining a simple and efficient multi-core resource provisioning model. This paper details the work done at the INFN Tier-1 to enable multi-core support for the LSF batch system, with the intent of reducing the average number of unused cores to a minimum. The adopted strategy is to dedicate to multi-core jobs a dynamic set of nodes, whose size is mainly driven by the number of pending multi-core requests and the fair-share priority of the submitting users. The node status transition, from single-core to multi-core and vice versa, is driven by a finite state machine implemented in a custom multi-core director script running in the cluster. After describing and motivating both the implementation and the details specific to the LSF batch system, performance results are reported. Factors with positive and negative impact on the overall efficiency are discussed, and solutions to minimize the negative ones are proposed.
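    A minimal sketch of the kind of finite state machine such a director script might implement (the states, thresholds, and decision inputs here are illustrative assumptions, not the INFN Tier-1 implementation):

        from enum import Enum, auto

        class NodeState(Enum):
            SINGLE_CORE = auto()   # node accepts ordinary single-core jobs
            DRAINING = auto()      # no new single-core jobs; waiting to empty
            MULTI_CORE = auto()    # node reserved for multi-core payloads

        def next_state(state, pending_multicore, running_singlecore,
                       drain_threshold=1):
            # One decision step of a toy multi-core director:
            #   pending_multicore  - queued multi-core requests in the batch system
            #   running_singlecore - single-core jobs still running on this node
            if state is NodeState.SINGLE_CORE:
                if pending_multicore >= drain_threshold:
                    return NodeState.DRAINING      # demand built up: start draining
            elif state is NodeState.DRAINING:
                if running_singlecore == 0:
                    return NodeState.MULTI_CORE    # node is empty: hand it over
                if pending_multicore == 0:
                    return NodeState.SINGLE_CORE   # demand vanished: stop draining
            elif state is NodeState.MULTI_CORE:
                if pending_multicore == 0:
                    return NodeState.SINGLE_CORE   # give the node back to the pool
            return state

        # Toy walk-through: demand appears, the node drains, serves multi-core
        # jobs, then returns to the single-core pool once demand disappears.
        state = NodeState.SINGLE_CORE
        for pending, running in [(3, 5), (3, 2), (2, 0), (1, 0), (0, 0)]:
            state = next_state(state, pending, running)
            print(pending, running, state.name)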

  1. Has First-Grade Core Reading Program Text Complexity Changed across Six Decades?

    ERIC Educational Resources Information Center

    Fitzgerald, Jill; Elmore, Jeff; Relyea, Jackie Eunjung; Hiebert, Elfrieda H.; Stenner, A. Jackson

    2016-01-01

    The purpose of the study was to address possible text complexity shifts across the past six decades for a continually best-selling first-grade core reading program. The anthologies of one publisher's seven first-grade core reading programs were examined using computer-based analytics, dating from 1962 to 2013. Variables were Overall Text…

  2. Assessing Mathematics Automatically Using Computer Algebra and the Internet

    ERIC Educational Resources Information Center

    Sangwin, Chris

    2004-01-01

    This paper reports some recent developments in mathematical computer-aided assessment which employs computer algebra to evaluate students' work using the Internet. Technical and educational issues raised by this use of computer algebra are addressed. Working examples from core calculus and algebra which have been used with first year university…

  3. Computer-Assisted Exposure Treatment for Flight Phobia

    ERIC Educational Resources Information Center

    Tortella-Feliu, Miguel; Bornas, Xavier; Llabres, Jordi

    2008-01-01

    This review introduces the state of the art in computer-assisted treatment for behavioural disorders. The core of the paper is devoted to describe one of these interventions providing computer-assisted exposure for flight phobia treatment, the Computer-Assisted Fear of Flying Treatment (CAFFT). The rationale, contents and structure of the CAFFT…

  4. Circuit-Switched Memory Access in Photonic Interconnection Networks for High-Performance Embedded Computing

    DTIC Science & Technology

    2010-07-22

    dependent, providing a natural bandwidth match between compute cores and the memory subsystem. • High Bandwidth Density. Waveguides crossing the chip...simulate this memory access architecture on a 256-core chip with a concentrated 64-node network using detailed traces of high-performance embedded...memory modules, we place memory access points (MAPs) around the periphery of the chip connected to the network. These MAPs, shown in Figure 4, contain

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chow, Edmond

    Solving sparse problems is at the core of many DOE computational science applications. We focus on the challenge of developing sparse algorithms that can fully exploit the parallelism in extreme-scale computing systems, in particular systems with massive numbers of cores per node. Our approach is to express a sparse matrix factorization as a large number of bilinear constraint equations, and then to solve these equations via an asynchronous iterative method. The unknowns in these equations are the matrix entries of the desired factorization.
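    A minimal dense-matrix sketch of the formulation (a synchronous Jacobi-sweep stand-in for the asynchronous updates, using the full sparsity pattern so that the fixed point is the exact LU factorization; this illustrates the idea, not the project's solver):

        import numpy as np

        def fixed_point_lu(A, sweeps=30):
            # Treat the entries of unit-lower-triangular L and upper-triangular U
            # as unknowns constrained by the bilinear equations
            # (L @ U)[i, j] = A[i, j], and iterate sweeps that update each unknown
            # from the previous values of the others.  The real method performs
            # these updates asynchronously in parallel and restricts them to a
            # sparsity pattern.
            n = A.shape[0]
            L = np.tril(A, k=-1).copy()
            np.fill_diagonal(L, 1.0)
            U = np.triu(A).copy()
            for _ in range(sweeps):
                L_new, U_new = L.copy(), U.copy()
                for i in range(n):
                    for j in range(n):
                        s = L[i, :min(i, j)] @ U[:min(i, j), j]
                        if i > j:
                            L_new[i, j] = (A[i, j] - s) / U[j, j]
                        else:
                            U_new[i, j] = A[i, j] - s
                L, U = L_new, U_new
            return L, U

        # Toy usage on a small, diagonally dominant matrix.
        rng = np.random.default_rng(0)
        A = rng.random((6, 6)) + 6.0 * np.eye(6)
        L, U = fixed_point_lu(A)
        print("factorization residual:", np.linalg.norm(A - L @ U))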

  6. Reconfigurable modular computer networks for spacecraft on-board processing

    NASA Technical Reports Server (NTRS)

    Rennels, D. A.

    1978-01-01

    The core electronics subsystems on unmanned spacecraft, which have been sent over the last 20 years to investigate the moon, Mars, Venus, and Mercury, have progressed through an evolution from simple fixed controllers and analog computers in the 1960's to general-purpose digital computers in current designs. This evolution is now moving in the direction of distributed computer networks. Current Voyager spacecraft already use three on-board computers. One is used to store commands and provide overall spacecraft management. Another is used for instrument control and telemetry collection, and the third computer is used for attitude control and scientific instrument pointing. An examination of the control logic in the instruments shows that, for many, it is cost-effective to replace the sequencing logic with a microcomputer. The Unified Data System architecture considered consists of a set of standard microcomputers connected by several redundant buses. A typical self-checking computer module will contain 23 RAMs, two microprocessors, one memory interface, three bus interfaces, and one core building block.

  7. The EPOS Vision for the Open Science Cloud

    NASA Astrophysics Data System (ADS)

    Jeffery, Keith; Harrison, Matt; Cocco, Massimo

    2016-04-01

    Cloud computing offers dynamic elastic scalability for data processing on demand. For much research activity, demand for computing is uneven over time, so cloud computing offers both cost-effectiveness and capacity advantages. However, as reported repeatedly by the EC Cloud Expert Group, there are barriers to the uptake of cloud computing: (1) security and privacy; (2) interoperability (avoidance of lock-in); (3) lack of appropriate systems development environments for application programmers to characterise their applications so as to allow cloud middleware to optimize their deployment and execution. From CERN, the Helix-Nebula group has proposed the architecture for the European Open Science Cloud. They are discussing with other e-Infrastructure groups such as EGI (GRIDs), EUDAT (data curation), AARC (network authentication and authorisation) and also with the EIROFORUM group of 'international treaty' RIs (Research Infrastructures) and the ESFRI (European Strategic Forum for Research Infrastructures) RIs, including EPOS. Many of these RIs are either e-RIs (electronic RIs) or have an e-RI interface for access and use. The EPOS architecture is centred on a portal: the ICS (Integrated Core Services). The architectural design already allows for access to e-RIs (which may include any or all of data, software, users and resources such as computers or instruments). Those within any one domain (subject area) of EPOS are considered within the TCS (Thematic Core Services). Those outside, or available across multiple domains of EPOS, are ICS-d (Integrated Core Services-Distributed), since the intention is that they will be used by any or all of the TCS via the ICS. Another such service type is CES (Computational Earth Science), effectively an ICS-d specializing in high performance computation, analytics, simulation or visualization offered by a TCS for others to use. Discussions are already underway between EPOS and EGI, EUDAT, AARC and Helix-Nebula for those offerings to be considered as ICS-ds by EPOS. Provision of access to ICS-ds from the ICS-C concerns several aspects: (a) technical: it may be more or less difficult to connect and pass from the ICS-C to the ICS-d/CES the 'package' (probably a virtual machine) of data and software; (b) security/privacy: including passing personal information, e.g. related to AAAI (Authentication, Authorisation, Accounting Infrastructure); (c) financial and legal: such as payment and licence conditions. Appropriate interfaces from the ICS-C to ICS-ds are being designed to accommodate these aspects. The Open Science Cloud is timely because it provides a framework to discuss governance and sustainability for computational resource provision as well as an effective interpretation of a federated approach to HPC (High Performance Computing) and HTC (High Throughput Computing). It will be a unique opportunity to share and adopt procurement policies to provide access to computational resources for RIs. The current state of discussions and the expected roadmap for the EPOS-Open Science Cloud relationship are presented.

  8. Active Job Monitoring in Pilots

    NASA Astrophysics Data System (ADS)

    Kuehn, Eileen; Fischer, Max; Giffels, Manuel; Jung, Christopher; Petzold, Andreas

    2015-12-01

    Recent developments in high energy physics (HEP), including multi-core jobs and multi-core pilots, require data centres to gain a deep understanding of the system in order to monitor, design, and upgrade computing clusters. Networking is a critical component. In particular, the increased usage of data federations, for example in diskless computing centres or as a fallback solution, relies on WAN connectivity and availability. The specific demands of different experiments and communities, but also the need for identification of misbehaving batch jobs, require active monitoring. Existing monitoring tools are not capable of measuring fine-grained information at batch job level. This complicates network-aware scheduling and optimisations. In addition, pilots add another layer of abstraction. They behave like batch systems themselves by managing and executing payloads of jobs internally. The number of real jobs being executed is unknown, as the original batch system has no access to internal information about the scheduling process inside the pilots. Therefore, the comparability of jobs and pilots for predicting run-time behaviour or network performance cannot be ensured. Hence, identifying the actual payload is important. At the GridKa Tier 1 centre a specific tool is in use that allows the monitoring of network traffic information at batch job level. This contribution presents the current monitoring approach and discusses recent efforts to identify pilots and their substructures inside the batch system, and the importance of doing so. It also shows how to determine monitoring data of specific jobs from identified pilots. Finally, the approach is evaluated.

  9. The effects of therapeutic hip exercise with abdominal core activation on recruitment of the hip muscles.

    PubMed

    Chan, Mandy Ky; Chow, Ka Wai; Lai, Alfred Ys; Mak, Noble Kc; Sze, Jason Ch; Tsang, Sharon Mh

    2017-07-21

    Core stabilization has been utilized for rehabilitation and prevention of lower limb musculoskeletal injuries. Previous studies showed that activation of the abdominal core muscles enhanced hip muscle activity in hip extension and abduction exercises. However, the lack of direct measurement and quantification of the activation level of the abdominal core muscles during the execution of the hip exercises limits the level of evidence available to substantiate the proposed application of core exercises to promote training and rehabilitation outcomes in the hip region. The aim of the present study was to examine the effects of abdominal core activation, monitored directly by surface electromyography (EMG), on hip muscle activation while performing different hip exercises, and to explore whether participant characteristics such as gender, physical activity level and the contractile properties of muscles, as assessed by tensiomyography (TMG), have a confounding effect on the activation of hip muscles in the enhanced core condition. Surface EMG of the bilateral internal obliques (IO), upper gluteus maximus (UGMax), lower gluteus maximus (LGMax), gluteus medius (GMed) and biceps femoris (BF) of the dominant leg was recorded in 20 young healthy subjects while performing 3 hip exercises: the Clam, side-lying hip abduction (HABD), and prone hip extension (PHE), in 2 conditions: natural core activation (NC) and enhanced core activation (CO). EMG signals normalized to percentage of maximal voluntary isometric contraction (%MVIC) were compared between the two core conditions, with the threshold of the enhanced abdominal core condition defined as >20%MVIC of IO. Enhanced abdominal core activation significantly increased the activation level of GMed in all phases of the Clam exercise (P < 0.05), UGMax in all phases of the PHE exercise (P < 0.05), LGMax in the eccentric phases of all 3 exercises (P < 0.05), and BF in all phases of all 3 exercises except the eccentric phase of the PHE exercise (P < 0.05). The %MVIC of UGMax was significantly higher than that of LGMax in all phases of the Clam and HABD exercises under both CO and NC conditions (P < 0.001), while the %MVIC of LGMax was significantly higher than that of UGMax in the concentric phase of the PHE exercise under the NC condition (P = 0.003). Gender, physical activity level and TMG parameters were not major covariates of hip muscle activation under the enhanced core condition. Abdominal core activation enhances hip muscle recruitment in the Clam, HABD and PHE exercises, and this enhancement is correlated with higher physical activity and stiffer hip muscles. Our results suggest the potential application of abdominal core activation for lower limb rehabilitation, since the increased activation of target hip muscles may enhance the therapeutic effects of hip strengthening exercises.

  10. Monte Carlo Approach for Estimating Density and Atomic Number From Dual-Energy Computed Tomography Images of Carbonate Rocks

    NASA Astrophysics Data System (ADS)

    Victor, Rodolfo A.; Prodanović, Maša.; Torres-Verdín, Carlos

    2017-12-01

    We develop a new Monte Carlo-based inversion method for estimating electron density and effective atomic number from 3-D dual-energy computed tomography (CT) core scans. The method accounts for uncertainties in X-ray attenuation coefficients resulting from the polychromatic nature of X-ray beam sources of medical and industrial scanners, in addition to delivering uncertainty estimates of inversion products. Estimation of electron density and effective atomic number from CT core scans enables direct deterministic or statistical correlations with salient rock properties for improved petrophysical evaluation; this condition is specifically important in media such as vuggy carbonates where CT resolution better captures core heterogeneity that dominates fluid flow properties. Verification tests of the inversion method performed on a set of highly heterogeneous carbonate cores yield very good agreement with in situ borehole measurements of density and photoelectric factor.
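    For orientation, dual-energy inversion for electron density and effective atomic number is commonly posed with a two-term decomposition of the attenuation coefficient into Compton and photoelectric contributions (a widely used textbook parameterization given here for illustration; the exponent and the calibration functions α and β are assumptions, and the paper's Monte Carlo treatment additionally accounts for the polychromatic beam spectrum):

        \mu(E) \;\approx\; \rho_e\,\bigl[\alpha(E) + \beta(E)\, Z_{\mathrm{eff}}^{\,m}\bigr], \qquad m \approx 3\text{--}4,

    so that attenuation measured at two energies (or spectra) provides two equations in the two unknowns ρ_e and Z_eff.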

  11. Accelerating 3D Elastic Wave Equations on Knights Landing based Intel Xeon Phi processors

    NASA Astrophysics Data System (ADS)

    Sourouri, Mohammed; Birger Raknes, Espen

    2017-04-01

    In advanced imaging methods like reverse-time migration (RTM) and full waveform inversion (FWI) the elastic wave equation (EWE) is numerically solved many times to create the seismic image or the elastic parameter model update. Thus, it is essential to optimize the solution time for solving the EWE as this will have a major impact on the total computational cost in running RTM or FWI. From a computational point of view, applications implementing EWEs are associated with two major challenges. The first challenge is the amount of memory-bound computations involved, while the second challenge is the execution of such computations over very large datasets. So far, multi-core processors have not been able to tackle these two challenges, which eventually led to the adoption of accelerators such as Graphics Processing Units (GPUs). Compared to conventional CPUs, GPUs are densely populated with many floating-point units and fast memory, a type of architecture that has proven to map well to many scientific computations. Despite their architectural advantages, full-scale adoption of accelerators has yet to materialize. First, accelerators require a significant programming effort imposed by programming models such as CUDA or OpenCL. Second, accelerators come with a limited amount of memory, which also requires explicit data transfers between the CPU and the accelerator over the slow PCI bus. The second generation of the Xeon Phi processor, based on the Knights Landing (KNL) architecture, promises the computational capabilities of an accelerator but requires the same programming effort as traditional multi-core processors. The high computational performance is realized through many integrated cores (the number of cores, tiles and memory varies with the model) organized in tiles that are connected via a 2D mesh-based interconnect. In contrast to accelerators, KNL is a self-hosted system, meaning explicit data transfers over the PCI bus are no longer required. However, like most accelerators, KNL sports a memory subsystem consisting of low-level caches and 16 GB of high-bandwidth MCDRAM memory. For capacity computing, up to 400 GB of conventional DDR4 memory is provided. Such a strict hierarchical memory layout means that data locality is imperative if the true potential of this product is to be harnessed. In this work, we study a series of optimizations specifically targeting KNL for our EWE-based application to reduce the time-to-solution for the following 3D model sizes in grid points: 128³, 256³ and 512³. We compare the results with an optimized version for multi-core CPUs running on a dual-socket Xeon E5 2680v3 system using OpenMP. Our initial naive implementation on the KNL is roughly 20% faster than the multi-core version, but by using only one thread per core and careful memory placement using the memkind library, we could achieve higher speedups. Additionally, by using the MCDRAM as cache for problem sizes that are smaller than 16 GB, further performance improvements were unlocked. Depending on the problem size, our overall results indicate that the KNL-based system is approximately 2.2x faster than the 24-core Xeon E5 2680v3 system, with only modest changes to the code.

  12. Performance implications from sizing a VM on multi-core systems: A data analytic application's view

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Horey, James L; Begoli, Edmon

    In this paper, we present a quantitative performance analysis of data analytics applications running on multi-core virtual machines. Such environments form the core of cloud computing. In addition, data analytics applications, such as Cassandra and Hadoop, are becoming increasingly popular on cloud computing platforms. This convergence necessitates a better understanding of the performance and cost implications of such hybrid systems. For example, the very first step in hosting applications in virtualized environments requires the user to configure the number of virtual processors and the size of memory. To understand performance implications of this step, we benchmarked three Yahoo Cloud Serving Benchmark (YCSB) workloads in a virtualized multi-core environment. Our measurements indicate that the performance of Cassandra for YCSB workloads does not heavily depend on the processing capacity of a system, while the size of the data set is critical to performance relative to allocated memory. We also identified a strong relationship between the running time of workloads and various hardware events (last level cache loads, misses, and CPU migrations). From this analysis, we provide several suggestions to improve the performance of data analytics applications running on cloud computing environments.

  13. Application of a hybrid MPI/OpenMP approach for parallel groundwater model calibration using multi-core computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan

    2010-01-01

    Calibration of groundwater models involves hundreds to thousands of forward solutions, each of which may solve many transient coupled nonlinear partial differential equations, resulting in a computationally intensive problem. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multi-core computers. HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for direct solutions for a reactive transport model application and a field-scale coupled flow and transport model application. In the reactive transport model, a single parallelizable loop is identified to account for over 97% of the total computational time using GPROF. Addition of a few lines of OpenMP compiler directives to the loop yields a speedup of about 10 on a 16-core compute node. For the field-scale model, parallelizable loops in 14 of 174 HGC5 subroutines that require 99% of the execution time are identified. As these loops are parallelized incrementally, the scalability is found to be limited by a loop where Cray PAT detects cache miss rates of over 90%. With this loop rewritten, a speedup similar to that of the first application is achieved. The OpenMP-parallelized code can be run efficiently on multiple workstations in a network or on multiple compute nodes of a cluster as slaves using parallel PEST to speed up model calibration. To run calibration on clusters as a single task, the Levenberg-Marquardt algorithm is added to HGC5, with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, 100-200 compute cores are used to reduce the calibration time from weeks to a few hours for these two applications. This approach is applicable to most existing groundwater model codes for many applications.

  14. New core-reflector boundary conditions for transient nodal reactor calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, E.K.; Kim, C.H.; Joo, H.K.

    1995-09-01

    New core-reflector boundary conditions designed for the exclusion of the reflector region in transient nodal reactor calculations are formulated. Spatially flat frequency approximations for the temporal neutron behavior and two types of transverse leakage approximations in the reflector region are introduced to solve the transverse-integrated time-dependent one-dimensional diffusion equation and then to obtain relationships between net current and flux at the core-reflector interfaces. To examine the effectiveness of the new core-reflector boundary conditions in transient nodal reactor computations, nodal expansion method (NEM) computations with and without explicit representation of the reflector are performed for the Laboratorium fuer Reaktorregelung und Anlagen (LRA) boiling water reactor (BWR) and Nuclear Energy Agency Committee on Reactor Physics (NEACRP) pressurized water reactor (PWR) rod ejection kinetics benchmark problems. Good agreement between the two NEM computations is demonstrated in all the important transient parameters of the two benchmark problems. A significant amount of CPU time saving is also demonstrated with the boundary condition model with transverse leakage (BCMTL) approximations in the reflector region. In the three-dimensional LRA BWR, the BCMTL and the explicit reflector model computations differ by approximately 4% in transient peak power density, while the BCMTL results in >40% CPU time saving by excluding both the axial and the radial reflector regions from explicit computational nodes. In the NEACRP PWR problem, which includes six different transient cases, the largest difference is 24.4% in the transient maximum power in the one-node-per-assembly B1 transient results. This difference in the transient maximum power of the B1 case is shown to reduce to 11.7% in the four-node-per-assembly computations. As for the computing time, BCMTL is shown to reduce the CPU time by more than 20% in all six transient cases of the NEACRP PWR.

  15. Tuning the Activity of Carbon for Electrocatalytic Hydrogen Evolution via an Iridium-Cobalt Alloy Core Encapsulated in Nitrogen-Doped Carbon Cages.

    PubMed

    Jiang, Peng; Chen, Jitang; Wang, Changlai; Yang, Kang; Gong, Shipeng; Liu, Shuai; Lin, Zhiyu; Li, Mengsi; Xia, Guoliang; Yang, Yang; Su, Jianwei; Chen, Qianwang

    2018-03-01

    Graphene, a 2D material consisting of a single layer of sp²-hybridized carbon, exhibits inert activity as an electrocatalyst, while the incorporation of heteroatoms (such as N) into the framework can tune its electronic properties. Because of the different electronegativity of N and C atoms, electrons will transfer from C to N in N-doped graphene nanosheets, changing inert C atoms adjacent to the N-dopants into active sites. Notwithstanding the progress achieved, the intrinsic activity of such materials in acidic media is still far from that of Pt/C. Here, a facile annealing strategy is adopted for Ir-doped metal-organic frameworks to synthesize IrCo nanoalloys encapsulated in N-doped graphene layers. The highly active electrocatalyst, with remarkably reduced Ir loading (1.56 wt%), achieves an ultralow Tafel slope of 23 mV dec⁻¹ and an overpotential of only 24 mV at a current density of 10 mA cm⁻² in 0.5 m sulfuric acid solution. This performance is even superior to that of the noble-metal catalyst Pt. Surface structural and computational studies reveal that the superior behavior originates from the decreased ΔG_H* for the HER induced by electrons transferred from the alloy core to the graphene layers, which is beneficial for enhancing C-H binding. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. GENIE: a software package for gene-gene interaction analysis in genetic association studies using multiple GPU or CPU cores.

    PubMed

    Chikkagoudar, Satish; Wang, Kai; Li, Mingyao

    2011-05-26

    Gene-gene interaction in genetic association studies is computationally intensive when a large number of SNPs are involved. Most of the latest Central Processing Units (CPUs) have multiple cores, whereas Graphics Processing Units (GPUs) also have hundreds of cores and have been recently used to implement faster scientific software. However, currently there are no genetic analysis software packages that allow users to fully utilize the computing power of these multi-core devices for genetic interaction analysis for binary traits. Here we present a novel software package GENIE, which utilizes the power of multiple GPU or CPU processor cores to parallelize the interaction analysis. GENIE reads an entire genetic association study dataset into memory and partitions the dataset into fragments with non-overlapping sets of SNPs. For each fragment, GENIE analyzes: 1) the interaction of SNPs within it in parallel, and 2) the interaction between the SNPs of the current fragment and other fragments in parallel. We tested GENIE on a large-scale candidate gene study on high-density lipoprotein cholesterol. Using an NVIDIA Tesla C1060 graphics card, the GPU mode of GENIE achieves a speedup of 27 times over its single-core CPU mode run. GENIE is open-source, economical, user-friendly, and scalable. Since the computing power and memory capacity of graphics cards are increasing rapidly while their cost is going down, we anticipate that GENIE will achieve greater speedups with faster GPU cards. Documentation, source code, and precompiled binaries can be downloaded from http://www.cceb.upenn.edu/~mli/software/GENIE/.

  17. GENIE: a software package for gene-gene interaction analysis in genetic association studies using multiple GPU or CPU cores

    PubMed Central

    2011-01-01

    Background Gene-gene interaction in genetic association studies is computationally intensive when a large number of SNPs are involved. Most of the latest Central Processing Units (CPUs) have multiple cores, whereas Graphics Processing Units (GPUs) also have hundreds of cores and have been recently used to implement faster scientific software. However, currently there are no genetic analysis software packages that allow users to fully utilize the computing power of these multi-core devices for genetic interaction analysis for binary traits. Findings Here we present a novel software package GENIE, which utilizes the power of multiple GPU or CPU processor cores to parallelize the interaction analysis. GENIE reads an entire genetic association study dataset into memory and partitions the dataset into fragments with non-overlapping sets of SNPs. For each fragment, GENIE analyzes: 1) the interaction of SNPs within it in parallel, and 2) the interaction between the SNPs of the current fragment and other fragments in parallel. We tested GENIE on a large-scale candidate gene study on high-density lipoprotein cholesterol. Using an NVIDIA Tesla C1060 graphics card, the GPU mode of GENIE achieves a speedup of 27 times over its single-core CPU mode run. Conclusions GENIE is open-source, economical, user-friendly, and scalable. Since the computing power and memory capacity of graphics cards are increasing rapidly while their cost is going down, we anticipate that GENIE will achieve greater speedups with faster GPU cards. Documentation, source code, and precompiled binaries can be downloaded from http://www.cceb.upenn.edu/~mli/software/GENIE/. PMID:21615923

  18. Simulating an Exploding Fission-Bomb Core

    NASA Astrophysics Data System (ADS)

    Reed, Cameron

    2016-03-01

    A time-dependent desktop-computer simulation of the core of an exploding fission bomb (nuclear weapon) has been developed. The simulation models a core comprising a mixture of two isotopes: a fissile one (such as U-235) and an inert one (such as U-238) that captures neutrons and removes them from circulation. The user sets the enrichment percentage and scattering and fission cross-sections of the fissile isotope, the capture cross-section of the inert isotope, the number of neutrons liberated per fission, the number of ``initiator'' neutrons, the radius of the core, and the neutron-reflection efficiency of a surrounding tamper. The simulation, which is predicated on ordinary kinematics, follows the three-dimensional motions and fates of neutrons as they travel through the core. Limitations of time and computer memory render it impossible to model a real-life core, but results of numerous runs clearly demonstrate the existence of a critical mass for a given set of parameters and the dramatic effects of enrichment and tamper efficiency on the growth (or decay) of the neutron population. The logic of the simulation will be described and results of typical runs will be presented and discussed.

  19. Parallel computation of GA search for the artery shape determinants with CFD

    NASA Astrophysics Data System (ADS)

    Himeno, M.; Noda, S.; Fukasaku, K.; Himeno, R.

    2010-06-01

    We studied which factors play an important role in determining the shape of arteries at the carotid artery bifurcation by performing multi-objective optimization with computational fluid dynamics (CFD) and the genetic algorithm (GA). The most difficult problem in doing so is how to reduce the turn-around time of the GA optimization with 3D unsteady computation of blood flow. We devised a two-level parallel computation method with the following features: level 1, parallel CFD computation with an appropriate number of cores; level 2, parallel jobs generated by a "master", which quickly finds available job queues and dispatches jobs, to reduce turn-around time. As a result, the turn-around time of one GA trial, which would have taken 462 days with one core, was reduced to less than two days on the RIKEN supercomputer system, RICC, with 8192 cores. We performed a multi-objective optimization to minimize the maximum mean WSS and to minimize the sum of circumferences for four different shapes and obtained a set of trade-off solutions for each shape. In addition, we found that the carotid bulb exhibits both the minimum local mean WSS and the minimum local radius. We confirmed that our method is effective for examining the determinants of artery shapes.
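
    The level-2 "master" described above can be pictured as a small job dispatcher. The following Python sketch is an assumed illustration, not the authors' implementation: run_cfd is a hypothetical stand-in for a multi-core CFD job, and the executor keeps a fixed number of job slots busy while a GA generation is evaluated.

```python
# Hypothetical sketch of the two-level scheme (not the authors' code): a GA "master"
# dispatches candidate artery shapes to whichever worker slot is free, while each
# dispatched CFD job is assumed to run on its own block of cores.
from concurrent.futures import ProcessPoolExecutor, as_completed
import random

def run_cfd(shape_params):
    """Stand-in for a parallel CFD run; returns (max mean WSS, circumference sum)."""
    wss = sum(p * p for p in shape_params) + random.random() * 0.01
    circumference = sum(abs(p) for p in shape_params)
    return wss, circumference

def evaluate_generation(population, n_job_slots=4):
    """Level-2 parallelism: keep all job slots busy until the generation is scored."""
    objectives = {}
    with ProcessPoolExecutor(max_workers=n_job_slots) as pool:
        futures = {pool.submit(run_cfd, ind): idx for idx, ind in enumerate(population)}
        for fut in as_completed(futures):
            objectives[futures[fut]] = fut.result()
    return [objectives[i] for i in range(len(population))]

if __name__ == "__main__":
    random.seed(1)
    population = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(16)]
    print(evaluate_generation(population)[:3])
```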

  20. Parallelization of interpolation, solar radiation and water flow simulation modules in GRASS GIS using OpenMP

    NASA Astrophysics Data System (ADS)

    Hofierka, Jaroslav; Lacko, Michal; Zubal, Stanislav

    2017-10-01

    In this paper, we describe the parallelization of three complex and computationally intensive modules of GRASS GIS using the OpenMP application programming interface for multi-core computers. These include the v.surf.rst module for spatial interpolation, the r.sun module for solar radiation modeling and the r.sim.water module for water flow simulation. We briefly describe the functionality of the modules and parallelization approaches used in the modules. Our approach includes the analysis of the module's functionality, identification of source code segments suitable for parallelization and proper application of OpenMP parallelization code to create efficient threads processing the subtasks. We document the efficiency of the solutions using the airborne laser scanning data representing land surface in the test area and derived high-resolution digital terrain model grids. We discuss the performance speed-up and parallelization efficiency depending on the number of processor threads. The study showed a substantial increase in computation speeds on a standard multi-core computer while maintaining the accuracy of results in comparison to the output from original modules. The presented parallelization approach showed the simplicity and efficiency of the parallelization of open-source GRASS GIS modules using OpenMP, leading to an increased performance of this geospatial software on standard multi-core computers.
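
    The parallelization pattern described above is applied with OpenMP in the C source of the GRASS modules; as a language-neutral sketch (assumed, not the project's code), the same idea of splitting a raster into independent blocks and processing them concurrently can be written as follows, with process_rows standing in for the per-cell work of a module such as r.sun.

```python
# Illustrative only: the GRASS modules are parallelized with OpenMP in C, but the same
# pattern -- split a raster into independent row blocks and process them concurrently --
# is shown here in Python. The "per-cell work" below is a hypothetical stand-in.
from multiprocessing import Pool

import numpy as np

def process_rows(block):
    """Hypothetical per-cell computation on one block of raster rows."""
    return np.sqrt(np.abs(block)) * 1.5

def parallel_raster(raster, n_workers=4):
    blocks = np.array_split(raster, n_workers, axis=0)   # independent row blocks
    with Pool(n_workers) as pool:
        return np.vstack(pool.map(process_rows, blocks))

if __name__ == "__main__":
    dem = np.random.default_rng(0).normal(size=(2000, 2000))  # mock elevation grid
    out = parallel_raster(dem)
    assert np.allclose(out, process_rows(dem))  # same result as the serial version
    print(out.shape)
```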

  1. Performance of VPIC on Sequoia

    NASA Astrophysics Data System (ADS)

    Nystrom, William

    2014-10-01

    Sequoia is a major DOE computing resource which is characteristic of future resources in that it has many threads per compute node, 64, and the individual processor cores are simpler and less powerful than cores on previous processors like Intel's Sandy Bridge or AMD's Opteron. An effort is in progress to port VPIC to the Blue Gene Q architecture of Sequoia and evaluate its performance. Results of this work will be presented on single node performance of VPIC as well as multi-node scaling.

  2. Field project to obtain pressure core, wireline log, and production test data for evaluation of CO2 flooding potential, Conoco MCA unit well No. 358, Maljamar Field, Lea County, New Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swift, T.E.; Marlow, R.E.; Wilhelm, M.H.

    1981-11-01

    This report describes part of the work done to fulfill a contract awarded to Gruy Federal, Inc., by the Department of Energy (DOE) on February 12, 1979. The work includes pressure-coring and associated logging and testing programs to provide data on in-situ oil saturation, porosity and permeability distribution, and other data needed for resource characterization of fields and reservoirs in which CO2 injection might have a high probability of success. This report details the second such project. Core porosities agreed well with computed log porosities. Core water saturations and computed log porosities agree fairly well from 3692 to 3712 feet, poorly from 3712 to 3820 feet, and in a general way from 4035 to 4107 feet. Computer log analysis techniques incorporating the a, m, and n values obtained from Core Laboratories analysis did not improve the agreement of log-derived versus core-derived water saturations. However, both core and log analysis indicated that the ninth zone had the highest residual hydrocarbon saturations, and production data confirmed the validity of the oil saturation determinations. Residual oil saturations for the perforated and tested intervals were 259 STB/acre-ft for the interval from 4035 to 4055 feet and 150 STB/acre-ft for the interval from 3692 to 3718 feet. Nine BOPD were produced from the interval 4035 to 4055 feet and no oil was produced from the interval 3692 to 3718 feet, qualitatively confirming the relative oil saturations as calculated. The low oil production in the zone from 4022 to 4055 feet and the lack of production from 3692 to 3718 feet indicated these zones to be at or near residual waterflood conditions as determined by log analysis. This project demonstrates the usefulness of integrating pressure core, log, and production data to realistically evaluate a reservoir for carbon dioxide flooding.

  3. Statistics Online Computational Resource for Education

    ERIC Educational Resources Information Center

    Dinov, Ivo D.; Christou, Nicolas

    2009-01-01

    The Statistics Online Computational Resource (http://www.SOCR.ucla.edu) provides one of the largest collections of free Internet-based resources for probability and statistics education. SOCR develops, validates and disseminates two core types of materials--instructional resources and computational libraries. (Contains 2 figures.)

  4. Conformational changes accompany activation of reovirus RNA-dependent RNA transcription

    PubMed Central

    Mendez, Israel I.; Weiner, Scott G.; She, Yi-Min; Yeager, Mark; Coombs, Kevin M.

    2009-01-01

    Many critical biologic processes involve dynamic interactions between proteins and nucleic acids. Such dynamic processes are often difficult to delineate by conventional static methods. For example, while a variety of nucleic acid polymerase structures have been determined at atomic resolution, the details of how some multi-protein transcriptase complexes actively produce mRNA, as well as conformational changes associated with activation of such complexes, remain poorly understood. The mammalian reovirus innermost capsid (core) manifests all enzymatic activities necessary to produce mRNA from each of the 10 encased double-stranded RNA genes. We used rapid freezing and electron cryo-microscopy to trap and visualize transcriptionally active reovirus core particles and compared them to inactive core images. Rod-like density centered within actively transcribing core spike channels was attributed to exiting nascent mRNA. Comparative radial density plots of active and inactive core particles identified several structural changes in both internal and external regions of the icosahedral core capsid. Inactive and transcriptionally active cores were partially digested with trypsin and identities of initial tryptic peptides determined by mass spectrometry. Differentially-digested peptides, which also suggest transcription-associated conformational changes, were placed within the known 3-dimensional structures of major core proteins. PMID:18321727

  5. 36 CFR 79.4 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... or use of man-made or natural materials (such as slag, dumps, cores and debitage); (v) Organic..., laboratory reports, computer cards and tapes, computer disks and diskettes, printouts of computerized data...

  6. 36 CFR 79.4 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... or use of man-made or natural materials (such as slag, dumps, cores and debitage); (v) Organic..., laboratory reports, computer cards and tapes, computer disks and diskettes, printouts of computerized data...

  7. 36 CFR 79.4 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... or use of man-made or natural materials (such as slag, dumps, cores and debitage); (v) Organic..., laboratory reports, computer cards and tapes, computer disks and diskettes, printouts of computerized data...

  8. 36 CFR 79.4 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... or use of man-made or natural materials (such as slag, dumps, cores and debitage); (v) Organic..., laboratory reports, computer cards and tapes, computer disks and diskettes, printouts of computerized data...

  9. 36 CFR 79.4 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... or use of man-made or natural materials (such as slag, dumps, cores and debitage); (v) Organic..., laboratory reports, computer cards and tapes, computer disks and diskettes, printouts of computerized data...

  10. Recommendations for an Undergraduate Program in Computational Mathematics.

    ERIC Educational Resources Information Center

    Committee on the Undergraduate Program in Mathematics, Berkeley, CA.

    This report describes an undergraduate program designed to produce mathematicians who will know how to use and to apply computers. There is a core of 12 one-semester courses: five in mathematics, four in computational mathematics and three in computer science, leaving the senior year for electives. The content and spirit of these courses are…

  11. Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing

    PubMed Central

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P.; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter optimizing its computing configuration based on the analyzed data. PMID:22163811
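
    A run-time monitor of this kind reduces, at its simplest, to a polling loop over hardware and per-process metrics. The sketch below is not the RTM described above; it is a minimal Python example that assumes the third-party psutil package is available and uses an arbitrary CPU threshold to mark the point where a real monitor would adapt its resource configuration.

```python
# Minimal monitoring sketch, not the RTM described above: poll system-wide and
# per-process CPU/memory with psutil and report when a hypothetical threshold is
# crossed, which is where a real monitor would trigger reconfiguration.
import os
import time

import psutil

CPU_THRESHOLD = 85.0   # percent, assumed value

def monitor(pid, interval_s=1.0, samples=10):
    proc = psutil.Process(pid)
    proc.cpu_percent(None)                             # prime the per-process counter
    for _ in range(samples):
        time.sleep(interval_s)
        total = psutil.cpu_percent(interval=None)      # all cores since last call
        per_proc = proc.cpu_percent(interval=None)     # this process only
        rss_mb = proc.memory_info().rss / 2**20
        print(f"system {total:5.1f}%  process {per_proc:5.1f}%  rss {rss_mb:7.1f} MiB")
        if total > CPU_THRESHOLD:
            print("threshold exceeded -- a real RTM would adapt the configuration here")

if __name__ == "__main__":
    monitor(os.getpid(), samples=3)
```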

  12. Design and development of a run-time monitor for multi-core architectures in cloud computing.

    PubMed

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter optimizing its computing configuration based on the analyzed data.

  13. IPSL-CM5A2. An Earth System Model designed to run long simulations for past and future climates.

    NASA Astrophysics Data System (ADS)

    Sepulchre, Pierre; Caubel, Arnaud; Marti, Olivier; Hourdin, Frédéric; Dufresne, Jean-Louis; Boucher, Olivier

    2017-04-01

    The IPSL-CM5A model was developed and released in 2013 "to study the long-term response of the climate system to natural and anthropogenic forcings as part of the 5th Phase of the Coupled Model Intercomparison Project (CMIP5)" [Dufresne et al., 2013]. Although this model has also been used for numerous paleoclimate studies, a major limitation was its computation time, which averaged 10 model-years/day on 32 cores of the Curie supercomputer (at the TGCC computing center, France). Such performance was compatible with the experimental designs of intercomparison projects (e.g. CMIP, PMIP) but became limiting for modelling activities involving several multi-millennial experiments, which are typical for Quaternary or "deep-time" paleoclimate studies, in which a fully equilibrated deep ocean is mandatory. Here we present the Earth System model IPSL-CM5A2. Based on IPSL-CM5A, technical developments have been performed both on separate components and on the coupling system in order to speed up the whole coupled model. These developments include the integration of hybrid MPI-OpenMP parallelization in the LMDz atmospheric component, the use of a new input-output library to perform parallel asynchronous input/output by using computing cores as "IO servers", and the use of a parallel coupling library between the ocean and atmospheric components. Running on 304 cores, the model can now simulate 55 years per day, opening new gates towards multi-millennial simulations. Apart from obtaining better computing performance, one aim of setting up IPSL-CM5A2 was also to overcome the cold bias in global surface air temperature (t2m) seen in IPSL-CM5A. We present the tuning strategy used to overcome this bias as well as the main characteristics (including biases) of the pre-industrial climate simulated by IPSL-CM5A2. Lastly, we briefly present paleoclimate simulations run with this model, for the Holocene and for deeper timescales in the Cenozoic, for which the particular continental configuration was accommodated by a new design of the ocean tripolar grid.

  14. 32 CFR 169a.9 - Reviews: Existing in-house commercial activities.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... skill levels. (ii) Core logistics activities. The core logistics capability reported to Congress, March... either government or contractor personnel, whichever is more cost effective. Core logistics activities... submitted to the ASD (P&L). DoD Components may propose to the ASD (P&L) additional core logistics capability...

  15. Selective increase of intention-based economic decisions by noninvasive brain stimulation to the dorsolateral prefrontal cortex.

    PubMed

    Nihonsugi, Tsuyoshi; Ihara, Aya; Haruno, Masahiko

    2015-02-25

    The intention behind another's action and the impact of the outcome are major determinants of human economic behavior. It is poorly understood, however, whether the two systems share a core neural computation. Here, we investigated whether the two systems are causally dissociable in the brain by integrating computational modeling, functional magnetic resonance imaging, and transcranial direct current stimulation experiments in a newly developed trust game task. We show not only that right dorsolateral prefrontal cortex (DLPFC) activity is correlated with intention-based economic decisions and that ventral striatum and amygdala activity are correlated with outcome-based decisions, but also that stimulation to the DLPFC selectively enhances intention-based decisions. These findings suggest that the right DLPFC is involved in the implementation of intention-based decisions in the processing of cooperative decisions. This causal dissociation of cortical and subcortical backgrounds may indicate evolutionary and developmental differences in the two decision systems. Copyright © 2015 the authors 0270-6474/15/53412-08$15.00/0.

  16. The Virtual Test Bed Project

    NASA Technical Reports Server (NTRS)

    Rabelo, Luis C.

    2002-01-01

    This is a report of my activities as a NASA Fellow during the summer of 2002 at the NASA Kennedy Space Center (KSC). The core of these activities is the assigned project: the Virtual Test Bed (VTB) from the Spaceport Engineering and Technology Directorate. The VTB Project has its foundations in the NASA Ames Research Center (ARC) Intelligent Launch & Range Operations program. The objective of the VTB project is to develop a new and unique collaborative computing environment where simulation models can be hosted and integrated in a seamless fashion. This collaborative computing environment will be used to build a Virtual Range as well as a Virtual Spaceport. The project will serve as a technology pipeline to research, develop, test, and validate R&D efforts against real-time operations without interfering with actual operations or consuming the operational personnel's time. This report also focuses on the systems issues required to conceptualize and give form to a systems architecture capable of handling the different demands.

  17. The impact of teachers’ modifications of an evidenced-based HIV prevention intervention on program outcomes

    PubMed Central

    Wang, Bo; Stanton, Bonita; Lunn, Sonja; Rolle, Glenda; Poitier, Maxwell; Adderley, Richard; Li, Xiaoming; Koci, Veronica; Deveaux, Lynette

    2015-01-01

    The degree to which evidence-based program outcomes are affected by modifications is a significant concern in the implementation of interventions. The ongoing national implementation of an evidence-based HIV prevention program targeting grade six students in The Bahamas [Focus on Youth in The Caribbean (FOYC)] offers an opportunity to explore factors associated with teachers’ modification of FOYC lessons and to examine the impact of types and degrees of modifications on student outcomes. Data were collected in 2012 from 155 teachers and 3646 students in 77 government elementary schools. Results indicate that teachers taught 16 of 30 core activities, 24.5 of 46 total activities and 4.7 of 8 sessions. Over one-half of the teachers made modifications to FOYC core activities; one-fourth of the teachers modified 25% or more core activities that they taught (heavily modified FOYC). Omitting core activities was the most common content modification, followed by lengthening FOYC lessons with reading, writing assignments or role-play games, shortening core activities or adding educational videos. Mixed-effects modeling revealed that omitting core activities had negative impacts on all four student outcomes. Shortening core activities and adding videos into lessons had negative impacts on HIV/AIDS knowledge and/or intention to use condom protection. Heavy modifications (>1/4 core activities) were associated with diminished program effectiveness. Heavy modifications and omitting or shortening core activities were negatively related to teachers’ level of implementation. We conclude that poorer student outcomes were associated with heavy modifications. PMID:26297497

  18. The Impact of Teachers' Modifications of an Evidenced-Based HIV Prevention Intervention on Program Outcomes.

    PubMed

    Wang, Bo; Stanton, Bonita; Lunn, Sonja; Rolle, Glenda; Poitier, Maxwell; Adderley, Richard; Li, Xiaoming; Koci, Veronica; Deveaux, Lynette

    2016-01-01

    The degree to which evidence-based program outcomes are affected by modifications is a significant concern in the implementation of interventions. The ongoing national implementation of an evidence-based HIV prevention program targeting grade 6 students in The Bahamas [Focus on Youth in The Caribbean (FOYC)] offers an opportunity to explore factors associated with teachers' modification of FOYC lessons and to examine the impact of types and degrees of modifications on student outcomes. Data were collected in 2012 from 155 teachers and 3646 students in 77 government elementary schools. Results indicate that teachers taught 16 of 30 core activities, 24.5 of 46 total activities and 4.7 of 8 sessions. Over one-half of the teachers made modifications to FOYC core activities; one-fourth of the teachers modified 25 % or more core activities that they taught (heavily modified FOYC). Omitting core activities was the most common content modification, followed by lengthening FOYC lessons with reading, writing assignments or role-play games, and shortening core activities or adding educational videos. Mixed-effects modeling revealed that omitting core activities had negative impacts on all four student outcomes. Shortening core activities and adding videos into lessons had negative impacts on HIV/AIDS knowledge and/or intention to use condom protection. Heavy modifications (>1/4 core activities) were associated with diminished program effectiveness. Heavy modifications and omitting or shortening core activities were negatively related to teachers' level of implementation. We conclude that poorer student outcomes were associated with heavy modifications.

  19. Efficient parallel implementation of active appearance model fitting algorithm on GPU.

    PubMed

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detection and tracking methods and has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grained parallelism, in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different texture dimensionalities. The experimental results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.
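
    The pixel-level parallelism exploited above can be seen in the structure of the AAM error term itself. The short numpy sketch below is an assumed illustration rather than the authors' CUDA code: every element of the residual vector is an independent per-pixel operation, which is what the paper distributes across thousands of GPU threads.

```python
# Illustration of the per-pixel data parallelism (not the CUDA implementation above):
# the AAM residual, mean_texture + texture_modes @ coeffs - sampled_image, is computed
# for every texture pixel independently, which maps naturally onto GPU threads.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_modes = 50_000, 20
mean_texture = rng.normal(size=n_pixels)
texture_modes = rng.normal(size=(n_pixels, n_modes))
coeffs = rng.normal(size=n_modes)
sampled_image = rng.normal(size=n_pixels)          # image warped into the model frame

# One vectorized statement = n_pixels independent per-pixel operations.
residual = mean_texture + texture_modes @ coeffs - sampled_image
error = float(residual @ residual)                 # sum of squared per-pixel errors
print(error)
```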

  20. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    PubMed Central

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detection and tracking methods and has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grained parallelism, in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different texture dimensionalities. The experimental results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures. PMID:24723812

  1. A qualitative study adopting a user-centered approach to design and validate a brain computer interface for cognitive rehabilitation for people with brain injury.

    PubMed

    Martin, Suzanne; Armstrong, Elaine; Thomson, Eileen; Vargiu, Eloisa; Solà, Marc; Dauwalder, Stefan; Miralles, Felip; Daly Lynn, Jean

    2017-07-14

    Cognitive rehabilitation is established as a core intervention within rehabilitation programs following a traumatic brain injury (TBI). Digitally enabled assistive technologies offer opportunities for clinicians to increase remote access to rehabilitation supporting transition into home. Brain Computer Interface (BCI) systems can harness the residual abilities of individuals with limited function to gain control over computers through their brain waves. This paper presents an online cognitive rehabilitation application developed with therapists, to work remotely with people who have TBI, who will use BCI at home to engage in the therapy. A qualitative research study was completed with people who are community dwellers post brain injury (end users), and a cohort of therapists involved in cognitive rehabilitation. A user-centered approach over three phases in the development, design and feasibility testing of this cognitive rehabilitation application included two tasks (Find-a-Category and a Memory Card task). The therapist could remotely prescribe activity with different levels of difficulty. The service user had a home interface which would present the therapy activities. This novel work was achieved by an international consortium of academics, business partners and service users.

  2. Formation of Cool Cores in Galaxy Clusters via Hierarchical Mergers

    NASA Astrophysics Data System (ADS)

    Motl, Patrick M.; Burns, Jack O.; Loken, Chris; Norman, Michael L.; Bryan, Greg

    2004-05-01

    We present a new scenario for the formation of cool cores in rich galaxy clusters, based on results from recent high spatial dynamic range, adaptive mesh Eulerian hydrodynamic simulations of large-scale structure formation. We find that cores of cool gas, material that would be identified as a classical cooling flow on the basis of its X-ray luminosity excess and temperature profile, are built from the accretion of discrete stable subclusters. Any ``cooling flow'' present is overwhelmed by the velocity field within the cluster; the bulk flow of gas through the cluster typically has speeds up to about 2000 km s-1, and significant rotation is frequently present in the cluster core. The inclusion of consistent initial cosmological conditions for the cluster within its surrounding supercluster environment is crucial when the evolution of cool cores in rich galaxy clusters is simulated. This new model for the hierarchical assembly of cool gas naturally explains the high frequency of cool cores in rich galaxy clusters, despite the fact that a majority of these clusters show evidence of substructure that is believed to arise from recent merger activity. Furthermore, our simulations generate complex cluster cores in concordance with recent X-ray observations of cool fronts, cool ``bullets,'' and filaments in a number of galaxy clusters. Our simulations were computed with a coupled N-body, Eulerian, adaptive mesh refinement, hydrodynamics cosmology code that properly treats the effects of shocks and radiative cooling by the gas. We employ up to seven levels of refinement to attain a peak resolution of 15.6 kpc within a volume 256 Mpc on a side and assume a standard ΛCDM cosmology.

  3. An MPI-based MoSST core dynamics model

    NASA Astrophysics Data System (ADS)

    Jiang, Weiyuan; Kuang, Weijia

    2008-09-01

    Distributed systems are among the main cost-effective and expandable platforms for high-end scientific computing. Therefore scalable numerical models are important for effective use of such systems. In this paper, we present an MPI-based numerical core dynamics model for simulation of geodynamo and planetary dynamos, and for simulation of core-mantle interactions. The model is developed based on MPI libraries. Two algorithms are used for node-node communication: a "master-slave" architecture and a "divide-and-conquer" architecture. The former is easy to implement but not scalable in communication. The latter is scalable in both computation and communication. The model scalability is tested on Linux PC clusters with up to 128 nodes. This model is also benchmarked with a published numerical dynamo model solution.
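
    The "divide-and-conquer" layout described above can be sketched with mpi4py (an assumed illustration, not the MoSST source): each rank advances its own slab of the grid and global information is exchanged only through reductions, so communication does not funnel through a single master.

```python
# Hypothetical mpi4py sketch of the "divide-and-conquer" layout (not the MoSST code):
# every rank time-steps its own slab of the grid and only exchanges reduced
# quantities, so both computation and communication scale with the rank count.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_radial = 64                                              # assumed global resolution
local = np.array_split(np.arange(n_radial), size)[rank]    # this rank's radial slab
field = np.random.default_rng(rank).normal(size=(local.size, 32, 32))

for step in range(10):
    field *= 0.99                                # stand-in for the local time step
    local_energy = float(np.sum(field * field))
    # Global reduction replaces a master/slave gather of the whole field.
    total_energy = comm.allreduce(local_energy, op=MPI.SUM)
    if rank == 0 and step % 5 == 0:
        print(f"step {step:2d}  kinetic-energy proxy = {total_energy:.3e}")
```

    Run with, for example, mpirun -n 8 python sketch.py; the allreduce is the only collective call per time step, which is what makes the scheme scalable in communication.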

  4. Pervasive influence of idiosyncratic associative biases during facial emotion recognition.

    PubMed

    El Zein, Marwa; Wyart, Valentin; Grèzes, Julie

    2018-06-11

    Facial morphology has been shown to influence perceptual judgments of emotion in a way that is shared across human observers. Here we demonstrate that these shared associations between facial morphology and emotion coexist with strong variations unique to each human observer. Interestingly, a large part of these idiosyncratic associations does not vary on short time scales, emerging from stable inter-individual differences in the way facial morphological features influence emotion recognition. Computational modelling of decision-making and neural recordings of electrical brain activity revealed that both shared and idiosyncratic face-emotion associations operate through a common biasing mechanism rather than an increased sensitivity to face-associated emotions. Together, these findings emphasize the underestimated influence of idiosyncrasies on core social judgments and identify their neuro-computational signatures.

  5. Computer Training for Staff and Patrons.

    ERIC Educational Resources Information Center

    Krissoff, Alan; Konrad, Lee

    1998-01-01

    Describes a pilot computer training program for library staff and patrons at the University of Wisconsin-Madison. Reviews components of effective training programs and highlights core computer competencies: operating systems, hardware and software basics and troubleshooting, and search concepts and techniques. Includes an instructional outline and…

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vinnikov, B.; NRC Kurchatov Inst.

    Under the Scientific and Technical Cooperation agreement between the USA and Russia in the field of nuclear engineering, the Idaho National Laboratory transferred the SAPHIRE software to the National Research Center 'Kurchatov Institute' without any fee. With the help of this software, the Kurchatov Institute developed a pilot living PSA model of Leningrad NPP Unit 1. Computations of core damage frequencies were carried out for additional initiating events; in the submitted paper, these additional initiating events are fires in various compartments of the NPP. During the computation of each fire, the structure of the PSA model was not changed, but the fault trees for the affected systems, which are removed from service during the fire, were modified. It follows from the computations that for ten of the fires the core damage frequencies (CDF) do not change, while the other six fires would cause additional core damage. On the basis of the calculated results it is possible to rank the importance of these fires and to establish the sequence in which fire-prevention measures should be carried out in various places of the NPP. (authors)

  7. MEGA-CC: computing core of molecular evolutionary genetics analysis program for automated and iterative data analysis.

    PubMed

    Kumar, Sudhir; Stecher, Glen; Peterson, Daniel; Tamura, Koichiro

    2012-10-15

    There is a growing need in the research community to apply the molecular evolutionary genetics analysis (MEGA) software tool for batch processing a large number of datasets and to integrate it into analysis workflows. Therefore, we now make available the computing core of the MEGA software as a stand-alone executable (MEGA-CC), along with an analysis prototyper (MEGA-Proto). MEGA-CC provides users with access to all the computational analyses available through MEGA's graphical user interface version. This includes methods for multiple sequence alignment, substitution model selection, evolutionary distance estimation, phylogeny inference, substitution rate and pattern estimation, tests of natural selection and ancestral sequence inference. Additionally, we have upgraded the source code for phylogenetic analysis using the maximum likelihood methods for parallel execution on multiple processors and cores. Here, we describe MEGA-CC and outline the steps for using MEGA-CC in tandem with MEGA-Proto for iterative and automated data analysis. http://www.megasoftware.net/.
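
    In batch use, MEGA-CC is typically driven from a script that loops over datasets with a fixed analysis-options file. The Python sketch below is a hedged example: the flag names (-a for the .mao analysis options, -d for the data file, -o for the output prefix) and the file names are assumptions to be checked against the installed version's help output, not a verbatim reproduction of the tool's interface.

```python
# Hedged sketch of driving MEGA-CC in a batch loop via subprocess. The flags and
# file names below are assumptions; verify them against `megacc` help on the
# installed version before use.
import subprocess
from pathlib import Path

analysis_options = "infer_NJ_nucleotide.mao"   # exported from MEGA-Proto (assumed name)
out_dir = Path("mega_out")
out_dir.mkdir(exist_ok=True)

for alignment in sorted(Path("alignments").glob("*.meg")):
    cmd = ["megacc",
           "-a", analysis_options,            # assumed: analysis options file
           "-d", str(alignment),              # assumed: input data file
           "-o", str(out_dir / alignment.stem)]  # assumed: output prefix
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)
```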

  8. Navier-Stokes calculations for 3D gaseous fuel injection with data comparisons

    NASA Technical Reports Server (NTRS)

    Fuller, E. J.; Walters, R. W.

    1991-01-01

    Results from a computational study and experiments designed to further expand the knowledge of gaseous injection into supersonic cross-flows are presented. Experiments performed at Mach 6 included several cases of gaseous helium injection with low transverse angles and injection with low transverse angles coupled with a low yaw angle. Both experimental and computational data confirm that injector yaw has an adverse effect on the helium core decay rate. An array of injectors is found to give higher penetration into the freestream without loss of core injectant decay as compared to a single injector. Lateral diffusion plays a major role in lateral plume spreading, eddy viscosity, injectant plume, and injectant-freestream mixing. Grid refinement makes it possible to capture the gradients in the streamwise direction accurately and to vastly improve the data comparisons. Computational results for a refined grid are found to compare favorably with experimental data on injectant overall and core penetration provided laminar lateral diffusion was taken into account using the modified Baldwin-Lomax turbulence model.

  9. High performance in silico virtual drug screening on many-core processors.

    PubMed

    McIntosh-Smith, Simon; Price, James; Sessions, Richard B; Ibarra, Amaurys A

    2015-05-01

    Drug screening is an important part of the drug development pipeline for the pharmaceutical industry. Traditional, lab-based methods are increasingly being augmented with computational methods, ranging from simple molecular similarity searches through more complex pharmacophore matching to more computationally intensive approaches, such as molecular docking. The latter simulates the binding of drug molecules to their targets, typically protein molecules. In this work, we describe BUDE, the Bristol University Docking Engine, which has been ported to the OpenCL industry standard parallel programming language in order to exploit the performance of modern many-core processors. Our highly optimized OpenCL implementation of BUDE sustains 1.43 TFLOP/s on a single Nvidia GTX 680 GPU, or 46% of peak performance. BUDE also exploits OpenCL to deliver effective performance portability across a broad spectrum of different computer architectures from different vendors, including GPUs from Nvidia and AMD, Intel's Xeon Phi and multi-core CPUs with SIMD instruction sets.

  10. High performance in silico virtual drug screening on many-core processors

    PubMed Central

    Price, James; Sessions, Richard B; Ibarra, Amaurys A

    2015-01-01

    Drug screening is an important part of the drug development pipeline for the pharmaceutical industry. Traditional, lab-based methods are increasingly being augmented with computational methods, ranging from simple molecular similarity searches through more complex pharmacophore matching to more computationally intensive approaches, such as molecular docking. The latter simulates the binding of drug molecules to their targets, typically protein molecules. In this work, we describe BUDE, the Bristol University Docking Engine, which has been ported to the OpenCL industry standard parallel programming language in order to exploit the performance of modern many-core processors. Our highly optimized OpenCL implementation of BUDE sustains 1.43 TFLOP/s on a single Nvidia GTX 680 GPU, or 46% of peak performance. BUDE also exploits OpenCL to deliver effective performance portability across a broad spectrum of different computer architectures from different vendors, including GPUs from Nvidia and AMD, Intel’s Xeon Phi and multi-core CPUs with SIMD instruction sets. PMID:25972727

  11. Cajanusflavanols A-C, Three Pairs of Flavonostilbene Enantiomers from Cajanus cajan.

    PubMed

    He, Qi-Fang; Wu, Zhen-Long; Huang, Xiao-Jun; Zhong, Yuan-Lin; Li, Man-Mei; Jiang, Ren-Wang; Li, Yao-Lan; Ye, Wen-Cai; Wang, Ying

    2018-02-02

    Three pairs of new flavonostilbene enantiomers, cajanusflavanols A-C (1-3), along with their putative biogenetic precursors 4-6, were isolated from Cajanus cajan. Compound 1 possesses an unprecedented carbon skeleton featuring a unique highly functionalized cyclopenta[1,2,3-de]isobenzopyran-1-one tricyclic core. Compounds 2 and 3 are the first examples of methylene-unit-linked flavonostilbenes. Their structures with absolute configurations were elucidated by spectroscopic analyses, X-ray diffraction, and computational calculations. Compounds 1 and 2 exhibited significant in vitro anti-inflammatory activities.

  12. Development of multiple user AMTRAN on the Datacraft DC6024

    NASA Technical Reports Server (NTRS)

    Austin, S. L.

    1973-01-01

    The implementation of a multiple-user version of AMTRAN on the Datacraft DC6024 computer is reported. The major portion of the multiple-user logic is incorporated in the main program, which remains in core during all AMTRAN processes. A detailed flowchart of the main program is provided as documentation of the multiple-user capability. Activities are directed toward perfecting this capability, providing new features in response to user needs and requests, providing a two-dimensional-array AMTRAN containing the multiple-user logic, and providing documentation as the tasks progress.

  13. Laser Heating of the Core-Shell Nanowires

    NASA Astrophysics Data System (ADS)

    Astefanoaei, Iordana; Dumitru, Ioan; Stancu, Alexandru

    2016-12-01

    The induced thermal stress in a heating process is an important parameter to be known and controlled in the magnetization process of core-shell nanowires. This paper analyses the stress produced by a laser heating source placed at one end of a core-shell type structure. The thermal field was computed with the non-Fourier heat transport equation using a finite element method (FEM) implemented in Comsol Multiphysics. The internal stresses are essentially due to thermal gradients and the different expansion characteristics of the core and shell materials. The stress values were computed using the thermoelastic formalism and depend on the laser beam parameters (spot size, power, etc.) and system characteristics (dimensions, thermal properties). Stresses in the GPa range were estimated, and consequently we find that the magnetic state of the system can be influenced significantly. A shell material such as glass, which is a good thermal insulator, induces smaller stresses in the magnetic core and consequently a smaller magnetoelastic energy. These results lead to a better understanding of the switching process in magnetic materials.
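
    For reference, a commonly used non-Fourier (Cattaneo-Vernotte) form of the heat transport equation is reproduced below; this is an assumed standard form, since the abstract does not give the authors' exact formulation or source terms:

$$\tau \frac{\partial^2 T}{\partial t^2} + \frac{\partial T}{\partial t} = \alpha \nabla^2 T + \frac{1}{\rho c_p}\left(Q + \tau \frac{\partial Q}{\partial t}\right),$$

    where $\tau$ is the thermal relaxation time, $\alpha = k/(\rho c_p)$ the thermal diffusivity, and $Q$ the volumetric heat source supplied by the laser; the classical Fourier description is recovered in the limit $\tau \to 0$.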

  14. 18F-FDG uptake in the colon is modulated by metformin but not associated with core body temperature and energy expenditure.

    PubMed

    Bahler, Lonneke; Holleman, Frits; Chan, Man-Wai; Booij, Jan; Hoekstra, Joost B; Verberne, Hein J

    2017-01-01

    Physiological colonic 18F-fluorodeoxyglucose (18F-FDG) uptake is a frequent finding on 18F-FDG positron emission tomography computed tomography (PET-CT). Interestingly, metformin, a glucose lowering drug associated with moderate weight loss, is also associated with an increased colonic 18F-FDG uptake. Consequently, increased colonic glucose use might partly explain the weight losing effect of metformin when this results in an increased energy expenditure and/or core body temperature. Therefore, we aimed to determine whether metformin modifies the metabolic activity of the colon by increasing glucose uptake. In this open label, non-randomized, prospective mechanistic study, we included eight lean and eight overweight males. We measured colonic 18F-FDG uptake on PET-CT, energy expenditure and core body temperature before and after the use of metformin. The maximal colonic 18F-FDG uptake was measured in 5 separate segments (caecum, colon ascendens,-transversum,-descendens and sigmoid). The maximal colonic 18F-FDG uptake increased significantly in all separate segments after the use of metformin. There was no significant difference in energy expenditure or core body temperature after the use of metformin. There was no correlation between maximal colonic 18F-FDG uptake and energy expenditure or core body temperature. Metformin significantly increases colonic 18F-FDG uptake, but this increased uptake is not associated with an increase in energy expenditure or core body temperature. Although the colon might be an important site of the glucose plasma lowering actions of metformin, this mechanism of action does not explain directly any associated weight loss.

  15. Automatic Quantification of X-ray Computed Tomography Images of Cores: Method and Application to Shimokita Cores (Northeast Coast of Honshu, Japan)

    NASA Astrophysics Data System (ADS)

    Gaillot, P.

    2007-12-01

    X-ray computed tomography (CT) of rock core provides nondestructive cross-sectional or three-dimensional core representations from the attenuation of electromagnetic radiation. Attenuation depends on the density and the atomic constituents of the rock material that is scanned. Since it has the potential to non-invasively measure phase distribution and species concentration, X-ray CT offers significant advantages to characterize both heterogeneous and apparently homogeneous lithologies. In particular, once empirically calibrated into 3D density images, this scanning technique is useful in the observation of density variation. In this paper, I present a procedure from which information contained in the 3D images can be quantitatively extracted and turned into very-high-resolution core logs and core image logs, including (1) the radial and angular distributions of density values, (2) the histogram of the density distribution and its related statistical parameters (average; 10, 25, 50, 75 and 90 percentiles; and width at half maximum), and (3) the volume, the average density and the mass contribution of three core fractions defined by two user-defined density thresholds (voids and vugs < 1.01 g/cc ≤ damaged core material < 1.25 g/cc < non-damaged core material). In turn, these quantitative outputs (1) allow the recognition of bedding and sedimentary features, as well as natural and coring-induced fractures, (2) provide a high-resolution bulk density core log, and (3) provide quantitative estimates of core voids and core damaged zones that can further be used to characterize core quality and core disturbance, and to apply, where appropriate, volume corrections to core physical properties (gamma-ray attenuation density, magnetic susceptibility, natural gamma radiation, non-contact electrical resistivity, P-wave velocity) acquired via Multi-Sensor Core Loggers (MSCL). The procedure is illustrated on core data (XR-CT images, continuous MSCL physical properties and discrete moisture and density measurements) from Hole C9001C, drilled offshore Shimokita (northeast coast of Honshu, Japan) during the shake-down cruise (08-11/2006) of the scientific drilling vessel, Chikyu.
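
    The thresholding and logging steps described above map directly onto array operations. The Python sketch below is a hypothetical implementation of that bookkeeping (not the author's code): it takes a calibrated density volume, computes per-slice histogram statistics, and splits each slice into the three fractions at the stated 1.01 g/cc and 1.25 g/cc thresholds.

```python
# Hypothetical numpy sketch of the slice-wise quantification described above (not the
# author's implementation): per-slice density statistics plus the three-fraction split
# at the stated thresholds of 1.01 g/cc (voids/vugs) and 1.25 g/cc (damaged material).
import numpy as np

VOID_MAX, DAMAGED_MAX = 1.01, 1.25      # g/cc thresholds from the abstract

def quantify_core(density, voxel_volume_cc):
    """density: calibrated 3-D CT volume (slices along axis 0), in g/cc."""
    logs = []
    for z, slab in enumerate(density):
        vals = slab[np.isfinite(slab)]
        voids = vals < VOID_MAX
        damaged = (vals >= VOID_MAX) & (vals < DAMAGED_MAX)
        intact = vals >= DAMAGED_MAX
        logs.append({
            "slice": z,
            "mean_density": float(vals.mean()),
            "p10_p50_p90": np.percentile(vals, [10, 50, 90]).round(3).tolist(),
            "void_volume_cc": float(voids.sum() * voxel_volume_cc),
            "damaged_mass_g": float(vals[damaged].sum() * voxel_volume_cc),
            "intact_mass_g": float(vals[intact].sum() * voxel_volume_cc),
        })
    return logs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_core = rng.normal(1.6, 0.35, size=(5, 64, 64))   # mock calibrated volume
    print(quantify_core(fake_core, voxel_volume_cc=0.001)[0])
```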

  16. Azaborines: Unique Isosteres of Aromatic and Heteroaromatic Systems

    NASA Astrophysics Data System (ADS)

    Davies, Geraint H. M.

    The azaborine motif provides a unique opportunity to develop core isosteres by inserting B-N units in place of C=C bonds within aromatic scaffolds. These boron/nitrogen-containing heteroaromatic systems provide molecular frameworks that have similar, but not identical, geometrical shapes and electronic distributions to the analogous all-carbon systems. Synthetic routes to the 1,3,2-benzodiazaborole core have been developed utilizing entirely bench-stable starting materials, including organotrifluoroborates, enabling a wider array of substrate analogues under facile reaction conditions. The physical, structural, and electronic properties of these compounds were explored computationally to understand the influence of the B-N replacement on structure, aromaticity, and the isosteric viability of these analogues. The class of azaborininones could similarly be accessed from both organotrifluoroborates and boronic acids. An inexpensive, common reagent, SiO2, was found to serve as both a fluorophile and desiccant to facilitate the annulation process across three different azaborininone platforms. Computationally derived pKa values, NICS aromaticity calculations, and electrostatic potential surfaces revealed a unique isoelectronic/isostructural relationship between these azaborines and their carbon isosteres that changed based on boron connectivity. The 2,1-borazaronaphthalene motif can be accessed through robust methods of synthesis and subsequent functionalization strategies, affording an ideal platform for a variety of applications. However, the initial scope of substructures for this archetype has been limited by the lack of nitrogen-containing heteroaryls that can be incorporated within them. Modified reaction conditions enabled greater tolerance to provide access to a wider range of substructures. Additionally, computational and experimental studies of solvent decomposition demonstrate that substitution off boron is important to stability. Post-annulation derivatization of the azaborine cores can allow access to higher-order functionalized structures. A method for functionalizing the 2,1-borazaronaphthalene scaffold using ammonium alkylbis(catecholato)silicates via photoredox/nickel dual catalysis was found to be highly effective. By forging Csp3–Csp2 bonds via this approach, alkyl fragments with various functional groups can be introduced to the azaborine core, affording previously inaccessible heterocyclic isosteres in good to excellent yields. These conditions tolerate sensitive functional groups, even permitting the cross-coupling of unprotected primary and secondary amines. Regioselective C-H borylation and subsequent cross-coupling of the 2,1-borazaronaphthalene core could also be achieved. Although 2,1-borazaronaphthalene is closely related to naphthalene in terms of structure, the argument is made that the former has electronic similarities to indole. Based on that premise, iridium-mediated C-H activation has enabled facile installation of a versatile, nucleophilic coupling handle at a previously inaccessible site of 2,1-borazaronaphthalenes. A variety of substituted 2,1-borazaronaphthalene cores can be successfully borylated and further cross-coupled in a facile manner to yield diverse C(8)-substituted 2,1-borazaronaphthalenes.

  17. Optimizing performance by improving core stability and core strength.

    PubMed

    Hibbs, Angela E; Thompson, Kevin G; French, Duncan; Wrigley, Allan; Spears, Iain

    2008-01-01

    Core stability and core strength have been subject to research since the early 1980s. Research has highlighted benefits of training these processes for people with back pain and for carrying out everyday activities. However, less research has been performed on the benefits of core training for elite athletes and how this training should be carried out to optimize sporting performance. Many elite athletes undertake core stability and core strength training as part of their training programme, despite contradictory findings and conclusions as to their efficacy. This is mainly due to the lack of a gold standard method for measuring core stability and strength when performing everyday tasks and sporting movements. A further confounding factor is that because of the differing demands on the core musculature during everyday activities (low load, slow movements) and sporting activities (high load, resisted, dynamic movements), research performed in the rehabilitation sector cannot be applied to the sporting environment and, subsequently, data regarding core training programmes and their effectiveness on sporting performance are lacking. There are many articles in the literature that promote core training programmes and exercises for performance enhancement without providing a strong scientific rationale of their effectiveness, especially in the sporting sector. In the rehabilitation sector, improvements in lower back injuries have been reported by improving core stability. Few studies have observed any performance enhancement in sporting activities despite observing improvements in core stability and core strength following a core training programme. A clearer understanding of the roles that specific muscles have during core stability and core strength exercises would enable more functional training programmes to be implemented, which may result in a more effective transfer of these skills to actual sporting activities.

  18. Dinardokanshones C-E, isonardoeudesmols A-D and nardoeudesmol D from Nardostachys jatamansi DC.

    PubMed

    Wu, Hong-Hua; Deng, Xu; Zhang, Hu; Chen, Ying-Peng; Ying, Shu-Song; Wu, Yi-Jing; Liu, Yan-Ting; Zhu, Yan; Gao, Xiu-Mei; Xu, Yan-Tong; Li, Li

    2018-06-01

    Dinardokanshones C-E, three sesquiterpenoid dimers comprising an unusual nornardosinane-type sesquiterpenoid core and an aristolane-type sesquiterpenoid unit conjugated by an extra pyran or furan ring, together with monomeric sesquiterpenoids isonardoeudesmols A-D and nardoeudesmol D, were isolated from the underground parts of Nardostachys jatamansi DC. Structures of the eight compounds were elucidated by analysis of the extensive spectroscopic data, and their absolute configurations were established by analysis of NOESY and X-ray diffraction data, combined with computational electronic circular dichroism (ECD) calculations. The results of SERT activity assay revealed that isonardoeudesmol D and nardoeudesmol D significantly inhibited SERT activity, while dinardokanshones D-E and isonardoeudesmols B-C significantly enhanced SERT activity, among which dinardokanshone D exhibited the strongest effect. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Description of core samples returned by Apollo 12

    NASA Technical Reports Server (NTRS)

    Lindsay, J. F.; Fryxell, R.

    1971-01-01

    Three core samples were collected by the Apollo 12 astronauts. Two are single cores, one of which (sample 12026) was collected close to the lunar module during the first extravehicular activity period and is 19.3 centimeters long. The second core (sample 12027) was collected at Sharp Crater during the second extravehicular activity period and is 17.4 centimeters long. The third sample is a double core (samples 12025 and 12028), which was collected near Halo Crater during the second extravehicular activity period. Unlike the other cores, the double-drive-tube core sample has complex layering with at least 10 clearly defined stratigraphic units. This core sample is approximately 41 centimeters long.

  20. Optical properties of light absorbing carbon aggregates mixed with sulfate: assessment of different model geometries for climate forcing calculations.

    PubMed

    Kahnert, Michael; Nousiainen, Timo; Lindqvist, Hannakaisa; Ebert, Martin

    2012-04-23

    Light scattering by light-absorbing carbon (LAC) aggregates encapsulated in sulfate shells is computed by use of the discrete dipole method. Computations are performed for a UV, a visible, and an IR wavelength, different particle sizes, and volume fractions. Reference computations are compared to three classes of simplified model particles that have been proposed for climate modeling purposes. None of the models matches the reference results sufficiently well. Remarkably, the more realistic core-shell geometries fall behind homogeneous mixture models. An extended model based on a core-shell-shell geometry is proposed and tested. Good agreement is found for total optical cross sections and the asymmetry parameter. © 2012 Optical Society of America

  1. Multiphysics Analysis of a Solid-Core Nuclear Thermal Engine Thrust Chamber

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Canabal, Francisco; Cheng, Gary; Chen, Yen-Sen

    2006-01-01

    The objective of this effort is to develop an efficient and accurate thermo-fluid computational methodology to predict environments for a hypothetical solid-core, nuclear thermal engine thrust chamber. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics methodology. Formulations for heat transfer in solids and porous media were implemented and anchored. A two-pronged approach was employed in this effort: A detailed thermo-fluid analysis on a multi-channel flow element for mid-section corrosion investigation; and a global modeling of the thrust chamber to understand the effect of hydrogen dissociation and recombination on heat transfer and thrust performance. The formulations and preliminary results on both aspects are presented.

  2. Real time display Fourier-domain OCT using multi-thread parallel computing with data vectorization

    NASA Astrophysics Data System (ADS)

    Eom, Tae Joong; Kim, Hoon Seop; Kim, Chul Min; Lee, Yeung Lak; Choi, Eun-Seo

    2011-03-01

    We demonstrate a real-time display of processed OCT images using multi-thread parallel computing on the quad-core CPU of a personal computer. The data of each A-line are treated as one vector to maximize the data-transfer rate between the CPU cores and the image data stored in RAM. A display rate of 29.9 frames/sec for processed OCT data (4096 FFT size x 500 A-scans) is achieved in our system using a wavelength-swept source with a 52-kHz sweep frequency. The data processing times for the OCT image and for a Doppler OCT image with a 4-time average are 23.8 msec and 91.4 msec, respectively.
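
    A minimal NumPy sketch of the vectorized per-frame processing described above (synthetic fringe data; the frame geometry follows the abstract, but this is an illustration rather than the authors' implementation):

        # Batch-FFT one frame of swept-source OCT A-lines, treating each A-line as one vector.
        import time
        import numpy as np

        fft_size, n_ascans = 4096, 500                                   # frame geometry from the abstract
        frame = np.random.rand(n_ascans, fft_size).astype(np.float32)    # synthetic fringe data

        t0 = time.perf_counter()
        spectra = np.fft.fft(frame, n=fft_size, axis=1)                  # one vectorized FFT per A-line
        bscan = 20.0 * np.log10(np.abs(spectra[:, :fft_size // 2]) + 1e-12)  # log-magnitude B-scan
        elapsed_ms = (time.perf_counter() - t0) * 1e3
        print(f"processed one {n_ascans} x {fft_size} frame in {elapsed_ms:.1f} ms")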

  3. Ni@Ru and NiCo@Ru Core-Shell Hexagonal Nanosandwiches with a Compositionally Tunable Core and a Regioselectively Grown Shell.

    PubMed

    Hwang, Hyeyoun; Kwon, Taehyun; Kim, Ho Young; Park, Jongsik; Oh, Aram; Kim, Byeongyoon; Baik, Hionsuck; Joo, Sang Hoon; Lee, Kwangyeol

    2018-01-01

    The development of highly active electrocatalysts is crucial for the advancement of renewable energy conversion devices. The design of core-shell nanoparticle catalysts represents a promising approach to boost catalytic activity as well as save the use of expensive precious metals. Here, a simple, one-step synthetic route is reported to prepare hexagonal nanosandwich-shaped Ni@Ru core-shell nanoparticles (Ni@Ru HNS), in which Ru shell layers are overgrown in a regioselective manner on the top and bottom, and around the center section of a hexagonal Ni nanoplate core. Notably, the synthesis can be extended to NiCo@Ru core-shell nanoparticles with tunable core compositions (Ni3Cox@Ru HNS). Core-shell HNS structures show superior electrocatalytic activity for the oxygen evolution reaction (OER) to a commercial RuO2 black catalyst, with their OER activity being dependent on their core compositions. The observed trend in OER activity is correlated to the population of Ru oxide (Ru4+) species, which can be modulated by the core compositions. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Preparation and biological activities of anti-HER2 monoclonal antibodies with fully core-fucosylated homogeneous bi-antennary complex-type glycans.

    PubMed

    Tsukimura, Wataru; Kurogochi, Masaki; Mori, Masako; Osumi, Kenji; Matsuda, Akio; Takegawa, Kaoru; Furukawa, Kiyoshi; Shirai, Takashi

    2017-12-01

    Recently, the absence of a core-fucose residue in the N-glycan has been implicated as important for enhancing the antibody-dependent cellular cytotoxicity (ADCC) activity of immunoglobulin G monoclonal antibodies (mAbs). Here, we first prepared anti-HER2 mAbs having two core-fucosylated N-glycan chains with the single G2F, G1aF, G1bF, or G0F structure, together with those having two N-glycan chains with the single corresponding non-core-fucosylated structure for comparison, and determined their biological activities. Dissociation constants of mAbs with core-fucosylated N-glycans bound to a recombinant Fcγ-receptor type IIIa variant were 10 times higher than those with the non-core-fucosylated N-glycans, regardless of core glycan structures. mAbs with the core-fucosylated N-glycans had markedly reduced ADCC activities, while those with the non-core-fucosylated N-glycans had high activities. These results indicate that the presence of a core-fucose residue in the N-glycan suppresses the binding to the Fc-receptor and the induction of ADCC of anti-HER2 mAbs.

  5. HCV core protein induces hepatic lipid accumulation by activating SREBP1 and PPAR{gamma}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kook Hwan; Hong, Sung Pyo; Kim, KyeongJin

    2007-04-20

    Hepatic steatosis is a common feature in patients with chronic hepatitis C virus (HCV) infection. HCV core protein plays an important role in the development of hepatic steatosis in HCV infection. Because SREBP1 (sterol regulatory element binding protein 1) and PPAR{gamma} (peroxisome proliferator-activated receptor {gamma}) are involved in the regulation of hepatocyte lipid metabolism, we sought to determine whether HCV core protein may impair the expression and activity of SREBP1 and PPAR{gamma}. In this study, it was demonstrated that HCV core protein increases the gene expression of SREBP1 not only in Chang liver, Huh7, and HepG2 cells transiently transfected with an HCV core protein expression plasmid, but also in Chang liver-core stable cells. Furthermore, HCV core protein enhanced the transcriptional activity of SREBP1. In addition, HCV core protein elevated PPAR{gamma} transcriptional activity. However, HCV core protein had no effect on PPAR{gamma} gene expression. Finally, we showed that HCV core protein stimulates the gene expression of lipogenic enzymes and a fatty acid uptake-associated protein. Therefore, our finding provides a new insight into the mechanism of hepatic steatosis caused by HCV infection.

  6. Cost efficient CFD simulations: Proper selection of domain partitioning strategies

    NASA Astrophysics Data System (ADS)

    Haddadi, Bahram; Jordan, Christian; Harasek, Michael

    2017-10-01

    Computational Fluid Dynamics (CFD) is one of the most powerful simulation methods, used for temporally and spatially resolved solutions of fluid flow, heat transfer, mass transfer, etc. One of the challenges of CFD is its extreme hardware demand. Nowadays, supercomputers (e.g., High Performance Computing, HPC) featuring multiple CPU cores are applied for solving: the simulation domain is split into partitions, one for each core. Some of the available partitioning methods are investigated in this paper. As a practical example, a new open-source-based solver was utilized for simulating packed bed adsorption, a common separation method within the field of thermal process engineering. Adsorption can, for example, be applied for the removal of trace gases from a gas stream or for the production of pure gases such as hydrogen. For comparing the performance of the partitioning methods, a 60 million cell mesh for a packed bed of spherical adsorbents was created, and one second of the adsorption process was simulated. Different partitioning methods available in OpenFOAM® (Scotch, Simple, and Hierarchical) have been used with different numbers of sub-domains. The effects of the different methods and of the number of processor cores on the simulation speedup and energy consumption were investigated for two different hardware infrastructures (Vienna Scientific Clusters VSC 2 and VSC 3). As a general recommendation, an optimum number of cells per processor core was calculated. Optimized simulation speed, lower energy consumption and, consequently, the cost effects are reported here.
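
    A small Python sketch of the cells-per-core trade-off discussed above; the surface-to-volume communication model and its coefficient are illustrative assumptions, not the paper's measured data:

        # Rough estimate of parallel speedup for a 60-million-cell mesh, assuming the
        # per-core cost is local cell updates plus communication proportional to the
        # partition surface area, i.e. roughly (cells per core)**(2/3).
        MESH_CELLS = 60_000_000

        def estimated_speedup(n_cores, comm_coeff=20.0):
            cells_per_core = MESH_CELLS / n_cores
            per_core_cost = cells_per_core + comm_coeff * cells_per_core ** (2.0 / 3.0)
            return MESH_CELLS / per_core_cost

        for n in (64, 256, 1024, 4096):
            print(f"{n:5d} cores: {MESH_CELLS // n:9d} cells/core, "
                  f"estimated speedup ~ {estimated_speedup(n):.0f}")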

  7. PyNEST: A Convenient Interface to the NEST Simulator.

    PubMed

    Eppler, Jochen Martin; Helias, Moritz; Muller, Eilif; Diesmann, Markus; Gewaltig, Marc-Oliver

    2008-01-01

    The neural simulation tool NEST (http://www.nest-initiative.org) is a simulator for heterogeneous networks of point neurons or neurons with a small number of compartments. It aims at simulations of large neural systems with more than 10^4 neurons and 10^7 to 10^9 synapses. NEST is implemented in C++ and can be used on a wide range of architectures, from single-core laptops and multi-core desktop computers to supercomputers with thousands of processor cores. Python (http://www.python.org) is a modern programming language that has recently received considerable attention in Computational Neuroscience. Python is easy to learn and has many extension modules for scientific computing (e.g. http://www.scipy.org). In this contribution we describe PyNEST, the new user interface to NEST. PyNEST combines NEST's efficient simulation kernel with the simplicity and flexibility of Python. Compared to NEST's native simulation language SLI, PyNEST makes it easier to set up simulations, generate stimuli, and analyze simulation results. We describe how PyNEST connects NEST and Python and how it is implemented. With a number of examples, we illustrate how it is used.
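
    A minimal PyNEST usage sketch along the lines described above; it assumes a working NEST installation, and the model and recorder names follow common NEST examples (they may differ between NEST versions):

        import nest

        nest.ResetKernel()
        neurons = nest.Create("iaf_psc_alpha", 100)                        # 100 integrate-and-fire neurons
        noise = nest.Create("poisson_generator", params={"rate": 8000.0})  # background drive
        recorder = nest.Create("spike_recorder")                           # "spike_detector" in older NEST versions

        nest.Connect(noise, neurons, syn_spec={"weight": 10.0})
        nest.Connect(neurons, recorder)

        nest.Simulate(1000.0)                                              # simulate one second of activity
        print("recorded spikes:", recorder.get("n_events"))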

  8. Extending Moore's Law via Computationally Error Tolerant Computing.

    DOE PAGES

    Deng, Bobin; Srikanth, Sriseshan; Hein, Eric R.; ...

    2018-03-01

    Dennard scaling has ended. Lowering the voltage supply (Vdd) to sub-volt levels causes intermittent losses in signal integrity, rendering further scaling down no longer acceptable as a means to lower the power required by a processor core. However, it is possible to correct the occasional errors caused by lower Vdd in an efficient manner and effectively lower power. By deploying the right amount and kind of redundancy, we can strike a balance between the overhead incurred in achieving reliability and the energy savings realized by permitting lower Vdd. One promising approach is the Redundant Residue Number System (RRNS) representation. Unlike other error correcting codes, RRNS has the important property of being closed under addition, subtraction and multiplication, thus enabling computational error correction at a fraction of the overhead of conventional approaches. We use the RRNS scheme to design a Computationally-Redundant, Energy-Efficient core, including the microarchitecture, Instruction Set Architecture (ISA) and RRNS-centered algorithms. Finally, from the simulation results, this RRNS system can reduce the energy-delay product by about 3× for multiplication-intensive workloads and by about 2× in general, when compared to a non-error-correcting binary core.
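
    A small Python sketch of the closure property described above: arithmetic is carried out independently on each residue channel, and redundant moduli flag a corrupted channel. The moduli and range split are illustrative assumptions, not the paper's design:

        from math import prod

        MODULI = [13, 15, 16, 17, 19, 23]       # pairwise coprime; the last two act as redundant residues
        K = 4                                   # legitimate values lie below the product of the first K moduli
        LEGIT_RANGE = prod(MODULI[:K])

        def encode(x):
            return [x % m for m in MODULI]

        def crt(residues, mods):
            # Chinese Remainder Theorem reconstruction.
            M = prod(mods)
            return sum(r * (M // m) * pow(M // m, -1, m) for r, m in zip(residues, mods)) % M

        def decode_and_check(residues):
            x = crt(residues, MODULI)
            return x if x < LEGIT_RANGE else None   # out-of-range result signals an error

        a, b = 123, 45
        product = [(ra * rb) % m for ra, rb, m in zip(encode(a), encode(b), MODULI)]
        assert decode_and_check(product) == a * b    # channel-wise multiplication is exact

        corrupted = list(product)
        corrupted[2] ^= 0b101                        # flip bits in one residue channel
        print("corruption detected:", decode_and_check(corrupted) is None)   # True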

  9. PyNEST: A Convenient Interface to the NEST Simulator

    PubMed Central

    Eppler, Jochen Martin; Helias, Moritz; Muller, Eilif; Diesmann, Markus; Gewaltig, Marc-Oliver

    2008-01-01

    The neural simulation tool NEST (http://www.nest-initiative.org) is a simulator for heterogeneous networks of point neurons or neurons with a small number of compartments. It aims at simulations of large neural systems with more than 10^4 neurons and 10^7 to 10^9 synapses. NEST is implemented in C++ and can be used on a wide range of architectures, from single-core laptops and multi-core desktop computers to supercomputers with thousands of processor cores. Python (http://www.python.org) is a modern programming language that has recently received considerable attention in Computational Neuroscience. Python is easy to learn and has many extension modules for scientific computing (e.g. http://www.scipy.org). In this contribution we describe PyNEST, the new user interface to NEST. PyNEST combines NEST's efficient simulation kernel with the simplicity and flexibility of Python. Compared to NEST's native simulation language SLI, PyNEST makes it easier to set up simulations, generate stimuli, and analyze simulation results. We describe how PyNEST connects NEST and Python and how it is implemented. With a number of examples, we illustrate how it is used. PMID:19198667

  10. Extending Moore's Law via Computationally Error Tolerant Computing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Bobin; Srikanth, Sriseshan; Hein, Eric R.

    Dennard scaling has ended. Lowering the voltage supply (Vdd) to sub-volt levels causes intermittent losses in signal integrity, rendering further scaling down no longer acceptable as a means to lower the power required by a processor core. However, it is possible to correct the occasional errors caused by lower Vdd in an efficient manner and effectively lower power. By deploying the right amount and kind of redundancy, we can strike a balance between the overhead incurred in achieving reliability and the energy savings realized by permitting lower Vdd. One promising approach is the Redundant Residue Number System (RRNS) representation. Unlike other error correcting codes, RRNS has the important property of being closed under addition, subtraction and multiplication, thus enabling computational error correction at a fraction of the overhead of conventional approaches. We use the RRNS scheme to design a Computationally-Redundant, Energy-Efficient core, including the microarchitecture, Instruction Set Architecture (ISA) and RRNS-centered algorithms. Finally, from the simulation results, this RRNS system can reduce the energy-delay product by about 3× for multiplication-intensive workloads and by about 2× in general, when compared to a non-error-correcting binary core.

  11. On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods

    PubMed Central

    Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.

    2011-01-01

    We present a case study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups of 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data-rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276

  12. Scheduler for multiprocessor system switch with selective pairing

    DOEpatents

    Gara, Alan; Gschwind, Michael Karl; Salapura, Valentina

    2015-01-06

    System, method and computer program product for scheduling threads in a multiprocessing system with selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). The method configures the selective pairing facility to use checking to provide one highly reliable thread for high-reliability operation and to allocate threads to the corresponding processor cores that indicate a need for hardware checking. The method also configures the selective pairing facility to provide multiple independent cores and to allocate threads to the corresponding processor cores that indicate inherent resilience.

  13. Experimental and Theoretical Investigations on Viscosity of Fe-Ni-C Liquids at High Pressures

    NASA Astrophysics Data System (ADS)

    Chen, B.; Lai, X.; Wang, J.; Zhu, F.; Liu, J.; Kono, Y.

    2016-12-01

    Understanding and modeling of Earth's core processes such as the geodynamo and heat flow via convection in the liquid outer core hinge on the viscosity of candidate liquid iron alloys under core conditions. Viscosity estimates for the metallic liquid of the outer core from various methods, however, span up to 12 orders of magnitude. Due to experimental challenges, viscosity measurements of iron liquids alloyed with lighter elements are scarce and conducted at conditions far below those expected for the outer core. In this study, we adopt a synergistic approach by integrating experiments at experimentally achievable conditions with computations up to core conditions. We performed viscosity measurements based on the modified Stokes' floating-sphere viscometry method for Fe-Ni-C liquids at high pressures in a Paris-Edinburgh press at Sector 16 of the Advanced Photon Source, Argonne National Laboratory. Our results show that the addition of 3-5 wt.% carbon to iron-nickel liquids has a negligible effect on their viscosity at pressures lower than 5 GPa. The viscosity of the Fe-Ni-C liquids, however, becomes notably higher and increases by a factor of 3 at 5-8 GPa. Similarly, our first-principles molecular dynamics calculations up to Earth's core pressures show a viscosity change in Fe-Ni-C liquids at 5 GPa. The significant change in the viscosity is likely due to a liquid structural transition of the Fe-Ni-C liquids, as revealed by our X-ray diffraction measurements and first-principles molecular dynamics calculations. The observed correlation between the structure and physical properties of the liquids permits stringent benchmark tests of the computational liquid models and contributes to a more comprehensive understanding of liquid properties under high pressures. The interplay between experiments and first-principles-based modeling is shown to be a practical and effective methodology for studying liquid properties under outer core conditions that are difficult to reach with current static high-pressure capabilities. The new viscosity data from experiments and computations would provide new insights into the internal dynamics of the outer core.
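
    For reference, the classical Stokes relation behind falling/floating-sphere viscometry gives the viscosity from the sphere's terminal velocity; the study uses a modified version with additional corrections, and the numbers below are purely illustrative:

        # eta = 2 r^2 (rho_sphere - rho_liquid) g / (9 v), with illustrative values.
        g = 9.81              # gravitational acceleration, m/s^2
        radius = 50e-6        # sphere radius, m
        delta_rho = 1500.0    # density contrast between sphere and liquid, kg/m^3
        velocity = 2.0e-4     # measured terminal velocity, m/s

        viscosity = 2.0 * radius**2 * delta_rho * g / (9.0 * velocity)   # Pa·s
        print(f"estimated viscosity ~ {viscosity * 1e3:.1f} mPa·s")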

  14. Designing Computer-Based Assessments: Multidisciplinary Findings and Student Perspectives

    ERIC Educational Resources Information Center

    Dembitzer, Leah; Zelikovitz, Sarah; Kettler, Ryan J.

    2017-01-01

    A partnership was created between psychologists and computer programmers to develop a computer-based assessment program. Psychometric concerns of accessibility, reliability, and validity were juxtaposed with core development concepts of usability and user-centric design. Phases of development were iterative, with evaluation phases alternating with…

  15. SUMMARY OF EQUATIONS FOR EFFECT OF SHIP ATTITUDE AND SHIP MOTION ON PRIMARY COOLANT SYSTEM FLOW RATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, H.E. Jr.

    1960-02-16

    SYNFAR, the buckling, reflector savings, flux, and reactivity segments of the pilot code, was assembled, checked out, and placed in production status. A reduction of 50% in the computation time required for SYNFAR was achieved through incorporation of a convergence acceleration technique. Modification of SYNFAR to permit computation of dynamic flux and reactivity was made and the option was prepared for checkout. Details of the APWRC Error and Exit Diagnosis package and the APWRC Library Tape System are given. The latter was checked out except for the simultaneous tape shifting function. Digitalization of basic cross section data was completed for fifteen materials. The portion of the Cross Section Data Program which converts the punched card data to magnetic tape form, interpolating as necessary to obtain data at 1001 energy levels, was completed and checked out. The Breit-Wigner Analysis Program, used with the Cross Section Data Program, was checked out. A listing of the Fortran source program, containing definitions of terms used, flow diagrams, input data forms, and a sample calculation, is included. The theory and equations used to compute the scattering parameters, mu and xi, also used by the Cross Section Data Program, were developed. Checkout of the corresponding program, XIMU, was started. Theory and equations for computing an inelastic scattering matrix, for use with the Cross Section Data Program, were developed and a FORTRAN program for evaluating them was started. An analysis of the results of the experimental program was started using SYNFAR. Multiplication factors for the two cores studied, Nos. 453 and 454, agreed with the experimental value of 1.00 within 0.6%. The experimental program on Core 454 was completed. Experiments performed were determination of the temperature coefficient (-8.9 x 10^-5 Δk/k per degree centigrade at 35 deg C), the percentage of fissions by subcadmium neutrons (18%), intracell thermal flux measurements, and buckling measurements. Core 453 was assembled. The cold clean critical mass for this core was 17.5 kg of U-235 with 134.63 grams of natural boron in the core. A complete series of clean core experiments was performed on this core. Core 452 was also assembled. The critical mass for this core was 14.4 kg of U-235 with 83.14 grams of natural boron in the core. The critical experiment control rods were calibrated. Material and dimensional specifications of the homogeneous fuel elements were prepared. A number of sample blocks containing powdered stainless steel and lucite were pressed. Improvements in the process are being made in an attempt to minimize dimensional variations from block to block. (See also MND-E-2119.) (auth)

  16. Computer Science Teacher Professional Development in the United States: A Review of Studies Published between 2004 and 2014

    ERIC Educational Resources Information Center

    Menekse, Muhsin

    2015-01-01

    While there has been a remarkable interest to make computer science a core K-12 academic subject in the United States, there is a shortage of K-12 computer science teachers to successfully implement computer sciences courses in schools. In order to enhance computer science teacher capacity, training programs have been offered through teacher…

  17. Campus Computing, 1998. The Ninth National Survey of Desktop Computing and Information Technology in American Higher Education.

    ERIC Educational Resources Information Center

    Green, Kenneth C.

    This report presents findings of a June 1998 survey of computing officials at 1,623 two- and four-year U.S. colleges and universities concerning the use of computer technology. The survey found that computing and information technology (IT) are now core components of the campus environment and classroom experience. However, key aspects of IT…

  18. Analytical methods in the high conversion reactor core design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeggel, W.; Oldekop, W.; Axmann, J.K.

    High conversion reactor (HCR) design methods have been used at the Technical University of Braunschweig (TUBS) with the technological support of Kraftwerk Union (KWU). The present state and objectives of this cooperation between KWU and TUBS in the field of HCRs have been described using existing design models and current activities aimed at further development and validation of the codes. The hard physical and thermal-hydraulic boundary conditions of pressurized water reactor (PWR) cores with a high degree of fuel utilization result from the tight packing of the HCR fuel rods and the high fissionable plutonium content of the fuel. In terms of design, the problem will be solved with rod bundles whose fuel rods are adjusted by helical spacers to the proposed small rod pitches. These HCR properties require novel computational models for neutron physics, thermal hydraulics, and fuel rod design. By means of a survey of the codes, the analytical procedure for present-day HCR core design is presented. The design programs are currently under intensive development, as design tools with a solid, scientific foundation and with essential parameters that are widely valid and are required for a promising optimization of the HCR core. Design results and a survey of future HCR development are given. In this connection, the reoptimization of the PWR core in the direction of an HCR is considered a fascinating scientific task, with respect to both economic and safety aspects.

  19. Safety and core design of large liquid-metal cooled fast breeder reactors

    NASA Astrophysics Data System (ADS)

    Qvist, Staffan Alexander

    In light of the scientific evidence for changes in the climate caused by greenhouse-gas emissions from human activities, the world is in ever more desperate need of new, inexhaustible, safe and clean primary energy sources. A viable solution to this problem is the widespread adoption of nuclear breeder reactor technology. Innovative breeder reactor concepts using liquid-metal coolants such as sodium or lead will be able to utilize the waste produced by the current light water reactor fuel cycle to power the entire world for several centuries to come. Breed & burn (B&B) type fast reactor cores can unlock the energy potential of readily available fertile material such as depleted uranium without the need for chemical reprocessing. Using B&B technology, nuclear waste generation, uranium mining needs and proliferation concerns can be greatly reduced, and after a transitional period, enrichment facilities may no longer be needed. In this dissertation, new passively operating safety systems for fast reactors cores are presented. New analysis and optimization methods for B&B core design have been developed, along with a comprehensive computer code that couples neutronics, thermal-hydraulics and structural mechanics and enables a completely automated and optimized fast reactor core design process. In addition, an experiment that expands the knowledge-base of corrosion issues of lead-based coolants in nuclear reactors was designed and built. The motivation behind the work presented in this thesis is to help facilitate the widespread adoption of safe and efficient fast reactor technology.

  20. Present Status and Extensions of the Monte Carlo Performance Benchmark

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.

    2014-06-01

    The NEA Monte Carlo Performance benchmark started in 2011, aiming to monitor over the years the ability to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the results contributed thus far. It shows that reaching a statistical accuracy of 1 % for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common compute nodes. However, using true supercomputers the speedup of parallel calculations continues to increase up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict whether the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark test are well suited for further investigations of full-core Monte Carlo calculations, and a need is felt for testing issues other than computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities, and for studying the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition will be discussed.
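
    A back-of-the-envelope Python sketch of the 1/sqrt(N) statistics behind that number; the zone count below is illustrative of a full-core, pin-by-pin, axially resolved tally and is an assumption, not the benchmark specification:

        # To reach ~1 % relative error, a tally zone needs on the order of 1e4 scores;
        # spreading histories over millions of small zones then demands ~1e11 histories.
        target_rel_error = 0.01
        scores_needed_per_zone = (1.0 / target_rel_error) ** 2       # ~1e4 scores per zone

        n_zones = 241 * 264 * 100                                    # assemblies x pins x axial segments (illustrative)
        scores_per_history_per_zone = 1.0 / n_zones                  # crude uniform-scoring assumption

        total_histories = scores_needed_per_zone / scores_per_history_per_zone
        print(f"~{total_histories:.1e} histories needed")            # on the order of 1e11, i.e. ~100 billion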

  1. A fully parallel in time and space algorithm for simulating the electrical activity of a neural tissue.

    PubMed

    Bedez, Mathieu; Belhachmi, Zakaria; Haeberlé, Olivier; Greget, Renaud; Moussaoui, Saliha; Bouteiller, Jean-Marie; Bischoff, Serge

    2016-01-15

    The resolution of a model describing the electrical activity of neural tissue and its propagation within this tissue is highly demanding in terms of computing time and requires strong computing power to achieve good results. In this study, we present a method to solve a model describing electrical propagation in neuronal tissue using the parareal algorithm, coupled with spatial parallelization using CUDA on a graphics processing unit (GPU). We applied the resolution method to different dimensions of the geometry of our model (1-D, 2-D and 3-D). The GPU results are compared with simulations from a multi-core processor cluster, using the message-passing interface (MPI), where the spatial scale was parallelized in order to reach a calculation time comparable to that of the presented GPU method. A gain of a factor of 100 in computational time between the sequential results and those obtained using the GPU was achieved in the case of 3-D geometry. Given the structure of the GPU, this factor increases with the fineness of the geometry used in the computation. To the best of our knowledge, it is the first time such a method has been used, even in the case of neuroscience. Parallelization in time coupled with GPU parallelization in space allows for drastically reducing the computational time with a fine resolution of the model describing the propagation of the electrical signal in a neuronal tissue. Copyright © 2015 Elsevier B.V. All rights reserved.
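
    A minimal serial Python sketch of the parareal idea described above, applied to a scalar ODE; the per-slice fine solves in the first inner loop are the part that would be distributed across GPU threads or MPI ranks (illustrative propagators, not the authors' tissue model):

        import numpy as np

        f = lambda y: -y                       # right-hand side of dy/dt = -y
        T, n_slices, y0 = 5.0, 20, 1.0
        dt = T / n_slices

        def coarse(y, dt):                     # cheap propagator: one explicit Euler step
            return y + dt * f(y)

        def fine(y, dt, substeps=100):         # expensive propagator: many small Euler steps
            h = dt / substeps
            for _ in range(substeps):
                y = y + h * f(y)
            return y

        U = np.empty(n_slices + 1)
        U[0] = y0
        for n in range(n_slices):              # initial guess from the coarse propagator
            U[n + 1] = coarse(U[n], dt)

        for k in range(5):                     # parareal iterations
            F = np.array([fine(U[n], dt) for n in range(n_slices)])      # parallelizable in time
            G_old = np.array([coarse(U[n], dt) for n in range(n_slices)])
            new = U.copy()
            for n in range(n_slices):          # cheap sequential correction sweep
                new[n + 1] = coarse(new[n], dt) + F[n] - G_old[n]
            U = new

        print("parareal endpoint:", U[-1], " exact:", y0 * np.exp(-T))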

  2. Grand adventures: combining backcountry enthusiasts with machine learning to improve pollinator biodiversity and conservation across North America

    NASA Astrophysics Data System (ADS)

    Prudic, K.; Toshack, M.; Hickson, B.; Hutchinson, R.

    2017-12-01

    Far too many species do not have proven means of assessment or effective conservation across the globe. We must gain better insights into the biological, environmental, and behavioral influences on the health and wealth of biodiversity to make a difference for various species and habitats as the environment changes due to human activities. Pollinator biodiversity information necessary for conservation is difficult to collect at a local level, let alone across a continent. In particular, what are pollinators doing in more remote locations across elevational clines, and how is climate change affecting them? Here we showcase a citizen-science project that takes advantage of the human ability to catch and photograph butterflies and their nectar plants, coupled with machine learning to identify species, phenology shifts and diversity hotspots. We use this combined approach of human-computer collaboration to represent patterns of pollinator and nectar plant occurrences and diversity across broad spatial and temporal scales. We also improve data quality by taking advantage of the synergies between human computation and mechanical computation. We call this a human-machine learning network, whose core is an active learning feedback loop between humans and computers. We explore how this approach can leverage the contributions of human observers and process their contributed data with artificial intelligence algorithms, leading to a computational power that far exceeds the sum of the individual parts and providing important data products and visualizations for pollinator conservation research across a continent.

  3. Large Scale Document Inversion using a Multi-threaded Computing System

    PubMed Central

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2018-01-01

    Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general purpose computing. We can utilize the GPU in computation as a massively parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays, a huge volume of information is flooding into the digital domain around the world: data such as digital libraries, social networking services, e-commerce product data, and reviews are produced or collected every moment, with dramatic growth in size. Although the inverted index is a useful data structure that can be used for full text searches or document retrieval, a large number of documents requires a tremendous amount of time to create the index. The performance of document inversion can be improved by a multi-thread or multi-core GPU. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU to develop high-performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. CCS Concepts: •Information systems➝Information retrieval •Computing methodologies➝Massively parallel and high-performance simulations. PMID:29861701
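
    A simple hash-based inverted-index sketch in Python, the sequential counterpart of the SPMD/GPU approach described above (naive whitespace tokenization, for illustration only):

        from collections import defaultdict

        def build_inverted_index(documents):
            index = defaultdict(set)                     # term -> set of document ids
            for doc_id, text in enumerate(documents):
                for token in text.lower().split():
                    index[token].add(doc_id)
            return index

        docs = ["GPU cores accelerate document inversion",
                "inverted index structures support full text search",
                "multi-core CPUs and GPUs for general purpose computing"]
        index = build_inverted_index(docs)
        print(sorted(index["gpu"] | index["gpus"]))      # documents mentioning either token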

  4. Large Scale Document Inversion using a Multi-threaded Computing System.

    PubMed

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2017-06-01

    Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general purpose computing. We can utilize the GPU in computation as a massively parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays, a huge volume of information is flooding into the digital domain around the world: data such as digital libraries, social networking services, e-commerce product data, and reviews are produced or collected every moment, with dramatic growth in size. Although the inverted index is a useful data structure that can be used for full text searches or document retrieval, a large number of documents requires a tremendous amount of time to create the index. The performance of document inversion can be improved by a multi-thread or multi-core GPU. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU to develop high-performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. •Information systems➝Information retrieval •Computing methodologies➝Massively parallel and high-performance simulations.

  5. GPU accelerated dynamic functional connectivity analysis for functional MRI data.

    PubMed

    Akgün, Devrim; Sakoğlu, Ünal; Esquivel, Johnny; Adinoff, Bryon; Mete, Mutlu

    2015-07-01

    Recent advances in multi-core processors and graphics-card-based computational technologies have paved the way for an improved and dynamic utilization of parallel computing techniques. Numerous applications have been implemented for the acceleration of computationally intensive problems in various computational science fields, including bioinformatics, in which big data problems are prevalent. In neuroimaging, dynamic functional connectivity (DFC) analysis is a computationally demanding method used to investigate dynamic functional interactions among different brain regions or networks identified with functional magnetic resonance imaging (fMRI) data. In this study, we implemented and analyzed a parallel DFC algorithm based on thread-based and block-based approaches. The thread-based approach was designed to parallelize DFC computations and was implemented in both the Open Multi-Processing (OpenMP) and Compute Unified Device Architecture (CUDA) programming platforms. Another approach developed in this study to better utilize the CUDA architecture is the block-based approach, where parallelization involves smaller parts of the fMRI time courses obtained by sliding windows. Experimental results showed that the proposed parallel design solutions enabled by GPUs significantly reduce the computation time for DFC analysis. The multicore implementation using OpenMP on an 8-core processor provides up to a 7.7× speed-up. The GPU implementation using CUDA yielded substantial accelerations ranging from 18.5× to 157× speed-up once the thread-based and block-based approaches were combined in the analysis. The proposed parallel programming solutions showed that multi-core processor and CUDA-supported GPU implementations accelerate DFC analyses significantly. The developed algorithms make DFC analyses more practical for multi-subject studies with more dynamic analyses. Copyright © 2015 Elsevier Ltd. All rights reserved.
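
    A serial NumPy sketch of the sliding-window correlation at the heart of DFC analysis (the computation that OpenMP/CUDA parallelize across windows); the data here are synthetic and the window parameters are illustrative:

        import numpy as np

        n_regions, n_timepoints, window, step = 10, 300, 30, 5
        ts = np.random.randn(n_timepoints, n_regions)            # stand-in for regional fMRI time courses

        dfc = []
        for start in range(0, n_timepoints - window + 1, step):
            segment = ts[start:start + window]                   # one sliding window
            dfc.append(np.corrcoef(segment, rowvar=False))       # region-by-region correlation matrix
        dfc = np.stack(dfc)                                      # shape: (n_windows, n_regions, n_regions)
        print(dfc.shape)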

  6. Patient-specific core decompression surgery for early-stage ischemic necrosis of the femoral head

    PubMed Central

    Wang, Wei; Hu, Wei; Yang, Pei; Dang, Xiao Qian; Li, Xiao Hui; Wang, Kun Zheng

    2017-01-01

    Introduction: Core decompression is an efficient treatment for early stage ischemic necrosis of the femoral head. In conventional procedures, the pre-operative X-ray only shows one plane of the ischemic area, which often results in inaccurate drilling. This paper introduces a new method that uses computer-assisted technology and rapid prototyping to enhance drilling accuracy during core decompression surgeries and presents a validation study of cadaveric tests. Methods: Twelve cadaveric human femurs were used to simulate early-stage ischemic necrosis. The core decompression target at the anterolateral femoral head was simulated using an embedded glass ball (target). Three positioning Kirschner wires were drilled into the top and bottom of the large rotor. The specimen was then subjected to computed tomography (CT). A CT image of the specimen was imported into the Mimics software to construct a three-dimensional model including the target. The best core decompression channel was then designed using the 3D model. A navigational template for the specimen was designed using the Pro/E software and manufactured by rapid prototyping technology to guide the drilling channel. The specimen-specific navigation template was installed on the specimen using positioning Kirschner wires. Drilling was performed using a guide needle through the guiding hole on the templates. The distance between the end point of the guide needle and the target was measured to validate the patient-specific surgical accuracy. Results: The average distance between the tip of the guide needle drilled through the guiding template and the target was 1.92±0.071 mm. Conclusions: Core decompression using a computer-rapid prototyping template is a reliable and accurate technique that could provide a new method of precision decompression for early-stage ischemic necrosis. PMID:28464029

  7. Patient-specific core decompression surgery for early-stage ischemic necrosis of the femoral head.

    PubMed

    Wang, Wei; Hu, Wei; Yang, Pei; Dang, Xiao Qian; Li, Xiao Hui; Wang, Kun Zheng

    2017-01-01

    Core decompression is an efficient treatment for early stage ischemic necrosis of the femoral head. In conventional procedures, the pre-operative X-ray only shows one plane of the ischemic area, which often results in inaccurate drilling. This paper introduces a new method that uses computer-assisted technology and rapid prototyping to enhance drilling accuracy during core decompression surgeries and presents a validation study of cadaveric tests. Twelve cadaveric human femurs were used to simulate early-stage ischemic necrosis. The core decompression target at the anterolateral femoral head was simulated using an embedded glass ball (target). Three positioning Kirschner wires were drilled into the top and bottom of the large rotor. The specimen was then subjected to computed tomography (CT). A CT image of the specimen was imported into the Mimics software to construct a three-dimensional model including the target. The best core decompression channel was then designed using the 3D model. A navigational template for the specimen was designed using the Pro/E software and manufactured by rapid prototyping technology to guide the drilling channel. The specimen-specific navigation template was installed on the specimen using positioning Kirschner wires. Drilling was performed using a guide needle through the guiding hole on the templates. The distance between the end point of the guide needle and the target was measured to validate the patient-specific surgical accuracy. The average distance between the tip of the guide needle drilled through the guiding template and the target was 1.92±0.071 mm. Core decompression using a computer-rapid prototyping template is a reliable and accurate technique that could provide a new method of precision decompression for early-stage ischemic necrosis.

  8. Multi-scale structure and topological anomaly detection via a new network statistic: The onion decomposition.

    PubMed

    Hébert-Dufresne, Laurent; Grochow, Joshua A; Allard, Antoine

    2016-08-18

    We introduce a network statistic that measures structural properties at the micro-, meso-, and macroscopic scales, while still being easy to compute and interpretable at a glance. Our statistic, the onion spectrum, is based on the onion decomposition, which refines the k-core decomposition, a standard network fingerprinting method. The onion spectrum is exactly as easy to compute as the k-cores: It is based on the stages at which each vertex gets removed from a graph in the standard algorithm for computing the k-cores. Yet, the onion spectrum reveals much more information about a network, and at multiple scales; for example, it can be used to quantify node heterogeneity, degree correlations, centrality, and tree- or lattice-likeness. Furthermore, unlike the k-core decomposition, the combined degree-onion spectrum immediately gives a clear local picture of the network around each node which allows the detection of interesting subgraphs whose topological structure differs from the global network organization. This local description can also be leveraged to easily generate samples from the ensemble of networks with a given joint degree-onion distribution. We demonstrate the utility of the onion spectrum for understanding both static and dynamic properties on several standard graph models and on many real-world networks.
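
    A compact Python sketch of that peeling procedure: the standard k-core algorithm is run, and the pass on which each vertex is removed defines its layer (a simplified illustration consistent with the description above, not the authors' reference implementation):

        def onion_decomposition(adjacency):
            """Return (coreness, layer) dicts for an undirected graph given as {node: set(neighbors)}."""
            adj = {u: set(vs) for u, vs in adjacency.items()}   # work on a copy
            coreness, layer_of = {}, {}
            layer, core = 0, 0
            while adj:
                layer += 1
                core = max(core, min(len(vs) for vs in adj.values()))
                this_layer = [u for u, vs in adj.items() if len(vs) <= core]
                for u in this_layer:
                    coreness[u], layer_of[u] = core, layer
                for u in this_layer:                            # peel the layer off the graph
                    for v in adj[u]:
                        if v in adj:
                            adj[v].discard(u)
                    del adj[u]
            return coreness, layer_of

        # Triangle with a pendant vertex: the pendant sits in layer 1 (coreness 1),
        # the triangle is peeled as layer 2 (coreness 2).
        graph = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}
        print(onion_decomposition(graph))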

  9. Development of massive multilevel molecular dynamics simulation program, Platypus (PLATform for dYnamic Protein Unified Simulation), for the elucidation of protein functions.

    PubMed

    Takano, Yu; Nakata, Kazuto; Yonezawa, Yasushige; Nakamura, Haruki

    2016-05-05

    A massively parallel program for quantum mechanical-molecular mechanical (QM/MM) molecular dynamics simulation, called Platypus (PLATform for dYnamic Protein Unified Simulation), was developed to elucidate protein functions. The speedup and parallelization ratio of Platypus in QM and QM/MM calculations were assessed for a bacteriochlorophyll dimer in the photosynthetic reaction center (DIMER) on the K computer, a massively parallel computer achieving 10 PetaFLOPS with 705,024 cores. Platypus exhibited increasing speedup up to 20,000 processor cores for the HF/cc-pVDZ and B3LYP/cc-pVDZ calculations, and up to 10,000 processor cores for the CASCI(16,16)/6-31G** calculations. We also performed excited-state QM/MM-MD simulations on the chromophore of Sirius (SIRIUS) in water. Sirius is a pH-insensitive and photo-stable ultramarine fluorescent protein. Platypus accelerated on-the-fly excited-state QM/MM-MD simulations for SIRIUS in water using over 4000 processor cores. In addition, it also succeeded in 50-ps (200,000-step) on-the-fly excited-state QM/MM-MD simulations for SIRIUS in water. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.

  10. A non-local mixing-length theory able to compute core overshooting

    NASA Astrophysics Data System (ADS)

    Gabriel, M.; Belkacem, K.

    2018-04-01

    Turbulent convection is certainly one of the most important and thorny issues in stellar physics. Our deficient knowledge of this crucial physical process introduces a fairly large uncertainty concerning the internal structure and evolution of stars. A striking example is overshoot at the edge of convective cores. Indeed, nearly all stellar evolutionary codes treat the overshooting zones in a very approximative way that considers both their extent and the profile of the temperature gradient as free parameters. There are only a few sophisticated theories of stellar convection, such as Reynolds stress approaches, but they also require the adjustment of a non-negligible number of free parameters. We present here a theory based on the plume theory as well as on the mean-field equations, but without relying on the usual Taylor's closure hypothesis. It leads us to a set of eight differential equations plus a few algebraic ones. Our theory is essentially a non-local mixing-length theory. It enables us to compute the temperature gradient in a shrinking convective core and its overshooting zone. The case of an expanding convective core is also discussed, though more briefly. Numerical simulations have quickly improved during recent years, enabling us to foresee that they will probably soon provide a model of convection adapted to the computation of 1D stellar models.

  11. Offshore survey provides answers to coastal stability and potential offshore extensions of landslides into Abalone Cove, Palos Verdes peninsula, Calif

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dill, R.F.; Slosson, J.E.

    1993-04-01

    The configuration and stability of the present coastline near Abalone Cove, on the south side of the Palos Verdes Peninsula, California, is related to the geology, oceanographic conditions, and recent and ancient landslide activity. This case study utilizes offshore high-resolution seismic profiles, side-scan sonar, diving, and coring to relate marine geology to the stability of a coastal region with known active landslides, utilizing a desktop computer and off-the-shelf software. Electronic navigation provided precise positioning that, when applied to computer-generated charts, permitted correlation of the survey data needed to define the offshore geology and sea floor sediment patterns. A Macintosh desktop computer and commercially available off-the-shelf software provided the analytical tools for constructing a base chart and a means to superimpose template overlays of topography, isopachs or sediment thickness, bottom roughness and sediment distribution patterns. This composite map of offshore geology and oceanography was then related to an extensive engineering and geological land study of the coastal zone forming Abalone Cove, an area of active landslides. Vibrocoring provided ground sediment data for the high-resolution seismic traverses. This paper details the systems used, presents findings relative to potential landslide movements and coastal erosion, and discusses how conclusions were reached to determine whether or not onshore landslide failures extend offshore.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehtola, Susi; Parkhill, John; Head-Gordon, Martin

    Novel implementations based on dense tensor storage are presented here for the singlet-reference perfect quadruples (PQ) [J. A. Parkhill et al., J. Chem. Phys. 130, 084101 (2009)] and perfect hextuples (PH) [J. A. Parkhill and M. Head-Gordon, J. Chem. Phys. 133, 024103 (2010)] models. The methods are obtained as block decompositions of conventional coupled-cluster theory that are exact for four electrons in four orbitals (PQ) and six electrons in six orbitals (PH), but that can also be applied to much larger systems. PQ and PH have storage requirements that scale as the square, and as the cube of the number of active electrons, respectively, and exhibit quartic scaling of the computational effort for large systems. Applications of the new implementations are presented for full-valence calculations on linear polyenes (CnHn+2), which highlight the excellent computational scaling of the present implementations that can routinely handle active spaces of hundreds of electrons. The accuracy of the models is studied in the π space of the polyenes, in hydrogen chains (H50), and in the π space of polyacene molecules. In all cases, the results compare favorably to density matrix renormalization group values. With the novel implementation of PQ, active spaces of 140 electrons in 140 orbitals can be solved in a matter of minutes on a single-core workstation, and the relatively low polynomial scaling means that very large systems are also accessible using parallel computing.

  13. Identification and Addressing Reduction-Related Misconceptions

    ERIC Educational Resources Information Center

    Gal-Ezer, Judith; Trakhtenbrot, Mark

    2016-01-01

    Reduction is one of the key techniques used for problem-solving in computer science. In particular, in the theory of computation and complexity (TCC), mapping and polynomial reductions are used for analysis of decidability and computational complexity of problems, including the core concept of NP-completeness. Reduction is a highly abstract…

  14. Achieving High Performance with FPGA-Based Computing

    PubMed Central

    Herbordt, Martin C.; VanCourt, Tom; Gu, Yongfeng; Sukhwani, Bharat; Conti, Al; Model, Josh; DiSabello, Doug

    2011-01-01

    Numerous application areas, including bioinformatics and computational biology, demand increasing amounts of processing capability. In many cases, the computation cores and data types are suited to field-programmable gate arrays. The challenge is identifying the design techniques that can extract high performance potential from the FPGA fabric. PMID:21603088

  15. Learning Motivation in E-Learning Facilitated Computer Programming Courses

    ERIC Educational Resources Information Center

    Law, Kris M. Y.; Lee, Victor C. S.; Yu, Y. T.

    2010-01-01

    Computer programming skills constitute one of the core competencies that graduates from many disciplines, such as engineering and computer science, are expected to possess. Developing good programming skills typically requires students to do a lot of practice, which cannot sustain unless they are adequately motivated. This paper reports a…

  16. The Digital Workforce: Update, August 2000 [and] The Digital Work Force: State Data & Rankings, September 2000.

    ERIC Educational Resources Information Center

    Sargent, John

    The Office of Technology Policy analyzed Bureau of Labor Statistics' growth projections for the core occupational classifications of IT (information technology) workers to assess future demand in the United States. Classifications studied were computer engineers, systems analysts, computer programmers, database administrators, computer support…

  17. USRA/RIACS

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph

    1992-01-01

    The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on 6 June 1983. RIACS is privately operated by USRA, a consortium of universities with research programs in the aerospace sciences, under a cooperative agreement with NASA. The primary mission of RIACS is to provide research and expertise in computer science and scientific computing to support the scientific missions of NASA ARC. The research carried out at RIACS must change its emphasis from year to year in response to NASA ARC's changing needs and technological opportunities. A flexible scientific staff is provided through a university faculty visitor program, a post doctoral program, and a student visitor program. Not only does this provide appropriate expertise but it also introduces scientists outside of NASA to NASA problems. A small group of core RIACS staff provides continuity and interacts with an ARC technical monitor and scientific advisory group to determine the RIACS mission. RIACS activities are reviewed and monitored by a USRA advisory council and ARC technical monitor. Research at RIACS is currently being done in the following areas: Parallel Computing; Advanced Methods for Scientific Computing; Learning Systems; High Performance Networks and Technology; Graphics, Visualization, and Virtual Environments.

  18. The Transition to a Many-core World

    NASA Astrophysics Data System (ADS)

    Mattson, T. G.

    2012-12-01

    The need to increase performance within a fixed energy budget has pushed the computer industry to many-core processors. This is grounded in the physics of computing and is not a trend that will just go away. It is hard to overestimate the profound impact of many-core processors on software developers. Virtually every facet of the software development process will need to change to adapt to these new processors. In this talk, we will look at many-core hardware and consider its evolution from a perspective grounded in the CPU. We will show that the number of cores will inevitably increase, but in addition, a quest to maximize performance per watt will push these cores to be heterogeneous. We will show that the inevitable result of these changes is a computing landscape where the distinction between the CPU and the GPU is blurred. We will then consider the much more pressing problem of software in a many-core world. Writing software for heterogeneous many-core processors is well beyond the ability of current programmers. One solution is to support a software development process where programmer teams are split into two distinct groups: a large group of domain-expert productivity programmers and a much smaller team of computer-scientist efficiency programmers. The productivity programmers work in terms of high-level frameworks to express the concurrency in their problems while avoiding any details of how that concurrency is exploited. The second group, the efficiency programmers, map applications expressed in terms of these frameworks onto the target many-core system. In other words, we can solve the many-core software problem by creating a software infrastructure that only requires a small subset of programmers to become master parallel programmers. This is different from the discredited dream of automatic parallelism. Note that productivity programmers still need to define the architecture of their software in a way that exposes the concurrency inherent in their problem. We submit that domain-expert programmers understand "what is concurrent". The parallel programming problem emerges from the complexity of "how that concurrency is utilized" on real hardware. The research described in this talk was carried out in collaboration with the ParLab at UC Berkeley. We use a design pattern language to define the high-level frameworks exposed to domain-expert productivity programmers. We then use tools from the SEJITS project (Selective Embedded Just-In-Time Specializers) to build the software transformation tool chains that turn these framework-oriented designs into highly efficient code. The final ingredient is a software platform to serve as a target for these tools. One such platform is the OpenCL industry standard for programming heterogeneous systems. We will briefly describe OpenCL and show how it provides a vendor-neutral software target for current and future many-core systems: CPU-based, GPU-based, and heterogeneous combinations of the two.

  19. News on Seeking Gaia's Astrometric Core Solution with AGIS

    NASA Astrophysics Data System (ADS)

    Lammers, U.; Lindegren, L.

    We report on recent new developments around the Astrometric Global Iterative Solution (AGIS) system. These include the availability of an efficient Conjugate Gradient solver and the Generic Astrometric Calibration scheme that was proposed a while ago. The number of primary stars to be included in the core solution is now believed to be significantly higher than the 100 million that served as the baseline until now. Cloud computing services are being studied as a possible cost-effective alternative to running AGIS on dedicated computing hardware at ESAC during the operational phase.
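
    For illustration, a minimal dense conjugate-gradient solver in Python is sketched below; AGIS itself solves an enormous sparse least-squares system with a preconditioned CG variant, so this only shows the basic iteration on a small random symmetric positive-definite test system:

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
            x = np.zeros_like(b)
            r = b - A @ x
            p = r.copy()
            rs_old = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs_old / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return x

        M = np.random.rand(50, 50)
        A = M @ M.T + 50.0 * np.eye(50)          # symmetric positive definite
        b = np.random.rand(50)
        x = conjugate_gradient(A, b)
        print("residual norm:", np.linalg.norm(A @ x - b))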

  20. The Relationship of Core Strength and Activation and Performance on Three Functional Movement Screens.

    PubMed

    Johnson, Caleb D; Whitehead, Paul N; Pletcher, Erin R; Faherty, Mallory S; Lovalekar, Mita T; Eagle, Shawn R; Keenan, Karen A

    2018-04-01

    Johnson, CD, Whitehead, PN, Pletcher, ER, Faherty, MS, Lovalekar, MT, Eagle, SR, and Keenan, KA. The relationship of core strength and activation and performance on three functional movement screens. J Strength Cond Res 32(4): 1166-1173, 2018-Current measures of core stability used by clinicians and researchers suffer from several shortcomings. Three functional movement screens appear, at face-value, to be dependent on the ability to activate and control core musculature. These 3 screens may present a viable alternative to current measures of core stability. Thirty-nine subjects completed a deep squat, trunk stability push-up, and rotary stability screen. Scores on the 3 screens were summed to calculate a composite score (COMP). During the screens, muscle activity was collected to determine the length of time that the bilateral erector spinae, rectus abdominis, external oblique, and gluteus medius muscles were active. Strength was assessed for core muscles (trunk flexion and extension, trunk rotation, and hip abduction and adduction) and accessory muscles (knee flexion and extension and pectoralis major). Two ordinal logistic regression equations were calculated with COMP as the outcome variable, and: (a) core strength and accessory strength, (b) only core strength. The first model was significant in predicting COMP (p = 0.004) (Pearson's Chi-Square = 149.132, p = 0.435; Nagelkerke's R-Squared = 0.369). The second model was significant in predicting COMP (p = 0.001) (Pearson's Chi-Square = 148.837, p = 0.488; Nagelkerke's R-Squared = 0.362). The core muscles were found to be active for most screens, with percentages of "time active" for each muscle ranging from 54-86%. In conclusion, performance on the 3 screens is predicted by core strength, even when accounting for "accessory" strength variables. Furthermore, it seems the screens elicit wide-ranging activation of core muscles. Although more investigation is needed, these screens, collectively, seem to be a good assessment of core strength.

  1. Topological Defects in a Living Nematic Ensnare Swimming Bacteria

    NASA Astrophysics Data System (ADS)

    Genkin, Mikhail M.; Sokolov, Andrey; Lavrentovich, Oleg D.; Aranson, Igor S.

    2017-01-01

    Active matter exemplified by suspensions of motile bacteria or synthetic self-propelled particles exhibits a remarkable propensity to self-organization and collective motion. The local input of energy and simple particle interactions often lead to complex emergent behavior manifested by the formation of macroscopic vortices and coherent structures with long-range order. A realization of an active system has been conceived by combining swimming bacteria and a lyotropic liquid crystal. Here, by coupling the well-established and validated model of nematic liquid crystals with the bacterial dynamics, we develop a computational model describing intricate properties of such a living nematic. In faithful agreement with the experiment, the model reproduces the onset of periodic undulation of the director and consequent proliferation of topological defects with the increase in bacterial concentration. It yields a testable prediction on the accumulation of bacteria in the cores of +1/2 topological defects and depletion of bacteria in the cores of -1/2 defects. Our dedicated experiment on motile bacteria suspended in a freestanding liquid crystalline film fully confirms this prediction. Our findings suggest novel approaches for trapping and transport of bacteria and synthetic swimmers in anisotropic liquids and extend the scope of tools to control and manipulate microscopic objects in active matter.

  2. Ab Initio Computations and Active Thermochemical Tables Hand in Hand: Heats of Formation of Core Combustion Species.

    PubMed

    Klippenstein, Stephen J; Harding, Lawrence B; Ruscic, Branko

    2017-09-07

    The fidelity of combustion simulations is strongly dependent on the accuracy of the underlying thermochemical properties for the core combustion species that arise as intermediates and products in the chemical conversion of most fuels. High level theoretical evaluations are coupled with a wide-ranging implementation of the Active Thermochemical Tables (ATcT) approach to obtain well-validated high fidelity predictions for the 0 K heat of formation for a large set of core combustion species. In particular, high level ab initio electronic structure based predictions are obtained for a set of 348 C, N, O, and H containing species, which corresponds to essentially all core combustion species with 34 or fewer electrons. The theoretical analyses incorporate various high level corrections to base CCSD(T)/cc-pVnZ analyses (n = T or Q) using H2, CH4, H2O, and NH3 as references. Corrections for the complete-basis-set limit, higher-order excitations, anharmonic zero-point energy, core-valence, relativistic, and diagonal Born-Oppenheimer effects are ordered in decreasing importance. Independent ATcT values are presented for a subset of 150 species. The accuracy of the theoretical predictions is explored through (i) examination of the magnitude of the various corrections, (ii) comparisons with other high level calculations, and (iii) comparison with the ATcT values. The estimated 2σ uncertainties of the three methods devised here, ANL0, ANL0-F12, and ANL1, are in the range of ±1.0-1.5 kJ/mol for single-reference and moderately multireference species, for which the calculated higher order excitations are 5 kJ/mol or less. In addition to providing valuable references for combustion simulations, the subsequent inclusion of the current theoretical results into the ATcT thermochemical network is expected to significantly improve the thermochemical knowledge base for less-well studied species.
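
    Schematically, composite approaches of the kind described above assemble the electronic energy of each species from a base coupled-cluster result plus a hierarchy of smaller corrections, and the 0 K heat of formation then follows from reaction energies against the reference species (H2, CH4, H2O, NH3). A hedged sketch of such a decomposition is shown below; the exact terms, labels, and bookkeeping of the ANL0/ANL0-F12/ANL1 protocols may differ.

```latex
E_{\mathrm{composite}} \;\approx\;
    E_{\mathrm{CCSD(T)/CBS}}      % base energy extrapolated to the complete-basis-set limit
  + \Delta E_{\mathrm{HO}}        % higher-order excitations beyond CCSD(T)
  + \Delta E_{\mathrm{ZPE,anh}}   % anharmonic zero-point energy
  + \Delta E_{\mathrm{CV}}        % core-valence correlation
  + \Delta E_{\mathrm{rel}}       % scalar relativistic effects
  + \Delta E_{\mathrm{DBOC}}      % diagonal Born-Oppenheimer correction
```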

  3. Magnetic susceptibility as an indicator to paleo-environmental pollution in an urban lagoon near Istanbul city

    NASA Astrophysics Data System (ADS)

    Alpar, Bedri; Unlu, Selma; Altinok, Yildiz; Ongen, Sinan

    2014-05-01

    For assessing anthropogenic pollution, magnetic susceptibility profiles and accompanying data were measured along three short cores recovered at the southern part of an urban lagoon; Kucukcekmece, Istanbul, Turkey. This marine inlet, connected to the Sea of Marmara by a very narrow channel, was used as a drinking water reservoir 40-50 years ago before it was contaminated by municipal, agricultural and industrial activities, mainly carried by three streams feeding the lagoon. The magnetic signals decrease gradually from the lake bottom towards the core base showing some characteristic anomalies. These signatures were tested as an environmental magnetic parameter against the lithological diversity (silici-clastic, total organic matter and carbonate), metal enrichments with larger variations (Pb, Mn, Zn, Ni, Co, Cr, U and Al) and probable hydrocarbon contamination. Mineral assemblage was determined by a computer driven X-ray diffractometer. The heavy metal concentrations and total petroleum hydrocarbons (TPH) were measured by ICP-MS and UVF spectrometry, respectively. Magnetic susceptibility shows slightly higher values in interlayers containing higher silici-clastic material and organic content which may suggest first-order changes in the relative supplies of terrigenous and biogenic materials. On the basis of cluster analyses, enhanced magnetic signals could be correlated with the elevated concentrations of Co, Zn, U, Pb and TPH along the cores. The Pb concentrations at the upper parts of the cores were higher than the "Severe Effect Level" and could pose a potential risk for living organisms. Greater amounts of organic carbon tend to accumulate in muddy sediments. In fact, there are a few studies reporting some relationship between enhanced magnetic signals and organic contamination mainly due to petroleum aromatic hydrocarbons. In conclusion, the magnetic susceptibility changes in sedimentary depositional environments could be used as a rapid and cost-effective tool in identification of silici-clastic content, enrichment of some metals (iron cycling and bacterial activity) and increased TPH concentrations in hydrocarbon contaminated sediments along the cores.

  4. From the molecular structure to spectroscopic and material properties: computational investigation of a bent-core nematic liquid crystal.

    PubMed

    Greco, Cristina; Marini, Alberto; Frezza, Elisa; Ferrarini, Alberta

    2014-05-19

    We present a computational investigation of the nematic phase of the bent-core liquid crystal A131. We use an integrated approach that bridges density functional theory calculations of molecular geometry and torsional potentials to elastic properties through the molecular conformational and orientational distribution function. This unique capability to simultaneously access different length scales enables us to consistently describe molecular and material properties. We can reassign (13)C NMR chemical shifts and analyze the dependence of phase properties on molecular shape. Focusing on the elastic constants we can draw some general conclusions on the unconventional behavior of bent-core nematics and highlight the crucial role of a properly-bent shape. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. fissioncore: A desktop-computer simulation of a fission-bomb core

    NASA Astrophysics Data System (ADS)

    Cameron Reed, B.; Rohe, Klaus

    2014-10-01

    A computer program, fissioncore, has been developed to deterministically simulate the growth of the number of neutrons within an exploding fission-bomb core. The program allows users to explore the dependence of criticality conditions on parameters such as nuclear cross-sections, core radius, number of secondary neutrons liberated per fission, and the distance between nuclei. Simulations clearly illustrate the existence of a critical radius given a particular set of parameter values, as well as how the exponential growth of the neutron population (the condition that characterizes criticality) depends on these parameters. No understanding of neutron diffusion theory is necessary to appreciate the logic of the program or the results. The code is freely available in FORTRAN, C, and Java and is configured so that modifications to accommodate more refined physical conditions are possible.
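
    The program itself is not reproduced here, but the flavor of such a simulation can be conveyed with a toy generational model: a neutron either escapes the core or interacts and releases ν secondaries, so the population grows or dies out geometrically with an effective multiplication factor that depends on the core radius and the mean free path. The sketch below is an illustrative assumption only, not the fissioncore algorithm, and the numbers are arbitrary.

```python
import numpy as np

def multiplication_factor(radius_cm, mean_free_path_cm, nu=2.5):
    """Toy estimate: a neutron interacts before escaping with probability
    1 - exp(-R/lambda); each interaction releases nu secondary neutrons."""
    p_interact = 1.0 - np.exp(-radius_cm / mean_free_path_cm)
    return nu * p_interact

def neutron_population(radius_cm, mean_free_path_cm, generations=20, n0=1.0):
    """Neutron count per generation under the toy model (geometric growth or decay)."""
    k = multiplication_factor(radius_cm, mean_free_path_cm)
    return n0 * k ** np.arange(generations)

# The population grows when k > 1 (supercritical) and decays when k < 1.
for R in (2.0, 6.0, 12.0):                          # hypothetical core radii in cm
    k = multiplication_factor(R, mean_free_path_cm=9.0)
    n_final = neutron_population(R, mean_free_path_cm=9.0)[-1]
    print(f"R = {R:5.1f} cm   k = {k:.2f}   N after 20 generations = {n_final:.3g}")
```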

  6. Fast Image Subtraction Using Multi-cores and GPUs

    NASA Astrophysics Data System (ADS)

    Hartung, Steven; Shukla, H.

    2013-01-01

    Many important image processing techniques in astronomy require a massive number of computations per pixel. Among them is an image differencing technique known as Optimal Image Subtraction (OIS), which is very useful for detecting and characterizing transient phenomena. Like many image processing routines, OIS computations increase proportionally with the number of pixels being processed, and the number of pixels in need of processing is increasing rapidly. Utilizing many-core graphics processing unit (GPU) technology in conjunction with multi-core CPU and computer clustering technologies, this work presents a new astronomy image processing pipeline architecture. The chosen OIS implementation focuses on the 2nd order spatially-varying kernel with the Dirac delta function basis, a powerful image differencing method that has seen limited deployment in part because of the heavy computational burden. This tool can process standard image calibration and OIS differencing in a fashion that is scalable with the increasing data volume. It employs several parallel processing technologies in a hierarchical fashion in order to best utilize each of their strengths. The Linux/Unix-based application can operate on a single computer, or on an MPI-configured cluster, with or without GPU hardware. With GPU hardware available, even low-cost commercial video cards, the OIS convolution and subtraction times for large images can be accelerated by up to three orders of magnitude.
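
    The heart of such differencing is a convolve-and-subtract step: the reference image is convolved with a matching kernel so that its point-spread function matches the science frame, and the result is subtracted. The sketch below shows only that step, with a fixed Gaussian kernel standing in for the fitted, spatially varying OIS kernel whose determination is the expensive part that the pipeline accelerates.

```python
import numpy as np
from scipy.signal import fftconvolve

def difference_image(science, reference, kernel):
    """Convolve the reference with a matching kernel and subtract it from the science frame."""
    matched = fftconvolve(reference, kernel, mode="same")
    return science - matched

def gaussian_kernel(size=11, sigma=1.5):
    """Hypothetical stand-in for the fitted OIS kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

science = np.random.poisson(100, (512, 512)).astype(float)
reference = np.random.poisson(100, (512, 512)).astype(float)
diff = difference_image(science, reference, gaussian_kernel())
```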

  7. Blue straggler formation at core collapse

    NASA Astrophysics Data System (ADS)

    Banerjee, Sambaran

    Among the most striking features of blue straggler stars (BSS) in globular clusters is the presence of multiple sequences of BSSs in the colour-magnitude diagrams (CMDs) of several globular clusters. It is often envisaged that such a multiple BSS sequence would arise due to a recent core collapse of the host cluster, triggering a number of stellar collisions and binary mass transfers simultaneously over a brief episode of time. Here we examine this scenario using direct N-body computations of moderately-massive star clusters (of order 10^4 M⊙). As a preliminary attempt, these models are initiated with an ≈8-10 Gyr old stellar population and King profiles of high concentration, being "tuned" to undergo core collapse quickly. BSSs are indeed found to form in a "burst" at the onset of the core collapse, and several such BSS bursts occur during the post-core-collapse phase. In those models that include a few percent primordial binaries, both collisional and binary BSSs form after the onset of the (near) core-collapse. However, there is as such no clear discrimination between the two types of BSSs in the corresponding computed CMDs. We note that this may be due to the smaller number of BSSs formed in these less massive models than in actual globular clusters.

  8. Structure-Property Relationships for Tailoring Phenoxazines as Reducing Photoredox Catalysts.

    PubMed

    McCarthy, Blaine G; Pearson, Ryan M; Lim, Chern-Hooi; Sartor, Steven M; Damrauer, Niels H; Miyake, Garret M

    2018-04-18

    Through the study of structure-property relationships using a combination of experimental and computational analyses, a number of phenoxazine derivatives have been developed as visible light absorbing, organic photoredox catalysts (PCs) with excited state reduction potentials rivaling those of highly reducing transition metal PCs. Time-dependent density functional theory (TD-DFT) computational modeling of the photoexcitation of N-aryl and core modified phenoxazines guided the design of PCs with absorption profiles in the visible regime. In accordance with our previous work with N, N-diaryl dihydrophenazines, characterization of noncore modified N-aryl phenoxazines in the excited state demonstrated that the nature of the N-aryl substituent dictates the ability of the PC to access a charge transfer excited state. However, our current analysis of core modified phenoxazines revealed that these molecules can access a different type of CT excited state which we posit involves a core substituent as the electron acceptor. Modification of the core of phenoxazine derivatives with electron-donating and electron-withdrawing substituents was used to alter triplet energies, excited state reduction potentials, and oxidation potentials of the phenoxazine derivatives. The catalytic activity of these molecules was explored using organocatalyzed atom transfer radical polymerization (O-ATRP) for the synthesis of poly(methyl methacrylate) (PMMA) using white light irradiation. All of the derivatives were determined to be suitable PCs for O-ATRP as indicated by a linear growth of polymer molecular weight as a function of monomer conversion and the ability to synthesize PMMA with moderate to low dispersity (dispersity less than or equal to 1.5) and initiator efficiencies typically greater than 70% at high conversions. However, only PCs that exhibit strong absorption of visible light and strong triplet excited state reduction potentials maintain control over the polymerization during the entire course of the reaction. The structure-property relationships established here will enable the application of these organic PCs for O-ATRP and other photoredox-catalyzed small molecule and polymer syntheses.

  9. Toward Connecting Core-Collapse Supernova Theory with Observations: Nucleosynthetic Yields and Distribution of Elements in a 15 M⊙ Blue Supergiant Progenitor with SN 1987A Energetics

    NASA Astrophysics Data System (ADS)

    Plewa, Tomasz; Handy, Timothy; Odrzywolek, Andrzej

    2014-03-01

    We compute and discuss the process of nucleosynthesis in a series of core-collapse explosion models of a 15 solar mass, blue supergiant progenitor. We obtain nucleosynthetic yields and study the evolution of the chemical element distribution from the moment of core bounce until young supernova remnant phase. Our models show how the process of energy deposition due to radioactive decay modifies the dynamics and the core ejecta structure on small and intermediate scales. The results are compared against observations of young supernova remnants including Cas A and the recent data obtained for SN 1987A. The work has been supported by the NSF grant AST-1109113 and DOE grant DE-FG52-09NA29548. This research used resources of the National Energy Research Scientific Computing Center, which is supported by the U.S. DoE under Contract No. DE-AC02-05CH11231.

  10. Evaluating the networking characteristics of the Cray XC-40 Intel Knights Landing-based Cori supercomputer at NERSC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doerfler, Douglas; Austin, Brian; Cook, Brandon

    There are many potential issues associated with deploying the Intel Xeon Phi™ (code named Knights Landing [KNL]) manycore processor in a large-scale supercomputer. One in particular is the ability to fully utilize the high-speed communications network, given that the serial performance of a Xeon Phi core is a fraction of a Xeon® core. In this paper, we take a look at the trade-offs associated with allocating enough cores to fully utilize the Aries high-speed network versus cores dedicated to computation, e.g., the trade-off between MPI and OpenMP. In addition, we evaluate new features of Cray MPI in support of KNL, such as internode optimizations. We also evaluate one-sided programming models such as Unified Parallel C. We quantify the impact of the above trade-offs and features using a suite of National Energy Research Scientific Computing Center applications.

  11. A diurnal resonance in the ocean tide and in the earth's load response due to the resonant free 'core nutation'

    NASA Technical Reports Server (NTRS)

    Wahr, J. M.; Sasao, T.

    1981-01-01

    The effects of the oceans, which are subject to a resonance due to a free rotational eigenmode of an elliptical, rotating earth with a fluid outer core having an eigenfrequency of (1 + 1/460) cycle/day, on the body tide and nutational response of the earth to the diurnal luni-solar tidal force are computed. The response of an elastic, rotating, elliptical, oceanless earth with a fluid outer core to a given load distribution on its surface is first considered, and the tidal sea level height for equilibrium and nonequilibrium oceans is examined. Computations of the effects of equilibrium and nonequilibrium oceans on the nutational and deformational responses of the earth are then presented which show small but significant perturbations to the retrograde 18.6-year and prograde six-month nutations, and more important effects on the earth body tide, which is also resonant at the free core nutation eigenfrequency.

  12. Production Level CFD Code Acceleration for Hybrid Many-Core Architectures

    NASA Technical Reports Server (NTRS)

    Duffy, Austen C.; Hammond, Dana P.; Nielsen, Eric J.

    2012-01-01

    In this work, a novel graphics processing unit (GPU) distributed sharing model for hybrid many-core architectures is introduced and employed in the acceleration of a production-level computational fluid dynamics (CFD) code. The latest generation graphics hardware allows multiple processor cores to simultaneously share a single GPU through concurrent kernel execution. This feature has allowed the NASA FUN3D code to be accelerated in parallel with up to four processor cores sharing a single GPU. For codes to scale and fully use resources on these and the next generation machines, codes will need to employ some type of GPU sharing model, as presented in this work. Findings include the effects of GPU sharing on overall performance. A discussion of the inherent challenges that parallel unstructured CFD codes face in accelerator-based computing environments is included, with considerations for future generation architectures. This work was completed by the author in August 2010, and reflects the analysis and results of the time.

  13. Core Collapse: The Race Between Stellar Evolution and Binary Heating

    NASA Astrophysics Data System (ADS)

    Converse, Joseph M.; Chandar, R.

    2012-01-01

    The dynamical formation of binary stars can dramatically affect the evolution of their host star clusters. In relatively small clusters (M < 6000 Msun) the most massive stars rapidly form binaries, heating the cluster and preventing any significant contraction of the core. The situation in much larger globular clusters (M ≈ 10^5 Msun) is quite different, with many showing collapsed cores, implying that binary formation did not affect them as severely as lower mass clusters. More massive clusters, however, should take longer to form their binaries, allowing stellar evolution more time to prevent the heating by causing the larger stars to die off. Here, we simulate the evolution of clusters with masses between those of open and globular clusters in order to find at what size a star cluster is able to experience true core collapse. Our simulations make use of a new GPU-based computing cluster recently purchased at the University of Toledo. We also present some benchmarks of this new computational resource.

  14. An Interactive Version of MULR04 With Enhanced Graphic Capability

    ERIC Educational Resources Information Center

    Burkholder, Joel H.

    1978-01-01

    An existing computer program for computing multiple regression analyses is made interactive in order to alleviate core storage requirements. Also, some improvements in the graphics aspects of the program are included. (JKS)

  15. Silicon microdisk-based full adders for optical computing.

    PubMed

    Ying, Zhoufeng; Wang, Zheng; Zhao, Zheng; Dhar, Shounak; Pan, David Z; Soref, Richard; Chen, Ray T

    2018-03-01

    Due to the projected saturation of Moore's law, as well as the rapidly growing demand for bandwidth at lower power consumption, silicon photonics has emerged as one of the most promising alternatives and has attracted lasting interest due to the accessibility and maturity of ultra-compact passive and active integrated photonic components. In this Letter, we demonstrate a ripple-carry electro-optic 2-bit full adder using microdisks, which replaces the core part of an electrical full adder with optical counterparts and uses light to carry signals from one bit to the next with high bandwidth and low power consumption per bit. All control signals of the operands are applied simultaneously within each clock cycle. Thus, the severe latency issue that accumulates as the size of the full adder increases can be circumvented, allowing for an improvement in computing speed and a reduction in power consumption. This approach paves the way for future high-speed optical computing systems in the post-Moore's law era.
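
    The logic being mapped onto the microdisks is ordinary ripple-carry addition, in which each bit computes sum = a XOR b XOR c_in and passes c_out = majority(a, b, c_in) to the next stage. A plain software sketch of that 2-bit ripple-carry logic is shown below purely to illustrate the arithmetic; it says nothing about the photonic implementation.

```python
def full_adder(a, b, cin):
    """One-bit full adder: returns (sum, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)   # majority of the three inputs
    return s, cout

def ripple_carry_add(a_bits, b_bits):
    """Add two little-endian bit lists; the carry ripples from one bit to the next."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# 2-bit example: 3 + 2 = 5 -> little-endian bits [1, 0, 1]
assert ripple_carry_add([1, 1], [0, 1]) == [1, 0, 1]
```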

  16. Alfvén eigenmode evolution computed with the VENUS and KINX codes for the ITER baseline scenario

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isaev, M. Yu., E-mail: isaev-my@nrcki.ru; Medvedev, S. Yu.; Cooper, W. A.

    A new application of the VENUS code is described, which computes alpha particle orbits in the perturbed electromagnetic fields and their resonant interaction with the toroidal Alfvén eigenmodes (TAEs) for the ITER device. The ITER baseline scenario with Q = 10 and the plasma toroidal current of 15 MA is considered as the most important and relevant for the International Tokamak Physics Activity group on energetic particles (ITPA-EP). For this scenario, typical unstable TAE-modes with the toroidal index n = 20 have been predicted that are localized in the plasma core near the surface with safety factor q = 1. The spatial structure of ballooning and antiballooning modes has been computed with the ideal MHD code KINX. The linear growth rates and the saturation levels taking into account the damping effects and the different mode frequencies have been calculated with the VENUS code for both ballooning and antiballooning TAE-modes.

  17. EEG-Based Computer Aided Diagnosis of Autism Spectrum Disorder Using Wavelet, Entropy, and ANN

    PubMed Central

    AlSharabi, Khalil; Ibrahim, Sutrisno; Alsuwailem, Abdullah

    2017-01-01

    Autism spectrum disorder (ASD) is a type of neurodevelopmental disorder with core impairments in social relationships, communication, imagination, or flexibility of thought and a restricted repertoire of activity and interest. In this work, a new computer-aided diagnosis (CAD) of autism based on electroencephalography (EEG) signal analysis is investigated. The proposed method is based on the discrete wavelet transform (DWT), entropy (En), and an artificial neural network (ANN). DWT is used to decompose EEG signals into approximation and detail coefficients to obtain EEG subbands. The feature vector is constructed by computing Shannon entropy values from each EEG subband. The ANN classifies the corresponding EEG signal as normal or autistic based on the extracted features. The experimental results show the effectiveness of the proposed method for assisting autism diagnosis. A receiver operating characteristic (ROC) curve metric is used to quantify the performance of the proposed method. The proposed method obtained promising results when tested on a real dataset provided by King Abdulaziz Hospital, Jeddah, Saudi Arabia. PMID:28484720
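
    A hedged sketch of that processing chain, using PyWavelets for the DWT, a Shannon-entropy feature per subband, and scikit-learn's multilayer perceptron as the ANN, might look like the following. The wavelet family, decomposition level, network size, and the random data are illustrative choices, not those of the paper.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def subband_entropies(eeg_channel, wavelet="db4", level=4):
    """DWT decomposition followed by a Shannon entropy value per subband."""
    coeffs = pywt.wavedec(eeg_channel, wavelet, level=level)
    feats = []
    for c in coeffs:                                   # approximation + detail subbands
        p = np.abs(c) / (np.sum(np.abs(c)) + 1e-12)    # normalise coefficients to a distribution
        feats.append(-np.sum(p * np.log2(p + 1e-12)))  # Shannon entropy
    return feats

# Hypothetical data: rows are EEG epochs, labels 0 = typical, 1 = ASD.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, 1024))
labels = rng.integers(0, 2, 40)

X = np.array([subband_entropies(e) for e in epochs])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, labels)
```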

  18. QSAR Methods.

    PubMed

    Gini, Giuseppina

    2016-01-01

    In this chapter, we introduce the basics of computational chemistry and discuss how computational methods have been extended to some biological properties and toxicology in particular. For about 20 years, chemical experimentation has increasingly been replaced by modeling and virtual experimentation, using a large core of mathematics, chemistry, physics, and algorithms. Then we see how animal experiments, aimed at providing a standardized result about a biological property, can be mimicked by new in silico methods. Our emphasis here is on toxicology and on predicting properties through chemical structures. Two main streams of such models are available: models that consider the whole molecular structure to predict a value, namely QSAR (Quantitative Structure-Activity Relationships), and models that find relevant substructures to predict a class, namely SAR. The term in silico discovery is applied to chemical design, to computational toxicology, and to drug discovery. We discuss how the experimental practice in biological science is moving more and more toward modeling and simulation. Such virtual experiments confirm hypotheses, provide data for regulation, and help in designing new chemicals.

  19. Approaches and Tools Used to Teach the Computer Input/Output Subsystem: A Survey

    ERIC Educational Resources Information Center

    Larraza-Mendiluze, Edurne; Garay-Vitoria, Nestor

    2015-01-01

    This paper surveys how the computer input/output (I/O) subsystem is taught in introductory undergraduate courses. It is important to study the educational process of the computer I/O subsystem because, in the curricula recommendations, it is considered a core topic in the area of knowledge of computer architecture and organization (CAO). It is…

  20. 20 CFR 666.140 - Which individuals receiving services are included in the core indicators of performance?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... included in the core indicators of performance? 666.140 Section 666.140 Employees' Benefits EMPLOYMENT AND... the core indicators of performance? (a)(1) The core indicators of performance apply to all individuals... informational activities. (WIA sec. 136(b)(2)(A).) (2) Self-service and informational activities are those core...

  1. W17_geowave “3D full waveform geophysical models”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larmat, Carene; Maceira, Monica; Roy, Corinna

    2018-02-12

    Performance of the MCMC inversion according to the number of cores used for the computation. A) 64 cores. B) 480 cores. C) 816 cores. The true model is represented by the black line. Vsv is the wave speed of S waves polarized in the vertical plane, and ξ is an anisotropy parameter. The Earth is highly anisotropic; the wave speed of seismic waves depends on the polarization of the wave. Seismic inversion of the elastic structure is usually limited to isotropic information such as Vsv. Our research looked at the inversion of Earth anisotropy.

  2. Ordering of guarded and unguarded stores for no-sync I/O

    DOEpatents

    Gara, Alan; Ohmacht, Martin

    2013-06-25

    A parallel computing system processes at least one store instruction. A first processor core issues a store instruction. A first queue, associated with the first processor core, stores the store instruction. A second queue, associated with a first local cache memory device of the first processor core, stores the store instruction. The first processor core updates first data in the first local cache memory device according to the store instruction. A third queue, associated with at least one shared cache memory device, stores the store instruction. The first processor core invalidates second data, associated with the store instruction, in the at least one shared cache memory. The first processor core invalidates third data, associated with the store instruction, in other local cache memory devices of other processor cores. The first processor core flushes only the first queue.

  3. Real-time polarization-sensitive optical coherence tomography data processing with parallel computing

    PubMed Central

    Liu, Gangjun; Zhang, Jun; Yu, Lingfeng; Xie, Tuqiang; Chen, Zhongping

    2010-01-01

    With the increase of the A-line speed of optical coherence tomography (OCT) systems, real-time processing of acquired data has become a bottleneck. The shared-memory parallel computing technique is used to process OCT data in real time. The real-time processing power of a quad-core personal computer (PC) is analyzed. It is shown that the quad-core PC could provide real-time OCT data processing ability of more than 80K A-lines per second. A real-time, fiber-based, swept source polarization-sensitive OCT system with 20K A-line speed is demonstrated with this technique. The real-time 2D and 3D polarization-sensitive imaging of chicken muscle and pig tendon is also demonstrated. PMID:19904337
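
    The shared-memory idea is essentially to split incoming A-lines into chunks and hand each chunk to a worker running on its own core. A rough Python analogue of that pattern, with a window-FFT-log-magnitude step standing in for the real OCT processing chain, is sketched below; it is not the authors' implementation.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk):
    """Toy per-A-line processing: window, FFT, log magnitude (stand-in for the OCT pipeline)."""
    windowed = chunk * np.hanning(chunk.shape[1])
    return 20 * np.log10(np.abs(np.fft.rfft(windowed, axis=1)) + 1e-9)

if __name__ == "__main__":
    a_lines = np.random.rand(20000, 2048)      # hypothetical raw spectra
    chunks = np.array_split(a_lines, 8)        # one chunk per worker/core
    with ProcessPoolExecutor(max_workers=8) as pool:
        image = np.vstack(list(pool.map(process_chunk, chunks)))
```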

  4. A solid reactor core thermal model for nuclear thermal rockets

    NASA Astrophysics Data System (ADS)

    Rider, William J.; Cappiello, Michael W.; Liles, Dennis R.

    1991-01-01

    A Helium/Hydrogen Cooled Reactor Analysis (HERA) computer code has been developed. HERA has the ability to model arbitrary geometries in three dimensions, which allows the user to easily analyze reactor cores constructed of prismatic graphite elements. The code accounts for heat generation in the fuel, control rods, and other structures; conduction and radiation across gaps; convection to the coolant; and a variety of boundary conditions. The numerical solution scheme has been optimized for vector computers, making long transient analyses economical. Time integration is either explicit or implicit, which allows the use of the model to accurately calculate both short- or long-term transients with an efficient use of computer time. Both the basic spatial and temporal integration schemes have been benchmarked against analytical solutions.

  5. Neural system prediction and identification challenge.

    PubMed

    Vlachos, Ioannis; Zaytsev, Yury V; Spreizer, Sebastian; Aertsen, Ad; Kumar, Arvind

    2013-01-01

    Can we infer the function of a biological neural network (BNN) if we know the connectivity and activity of all its constituent neurons? This question is at the core of neuroscience and, accordingly, various methods have been developed to record the activity and connectivity of as many neurons as possible. Surprisingly, there is no theoretical or computational demonstration that neuronal activity and connectivity are indeed sufficient to infer the function of a BNN. Therefore, we pose the Neural Systems Identification and Prediction Challenge (nuSPIC). We provide the connectivity and activity of all neurons and invite participants (1) to infer the functions implemented (hard-wired) in spiking neural networks (SNNs) by stimulating and recording the activity of neurons and, (2) to implement predefined mathematical/biological functions using SNNs. The nuSPICs can be accessed via a web-interface to the NEST simulator and the user is not required to know any specific programming language. Furthermore, the nuSPICs can be used as a teaching tool. Finally, nuSPICs use the crowd-sourcing model to address scientific issues. With this computational approach we aim to identify which functions can be inferred by systematic recordings of neuronal activity and connectivity. In addition, nuSPICs will help the design and application of new experimental paradigms based on the structure of the SNN and the presumed function which is to be discovered.

  6. Neural system prediction and identification challenge

    PubMed Central

    Vlachos, Ioannis; Zaytsev, Yury V.; Spreizer, Sebastian; Aertsen, Ad; Kumar, Arvind

    2013-01-01

    Can we infer the function of a biological neural network (BNN) if we know the connectivity and activity of all its constituent neurons? This question is at the core of neuroscience and, accordingly, various methods have been developed to record the activity and connectivity of as many neurons as possible. Surprisingly, there is no theoretical or computational demonstration that neuronal activity and connectivity are indeed sufficient to infer the function of a BNN. Therefore, we pose the Neural Systems Identification and Prediction Challenge (nuSPIC). We provide the connectivity and activity of all neurons and invite participants (1) to infer the functions implemented (hard-wired) in spiking neural networks (SNNs) by stimulating and recording the activity of neurons and, (2) to implement predefined mathematical/biological functions using SNNs. The nuSPICs can be accessed via a web-interface to the NEST simulator and the user is not required to know any specific programming language. Furthermore, the nuSPICs can be used as a teaching tool. Finally, nuSPICs use the crowd-sourcing model to address scientific issues. With this computational approach we aim to identify which functions can be inferred by systematic recordings of neuronal activity and connectivity. In addition, nuSPICs will help the design and application of new experimental paradigms based on the structure of the SNN and the presumed function which is to be discovered. PMID:24399966

  7. Multi-core and GPU accelerated simulation of a radial star target imaged with equivalent t-number circular and Gaussian pupils

    NASA Astrophysics Data System (ADS)

    Greynolds, Alan W.

    2013-09-01

    Results from the GelOE optical engineering software are presented for the through-focus, monochromatic coherent and polychromatic incoherent imaging of a radial "star" target for equivalent t-number circular and Gaussian pupils. The FFT-based simulations are carried out using OpenMP threading on a multi-core desktop computer, with and without the aid of a many-core NVIDIA GPU accessing its cuFFT library. It is found that a custom FFT optimized for the 12-core host has similar performance to a simply implemented 256-core GPU FFT. A more sophisticated version of the latter but tuned to reduce overhead on a 448-core GPU is 20 to 28 times faster than a basic FFT implementation running on one CPU core.

  8. The LHCb software and computing upgrade for Run 3: opportunities and challenges

    NASA Astrophysics Data System (ADS)

    Bozzi, C.; Roiser, S.; LHCb Collaboration

    2017-10-01

    The LHCb detector will be upgraded for the LHC Run 3 and will be read out at 30 MHz, corresponding to the full inelastic collision rate, with major implications for the full software trigger and offline computing. If the current computing model and software framework are kept, the data storage capacity and computing power required to process data at this rate, and to generate and reconstruct equivalent samples of simulated events, will exceed the current capacity by at least one order of magnitude. A redesign of the software framework, including scheduling, the event model, the detector description and the conditions database, is needed to fully exploit the computing power of multi-, many-core architectures, and coprocessors. Data processing and the analysis model will also change towards an early streaming of different data types, in order to limit storage resources, with further implications for the data analysis workflows. Fast simulation options will make it possible to obtain a reasonable parameterization of the detector response in considerably less computing time. Finally, the upgrade of LHCb will be a good opportunity to review and implement changes in the domains of software design, test and review, and analysis workflow and preservation. In this contribution, activities and recent results in all the above areas are presented.

  9. ParallelStructure: A R Package to Distribute Parallel Runs of the Population Genetics Program STRUCTURE on Multi-Core Computers

    PubMed Central

    Besnier, Francois; Glover, Kevin A.

    2013-01-01

    This software package provides an R-based framework to make use of multi-core computers when running analyses in the population genetics program STRUCTURE. It is aimed especially at those users of STRUCTURE dealing with numerous and repeated data analyses, who could take advantage of an efficient script to automatically distribute STRUCTURE jobs among multiple processors. It also provides additional functions to divide analyses among combinations of populations within a single data set without the need to manually produce multiple projects, as is currently the case in STRUCTURE. The package consists of two main functions, MPI_structure() and parallel_structure(), as well as an example data file. We compared the performance in computing time for this example data on two computer architectures and showed that the use of the present functions can result in several-fold improvements in terms of computation time. ParallelStructure is freely available at https://r-forge.r-project.org/projects/parallstructure/. PMID:23923012

  10. Benchmarking NWP Kernels on Multi- and Many-core Processors

    NASA Astrophysics Data System (ADS)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc., (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.

  11. Accelerating Climate Simulations Through Hybrid Computing

    NASA Technical Reports Server (NTRS)

    Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark

    2009-01-01

    Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPU) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for connection, we identified two challenges: (1) an identical MPI implementation is required in both systems, and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors, two IBM QS22 Cell blades, connected with Infiniband), allowing for seamlessly offloading compute-intensive functions to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.

  12. The science of visual analysis at extreme scale

    NASA Astrophysics Data System (ADS)

    Nowell, Lucy T.

    2011-01-01

    Driven by market forces and spanning the full spectrum of computational devices, computer architectures are changing in ways that present tremendous opportunities and challenges for data analysis and visual analytic technologies. Leadership-class high-performance computing systems will have as many as a million cores by 2020 and support 10 billion-way concurrency, while laptop computers are expected to have as many as 1,000 cores by 2015. At the same time, data of all types are increasing exponentially and automated analytic methods are essential for all disciplines. Many existing analytic technologies do not scale to make full use of current platforms and fewer still are likely to scale to the systems that will be operational by the end of this decade. Furthermore, on the new architectures and for data at extreme scales, validating the accuracy and effectiveness of analytic methods, including visual analysis, will be increasingly important.

  13. Data Acquisition System for Multi-Frequency Radar Flight Operations Preparation

    NASA Technical Reports Server (NTRS)

    Leachman, Jonathan

    2010-01-01

    A three-channel data acquisition system was developed for the NASA Multi-Frequency Radar (MFR) system. The system is based on a commercial-off-the-shelf (COTS) industrial PC (personal computer) and two dual-channel 14-bit digital receiver cards. The decimated complex envelope representations of the three radar signals are passed to the host PC via the PCI bus, and then processed in parallel by multiple cores of the PC CPU (central processing unit). The innovation is this parallelization of the radar data processing using multiple cores of a standard COTS multi-core CPU. The data processing portion of the data acquisition software was built using autonomous program modules or threads, which can run simultaneously on different cores. A master program module calculates the optimal number of processing threads, launches them, and continually supplies each with data. The benefit of this new parallel software architecture is that COTS PCs can be used to implement increasingly complex processing algorithms on an increasing number of radar range gates and data rates. As new PCs become available with higher numbers of CPU cores, the software will automatically utilize the additional computational capacity.

  14. A Two-Step Approach to Uncertainty Quantification of Core Simulators

    DOE PAGES

    Yankov, Artem; Collins, Benjamin; Klein, Markus; ...

    2012-01-01

    For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.

  15. Core-6 fucose and the oligomerization of the 1918 pandemic influenza viral neuraminidase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Zhengliang L., E-mail: Leon.wu@bio-techne.com; Zhou, Hui; Ethen, Cheryl M.

    The 1918 H1N1 influenza virus was responsible for one of the most deadly pandemics in human history. Yet to date, the structural component responsible for its virulence is still a mystery. In order to search for such a component, the neuraminidase (NA) antigen of the virus was expressed, which led to the discovery of an active form (tetramer) and an inactive form (dimer and monomer) of the protein due to different glycosylation. In this report, the N-glycans from both forms were released and characterized by mass spectrometry. It was found that the glycans from the active form were 26% core-6 fucosylated, while the glycans from the inactive form were 82% core-6 fucosylated. Even more surprisingly, the stalk region of the active form was almost completely devoid of core-6-linked fucose. These findings were further supported by the results obtained from in vitro incorporation of azido fucose and ³H-labeled fucose using the core-6 fucosyltransferase FUT8. In addition, the incorporation of fucose did not change the enzymatic activity of the active form, implying that core-6 fucose is not directly involved in the enzymatic activity. It is postulated that core-6 fucose prohibits the oligomerization and subsequent activation of the enzyme. - Graphical abstract: Proposed mechanism for how core fucose prohibits the tetramerization of the 1918 pandemic viral neuraminidase. Only the cross section of the stalk region with two N-linked glycans is depicted for clarity. (A) Carbohydrate–carbohydrate interaction on the non-fucosylated monomer allows tetramerization. (B) Core fucosylation disrupts the interaction and prevents tetramerization. - Highlights: • The expressed 1918 pandemic influenza viral neuraminidase has inactive and active forms. • The inactive form contains a high level of core-6 fucose, while the active form lacks such modification. • Core fucose could interfere with the oligomerization of the neuraminidase and thus its activation. • This discovery may explain why the 1918 pandemic influenza caused a higher death rate among the young population.

  16. Cloud Computing as a Core Discipline in a Technology Entrepreneurship Program

    ERIC Educational Resources Information Center

    Lawler, James; Joseph, Anthony

    2012-01-01

    Education in entrepreneurship continues to be a developing area of curricula for computer science and information systems students. Entrepreneurship is frequently enabled by cloud computing methods that furnish benefits especially to small and medium-sized firms. Expanding upon an earlier foundation paper, the authors of this paper present an…

  17. The Role of Visualization in Computer Science Education

    ERIC Educational Resources Information Center

    Fouh, Eric; Akbar, Monika; Shaffer, Clifford A.

    2012-01-01

    Computer science core instruction attempts to provide a detailed understanding of dynamic processes such as the working of an algorithm or the flow of information between computing entities. Such dynamic processes are not well explained by static media such as text and images, and are difficult to convey in lecture. The authors survey the history…

  18. Are Academic Programs Adequate for the Software Profession?

    ERIC Educational Resources Information Center

    Koster, Alexis

    2010-01-01

    According to the Bureau of Labor Statistics, close to 1.8 million people, or 77% of all computer professionals, were working in the design, development, deployment, maintenance, and management of software in 2006. The ACM [Association for Computing Machinery] model curriculum for the BS in computer science proposes that about 42% of the core body…

  19. Automated Analysis of Composition and Style of Photographs and Paintings

    ERIC Educational Resources Information Center

    Yao, Lei

    2013-01-01

    Computational aesthetics is a newly emerging cross-disciplinary field with its core situated in traditional research areas such as image processing and computer vision. Using a computer to interpret aesthetic terms for images is very challenging. In this dissertation, I focus on solving specific problems about analyzing the composition and style…

  20. CT Scanning and Geophysical Measurements of the Marcellus Formation from the Tippens 6HS Well

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crandall, Dustin; Paronish, Thomas; Brown, Sarah

    The computed tomography (CT) facilities and the Multi-Sensor Core Logger (MSCL) at the National Energy Technology Laboratory (NETL) Morgantown, West Virginia site were used to characterize core of the Marcellus Shale from a vertical well drilled in Eastern Ohio. The core is from the Tippens 6HS Well in Monroe County, Ohio and is comprised primarily of the Marcellus Shale from depths of 5550 to 5663 ft.

  1. Optimisation of multiplet identifier processing on a PLAYSTATION® 3

    NASA Astrophysics Data System (ADS)

    Hattori, Masami; Mizuno, Takashi

    2010-02-01

    To enable high-performance computing (HPC) for applications with large datasets using a Sony® PLAYSTATION® 3 (PS3™) video game console, we configured a hybrid system consisting of a Windows® PC and a PS3™. To validate this system, we implemented the real-time multiplet identifier (RTMI) application, which identifies multiplets of microearthquakes in terms of the similarity of their waveforms. The cross-correlation computation, which is a core algorithm of the RTMI application, was optimised for the PS3™ platform, while the rest of the computation, including data input and output remained on the PC. With this configuration, the core part of the algorithm ran 69 times faster than the original program, accelerating total computation speed more than five times. As a result, the system processed up to 2100 total microseismic events, whereas the original implementation had a limit of 400 events. These results indicate that this system enables high-performance computing for large datasets using the PS3™, as long as data transfer time is negligible compared with computation time.
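
    The core computation is a normalized cross-correlation between pairs of event waveforms, with pairs whose peak correlation exceeds a threshold grouped into multiplets. A compact NumPy sketch of that kernel, the part offloaded to the PS3™ in the paper, is shown below; the naive all-pairs scan and the threshold value are illustrative simplifications.

```python
import numpy as np

def max_normalized_xcorr(w1, w2):
    """Peak of the normalized cross-correlation between two equal-length waveforms."""
    w1 = (w1 - w1.mean()) / (w1.std() * len(w1))
    w2 = (w2 - w2.mean()) / w2.std()
    return np.max(np.correlate(w1, w2, mode="full"))

def find_multiplet_pairs(waveforms, threshold=0.8):
    """Return index pairs whose waveforms correlate above the threshold (naive O(N^2) pairing)."""
    pairs = []
    for i in range(len(waveforms)):
        for j in range(i + 1, len(waveforms)):
            if max_normalized_xcorr(waveforms[i], waveforms[j]) >= threshold:
                pairs.append((i, j))
    return pairs
```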

  2. 18F-FDG uptake in the colon is modulated by metformin but not associated with core body temperature and energy expenditure

    PubMed Central

    Bahler, Lonneke; Holleman, Frits; Chan, Man-Wai; Booij, Jan; Hoekstra, Joost B.; Verberne, Hein J.

    2017-01-01

    Purpose Physiological colonic 18F-fluorodeoxyglucose (18F-FDG) uptake is a frequent finding on 18F-FDG positron emission tomography computed tomography (PET-CT). Interestingly, metformin, a glucose lowering drug associated with moderate weight loss, is also associated with an increased colonic 18F-FDG uptake. Consequently, increased colonic glucose use might partly explain the weight losing effect of metformin when this results in an increased energy expenditure and/or core body temperature. Therefore, we aimed to determine whether metformin modifies the metabolic activity of the colon by increasing glucose uptake. Methods In this open label, non-randomized, prospective mechanistic study, we included eight lean and eight overweight males. We measured colonic 18F-FDG uptake on PET-CT, energy expenditure and core body temperature before and after the use of metformin. The maximal colonic 18F-FDG uptake was measured in 5 separate segments (caecum, colon ascendens,—transversum,—descendens and sigmoid). Results The maximal colonic 18F-FDG uptake increased significantly in all separate segments after the use of metformin. There was no significant difference in energy expenditure or core body temperature after the use of metformin. There was no correlation between maximal colonic 18F-FDG uptake and energy expenditure or core body temperature. Conclusion Metformin significantly increases colonic 18F-FDG uptake, but this increased uptake is not associated with an increase in energy expenditure or core body temperature. Although the colon might be an important site of the glucose plasma lowering actions of metformin, this mechanism of action does not explain directly any associated weight loss. PMID:28464031

  3. An evaluation of MPI message rate on hybrid-core processors

    DOE PAGES

    Barrett, Brian W.; Brightwell, Ron; Grant, Ryan; ...

    2014-11-01

    Power and energy concerns are motivating chip manufacturers to consider future hybrid-core processor designs that may combine a small number of traditional cores optimized for single-thread performance with a large number of simpler cores optimized for throughput performance. This trend is likely to impact the way in which compute resources for network protocol processing functions are allocated and managed. In particular, the performance of MPI match processing is critical to achieving high message throughput. In this paper, we analyze the ability of simple and more complex cores to perform MPI matching operations for various scenarios in order to gain insight into how MPI implementations for future hybrid-core processors should be designed.
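
    For context, a minimal message-rate style microbenchmark is sketched below with mpi4py: two ranks exchange small messages in a tight loop, and the achieved messages per second reflects how well a core keeps up with MPI matching and protocol processing. This is a generic illustration under assumed names (the script file name, message count, and message size are arbitrary), not the instrument used in the paper.

```python
# Run with, e.g.:  mpiexec -n 2 python msgrate.py   (hypothetical file name; assumes mpi4py)
import time
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
n_msgs, buf = 100_000, np.zeros(8, dtype=np.uint8)   # tiny 8-byte messages

comm.Barrier()
t0 = time.perf_counter()
for _ in range(n_msgs):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=0)
    elif rank == 1:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=0)
comm.Barrier()
elapsed = time.perf_counter() - t0

if rank == 0:
    print(f"{2 * n_msgs / elapsed:,.0f} messages/s (ping-pong between two ranks)")
```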

  4. 40 CFR 35.6225 - Activities eligible for funding under Core Program Cooperative Agreements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Core Program Cooperative Agreements. 35.6225 Section 35.6225 Protection of Environment ENVIRONMENTAL... Superfund State Contracts for Superfund Response Actions Core Program Cooperative Agreements § 35.6225 Activities eligible for funding under Core Program Cooperative Agreements. (a) To be eligible for funding...

  5. 40 CFR 35.6225 - Activities eligible for funding under Core Program Cooperative Agreements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Core Program Cooperative Agreements. 35.6225 Section 35.6225 Protection of Environment ENVIRONMENTAL... Superfund State Contracts for Superfund Response Actions Core Program Cooperative Agreements § 35.6225 Activities eligible for funding under Core Program Cooperative Agreements. (a) To be eligible for funding...

  6. A fast CT reconstruction scheme for a general multi-core PC.

    PubMed

    Zeng, Kai; Bai, Erwei; Wang, Ge

    2007-01-01

    Expensive computational cost is a severe limitation in CT reconstruction for clinical applications that need real-time feedback. A primary example is bolus-chasing computed tomography (CT) angiography (BCA) that we have been developing for the past several years. To accelerate the reconstruction process using the filtered backprojection (FBP) method, specialized hardware or graphics cards can be used. However, specialized hardware is expensive and not flexible. The graphics processing unit (GPU) in a current graphics card can only reconstruct images at reduced precision and is not easy to program. In this paper, an acceleration scheme is proposed based on a multi-core PC. In the proposed scheme, several techniques are integrated, including utilization of geometric symmetry, optimization of data structures, single-instruction multiple-data (SIMD) processing, multithreaded computation, and an Intel C++ compiler. Our scheme maintains the original precision and involves no data exchange between the GPU and CPU. The merits of our scheme are demonstrated in numerical experiments against the traditional implementation. Our scheme achieves a speedup of about 40, which can be further improved severalfold using the latest quad-core processors.
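
    The multithreading ingredient can be pictured as splitting the projection angles across cores, backprojecting each subset independently, and summing the partial images. The sketch below shows only that angle-parallel decomposition, with an unfiltered, nearest-neighbour backprojection over random data; it omits the paper's filtering, geometric-symmetry, and SIMD optimizations entirely.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

N = 256                                   # reconstruction grid size
xs = np.arange(N) - N / 2 + 0.5
X, Y = np.meshgrid(xs, xs)

def backproject(args):
    """Backproject one subset of projection angles onto the image grid."""
    sinogram_part, angles_part = args
    img = np.zeros((N, N))
    for proj, theta in zip(sinogram_part, angles_part):
        t = X * np.cos(theta) + Y * np.sin(theta)                    # detector coordinate per pixel
        idx = np.clip(np.round(t + len(proj) / 2).astype(int), 0, len(proj) - 1)
        img += proj[idx]                                             # nearest-neighbour smear
    return img

if __name__ == "__main__":
    n_angles = 360
    angles = np.linspace(0, np.pi, n_angles, endpoint=False)
    sinogram = np.random.rand(n_angles, N)                           # hypothetical (pre-filtered) projections
    work = list(zip(np.array_split(sinogram, 4), np.array_split(angles, 4)))
    with ProcessPoolExecutor(max_workers=4) as pool:                 # one angle subset per core
        image = sum(pool.map(backproject, work))
```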

  7. PARALLELISATION OF THE MODEL-BASED ITERATIVE RECONSTRUCTION ALGORITHM DIRA.

    PubMed

    Örtenberg, A; Magnusson, M; Sandborg, M; Alm Carlsson, G; Malusek, A

    2016-06-01

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA with the aim to significantly shorten the code's execution time. Selected routines were parallelised using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained. © The Author 2015. Published by Oxford University Press. All rights reserved.

  8. Optimization of Sparse Matrix-Vector Multiplication on Emerging Multicore Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Samuel; Oliker, Leonid; Vuduc, Richard

    2008-10-16

    We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore-specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD quad-core, AMD dual-core, and Intel quad-core designs, the heterogeneous STI Cell, as well as one of the first scientific studies of the highly multithreaded Sun Victoria Falls (a Niagara2 SMP). We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural trade-offs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.
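
    For reference, the baseline kernel that such optimizations start from is only a few lines; a sketch in C with OpenMP threading over the rows of a CSR matrix (the array names follow the usual CSR convention and are assumptions here):

```c
/* Baseline CSR sparse matrix-vector multiply, y = A*x, threaded over rows.
 * Dynamic scheduling is one simple way to absorb uneven row lengths. */
#include <omp.h>

void spmv_csr(int nrows, const int *rowptr, const int *colidx,
              const double *vals, const double *x, double *y)
{
    #pragma omp parallel for schedule(dynamic, 64)
    for (int i = 0; i < nrows; ++i) {
        double sum = 0.0;
        for (int k = rowptr[i]; k < rowptr[i + 1]; ++k)
            sum += vals[k] * x[colidx[k]];
        y[i] = sum;
    }
}
```

    The optimizations studied in work of this kind (register and cache blocking, SIMDization, prefetching, architecture-specific tuning) all build on a kernel of this shape, whose performance is limited mainly by memory bandwidth and the irregular access to x.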

  9. A Fast CT Reconstruction Scheme for a General Multi-Core PC

    PubMed Central

    Zeng, Kai; Bai, Erwei; Wang, Ge

    2007-01-01

    Expensive computational cost is a severe limitation in CT reconstruction for clinical applications that need real-time feedback. A primary example is bolus-chasing computed tomography (CT) angiography (BCA) that we have been developing for the past several years. To accelerate the reconstruction process using the filtered backprojection (FBP) method, specialized hardware or graphics cards can be used. However, specialized hardware is expensive and not flexible. The graphics processing unit (GPU) in a current graphics card can only reconstruct images at reduced precision and is not easy to program. In this paper, an acceleration scheme is proposed based on a multi-core PC. In the proposed scheme, several techniques are integrated, including utilization of geometric symmetry, optimization of data structures, single-instruction multiple-data (SIMD) processing, multithreaded computation, and an Intel C++ compiler. Our scheme maintains the original precision and involves no data exchange between the GPU and CPU. The merits of our scheme are demonstrated in numerical experiments against the traditional implementation. Our scheme achieves a speedup of about 40, which can be further improved several-fold using the latest quad-core processors. PMID:18256731

  10. Development of an embedded atmospheric turbulence mitigation engine

    NASA Astrophysics Data System (ADS)

    Paolini, Aaron; Bonnett, James; Kozacik, Stephen; Kelmelis, Eric

    2017-05-01

    Methods to reconstruct pictures from imagery degraded by atmospheric turbulence have been under development for decades. The techniques were initially developed for observing astronomical phenomena from the Earth's surface, but have more recently been modified for ground and air surveillance scenarios. Such applications can impose significant constraints on deployment options because they both increase the computational complexity of the algorithms themselves and often dictate a requirement for low size, weight, and power (SWaP) form factors. Consequently, embedded implementations must be developed that can perform the necessary computations on low-SWaP platforms. Fortunately, there is an emerging class of embedded processors driven by the mobile and ubiquitous computing industries. We have leveraged these processors to develop embedded versions of the core atmospheric correction engine found in our ATCOM software. In this paper, we will present our experience adapting our algorithms for embedded systems on a chip (SoCs), namely the NVIDIA Tegra that couples general-purpose ARM cores with their graphics processing unit (GPU) technology and the Xilinx Zynq which pairs similar ARM cores with their field-programmable gate array (FPGA) fabric.

  11. High-Speed Computation of the Kleene Star in Max-Plus Algebraic System Using a Cell Broadband Engine

    NASA Astrophysics Data System (ADS)

    Goto, Hiroyuki

    This research addresses a high-speed computation method for the Kleene star of the weighted adjacency matrix in a max-plus algebraic system. We focus on systems whose precedence constraints are represented by a directed acyclic graph and implement it on a Cell Broadband Engine™ (CBE) processor. Since the resulting matrix gives the longest travel times between two adjacent nodes, it is often utilized in scheduling problem solvers for a class of discrete event systems. This research, in particular, attempts to achieve a speedup by using two approaches: parallelization and SIMDization (Single Instruction, Multiple Data), both of which can be accomplished by a CBE processor. The former refers to a parallel computation using multiple cores, while the latter is a method whereby multiple elements are computed by a single instruction. Using the implementation on a Sony PlayStation 3™ equipped with a CBE processor, we found that the SIMDization is effective regardless of the system's size and the number of processor cores used. We also found that the scalability of using multiple cores is remarkable especially for systems with a large number of nodes. In a numerical experiment where the number of nodes is 2000, we achieved a speedup of 20 times compared with the method without the above techniques.
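
    A plain single-core reference for the operation being accelerated, assuming the max-plus zero element epsilon is represented by negative infinity: the Kleene star of the weighted adjacency matrix can be computed with a Floyd-Warshall-style recurrence in which (max, +) replaces (min, +). The parallelization and SIMDization described above are omitted here.

```c
/* In-place max-plus Kleene star, a <- a*: after the loops, a[i][j] holds the
 * longest path weight from node i to node j (valid for acyclic precedence
 * graphs, where no positive cycles exist). NEG_INF stands for epsilon. */
#include <float.h>

#define NEG_INF (-DBL_MAX)

void kleene_star(int n, double a[n][n])
{
    for (int i = 0; i < n; ++i)
        if (a[i][i] < 0.0) a[i][i] = 0.0;          /* identity: a*_ii >= e = 0 */

    for (int k = 0; k < n; ++k)
        for (int i = 0; i < n; ++i) {
            if (a[i][k] == NEG_INF) continue;
            for (int j = 0; j < n; ++j) {
                if (a[k][j] == NEG_INF) continue;
                double via_k = a[i][k] + a[k][j];      /* max-plus "product" */
                if (via_k > a[i][j]) a[i][j] = via_k;  /* max-plus "sum"     */
            }
        }
}
```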

  12. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2013-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable a fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, clusters, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10 times performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803

  13. Time-efficient simulations of tight-binding electronic structures with Intel Xeon Phi™ many-core processors

    NASA Astrophysics Data System (ADS)

    Ryu, Hoon; Jeong, Yosang; Kang, Ji-Hoon; Cho, Kyu Nam

    2016-12-01

    Modelling of multi-million-atom semiconductor structures is important as it not only predicts the properties of physically realizable novel materials, but can also accelerate advanced device designs. This work describes a new Technology Computer-Aided Design (TCAD) tool for nanoelectronics modelling, which uses an sp3d5s∗ tight-binding approach to describe multi-million-atom structures and simulates their electronic structures with high performance computing (HPC), including atomic effects such as alloy and dopant disorders. Named the Quantum simulation tool for Advanced Nanoscale Devices (Q-AND), the tool shows good scalability on traditional multi-core HPC clusters, implying a strong capability for large-scale electronic structure simulations, with particularly remarkable performance enhancement on the latest clusters of Intel Xeon Phi™ coprocessors. A review of a recent modelling study conducted to understand an experimental work on highly phosphorus-doped silicon nanowires is presented to demonstrate the utility of Q-AND. Developed through an Intel Parallel Computing Center project, Q-AND will be opened to the public to establish a sound framework for nanoelectronics modelling with advanced many-core HPC clusters. With details of the development methodology and an exemplary study of dopant electronics, this work presents a practical guideline for TCAD development to researchers in the field of computational nanoelectronics.

  14. Prevalence of mutations in hepatitis C virus core protein associated with alteration of NF-kappaB activation.

    PubMed

    Mann, Elizabeth A; Stanford, Sandra; Sherman, Kenneth E

    2006-10-01

    The hepatitis C virus (HCV) core protein is a key structural element of the virion but also affects a number of cellular pathways, including nuclear factor kappaB (NF-kappaB) signaling. NF-kappaB is a transcription factor that regulates both anti-apoptotic and pro-inflammatory genes and its activation may contribute to HCV-mediated pathogenesis. Amino acid sequence divergence in core is seen at the genotype level as well as within patient isolates. Recent work has implicated amino acids 9-11 of core in the modulation of NF-kappaB activation. We report that the sequence RKT is highly conserved (93%) at this position across all HCV genotypes, based on sequences collected in the Los Alamos HCV database. Of the 13 types of variants present in the database, the two most prevalent substitutions are RQT and RKP. We further show that core encoding RKP fails to activate NF-kappaB signaling in vitro while NF-kappaB activation by core encoding RQT does not differ from control RKT core. The effect of RKP core is specific to NF-kappaB signaling as activator protein 1 (AP-1) activity is not altered. Further studies are needed to assess potential associations between specific amino acid substitutions at positions 9-11 and liver disease progression and/or response to treatment in individual patients.

  15. Determinants of performance of supplemental immunization activities for polio eradication in Uttar Pradesh, India: social mobilization activities of the Social mobilization Network (SM Net) and Core Group Polio Project (CGPP)

    PubMed Central

    2013-01-01

    Background The primary strategy to interrupt transmission of wild poliovirus in India is to improve supplemental immunization activities (SIAs) and routine immunization coverage in priority districts. The CORE Group, part of the Social Mobilization Network (SM Net), has been successful in improving SIA coverage in high-risk areas of Uttar Pradesh (UP). The SM Net works through community-level mobilisers (from the CORE Group and UNICEF) and covers more than 2 million children under the age of five. In this paper, we examine why the CORE Group has been successful by exploring which of its social mobilization activities predicted better SIA performance. Methods We carried out a secondary data analysis of routine monitoring information collected by the CORE Group and the Government of India for SIAs. These data included information about vaccination outcomes of SIAs in CORE Group areas and non-CORE Group areas within the districts where the CORE Group operates, along with information about the number of various social mobilization activities carried out for each SIA. We employed Generalized Linear Latent and Mixed Model (GLLAMM) statistical analysis methods to identify which social mobilization activities predicted SIA performance, and to account for the intra-class correlation (ICC) between multiple observations within the same geographic areas over time. Results The number of mosque announcements carried out was the most consistent determinant of improved SIA performance across various performance measures. The number of Bullawa Tollies carried out also appeared to be an important determinant of improved SIA performance. The number of times other social mobilization activities were carried out did not appear to determine better SIA performance. Conclusions Social mobilization activities can improve the performance of mass vaccination campaigns. In the CORE Group areas, the number of mosque announcements and Bullawa Tollies carried out were important determinants of desired SIA outcomes. The CORE Group and SM Net should conduct sufficient numbers of these activities in support of each SIA. It is likely, however, that the quality of social mobilization activities (not studied here) is as important as, or more important than, their quantity; quality measures of social mobilization activities should be investigated in the future to establish how they influence vaccination performance. PMID:23327427

  16. Seed-Surface Grafting Precipitation Polymerization for Preparing Microsized Optically Active Helical Polymer Core/Shell Particles and Their Application in Enantioselective Crystallization.

    PubMed

    Zhao, Biao; Lin, Jiangfeng; Deng, Jianping; Liu, Dong

    2018-05-14

    Core/shell particles constructed from a polymer shell and a silica core constitute a significant category of advanced functional materials. However, constructing microsized optically active helical polymer core/shell particles remains a major challenge owing to the lack of effective and general preparation methods. In this study, a seed-surface grafting precipitation polymerization (SSGPP) strategy is developed for preparing microsized core/shell particles with SiO2 as the core, on which a helically substituted polyacetylene is covalently bonded as the shell. The resulting core/shell particles exhibit fascinating optical activity and efficiently induce enantioselective crystallization of racemic threonine. Taking advantage of this preparation strategy, novel achiral polymeric and hybrid core/shell particles can also be expected. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Optimization of image processing algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND RAM and 256 MB SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application for various template matching tasks such as face recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core, which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images and performance results are presented measuring the speedup obtained due to the dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.
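
    The spatial-domain form of the benchmarked kernel is sketched below (unnormalized correlation for brevity; an illustrative sketch, not the OpenCV or DSP implementation). The DSP computes the same quantity via the DFT, which is where its architectural advantage lies.

```c
/* Direct template correlation: slide the template over the image and record
 * the sum of products at each offset; the peak of `score` marks the best
 * match. The output has (iw - tw + 1) x (ih - th + 1) entries. */
void correlate(const unsigned char *img, int iw, int ih,
               const unsigned char *tmpl, int tw, int th, float *score)
{
    for (int y = 0; y + th <= ih; ++y)
        for (int x = 0; x + tw <= iw; ++x) {
            double s = 0.0;
            for (int v = 0; v < th; ++v)
                for (int u = 0; u < tw; ++u)
                    s += (double)img[(y + v) * iw + (x + u)] * tmpl[v * tw + u];
            score[y * (iw - tw + 1) + x] = (float)s;
        }
}
```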

  18. Structure-Function Analysis of the Drosophila melanogaster Caudal Transcription Factor Provides Insights into Core Promoter-preferential Activation.

    PubMed

    Shir-Shapira, Hila; Sharabany, Julia; Filderman, Matan; Ideses, Diana; Ovadia-Shochat, Avital; Mannervik, Mattias; Juven-Gershon, Tamar

    2015-07-10

    Regulation of RNA polymerase II transcription is critical for the proper development, differentiation, and growth of an organism. The RNA polymerase II core promoter is the ultimate target of a multitude of transcription factors that control transcription initiation. Core promoters encompass the RNA start site and consist of functional elements such as the TATA box, initiator, and downstream core promoter element (DPE), which confer specific properties to the core promoter. We have previously discovered that Drosophila Caudal, which is a master regulator of genes involved in development and differentiation, is a DPE-specific transcriptional activator. Here, we show that the mouse Caudal-related homeobox (Cdx) proteins (mCdx1, mCdx2, and mCdx4) are also preferential core promoter transcriptional activators. To elucidate the mechanism that enables Caudal to preferentially activate DPE transcription, we performed structure-function analysis. Using a systematic series of deletion mutants (all containing the intact DNA-binding homeodomain) we discovered that the C-terminal region of Caudal contributes to the preferential activation of the fushi tarazu (ftz) Caudal target gene. Furthermore, the region containing both the homeodomain and the C terminus of Caudal was sufficient to confer core promoter-preferential activation to the heterologous GAL4 DNA-binding domain. Importantly, we discovered that Drosophila CREB-binding protein (dCBP) is a co-activator for Caudal-regulated activation of ftz. Strikingly, dCBP conferred the ability to preferentially activate the DPE-dependent ftz reporter to mini-Caudal proteins that were unable to preferentially activate ftz transcription themselves. Taken together, it is the unique combination of dCBP and Caudal that enables the co-activation of ftz in a core promoter-preferential manner. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.

  19. Genome-wide computational prediction and analysis of core promoter elements across plant monocots and dicots

    USDA-ARS?s Scientific Manuscript database

    Transcription initiation, essential to gene expression regulation, involves recruitment of basal transcription factors to the core promoter elements (CPEs). The distribution of currently known CPEs across plant genomes is largely unknown. This is the first large scale genome-wide report on the compu...

  20. Temperature Dependent Photoluminescence of CuInS2 with ZnS Capping

    DTIC Science & Technology

    2014-05-11

    cadmium or zinc, like cadmium selenide. The optical properties of core-type nanocrystals can be fine-tuned by changing the quantum dot size.

  1. NASA CORE (Central Operation of Resources for Educators) Educational Materials Catalog

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This educational materials catalog presents NASA CORE (Central Operation of Resources for Educators). The topics include: 1) Videocassettes (Aeronautics, Earth Resources, Weather, Space Exploration/Satellites, Life Sciences, Careers); 2) Slide Programs; 3) Computer Materials; 4) NASA Memorabilia/Miscellaneous; 5) NASA Educator Resource Centers; 6) and NASA Resources.

  2. Results of comparative RBMK neutron computation using VNIIEF codes (cell computation, 3D statics, 3D kinetics). Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grebennikov, A.N.; Zhitnik, A.K.; Zvenigorodskaya, O.A.

    1995-12-31

    In conformity with the protocol of the Workshop under Contract "Assessment of RBMK reactor safety using modern Western Codes", VNIIEF performed a series of neutronics computations to compare Western and VNIIEF codes and to assess whether the VNIIEF codes are suitable for safety assessment computations for RBMK-type reactors. The work was carried out in close collaboration with M.I. Rozhdestvensky and L.M. Podlazov, NIKIET employees. The effort involved: (1) cell computations with the WIMS and EKRAN codes (an improved modification of the LOMA code) and the S-90 code (VNIIEF Monte Carlo), covering cell, polycell and burnup computations; (2) 3D computation of static states with the KORAT-3D and NEU codes and comparison with results of computations with the NESTLE code (USA), performed in the geometry and with the neutron constants provided by the American party; (3) 3D computation of neutron kinetics with the KORAT-3D and NEU codes, performed in two formulations, both developed in collaboration with NIKIET. The first formulation agrees as closely as possible with one of the NESTLE problems and imitates gas bubble travel through a core. The second is a model of the RBMK as a whole with imitation of the movement of control and protection system (CPS) controls in a core.

  3. Flow Field and Acoustic Predictions for Three-Stream Jets

    NASA Technical Reports Server (NTRS)

    Simmons, Shaun Patrick; Henderson, Brenda S.; Khavaran, Abbas

    2014-01-01

    Computational fluid dynamics was used to analyze a three-stream nozzle parametric design space. The study varied bypass-to-core area ratio, tertiary-to-core area ratio and jet operating conditions. The flowfield solutions from the Reynolds-Averaged Navier-Stokes (RANS) code Overflow 2.2e were used to pre-screen experimental models for a future test in the Aero-Acoustic Propulsion Laboratory (AAPL) at the NASA Glenn Research Center (GRC). Flowfield solutions were considered in conjunction with the jet-noise-prediction code JeNo to screen the design concepts. A two-stream versus three-stream computation based on equal mass flow rates showed a reduction in peak turbulent kinetic energy (TKE) for the three-stream jet relative to that for the two-stream jet, which resulted in reduced acoustic emission. Additional three-stream solutions were analyzed for salient flowfield features expected to impact farfield noise. As tertiary power settings were increased, there was a corresponding near-nozzle increase in shear rate that resulted in an increase in high-frequency noise and a reduction in peak TKE. As the tertiary-to-core area ratio was increased, the tertiary potential core elongated and the peak TKE was reduced. The most noticeable change occurred as the secondary-to-core area ratio was increased, thickening the secondary potential core, elongating the primary potential core and reducing peak TKE. As the forward-flight Mach number was increased, the jet plume region decreased and peak TKE was reduced.

  4. Evaluating Multi-core Architectures through Accelerating the Three-Dimensional Lax–Wendroff Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, Yang; Fu, Haohuan; Song, Shuaiwen

    2014-07-18

    Wave propagation forward modeling is a widely used computational method in oil and gas exploration. The iterative stencil loops in such problems have broad applications in scientific computing. However, executing such loops can be highly time-consuming, which greatly limits the application's performance and power efficiency. In this paper, we accelerate the forward modeling technique on the latest multi-core and many-core architectures such as Intel Sandy Bridge CPUs, the NVIDIA Fermi C2070 GPU, the NVIDIA Kepler K20x GPU, and the Intel Xeon Phi co-processor. For the GPU platforms, we propose two parallel strategies to explore the performance optimization opportunities for our stencil kernels. For Sandy Bridge CPUs and MIC, we also employ various optimization techniques in order to achieve the best performance.

  5. The Enzyme Function Initiative†

    PubMed Central

    Gerlt, John A.; Allen, Karen N.; Almo, Steven C.; Armstrong, Richard N.; Babbitt, Patricia C.; Cronan, John E.; Dunaway-Mariano, Debra; Imker, Heidi J.; Jacobson, Matthew P.; Minor, Wladek; Poulter, C. Dale; Raushel, Frank M.; Sali, Andrej; Shoichet, Brian K.; Sweedler, Jonathan V.

    2011-01-01

    The Enzyme Function Initiative (EFI) was recently established to address the challenge of assigning reliable functions to enzymes discovered in bacterial genome projects; in this Current Topic we review the structure and operations of the EFI. The EFI includes the Superfamily/Genome, Protein, Structure, Computation, and Data/Dissemination Cores that provide the infrastructure for reliably predicting the in vitro functions of unknown enzymes. The initial targets for functional assignment are selected from five functionally diverse superfamilies (amidohydrolase, enolase, glutathione transferase, haloalkanoic acid dehalogenase, and isoprenoid synthase), with five superfamily-specific Bridging Projects experimentally testing the predicted in vitro enzymatic activities. The EFI also includes the Microbiology Core that evaluates the in vivo context of in vitro enzymatic functions and confirms the functional predictions of the EFI. The deliverables of the EFI to the scientific community include: 1) development of a large-scale, multidisciplinary sequence/structure-based strategy for functional assignment of unknown enzymes discovered in genome projects (target selection, protein production, structure determination, computation, experimental enzymology, microbiology, and structure-based annotation); 2) dissemination of the strategy to the community via publications, collaborations, workshops, and symposia; 3) computational and bioinformatic tools for using the strategy; 4) provision of experimental protocols and/or reagents for enzyme production and characterization; and 5) dissemination of data via the EFI’s website, enzymefunction.org. The realization of multidisciplinary strategies for functional assignment will begin to define the full metabolic diversity that exists in nature and will impact basic biochemical and evolutionary understanding, as well as a wide range of applications of central importance to industrial, medicinal and pharmaceutical efforts. PMID:21999478

  6. The Enzyme Function Initiative.

    PubMed

    Gerlt, John A; Allen, Karen N; Almo, Steven C; Armstrong, Richard N; Babbitt, Patricia C; Cronan, John E; Dunaway-Mariano, Debra; Imker, Heidi J; Jacobson, Matthew P; Minor, Wladek; Poulter, C Dale; Raushel, Frank M; Sali, Andrej; Shoichet, Brian K; Sweedler, Jonathan V

    2011-11-22

    The Enzyme Function Initiative (EFI) was recently established to address the challenge of assigning reliable functions to enzymes discovered in bacterial genome projects; in this Current Topic, we review the structure and operations of the EFI. The EFI includes the Superfamily/Genome, Protein, Structure, Computation, and Data/Dissemination Cores that provide the infrastructure for reliably predicting the in vitro functions of unknown enzymes. The initial targets for functional assignment are selected from five functionally diverse superfamilies (amidohydrolase, enolase, glutathione transferase, haloalkanoic acid dehalogenase, and isoprenoid synthase), with five superfamily specific Bridging Projects experimentally testing the predicted in vitro enzymatic activities. The EFI also includes the Microbiology Core that evaluates the in vivo context of in vitro enzymatic functions and confirms the functional predictions of the EFI. The deliverables of the EFI to the scientific community include (1) development of a large-scale, multidisciplinary sequence/structure-based strategy for functional assignment of unknown enzymes discovered in genome projects (target selection, protein production, structure determination, computation, experimental enzymology, microbiology, and structure-based annotation), (2) dissemination of the strategy to the community via publications, collaborations, workshops, and symposia, (3) computational and bioinformatic tools for using the strategy, (4) provision of experimental protocols and/or reagents for enzyme production and characterization, and (5) dissemination of data via the EFI's Website, http://enzymefunction.org. The realization of multidisciplinary strategies for functional assignment will begin to define the full metabolic diversity that exists in nature and will impact basic biochemical and evolutionary understanding, as well as a wide range of applications of central importance to industrial, medicinal, and pharmaceutical efforts. © 2011 American Chemical Society

  7. Global Futures: a multithreaded execution model for Global Arrays-based applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Krishnamoorthy, Sriram; Vishnu, Abhinav

    2012-05-31

    We present Global Futures (GF), an execution model extension to Global Arrays, which is based on a PGAS-compatible Active Message-based paradigm. We describe the design and implementation of Global Futures and illustrate its use in a computational chemistry application benchmark (Hartree-Fock matrix construction using the Self-Consistent Field method). Our results show how we used GF to increase the scalability of the Hartree-Fock matrix build to up to 6,144 cores of an Infiniband cluster. We also show how GF's multithreaded execution has comparable performance to the traditional process-based SPMD model.

  8. Multi Modal Anticipation in Fuzzy Space

    NASA Astrophysics Data System (ADS)

    Asproth, Viveca; Holmberg, Stig C.; Håkansson, Anita

    2006-06-01

    We are all stakeholders in the geographical space, which makes up our common living and activity space. This means that careful, creative, and anticipatory planning, design, and management of that space will be of paramount importance for our sustained life on earth. Here it is shown that the quality of such planning could be significantly increased with the help of a computer-based modelling and simulation tool. Further, the design and implementation of such a tool ought to be guided by the conceptual integration of some core concepts like anticipation and retardation, multi-modal system modelling, fuzzy space modelling, and multi-actor interaction.

  9. Discovery of a Series of Indazole TRPA1 Antagonists

    PubMed Central

    2017-01-01

    A series of TRPA1 antagonists is described which has as its core structure an indazole moiety. The physical properties and in vitro DMPK profiles are discussed. Good in vivo exposure was obtained with several analogs, allowing efficacy to be assessed in rodent models of inflammatory pain. Two compounds showed significant activity in these models when administered either systemically or topically. Protein chimeras were constructed to indicate compounds from the series bound in the S5 region of the channel, and a computational docking model was used to propose a binding mode for example compounds. PMID:28626530

  10. NASA Wrangler: Automated Cloud-Based Data Assembly in the RECOVER Wildfire Decision Support System

    NASA Technical Reports Server (NTRS)

    Schnase, John; Carroll, Mark; Gill, Roger; Wooten, Margaret; Weber, Keith; Blair, Kindra; May, Jeffrey; Toombs, William

    2017-01-01

    NASA Wrangler is a loosely coupled, event-driven, highly parallel data aggregation service designed to take advantage of the elastic resource capabilities of cloud computing. Wrangler automatically collects Earth observational data, climate model outputs, derived remote sensing data products, and historic biophysical data for pre-, active-, and post-wildfire decision making. It is a core service of the RECOVER decision support system, which is providing rapid-response GIS analytic capabilities to state and local government agencies. Wrangler reduces to minutes the time needed to assemble and deliver crucial wildfire-related data.

  11. Energy: Economic activity and energy demand; link to energy flow. Example: France

    NASA Astrophysics Data System (ADS)

    1980-10-01

    The data derived from the EXPLOR and EPOM (Energy Flow Optimization Model) models are described. The core of the EXPLOR model is a circular system of relations involving consumers' demand, producers' outputs, and market prices. The solution of this system of relations is obtained by successive iterations; the final output is a coherent system of economic accounts. The computer program for this transition is described. The work conducted in comparing different energy demand models is summarized. The procedure is illustrated by a numerical projection to 1980 and 1985 using the existing version of the EXPLOR France model.

  12. Tidal disruption of fuzzy dark matter subhalo cores

    NASA Astrophysics Data System (ADS)

    Du, Xiaolong; Schwabe, Bodo; Niemeyer, Jens C.; Bürger, David

    2018-03-01

    We study tidal stripping of fuzzy dark matter (FDM) subhalo cores using simulations of the Schrödinger-Poisson equations and analyze the dynamics of tidal disruption, highlighting the differences with standard cold dark matter. Mass loss outside of the tidal radius forces the core to relax into a less compact configuration, lowering the tidal radius. As the characteristic radius of a solitonic core scales inversely with its mass, tidal stripping results in a runaway effect and rapid tidal disruption of the core once its central density drops below 4.5 times the average density of the host within the orbital radius. Additionally, we find that the core is deformed into a tidally locked ellipsoid with increasing eccentricities until it is completely disrupted. Using the core mass loss rate, we compute the minimum mass of cores that can survive several orbits for different FDM particle masses and compare it with observed masses of satellite galaxies in the Milky Way.
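
    In schematic form (a restatement of the scalings quoted above, not an additional result): because the solitonic core radius scales inversely with the core mass, stripping lowers the central density very steeply, which in turn shrinks the tidal radius and accelerates further stripping.

```latex
% Core scalings and the disruption criterion quoted in the abstract.
\[
  r_c \propto M_c^{-1}
  \quad\Longrightarrow\quad
  \rho_c \sim \frac{M_c}{r_c^{3}} \propto M_c^{4},
\]
\[
  \text{tidal disruption once}\quad
  \rho_c \,<\, 4.5\,\bar{\rho}_{\mathrm{host}}(<d),
\]
```

    where the barred quantity is the mean density of the host within the orbital radius d, as quoted in the abstract.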

  13. DIODE STEERED MAGNETIC-CORE MEMORY

    DOEpatents

    Melmed, A.S.; Shevlin, R.T.; Laupheimer, R.

    1962-09-18

    A word-arranged magnetic-core memory is designed for use in a digital computer, utilizing the reverse or back-current property of semiconductor diodes to restore the information in the memory after read-out. In order to obtain a read-out signal from a magnetic-core storage unit, it is necessary to change the states of some of the magnetic cores. To retain the information in the memory after read-out, it is then necessary to provide a means to return the switched cores to their states before read-out. A rewrite driver passes a pulse back through each row of cores in which some switching has taken place. This pulse combines with the reverse-current pulses of the diodes for each column in which a core was switched during read-out, causing those cores to be switched back into their states prior to read-out. (AEC)

  14. Grids, virtualization, and clouds at Fermilab

    DOE PAGES

    Timm, S.; Chadwick, K.; Garzoglio, G.; ...

    2014-06-11

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). Lastly, this work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  15. Grids, virtualization, and clouds at Fermilab

    NASA Astrophysics Data System (ADS)

    Timm, S.; Chadwick, K.; Garzoglio, G.; Noh, S.

    2014-06-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). This work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  16. Information Assurance and Forensic Readiness

    NASA Astrophysics Data System (ADS)

    Pangalos, Georgios; Katos, Vasilios

    Egalitarianism and justice are amongst the core attributes of a democratic regime and should also be secured in an e-democratic setting. As such, the rise of computer-related offenses poses a threat to the fundamental aspects of e-democracy and e-governance. Digital forensics is a key component for protecting and enabling the underlying (e-)democratic values, and therefore forensic readiness should be considered in an e-democratic setting. This position paper commences from the observation that the density of compliance and potential litigation activities is monotonically increasing in modern organizations, as rules, legislative regulations and policies are being constantly added to the corporate environment. Forensic practices seem to be departing from the niche of law enforcement and are becoming a business function and infrastructural component, posing new challenges to security professionals. Having no a priori knowledge of whether a security-related event or corporate policy violation will lead to litigation, we advocate that computer forensics needs to be applied to all investigatory, monitoring and auditing activities. This would result in an inflation of the responsibilities of the Information Security Officer. After exploring some commonalities and differences between IS audit and computer forensics, we present a list of strategic challenges the organization and, in effect, the IS security and audit practitioner will face.

  17. Investigation on the Core Bypass Flow in a Very High Temperature Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hassan, Yassin

    2013-10-22

    Uncertainties associated with the core bypass flow are some of the key issues that directly influence the coolant mass flow distribution and magnitude, and thus the operational core temperature profiles, in the very high-temperature reactor (VHTR). Designers will attempt to configure the core geometry so the core cooling flow rate magnitude and distribution conform to the design values. The objective of this project is to study the bypass flow both experimentally and computationally. Researchers will develop experimental data using state-of-the-art particle image velocimetry in a small test facility. The team will attempt to obtain full-field temperature distributions using racks of thermocouples. The experimental data are intended to benchmark computational fluid dynamics (CFD) codes by providing detailed information. These experimental data are urgently needed for validation of the CFD codes. The following are the project tasks: • Construct a small-scale bench-top experiment to resemble the bypass flow between the graphite blocks, varying parameters to address their impact on bypass flow. Wall roughness of the graphite block walls, spacing between the blocks, and temperature of the blocks are some of the parameters to be tested. • Perform CFD to evaluate pre- and post-test calculations and turbulence models, including sensitivity studies to achieve high accuracy. • Develop the state-of-the-art large eddy simulation (LES) using appropriate subgrid modeling. • Develop models to be used in systems thermal hydraulics codes to account for and estimate the bypass flows. These computer programs include, among others, RELAP3D, MELCOR, GAMMA, and GAS-NET. Actual core bypass flow rate may vary considerably from the design value. Although the uncertainty of the bypass flow rate is not known, some sources have stated that the bypass flow rates in the Fort St. Vrain reactor were between 8 and 25 percent of the total reactor mass flow rate. If bypass flow rates are on the high side, the quantity of cooling flow through the core may be considerably less than the nominal design value, causing some regions of the core to operate at temperatures in excess of the design values. These effects are postulated to lead to localized hot regions in the core that must be considered when evaluating the VHTR operational and accident scenarios.

  18. Parallelized seeded region growing using CUDA.

    PubMed

    Park, Seongjin; Lee, Jeongjin; Lee, Hyunna; Shin, Juneseuk; Seo, Jinwook; Lee, Kyoung Ho; Shin, Yeong-Gil; Kim, Bohyoung

    2014-01-01

    This paper presents a novel method for parallelizing the seeded region growing (SRG) algorithm using Compute Unified Device Architecture (CUDA) technology, with the intention of overcoming the theoretical weakness of the SRG algorithm, namely that its computation time is directly proportional to the size of the segmented region. The segmentation performance of the proposed CUDA-based SRG is compared with SRG implementations on single-core CPUs, quad-core CPUs, and shader language programming, using synthetic datasets and 20 body CT scans. Based on the experimental results, the CUDA-based SRG outperforms the other three implementations, suggesting that it can substantially assist segmentation during massive CT screening tests.
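
    A serial reference sketch of the frontier expansion that a CUDA version parallelizes; the 4-neighbour connectivity, the intensity-difference acceptance test, and all names are illustrative assumptions, and `label` must be zero-initialized by the caller. The work grows with the region size, which is exactly the scaling weakness discussed above.

```c
/* Seeded region growing on a 2D image: a pixel joins the region if its
 * intensity is within `tol` of the seed value; accepted pixels are pushed
 * onto a FIFO queue and their neighbours examined in turn. */
#include <stdlib.h>
#include <math.h>

void region_grow(const float *img, unsigned char *label, int w, int h,
                 int seed_x, int seed_y, float tol)
{
    int *queue = malloc((size_t)w * h * sizeof *queue);
    int head = 0, tail = 0;
    float seed_val = img[seed_y * w + seed_x];

    label[seed_y * w + seed_x] = 1;
    queue[tail++] = seed_y * w + seed_x;

    const int dx[4] = { 1, -1, 0, 0 }, dy[4] = { 0, 0, 1, -1 };
    while (head < tail) {
        int p = queue[head++], px = p % w, py = p / w;
        for (int k = 0; k < 4; ++k) {
            int nx = px + dx[k], ny = py + dy[k];
            if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
            int q = ny * w + nx;
            if (!label[q] && fabsf(img[q] - seed_val) <= tol) {
                label[q] = 1;        /* accept and extend the frontier */
                queue[tail++] = q;
            }
        }
    }
    free(queue);
}
```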

  19. Performance analysis of distributed symmetric sparse matrix vector multiplication algorithm for multi-core architectures

    DOE PAGES

    Oryspayev, Dossay; Aktulga, Hasan Metin; Sosonkina, Masha; ...

    2015-07-14

    Sparse matrix vector multiply (SpMVM) is an important kernel that frequently arises in high performance computing applications. Due to its low arithmetic intensity, several approaches have been proposed in the literature to improve its scalability and efficiency in large-scale computations. In this paper, our target systems are high-end multi-core architectures and we use a Message Passing Interface (MPI) + Open Multiprocessing (OpenMP) hybrid programming model for parallelism. We analyze the performance of a recently proposed implementation of distributed symmetric SpMVM, originally developed for large sparse symmetric matrices arising in ab initio nuclear structure calculations. We also study important features of this implementation and compare it with previously reported implementations that do not exploit the underlying symmetry. Our SpMVM implementations leverage the hybrid paradigm to efficiently overlap expensive communications with computations. Our main comparison criterion is the "CPU core hours" metric, which is the main measure of resource usage on supercomputers. We analyze the effects of a topology-aware mapping heuristic using a simplified network load model. Furthermore, we have tested the different SpMVM implementations on two large clusters with 3D Torus and Dragonfly topology. Our results show that the distributed SpMVM implementation that exploits matrix symmetry and hides communication yields the best value for the "CPU core hours" metric and significantly reduces data movement overheads.
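
    The symmetry exploitation at the heart of the compared implementation can be illustrated with a small shared-memory sketch: each stored lower-triangular entry contributes to two result entries, roughly halving the matrix data that must be streamed from memory. The sketch is deliberately serial; under OpenMP the scattered update to y[j] needs privatization or coloring, and the MPI communication overlap is omitted entirely.

```c
/* Symmetric SpMV from the stored lower triangle (CSR): entry a_ij (j <= i)
 * updates both y_i and y_j, so only about half of A is read from memory. */
void spmv_symmetric_lower(int nrows, const int *rowptr, const int *colidx,
                          const double *vals, const double *x, double *y)
{
    for (int i = 0; i < nrows; ++i) y[i] = 0.0;

    for (int i = 0; i < nrows; ++i)
        for (int k = rowptr[i]; k < rowptr[i + 1]; ++k) {
            int j = colidx[k];
            double a = vals[k];
            y[i] += a * x[j];
            if (j != i)              /* mirror the off-diagonal entry */
                y[j] += a * x[i];
        }
}
```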

  20. Computer-assisted self interviewing in sexual health clinics.

    PubMed

    Fairley, Christopher K; Sze, Jun Kit; Vodstrcil, Lenka A; Chen, Marcus Y

    2010-11-01

    This review describes the published information on what constitutes the elements of a core sexual history and the use of computer-assisted self interviewing (CASI) within sexually transmitted disease clinics. We searched OVID Medline from 1990 to February 2010 using the terms "computer assisted interviewing" and "sex," and to identify published articles on a core sexual history, we used the term "core sexual history." Since 1990, 3 published articles used a combination of expert consensus, formal clinician surveys, and the Delphi technique to decide on what questions form a core sexual health history. Sexual health histories from 4 countries mostly ask about the sex of the partners, the number of partners (although the time period varies), the types of sex (oral, anal, and vaginal) and condom use, pregnancy intent, and contraceptive methods. Five published studies in the United States, Australia, and the United Kingdom compared CASI with in person interviews in sexually transmitted disease clinics. In general, CASI identified higher risk behavior more commonly than clinician interviews, although there were substantial differences between studies. CASI was found to be highly acceptable and individuals felt it allowed more honest reporting. Currently, there are insufficient data to determine whether CASI results in differences in sexually transmitted infection testing, diagnosis, or treatment or if CASI improves the quality of sexual health care or its efficiency. The potential public health advantages of the widespread use of CASI are discussed.

  1. Implementing Molecular Dynamics for Hybrid High Performance Computers - 1. Short Range Forces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, W Michael; Wang, Peng; Plimpton, Steven J

    The use of accelerators such as general-purpose graphics processing units (GPGPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high performance computers, machines with more than one type of floating-point processor, are now becoming more prevalent due to these advantages. In this work, we discuss several important issues in porting a large molecular dynamics code for use on parallel hybrid machines - 1) choosing a hybrid parallel decomposition that works on central processing units (CPUs) with distributed memory and accelerator cores with shared memory, 2) minimizing the amount of code that must be ported for efficient acceleration, 3) utilizing the available processing power from both many-core CPUs and accelerators, and 4) choosing a programming model for acceleration. We present our solution to each of these issues for short-range force calculation in the molecular dynamics package LAMMPS. We describe algorithms for efficient short-range force calculation on hybrid high performance machines. We describe a new approach for dynamic load balancing of work between CPU and accelerator cores. We describe the Geryon library that allows a single code to compile with both CUDA and OpenCL for use on a variety of accelerators. Finally, we present results on a parallel test cluster containing 32 Fermi GPGPUs and 180 CPU cores.

  2. Best core stabilization exercise to facilitate subcortical neuroplasticity: A functional MRI neuroimaging study.

    PubMed

    Kim, Do Hyun; Lee, Jae Jin; You, Sung Joshua Hyun

    2018-03-23

    To investigate the effects of conscious (ADIM) and subconscious (DNS) core stabilization exercises on cortical changes in adults with core instability. Five non-symptomatic participants with core instability. A novel core stabilization task-switching paradigm was designed to separate cortical or subcortical neural substrates during a series of DNS or ADIM core stabilization tasks. fMRI blood-oxygen-level-dependent (BOLD) analysis revealed a distinctive subcortical activation pattern during performance of the DNS, whereas the cortical motor network was primarily activated during an ADIM. Peak voxel volume values were significantly greater for the DNS (11.08 ± 1.51) than for the ADIM (8.81 ± 0.21) (p = 0.043). The ADIM exercise activated the cortical PMC-SMC-SMA motor network, whereas the DNS exercise activated both these same cortical areas and the subcortical cerebellum-BG-thalamus-cingulate cortex network.

  3. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable a fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
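
    For reference, the basic form of Amdahl's law behind such scalability arguments (the paper uses its extension to symmetric multicore chips) bounds the speedup on n cores in terms of the parallel fraction f of the work:

```latex
\[
  S(n) \;=\; \frac{1}{(1 - f) + f/n},
  \qquad
  \lim_{n \to \infty} S(n) \;=\; \frac{1}{1 - f}.
\]
```

    A 12-fold improvement on a 12-core workstation is therefore consistent with a very small serial fraction for these block-volume workloads, and/or with gains beyond pure threading.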

  4. BNL program in support of LWR degraded-core accident analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ginsberg, T.; Greene, G.A.

    1982-01-01

    Two major sources of loading on dry water reactor containments are steam generation from core debris-water thermal interactions and molten core-concrete interactions. Experiments are in progress at BNL in support of analytical model development related to aspects of the above containment loading mechanisms. The work supports development and evaluation of the CORCON (Muir, 1981) and MARCH (Wooton, 1980) computer codes. Progress in the two programs is described in this paper. 8 figures.

  5. A Conserved Hydrophobic Core in Gαi1 Regulates G Protein Activation and Release from Activated Receptor.

    PubMed

    Kaya, Ali I; Lokits, Alyssa D; Gilbert, James A; Iverson, T M; Meiler, Jens; Hamm, Heidi E

    2016-09-09

    G protein-coupled receptor-mediated heterotrimeric G protein activation is a major mode of signal transduction in the cell. Previously, we and other groups reported that the α5 helix of Gαi1, especially the hydrophobic interactions in this region, plays a key role during nucleotide release and G protein activation. To further investigate the effect of this hydrophobic core, we disrupted it in Gαi1 by inserting 4 alanine amino acids into the α5 helix between residues Gln(333) and Phe(334) (Ins4A). This extends the length of the α5 helix without disturbing the β6-α5 loop interactions. This mutant has high basal nucleotide exchange activity yet no receptor-mediated activation of nucleotide exchange. By using structural approaches, we show that this mutant loses critical hydrophobic interactions, leading to significant rearrangements of side chain residues His(57), Phe(189), Phe(191), and Phe(336); it also disturbs the rotation of the α5 helix and the π-π interaction between His(57) and Phe(189) In addition, the insertion mutant abolishes G protein release from the activated receptor after nucleotide binding. Our biochemical and computational data indicate that the interactions between α5, α1, and β2-β3 are not only vital for GDP release during G protein activation, but they are also necessary for proper GTP binding (or GDP rebinding). Thus, our studies suggest that this hydrophobic interface is critical for accurate rearrangement of the α5 helix for G protein release from the receptor after GTP binding. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.

  6. Multi-level Hierarchical Poly Tree computer architectures

    NASA Technical Reports Server (NTRS)

    Padovan, Joe; Gute, Doug

    1990-01-01

    Based on the concept of hierarchical substructuring, this paper develops an optimal multi-level Hierarchical Poly Tree (HPT) parallel computer architecture scheme which is applicable to the solution of finite element and difference simulations. Emphasis is given to minimizing computational effort, in-core/out-of-core memory requirements, and the data transfer between processors. In addition, a simplified communications network that reduces the number of I/O channels between processors is presented. HPT configurations that yield optimal superlinearities are also demonstrated. Moreover, to generalize the scope of applicability, special attention is given to developing: (1) multi-level reduction trees which provide an orderly/optimal procedure by which model densification/simplification can be achieved, as well as (2) methodologies enabling processor grading that yields architectures with varying types of multi-level granularity.

  7. Network Coding on Heterogeneous Multi-Core Processors for Wireless Sensor Networks

    PubMed Central

    Kim, Deokho; Park, Karam; Ro, Won W.

    2011-01-01

    While network coding is well known for its efficiency and usefulness in wireless sensor networks, the excessive costs associated with decoding computation and complexity still hinder its adoption into practical use. On the other hand, high-performance microprocessors with heterogeneous multi-cores would be used as processing nodes of the wireless sensor networks in the near future. To this end, this paper introduces an efficient network coding algorithm developed for heterogeneous multi-core processors. The proposed idea is fully tested on one of the currently available heterogeneous multi-core processors referred to as the Cell Broadband Engine. PMID:22164053

  8. Final Report for ALCC Allocation: Predictive Simulation of Complex Flow in Wind Farms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barone, Matthew F.; Ananthan, Shreyas; Churchfield, Matt

    This report documents work performed using ALCC computing resources granted under a proposal submitted in February 2016, with the resource allocation period spanning July 2016 through June 2017. The award allocation was 10.7 million processor-hours at the National Energy Research Scientific Computing Center. The simulations performed were in support of two projects: the Atmosphere to Electrons (A2e) project, supported by the DOE EERE office; and the Exascale Computing Project (ECP), supported by the DOE Office of Science. The project team for both efforts consists of staff scientists and postdocs from Sandia National Laboratories and the National Renewable Energy Laboratory. At the heart of these projects is the open-source computational-fluid-dynamics (CFD) code, Nalu. Nalu solves the low-Mach-number Navier-Stokes equations using an unstructured-grid discretization. Nalu leverages the open-source Trilinos solver library and the Sierra Toolkit (STK) for parallelization and I/O. This report documents baseline computational performance of the Nalu code on problems of direct relevance to the wind plant physics application - namely, Large Eddy Simulation (LES) of an atmospheric boundary layer (ABL) flow and wall-modeled LES of a flow past a static wind turbine rotor blade. Parallel performance of Nalu and its constituent solver routines residing in the Trilinos library has been assessed previously under various campaigns. However, both Nalu and Trilinos have been, and remain, in active development and resources have not been available previously to rigorously track code performance over time. With the initiation of the ECP, it is important to establish and document baseline code performance on the problems of interest. This will allow the project team to identify and target any deficiencies in performance, as well as highlight any performance bottlenecks as we exercise the code on a greater variety of platforms and at larger scales. The current study is rather modest in scale, examining performance on problem sizes of O(100 million) elements and core counts up to 8k cores. This will be expanded as more computational resources become available to the projects.
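
    Baseline studies of this kind normally summarize strong-scaling behavior as speedup and parallel efficiency derived from wall-clock timings at a fixed problem size. The snippet below shows that standard bookkeeping; the core counts and timings are invented placeholders, not numbers from the report.

```python
# Hypothetical strong-scaling timings (seconds) for a fixed problem size;
# the core counts and times are illustrative only.
timings = {128: 812.0, 256: 423.0, 512: 231.0, 1024: 128.0, 2048: 74.0}

base_cores = min(timings)
base_time = timings[base_cores]
for cores in sorted(timings):
    speedup = base_time / timings[cores]          # relative to the smallest run
    ideal = cores / base_cores                    # perfect linear scaling
    efficiency = speedup / ideal
    print(f"{cores:5d} cores  speedup {speedup:5.2f}x  efficiency {efficiency:5.1%}")
```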

  9. A K-6 Computational Thinking Curriculum Framework: Implications for Teacher Knowledge

    ERIC Educational Resources Information Center

    Angeli, Charoula; Voogt, Joke; Fluck, Andrew; Webb, Mary; Cox, Margaret; Malyn-Smith, Joyce; Zagami, Jason

    2016-01-01

    Adding computer science as a separate school subject to the core K-6 curriculum is a complex issue with educational challenges. The authors herein address two of these challenges: (1) the design of the curriculum based on a generic computational thinking framework, and (2) the knowledge teachers need to teach the curriculum. The first issue is…

  10. ARTVAL user guide : user guide for the ARTerial eVALuation computational engine.

    DOT National Transportation Integrated Search

    2015-06-01

    This document provides guidance on the use of the ARTVAL (Arterial Evaluation) computational : engine. The engine implements the Quick Estimation Method for Urban Streets (QEM-US) : described in Highway Capacity Manual (HCM2010) as the core computati...

  11. Developmental regulation of collagenase-3 mRNA in normal, differentiating osteoblasts through the activator protein-1 and the runt domain binding sites

    NASA Technical Reports Server (NTRS)

    Winchester, S. K.; Selvamurugan, N.; D'Alonzo, R. C.; Partridge, N. C.

    2000-01-01

    Collagenase-3 mRNA is initially detectable when osteoblasts cease proliferation, increasing during differentiation and mineralization. We showed that this developmental expression is due to an increase in collagenase-3 gene transcription. Mutation of either the activator protein-1 or the runt domain binding site decreased collagenase-3 promoter activity, demonstrating that these sites are responsible for collagenase-3 gene transcription. The activator protein-1 and runt domain binding sites bind members of the activator protein-1 and core-binding factor family of transcription factors, respectively. We identified core-binding factor a1 binding to the runt domain binding site and JunD in addition to a Fos-related antigen binding to the activator protein-1 site. Overexpression of both c-Fos and c-Jun in osteoblasts or core-binding factor a1 increased collagenase-3 promoter activity. Furthermore, overexpression of c-Fos, c-Jun, and core-binding factor a1 synergistically increased collagenase-3 promoter activity. Mutation of either the activator protein-1 or the runt domain binding site resulted in the inability of c-Fos and c-Jun or core-binding factor a1 to increase collagenase-3 promoter activity, suggesting that there is cooperative interaction between the sites and the proteins. Overexpression of Fra-2 and JunD repressed core-binding factor a1-induced collagenase-3 promoter activity. Our results suggest that members of the activator protein-1 and core-binding factor families, binding to the activator protein-1 and runt domain binding sites are responsible for the developmental regulation of collagenase-3 gene expression in osteoblasts.

  12. High-Efficiency High-Resolution Global Model Developments at the NASA Goddard Data Assimilation Office

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann; Atlas, Robert (Technical Monitor)

    2002-01-01

    The Data Assimilation Office (DAO) has been developing a new generation of ultra-high resolution General Circulation Model (GCM) that is suitable for 4-D data assimilation, numerical weather predictions, and climate simulations. These three applications have conflicting requirements. For 4-D data assimilation and weather predictions, it is highly desirable to run the model at the highest possible spatial resolution (e.g., 55 km or finer) so as to be able to resolve and predict socially and economically important weather phenomena such as tropical cyclones, hurricanes, and severe winter storms. For climate change applications, the model simulations need to be carried out for decades, if not centuries. To reduce uncertainty in climate change assessments, the next generation model would also need to be run at a fine enough spatial resolution that can at least marginally simulate the effects of intense tropical cyclones. Scientific problems (e.g., parameterization of subgrid scale moist processes) aside, all three areas of application require the model's computational performance to be dramatically improved as compared to the previous generation. In this talk, I will present the current and future developments of the "finite-volume dynamical core" at the Data Assimilation Office. This dynamical core applies modern monotonicity preserving algorithms and is genuinely conservative by construction, not by an ad hoc fixer. The "discretization" of the conservation laws is purely local, which is clearly advantageous for resolving sharp gradient flow features. In addition, the local nature of the finite-volume discretization also has a significant advantage on distributed memory parallel computers. Together with a unique vertically Lagrangian control volume discretization that essentially reduces the dimension of the computational problem from three to two, the finite-volume dynamical core is very efficient, particularly at high resolutions. I will also present the computational design of the dynamical core using a hybrid distributed-shared memory programming paradigm that is portable to virtually any of today's high-end parallel super-computing clusters.
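
    The two numerical ingredients highlighted in the abstract, flux-form (conservative) finite volumes and monotonicity-preserving reconstruction, can be illustrated on 1D linear advection. The sketch below is a generic minmod-limited upwind scheme, not the FV dynamical core itself; the grid size and Courant number are arbitrary.

```python
import numpy as np

def minmod(a, b):
    """Monotonicity-preserving slope limiter."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect(q, c):
    """One flux-form update of q_t + a q_x = 0 on a periodic grid, Courant number c > 0."""
    dq_left = q - np.roll(q, 1)
    dq_right = np.roll(q, -1) - q
    slope = minmod(dq_left, dq_right)
    q_face = q + 0.5 * (1.0 - c) * slope     # reconstructed value at each cell's right face
    flux = c * q_face                        # upwind numerical flux through the right face
    return q - (flux - np.roll(flux, 1))     # conservative (flux-form) update

n = 200
x = (np.arange(n) + 0.5) / n
q = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)    # square wave
for _ in range(400):
    q = advect(q, c=0.5)
print(q.min(), q.max())                          # stays within the initial extrema (no over/undershoots)
```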

  13. High-Efficiency High-Resolution Global Model Developments at the NASA Goddard Data Assimilation Office

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann; Atlas, Robert (Technical Monitor)

    2002-01-01

    The Data Assimilation Office (DAO) has been developing a new generation of ultra-high resolution General Circulation Model (GCM) that is suitable for 4-D data assimilation, numerical weather predictions, and climate simulations. These three applications have conflicting requirements. For 4-D data assimilation and weather predictions, it is highly desirable to run the model at the highest possible spatial resolution (e.g., 55 km or finer) so as to be able to resolve and predict socially and economically important weather phenomena such as tropical cyclones, hurricanes, and severe winter storms. For climate change applications, the model simulations need to be carried out for decades, if not centuries. To reduce uncertainty in climate change assessments, the next generation model would also need to be run at a fine enough spatial resolution that can at least marginally simulate the effects of intense tropical cyclones. Scientific problems (e.g., parameterization of subgrid scale moist processes) aside, all three areas of application require the model's computational performance to be dramatically improved as compared to the previous generation. In this talk, I will present the current and future developments of the "finite-volume dynamical core" at the Data Assimilation Office. This dynamical core applies modern monotonicity preserving algorithms and is genuinely conservative by construction, not by an ad hoc fixer. The "discretization" of the conservation laws is purely local, which is clearly advantageous for resolving sharp gradient flow features. In addition, the local nature of the finite-volume discretization also has a significant advantage on distributed memory parallel computers. Together with a unique vertically Lagrangian control volume discretization that essentially reduces the dimension of the computational problem from three to two, the finite-volume dynamical core is very efficient, particularly at high resolutions. I will also present the computational design of the dynamical core using a hybrid distributed-shared memory programming paradigm that is portable to virtually any of today's high-end parallel super-computing clusters.

  14. Thiophene-Core Estrogen Receptor Ligands Having Superagonist Activity

    PubMed Central

    Min, Jian; Wang, Pengcheng; Srinivasan, Sathish; Nwachukwu, Jerome C.; Guo, Pu; Huang, Minjian; Carlson, Kathryn E.; Katzenellenbogen, John A.; Nettles, Kendall W.; Zhou, Hai-Bing

    2013-01-01

    To probe the importance of the heterocyclic core of estrogen receptor (ER) ligands, we prepared a series of thiophene-core ligands by Suzuki cross-coupling of aryl boronic acids with bromo-thiophenes, and we assessed their receptor binding and cell biological activities. The disposition of the phenol substituents on the thiophene core, at alternate or adjacent sites, and the nature of substituents on these phenols all contribute to binding affinity and subtype selectivity. Most of the bis(hydroxyphenyl)-thiophenes were ERβ selective, whereas the tris(hydroxyphenyl)-thiophenes were ERα selective; analogous furan-core compounds generally have lower affinity and less selectivity. Some diarylthiophenes show distinct superagonist activity in reporter gene assays, giving maximal activities 2–3 times that of estradiol, and modeling suggests that these ligands have a different interaction with a hydrogen-bonding residue in helix-11. Ligand-core modification may be a new strategy for developing ER ligands whose selectivity is based on having transcriptional activity greater than that of estradiol. PMID:23586645

  15. Enhancements to the Image Analysis Tool for Core Punch Experiments and Simulations (vs. 2014)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John Edward; Unal, Cetin

    A previous paper (Hogden & Unal, 2012, Image Analysis Tool for Core Punch Experiments and Simulations) described an image processing computer program developed at Los Alamos National Laboratory. This program has proven useful, so development has continued. In this paper we describe enhancements to the program as of 2014.

  16. Analysing Student Performance Using Sparse Data of Core Bachelor Courses

    ERIC Educational Resources Information Center

    Saarela, Mirka; Karkkainen, Tommi

    2015-01-01

    Curricula for Computer Science (CS) degrees are characterized by the strong occupational orientation of the discipline. In the BSc degree structure, with clearly separate CS core studies, the learning skills for these and other required courses may vary a lot, which is shown in students' overall performance. To analyze this situation, we apply…

  17. Towards Energy-Performance Trade-off Analysis of Parallel Applications

    ERIC Educational Resources Information Center

    Korthikanti, Vijay Anand Reddy

    2011-01-01

    Energy consumption by computer systems has emerged as an important concern, both at the level of individual devices (limited battery capacity in mobile systems) and at the societal level (the production of Green House Gases). In parallel architectures, applications may be executed on a variable number of cores and these cores may operate at…

  18. Application of Intel Many Integrated Core (MIC) accelerators to the Pleim-Xiu land surface scheme

    NASA Astrophysics Data System (ADS)

    Huang, Melin; Huang, Bormin; Huang, Allen H.

    2015-10-01

    The land-surface model (LSM) is one physics process in the weather research and forecast (WRF) model. The LSM includes atmospheric information from the surface layer scheme, radiative forcing from the radiation scheme, and precipitation forcing from the microphysics and convective schemes, together with internal information on the land's state variables and land-surface properties. The LSM provides heat and moisture fluxes over land points and sea-ice points. The Pleim-Xiu (PX) scheme is one such LSM. The PX LSM features three pathways for moisture fluxes: evapotranspiration, soil evaporation, and evaporation from wet canopies. To accelerate the computation process of this scheme, we employ the Intel Xeon Phi Many Integrated Core (MIC) architecture, a many-core design whose strengths are efficient parallelization and vectorization. Our results show that the MIC-based optimization of this scheme running on a Xeon Phi coprocessor 7120P improves performance by 2.3x and 11.7x compared to the original code running on one CPU socket (eight cores) and on one CPU core of an Intel Xeon E5-2670, respectively.

  19. GVIPS Models and Software

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M.; Gendy, Atef; Saleeb, Atef F.; Mark, John; Wilt, Thomas E.

    2007-01-01

    Two reports discuss, respectively, (1) the generalized viscoplasticity with potential structure (GVIPS) class of mathematical models and (2) the Constitutive Material Parameter Estimator (COMPARE) computer program. GVIPS models are constructed within a thermodynamics- and potential-based theoretical framework, wherein one uses internal state variables and derives constitutive equations for both the reversible (elastic) and the irreversible (viscoplastic) behaviors of materials. Because of the underlying potential structure, GVIPS models not only capture a variety of material behaviors but also are very computationally efficient. COMPARE comprises (1) an analysis core and (2) a C++-language subprogram that implements a Windows-based graphical user interface (GUI) for controlling the core. The GUI relieves the user of the sometimes tedious task of preparing data for the analysis core, freeing the user to concentrate on the task of fitting experimental data and ultimately obtaining a set of material parameters. The analysis core consists of three modules: one for GVIPS material models, an analysis module containing a specialized finite-element solution algorithm, and an optimization module. COMPARE solves the problem of finding GVIPS material parameters in the manner of a design-optimization problem in which the parameters are the design variables.
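
    COMPARE's core task, adjusting material parameters until a model reproduces experimental curves, is an instance of nonlinear least-squares fitting. The sketch below shows that generic pattern with SciPy on a made-up power-law creep model and synthetic data; it is not the GVIPS constitutive model or the COMPARE optimization module.

```python
import numpy as np
from scipy.optimize import least_squares

def creep_rate(stress, params):
    """Assumed Norton power-law creep model: rate = A * stress**n (illustrative only)."""
    a, n = params
    return a * stress**n

# Synthetic 'experimental' data generated from known parameters plus noise.
rng = np.random.default_rng(1)
stress = np.linspace(50.0, 200.0, 15)
true_params = (1.0e-8, 4.0)
observed = creep_rate(stress, true_params) * (1.0 + 0.05 * rng.standard_normal(stress.size))

def residuals(params):
    # Relative residuals keep the widely varying rates on a comparable scale.
    return (creep_rate(stress, params) - observed) / observed

fit = least_squares(residuals, x0=[1.0e-7, 3.0], bounds=([0.0, 1.0], [np.inf, 8.0]))
print("fitted A, n:", fit.x)
```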

  20. Dual Super-Systolic Core for Real-Time Reconstructive Algorithms of High-Resolution Radar/SAR Imaging Systems

    PubMed Central

    Atoche, Alejandro Castillo; Castillo, Javier Vázquez

    2012-01-01

    A high-speed dual super-systolic core for reconstructive signal processing (SP) operations consists of a double parallel systolic array (SA) machine in which each processing element of the array is also conceptualized as another SA in a bit-level fashion. In this study, we addressed the design of a high-speed dual super-systolic array (SSA) core for the enhancement/reconstruction of remote sensing (RS) imaging of radar/synthetic aperture radar (SAR) sensor systems. The selected reconstructive SP algorithms are efficiently transformed in their parallel representation and then, they are mapped into an efficient high performance embedded computing (HPEC) architecture in reconfigurable Xilinx field programmable gate array (FPGA) platforms. As an implementation test case, the proposed approach was aggregated in a HW/SW co-design scheme in order to solve the nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) from a remotely sensed scene. We show how such dual SSA core, drastically reduces the computational load of complex RS regularization techniques achieving the required real-time operational mode. PMID:22736964

  1. Negative core affect and employee silence: How differences in activation, cognitive rumination, and problem-solving demands matter.

    PubMed

    Madrid, Hector P; Patterson, Malcolm G; Leiva, Pedro I

    2015-11-01

    Employees can help to improve organizational performance by sharing ideas, suggestions, or concerns about practices, but sometimes they keep silent because of the experience of negative affect. Drawing and expanding on this stream of research, this article builds a theoretical rationale based on core affect and cognitive appraisal theories to describe how differences in affect activation and boundary conditions associated with cognitive rumination and cognitive problem-solving demands can explain employee silence. Results of a diary study conducted with professionals from diverse organizations indicated that within-person low-activated negative core affect increased employee silence when, as an invariant factor, cognitive rumination was high. Furthermore, within-person high-activated negative core affect decreased employee silence when, as an invariant factor, cognitive problem-solving demand was high. Thus, organizations should manage conditions to reduce experiences of low-activated negative core affect because these feelings increase silence in individuals high in rumination. In turn, effective management of experiences of high-activated negative core affect can reduce silence for individuals working under high problem-solving demand situations. (c) 2015 APA, all rights reserved).

  2. Core-Shell Structuring of Pure Metallic Aerogels towards Highly Efficient Platinum Utilization for the Oxygen Reduction Reaction.

    PubMed

    Cai, Bin; Hübner, René; Sasaki, Kotaro; Zhang, Yuanzhe; Su, Dong; Ziegler, Christoph; Vukmirovic, Miomir B; Rellinghaus, Bernd; Adzic, Radoslav R; Eychmüller, Alexander

    2018-03-05

    The development of core-shell structures remains a fundamental challenge for pure metallic aerogels. Here we report the synthesis of PdxAu-Pt core-shell aerogels composed of an ultrathin Pt shell and a composition-tunable PdxAu alloy core. The universality of this strategy ensures the extension of core compositions to Pd transition-metal alloys. The core-shell aerogels exhibited largely improved Pt utilization efficiencies for the oxygen reduction reaction and their activities show a volcano-type relationship as a function of the lattice parameter of the core substrate. The maximum mass and specific activities are 5.25 A mg_Pt^-1 and 2.53 mA cm^-2, which are 18.7 and 4.1 times higher than those of Pt/C, respectively, demonstrating the superiority of the core-shell metallic aerogels. The proposed core-based activity descriptor provides a new possible strategy for the design of future core-shell electrocatalysts. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Multi-GPU Jacobian accelerated computing for soft-field tomography.

    PubMed

    Borsic, A; Attardo, E A; Halter, R J

    2012-10-01

    Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use finite element models (FEMs) to represent the volume of interest and solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are 3D. Although the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in electrical impedance tomography (EIT) applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15-20 min with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Furthermore, providing high-speed reconstructions is essential for some promising clinical application of EIT. For 3D problems, 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In this work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with the use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidths compared to CPUs and better parallelism. We are able to obtain acceleration factors of 20 times on a single NVIDIA S1070 GPU, and of 50 times on four GPUs, bringing the Jacobian computing time for a fine 3D mesh from 12 min to 14 s. We regard this as an important step toward gaining interactive reconstruction times in 3D imaging, particularly when coupled in the future with acceleration of the forward problem. While we demonstrate results for EIT, these results apply to any soft-field imaging modality where the Jacobian matrix is computed with the adjoint method.
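
    For readers unfamiliar with the adjoint method mentioned at the end of the abstract, the toy example below shows the general idea on a small dense system: one adjoint solve per measurement yields a full row of the Jacobian, instead of one perturbed forward solve per parameter. The matrices and measurement operator are invented; this is not the EIT finite-element assembly or its GPU implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_par, n_meas = 30, 5, 4

# Model problem: A(p) u = b with A(p) = A0 + sum_j p_j * B_j, measurements m = C u.
A0 = np.eye(n) * 10.0 + rng.standard_normal((n, n)) * 0.1
B = rng.standard_normal((n_par, n, n)) * 0.1
b = rng.standard_normal(n)
C = rng.standard_normal((n_meas, n))
p = rng.standard_normal(n_par) * 0.1

def assemble(p):
    return A0 + np.einsum("j,jkl->kl", p, B)

A = assemble(p)
u = np.linalg.solve(A, b)                      # one forward solve
W = np.linalg.solve(A.T, C.T).T                # one adjoint solve per measurement row

# Adjoint formula: dm_i/dp_j = -w_i^T B_j u
J_adjoint = -np.einsum("ik,jkl,l->ij", W, B, u)

# Finite-difference check of the same Jacobian.
J_fd = np.empty((n_meas, n_par))
eps = 1.0e-6
for j in range(n_par):
    dp = p.copy(); dp[j] += eps
    J_fd[:, j] = (C @ np.linalg.solve(assemble(dp), b) - C @ u) / eps
print(np.allclose(J_adjoint, J_fd, atol=1.0e-5))   # True
```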

  4. Multi-GPU Jacobian Accelerated Computing for Soft Field Tomography

    PubMed Central

    Borsic, A.; Attardo, E. A.; Halter, R. J.

    2012-01-01

    Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use Finite Element Models to represent the volume of interest and to solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are three-dimensional. Though the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in Electrical Impedance Tomography applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15 to 20 minutes with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Further, providing high-speed reconstructions is essential for some promising clinical applications of EIT. For 3D problems, 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In the present work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with the use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidths compared to CPUs and better parallelism. We are able to obtain acceleration factors of 20 times on a single NVIDIA S1070 GPU, and of 50 times on 4 GPUs, bringing the Jacobian computing time for a fine 3D mesh from 12 minutes to 14 seconds. We regard this as an important step towards gaining interactive reconstruction times in 3D imaging, particularly when coupled in the future with acceleration of the forward problem. While we demonstrate results for Electrical Impedance Tomography, these results apply to any soft-field imaging modality where the Jacobian matrix is computed with the Adjoint Method. PMID:23010857

  5. Pushing configuration-interaction to the limit: Towards massively parallel MCSCF calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vogiatzis, Konstantinos D.; Ma, Dongxia; Olsen, Jeppe

    A new large-scale parallel multiconfigurational self-consistent field (MCSCF) implementation in the open-source NWChem computational chemistry code is presented. The generalized active space approach is used to partition large configuration interaction (CI) vectors and generate a sufficient number of batches that can be distributed to the available cores. Massively parallel CI calculations with large active spaces can be performed. The new parallel MCSCF implementation is tested for the chromium trimer and for an active space of 20 electrons in 20 orbitals, which can now routinely be performed. Unprecedented CI calculations with an active space of 22 electrons in 22 orbitals for the pentacene systems were performed and a single CI iteration calculation with an active space of 24 electrons in 24 orbitals for the chromium tetramer was possible. In conclusion, the chromium tetramer corresponds to a CI expansion of one trillion Slater determinants (914 058 513 424) and is the largest conventional CI calculation attempted up to date.

  6. Pushing configuration-interaction to the limit: Towards massively parallel MCSCF calculations

    DOE PAGES

    Vogiatzis, Konstantinos D.; Ma, Dongxia; Olsen, Jeppe; ...

    2017-11-14

    A new large-scale parallel multiconfigurational self-consistent field (MCSCF) implementation in the open-source NWChem computational chemistry code is presented. The generalized active space approach is used to partition large configuration interaction (CI) vectors and generate a sufficient number of batches that can be distributed to the available cores. Massively parallel CI calculations with large active spaces can be performed. The new parallel MCSCF implementation is tested for the chromium trimer and for an active space of 20 electrons in 20 orbitals, which can now routinely be performed. Unprecedented CI calculations with an active space of 22 electrons in 22 orbitals for the pentacene systems were performed and a single CI iteration calculation with an active space of 24 electrons in 24 orbitals for the chromium tetramer was possible. In conclusion, the chromium tetramer corresponds to a CI expansion of one trillion Slater determinants (914 058 513 424) and is the largest conventional CI calculation attempted up to date.

  7. Pushing configuration-interaction to the limit: Towards massively parallel MCSCF calculations

    NASA Astrophysics Data System (ADS)

    Vogiatzis, Konstantinos D.; Ma, Dongxia; Olsen, Jeppe; Gagliardi, Laura; de Jong, Wibe A.

    2017-11-01

    A new large-scale parallel multiconfigurational self-consistent field (MCSCF) implementation in the open-source NWChem computational chemistry code is presented. The generalized active space approach is used to partition large configuration interaction (CI) vectors and generate a sufficient number of batches that can be distributed to the available cores. Massively parallel CI calculations with large active spaces can be performed. The new parallel MCSCF implementation is tested for the chromium trimer and for an active space of 20 electrons in 20 orbitals, which can now routinely be performed. Unprecedented CI calculations with an active space of 22 electrons in 22 orbitals for the pentacene systems were performed and a single CI iteration calculation with an active space of 24 electrons in 24 orbitals for the chromium tetramer was possible. The chromium tetramer corresponds to a CI expansion of one trillion Slater determinants (914 058 513 424) and is the largest conventional CI calculation attempted up to date.
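
    The enabling idea reported in these records is the partitioning of a very large CI vector into batches that different cores can process independently. The sketch below mimics that pattern in miniature with Python multiprocessing, using an element-wise product with a diagonal as a stand-in for the per-batch sigma-vector work; the names and the toy operation are assumptions, not NWChem's generalized-active-space code.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def block_contraction(args):
    """Toy stand-in for per-batch sigma-vector work: sigma_block = H_diag_block * c_block."""
    h_diag_block, c_block = args
    return h_diag_block * c_block

def batched_sigma(h_diag, c, n_batches, workers=4):
    """Split the CI vector into batches and process them on separate cores."""
    blocks = list(zip(np.array_split(h_diag, n_batches), np.array_split(c, n_batches)))
    with ProcessPoolExecutor(max_workers=workers) as pool:
        pieces = list(pool.map(block_contraction, blocks))
    return np.concatenate(pieces)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_det = 1_000_000                       # stand-in for a (vastly larger) determinant space
    h_diag = rng.standard_normal(n_det)
    c = rng.standard_normal(n_det)
    sigma = batched_sigma(h_diag, c, n_batches=16)
    print(np.allclose(sigma, h_diag * c))   # True
```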

  8. Cost-effective description of strong correlation: Efficient implementations of the perfect quadruples and perfect hextuples models

    DOE PAGES

    Lehtola, Susi; Parkhill, John; Head-Gordon, Martin

    2016-10-07

    Novel implementations based on dense tensor storage are presented here for the singlet-reference perfect quadruples (PQ) [J. A. Parkhill et al., J. Chem. Phys. 130, 084101 (2009)] and perfect hextuples (PH) [J. A. Parkhill and M. Head-Gordon, J. Chem. Phys. 133, 024103 (2010)] models. The methods are obtained as block decompositions of conventional coupled-cluster theory that are exact for four electrons in four orbitals (PQ) and six electrons in six orbitals (PH), but that can also be applied to much larger systems. PQ and PH have storage requirements that scale as the square, and as the cube of the number of active electrons, respectively, and exhibit quartic scaling of the computational effort for large systems. Applications of the new implementations are presented for full-valence calculations on linear polyenes (CnHn+2), which highlight the excellent computational scaling of the present implementations that can routinely handle active spaces of hundreds of electrons. The accuracy of the models is studied in the π space of the polyenes, in hydrogen chains (H50), and in the π space of polyacene molecules. In all cases, the results compare favorably to density matrix renormalization group values. With the novel implementation of PQ, active spaces of 140 electrons in 140 orbitals can be solved in a matter of minutes on a single core workstation, and the relatively low polynomial scaling means that very large systems are also accessible using parallel computing.

  9. Cost-effective description of strong correlation: Efficient implementations of the perfect quadruples and perfect hextuples models

    NASA Astrophysics Data System (ADS)

    Lehtola, Susi; Parkhill, John; Head-Gordon, Martin

    2016-10-01

    Novel implementations based on dense tensor storage are presented for the singlet-reference perfect quadruples (PQ) [J. A. Parkhill et al., J. Chem. Phys. 130, 084101 (2009)] and perfect hextuples (PH) [J. A. Parkhill and M. Head-Gordon, J. Chem. Phys. 133, 024103 (2010)] models. The methods are obtained as block decompositions of conventional coupled-cluster theory that are exact for four electrons in four orbitals (PQ) and six electrons in six orbitals (PH), but that can also be applied to much larger systems. PQ and PH have storage requirements that scale as the square, and as the cube of the number of active electrons, respectively, and exhibit quartic scaling of the computational effort for large systems. Applications of the new implementations are presented for full-valence calculations on linear polyenes (CnHn+2), which highlight the excellent computational scaling of the present implementations that can routinely handle active spaces of hundreds of electrons. The accuracy of the models is studied in the π space of the polyenes, in hydrogen chains (H50), and in the π space of polyacene molecules. In all cases, the results compare favorably to density matrix renormalization group values. With the novel implementation of PQ, active spaces of 140 electrons in 140 orbitals can be solved in a matter of minutes on a single core workstation, and the relatively low polynomial scaling means that very large systems are also accessible using parallel computing.

  10. De novo design of the hydrophobic core of ubiquitin.

    PubMed Central

    Lazar, G. A.; Desjarlais, J. R.; Handel, T. M.

    1997-01-01

    We have previously reported the development and evaluation of a computational program to assist in the design of hydrophobic cores of proteins. In an effort to investigate the role of core packing in protein structure, we have used this program, referred to as Repacking of Cores (ROC), to design several variants of the protein ubiquitin. Nine ubiquitin variants containing from three to eight hydrophobic core mutations were constructed, purified, and characterized in terms of their stability and their ability to adopt a uniquely folded native-like conformation. In general, designed ubiquitin variants are more stable than control variants in which the hydrophobic core was chosen randomly. However, in contrast to previous results with 434 cro, all designs are destabilized relative to the wild-type (WT) protein. This raises the possibility that beta-sheet structures have more stringent packing requirements than alpha-helical proteins. A more striking observation is that all variants, including random controls, adopt fairly well-defined conformations, regardless of their stability. This result supports conclusions from the cro studies that non-core residues contribute significantly to the conformational uniqueness of these proteins while core packing largely affects protein stability and has less impact on the nature or uniqueness of the fold. Concurrent with the above work, we used stability data on the nine ubiquitin variants to evaluate and improve the predictive ability of our core packing algorithm. Additional versions of the program were generated that differ in potential function parameters and sampling of side chain conformers. Reasonable correlations between experimental and predicted stabilities suggest the program will be useful in future studies to design variants with stabilities closer to that of the native protein. Taken together, the present study provides further clarification of the role of specific packing interactions in protein structure and stability, and demonstrates the benefit of using systematic computational methods to predict core packing arrangements for the design of proteins. PMID:9194177

  11. Generic algorithms for high performance scalable geocomputing

    NASA Astrophysics Data System (ADS)

    de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek

    2016-04-01

    During the last decade, the characteristics of computing hardware have changed a lot. For example, instead of a single general purpose CPU core, personal computers nowadays contain multiple cores per CPU and often general purpose accelerators, like GPUs. Additionally, compute nodes are often grouped together to form clusters or a supercomputer, providing enormous amounts of compute power. For existing earth simulation models to be able to use modern hardware platforms, their compute intensive parts must be rewritten. This can be a major undertaking and may involve many technical challenges. Compute tasks must be distributed over CPU cores, offloaded to hardware accelerators, or distributed to different compute nodes. And ideally, all of this should be done in such a way that the compute task scales well with the hardware resources. This presents two challenges: 1) how to make good use of all the compute resources and 2) how to make these compute resources available for developers of simulation models, who may not (want to) have the required technical background for distributing compute tasks. The first challenge requires the use of specialized technology (e.g.: threads, OpenMP, MPI, OpenCL, CUDA). The second challenge requires the abstraction of the logic handling the distribution of compute tasks from the model-specific logic, hiding the technical details from the model developer. To assist the model developer, we are developing a C++ software library (called Fern) containing algorithms that can use all CPU cores available in a single compute node (distributing tasks over multiple compute nodes will be done at a later stage). The algorithms are grid-based (finite difference) and include local and spatial operations such as convolution filters. The algorithms handle distribution of the compute tasks to CPU cores internally. In the resulting model, the low-level details of how this is done are separated from the model-specific logic representing the modeled system. This contrasts with practices in which code for distributing compute tasks is mixed with model-specific code, and results in a more maintainable model. For flexibility and efficiency, the algorithms are configurable at compile-time with respect to the following aspects: data type, value type, no-data handling, input value domain handling, and output value range handling. This makes the algorithms usable in very different contexts, without the need for making intrusive changes to existing models when using them. Applications that benefit from using the Fern library include the construction of forward simulation models in (global) hydrology (e.g. PCR-GLOBWB (Van Beek et al. 2011)), ecology, geomorphology, or land use change (e.g. PLUC (Verstegen et al. 2014)) and manipulation of hyper-resolution land surface data such as digital elevation models and remote sensing data. Using the Fern library, we have also created an add-on to the PCRaster Python Framework (Karssenberg et al. 2010) allowing its users to speed up their spatio-temporal models, sometimes by changing just a single line of Python code in their model. In our presentation we will give an overview of the design of the algorithms, providing examples of different contexts where they can be used to replace existing sequential algorithms, including the PCRaster environmental modeling software (www.pcraster.eu). We will show how the algorithms can be configured to behave differently when necessary. References Karssenberg, D., Schmitz, O., Salamon, P., De Jong, K.
and Bierkens, M.F.P., 2010, A software framework for construction of process-based stochastic spatio-temporal models and data assimilation. Environmental Modelling & Software, 25, pp. 489-502, Link. Best Paper Award 2010: Software and Decision Support. Van Beek, L. P. H., Y. Wada, and M. F. P. Bierkens. 2011. Global monthly water stress: 1. Water balance and water availability. Water Resources Research. 47. Verstegen, J. A., D. Karssenberg, F. van der Hilst, and A. P. C. Faaij. 2014. Identifying a land use change cellular automaton by Bayesian data assimilation. Environmental Modelling & Software 53:121-136.
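
    As a rough picture of what a configurable, grid-based local operation that parallelizes over cores can look like from user code (this is not the Fern or PCRaster API), the sketch below applies a 3x3 mean filter to row blocks of a raster in separate processes while propagating a no-data value; the block decomposition and no-data convention are assumptions.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from functools import partial

NODATA = -9999.0

def mean_filter_rows(grid, row0, row1):
    """3x3 mean filter for rows [row0, row1), treating NODATA cells as missing."""
    padded = np.pad(grid, 1, constant_values=NODATA)
    out = np.full((row1 - row0, grid.shape[1]), NODATA)
    for i in range(row0, row1):
        for j in range(grid.shape[1]):
            window = padded[i:i + 3, j:j + 3]
            valid = window != NODATA
            if valid.any():
                out[i - row0, j] = window[valid].mean()
    return out

def parallel_mean_filter(grid, workers=4):
    """Row-block decomposition; a real library would share the grid instead of pickling it."""
    bounds = np.linspace(0, grid.shape[0], workers + 1, dtype=int)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        blocks = pool.map(partial(mean_filter_rows, grid), bounds[:-1], bounds[1:])
    return np.vstack(list(blocks))

if __name__ == "__main__":
    dem = np.random.default_rng(0).uniform(0.0, 100.0, size=(200, 300))
    dem[10:20, 50:60] = NODATA               # a hole in the raster
    print(parallel_mean_filter(dem).shape)   # (200, 300)
```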

  12. Who Can You Turn to? Tie Activation within Core Business Discussion Networks

    ERIC Educational Resources Information Center

    Renzulli, Linda A.; Aldrich, Howard

    2005-01-01

    We examine the connection between personal network characteristics and the activation of ties for access to resources during routine times. We focus on factors affecting business owners' use of their core network ties to obtain legal, loan, financial and expert advice. Owners rely more on core business ties when their core networks contain a high…

  13. Effect of Ni Core Structure on the Electrocatalytic Activity of Pt-Ni/C in Methanol Oxidation

    PubMed Central

    Kang, Jian; Wang, Rongfang; Wang, Hui; Liao, Shijun; Key, Julian; Linkov, Vladimir; Ji, Shan

    2013-01-01

    Methanol oxidation catalysts comprising an outer Pt-shell with an inner Ni-core supported on carbon, (Pt-Ni/C), were prepared with either crystalline or amorphous Ni core structures. Structural comparisons of the two forms of catalyst were made using transmission electron microscopy (TEM), X-ray diffraction (XRD) and X-ray photoelectron spectroscopy (XPS), and methanol oxidation activity compared using CV and chronoamperometry (CA). While both the amorphous Ni core and crystalline Ni core structures were covered by similar Pt shell thickness and structure, the Pt-Ni(amorphous)/C catalyst had higher methanol oxidation activity. The amorphous Ni core thus offers improved Pt usage efficiency in direct methanol fuel cells. PMID:28811402

  14. APPLICATION OF COMPUTER AIDED TOMOGRAPHY (CAT) TO THE STUDY OF MARINE BENTHIC COMMUNITIES

    EPA Science Inventory

    Sediment cores were imaged using a Computer-Aided Tomography (CT) scanner at Massachusetts General Hospital, Boston, Massachusetts, United States. Procedures were developed, using the attenuation of X-rays, to differentiate between sediment and the water contained in macrobenthic...
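
    The attenuation-based separation of sediment from water described above amounts, in its simplest form, to thresholding the reconstructed CT values. The sketch below does this on a synthetic slice; the Hounsfield threshold and voxel size are illustrative assumptions, not values from the EPA study.

```python
import numpy as np

# Synthetic stand-in for a reconstructed CT slice in Hounsfield units (HU):
# water is near 0 HU, water-saturated sediment is substantially denser.
rng = np.random.default_rng(0)
slice_hu = rng.normal(0.0, 15.0, size=(256, 256))          # "water" background
slice_hu[100:200, 60:220] += 900.0                          # a "sediment" region

SEDIMENT_THRESHOLD_HU = 300.0                               # assumed cut-off
sediment_mask = slice_hu > SEDIMENT_THRESHOLD_HU

voxel_volume_cm3 = 0.05 * 0.05 * 0.1                        # assumed voxel size
print("sediment fraction:", sediment_mask.mean())
print("sediment volume (cm^3):", sediment_mask.sum() * voxel_volume_cm3)
```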

  15. An embedded multi-core parallel model for real-time stereo imaging

    NASA Astrophysics Data System (ADS)

    He, Wenjing; Hu, Jian; Niu, Jingyu; Li, Chuanrong; Liu, Guangyu

    2018-04-01

    Real-time processing based on embedded systems will enhance the application capability of stereo imaging for LiDAR and hyperspectral sensors. Task partitioning and scheduling strategies for embedded multiprocessor systems started relatively late compared with those for PC computers. In this paper, aimed at an embedded multi-core processing platform, a parallel model for stereo imaging is studied and verified. After analyzing the computational load, throughput capacity, and buffering requirements, a two-stage pipeline parallel model based on message transmission is established. This model can be applied to fast stereo imaging for airborne sensors with various characteristics. To demonstrate the feasibility and effectiveness of the parallel model, parallel software was designed using test flight data, based on the 8-core DSP processor TMS320C6678. The results indicate that the design performed well in workload distribution and achieved a speed-up ratio of up to 6.4.
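
    A minimal software analogue of the two-stage, message-passing pipeline described above (not the TMS320C6678 DSP implementation) can be written with multiprocessing queues: stage 1 pre-processes frames and forwards them as messages, stage 2 consumes them, and the bounded queue plays the role of the buffering discussed in the abstract.

```python
import multiprocessing as mp
import numpy as np

def stage1(q_out, n_frames):
    """Stage 1: acquire and pre-process frames, then pass them downstream as messages."""
    rng = np.random.default_rng(0)
    for i in range(n_frames):
        frame = rng.standard_normal((64, 64))
        q_out.put((i, frame - frame.mean()))     # toy pre-processing step
    q_out.put(None)                              # end-of-stream marker

def stage2(q_in, q_result):
    """Stage 2: consume pre-processed frames and run the (toy) imaging computation."""
    total = 0.0
    while True:
        item = q_in.get()
        if item is None:
            break
        _, frame = item
        total += float(np.abs(frame).sum())      # stand-in for matching/triangulation work
    q_result.put(total)

if __name__ == "__main__":
    frames = mp.Queue(maxsize=4)                 # bounded queue supplies back-pressure/buffering
    results = mp.Queue()
    p1 = mp.Process(target=stage1, args=(frames, 20))
    p2 = mp.Process(target=stage2, args=(frames, results))
    p1.start(); p2.start()
    total = results.get()                        # both stages overlap while this blocks
    p1.join(); p2.join()
    print("pipeline result:", total)
```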

  16. Acceleration of the Particle Swarm Optimization for Peierls-Nabarro modeling of dislocations in conventional and high-entropy alloys

    NASA Astrophysics Data System (ADS)

    Pei, Zongrui; Eisenbach, Markus

    2017-06-01

    Dislocations are among the most important defects in determining the mechanical properties of both conventional alloys and high-entropy alloys. The Peierls-Nabarro model supplies an efficient pathway to their geometries and mobility. The difficulty in solving the integro-differential Peierls-Nabarro equation is how to effectively avoid the local minima in the energy landscape of a dislocation core. Among the methods available to optimize dislocation core structures, we choose Particle Swarm Optimization, an algorithm that simulates the social behavior of organisms. By employing more particles (a bigger swarm) and more iterative steps (allowing them to explore for a longer time), local minima can be effectively avoided, but this requires more computational cost. The advantage of this algorithm is that it is readily parallelized on modern high-performance computing architectures. We demonstrate that the performance of our parallelized algorithm scales linearly with the number of employed cores.
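
    For readers unfamiliar with the optimizer, a bare-bones Particle Swarm Optimization loop is sketched below on a standard multi-modal test function standing in for the dislocation-core energy landscape; the swarm parameters are conventional defaults, and nothing here reproduces the Peierls-Nabarro functional itself. The per-particle objective evaluations are independent, which is what makes the method straightforward to parallelize.

```python
import numpy as np

def pso(objective, bounds, n_particles=40, n_steps=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO: each particle is pulled toward its own best and the swarm's best position."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    p_best, p_val = x.copy(), np.array([objective(p) for p in x])
    g_best = p_best[p_val.argmin()].copy()
    for _ in range(n_steps):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([objective(p) for p in x])   # independent evaluations: easy to parallelize
        improved = val < p_val
        p_best[improved], p_val[improved] = x[improved], val[improved]
        g_best = p_best[p_val.argmin()].copy()
    return g_best, p_val.min()

# Multi-modal test function (Rastrigin) standing in for a rugged energy landscape.
rastrigin = lambda z: 10 * z.size + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
best, value = pso(rastrigin, (np.full(4, -5.12), np.full(4, 5.12)))
print(best, value)
```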

  17. Host-Guest Complexes with Protein-Ligand-Like Affinities: Computational Analysis and Design

    PubMed Central

    Moghaddam, Sarvin; Inoue, Yoshihisa

    2009-01-01

    It has recently been discovered that guests combining a nonpolar core with cationic substituents bind cucurbit[7]uril (CB[7]) in water with ultra-high affinities. The present study uses the Mining Minima algorithm to study the physics of these extraordinary associations and to computationally test a new series of CB[7] ligands designed to bind with similarly high affinity. The calculations reproduce key experimental observations regarding the affinities of ferrocene-based guests with CB[7] and β-cyclodextrin and provide a coherent view of the roles of electrostatics and configurational entropy as determinants of affinity in these systems. The newly designed series of compounds is based on a bicyclo[2.2.2]octane core, which is similar in size and polarity to the ferrocene core of the existing series. Mining Minima predicts that these new compounds will, like the ferrocenes, bind CB[7] with extremely high affinities. PMID:19133781

  18. Computational modeling of temperature elevation and thermoregulatory response in the brains of anesthetized rats locally exposed at 1.5 GHz

    NASA Astrophysics Data System (ADS)

    Hirata, Akimasa; Masuda, Hiroshi; Kanai, Yuya; Asai, Ryuichi; Fujiwara, Osamu; Arima, Takuji; Kawai, Hiroki; Watanabe, Soichi; Lagroye, Isabelle; Veyret, Bernard

    2011-12-01

    The dominant effect of human exposures to microwaves is caused by temperature elevation ('thermal effect'). In the safety guidelines/standards, the specific absorption rate averaged over a specific volume is used as a metric for human protection from localized exposure. Further investigation on the use of this metric is required, especially in terms of thermophysiology. The World Health Organization (2006 RF research agenda) has given high priority to research into the extent and consequences of microwave-induced temperature elevation in children. In this study, an electromagnetic-thermal computational code was developed to model electromagnetic power absorption and resulting temperature elevation leading to changes in active blood flow in response to localized 1.457 GHz exposure in rat heads. Both juvenile (4 week old) and young adult (8 week old) rats were considered. The computational code was validated against measurements for 4 and 8 week old rats. Our computational results suggest that the blood flow rate depends on both brain and core temperature elevations. No significant difference was observed between thermophysiological responses in 4 and 8 week old rats under these exposure conditions. The computational model developed herein is thus applicable to set exposure conditions for rats in laboratory investigations, as well as in planning treatment protocols in the thermal therapy.
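
    The coupling described above, absorbed microwave power, heat conduction, and a blood-flow term that itself responds to temperature, has the structure of a Pennes-type bioheat model. The 1D explicit sketch below uses invented tissue parameters and a made-up perfusion response purely to show that structure; it is not the validated rat-head code.

```python
import numpy as np

# Illustrative 1D Pennes bioheat model with a temperature-dependent perfusion
# term; all parameter values and the perfusion response are assumptions.
n, dx, dt = 100, 1.0e-3, 0.05                 # grid cells, m, s
k, rho, c = 0.5, 1050.0, 3600.0               # conductivity, density, heat capacity
rho_b, c_b, t_blood = 1050.0, 3800.0, 37.0    # blood properties and core temperature
sar = np.zeros(n); sar[40:60] = 4.0           # localized absorbed power, W/kg

def perfusion(t_tissue):
    """Assumed thermoregulatory response: baseline flow that ramps up with local heating."""
    return 0.0005 * (1.0 + np.clip((t_tissue - 37.0) / 2.0, 0.0, 4.0))   # 1/s

t = np.full(n, 37.0)
for _ in range(2400):                         # two minutes of exposure
    lap = (np.roll(t, 1) - 2 * t + np.roll(t, -1)) / dx**2
    lap[0] = lap[-1] = 0.0                    # crude insulated boundaries
    w = perfusion(t)
    dtdt = (k * lap + rho * sar - rho_b * c_b * w * (t - t_blood)) / (rho * c)
    t = t + dt * dtdt
print("peak temperature rise (deg C):", round(t.max() - 37.0, 3))
```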

  19. The LPO Iron Pattern beneath the Earth's Inner Core Boundary

    NASA Astrophysics Data System (ADS)

    Mattesini, Maurizio; Belonoshko, Anatoly; Tkalčić, Hrvoje

    2017-04-01

    An Earth's inner core surface pattern for the iron Lattice Preferred Orientation (LPO) has been addressed for various iron crystal polymorphs. The geographical distribution of the degree of crystal alignment was obtained by bridging high-quality inner core probing seismic data [PKP(bc-df)] with ab initio computed elastic constants. We show that the proposed topographic crystal alignment may be used as a boundary condition for dynamo simulations, providing an additional way to discriminate between different, and often controversial, geodynamical scenarios.

  20. The LPO Iron Pattern beneath the Earth's Inner Core Boundary

    NASA Astrophysics Data System (ADS)

    Mattesini, M.; Tkalcic, H.; Belonoshko, A. B.; Buforn, E.; Udias, A.

    2015-12-01

    An Earth's inner core surface pattern for the iron Lattice Preferred Orientation (LPO) has been addressed for various iron crystal polymorphs. The geographical distribution of the degree of crystal alignment was obtained by bridging high-quality inner core probing seismic data [PKP(bc-df)] with ab initio computed elastic constants. We show that the proposed topographic crystal alignment may be used as a boundary condition for dynamo simulations, providing an additional way to discriminate between different, and often controversial, geodynamical scenarios.

  1. Evaluation of out-of-core computer programs for the solution of symmetric banded linear equations. [simultaneous equations

    NASA Technical Reports Server (NTRS)

    Dunham, R. S.

    1976-01-01

    FORTRAN-coded out-of-core equation solvers that use direct methods to solve symmetric banded systems of simultaneous algebraic equations are evaluated. Banded, frontal, and column (skyline) solvers were studied, as well as solvers that can partition the working area and thus fit into any available core. Comparison timings are presented for several typical two-dimensional and three-dimensional continuum-type grids of elements with and without midside nodes. Extensive conclusions are also given.
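
    The banded solvers compared in this report have direct modern counterparts in library routines. The sketch below (using SciPy rather than the FORTRAN solvers evaluated in the report) assembles a small symmetric positive-definite tridiagonal system in LAPACK's upper-band storage and solves it in core as a reference point.

```python
import numpy as np
from scipy.linalg import solveh_banded

# Symmetric positive-definite banded system: a 1D stiffness-like matrix with
# half-bandwidth 1 (tridiagonal), stored in the upper-band layout solveh_banded expects.
n = 10
ab = np.zeros((2, n))
ab[0, 1:] = -1.0          # superdiagonal
ab[1, :] = 2.0            # main diagonal
b = np.ones(n)

x = solveh_banded(ab, b)                      # banded Cholesky factor-and-solve

# Dense check of the same system.
dense = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
print(np.allclose(dense @ x, b))              # True
```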

  2. BNL severe-accident sequence experiments and analysis program. [PWR; BWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greene, G.A.; Ginsberg, T.; Tutu, N.K.

    1983-01-01

    In the analysis of degraded core accidents, the two major sources of pressure loading on light water reactor containments are: steam generation from core debris-water thermal interactions; and molten core-concrete interactions. Experiments are in progress at BNL in support of analytical model development related to aspects of the above containment loading mechanisms. The work supports development and evaluation of the CORCON (Muir, 1981) and MARCH (Wooton, 1980) computer codes. Progress in the two programs is described.

  3. Facile CO Cleavage by a Multimetallic CsU2 Nitride Complex.

    PubMed

    Falcone, Marta; Kefalidis, Christos E; Scopelliti, Rosario; Maron, Laurent; Mazzanti, Marinella

    2016-09-26

    Uranium nitrides are important materials with potential for application as fuels for nuclear power generation, and as highly active catalysts. Molecular nitride compounds could provide important insight into the nature of the uranium-nitride bond, but currently little is known about their reactivity. In this study, we found that a complex containing a nitride bridging two uranium centers and a cesium cation readily cleaved the C≡O bond (one of the strongest bonds in nature) under ambient conditions. The product formed has a [CsU2(μ-CN)(μ-O)] core, thus indicating that the three cations cooperate to cleave CO. Moreover, the addition of MeOTf to the nitride complex led to an exceptional valence disproportionation of the CsU(IV)-N-U(IV) core to yield CsU(III)(OTf) and [MeN=U(V)] fragments. The important role of multimetallic cooperativity in both reactions is illustrated by the computed reaction mechanisms. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Multi-reference approach to the calculation of photoelectron spectra including spin-orbit coupling.

    PubMed

    Grell, Gilbert; Bokarev, Sergey I; Winter, Bernd; Seidel, Robert; Aziz, Emad F; Aziz, Saadullah G; Kühn, Oliver

    2015-08-21

    X-ray photoelectron spectra provide a wealth of information on the electronic structure. The extraction of molecular details requires adequate theoretical methods, which in case of transition metal complexes has to account for effects due to the multi-configurational and spin-mixed nature of the many-electron wave function. Here, the restricted active space self-consistent field method including spin-orbit coupling is used to cope with this challenge and to calculate valence- and core-level photoelectron spectra. The intensities are estimated within the frameworks of the Dyson orbital formalism and the sudden approximation. Thereby, we utilize an efficient computational algorithm that is based on a biorthonormal basis transformation. The approach is applied to the valence photoionization of the gas phase water molecule and to the core ionization spectrum of the [Fe(H2O)6](2+) complex. The results show good agreement with the experimental data obtained in this work, whereas the sudden approximation demonstrates distinct deviations from experiments.

  5. The VERCE platform: Enabling Computational Seismology via Streaming Workflows and Science Gateways

    NASA Astrophysics Data System (ADS)

    Spinuso, Alessandro; Filgueira, Rosa; Krause, Amrey; Matser, Jonas; Casarotti, Emanuele; Magnoni, Federica; Gemund, Andre; Frobert, Laurent; Krischer, Lion; Atkinson, Malcolm

    2015-04-01

    The VERCE project is creating an e-Science platform to facilitate innovative data analysis and coding methods that fully exploit the wealth of data in global seismology. One of the technologies developed within the project is the Dispel4Py python library, which allows users to describe abstract stream-based workflows for data-intensive applications and to execute them in a distributed environment. At runtime Dispel4Py is able to map workflow descriptions dynamically onto a number of computational resources (Apache Storm clusters, MPI-powered clusters, shared-memory multi-core machines, and single-core machines), setting it apart from other workflow frameworks. Therefore, Dispel4Py enables scientists to focus on their computation instead of being distracted by details of the computing infrastructure they use. Among the workflows developed with Dispel4Py in VERCE, we mention here those for Seismic Ambient Noise Cross-Correlation and MISFIT calculation, which address two data-intensive problems that are common in computational seismology. The former, also called Passive Imaging, allows the detection of relative seismic-wave velocity variations during the time of recording, to be associated with the stress-field changes that occurred in the test area. The MISFIT calculation, instead, takes as input the synthetic seismograms generated from HPC simulations for a certain Earth model and earthquake and, after a preprocessing stage, compares them with real observations in order to foster subsequent model updates and improvement (Inversion). The VERCE Science Gateway exposes the MISFIT calculation workflow as a service, in combination with the simulation phase. Both phases can be configured, controlled and monitored by the user via a rich user interface which is integrated within the gUSE Science Gateway framework, hiding the complexity of accessing third-party data services, security mechanisms and enactment on the target resources. Thanks to a modular extension to the Dispel4Py framework, the system collects provenance data adopting the W3C-PROV data model. Provenance recordings can be explored and analysed at run time for rapid diagnostic and workflow steering, or later for further validation and comparisons across runs. We will illustrate the interactive services of the gateway and the capabilities of the produced metadata, coupled with the VERCE data management layer based on iRODS. The Cross-Correlation workflow was evaluated on SuperMUC, a supercomputing cluster at the Leibniz Supercomputing Centre in Munich, with 155,656 processor cores in 9400 compute nodes. SuperMUC is based on the Intel Xeon architecture consisting of 18 Thin Node Islands and one Fat Node Island. This work has only had access to the Thin Node Islands, which contain Sandy Bridge nodes, each having 16 cores and 32 GB of memory. In the evaluations we used 1000 stations, and we applied two types of methods (whiten and non-whiten) for pre-processing the data. The workflow was tested on a varying number of cores (16, 32, 64, 128, and 256 cores) using the MPI mapping of Dispel4Py. The results show that Dispel4Py is able to improve the performance by increasing the number of cores without changing the description of the workflow.
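
    The whitening and cross-correlation steps of the Passive Imaging workflow can be sketched independently of Dispel4Py. Below is a plain NumPy version for a single pair of synthetic station records; the whitening scheme, trace lengths, and lag window are assumptions, not the VERCE processing parameters.

```python
import numpy as np

def whiten(trace):
    """Spectral whitening: keep the phase, flatten the amplitude spectrum."""
    spec = np.fft.rfft(trace)
    return np.fft.irfft(spec / (np.abs(spec) + 1.0e-12), n=trace.size)

def cross_correlate(a, b, max_lag):
    """Frequency-domain cross-correlation, trimmed to +/- max_lag samples."""
    n = a.size + b.size - 1
    nfft = 1 << (n - 1).bit_length()
    cc = np.fft.irfft(np.fft.rfft(a, nfft) * np.conj(np.fft.rfft(b, nfft)), nfft)
    return np.concatenate([cc[-max_lag:], cc[:max_lag + 1]])   # negative lags, then positive

# Two synthetic station records sharing a delayed common signal plus noise.
rng = np.random.default_rng(0)
common = rng.standard_normal(10_000)
sta_a = common + 0.5 * rng.standard_normal(10_000)
sta_b = np.roll(common, 25) + 0.5 * rng.standard_normal(10_000)

cc = cross_correlate(whiten(sta_a), whiten(sta_b), max_lag=100)
print("lag of peak (samples):", np.argmax(cc) - 100)            # about -25
```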

  6. Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David W. Nigg, Principal Investigator; Kevin A. Steuhm, Project Manager

    Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance, and to some extent, experiment management, are inconsistent with the state of modern nuclear engineering practice, and are difficult, if not impossible, to properly verify and validate (V&V) according to modern standards. Furthermore, the legacy staff knowledge required for application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In late 2009, the Idaho National Laboratory (INL) initiated a focused effort, the ATR Core Modeling Update Project, to address this situation through the introduction of modern high-fidelity computational software and protocols. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF). The ATR Core Modeling Update Project, targeted for full implementation in phase with the next anticipated ATR Core Internals Changeout (CIC) in the 2014-2015 time frame, began during the last quarter of Fiscal Year 2009, and has just completed its third full year. Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL under various licensing arrangements. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purpose. Of particular importance, a set of as-run core depletion HELIOS calculations for all ATR cycles since August 2009, Cycle 145A through Cycle 151B, was successfully completed during 2012. This major effort supported a decision late in the year to proceed with the phased incorporation of the HELIOS methodology into the ATR Core Safety Analysis Package (CSAP) preparation process, in parallel with the established PDQ-based methodology, beginning late in Fiscal Year 2012. Acquisition of the advanced SERPENT (VTT-Finland) and MC21 (DOE-NR) Monte Carlo stochastic neutronics simulation codes was also initiated during the year and some initial applications of SERPENT to ATRC experiment analysis were demonstrated. These two new codes will offer significant additional capability, including the possibility of full-3D Monte Carlo fuel management support capabilities for the ATR at some point in the future. Finally, a capability for rigorous sensitivity analysis and uncertainty quantification based on the TSUNAMI system has been implemented and initial computational results have been obtained. This capability will have many applications as a tool for understanding the margins of uncertainty in the new models as well as for validation experiment design and interpretation.

  7. Performance of an MPI-only semiconductor device simulator on a quad socket/quad core InfiniBand platform.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadid, John Nicolas; Lin, Paul Tinphone

    2009-01-01

    This preliminary study considers the scaling and performance of a finite element (FE) semiconductor device simulator on a capacity cluster with 272 compute nodes based on a homogeneous multicore node architecture utilizing 16 cores. The inter-node communication backbone for this Tri-Lab Linux Capacity Cluster (TLCC) machine is comprised of an InfiniBand interconnect. The nonuniform memory access (NUMA) nodes consist of 2.2 GHz quad socket/quad core AMD Opteron processors. The performance results for this study are obtained with a FE semiconductor device simulation code (Charon) that is based on a fully-coupled Newton-Krylov solver with domain decomposition and multilevel preconditioners. Scaling and multicore performance results are presented for large-scale problems of 100+ million unknowns on up to 4096 cores. A parallel scaling comparison is also presented with the Cray XT3/4 Red Storm capability platform. The results indicate that an MPI-only programming model for utilizing the multicore nodes is reasonably efficient on all 16 cores per compute node. However, the results also indicated that the multilevel preconditioner, which is critical for large-scale capability type simulations, scales better on the Red Storm machine than the TLCC machine.

  8. Magnetic core mesoporous silica nanoparticles doped with dacarbazine and labelled with 99mTc for early and differential detection of metastatic melanoma by single photon emission computed tomography.

    PubMed

    Portilho, Filipe Leal; Helal-Neto, Edward; Cabezas, Santiago Sánchez; Pinto, Suyene Rocha; Dos Santos, Sofia Nascimento; Pozzo, Lorena; Sancenón, Félix; Martínez-Máñez, Ramón; Santos-Oliveira, Ralph

    2018-02-27

    Cancer is responsible for more than 12% of all deaths worldwide, killing more than 7 million people annually. Melanoma is among the most aggressive cancers, with serious limitations in early detection and therapy. We therefore developed, characterized and tested in vivo a new drug delivery system based on magnetic core-mesoporous silica nanoparticles doped with dacarbazine and labelled with technetium-99m, to be used as a nano-imaging agent (nanoradiopharmaceutical) for early and differential diagnosis of melanoma by single photon emission computed tomography. The results demonstrated that the magnetic core-mesoporous silica can be efficiently doped with dacarbazine (>98%) and efficiently labelled with 99mTc (>99%). The in vivo test, using mice with induced melanoma, demonstrated the EPR effect of the dacarbazine-doped, 99mTc-labelled magnetic core-mesoporous silica nanoparticles when injected intratumorally, as well as the possibility of systemic injection. In both cases, the nanoparticles proved to be a reliable and efficient nano-imaging agent for melanoma.

  9. Hydraulic Conductivity Measurements Barrow 2014

    DOE Data Explorer

    Katie McKnight; Tim Kneafsey; Craig Ulrich; Jil Geller

    2015-02-22

    Six individual ice cores were collected from Barrow Environmental Observatory in Barrow, Alaska, in May of 2013 as part of the Next Generation Ecosystem Experiment (NGEE). Each core was drilled from a different location at varying depths. A few days after drilling, the cores were stored in coolers packed with dry ice and flown to Lawrence Berkeley National Laboratory (LBNL) in Berkeley, CA. 3-dimensional images of the cores were constructed using a medical X-ray computed tomography (CT) scanner at 120kV. Hydraulic conductivity samples were extracted from these cores at LBNL Richmond Field Station in Richmond, CA, in February 2014 by cutting 5 to 8 inch segments using a chop saw. Samples were packed individually and stored at freezing temperatures to minimize any changes in structure or loss of ice content prior to analysis. Hydraulic conductivity was determined through falling head tests using a permeameter [ELE International, Model #: K-770B]. After approximately 12 hours of thaw, initial falling head tests were performed. Two to four measurements were collected on each sample and collection stopped when the applied head load exceeded 25% change from the original load. Analyses were performed between 2 to 3 times for each sample. The final hydraulic conductivity calculations were computed using methodology of Das et al., 1985.
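
    For reference, the standard falling-head permeameter relation used in tests of this kind is K = (aL/(At)) ln(h1/h2). The sketch below applies that textbook formula; the geometry and head readings are hypothetical and are not values from this dataset.

    # Sketch of the standard falling-head permeameter calculation; the
    # geometry and head readings below are hypothetical, not values from
    # the Barrow dataset.
    import math

    def falling_head_k(a, A, L, h1, h2, t):
        """Hydraulic conductivity K = (a*L / (A*t)) * ln(h1/h2).

        a  -- cross-sectional area of the standpipe (cm^2)
        A  -- cross-sectional area of the sample    (cm^2)
        L  -- sample length (cm)
        h1 -- head at the start of the test (cm)
        h2 -- head at the end of the test   (cm)
        t  -- elapsed time (s)
        Returns K in cm/s.
        """
        return (a * L) / (A * t) * math.log(h1 / h2)

    # Example: 1 cm^2 standpipe, 45 cm^2 sample of length 15 cm,
    # head falling from 80 cm to 60 cm over 10 minutes.
    print(f"K = {falling_head_k(1.0, 45.0, 15.0, 80.0, 60.0, 600.0):.2e} cm/s")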

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rynkun, P., E-mail: pavel.rynkun@gmail.com; Jönsson, P.; Gaigalas, G.

    Based on relativistic wavefunctions from multiconfiguration Dirac–Hartree–Fock and configuration interaction calculations, E1, M1, E2, and M2 transition rates, weighted oscillator strengths, and lifetimes are evaluated for the states of the (1s²)2s²2p³, 2s2p⁴, and 2p⁵ configurations in all nitrogen-like ions between F III and Kr XXX. The wavefunction expansions include valence, core–valence, and core–core correlation effects through single–double multireference expansions to increasing sets of active orbitals. The computed energies agree very well with experimental values, with differences of only 300–600 cm⁻¹ for the majority of the levels and ions in the sequence. Computed transition rates are in close agreement with available data from MCHF-BP calculations by Tachiev and Froese Fischer [G.I. Tachiev, C. Froese Fischer, A&A 385 (2002) 716].

  11. Decision making in recurrent neuronal circuits.

    PubMed

    Wang, Xiao-Jing

    2008-10-23

    Decision making has recently emerged as a central theme in neurophysiological studies of cognition, and experimental and computational work has led to the proposal of a cortical circuit mechanism of elemental decision computations. This mechanism depends on slow recurrent synaptic excitation balanced by fast feedback inhibition, which not only instantiates attractor states for forming categorical choices but also long transients for gradually accumulating evidence in favor of or against alternative options. Such a circuit endowed with reward-dependent synaptic plasticity is able to produce adaptive choice behavior. While decision threshold is a core concept for reaction time tasks, it can be dissociated from a general decision rule. Moreover, perceptual decisions and value-based economic choices are described within a unified framework in which probabilistic choices result from irregular neuronal activity as well as iterative interactions of a decision maker with an uncertain environment or other unpredictable decision makers in a social group.
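
    The competition-through-recurrent-excitation-and-feedback-inhibition mechanism can be illustrated with a highly reduced firing-rate sketch: two selective populations integrate noisy evidence, self-excite, inhibit each other, and the first to cross a threshold determines the choice and reaction time. This is only a toy rate model with illustrative parameters, not the spiking circuit model described in the paper.

    # Toy rate-model sketch of competition between two selective populations
    # (illustrative parameters only; not the paper's spiking network).
    import numpy as np

    def decide(coherence=0.1, threshold=15.0, dt=1e-3, t_max=2.0, seed=1):
        rng = np.random.default_rng(seed)
        r1 = r2 = 1.0                      # firing rates of the two populations (Hz)
        w_self, w_cross = 1.6, -1.0        # slow self-excitation, effective cross-inhibition
        tau = 0.1                          # slow synaptic time constant (s)
        for step in range(int(t_max / dt)):
            i1 = 5.0 * (1 + coherence) + rng.normal(0.0, 1.0)   # noisy sensory evidence
            i2 = 5.0 * (1 - coherence) + rng.normal(0.0, 1.0)
            r1 += dt / tau * (-r1 + max(0.0, w_self * r1 + w_cross * r2 + i1))
            r2 += dt / tau * (-r2 + max(0.0, w_self * r2 + w_cross * r1 + i2))
            if r1 > threshold or r2 > threshold:
                return (1 if r1 > r2 else 2), (step + 1) * dt   # choice, reaction time
        return 0, t_max                                          # no decision reached

    choice, rt = decide()
    print(f"choice {choice} after {rt:.3f} s")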

  12. Model of the songbird nucleus HVC as a network of central pattern generators

    PubMed Central

    Abarbanel, Henry D. I.

    2016-01-01

    We propose a functional architecture of the adult songbird nucleus HVC in which the core element is a “functional syllable unit” (FSU). In this model, HVC is organized into FSUs, each of which provides the basis for the production of one syllable in vocalization. Within each FSU, the inhibitory neuron population takes one of two operational states: 1) simultaneous firing wherein all inhibitory neurons fire simultaneously, and 2) competitive firing of the inhibitory neurons. Switching between these basic modes of activity is accomplished via changes in the synaptic strengths among the inhibitory neurons. The inhibitory neurons connect to excitatory projection neurons such that during state 1 the activity of projection neurons is suppressed, while during state 2 patterns of sequential firing of projection neurons can occur. The latter state is stabilized by feedback from the projection to the inhibitory neurons. Song composition for specific species is distinguished by the manner in which different FSUs are functionally connected to each other. Ours is a computational model built with biophysically based neurons. We illustrate that many observations of HVC activity are explained by the dynamics of the proposed population of FSUs, and we identify aspects of the model that are currently testable experimentally. In addition, and standing apart from the core features of an FSU, we propose that the transition between modes may be governed by the biophysical mechanism of neuromodulation. PMID:27535375

  13. Topological defects in a living nematic ensnare swimming bacteria [Linking bacterial motility and liquid crystallinity in a model of living nematic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Genkin, Mikhail Mikhailovich; Sokolov, Andrey; Lavrentovich, Oleg D.

    Active matter exemplified by suspensions of motile bacteria or synthetic self-propelled particles exhibits a remarkable propensity to self-organization and collective motion. The local input of energy and simple particle interactions often lead to complex emergent behavior manifested by the formation of macroscopic vortices and coherent structures with long-range order. A realization of an active system has been conceived by combining swimming bacteria and a lyotropic liquid crystal. Here, by coupling the well-established and validated model of nematic liquid crystals with the bacterial dynamics, we develop a computational model describing intricate properties of such a living nematic. In faithful agreement with the experiment, the model reproduces the onset of periodic undulation of the director and consequent proliferation of topological defects with the increase in bacterial concentration. It yields a testable prediction on the accumulation of bacteria in the cores of +1/2 topological defects and depletion of bacteria in the cores of -1/2 defects. Our dedicated experiment on motile bacteria suspended in a freestanding liquid crystalline film fully confirms this prediction. Lastly, our findings suggest novel approaches for trapping and transport of bacteria and synthetic swimmers in anisotropic liquids and extend the scope of tools to control and manipulate microscopic objects in active matter.

  14. Topological defects in a living nematic ensnare swimming bacteria [Linking bacterial motility and liquid crystallinity in a model of living nematic

    DOE PAGES

    Genkin, Mikhail Mikhailovich; Sokolov, Andrey; Lavrentovich, Oleg D.; ...

    2017-03-08

    Active matter exemplified by suspensions of motile bacteria or synthetic self-propelled particles exhibits a remarkable propensity to self-organization and collective motion. The local input of energy and simple particle interactions often lead to complex emergent behavior manifested by the formation of macroscopic vortices and coherent structures with long-range order. A realization of an active system has been conceived by combining swimming bacteria and a lyotropic liquid crystal. Here, by coupling the well-established and validated model of nematic liquid crystals with the bacterial dynamics, we develop a computational model describing intricate properties of such a living nematic. In faithful agreement with the experiment, the model reproduces the onset of periodic undulation of the director and consequent proliferation of topological defects with the increase in bacterial concentration. It yields a testable prediction on the accumulation of bacteria in the cores of +1/2 topological defects and depletion of bacteria in the cores of -1/2 defects. Our dedicated experiment on motile bacteria suspended in a freestanding liquid crystalline film fully confirms this prediction. Lastly, our findings suggest novel approaches for trapping and transport of bacteria and synthetic swimmers in anisotropic liquids and extend the scope of tools to control and manipulate microscopic objects in active matter.

  15. Core stability training for injury prevention.

    PubMed

    Huxel Bliven, Kellie C; Anderson, Barton E

    2013-11-01

    Enhancing core stability through exercise is common to musculoskeletal injury prevention programs. Definitive evidence demonstrating an association between core instability and injury is lacking; however, multifaceted prevention programs including core stabilization exercises appear to be effective at reducing lower extremity injury rates. PubMed was searched for epidemiologic, biomechanical, and clinical studies of core stability for injury prevention (keywords: "core OR trunk" AND "training OR prevention OR exercise OR rehabilitation" AND "risk OR prevalence") published between January 1980 and October 2012. Articles with relevance to core stability risk factors, assessment, and training were reviewed. Relevant sources from articles were also retrieved and reviewed. Stabilizer, mobilizer, and load transfer core muscles assist in understanding injury risk, assessing core muscle function, and developing injury prevention programs. Moderate evidence of alterations in core muscle recruitment and injury risk exists. Assessment tools to identify deficits in volitional muscle contraction, isometric muscle endurance, stabilization, and movement patterns are available. Exercise programs to improve core stability should focus on muscle activation, neuromuscular control, static stabilization, and dynamic stability. Core stabilization relies on instantaneous integration among passive, active, and neural control subsystems. Core muscles are often categorized functionally on the basis of stabilizing or mobilizing roles. Neuromuscular control is critical in coordinating this complex system for dynamic stabilization. Comprehensive assessment and training require a multifaceted approach to address core muscle strength, endurance, and recruitment requirements for functional demands associated with daily activities, exercise, and sport.

  16. Challenges in scaling NLO generators to leadership computers

    NASA Astrophysics Data System (ADS)

    Benjamin, D.; Childers, JT; Hoeche, S.; LeCompte, T.; Uram, T.

    2017-10-01

    Exascale computing resources are roughly a decade away and will be capable of 100 times more computing than current supercomputers. In the last year, Energy Frontier experiments crossed a milestone of 100 million core-hours used at the Argonne Leadership Computing Facility, Oak Ridge Leadership Computing Facility, and NERSC. The Fortran-based leading-order parton generator called Alpgen was successfully scaled to millions of threads to achieve this level of usage on Mira. Sherpa and MadGraph are next-to-leading order generators used heavily by LHC experiments for simulation. Integration times for high-multiplicity or rare processes can take a week or more on standard Grid machines, even when using all 16 cores. We will describe our ongoing work to scale the Sherpa generator to thousands of threads on leadership-class machines and reduce run-times to less than a day. This work allows the experiments to leverage large-scale parallel supercomputers for event generation today, freeing tens of millions of grid hours for other work, and paving the way for future applications (simulation, reconstruction) on these and future supercomputers.

  17. Constructing Smart Protocells with Built-In DNA Computational Core to Eliminate Exogenous Challenge.

    PubMed

    Lyu, Yifan; Wu, Cuichen; Heinke, Charles; Han, Da; Cai, Ren; Teng, I-Ting; Liu, Yuan; Liu, Hui; Zhang, Xiaobing; Liu, Qiaoling; Tan, Weihong

    2018-06-06

    A DNA reaction network is like a biological algorithm that can respond to "molecular input signals", such as biological molecules, while the artificial cell is like a microrobot whose function is powered by the encapsulated DNA reaction network. In this work, we describe the feasibility of using a DNA reaction network as the computational core of a protocell, which will perform an artificial immune response in a concise way to eliminate a mimicked pathogenic challenge. Such a DNA reaction network (RN)-powered protocell can realize the connection of logical computation and biological recognition due to the natural programmability and biological properties of DNA. Thus, the biological input molecules can be easily involved in the molecular computation and the computation process can be spatially isolated and protected by artificial bilayer membrane. We believe the strategy proposed in the current paper, i.e., using DNA RN to power artificial cells, will lay the groundwork for understanding the basic design principles of DNA algorithm-based nanodevices which will, in turn, inspire the construction of artificial cells, or protocells, that will find a place in future biomedical research.

  18. Computational Aerodynamic Simulations of an 840 ft/sec Tip Speed Advanced Ducted Propulsor Fan System Model for Acoustic Methods Assessment and Development

    NASA Technical Reports Server (NTRS)

    Tweedt, Daniel L.

    2014-01-01

    Computational aerodynamic simulations of an 840 ft/sec tip speed, Advanced Ducted Propulsor fan system were performed at five different operating points on the fan operating line, in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed and validated. The fan system is a sub-scale, low-noise research fan/nacelle model that has undergone extensive experimental testing in the 9- by 15-foot Low Speed Wind Tunnel at the NASA Glenn Research Center, resulting in high-quality, detailed aerodynamic and acoustic measurement data. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating conditions simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, excluding a long core duct section downstream of the core inlet guide vane. As a result, only fan rotational speed and system bypass ratio, set by specifying static pressure downstream of the core inlet guide vane row, were adjusted in order to set the fan operating point, leading to operating points that lie on a fan operating line and making mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measurement values. The computed blade row flow fields for all five fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the computed flow fields reveals no excessive boundary layer separations or related secondary-flow problems. A few spanwise comparisons between computational and measurement data in the bypass duct show that they are in good agreement, thus providing a partial validation of the computational results.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sattison, M.B.; Thatcher, T.A.; Knudsen, J.K.

    The US Nuclear Regulatory Commission (NRC) has been using full-power, Level 1, limited-scope risk models for the Accident Sequence Precursor (ASP) program for over fifteen years. These models have evolved and matured over the years, as have probabilistic risk assessment (PRA) and computer technologies. Significant upgrading activities have been undertaken over the past three years, with involvement from the Offices of Nuclear Reactor Regulation (NRR), Analysis and Evaluation of Operational Data (AEOD), and Nuclear Regulatory Research (RES), and several national laboratories. Part of these activities was an RES-sponsored feasibility study investigating the ability to extend the ASP models to include contributors to core damage from events initiated with the reactor at low power or shutdown (LP/SD), both internal events and external events. This paper presents only the LP/SD internal event modeling efforts.

  20. Initial Performance Results on IBM POWER6

    NASA Technical Reports Server (NTRS)

    Saini, Subbash; Talcott, Dale; Jespersen, Dennis; Djomehri, Jahed; Jin, Haoqiang; Mehrotra, Piysuh

    2008-01-01

    The POWER5+ processor has a faster memory bus than that of the previous generation POWER5 processor (533 MHz vs. 400 MHz), but the measured per-core memory bandwidth of the latter is better than that of the former (5.7 GB/s vs. 4.3 GB/s). The reason for this is that in the POWER5+, the two cores on the chip share the L2 cache, L3 cache and memory bus. The memory controller is also on the chip and is shared by the two cores. This serializes the path to memory. For consistently good performance on a wide range of applications, the performance of the processor, the memory subsystem, and the interconnects (both latency and bandwidth) should be balanced. Recognizing this, IBM has designed the Power6 processor so as to avoid the bottlenecks due to the L2 cache, memory controller and buffer chips of the POWER5+. Unlike the POWER5+, each core in the POWER6 has its own L2 cache (4 MB - double that of the Power5+), memory controller and buffer chips. Each core in the POWER6 runs at 4.7 GHz instead of 1.9 GHz in POWER5+. In this paper, we evaluate the performance of a dual-core Power6 based IBM p6-570 system, and we compare its performance with that of a dual-core Power5+ based IBM p575+ system. In this evaluation, we have used the High- Performance Computing Challenge (HPCC) benchmarks, NAS Parallel Benchmarks (NPB), and four real-world applications--three from computational fluid dynamics and one from climate modeling.

  1. CoreFlow: a computational platform for integration, analysis and modeling of complex biological data.

    PubMed

    Pasculescu, Adrian; Schoof, Erwin M; Creixell, Pau; Zheng, Yong; Olhovsky, Marina; Tian, Ruijun; So, Jonathan; Vanderlaan, Rachel D; Pawson, Tony; Linding, Rune; Colwill, Karen

    2014-04-04

    A major challenge in mass spectrometry and other large-scale applications is how to handle, integrate, and model the data that is produced. Given the speed at which technology advances and the need to keep pace with biological experiments, we designed a computational platform, CoreFlow, which provides programmers with a framework to manage data in real-time. It allows users to upload data into a relational database (MySQL), and to create custom scripts in high-level languages such as R, Python, or Perl for processing, correcting and modeling this data. CoreFlow organizes these scripts into project-specific pipelines, tracks interdependencies between related tasks, and enables the generation of summary reports as well as publication-quality images. As a result, the gap between experimental and computational components of a typical large-scale biology project is reduced, decreasing the time between data generation, analysis and manuscript writing. CoreFlow is being released to the scientific community as an open-sourced software package complete with proteomics-specific examples, which include corrections for incomplete isotopic labeling of peptides (SILAC) or arginine-to-proline conversion, and modeling of multiple/selected reaction monitoring (MRM/SRM) results. CoreFlow was purposely designed as an environment for programmers to rapidly perform data analysis. These analyses are assembled into project-specific workflows that are readily shared with biologists to guide the next stages of experimentation. Its simple yet powerful interface provides a structure where scripts can be written and tested virtually simultaneously to shorten the life cycle of code development for a particular task. The scripts are exposed at every step so that a user can quickly see the relationships between the data, the assumptions that have been made, and the manipulations that have been performed. Since the scripts use commonly available programming languages, they can easily be transferred to and from other computational environments for debugging or faster processing. This focus on 'on the fly' analysis sets CoreFlow apart from other workflow applications that require wrapping of scripts into particular formats and development of specific user interfaces. Importantly, current and future releases of data analysis scripts in CoreFlow format will be of widespread benefit to the proteomics community, not only for uptake and use in individual labs, but to enable full scrutiny of all analysis steps, thus increasing experimental reproducibility and decreasing errors. This article is part of a Special Issue entitled: Can Proteomics Fill the Gap Between Genomics and Phenotypes? Copyright © 2014 Elsevier B.V. All rights reserved.
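
    The organizing idea described above, analysis scripts registered as pipeline steps with tracked interdependencies, can be sketched generically as below. This is not CoreFlow's actual interface, just a minimal plain-Python illustration of the concept; the task names and toy correction are hypothetical.

    # Generic sketch of a dependency-tracked analysis pipeline (NOT CoreFlow's API).
    from collections import defaultdict

    class Pipeline:
        def __init__(self):
            self.tasks, self.deps = {}, defaultdict(list)

        def task(self, name, depends_on=()):
            """Register a processing step and the steps it depends on."""
            def register(func):
                self.tasks[name] = func
                self.deps[name] = list(depends_on)
                return func
            return register

        def run(self, name, cache=None):
            """Run a task after (recursively) running its dependencies."""
            cache = {} if cache is None else cache
            if name not in cache:
                inputs = [self.run(d, cache) for d in self.deps[name]]
                cache[name] = self.tasks[name](*inputs)
            return cache[name]

    pipeline = Pipeline()

    @pipeline.task("load")
    def load():
        return [1.0, 2.0, 4.0]                       # stand-in for a database query

    @pipeline.task("correct", depends_on=("load",))
    def correct(values):
        return [v / sum(values) for v in values]     # stand-in for e.g. a labeling correction

    print(pipeline.run("correct"))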

  2. A Computational Fluid Dynamic and Heat Transfer Model for Gaseous Core and Gas Cooled Space Power and Propulsion Reactors

    NASA Technical Reports Server (NTRS)

    Anghaie, S.; Chen, G.

    1996-01-01

    A computational model based on the axisymmetric, thin-layer Navier-Stokes equations is developed to predict the convective, radiation and conductive heat transfer in high temperature space nuclear reactors. An implicit-explicit, finite volume, MacCormack method in conjunction with the Gauss-Seidel line iteration procedure is utilized to solve the thermal and fluid governing equations. Simulation of coolant and propellant flows in these reactors involves the subsonic and supersonic flows of hydrogen, helium and uranium tetrafluoride under variable boundary conditions. An enthalpy-rebalancing scheme is developed and implemented to enhance and accelerate the rate of convergence when a wall heat flux boundary condition is used. The model also incorporates the Baldwin and Lomax two-layer algebraic turbulence scheme for the calculation of the turbulent kinetic energy and eddy diffusivity of energy. The Rosseland diffusion approximation is used to simulate the radiative energy transfer in the optically thick environment of gas core reactors. The computational model is benchmarked with experimental data on flow separation angle and drag force acting on a suspended sphere in a cylindrical tube. The heat transfer is validated by comparing the computed results with predictions from standard heat transfer correlations. The model is used to simulate flow and heat transfer under a variety of design conditions. The effect of internal heat generation on the heat transfer in the gas core reactors is examined for a variety of power densities, 100 W/cc, 500 W/cc and 1000 W/cc. The maximum temperatures corresponding to these heat generation rates are 2150 K, 2750 K and 3550 K, respectively. This analysis shows that the maximum temperature is strongly dependent on the value of the heat generation rate. It also indicates that a heat generation rate higher than 1000 W/cc is necessary to maintain the gas temperature at about 3500 K, which is the typical design temperature required to achieve high efficiency in the gas core reactors. The model is also used to predict the convective and radiation heat fluxes for the gas core reactors. The maximum value of heat flux occurs at the exit of the reactor core. Radiation heat flux increases with higher wall temperature. This behavior is due to the fact that the radiative heat flux is strongly dependent on wall temperature. This study also found that at temperatures close to 3500 K the radiative heat flux is comparable with the convective heat flux in a uranium fluoride fueled gas core reactor.
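
    For reference, the Rosseland diffusion approximation mentioned above reduces radiative transfer in an optically thick medium to a diffusion-like flux, q_r = -(16*sigma*T^3)/(3*kappa_R) * dT/dx. The sketch below evaluates this textbook form on a one-dimensional profile; the temperature profile and Rosseland mean absorption coefficient are hypothetical, not values from the reactor model.

    # Textbook Rosseland diffusion approximation for radiative heat flux in an
    # optically thick medium; profile and kappa_R below are hypothetical.
    import numpy as np

    SIGMA = 5.670374419e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

    def rosseland_flux(temperature, x, kappa_rosseland):
        """Radiative flux (W/m^2) on the grid x for a given T(x) profile."""
        dT_dx = np.gradient(temperature, x)
        return -16.0 * SIGMA * temperature**3 / (3.0 * kappa_rosseland) * dT_dx

    # Hypothetical 1-D profile: temperature dropping from 3500 K at the core
    # centerline toward a 2000 K wall, with kappa_R = 50 1/m.
    x = np.linspace(0.0, 0.5, 51)
    T = 3500.0 - 1500.0 * (x / x[-1])**2
    q = rosseland_flux(T, x, kappa_rosseland=50.0)
    print(f"radiative flux at the wall: {q[-1]:.3e} W/m^2")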

  3. Fabrication of Fe{sub 3}O{sub 4}@CuO core-shell from MOF based materials and its antibacterial activity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajabi, S.K.; Sohrabnezhad, Sh., E-mail: sohrabnezhad@guilan.ac.ir; Ghafourian, S.

    Magnetic Fe₃O₄@CuO nanocomposite with a core/shell structure was successfully synthesized via direct calcination of magnetic Fe₃O₄@HKUST-1 in an air atmosphere. The morphology, structure, magnetic and porous properties of the as-synthesized nanocomposites were characterized using scanning electron microscopy (SEM), transmission electron microscopy (TEM), powder X-ray diffraction (PXRD), and a vibrating sample magnetometer (VSM). The results showed that the nanocomposite material included a Fe₃O₄ core and a CuO shell. The Fe₃O₄@CuO core-shell can be separated easily from the medium by a small magnet. The antibacterial activity of the Fe₃O₄@CuO core-shell was investigated against gram-positive and gram-negative bacteria. A new mechanism was proposed for inactivation of bacteria over the prepared sample. It was demonstrated that the core-shell exhibits recyclable antibacterial activity, acting as an ideal long-acting antibacterial agent. - Graphical abstract: The Fe₃O₄@CuO core-shell releases copper ions; these Cu²⁺ ions are responsible for the exhibited antibacterial activity. - Highlights: • The Fe₃O₄@CuO core-shell was prepared by a MOF-based method. • This is the first study of the antibacterial activity of a core-shell consisting of CuO and Fe₃O₄. • The core-shell can be reused effectively. • The core-shell was separated from the reaction solution by an external magnetic field.

  4. An Empirical Study of User Experience on Touch Mice

    ERIC Educational Resources Information Center

    Chou, Jyh Rong

    2016-01-01

    The touch mouse is a new type of computer mouse that provides users with a new way of touch-based environment to interact with computers. For more than a decade, user experience (UX) has grown into a core concept of human-computer interaction (HCI), describing a user's perceptions and responses that result from the use of a product in a particular…

  5. Passively Targeted Curcumin-Loaded PEGylated PLGA Nanocapsules for Colon Cancer Therapy In Vivo

    PubMed Central

    Klippstein, Rebecca; Wang, Julie Tzu-Wen; El-Gogary, Riham I; Bai, Jie; Mustafa, Falisa; Rubio, Noelia; Bansal, Sukhvinder; Al-Jamal, Wafa T; Al-Jamal, Khuloud T

    2015-01-01

    Clinical applications of curcumin for the treatment of cancer and other chronic diseases have been mainly hindered by its short biological half-life and poor water solubility. Nanotechnology-based drug delivery systems have the potential to enhance the efficacy of poorly soluble drugs for systemic delivery. This study proposes the use of poly(lactic-co-glycolic acid) (PLGA)-based polymeric oil-cored nanocapsules (NCs) for curcumin loading and delivery to colon cancer in mice after systemic injection. Formulations of different oil compositions are prepared and characterized for their curcumin loading, physico-chemical properties, and shelf-life stability. The results indicate that castor oil-cored PLGA-based NC achieves high drug loading efficiency (≈18% w(drug)/w(polymer)%) compared to previously reported NCs. Curcumin-loaded NCs internalize more efficiently in CT26 cells than the free drug, and exert therapeutic activity in vitro, leading to apoptosis and blocking the cell cycle. In addition, the formulated NC exhibits an extended blood circulation profile compared to the non-PEGylated NC, and accumulates in the subcutaneous CT26-tumors in mice, after systemic administration. The results are confirmed by optical and single photon emission computed tomography/computed tomography (SPECT/CT) imaging. In vivo growth delay studies are performed, and significantly smaller tumor volumes are achieved compared to empty NC injected animals. This study shows the great potential of the formulated NC for treating colon cancer. PMID:26140363

  6. Summary of comparison and analysis of results from exercises 1 and 2 of the OECD PBMR coupled neutronics/thermal hydraulics transient benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mkhabela, P.; Han, J.; Tyobeka, B.

    2006-07-01

    The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has accepted, through the Nuclear Science Committee (NSC), the inclusion of the Pebble-Bed Modular Reactor 400 MW design (PBMR-400) coupled neutronics/thermal hydraulics transient benchmark problem as part of their official activities. The scope of the benchmark is to establish a well-defined problem, based on a common given library of cross sections, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events through a set of multi-dimensional computational test problems. The benchmark includes three steady state exercises and six transient exercises. This paper describes the first two steady state exercises, their objectives and the international participation in terms of organization, country and computer code utilized. This description is followed by a comparison and analysis of the participants' results submitted for these two exercises. The comparison of results from different codes allows for an assessment of the sensitivity of a result to the method employed and can thus help to focus the development efforts on the most critical areas. The two first exercises also allow for removing of user-related modeling errors and prepare core neutronics and thermal-hydraulics models of the different codes for the rest of the exercises in the benchmark. (authors)

  7. Programming for 1.6 Million cores: Early experiences with IBM's BG/Q SMP architecture

    NASA Astrophysics Data System (ADS)

    Glosli, James

    2013-03-01

    With the stall in clock cycle improvements a decade ago, the drive for computational performance has continued along a path of increasing core counts per processor. The multi-core evolution has been expressed both in symmetric multiprocessor (SMP) architectures and in CPU/GPU architectures. Debates rage in the high-performance computing (HPC) community over which architecture best serves HPC. In this talk I will not attempt to resolve that debate but perhaps fuel it. I will discuss the experience of exploiting Sequoia, a 98,304-node IBM Blue Gene/Q SMP at Lawrence Livermore National Laboratory. The advantages and challenges of leveraging the computational power of BG/Q will be detailed through the discussion of two applications. The first application is a Molecular Dynamics code called ddcMD, developed over the last decade at LLNL and ported to BG/Q. The second application is a cardiac modeling code called Cardioid, which was recently designed and developed at LLNL to exploit the fine-scale parallelism of BG/Q's SMP architecture. Through the lens of these efforts I will illustrate the need to rethink how we express and implement our computational approaches. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  8. The effect of basis set and exchange-correlation functional on time-dependent density functional theory calculations within the Tamm-Dancoff approximation of the x-ray emission spectroscopy of transition metal complexes.

    PubMed

    Roper, Ian P E; Besley, Nicholas A

    2016-03-21

    The simulation of X-ray emission spectra of transition metal complexes with time-dependent density functional theory (TDDFT) is investigated. X-ray emission spectra can be computed within TDDFT in conjunction with the Tamm-Dancoff approximation by using a reference determinant with a vacancy in the relevant core orbital, and these calculations can be performed using the frozen orbital approximation or with the relaxation of the orbitals of the intermediate core-ionised state included. Both standard exchange-correlation functionals and functionals specifically designed for X-ray emission spectroscopy are studied, and it is shown that the computed spectral band profiles are sensitive to the exchange-correlation functional used. The computed intensities of the spectral bands can be rationalised by considering the metal p orbital character of the valence molecular orbitals. To compute X-ray emission spectra with the correct energy scale allowing a direct comparison with experiment requires the relaxation of the core-ionised state to be included and the use of specifically designed functionals with increased amounts of Hartree-Fock exchange in conjunction with high quality basis sets. A range-corrected functional with increased Hartree-Fock exchange in the short range provides transition energies close to experiment and spectral band profiles that have a similar accuracy to those from standard functionals.

  9. Fault-Tolerant, Real-Time, Multi-Core Computer System

    NASA Technical Reports Server (NTRS)

    Gostelow, Kim P.

    2012-01-01

    A document discusses a fault-tolerant, self-aware, low-power, multi-core computer for space missions with thousands of simple cores, achieving speed through concurrency. The proposed machine decides how to achieve concurrency in real time, rather than depending on programmers. The driving features of the system are simple hardware that is modular in the extreme, with no shared memory, and software with significant runtime reorganizing capability. The document describes a mechanism for moving ongoing computations and data that is based on a functional model of execution. Because there is no shared memory, the processor connects to its neighbors through a high-speed data link. Messages are sent to a neighbor switch, which in turn forwards that message on to its neighbor until reaching the intended destination. Except for the neighbor connections, processors are isolated and independent of each other. The processors on the periphery also connect chip-to-chip, thus building up a large processor net. There is no particular topology to the larger net, as a function at each processor allows it to forward a message in the correct direction. Some chip-to-chip connections are not necessarily nearest neighbors, providing short cuts for some of the longer physical distances. The peripheral processors also provide the connections to sensors, actuators, radios, science instruments, and other devices with which the computer system interacts.

  10. In Silico Characterization of the Binding Affinity of Dendrimers to Penicillin-Binding Proteins (PBPs): Can PBPs be Potential Targets for Antibacterial Dendrimers?

    PubMed

    Ahmed, Shaimaa; Vepuri, Suresh B; Ramesh, Muthusamy; Kalhapure, Rahul; Suleman, Nadia; Govender, Thirumala

    2016-04-01

    We have shown that novel silver salts of poly (propyl ether) imine (PETIM) dendron and dendrimers developed in our group exhibit preferential antibacterial activity against methicillin-resistant Staphylococcus aureus (MRSA) and Staphylococcus aureus. This led us to examine whether molecular modeling methods could be used to identify the key structural design principles for a bioactive lead molecule, explore the mechanism of binding with biological targets, and explain their preferential antibacterial activity. The current article reports the conformational landscape as well as mechanism of binding of generation 1 PETIM dendron and dendrimers to penicillin-binding proteins (PBPs) in order to understand the antibacterial activity profiles of their silver salts. Molecular dynamics at different simulation protocols and conformational analysis were performed to elaborate on the conformational features of the studied dendrimers, as well as to create the initial structure for further binding studies. The results showed that for all compounds, there were no significant conformational changes due to variation in simulation conditions. Molecular docking calculations were performed to investigate the binding theme between the studied dendrimers and PBPs. Interestingly, in significant accordance with the experimental data, dendron and dendrimer with aliphatic cores were found to show higher activity against S. aureus than the dendrimer with an aromatic core. The latter showed higher activity against MRSA. The findings from this computational and molecular modeling report together with the experimental results serve as a road map toward designing more potent antibacterial dendrimers against resistant bacterial strains.

  11. Extended core for motor/generator

    DOEpatents

    Shoykhet, Boris A.

    2005-05-10

    An extended stator core in a motor/generator can be utilized to mitigate losses in end regions of the core and a frame of the motor/generator. To mitigate the losses, the stator core can be extended to a length substantially equivalent to or greater than a length of a magnetically active portion in the rotor. Alternatively, a conventional length stator core can be utilized with a shortened magnetically active portion to mitigate losses in the motor/generator. To mitigate the losses in the core caused by stator winding, the core can be extended to a length substantially equivalent or greater than a length of stator winding.

  12. Extended core for motor/generator

    DOEpatents

    Shoykhet, Boris A.

    2006-08-22

    An extended stator core in a motor/generator can be utilized to mitigate losses in end regions of the core and a frame of the motor/generator. To mitigate the losses, the stator core can be extended to a length substantially equivalent to or greater than a length of a magnetically active portion in the rotor. Alternatively, a conventional length stator core can be utilized with a shortened magnetically active portion to mitigate losses in the motor/generator. To mitigate the losses in the core caused by stator winding, the core can be extended to a length substantially equivalent or greater than a length of stator winding.

  13. The Core Avionics System for the DLR Compact-Satellite Series

    NASA Astrophysics Data System (ADS)

    Montenegro, S.; Dittrich, L.

    2008-08-01

    The Standard Satellite Bus's core avionics system is a further step in the development line of the software and hardware architecture first used in the bispectral infrared detector (BIRD) mission. The next step improves the dependability, flexibility and simplicity of the whole core avionics system. Important aspects of this concept were already implemented, simulated and tested in other ESA and industrial projects, so the basic concept is proven. This paper deals with different aspects of core avionics development and proposes an extension to the existing BIRD core avionics system to meet current and future requirements regarding the flexibility, availability and reliability of small satellites, as well as the continuously increasing demand for mass memory and computational power.

  14. RIACS/USRA

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph

    1993-01-01

    The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on 6 June 1983. RIACS is privately operated by USRA, a consortium of universities with research programs in the aerospace sciences, under contract with NASA. The primary mission of RIACS is to provide research and expertise in computer science and scientific computing to support the scientific missions of NASA ARC. The research carried out at RIACS must change its emphasis from year to year in response to NASA ARC's changing needs and technological opportunities. A flexible scientific staff is provided through a university faculty visitor program, a post doctoral program, and a student visitor program. Not only does this provide appropriate expertise but it also introduces scientists outside of NASA to NASA problems. A small group of core RIACS staff provides continuity and interacts with an ARC technical monitor and scientific advisory group to determine the RIACS mission. RIACS activities are reviewed and monitored by a USRA advisory council and ARC technical monitor. Research at RIACS is currently being done in the following areas: Parallel Computing, Advanced Methods for Scientific Computing, High Performance Networks and Technology, and Learning Systems. Parallel compiler techniques, adaptive numerical methods for flows in complicated geometries, and optimization were identified as important problems to investigate for ARC's involvement in the Computational Grand Challenges of the next decade.

  15. Using NCAR Yellowstone for PhotoVoltaic Power Forecasts with Artificial Neural Networks and an Analog Ensemble

    NASA Astrophysics Data System (ADS)

    Cervone, G.; Clemente-Harding, L.; Alessandrini, S.; Delle Monache, L.

    2016-12-01

    A methodology based on Artificial Neural Networks (ANN) and an Analog Ensemble (AnEn) is presented to generate 72-hour deterministic and probabilistic forecasts of power generated by photovoltaic (PV) power plants, using input from a numerical weather prediction model and computed astronomical variables. ANN and AnEn are used individually and in combination to generate forecasts for three solar power plants located in Italy. The computational scalability of the proposed solution is tested using synthetic data simulating 4,450 PV power stations. The NCAR Yellowstone supercomputer is employed to test the parallel implementation of the proposed solution, ranging from 1 node (32 cores) to 4,450 nodes (141,140 cores). Results show that a combined AnEn + ANN solution yields the best results, and that the proposed solution is well suited for massive scale computation.
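
    The analog ensemble idea can be sketched compactly: for a new forecast, find the k most similar past forecasts and take their verifying observations as the ensemble members. The sketch below is a simplified illustration on synthetic data; the real method also weights predictors and searches within a time window around each lead time.

    # Simplified analog ensemble (AnEn) sketch on synthetic data.
    import numpy as np

    def analog_ensemble(new_forecast, past_forecasts, past_observations, k=20):
        """Return the k observed values whose past forecasts best match new_forecast."""
        scale = past_forecasts.std(axis=0) + 1e-12           # standardize predictors
        dist = np.linalg.norm((past_forecasts - new_forecast) / scale, axis=1)
        best = np.argsort(dist)[:k]
        return past_observations[best]

    # Synthetic history: predictors = (irradiance, temperature), target = PV power.
    rng = np.random.default_rng(42)
    fcst = rng.uniform([0.0, 270.0], [1000.0, 310.0], size=(5000, 2))
    obs = 0.8 * fcst[:, 0] * (1 - 0.004 * (fcst[:, 1] - 298.0)) + rng.normal(0, 20, 5000)

    members = analog_ensemble(np.array([600.0, 300.0]), fcst, obs, k=20)
    print(f"ensemble mean {members.mean():.1f}, spread {members.std():.1f}")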

  16. Parallelized Seeded Region Growing Using CUDA

    PubMed Central

    Park, Seongjin; Lee, Hyunna; Seo, Jinwook; Lee, Kyoung Ho; Shin, Yeong-Gil; Kim, Bohyoung

    2014-01-01

    This paper presents a novel method for parallelizing the seeded region growing (SRG) algorithm using Compute Unified Device Architecture (CUDA) technology, with the intention of overcoming the theoretical weakness of the SRG algorithm, namely that its computation time is directly proportional to the size of the segmented region. The segmentation performance of the proposed CUDA-based SRG is compared with SRG implementations on single-core CPUs, quad-core CPUs, and shader language programming, using synthetic datasets and 20 body CT scans. Based on the experimental results, the CUDA-based SRG outperforms the other three implementations, advocating that it can substantially assist segmentation during massive CT screening tests. PMID:25309619
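
    For context, a serial reference version of seeded region growing, the algorithm being parallelized, can be sketched as follows; the CUDA version instead updates many boundary voxels concurrently. The image, seed and tolerance below are synthetic examples, and only the 2-D case is shown.

    # Serial reference sketch of seeded region growing: grow from the seed,
    # absorbing neighbors whose intensity is within a tolerance of the region mean.
    from collections import deque
    import numpy as np

    def seeded_region_growing(image, seed, tol=10.0):
        mask = np.zeros(image.shape, dtype=bool)
        mask[seed] = True
        region_sum, region_n = float(image[seed]), 1
        frontier = deque([seed])
        while frontier:
            r, c = frontier.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < image.shape[0] and 0 <= nc < image.shape[1] and not mask[nr, nc]:
                    if abs(image[nr, nc] - region_sum / region_n) <= tol:
                        mask[nr, nc] = True
                        region_sum += float(image[nr, nc])
                        region_n += 1
                        frontier.append((nr, nc))
        return mask

    # Tiny synthetic example: a bright square on a dark background.
    img = np.zeros((64, 64)) + 20.0
    img[20:40, 20:40] = 100.0
    segmented = seeded_region_growing(img, seed=(30, 30), tol=15.0)
    print(segmented.sum())   # the 400 pixels of the bright square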

  17. Recursive partitioned inversion of large (1500 x 1500) symmetric matrices

    NASA Technical Reports Server (NTRS)

    Putney, B. H.; Brownd, J. E.; Gomez, R. A.

    1976-01-01

    A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
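
    The partitioned-inversion principle can be illustrated with a recursive Schur-complement sketch: the matrix is split into blocks, the leading block is inverted (recursively), and the remaining blocks are combined through the Schur complement, so only block-sized pieces need to be handled at once. This is an illustration of the underlying mathematics, not the original SOLVE program's algorithm; the block size and test matrix are arbitrary.

    # Recursive Schur-complement sketch of partitioned inversion of an SPD matrix
    # (illustration only; not the original FORTRAN SOLVE algorithm).
    import numpy as np

    def partitioned_inverse(M, block=256):
        n = M.shape[0]
        if n <= block:
            return np.linalg.inv(M)
        k = n // 2
        A, B, D = M[:k, :k], M[:k, k:], M[k:, k:]
        Ainv = partitioned_inverse(A, block)          # recurse on the leading block
        S = D - B.T @ Ainv @ B                        # Schur complement
        Sinv = partitioned_inverse(S, block)
        top_right = -Ainv @ B @ Sinv
        top_left = Ainv + Ainv @ B @ Sinv @ B.T @ Ainv
        return np.block([[top_left, top_right], [top_right.T, Sinv]])

    # Check on a small random SPD system of normal equations.
    rng = np.random.default_rng(0)
    G = rng.standard_normal((800, 600))
    N = G.T @ G + 600 * np.eye(600)                   # well-conditioned SPD matrix
    err = np.linalg.norm(partitioned_inverse(N) @ N - np.eye(600))
    print(f"||N^-1 N - I|| = {err:.2e}")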

  18. Computational Analysis of a Pylon-Chevron Core Nozzle Interaction

    NASA Technical Reports Server (NTRS)

    Thomas, Russell H.; Kinzie, Kevin W.; Pao, S. Paul

    2001-01-01

    In typical engine installations, the pylon of an engine creates a flow disturbance that interacts with the engine exhaust flow. This interaction of the pylon with the exhaust flow from a dual stream nozzle was studied computationally. The dual stream nozzle simulates an engine with a bypass ratio of five. A total of five configurations were simulated, all at the take-off operating point. All computations were performed using the structured PAB3D code which solves the steady, compressible, Reynolds-averaged Navier-Stokes equations. These configurations included a core nozzle with eight chevron noise reduction devices built into the nozzle trailing edge. Baseline cases had no chevron devices and were run with a pylon and without a pylon. Cases with the chevron were also studied with and without the pylon. Another case was run with the chevron rotated relative to the pylon. The fan nozzle did not have chevron devices attached. Solutions showed that the effect of the pylon is to distort the round jet plume and to destroy the symmetrical lobed pattern created by the core chevrons. Several overall flow field quantities were calculated that might be used in extensions of this work to find flow field parameters that correlate with changes in noise.

  19. Microhydration of LiOH: Insight from electronic decays of core-ionized states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kryzhevoi, Nikolai V., E-mail: nikolai.kryzhevoi@pci.uni-heidelberg.de

    2016-06-28

    We compute and compare the autoionization spectra of a core-ionized LiOH molecule both in its isolated and microhydrated states. Stepwise microhydration of LiOH leads to gradual elongation of the Li–OH bond length and finally to molecular dissociation. The accompanying changes in the local environment of the OH⁻ and Li⁺ counterions are reflected in the computed O 1s and Li 1s spectra. The role of solvent water molecules and the counterion in the spectral shape formation is assessed. Electronic decays of the microhydrated LiOH are found to be mostly intermolecular, since the majority of the populated final states have at least one outer-valence vacancy outside the initially core-ionized ion, mainly on a neighboring water molecule. The charge delocalization occurs through the intermolecular Coulombic and electron transfer mediated decays. Both mechanisms are highly efficient, which is partly attributed to hybridization of molecular orbitals. The computed spectral shapes are sensitive to the counterion separation as well as to the number and arrangement of solvent molecules. These sensitivities can be used for studying the local hydration structure of solvated ions in aqueous solutions.

  20. Computational Models of Relational Processes in Cognitive Development

    ERIC Educational Resources Information Center

    Halford, Graeme S.; Andrews, Glenda; Wilson, William H.; Phillips, Steven

    2012-01-01

    Acquisition of relational knowledge is a core process in cognitive development. Relational knowledge is dynamic and flexible, entails structure-consistent mappings between representations, has properties of compositionality and systematicity, and depends on binding in working memory. We review three types of computational models relevant to…
