Science.gov

Sample records for computer oriented energy

  1. Computer Based Library Orientation.

    ERIC Educational Resources Information Center

    Machalow, Robert

    This document presents computer-based lessons used to teach basic library skills to college students at York College of the City University of New York. The information for library orientation has been entered on a disk which must be used in conjunction with a word processing program, the Applewriter IIe, and an Apple IIe microcomputer. The…

  2. SOLINS- SOLAR INSOLATION MODEL FOR COMPUTING AVAILABLE SOLAR ENERGY TO A SURFACE OF ARBITRARY ORIENTATION

    NASA Technical Reports Server (NTRS)

    Smith, J. H.

    1994-01-01

    This computer program, SOLINS, was developed to aid engineers and solar system designers in the accurate modeling of the average hourly solar insolation on a surface of arbitrary orientation. The program can be used to study insolation problems specific to residential and commercial applications where the amount of space available for solar collectors is limited by shadowing problems, energy output requirements, and costs. For tandem rack arrays, SOLINS will accommodate the use of augmentation reflectors built into the support structure to increase insolation values at the collector surface. As the use of flat plate solar collectors becomes more prevalent in the building industry, the engineer and designer must have the capability to conduct extensive sensitivity analyses on the orientation and location of solar collectors. SOLINS should prove to be a valuable aid in this area of engineering. SOLINS uses a modified version of the National Bureau of Standards model to calculate the direct, diffuse, and reflected components of total insolation on a tilted surface with a given azimuthal orientation. The model is based on the work of Liu and Jordan with corrections by Kusuda and Ishii to account for early morning and late afternoon errors. The model uses a parametric description of the average day solar climate to generate monthly average day profiles by hour of the insolation level on the collector surface. The model includes accommodation of user specified ground and landscape reflectivities at the collector site. For roof or ground mounted, tilted arrays, SOLINS will calculate insolation including the effects of shadowing and augmentation reflectors. The user provides SOLINS with data describing the array design, array orientation, the month, the solar climate parameter, the ground reflectance, and printout control specifications. 
For the specified array and environmental conditions, SOLINS outputs the hourly insolation the array will receive during an average day.
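The decomposition into direct, diffuse, and reflected components can be sketched in a few lines. Below is a minimal Python illustration of the classic Liu-Jordan isotropic-sky model; function and parameter names are illustrative, and the Kusuda-Ishii corrections, shadowing, and reflector augmentation that SOLINS implements are omitted.

```python
import math

def tilted_insolation(I_beam_h, I_diff_h, tilt_deg, rho,
                      cos_incidence, cos_zenith):
    """Total insolation on a tilted surface via the Liu-Jordan
    isotropic-sky decomposition (simplified sketch only).

    I_beam_h      -- beam (direct) irradiance on a horizontal surface
    I_diff_h      -- diffuse irradiance on a horizontal surface
    tilt_deg      -- collector tilt from horizontal, degrees
    rho           -- ground reflectance
    cos_incidence -- cosine of the sun's incidence angle on the tilted surface
    cos_zenith    -- cosine of the solar zenith angle
    """
    beta = math.radians(tilt_deg)
    R_b = cos_incidence / cos_zenith                      # beam tilt factor
    direct = I_beam_h * R_b
    diffuse = I_diff_h * (1 + math.cos(beta)) / 2         # isotropic sky view factor
    reflected = (I_beam_h + I_diff_h) * rho * (1 - math.cos(beta)) / 2
    return direct + diffuse + reflected

# A horizontal surface (tilt 0) with the sun overhead: geometry changes nothing.
print(tilted_insolation(800.0, 100.0, 0.0, 0.2, 1.0, 1.0))  # -> 900.0
```

Tilting the surface trades sky view (less diffuse) against beam geometry and ground reflection, which is exactly the sensitivity study such a tool supports.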

  3. Object-oriented numerical computing C++

    NASA Technical Reports Server (NTRS)

    Vanrosendale, John

    1994-01-01

An object-oriented language is one that allows users to create a set of related types and then intermix and manipulate values of these types. This paper discusses object-oriented numerical computing using C++.

  4. "Smart Computing"--Orienting Your Students.

    ERIC Educational Resources Information Center

    Millis, Paul J.

    This paper discusses how to present new college students with their initial exposure to policy, security, and ethical computing issues. The Office of Policy Development and Education participates in summer orientation to introduce students to proper use of information technology resources at the University of Michigan. This presentation is known…

  5. Object-oriented Tools for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.

    1993-01-01

    Distributed computing systems are proliferating, owing to the availability of powerful, affordable microcomputers and inexpensive communication networks. A critical problem in developing such systems is getting application programs to interact with one another across a computer network. Remote interprogram connectivity is particularly challenging across heterogeneous environments, where applications run on different kinds of computers and operating systems. NetWorks! (trademark) is an innovative software product that provides an object-oriented messaging solution to these problems. This paper describes the design and functionality of NetWorks! and illustrates how it is being used to build complex distributed applications for NASA and in the commercial sector.

  6. A Computer Oriented Problem Solving Unit, Consume. Teacher Guide. Computer Technology Program Environmental Education Units.

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR.

    This is the teacher's guide to accompany the student guide which together comprise one of five computer-oriented environmental/energy education units. This unit explores U.S. energy consumption; is applicable to Mathematics, Social Studies, and Ecology or Science Studies with Mathematics background; and is intended for use in grades 9 through 14.…

  7. Cloudbus Toolkit for Market-Oriented Cloud Computing

    NASA Astrophysics Data System (ADS)

    Buyya, Rajkumar; Pandey, Suraj; Vecchiola, Christian

    This keynote paper: (1) presents the 21st century vision of computing and identifies various IT paradigms promising to deliver computing as a utility; (2) defines the architecture for creating market-oriented Clouds and computing atmosphere by leveraging technologies such as virtual machines; (3) provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (4) presents the work carried out as part of our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform as a Service software system containing SDK (Software Development Kit) for construction of Cloud applications and deployment on private or public Clouds, in addition to supporting market-oriented resource management; (ii) internetworking of Clouds for dynamic creation of federated computing environments for scaling of elastic applications; (iii) creation of 3rd party Cloud brokering services for building content delivery networks and e-Science applications and their deployment on capabilities of IaaS providers such as Amazon along with Grid mashups; (iv) CloudSim supporting modelling and simulation of Clouds for performance studies; (v) Energy Efficient Resource Allocation Mechanisms and Techniques for creation and management of Green Clouds; and (vi) pathways for future research.

  8. Development of site-oriented Analytics for Grid computing centres

    NASA Astrophysics Data System (ADS)

    Washbrook, A.; Crooks, D.; Roy, G.; Skipsey, S.; Qin, G.; Stewart, G. P.; Britton, D.

    2015-12-01

The field of analytics, the process of analysing data to visualise meaningful patterns and trends, has become increasingly important in scientific computing as the volume and variety of data available to process have significantly increased. There is now ongoing work in the High Energy Physics (HEP) community in this area, for example in the augmentation of systems management at WLCG computing sites. We report on work evaluating the feasibility of distributed site-oriented analytics using the Elasticsearch, Logstash and Kibana software stack, and demonstrate functionality through the application of two workflows that give greater insight into site operations.

  9. Object Orientated Methods in Computational Fluid Dynamics.

    NASA Astrophysics Data System (ADS)

    Tabor, Gavin; Weller, Henry; Jasak, Hrvoje; Fureby, Christer

    1997-11-01

We outline the aims of the FOAM code, a Finite Volume Computational Fluid Dynamics code written in C++, and discuss the use of Object Orientated Programming (OOP) methods to achieve these aims. The intention when writing this code was to make it as easy as possible to alter the modelling: this was achieved by making the top level syntax of the code as close as possible to conventional mathematical notation for tensors and partial differential equations. Object orientation enables us to define classes for both types of objects, and the operator overloading possible in C++ allows normal symbols to be used for the basic operations. The introduction of features such as automatic dimension checking of equations helps to enforce correct coding of models. We also discuss the use of OOP techniques such as data encapsulation and code reuse. As examples of the flexibility of this approach, we discuss the implementation of turbulence modelling using RAS and LES. The code is used to simulate turbulent flow for a number of test cases, including fully developed channel flow and flow around obstacles. We also demonstrate the use of the code for solving structures calculations and magnetohydrodynamics.
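FOAM itself is written in C++; the following Python sketch only illustrates the underlying idea of operator overloading with automatic dimension checking, using a hypothetical class rather than FOAM's actual tensor classes.

```python
class Field:
    """A toy scalar field carrying physical dimensions, illustrating the
    operator-overloading idea behind FOAM's classes (a Python analogue,
    not FOAM's actual C++ API)."""
    def __init__(self, values, dims):
        self.values = list(values)
        self.dims = dims  # e.g. {'m': 1, 's': -2}

    def __add__(self, other):
        # Automatic dimension checking: adding unlike quantities is an error.
        if self.dims != other.dims:
            raise ValueError(f"dimension mismatch: {self.dims} vs {other.dims}")
        return Field([a + b for a, b in zip(self.values, other.values)], self.dims)

    def __mul__(self, other):
        # Multiplication adds the exponents of each unit.
        dims = dict(self.dims)
        for unit, power in other.dims.items():
            dims[unit] = dims.get(unit, 0) + power
        return Field([a * b for a, b in zip(self.values, other.values)], dims)

u = Field([1.0, 2.0], {'m': 1, 's': -1})   # a velocity-like field
v = Field([3.0, 4.0], {'m': 1, 's': -1})
print((u + v).values)   # -> [4.0, 6.0]
print((u * v).dims)     # -> {'m': 2, 's': -2}
try:
    u + Field([1.0, 1.0], {'kg': 1})       # dimensionally inconsistent
except ValueError as e:
    print("caught:", e)
```

Overloading `+` and `*` is what lets the top-level code read like the mathematics, while the dimension check rejects incorrectly coded equations at run time.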

  10. Computational Modeling of Magnetically Actuated Propellant Orientation

    NASA Technical Reports Server (NTRS)

    Hochstein, John I.

    1996-01-01

sufficient performance to support cryogenic propellant management tasks. In late 1992, NASA MSFC began a new investigation in this technology commencing with the design of the Magnetically-Actuated Propellant Orientation (MAPO) experiment. A mixture of ferrofluid and water is used to simulate the paramagnetic properties of LOX and the experiment is being flown on the KC-135 aircraft to provide a reduced gravity environment. The influence of a 0.4 Tesla ring magnet on flow into and out of a subscale Plexiglas tank is being recorded on video tape. The most efficient approach to evaluating the feasibility of MAPO is to complement the experimental program with development of a computational tool to model the process of interest. The goal of the present research is to develop such a tool. Once confidence in its fidelity is established by comparison to data from the MAPO experiment, it can be used to assist in the design of future experiments and to study the parameter space of the process. Ultimately, it is hoped that the computational model can serve as a design tool for full-scale spacecraft applications.

  11. Calculus: A Computer Oriented Presentation, Part 1 [and] Part 2.

    ERIC Educational Resources Information Center

    Stenberg, Warren; Walker, Robert J.

    Parts one and two of a one-year computer-oriented calculus course (without analytic geometry) are presented. The ideas of calculus are introduced and motivated through computer (i.e., algorithmic) concepts. An introduction to computing via algorithms and a simple flow chart language allows the book to be self-contained, except that material on…

  12. Terminal-oriented computer-communication networks.

    NASA Technical Reports Server (NTRS)

    Schwartz, M.; Boorstyn, R. R.; Pickholtz, R. L.

    1972-01-01

Four examples of currently operating computer-communication networks are described in this tutorial paper. They include the TYMNET network, the GE Information Services network, the NASDAQ over-the-counter stock-quotation system, and the Computer Sciences Infonet. These networks all use programmable concentrators for combining a multiplicity of terminals. Included in the discussion for each network is a description of the overall network structure, the handling and transmission of messages, communication requirements, routing and reliability considerations where applicable, operating data and design specifications where available, and unique design features in the area of computer communications.

  13. Computer Oriented Exercises on Attitudes and U.S. Gasoline Consumption, Attitude. Student Guide. Computer Technology Program Environmental Education Units.

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR.

    This is the student guide in a set of five computer-oriented environmental/energy education units. Contents of this guide present: (1) the three gasoline consumption-reducing options for which attitudes are to be explored; (2) exercises; and (3) appendices including an energy attitudes survey. (MR)

  14. Computational modeling of magnetically actuated propellant orientation

    NASA Technical Reports Server (NTRS)

    Hochstein, John I.

    1996-01-01

Unlike terrestrial applications where gravity positions liquid at the 'bottom' of the tank, the location of liquid propellant in spacecraft tanks is uncertain unless specific actions are taken or special features are built into the tank. Some mission events require knowledge of liquid position prior to a particular action: liquid must be positioned over the tank outlet prior to starting the main engines and must be moved away from the tank vent before vapor can be released overboard to reduce pressure. It may also be desirable to positively position liquid to improve propulsion system performance: moving liquid away from the tank walls will dramatically decrease the rate of heat transfer to the propellant, suppressing the boil-off rate, thereby reducing overall mission propellant requirements. The process of moving propellant to a desired position is referred to as propellant orientation or reorientation. Several techniques have been developed to positively position propellant in spacecraft tanks and each technique imposes additional requirements on vehicle design. Propulsive reorientation relies on small auxiliary thrusters to accelerate the tank. The inertia of the liquid causes it to collect in the aft-end of the tank if the acceleration is forward. This technique requires that additional thrusters be added to the vehicle, that additional propellant be carried in the vehicle, and that an additional operational maneuver be executed. Another technique uses Liquid Acquisition Devices (LADs) to positively position propellants. These devices rely on surface tension to hold the liquid within special geometries (i.e. vanes, wire-mesh channels, start-baskets). While avoiding some of the penalties of propulsive orientation, this technique requires the addition of complicated hardware inside the propellant tank and performance for long duration missions is uncertain. The subject of the present research is an alternate technique for positively positioning liquid within

  15. Steering object-oriented computations with Python

    SciTech Connect

    Yang, T.-Y.B.; Dubois, P.F.; Furnish, G.; Beazley, D.M.

    1996-10-01

We describe current approaches and future plans for steering C++ applications, for running Python on parallel platforms, and for combining a Tk interface with the Python interpreter to steer computations. In addition, the Gist module has been significantly enhanced, and Tk mega-widgets have been implemented for a few physics applications. We have also written a Python interface to SIJLO, a data storage package used as an interface to a visualization system named MeshTV. Python is also being used to control large-scale simulations (molecular dynamics in particular) running on the CM-5 and T3D at LANL. A few other code development projects at LLNL are either using or considering Python as their steering shells. In summary, the merits of Python have been appreciated by more and more people in the scientific computation community.

  16. Simulating complex intracellular processes using object-oriented computational modelling.

    PubMed

    Johnson, Colin G; Goldman, Jacki P; Gullick, William J

    2004-11-01

    The aim of this paper is to give an overview of computer modelling and simulation in cellular biology, in particular as applied to complex biochemical processes within the cell. This is illustrated by the use of the techniques of object-oriented modelling, where the computer is used to construct abstractions of objects in the domain being modelled, and these objects then interact within the computer to simulate the system and allow emergent properties to be observed. The paper also discusses the role of computer simulation in understanding complexity in biological systems, and the kinds of information which can be obtained about biology via simulation. PMID:15302205
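The object-oriented modelling style described here, in which objects stand for entities in the domain and interact so that emergent properties can be observed, can be illustrated with a deliberately minimal sketch; the classes and binding probability below are hypothetical and not taken from the paper.

```python
import random

class Ligand:
    """A signalling molecule, modelled as an object (illustrative sketch
    of the object-oriented style; not a class from the paper)."""
    pass

class Receptor:
    """A cell-surface receptor that may bind one ligand."""
    def __init__(self):
        self.bound = None

    def try_bind(self, ligand, p_bind, rng):
        # Stochastic binding: each encounter succeeds with probability p_bind.
        if self.bound is None and rng.random() < p_bind:
            self.bound = ligand
            return True
        return False

rng = random.Random(0)                       # seeded for reproducibility
receptors = [Receptor() for _ in range(100)]
ligands = [Ligand() for _ in range(100)]
for r, l in zip(receptors, ligands):
    r.try_bind(l, 0.3, rng)
occupied = sum(r.bound is not None for r in receptors)
print(f"{occupied} of 100 receptors bound")  # roughly 30 expected
```

Running many such interacting objects, rather than integrating a single rate equation, is what lets population-level behaviour emerge from individual encounters.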

  17. Oriented Nanostructures for Energy Conversion and Storage

    SciTech Connect

    Liu, Jun; Cao, Guozhong H.; Yang, Zhenguo; Wang, Donghai; DuBois, Daniel L.; Zhou, Xiao Dong; Graff, Gordon L.; Pederson, Larry R.; Zhang, Jiguang

    2008-08-28

Recently, the role of nanostructured materials in addressing the challenges in energy and natural resources has attracted wide attention. In particular, oriented nanostructures have demonstrated promising properties for energy harvesting, conversion, and storage. The purpose of the paper is to review the synthesis and application of oriented nanostructures in a few key areas of energy technologies, namely photovoltaics, batteries, supercapacitors, and thermoelectrics. Although the applications differ from field to field, one of the fundamental challenges is to improve the generation and transport of electrons and ions. We first briefly review the major approaches to attaining oriented nanostructured films that are applicable for energy applications. We then discuss how such controlled nanostructures can be used in photovoltaics, batteries, capacitors, thermoelectrics, and other unconventional forms of energy conversion. We highlight the role of high surface area in maximizing surface activity, and the importance of optimum dimension and architecture, controlled pore channels, and alignment of the nanocrystalline phase in optimizing electron and ion transport. Finally, the paper discusses the challenges in attaining integrated architectures to achieve the desired performance. Brief background information is provided for the relevant technologies, but the emphasis is mainly on the nanoscale effects of mostly inorganic materials and devices.

  18. Generic, Type-Safe and Object Oriented Computer Algebra Software

    NASA Astrophysics Data System (ADS)

    Kredel, Heinz; Jolly, Raphael

Advances in computer science, in particular object-oriented programming and software engineering, have had little practical impact on computer algebra systems in the last 30 years. The software design of existing systems is still dominated by ad-hoc memory management, weakly typed algorithm libraries, and proprietary domain-specific interactive expression interpreters. We discuss a modular approach to computer algebra software: usage of state-of-the-art memory management and run-time systems (e.g. the JVM); usage of strongly typed, generic, object-oriented programming languages (e.g. Java); and usage of general-purpose, dynamic interactive expression interpreters (e.g. Python). To illustrate the workability of this approach, we have implemented and studied computer algebra systems in Java and Scala. In this paper we report on the current state of this work by presenting new examples.
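The authors work in Java and Scala; as a rough illustration of the strongly typed, generic style they advocate, here is a minimal polynomial type over the rationals in Python. The design and names are illustrative, not the authors' API.

```python
from __future__ import annotations
from dataclasses import dataclass
from fractions import Fraction

@dataclass(frozen=True)
class Poly:
    """A univariate polynomial over the rationals, sketching a typed,
    generic computer-algebra building block (illustrative only)."""
    coeffs: tuple  # coeffs[i] is the coefficient of x**i

    def __add__(self, other: Poly) -> Poly:
        n = max(len(self.coeffs), len(other.coeffs))
        a = self.coeffs + (Fraction(0),) * (n - len(self.coeffs))
        b = other.coeffs + (Fraction(0),) * (n - len(other.coeffs))
        return Poly(tuple(x + y for x, y in zip(a, b)))

    def __mul__(self, other: Poly) -> Poly:
        # Schoolbook convolution of coefficient sequences.
        out = [Fraction(0)] * (len(self.coeffs) + len(other.coeffs) - 1)
        for i, x in enumerate(self.coeffs):
            for j, y in enumerate(other.coeffs):
                out[i + j] += x * y
        return Poly(tuple(out))

# (1 + x) * (1 + x) = 1 + 2x + x^2, with exact rational arithmetic.
p = Poly((Fraction(1), Fraction(1)))
print((p * p).coeffs)   # -> (Fraction(1, 1), Fraction(2, 1), Fraction(1, 1))
```

Exact `Fraction` coefficients play the role of the strongly typed coefficient rings that such systems parameterize over.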

  19. Computer Oriented Exercises on Attitudes and U.S. Gasoline Consumption, Attitude. Teacher Guide. Computer Technology Program Environmental Education Units.

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR.

    This is the teacher's guide to accompany the student guide which together comprise one of five computer-oriented environmental/energy education units. This unit is concerned with the attitude of people toward gasoline shortages and different steps the government could take to reduce gasoline consumption. Through the exercises, part of which make…

  20. A Computer Simulation of the U.S. Energy Crisis, Energy. Teacher Guide. Computer Technology Program Environmental Education Units.

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR.

    This is the teacher's guide to accompany the student guide which together comprise one of five computer-oriented environmental/energy education units. The computer program, ENERGY, at the base of this unit, simulates the pattern of energy consumption in the United States. The total energy demand is determined by energy use in the various sectors…

  1. An Object-Oriented Approach to Writing Computational Electromagnetics Codes

    NASA Technical Reports Server (NTRS)

    Zimmerman, Martin; Mallasch, Paul G.

    1996-01-01

    Presently, most computer software development in the Computational Electromagnetics (CEM) community employs the structured programming paradigm, particularly using the Fortran language. Other segments of the software community began switching to an Object-Oriented Programming (OOP) paradigm in recent years to help ease design and development of highly complex codes. This paper examines design of a time-domain numerical analysis CEM code using the OOP paradigm, comparing OOP code and structured programming code in terms of software maintenance, portability, flexibility, and speed.

  2. Strategy Generalization across Orientation Tasks: Testing a Computational Cognitive Model

    ERIC Educational Resources Information Center

    Gunzelmann, Glenn

    2008-01-01

    Humans use their spatial information processing abilities flexibly to facilitate problem solving and decision making in a variety of tasks. This article explores the question of whether a general strategy can be adapted for performing two different spatial orientation tasks by testing the predictions of a computational cognitive model. Human…

  3. Computing 3D head orientation from a monocular image sequence

    NASA Astrophysics Data System (ADS)

    Horprasert, Thanarat; Yacoob, Yaser; Davis, Larry S.

    1997-02-01

An approach for estimating 3D head orientation in a monocular image sequence is proposed. The approach employs recently developed image-based parameterized tracking for the face and face features to locate the area in which a sub-pixel parameterized shape estimation of the eye's boundary is performed. This involves tracking five points (four at the eye corners and a fifth at the tip of the nose). We describe an approach that relies on the coarse structure of the face to compute orientation relative to the camera plane. Our approach employs the projective invariance of the cross-ratios of the eye corners and anthropometric statistics to estimate the head yaw, roll, and pitch. Analytical and experimental results are reported.
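The projective invariance the approach exploits is easy to demonstrate: the cross-ratio of four collinear points is unchanged by a projective (Möbius) transformation. The sketch below is illustrative only and is not the paper's implementation; the transformation stands in for the camera projection of the eye corners.

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points given by scalar coordinates
    along the line: (AC * BD) / (BC * AD)."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

def project(x, m=2.0, n=3.0, p=1.0, q=5.0):
    # A 1D projective (Moebius) transformation x -> (m*x + n) / (p*x + q),
    # a simplified stand-in for perspective projection (coefficients arbitrary).
    return (m * x + n) / (p * x + q)

pts = [0.0, 1.0, 2.0, 4.0]
before = cross_ratio(*pts)
after = cross_ratio(*[project(x) for x in pts])
print(round(before, 6), round(after, 6))   # -> 1.5 1.5
```

Because the cross-ratio survives projection, measuring it in the image constrains the 3D configuration of the tracked eye-corner points.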

  4. Reviews of computing technology: Object-oriented technology

    SciTech Connect

    Skeen, D.C.

    1993-03-01

A useful metaphor in introducing object-oriented concepts is the idea of a computer hardware manufacturer assembling products from an existing stock of electronic parts. In this analogy, think of the parts as pieces of computer software and of the finished products as computer applications. Like its counterpart, the object is capable of performing its specific function in a wide variety of different applications. The advantages to assembling hardware using a set of prebuilt parts are obvious. The design process is greatly simplified in this scenario, since the designer needs only to carry the design down to the chip level, rather than to the transistor level. As a result, the designer is free to develop a more reliable and feature-rich product. Also, since the component parts are reused in several different products, the parts can be made more robust and subjected to more rigorous testing than would be economically feasible for a part used in only one piece of equipment. Additionally, maintenance on the resulting systems is simplified because of the part-level consistency from one type of equipment to another. The remainder of this document introduces the techniques used to develop objects, the benefits of the technology, outstanding issues that remain with the technology, industry direction for the technology, and the impact that object-oriented technology is likely to have on the organization. While going through this material, the reader will find it useful to remember the parts analogy and to keep in mind that the overall purpose of object-oriented technology is to create software parts and to construct applications using those parts.

  5. APPLICATION OF OBJECT ORIENTED PROGRAMMING TECHNIQUES IN FRONT END COMPUTERS.

    SciTech Connect

    SKELLY,J.F.

    1997-11-03

    The Front End Computer (FEC) environment imposes special demands on software, beyond real time performance and robustness. FEC software must manage a diverse inventory of devices with individualistic timing requirements and hardware interfaces. It must implement network services which export device access to the control system at large, interpreting a uniform network communications protocol into the specific control requirements of the individual devices. Object oriented languages provide programming techniques which neatly address these challenges, and also offer benefits in terms of maintainability and flexibility. Applications are discussed which exhibit the use of inheritance, multiple inheritance and inheritance trees, and polymorphism to address the needs of FEC software.
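The inheritance and polymorphism pattern described here can be sketched generically: device-specific behaviour hidden behind a uniform base-class interface, so the network service layer can treat every device alike. The classes below are hypothetical illustrations, not the FEC code.

```python
from abc import ABC, abstractmethod

class Device(ABC):
    """Base class for front-end devices; subclasses supply device-specific
    behaviour behind a uniform interface (illustrative sketch only)."""
    @abstractmethod
    def read_setpoint(self) -> float: ...

class PowerSupply(Device):
    def __init__(self, volts):
        self.volts = volts
    def read_setpoint(self):
        return self.volts

class Thermometer(Device):
    def __init__(self, celsius):
        self.celsius = celsius
    def read_setpoint(self):
        return self.celsius

# The service layer handles every device through the base interface,
# mirroring how one uniform network protocol is interpreted per device.
devices = [PowerSupply(12.0), Thermometer(21.5)]
print([d.read_setpoint() for d in devices])   # -> [12.0, 21.5]
```

Adding a new device type then means adding a subclass, not touching the protocol-handling code, which is the maintainability benefit the paper cites.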

  6. E-Governance and Service Oriented Computing Architecture Model

    NASA Astrophysics Data System (ADS)

    Tejasvee, Sanjay; Sarangdevot, S. S.

    2010-11-01

E-Governance is the effective application of information and communication technology (ICT) in government processes to accomplish safe and reliable information lifecycle management. The information lifecycle involves various processes: capturing, preserving, manipulating, and delivering information. E-Governance is meant to transform governance so that, from the citizen's point of view, it is transparent, reliable, participatory, and accountable. The purpose of this paper is to propose an e-governance model focused on a Service Oriented Computing Architecture (SOCA) that combines the information and services provided by the government, supports innovation, identifies optimal ways of delivering services to citizens, and enables transparent and accountable implementation. The paper also focuses on the E-government Service Manager as a key factor in a service-oriented computing model that provides a dynamically extensible architecture in which every department or branch can introduce innovative services. At the heart of the paper is a conceptual model that enables e-government communication among businesses, citizens, government, and autonomous bodies.

  7. Strategy generalization across orientation tasks: testing a computational cognitive model.

    PubMed

    Gunzelmann, Glenn

    2008-07-01

    Humans use their spatial information processing abilities flexibly to facilitate problem solving and decision making in a variety of tasks. This article explores the question of whether a general strategy can be adapted for performing two different spatial orientation tasks by testing the predictions of a computational cognitive model. Human performance was measured on an orientation task requiring participants to identify the location of a target either on a map (find-on-map) or within an egocentric view of a space (find-in-scene). A general strategy instantiated in a computational cognitive model of the find-on-map task, based on the results from Gunzelmann and Anderson (2006), was adapted to perform both tasks and used to generate performance predictions for a new study. The qualitative fit of the model to the human data supports the view that participants were able to tailor a general strategy to the requirements of particular spatial tasks. The quantitative differences between the predictions of the model and the performance of human participants in the new experiment expose individual differences in sample populations. The model provides a means of accounting for those differences and a framework for understanding how human spatial abilities are applied to naturalistic spatial tasks that involve reasoning with maps. PMID:21635355

  8. Computer Programming Games and Gender Oriented Cultural Forms

    NASA Astrophysics Data System (ADS)

    AlSulaiman, Sarah Abdulmalik

I present the design and evaluation of two games designed to help elementary and middle school students learn computer programming concepts. The first game was designed to be "gender neutral", aligning with what might be described as a consensus opinion on best practices for computational learning environments. The second game, based on the cultural form of dress-up dolls, was deliberately designed to appeal to females. I recruited 70 participants in an international two-phase study to investigate the relationship between games, gender, attitudes towards computer programming, and learning. My findings suggest that while the two games were equally effective in terms of learning outcomes, I saw differences in motivation between players of the two games. Specifically, participants who reported a preference for female-oriented games were more motivated to learn about computer programming when they played a game that they perceived as designed for females. In addition, I describe how the two games seemed to encourage different types of social activity between players in a classroom setting. Based on these results, I reflect on the strategy of exclusively designing games and activities as "gender neutral", and suggest that employing cultural forms, including gendered ones, may help create a more productive experience for learners.

  9. A Riemannian framework for orientation distribution function computing.

    PubMed

    Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid

    2009-01-01

Compared with Diffusion Tensor Imaging (DTI), High Angular Resolution Diffusion Imaging (HARDI) can better explore the complex microstructure of white matter. The Orientation Distribution Function (ODF) is used to describe the probability of the fiber direction. The Fisher information metric has been constructed for probability density families in Information Geometry and has been successfully applied to tensor computing in DTI. In this paper, we present a state-of-the-art Riemannian framework for ODF computing based on Information Geometry and sparse representation of orthonormal bases. In this Riemannian framework, the exponential map, logarithmic map, and geodesic have closed forms, and the weighted Fréchet mean exists uniquely on this manifold. We also propose a novel scalar measurement, named Geometric Anisotropy (GA), which is the Riemannian geodesic distance between the ODF and the isotropic ODF. The Rényi entropy H1/2 of the ODF can be computed from the GA. Moreover, we present an Affine-Euclidean framework and a Log-Euclidean framework so that we can work in a Euclidean space. As an application, Lagrange interpolation on an ODF field is proposed based on the weighted Fréchet mean. We validate our methods on synthetic and real data experiments. Compared with existing Riemannian frameworks on ODFs, our framework is model-free. The estimation of the parameters, i.e. the Riemannian coordinates, is robust and linear. Moreover, it should be noted that our theoretical results can be used for any probability density function (PDF) under an orthonormal basis representation. PMID:20426075
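A simplified sketch of the square-root idea behind such frameworks: if an ODF is discretized into bins summing to one, its square root lies on the unit sphere, the geodesic distance is the arccosine of an inner product, and GA is the distance to the isotropic ODF. This replaces the paper's orthonormal-basis representation with plain bins and is illustrative only.

```python
import math

def geodesic(p, q):
    """Riemannian geodesic distance between two discretized ODFs under the
    square-root representation: sqrt(p) and sqrt(q) are unit vectors, so
    the distance is the angle between them (simplified sketch)."""
    dot = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    return math.acos(min(1.0, dot))   # clamp against float round-off

def geometric_anisotropy(p):
    # GA: geodesic distance from the ODF to the isotropic (uniform) ODF.
    n = len(p)
    iso = [1.0 / n] * n
    return geodesic(p, iso)

iso = [0.25, 0.25, 0.25, 0.25]
peaked = [0.85, 0.05, 0.05, 0.05]
print(geometric_anisotropy(iso))            # -> 0.0 (isotropic ODF has zero GA)
print(geometric_anisotropy(peaked) > 0.0)   # -> True (sharper ODF, larger GA)
```

The same arccosine-of-inner-product structure is what gives the exponential and logarithmic maps their closed forms in the full framework.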

  10. An object-oriented approach to energy-economic modeling

    SciTech Connect

    Wise, M.A.; Fox, J.A.; Sands, R.D.

    1993-12-01

In this paper, the authors discuss their experiences in creating an object-oriented economic model of the U.S. energy and agriculture markets. After a discussion of some central concepts, they provide an overview of the model, focusing on the methodology of designing an object-oriented class hierarchy specification based on standard microeconomic production functions. The evolution of the model from the class definition stage to programming it in C++, a standard object-oriented programming language, is then detailed. The authors then discuss the main differences between writing the object-oriented program and a procedure-oriented program of the same model. Finally, they conclude with a discussion of the advantages and limitations of the object-oriented approach, based on their experience building energy-economic models with procedure-oriented approaches and languages.
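The design of a class hierarchy keyed to standard production functions can be sketched as follows; the class and parameter names are illustrative (the authors' model is written in C++ and is far richer).

```python
class ProductionFunction:
    """Base class in a hierarchy built on standard microeconomic
    production functions (illustrative sketch, not the authors' code)."""
    def output(self, inputs):
        raise NotImplementedError

class CobbDouglas(ProductionFunction):
    def __init__(self, scale, exponents):
        self.scale = scale
        self.exponents = exponents   # one exponent per input factor

    def output(self, inputs):
        y = self.scale
        for x, a in zip(inputs, self.exponents):
            y *= x ** a
        return y

class Leontief(ProductionFunction):
    def __init__(self, coefficients):
        self.coefficients = coefficients   # fixed input requirements per unit

    def output(self, inputs):
        # Output is limited by the scarcest input relative to its requirement.
        return min(x / c for x, c in zip(inputs, self.coefficients))

# A sector can hold any ProductionFunction polymorphically.
sectors = [CobbDouglas(1.0, [0.5, 0.5]), Leontief([2.0, 1.0])]
print([round(s.output([4.0, 9.0]), 3) for s in sectors])   # -> [6.0, 2.0]
```

Swapping a sector's functional form then requires changing one object, not the solution procedure, which is the flexibility the object-oriented design is after.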

  11. Magnetic fusion energy and computers

    SciTech Connect

    Killeen, J.

    1982-01-01

    The application of computers to magnetic fusion energy research is essential. In the last several years the use of computers in the numerical modeling of fusion systems has increased substantially. There are several categories of computer models used to study the physics of magnetically confined plasmas. A comparable number of types of models for engineering studies are also in use. To meet the needs of the fusion program, the National Magnetic Fusion Energy Computer Center has been established at the Lawrence Livermore National Laboratory. A large central computing facility is linked to smaller computer centers at each of the major MFE laboratories by a communication network. In addition to providing cost effective computing services, the NMFECC environment stimulates collaboration and the sharing of computer codes among the various fusion research groups.

  12. Orientational energy of anisometric particles in liquid-crystalline suspensions

    NASA Astrophysics Data System (ADS)

    Burylov, S. V.; Zakhlevnykh, A. N.

    2013-07-01

    We obtain a general expression for the orientational energy of an individual anisometric particle suspended in uniform nematic liquid crystals when the main axis of the particle rotates with respect to the nematic director. We show that there is a qualitative and quantitative analogy between the internal and external problems for cylindrical volumes of nematic liquid crystals, and on this basis we obtain an estimate of the orientational energy of a particle of cylindrical (rodlike, needlelike, or ellipsoidal) shape. For an ensemble of such particles we propose a modified form of their orientational energy in the nematic matrix. This orientational energy has the usual second-order term and an additional fourth-order term in the scalar product of the nematic director and the vector that characterizes the average direction of the main axes of the particles. As an example, we obtain the expression for the free energy density of ferronematics, i.e., colloidal suspensions of needlelike magnetic particles in nematic liquid crystals. Unlike previous models, the free energy density includes the proposed modified form of the particle orientational energy, as well as a contribution describing the surface saddle-splay deformations of the liquid crystal matrix.
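
Schematically (a sketch based on the description above, with hypothetical coupling constants $w_2$ and $w_4$ standing in for the paper's coefficients), the modified orientational energy combines the usual quadratic term with the proposed quartic term:

```latex
F_{\text{or}} = -\,w_2\,(\mathbf{n}\cdot\mathbf{m})^{2} \;-\; w_4\,(\mathbf{n}\cdot\mathbf{m})^{4},
```

where $\mathbf{n}$ is the nematic director and $\mathbf{m}$ is the unit vector characterizing the average direction of the particles' main axes.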

  13. User's Guide and Orientation to Canned Computer Programs.

    ERIC Educational Resources Information Center

    Kretke, George L.; Hopkins, Kenneth D.

    This handbook is for the student with little or no previous experience with computer utilization for data processing. Sample problems to be run on the computer are included. It gives: (1) an overview of the sequence of steps from obtaining data to receiving computer output, (2) a guide to common computer packages, (3) an illustration of the use of…

  14. Orientation-dependent binding energy of graphene on palladium

    SciTech Connect

    Kappes, Branden B.; Ebnonnasir, Abbas; Ciobanu, Cristian V.; Kodambaka, Suneel

    2013-02-04

    Using density functional theory calculations, we show that the binding strength of a graphene monolayer on Pd(111) can vary between physisorption and chemisorption depending on its orientation. By studying the interfacial charge transfer, we have identified a specific four-atom carbon cluster that is responsible for the local bonding of graphene to Pd(111). The areal density of such clusters varies with the in-plane orientation of graphene, causing the binding energy to change accordingly. Similar investigations can also be applied to other metal substrates, suggesting that the physical, chemical, and mechanical properties of graphene may be controlled by changing its orientation.

  15. Breadth-Oriented Outcomes Assessment in Computer Science.

    ERIC Educational Resources Information Center

    Cordes, David; And Others

    Little work has been done regarding the overall assessment of quality of computer science graduates at the undergraduate level. This paper reports on a pilot study at the University of Alabama of a prototype computer science outcomes assessment designed to evaluate the breadth of knowledge of computer science seniors. The instrument evaluated two…

  16. Emerging energy and environmental applications of vertically-oriented graphenes.

    PubMed

    Bo, Zheng; Mao, Shun; Han, Zhao Jun; Cen, Kefa; Chen, Junhong; Ostrikov, Kostya Ken

    2015-04-21

    Graphene nanosheets arranged perpendicularly to the substrate surface, i.e., vertically-oriented graphenes (VGs), have many unique morphological and structural features that can lead to exciting properties. Plasma-enhanced chemical vapor deposition enables the growth of VGs on various substrates using gas, liquid, or solid precursors. Compared with conventional randomly-oriented graphenes, VGs' vertical orientation on the substrate, non-agglomerated morphology, controlled inter-sheet connectivity, as well as sharp and exposed edges make them very promising for a variety of applications. The focus of this tutorial review is on plasma-enabled simple yet efficient synthesis of VGs and their properties that lead to emerging energy and environmental applications, ranging from energy storage, energy conversion, sensing, to green corona discharges for pollution control. PMID:25711336

  17. An Introductory Course on Service-Oriented Computing for High Schools

    ERIC Educational Resources Information Center

    Tsai, W. T.; Chen, Yinong; Cheng, Calvin; Sun, Xin; Bitter, Gary; White, Mary

    2008-01-01

    Service-Oriented Computing (SOC) is a new computing paradigm that has been adopted by major computer companies as well as government agencies such as the Department of Defense for mission-critical applications. SOC is being used for developing Web and electronic business applications, as well as robotics, gaming, and scientific applications. Yet,…

  18. Computer Science Majors: Sex Role Orientation, Academic Achievement, and Social Cognitive Factors

    ERIC Educational Resources Information Center

    Brown, Chris; Garavalia, Linda S.; Fritts, Mary Lou Hines; Olson, Elizabeth A.

    2006-01-01

    This study examined the sex role orientations endorsed by 188 male and female students majoring in computer science, a male-dominated college degree program. The relations among sex role orientation and academic achievement and social cognitive factors influential in career decision-making self-efficacy were explored. Findings revealed that…

  19. ICT Oriented toward Nyaya: Community Computing in India's Slums

    ERIC Educational Resources Information Center

    Byker, Erik J.

    2014-01-01

    In many schools across India, access to information and communication technology (ICT) is still a rare privilege. While the Annual Status of Education Report in India (2013) showed a marginal uptick in the amount of computers, the opportunities for children to use those computers have remained stagnant. The lack of access to ICT is especially…

  20. Energy considerations in computational aeroacoustics

    NASA Astrophysics Data System (ADS)

    Brentner, Kenneth S.

    A finite-volume multistage time-stepping Euler code is used to investigate the use of CFD algorithms for the direct calculation of acoustics. The 2D compressible inviscid flow about an accelerating or decelerating circular cylinder is used as a model problem. The time evolution of the energy transfer from the cylinder to the fluid, as the cylinder is moved from rest to some nonnegligible velocity, is clearly seen. By examining the temporal and spatial characteristics of the numerical solution, a distinction can be made between the propagating acoustic energy, the convecting energy associated with the entropy change in the fluid, and the energy contained in the local aerodynamic field. Systematic variation of the cylinder acceleration shows that the radiated acoustic energy depends strongly upon the rate of acceleration or deceleration. The computational grid has a large effect on the ratio of acoustic energy to nonphysical entropy-associated energy, while the role of the explicit artificial viscosity seems to be of second order. The entropy term was nearly negligible in all cases in which the cylinder was started slowly.

  2. Effect of row orientation on energy balance components

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Solar irradiance is the primary source of energy that is converted into sensible and latent heat fluxes in the soil-plant-atmosphere continuum. The orientation of agricultural crop rows relative to the sun’s zenith angle determines the amount of solar irradiance reaching the plant and soil surfaces...

  3. Computer-Oriented Laboratory Exercises for Geology and Oceanography

    ERIC Educational Resources Information Center

    Fox, William T.

    1969-01-01

    Describes the use of computers equipped with plotters to predict tides using the known period, phase, and amplitude of the major tidal components. Other demonstrations and projects are described. (RR)

  4. Forest value orientations in Australia: an application of computer content analysis.

    PubMed

    Webb, Trevor J; Bengston, David N; Fan, David P

    2008-01-01

    This article explores the expression of three forest value orientations that emerged from an analysis of Australian news media discourse about the management of Australian native forests from August 1, 1997 through December 31, 2004. Computer-coded content analysis was used to measure and track the relative importance of commodity, ecological, and moral/spiritual/aesthetic forest value orientations. The number of expressions of these forest value orientations followed major events in forest management and policy, with peaks corresponding to the finalization of Regional Forest Agreements and conflicts over forest management. Over the time period analyzed, the relative share of the commodity value orientation decreased and the shares of the ecological and moral/spiritual/aesthetic value orientations increased. The shifts in forest value orientations highlight the need for native forests to be managed for multiple values and the need for continued monitoring of forest values. PMID:17846830
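
A toy dictionary-based coder (the word lists are illustrative, not the study's actual coding scheme) shows the basic mechanics of computer-coded content analysis — counting expressions of each value orientation and tracking their relative shares:

```python
# Hypothetical coding dictionaries for two of the value orientations.
COMMODITY = {"timber", "jobs", "industry", "export", "harvest"}
ECOLOGICAL = {"habitat", "biodiversity", "ecosystem", "conservation"}

def code_text(text):
    """Count expressions of each value orientation in one news item."""
    words = [w.strip(".,;:!?") for w in text.lower().split()]
    return {
        "commodity": sum(w in COMMODITY for w in words),
        "ecological": sum(w in ECOLOGICAL for w in words),
    }

def relative_shares(counts):
    """Relative share of each orientation, as tracked over time."""
    total = sum(counts.values()) or 1
    return {k: v / total for k, v in counts.items()}
```

Real systems (such as the one used in the study) score sentences with far richer rule sets, but the output — counts per orientation per time period — has this shape.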

  5. Computer-Oriented Calculus Courses Using Finite Differences.

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    The so-called discrete approach in calculus instruction involves introducing topics from the calculus of finite differences and finite sums, both for motivation and as useful tools for applications of the calculus. In particular, it provides an ideal setting in which to incorporate computers into calculus courses. This approach has been…

  6. Effects of Textual and Animated Orienting Activities and Practice on Learning from Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Rieber, Lloyd P.; Hannafin, Michael J.

    1988-01-01

    Describes study designed to examine the effects of textual and/or computer animated orienting strategies and practice on rule-using and problem-solving skills of elementary school students using computer-assisted instruction. Four different versions of a lesson based on Isaac Newton's Law of Motion are described, and results are analyzed. (28…

  7. An object-oriented environment for computer vision and pattern recognition

    SciTech Connect

    Hernandez, J.E.

    1992-12-01

    Vision is a flexible and extensible object-oriented programming environment for prototyping solutions to problems requiring computer vision and pattern recognition techniques. Vision integrates signal/image processing, statistical pattern recognition, neural networks, low and mid level computer vision, and graphics into a cohesive framework useful for a wide variety of applications at Lawrence Livermore National Laboratory.

  8. Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method.

    PubMed

    Zhao, Yan; Cao, Liangcai; Zhang, Hao; Kong, Dezhao; Jin, Guofan

    2015-10-01

    Fast calculation and correct depth cue are crucial issues in the calculation of computer-generated hologram (CGH) for high quality three-dimensional (3-D) display. An angular-spectrum based algorithm for layer-oriented CGH is proposed. Angular spectra from each layer are synthesized as a layer-corresponded sub-hologram based on the fast Fourier transform without paraxial approximation. The proposed method can avoid the huge computational cost of the point-oriented method and yield accurate predictions of the whole diffracted field compared with other layer-oriented methods. CGHs of versatile formats of 3-D digital scenes, including computed tomography and 3-D digital models, are demonstrated with precise depth performance and advanced image quality. PMID:26480062

  9. Computational modeling of orientation tuning dynamics in monkey primary visual cortex.

    PubMed

    Pugh, M C; Ringach, D L; Shapley, R; Shelley, M J

    2000-01-01

    In the primate visual pathway, orientation tuning of neurons is first observed in the primary visual cortex. The LGN cells that comprise the thalamic input to V1 are not orientation tuned, but some V1 neurons are quite selective. Two main classes of theoretical models have been offered to explain orientation selectivity: feedforward models, in which inputs from spatially aligned LGN cells are summed together by one cortical neuron; and feedback models, in which an initial weak orientation bias due to convergent LGN input is sharpened and amplified by intracortical feedback. Recent data on the dynamics of orientation tuning, obtained by a cross-correlation technique, may help to distinguish between these classes of models. To test this possibility, we simulated the measurement of orientation tuning dynamics on various receptive field models, including a simple Hubel-Wiesel type feedforward model: a linear spatiotemporal filter followed by an integrate-and-fire spike generator. The computational study reveals that simple feedforward models may account for some aspects of the experimental data but fail to explain many salient features of orientation tuning dynamics in V1 cells. A simple feedback model of interacting cells is also considered. This model is successful in explaining the appearance of Mexican-hat orientation profiles, but other features of the data continue to be unexplained. PMID:10798599
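
A minimal sketch (all parameters are illustrative, not fitted values) of the Hubel-Wiesel-type feedforward model described above — a linear filter stage whose orientation-biased output drives an integrate-and-fire spike generator:

```python
import numpy as np

def linear_filter_response(stim_orientation, preferred=0.0,
                           gain=100.0, width=0.5):
    """Linear stage: summed LGN input yields an orientation-tuned
    drive, modeled here as a Gaussian in orientation (radians)."""
    d = stim_orientation - preferred
    return gain * np.exp(-(d ** 2) / (2 * width ** 2))

def integrate_and_fire(current, t_max=1.0, dt=1e-3,
                       tau=0.02, v_thresh=1.0):
    """Leaky integrate-and-fire spike generator driven by the
    filter output; returns the spike count over t_max seconds."""
    v, spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        v += dt * (-v / tau + current)   # forward-Euler membrane update
        if v >= v_thresh:
            spikes += 1
            v = 0.0                      # reset after a spike
    return spikes
```

The spike-threshold nonlinearity sharpens the weak linear bias, which is why even this simple model reproduces some (but, per the study, not all) features of measured tuning dynamics.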

  10. Shlaer-Mellor object-oriented analysis and recursive design, an effective modern software development method for development of computing systems for a large physics detector

    SciTech Connect

    Kozlowski, T.; Carey, T.A.; Maguire, C.F.

    1995-10-01

    After evaluation of several modern object-oriented methods for development of the computing systems for the PHENIX detector at RHIC, we selected the Shlaer-Mellor Object-Oriented Analysis and Recursive Design method as the most appropriate for the needs and development environment of a large nuclear or high energy physics detector. This paper discusses our specific needs and environment, our method selection criteria, and major features and components of the Shlaer-Mellor method.

  11. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    ERIC Educational Resources Information Center

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  12. Orientational, kinetic, and magnetic energy of geodynamo, reversals, and asymmetries

    NASA Astrophysics Data System (ADS)

    Starchenko, S. V.

    2015-07-01

    Integral laws describing the evolution of the kinetic, magnetic, and orientational energy in the liquid core of the Earth, which are also valid in the interiors of the other terrestrial planets, are derived, simplified, and analyzed. These laws are coarsely approximated by a system of ordinary differential equations with a given energy of the convection. The characteristic velocities, magnetic fields, periods, and scales as functions of the power of the convection are estimated for the states beyond and close to the reversal or excursion. With the assumed simplifications, the convection power should be close to a certain value in order to enable a relatively short reversal or excursion; significant deviation of the convection energy from this value will drive the system into a long-term steady state. Here, two types of steady state are possible: the codirectional state, with the magnetic field oriented along the velocity vector, and the contradirectional state, with opposing orientations of the magnetic field and velocity. These states are not symmetric with respect to each other since, other factors being equal, the energy support of the convection and the average intensity of the magnetic field are typically higher in the contradirectional rather than the codirectional state. The total duration of codirectional states is somewhat shorter than that of contradirectional states when the convection power grows with time; in the case of a long-decreasing convection power, the situation is reversed. This asymmetry in the duration of steady states is confirmed by the paleomagnetic data on the timescale of the magnetic reversals. The length of the average interval between the reversals is controlled by the turbulent, thermal, electromagnetic, and visco-compositional diffusion. The predominant type of diffusion can in many cases be identified from the dependence of the reversal frequency on the intensity of the magnetic field based on the paleomagnetic data. The…

  13. Some Specifications for a Computer-Oriented First Course in Electrical Engineering.

    ERIC Educational Resources Information Center

    Commission on Engineering Education, Washington, DC.

    Reported are specifications for a computer-oriented first course in electrical engineering giving new direction to the development of texts and alternative courses of study. Guidelines for choice of topics, a statement of fundamental concepts, pitfalls to avoid, and some sample course outlines are given. The study of circuits through computer…

  14. Computer Graphics Orientation and Training in a Corporate/Production Environment.

    ERIC Educational Resources Information Center

    McDevitt, Marsha Jean

    This master's thesis provides an overview of a computer graphics production environment and proposes a realistic approach to orientation and on-going training for employees working within a fast-paced production schedule. Problems involved in meeting the training needs of employees are briefly discussed in the first chapter, while the second…

  15. Effect of Computer-Aided Perspective Drawings on Spatial Orientation and Perspective Drawing Achievement

    ERIC Educational Resources Information Center

    Kurtulus, Aytac

    2011-01-01

    The aim of this study is to investigate the effect of computer-aided Perspective Drawings on eighth grade primary school students' achievement in Spatial Orientation and Perspective Drawing. The study made use of pre-test post-test control group experimental design. The study was conducted with thirty 8th grade students attending a primary school…

  16. Hysteresis model and statistical interpretation of energy losses in non-oriented steels

    NASA Astrophysics Data System (ADS)

    Mănescu (Păltânea), Veronica; Păltânea, Gheorghe; Gavrilă, Horia

    2016-04-01

    In this paper the hysteresis energy losses in two non-oriented industrial steels (M400-65A and M800-65A) were determined by means of an efficient classical Preisach model, which is based on the Pescetti-Biorci method for the identification of the Preisach density. The excess and the total energy losses were also determined, using a statistical framework based on magnetic object theory. The hysteresis energy losses in a non-oriented steel alloy depend on the peak magnetic polarization, and they can be computed using a Preisach model because in these materials there is a direct link between the elementary rectangular loops and the discontinuous character of the magnetization process (Barkhausen jumps). To determine the Preisach density it was necessary to measure the normal magnetization curve and the saturation hysteresis cycle. A system of equations was deduced and the Preisach density was calculated for a magnetic polarization of 1.5 T; then the hysteresis cycle was reconstructed. Using the same Preisach distribution, the hysteresis cycle for 1 T was computed. The classical losses were calculated using a well-known formula, and the excess energy losses were determined by means of the magnetic object theory. The total energy losses were mathematically reconstructed and compared with those measured experimentally.
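
A minimal classical Preisach sketch (the Gaussian density below is a placeholder; the paper identifies the actual density from measured curves via the Pescetti-Biorci method): a grid of elementary rectangular loops (hysterons) with switching thresholds beta <= alpha, whose weighted states sum to the magnetization.

```python
import numpy as np

class PreisachModel:
    def __init__(self, n=80, hmax=1.0):
        # discretize the Preisach half-plane alpha >= beta
        a = np.linspace(-hmax, hmax, n)
        self.alpha, self.beta = np.meshgrid(a, a, indexing="ij")
        self.mask = self.alpha >= self.beta
        self.density = np.exp(-(self.alpha**2 + self.beta**2))  # placeholder
        self.state = -np.ones_like(self.alpha)  # start negatively saturated

    def apply_field(self, h):
        # hysteron switching: up when h exceeds alpha,
        # down when h falls below beta
        self.state[h >= self.alpha] = 1.0
        self.state[h <= self.beta] = -1.0
        return self.magnetization()

    def magnetization(self):
        w = self.density * self.mask
        return float(np.sum(w * self.state) / np.sum(w))
```

Cycling the applied field through a closed loop and integrating M dH over it yields the hysteresis energy loss per cycle, which is how a model of this kind produces loss figures comparable to measurement.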

  17. Computed reconstruction of spatial ammonoid-shell orientation captured from digitized grinding and landmark data

    NASA Astrophysics Data System (ADS)

    Lukeneder, Susanne; Lukeneder, Alexander; Weber, Gerhard W.

    2014-03-01

    The internal orientation of fossil mass occurrences can be exploited as a useful source of information about their primary depositional conditions. A series of studies, using different kinds of fossils, especially those with elongated shape (e.g., elongated gastropods), deal with their orientation and the subsequent reconstruction of the depositional conditions (e.g., paleocurrents and transport mechanisms). However, disk-shaped fossils such as planispiral cephalopods or gastropods have, up to now, been used with caution for interpreting paleocurrents. Moreover, most studies deal only with the topmost surface of such mass occurrences, due to its easier accessibility. In this study, a new method for three-dimensional reconstruction of the internal structure of a fossil mass occurrence and the subsequent calculation of its spatial shell orientation is established. A 234-million-year-old (Carnian, Triassic) monospecific mass occurrence of the ammonoid Kasimlarceltites krystyni from the Taurus Mountains in Turkey, embedded in limestone, is used for this pilot study. To this end, a 150×45×140 mm³ block of the ammonoid-bearing limestone bed was ground into 70 slices, with a distance of 2 mm between consecutive slices. By using a semi-automatic region growing algorithm of the 3D-visualization software Amira, ammonoids of a part of this mass occurrence were segmented and a 3D model reconstructed. Landmark-based, trigonometric, and vector-based calculations were used to compute the diameters and the spatial orientation of each ammonoid. The spatial shell orientation was characterized by dip and dip direction and aperture direction of the longitudinal axis, as well as by dip and azimuth of an imaginary sagittal plane through each ammonoid. The exact spatial shell orientation was determined for a sample of 675 ammonoids, and their statistical orientation analyzed (i.e., NW/SE).
    The study combines classical orientation analysis with modern 3D-visualization techniques, and establishes a novel…
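
The dip and dip direction of each shell's longitudinal axis reduce to elementary trigonometry on a 3-D axis vector; a sketch (the coordinate convention x = east, y = north, z = up is an assumption of this illustration):

```python
import numpy as np

def dip_and_direction(v):
    """Return (dip, dip direction) in degrees for a 3-D axis vector."""
    x, y, z = np.asarray(v, dtype=float) / np.linalg.norm(v)
    dip = np.degrees(np.arcsin(abs(z)))                # angle from horizontal
    direction = np.degrees(np.arctan2(x, y)) % 360.0   # clockwise from north
    return dip, direction
```

Collecting these angle pairs for all 675 shells is what permits the statistical orientation analysis (e.g., detecting a preferred NW/SE trend).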

  18. Computing support for High Energy Physics

    SciTech Connect

    Avery, P.; Yelton, J.

    1996-12-01

    This computing proposal (Task S) is submitted separately but in support of the High Energy Experiment (CLEO, Fermilab, CMS) and Theory tasks. The authors have built a very strong computing base at Florida over the past 8 years. In fact, computing has been one of the main contributions to their experimental collaborations, involving not just computing capacity for running Monte Carlos and data reduction, but participation in many computing initiatives, industrial partnerships, computing committees and collaborations. These facts justify the submission of a separate computing proposal.

  19. A MAC scheme for vectorized computation of internal flows in surface oriented curvilinear coordinates

    NASA Astrophysics Data System (ADS)

    Tragner, U. K.; Mitra, N. K.; Fiebig, M.

    1986-05-01

    A vectorizable algorithm has been developed for modified marker-and-cell solution, in surface-oriented coordinates, of the continuity and Navier-Stokes equations for incompressible flows in two-dimensional channels of arbitrary geometry. Computations have been performed on a CYBER-205 computer. Computed results for flows in a convergent channel compare quite well with exact solutions of Jeffery-Hamel flows at low Reynolds numbers. At a Reynolds number of 5000, a maximum disagreement of 5 percent between the computed and exact velocity profiles is obtained. Computations have also been performed for flows in a channel with a backward-facing step and in a channel with a 90-deg bend. Computed results show a certain waviness in the wall-drag coefficient at large Reynolds numbers. It is suspected that the waviness is caused by the use of nonoptimal grids and central differences for spatial derivatives.

  20. Method for Statically Checking an Object-oriented Computer Program Module

    NASA Technical Reports Server (NTRS)

    Bierhoff, Kevin M. (Inventor); Aldrich, Jonathan (Inventor)

    2012-01-01

    A method for statically checking an object-oriented computer program module includes the step of identifying objects within a computer program module, at least one of the objects having a plurality of references thereto, possibly from multiple clients. A discipline of permissions is imposed on the objects identified within the computer program module. The permissions enable tracking, from among a discrete set of changeable states, a subset of states each object might be in. A determination is made regarding whether the imposed permissions are violated by a potential reference to any of the identified objects. The results of the determination are output to a user.

  1. Advanced Computing Technologies for Rocket Engine Propulsion Systems: Object-Oriented Design with C++

    NASA Technical Reports Server (NTRS)

    Bekele, Gete

    2002-01-01

    This document explores the use of advanced computer technologies, with an emphasis on object-oriented design, to be applied in the development of software for a rocket engine to improve vehicle safety and reliability. The primary focus is on phase one of this project, the smart start sequence module. The objectives are: 1) to use current, sound software engineering practices (object orientation); 2) to improve software development time, maintenance, execution, and management; 3) to provide an alternate design choice for control, implementation, and performance.

  3. Our U.S. Energy Future, Student Guide. Computer Technology Program Environmental Education Units.

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR.

    This is the student guide in a set of five computer-oriented environmental/energy education units. Contents are organized into the following parts or lessons: (1) Introduction to the U.S. Energy Future; (2) Description of the "FUTURE" programs; (3) Effects of "FUTURE" decisions; and (4) Exercises on the U.S. energy future. This guide supplements a…

  4. Computer-assisted selection of coplanar beam orientations in intensity-modulated radiation therapy*

    NASA Astrophysics Data System (ADS)

    Pugachev, A.; Xing, L.

    2001-09-01

    In intensity-modulated radiation therapy (IMRT), the incident beam orientations are often determined by a trial and error search. The conventional beam's-eye view (BEV) tool becomes less helpful in IMRT because it is frequently required that beams go through organs at risk (OARs) in order to achieve a compromise between the dosimetric objectives of the planning target volume (PTV) and the OARs. In this paper, we report a beam's-eye view dosimetrics (BEVD) technique to assist in the selection of beam orientations in IMRT. In our method, each beam portal is divided into a grid of beamlets. A score function is introduced to measure the 'goodness' of each beamlet at a given gantry angle. The score is determined by the maximum PTV dose deliverable by the beamlet without exceeding the tolerance doses of the OARs and normal tissue located in the path of the beamlet. The overall score of the gantry angle is given by a sum of the scores of all beamlets. For a given patient, the score function is evaluated for each possible beam orientation. The directions with the highest scores are then selected as the candidates for beam placement. This procedure is similar to the BEV approach used in conventional radiation therapy, except that the evaluation by a human is replaced by a score function to take into account the intensity modulation. This technique allows one to select beam orientations without the excessive computing overhead of computer optimization of beam orientation. It also provides useful insight into the problem of selection of beam orientation and is especially valuable for complicated cases where the PTV is surrounded by several sensitive structures and where it is difficult to select a set of 'good' beam orientations. Several two-dimensional (2D) model cases were used to test the proposed technique. The plans obtained using the BEVD-selected beam orientations were compared with the plans obtained using equiangular spaced beams.
For all the model cases investigated
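The scoring scheme described above can be sketched in a few lines. The function names, dose arrays, and numbers below are hypothetical illustrations, not the authors' implementation:

```python
import numpy as np

def beamlet_score(ptv_dose_per_mu, oar_dose_per_mu, oar_tolerance):
    """Max PTV dose a beamlet can deliver before any OAR voxel in its
    path reaches tolerance (a toy reading of the BEVD score)."""
    with np.errstate(divide="ignore"):
        # Scale factor limited by the most restrictive OAR voxel.
        limits = np.where(oar_dose_per_mu > 0,
                          oar_tolerance / oar_dose_per_mu, np.inf)
    return ptv_dose_per_mu * limits.min()

def gantry_score(beamlets):
    # Overall score of a gantry angle: sum over its beamlets.
    return sum(beamlet_score(*b) for b in beamlets)

# Two hypothetical beamlets at one gantry angle.
beamlets = [
    (1.0, np.array([0.2, 0.1]), np.array([20.0, 30.0])),  # limited to 100
    (0.8, np.array([0.4]),      np.array([20.0])),        # limited to 50 -> 40
]
print(gantry_score(beamlets))  # 140.0
```

Evaluating this sum for every candidate gantry angle and ranking the results reproduces the selection procedure the abstract describes.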

  5. Challenges and Opportunities in Using Automatic Differentiation with Object-Oriented Toolkits for Scientific Computing

    SciTech Connect

    Hovland, P; Lee, S; McInnes, L; Norris, B; Smith, B

    2001-04-17

The increased use of object-oriented toolkits in large-scale scientific simulation presents new opportunities and challenges for the use of automatic (or algorithmic) differentiation (AD) techniques, especially in the context of optimization. Because object-oriented toolkits use well-defined interfaces and data structures, there is potential for simplifying the AD process. Furthermore, derivative computation can be improved by exploiting high-level information about numerical and computational abstractions. However, challenges to the successful use of AD with these toolkits also exist. Among the greatest challenges is balancing the desire to limit the scope of the AD process with the desire to minimize the work required of a user. The authors discuss their experiences in integrating AD with the PETSc, PVODE, and TAO toolkits and their plans for future research and development in this area.
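As a toy illustration of automatic differentiation itself (forward mode via dual numbers, a standard textbook construction, not the PETSc/PVODE/TAO integration the report describes):

```python
from dataclasses import dataclass

@dataclass
class Dual:
    # Dual number val + dot*eps with eps^2 = 0: carries a value and its derivative.
    val: float
    dot: float = 0.0

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule propagates through multiplication.
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def derivative(f, x):
    # Seed dot = 1 to propagate df/dx through the computation.
    return f(Dual(x, 1.0)).dot

# d/dx (x*x + 3*x) = 2x + 3, which is 7 at x = 2.
print(derivative(lambda x: x * x + 3 * x, 2.0))  # 7.0
```

Operator overloading is one reason well-defined object-oriented interfaces make AD easier to apply, as the abstract notes.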

  6. The Use of Computer Vision Algorithms for Automatic Orientation of Terrestrial Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Markiewicz, Jakub Stefan

    2016-06-01

The paper presents an analysis of the orientation of terrestrial laser scanning (TLS) data. In the proposed data processing methodology, point clouds are considered as panoramic images enriched by the depth map. Computer vision (CV) algorithms are used for orientation and are evaluated for the correctness of tie-point detection, the time of computation, and the difficulty of their implementation. The BRISK, FAST, MSER, SIFT, SURF, ASIFT and CenSurE algorithms are used to search for key-points. The source data are point clouds acquired using a Z+F 5006h terrestrial laser scanner on the ruins of Iłża Castle, Poland. Algorithms allowing combination of the photogrammetric and CV approaches are also presented.
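Treating a point cloud as a panoramic image enriched by a depth map, as the methodology describes, amounts to an equirectangular (spherical) projection. A minimal sketch, with the grid size, frame convention, and nearest-point handling all assumed rather than taken from the paper:

```python
import numpy as np

def panorama_depth(points, width=360, height=180):
    """Project an (N, 3) point cloud onto an equirectangular depth map,
    i.e. treat a TLS scan as a panoramic image plus depth."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    az = np.arctan2(y, x)                  # azimuth in [-pi, pi]
    el = np.arcsin(np.clip(z / r, -1, 1))  # elevation in [-pi/2, pi/2]
    col = ((az + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    row = ((np.pi / 2 - el) / np.pi * (height - 1)).astype(int)
    depth = np.full((height, width), np.nan)
    depth[row, col] = r  # a real pipeline would keep the nearest point per pixel
    return depth

pts = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
d = panorama_depth(pts)
print(np.nanmin(d), np.nanmax(d))  # 1.0 2.0
```

Key-point detectors such as SIFT or BRISK can then be run on the resulting raster image.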

  7. Integrating Clinical Trial Imaging Data Resources Using Service-Oriented Architecture and Grid Computing

    PubMed Central

    Cladé, Thierry; Snyder, Joshua C.

    2010-01-01

    Clinical trials which use imaging typically require data management and workflow integration across several parties. We identify opportunities for all parties involved to realize benefits with a modular interoperability model based on service-oriented architecture and grid computing principles. We discuss middleware products for implementation of this model, and propose caGrid as an ideal candidate due to its healthcare focus; free, open source license; and mature developer tools and support. PMID:20449775

  8. Accuracy of magnetic energy computations

    NASA Astrophysics Data System (ADS)

    Valori, G.; Démoulin, P.; Pariat, E.; Masson, S.

    2013-05-01

Context. For magnetically driven events, the magnetic energy of the system is the prime energy reservoir that fuels the dynamical evolution. In the solar context, the free energy (i.e., the energy in excess of the potential field energy) is one of the main indicators used in space weather forecasts to predict the eruptivity of active regions. A trustworthy estimation of the magnetic energy is therefore needed in three-dimensional (3D) models of the solar atmosphere, e.g., in coronal field reconstructions or numerical simulations. Aims: The expression of the energy of a system as the sum of its potential energy and its free energy (Thomson's theorem) is strictly valid when the magnetic field is exactly solenoidal. For numerical realizations on a discrete grid, this property may be only approximately fulfilled. We show that the imperfect solenoidality induces terms in the energy that can lead to misinterpreting the amount of free energy present in a magnetic configuration. Methods: We consider a decomposition of the energy in solenoidal and nonsolenoidal parts which allows the unambiguous estimation of the nonsolenoidal contribution to the energy. We apply this decomposition to six typical cases broadly used in solar physics. We quantify to what extent the Thomson theorem is not satisfied when approximately solenoidal fields are used. Results: The quantified errors in energy vary from negligible to significant, depending on the extent of the nonsolenoidal component of the field. We identify the main source of errors and analyze the implications of adding a variable amount of divergence to various solenoidal fields. Finally, we present pathological unphysical situations where the estimated free energy would appear to be negative, as found in some previous works, and we identify the source of this error to be the presence of a finite divergence. Conclusions: We provide a method of quantifying the effect of a finite divergence in numerical fields, together with
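The role of a finite divergence on a discrete grid can be illustrated with a toy finite-difference check. The grid size and field choices below are illustrative, not the six test cases of the paper, and units are code units (mu_0 = 1):

```python
import numpy as np

def divergence(B, dx=1.0):
    # Finite-difference div(B) for a field B with shape (3, nx, ny, nz).
    return sum(np.gradient(B[i], dx, axis=i) for i in range(3))

def magnetic_energy(B, dx=1.0):
    # E = sum |B|^2 / 2 * dV, in code units.
    return 0.5 * np.sum(B**2) * dx**3

n = 16
x = np.linspace(0.0, 1.0, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
dx = x[1] - x[0]

# A solenoidal field: B = (-y, x, 0) has zero divergence.
B = np.stack([-Y, X, np.zeros_like(X)])
print(np.abs(divergence(B, dx)).max())      # ~0

# Adding a non-solenoidal part (x, y, z) introduces div(B) = 3
# and changes the computed energy, illustrating the paper's point.
B_bad = B + np.stack([X, Y, Z])
print(np.abs(divergence(B_bad, dx)).max())  # ~3
```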

  9. An Architecture and Supporting Environment of Service-Oriented Computing Based-On Context Awareness

    NASA Astrophysics Data System (ADS)

    Ma, Tianxiao; Wu, Gang; Huang, Jun

Service-oriented computing (SOC) is emerging as an important computing paradigm for the near future. Based on context awareness, this paper proposes an architecture for SOC. A definition of context in open environments such as the Internet is given, based on ontology. The paper also proposes a supporting environment for context-aware SOC, which focuses on on-demand service composition and context-aware evolution. Finally, a reference implementation of the supporting environment, based on OSGi, is presented.

  10. Computed reconstruction of spatial ammonoid-shell orientation captured from digitized grinding and landmark data☆

    PubMed Central

    Lukeneder, Susanne; Lukeneder, Alexander; Weber, Gerhard W.

    2014-01-01

The internal orientation of fossil mass occurrences can be exploited as a useful source of information about their primary depositional conditions. A series of studies, using different kinds of fossils, especially those with elongated shape (e.g., elongated gastropods), deal with their orientation and the subsequent reconstruction of the depositional conditions (e.g., paleocurrents and transport mechanisms). However, disk-shaped fossils like planispiral cephalopods or gastropods were used, up to now, with caution for interpreting paleocurrents. Moreover, most studies deal only with the topmost surface of such mass occurrences, due to its easier accessibility. Within this study, a new method for three-dimensional reconstruction of the internal structure of a fossil mass occurrence and the subsequent calculation of its spatial shell orientation is established. A 234-million-year-old (Carnian, Triassic) monospecific mass occurrence of the ammonoid Kasimlarceltites krystyni from the Taurus Mountains in Turkey, embedded in limestone, is used for this pilot study. For this purpose, a 150×45×140 mm³ block of the ammonoid-bearing limestone bed was ground into 70 slices, with a distance of 2 mm between each slice. By using a semi-automatic region growing algorithm of the 3D-visualization software Amira, ammonoids of a part of this mass occurrence were segmented and a 3D model reconstructed. Landmarks and trigonometric and vector-based calculations were used to compute the diameters and the spatial orientation of each ammonoid. The spatial shell orientation was characterized by the dip, dip-direction and aperture direction of the longitudinal axis, as well as by the dip and azimuth of an imaginary sagittal plane through each ammonoid. The exact spatial shell orientation was determined for a sample of 675 ammonoids, and their statistical orientation analyzed (i.e., NW/SE). The study combines classical orientation analysis with modern 3D-visualization techniques, and establishes a
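The dip and dip-direction of a plane such as a shell's sagittal plane follow from elementary vector math on its normal. The frame convention (x = east, y = north, z = up) and the function below are assumptions for illustration, not the authors' code:

```python
import numpy as np

def dip_and_direction(normal):
    """Dip angle and dip direction (both in degrees) of a plane given its
    normal vector, in an x = east, y = north, z = up frame."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    if n[2] < 0:  # use the upward-pointing normal
        n = -n
    dip = np.degrees(np.arccos(n[2]))
    # Dip direction: azimuth of the horizontal component of the normal,
    # which points the same way the plane dips.
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return dip, dip_dir

# A plane z = -x, tilted 45 degrees down toward the east:
dip, ddir = dip_and_direction([1.0, 0.0, 1.0])
print(round(dip, 1), round(ddir, 1))  # 45.0 90.0
```

Applying this to each reconstructed shell and collecting the azimuths yields the kind of orientation statistics (e.g., a NW/SE trend) reported above.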

  11. Probability-Based Determination Methods for Service Waiting in Service-Oriented Computing Environments

    NASA Astrophysics Data System (ADS)

    Zeng, Sen; Huang, Shuangxi; Liu, Yang

Cooperative business processes (CBP)-based service-oriented enterprise networks (SOEN) are emerging with the significant advances of enterprise integration and service-oriented architecture. The performance prediction and optimization for CBP-based SOEN is very complex. To meet these challenges, one of the key points is to try to reduce an abstract service's waiting number of its physical services. This paper introduces a probability-based determination method (PBDM) for an abstract service's waiting number, M_i, and time span, τ_i, for its physical services. M_i and τ_i are determined according to the physical services' arriving rule and the distribution functions of their overall performance. In PBDM, the arriving probability of the physical services with the best overall performance value is a pre-defined reliability. PBDM makes thorough use of the information in the physical services' arriving rule and performance distribution functions, which will improve the computational efficiency of scheme design and performance optimization for collaborative business processes in service-oriented computing environments.
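The abstract does not give the determination formula. One plausible reading, assuming independent arrivals (an assumption of this sketch, not necessarily of the paper), is the smallest waiting number M_i whose cumulative probability of seeing a best-performing service reaches the pre-defined reliability:

```python
import math

def waiting_number(p_best, reliability):
    """Smallest M such that at least one best-performing physical service
    arrives among M independent waits with probability >= reliability.
    A toy reading of PBDM: 1 - (1 - p_best)^M >= reliability."""
    return math.ceil(math.log(1 - reliability) / math.log(1 - p_best))

# If 20% of arriving physical services have the best overall performance
# and the required reliability is 95%:
print(waiting_number(0.2, 0.95))  # 14
```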

  12. Sharing personal health information via service-oriented computing: a case of long-term care.

    PubMed

    Lin, Yung-Hsiu; Chen, Rong-Rong; Guo, Sophie Huey-Ming; Chiang, Su-Chien; Chang, Her-Kun

    2012-12-01

Sharing personal health information among healthcare providers is a crucial business process, not only for saving limited healthcare resources but also for increasing the quality of patient care. Building an effective personal health information sharing process from established healthcare systems is a challenge in terms of coordinating different business operations among healthcare providers and restructuring technical details that exist in different healthcare information systems. This study responds to this challenge with a service-oriented approach and develops a business software application to describe how the challenge can be alleviated from both managerial and technical perspectives. The software application in this study depicts a personal health information sharing process among different providers in a long-term care setting. The information sharing scenario is based on an industry initiative, Integrating the Healthcare Enterprise (IHE), from the healthcare domain, and the technologies for implementing the scenario are Web Service technologies from the service-oriented computing paradigm. The implementation in this study can inform healthcare researchers and practitioners applying technologies from service-oriented computing to design and develop healthcare collaborative systems to meet the increasing need for personal health information sharing. PMID:22366977

  13. The Global Energy Situation on Earth, Student Guide. Computer Technology Program Environmental Education Units.

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR.

    This is the student guide in a set of five computer-oriented environmental/energy education units. Contents of this guide are: (1) Introduction to the unit; (2) The "EARTH" program; (3) Exercises; and (4) Sources of information on the energy crisis. This guide supplements a simulation which allows students to analyze different aspects of energy…

  14. It Takes a Village: Supporting Inquiry- and Equity-Oriented Computer Science Pedagogy through a Professional Learning Community

    ERIC Educational Resources Information Center

    Ryoo, Jean; Goode, Joanna; Margolis, Jane

    2015-01-01

    This article describes the importance that high school computer science teachers place on a teachers' professional learning community designed around an inquiry- and equity-oriented approach for broadening participation in computing. Using grounded theory to analyze four years of teacher surveys and interviews from the Exploring Computer Science…

  15. An Object-oriented Computer Code for Aircraft Engine Weight Estimation

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.; Naylor, Bret A.

    2008-01-01

Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model, which provides component flow data such as airflows, temperatures, and pressures that are required for sizing the components and for weight calculations. The tighter integration between NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It also would facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results, as should be the case. Keywords: NASA, aircraft engine, weight, object-oriented

  16. Computing conformational free energy by deactivated morphing.

    SciTech Connect

    Park, S.; Lau, A. Y.; Roux, B.; Univ. of Chicago

    2008-10-07

    Despite the significant advances in free-energy computations for biomolecules, there exists no general method to evaluate the free-energy difference between two conformations of a macromolecule that differ significantly from each other. A crucial ingredient of such a method is the ability to find a path between different conformations that allows an efficient computation of the free energy. In this paper, we introduce a method called 'deactivated morphing', in which one conformation is morphed into another after the internal interactions are completely turned off. An important feature of this method is the (shameless) use of nonphysical paths, which makes the method robustly applicable to conformational changes of arbitrary complexity.

  17. Computing in high-energy physics

    DOE PAGES Beta

    Mount, Richard P.

    2016-05-31

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Lastly, I describe recent developments aimed at improving the overall coherence of high-energy physics software.

  18. Metal-Mediated Affinity and Orientation Specificity in a Computationally Designed Protein Homodimer

    SciTech Connect

    Der, Bryan S.; Machius, Mischa; Miley, Michael J.; Mills, Jeffrey L.; Szyperski, Thomas; Kuhlman, Brian

    2015-10-15

Computationally designing protein-protein interactions with high affinity and desired orientation is a challenging task. Incorporating metal-binding sites at the target interface may be one approach for increasing affinity and specifying the binding mode, thereby improving robustness of designed interactions for use as tools in basic research as well as in applications from biotechnology to medicine. Here we describe a Rosetta-based approach for the rational design of a protein monomer to form a zinc-mediated, symmetric homodimer. Our metal interface design, named MID1 (NESG target ID OR37), forms a tight dimer in the presence of zinc (MID1-zinc) with a dissociation constant <30 nM. Without zinc the dissociation constant is 4 µM. The crystal structure of MID1-zinc shows good overall agreement with the computational model, but only three out of four designed histidines coordinate zinc. However, a histidine-to-glutamate point mutation resulted in four-coordination of zinc, and the resulting metal binding site and dimer orientation closely matches the computational model (Cα rmsd = 1.4 Å).
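The practical effect of the reported dissociation constants can be checked with standard dimerization equilibrium algebra (2M ⇌ D, Kd = [M]²/[D]). The 1 µM total protein concentration below is an arbitrary choice for illustration:

```python
import math

def fraction_dimerized(total, kd):
    """Fraction of monomer units incorporated into dimers for 2M <=> D
    with Kd = [M]^2/[D], at total monomer concentration `total`.
    Solves 2[M]^2/Kd + [M] - total = 0 for the free monomer [M]."""
    m = kd * (math.sqrt(1 + 8 * total / kd) - 1) / 4  # free monomer
    return 1 - m / total

# Concentrations in nM, at 1 uM total protein:
print(round(fraction_dimerized(1000, 30), 2))    # with zinc, Kd ~ 30 nM
print(round(fraction_dimerized(1000, 4000), 2))  # without zinc, Kd = 4 uM
```

The two-orders-of-magnitude change in Kd translates into a shift from a mostly dimeric to a mostly monomeric population at this concentration.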

  19. Two-dimensional radiant energy array computers and computing devices

    NASA Technical Reports Server (NTRS)

    Schaefer, D. H.; Strong, J. P., III (Inventor)

    1976-01-01

    Two dimensional digital computers and computer devices operate in parallel on rectangular arrays of digital radiant energy optical signal elements which are arranged in ordered rows and columns. Logic gate devices receive two input arrays and provide an output array having digital states dependent only on the digital states of the signal elements of the two input arrays at corresponding row and column positions. The logic devices include an array of photoconductors responsive to at least one of the input arrays for either selectively accelerating electrons to a phosphor output surface, applying potentials to an electroluminescent output layer, exciting an array of discrete radiant energy sources, or exciting a liquid crystal to influence crystal transparency or reflectivity.
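The parallel logic described, in which each output element depends only on the input states at the same row and column position, is exactly what elementwise Boolean operations express. A sketch in NumPy rather than radiant-energy hardware:

```python
import numpy as np

# Two "radiant energy" input arrays as 2D grids of binary signal elements.
A = np.array([[1, 0], [1, 1]], dtype=bool)
B = np.array([[1, 1], [0, 1]], dtype=bool)

# Parallel logic gates: every row/column position is evaluated at once,
# mirroring the optical arrays operating in parallel.
AND = A & B
NAND = ~(A & B)
print(AND.astype(int))
print(NAND.astype(int))
```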

  20. Computational Study of Low Energy Nuclear Scattering

    NASA Astrophysics Data System (ADS)

    Salazar, Justin; Hira, Ajit; Brownrigg, Clifton; Pacheco, Jose

    2013-04-01

We continue our interest in the interactions between different nuclear species with a computational study of the scattering of low-energy nuclei of H through F atoms (Z ≤ 9) from palladium and other metals. First, a FORTRAN computer program was developed to compute stopping cross sections and scattering angles in Pd and other metals for the small nuclear projectiles, using Monte Carlo calculations. This code allows for different angles of incidence. Next, simulations were done in the energy interval from 10 to 140 keV. The computational results thus obtained are compared with relevant experimental data. The data are further analyzed to identify periodic trends in terms of the atomic number of the projectile. Such studies have potential applications in nuclear physics and in nuclear medicine.
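A heavily simplified Monte Carlo transport loop conveys the flavor of such a simulation. The constant stopping power and Gaussian per-step scattering angle are toy assumptions, not the paper's physics:

```python
import random

def transport(energy_kev, step_loss_kev=2.0, sigma_deg=3.0, seed=1):
    """Toy Monte Carlo: the projectile loses a fixed energy per step and
    scatters by a Gaussian-distributed angle until it stops; returns the
    accumulated deflection angle in degrees."""
    rng = random.Random(seed)  # seeded for a reproducible trajectory
    angle = 0.0
    e = energy_kev
    while e > 0:
        angle += rng.gauss(0.0, sigma_deg)
        e -= step_loss_kev
    return angle

# Sample deflections across the paper's 10-140 keV interval:
for e0 in (10, 70, 140):
    print(e0, round(transport(e0), 1))
```

Higher initial energies mean more steps before stopping, so the deflection distribution widens with energy in this toy model.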

  1. Problem-Oriented Simulation Packages and Computational Infrastructure for Numerical Studies of Powerful Gyrotrons

    NASA Astrophysics Data System (ADS)

    Damyanova, M.; Sabchevski, S.; Zhelyazkov, I.; Vasileva, E.; Balabanova, E.; Dankov, P.; Malinov, P.

    2016-05-01

    Powerful gyrotrons are necessary as sources of strong microwaves for electron cyclotron resonance heating (ECRH) and electron cyclotron current drive (ECCD) of magnetically confined plasmas in various reactors (most notably ITER) for controlled thermonuclear fusion. Adequate physical models and efficient problem-oriented software packages are essential tools for numerical studies, analysis, optimization and computer-aided design (CAD) of such high-performance gyrotrons operating in a CW mode and delivering output power of the order of 1-2 MW. In this report we present the current status of our simulation tools (physical models, numerical codes, pre- and post-processing programs, etc.) as well as the computational infrastructure on which they are being developed, maintained and executed.

  2. VLab: a service oriented architecture for first principles computations of planetary materials properties

    NASA Astrophysics Data System (ADS)

    da Silva, C. R.; da Silveira, P.; Wentzcovitch, R. M.; Pierce, M.; Erlebacher, G.

    2007-12-01

We present an overview of the VLab, a system developed to handle execution of extensive workflows generated by first principles computations of thermoelastic properties of minerals. The multiplicity (10^2-3) of tasks derives from sampling of parameter space with variables such as pressure, temperature, strain, composition, etc. We review the algorithms of physical importance that define the system's requirements, its underlying service oriented architecture (SOA), and metadata. The system architecture emerges naturally. The SOA is a collection of web-services providing access to distributed computing nodes, controlling workflow execution, monitoring services, and providing data analysis tools, visualization services, data bases, and authentication services. A usage view diagram is described. We also show snapshots taken from the actual operational procedure in VLab. Research supported by NSF/ITR (VLab)

  3. VLab: A Service Oriented Architecture for Distributed First Principles Materials Computations

    NASA Astrophysics Data System (ADS)

    da Silva, Cesar; da Silveira, Pedro; Wentzcovitch, Renata; Pierce, Marlon; Erlebacher, Gordon

    2008-03-01

    We present an overview of VLab, a system developed to handle execution of extensive workflows generated by first principles computations of thermoelastic properties of minerals. The multiplicity (10^2-3) of tasks derives from sampling of parameter space with variables such as pressure, temperature, strain, composition, etc. We review the algorithms of physical importance that define the system's requirements, its underlying service oriented architecture (SOA), and metadata. The system architecture emerges naturally. The SOA is a collection of web-services providing access to distributed computing nodes, workflow control, and monitoring services, and providing data analysis tools, visualization services, data bases, and authentication services. A usage view diagram is described. We also show snapshots taken from the actual operational procedure in VLab.

  4. An Object-Oriented Network-Centric Software Architecture for Physical Computing

    NASA Astrophysics Data System (ADS)

    Palmer, Richard

    1997-08-01

Recent developments in object-oriented computer languages and infrastructure such as the Internet, Web browsers, and the like provide an opportunity to define a more productive computational environment for scientific programming that is based more closely on the underlying mathematics describing physics than traditional programming languages such as FORTRAN or C++. In this talk I describe an object-oriented software architecture for representing physical problems that includes classes for such common mathematical objects as geometry, boundary conditions, partial differential and integral equations, discretization and numerical solution methods, etc. In practice, a scientific program written using this architecture looks remarkably like the mathematics used to understand the problem, is typically an order of magnitude smaller than traditional FORTRAN or C++ codes, and hence easier to understand, debug, describe, etc. All objects in this architecture are ``network-enabled,'' which means that components of a software solution to a physical problem can be transparently loaded from anywhere on the Internet or other global network. The architecture is expressed as an ``API,'' or application programmers interface specification, with reference embeddings in Java, Python, and C++. A C++ class library for an early version of this API has been implemented for machines ranging from PC's to the IBM SP2, meaning that identical codes run on all architectures.
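A minimal sketch of such an architecture, with invented class names, where the program reads like the mathematics of the problem (here a 1D Poisson equation -u'' = f with Dirichlet boundary conditions):

```python
import numpy as np

class Geometry:
    def __init__(self, x0, x1, n):
        self.x0, self.x1, self.n = x0, x1, n

class BoundaryCondition:
    def __init__(self, left, right):
        self.left, self.right = left, right

class PoissonEquation:
    """-u'' = f on the geometry, with Dirichlet boundary conditions."""
    def __init__(self, geometry, bc, f):
        self.geometry, self.bc, self.f = geometry, bc, f

class FiniteDifferenceSolver:
    def solve(self, eq):
        g = eq.geometry
        x = np.linspace(g.x0, g.x1, g.n)
        h = x[1] - x[0]
        # Standard tridiagonal second-difference operator on interior nodes.
        A = (np.diag(np.full(g.n - 2, 2.0))
             - np.diag(np.ones(g.n - 3), 1)
             - np.diag(np.ones(g.n - 3), -1))
        b = h**2 * eq.f(x[1:-1])
        b[0] += eq.bc.left
        b[-1] += eq.bc.right
        u = np.empty(g.n)
        u[0], u[-1] = eq.bc.left, eq.bc.right
        u[1:-1] = np.linalg.solve(A, b)
        return x, u

# -u'' = 0, u(0) = 0, u(1) = 1 has the exact solution u(x) = x.
eq = PoissonEquation(Geometry(0.0, 1.0, 11), BoundaryCondition(0.0, 1.0),
                     lambda x: 0.0 * x)
x, u = FiniteDifferenceSolver().solve(eq)
print(round(float(u[5]), 3))  # 0.5
```

Swapping `FiniteDifferenceSolver` for another discretization class without touching the equation object is the kind of separation the talk attributes to this style.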

  5. An Object-Oriented Computer Code for Aircraft Engine Weight Estimation

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.; Naylor, Bret A.

    2009-01-01

    Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn Research Center (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within the NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model which provides component flow data such as airflows, temperatures, and pressures, etc., that are required for sizing the components and weight calculations. The tighter integration between the NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It also would facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results as should be the case.

  6. A SUGGESTED CURRICULUM GUIDE FOR ELECTRO-MECHANICAL TECHNOLOGY ORIENTED SPECIFICALLY TO THE COMPUTER AND BUSINESS MACHINE FIELDS. INTERIM REPORT.

    ERIC Educational Resources Information Center

    LESCARBEAU, ROLAND F.; AND OTHERS

    A SUGGESTED POST-SECONDARY CURRICULUM GUIDE FOR ELECTRO-MECHANICAL TECHNOLOGY ORIENTED SPECIFICALLY TO THE COMPUTER AND BUSINESS MACHINE FIELDS WAS DEVELOPED BY A GROUP OF COOPERATING INSTITUTIONS, NOW INCORPORATED AS TECHNICAL EDUCATION CONSORTIUM, INCORPORATED. SPECIFIC NEEDS OF THE COMPUTER AND BUSINESS MACHINE INDUSTRY WERE DETERMINED FROM…

  7. Statistical energy analysis computer program, user's guide

    NASA Technical Reports Server (NTRS)

    Trudell, R. W.; Yano, L. I.

    1981-01-01

A high-frequency random vibration analysis method, the statistical energy analysis (SEA) method, is examined. The SEA method accomplishes high-frequency response prediction for arbitrary structural configurations. A general SEA computer program is described. A summary of SEA theory, example problems of SEA program application, and a complete program listing are presented.
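The core of SEA is a steady-state power balance between coupled subsystems. A two-subsystem sketch, with loss factors, coupling factors, and input powers invented for illustration:

```python
import numpy as np

def sea_two_subsystems(omega, eta1, eta2, eta12, eta21, P1, P2):
    """Steady-state SEA power balance for two coupled subsystems:
    P_i = omega * (eta_i*E_i + eta_ij*E_i - eta_ji*E_j),
    solved as a linear system for the subsystem energies E1, E2."""
    A = omega * np.array([
        [eta1 + eta12, -eta21],
        [-eta12, eta2 + eta21],
    ])
    return np.linalg.solve(A, [P1, P2])

# Drive subsystem 1 only and see energy flow into subsystem 2.
E1, E2 = sea_two_subsystems(omega=1000.0, eta1=0.01, eta2=0.01,
                            eta12=0.002, eta21=0.002, P1=1.0, P2=0.0)
print(E1 > E2 > 0)  # True: the driven subsystem holds more energy
```

Larger SEA models simply extend this balance to an N-by-N system over all subsystem pairs.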

  8. Computer simulation allows goal-oriented mechanical ventilation in acute respiratory distress syndrome

    PubMed Central

    Uttman, Leif; Ögren, Helena; Niklason, Lisbet; Drefeldt, Björn; Jonson, Björn

    2007-01-01

    Introduction To prevent further lung damage in patients with acute respiratory distress syndrome (ARDS), it is important to avoid overdistension and cyclic opening and closing of atelectatic alveoli. Previous studies have demonstrated protective effects of using low tidal volume (VT), moderate positive end-expiratory pressure and low airway pressure. Aspiration of dead space (ASPIDS) allows a reduction in VT by eliminating dead space in the tracheal tube and tubing. We hypothesized that, by applying goal-orientated ventilation based on iterative computer simulation, VT can be reduced at high respiratory rate and much further reduced during ASPIDS without compromising gas exchange or causing high airway pressure. Methods ARDS was induced in eight pigs by surfactant perturbation and ventilator-induced lung injury. Ventilator resetting guided by computer simulation was then performed, aiming at minimal VT, plateau pressure 30 cmH2O and isocapnia, first by only increasing respiratory rate and then by using ASPIDS as well. Results VT decreased from 7.2 ± 0.5 ml/kg to 6.6 ± 0.5 ml/kg as respiratory rate increased from 40 to 64 ± 6 breaths/min, and to 4.0 ± 0.4 ml/kg when ASPIDS was used at 80 ± 6 breaths/min. Measured values of arterial carbon dioxide tension were close to predicted values. Without ASPIDS, total positive end-expiratory pressure and plateau pressure were slightly higher than predicted, and with ASPIDS they were lower than predicted. Conclusion In principle, computer simulation may be used in goal-oriented ventilation in ARDS. Further studies are needed to investigate potential benefits and limitations over extended study periods. PMID:17352801
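Conceptually, the simulation-guided resetting rests on alveolar ventilation, VA = f * (VT - VD). A single-compartment sketch (not the authors' simulator; all numbers invented) of how eliminating tube dead space permits a lower VT at constant PaCO2:

```python
def paco2_after(paco2_before, f0, vt0, vd0, f1, vt1, vd1):
    """Single-compartment gas-exchange model: at constant CO2 production,
    PaCO2 scales inversely with alveolar ventilation VA = f * (VT - VD).
    Volumes in litres, rates in breaths/min, PaCO2 in kPa."""
    va0 = f0 * (vt0 - vd0)
    va1 = f1 * (vt1 - vd1)
    return paco2_before * va0 / va1

# Raising the rate from 40 to 64/min allows a lower VT (0.50 -> 0.40 L)
# with nearly unchanged PaCO2, dead space held at 0.20 L:
print(round(paco2_after(5.3, 40, 0.50, 0.20, 64, 0.40, 0.20), 2))  # 4.97
```

Reducing VD (as ASPIDS does) raises VA at a given VT, which is why VT can fall much further once dead space is aspirated.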

  9. Computing free energy hypersurfaces for anisotropic intermolecular associations.

    PubMed

    Strümpfer, Johan; Naidoo, Kevin J

    2010-01-30

We previously used an adaptive reaction coordinate force biasing method for calculating the free energy of conformation (Naidoo and Brady, J Am Chem Soc 1999, 121, 2244) and chemical reactions (Rajamani et al., J Comput Chem 2003, 24, 1775), amongst others. Here, we describe a generalized version able to produce free energies in multiple dimensions, descriptively named the free energies from adaptive reaction coordinate forces method. To illustrate it, we describe how we calculate a multidimensional intermolecular orientational free energy, which can be used to investigate complex systems such as protein conformation and liquids. This multidimensional intermolecular free energy W(r, θ1, θ2, φ) provides a measure of orientationally dependent interactions that is appropriate for applications in systems that inherently have molecular anisotropic features. It is a highly informative free energy volume, which can be used to parameterize key terms such as the Gay-Berne intermolecular potential in coarse grain simulations. To demonstrate the value of the information gained from the W(r, θ1, θ2, φ) hypersurfaces we calculated them for the TIP3P, TIP4P, and TIP5P dimer water models in vacuum. A comparison with a commonly used one-dimensional distance free energy profile is made to illustrate the significant increase in configurational information. The W(r) plots show little difference between the three models, while the W(r, θ1, θ2, φ) hypersurfaces reveal the underlying energetic reasons why these potentials reproduce tetrahedrality in the condensed phase so differently from each other. PMID:19462397
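Free energy surfaces of this kind are commonly recovered from sampled probability densities via W = -kT ln P. A one-dimensional sketch in units of kT, using synthetic Gaussian samples in place of simulation data; the same construction extends bin-by-bin to W(r, θ1, θ2, φ):

```python
import numpy as np

def free_energy_surface(samples, bins):
    """W = -ln P(x) in units of kT from sampled coordinates, shifted so
    the minimum of the surface is zero."""
    hist, edges = np.histogram(samples, bins=bins, density=True)
    with np.errstate(divide="ignore"):  # empty bins -> W = +inf
        w = -np.log(hist)
    return w - w.min(), edges

rng = np.random.default_rng(0)
# A Gaussian coordinate has a harmonic free energy, W(x) ~ x^2 / 2.
w, edges = free_energy_surface(rng.normal(0.0, 1.0, 100_000), bins=41)
print(w.min(), bool(w[len(w) // 2] < 0.2))  # 0.0 True: flat near the minimum
```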

  10. Computational design and optimization of energy materials

    NASA Astrophysics Data System (ADS)

    Chan, Maria

The use of density functional theory (DFT) to understand and improve energy materials for diverse applications - including energy storage, thermal management, catalysis, and photovoltaics - is widespread. The further step of using high-throughput DFT calculations to design materials has led to an acceleration in materials discovery and development. Due to various limitations in DFT, including accuracy and computational cost, however, it is important to leverage effective models and, in some cases, experimental information to aid the design process. In this talk, I will discuss efforts in the design and optimization of energy materials using a combination of effective models, DFT, machine learning, and experimental information.

  11. PNNL streamlines energy-guzzling computers

    SciTech Connect

    Beckman, Mary T.; Marquez, Andres

    2008-10-27

    In a room the size of a garage, two rows of six-foot-tall racks holding supercomputer hard drives sit back-to-back. Thin tubes and wires snake off the hard drives, slithering into the corners. Stepping between the rows, a rush of heat whips around you -- the air from fans blowing off processing heat. But walk farther in, between the next racks of hard drives, and the temperature drops noticeably. These drives are being cooled by a non-conducting liquid that runs right over the hardworking processors. The liquid carries the heat away in tubes, saving the air a few degrees. This is the Energy Smart Data Center at Pacific Northwest National Laboratory. The bigger, faster, and meatier supercomputers get, the more energy they consume. PNNL's Andres Marquez has developed this test bed to learn how to train the behemoths in energy efficiency. The work will help supercomputers perform better as well. Processors have to keep cool or suffer from "thermal throttling," says Marquez. "That's the performance threshold where the computer is too hot to run well. That threshold is an industry secret." The center at EMSL, DOE's national scientific user facility at PNNL, harbors several ways of experimenting with energy usage. For example, the room's air conditioning is isolated from the rest of EMSL -- pipes running beneath the floor carry temperature-controlled water through heat exchangers to cooling towers outside. "We can test whether it's more energy efficient to cool directly on the processing chips or out in the water tower," says Marquez. The hard drives feed energy and temperature data to a network server running specially designed software that controls and monitors the data center. To test the center’s limits, the team runs the processors flat out – not only on carefully controlled test programs in the Energy Smart computers, but also on real world software from other EMSL research, such as regional weather forecasting models. Marquez's group is also developing "power

  12. Computing local edge probability in natural scenes from a population of oriented simple cells

    PubMed Central

    Ramachandra, Chaithanya A.; Mel, Bartlett W.

    2013-01-01

    A key computation in visual cortex is the extraction of object contours, where the first stage of processing is commonly attributed to V1 simple cells. The standard model of a simple cell—an oriented linear filter followed by a divisive normalization—fits a wide variety of physiological data, but is a poor performing local edge detector when applied to natural images. The brain's ability to finely discriminate edges from nonedges therefore likely depends on information encoded by local simple cell populations. To gain insight into the corresponding decoding problem, we used Bayes's rule to calculate edge probability at a given location/orientation in an image based on a surrounding filter population. Beginning with a set of ∼ 100 filters, we culled out a subset that were maximally informative about edges, and minimally correlated to allow factorization of the joint on- and off-edge likelihood functions. Key features of our approach include a new, efficient method for ground-truth edge labeling, an emphasis on achieving filter independence, including a focus on filters in the region orthogonal rather than tangential to an edge, and the use of a customized parametric model to represent the individual filter likelihood functions. The resulting population-based edge detector has zero parameters, calculates edge probability based on a sum of surrounding filter influences, is much more sharply tuned than the underlying linear filters, and effectively captures fine-scale edge structure in natural scenes. Our findings predict nonmonotonic interactions between cells in visual cortex, wherein a cell may for certain stimuli excite and for other stimuli inhibit the same neighboring cell, depending on the two cells' relative offsets in position and orientation, and their relative activation levels. PMID:24381295
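    The decoding step described above (factorized likelihoods combined through Bayes's rule, so that edge probability reduces to a sum of per-filter influences) can be sketched as follows. The Gaussian likelihoods, filter means, and prior used here are illustrative stand-ins for the paper's fitted parametric model:

```python
import math

def edge_probability(responses, mu_on, mu_off, sd=0.3, prior_edge=0.1):
    """Naive-Bayes edge probability from a population of filter responses.

    Assumes independent filters with Gaussian on-edge/off-edge likelihoods
    (hypothetical; the paper fits a customized parametric model), so the
    joint likelihood factorizes and the decision reduces to a sum of
    per-filter log-likelihood ratios plus the log prior odds.
    """
    log_lr = 0.0
    for r, m_on, m_off in zip(responses, mu_on, mu_off):
        # log N(r; m_on, sd) - log N(r; m_off, sd); normalizers cancel
        log_lr += (-0.5 * ((r - m_on) / sd) ** 2
                   + 0.5 * ((r - m_off) / sd) ** 2)
    log_odds = log_lr + math.log(prior_edge / (1.0 - prior_edge))
    return 1.0 / (1.0 + math.exp(-log_odds))

# Responses near the on-edge means should yield a high edge probability
p = edge_probability([0.9, 0.8, 0.7],
                     mu_on=[1.0, 0.9, 0.8],
                     mu_off=[0.1, 0.1, 0.1])
```

    Because the filters are assumed independent, evidence from each one simply adds in the log domain, which is what makes the population decoder "a sum of surrounding filter influences".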

  13. Grid Computing in High Energy Physics

    NASA Astrophysics Data System (ADS)

    Avery, Paul

    2004-09-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software resources, regardless of location); (4) collaboration (providing tools that allow members full and fair access to all collaboration resources and enable distributed teams to work effectively, irrespective of location); and (5) education, training and outreach (providing resources and mechanisms for training students and for communicating important information to the public). It is believed that computing infrastructures based on Data Grids and optical networks can meet these challenges and can offer data intensive enterprises in high energy physics and elsewhere a comprehensive, scalable framework for collaboration and resource sharing. A number of Data Grid projects have been underway since 1999. Interestingly, the most exciting and far ranging of these projects are led by collaborations of high energy physicists, computer scientists and scientists from other disciplines in support of experiments with massive, near-term data needs. I review progress in this

  14. A First Course in Computational Physics and Object-Oriented Programming with C++

    NASA Astrophysics Data System (ADS)

    Yevick, David

    2005-03-01

    Part I. Basic C++ Programming: 1. Introduction; 2. Installing and running the Dev-C++ programming environment; 3. Introduction to computer and software architecture; 4. Fundamental concepts; 5. Writing a first program; 6. An introduction to object-oriented analysis; 7. C++ object-oriented programming syntax; 8. Control logic and iteration; 9. Basic function properties; 10. Arrays and matrices; 11. Input and output streams; Part II. Numerical Analysis: 12. Numerical error analysis - derivatives; 13. Integration; 14. Root finding procedures; 15. Differential equations; 16. Linear algebra; Part III. Pointers, References and Dynamic Memory Allocation: 17. References; 18. Pointers and dynamic memory allocation; 19. Advanced memory management; 20. The static keyword, multiple and virtual inheritance, templates and the STL library; 21. Program optimization in C++; Part IV. Advanced Numerical Examples: 22. Monte-Carlo methods; 23. Parabolic partial differential equation solvers; Part V. Appendices: Appendix A. Overview of MATLAB; Appendix B. The Borland C++ compiler; Appendix C. The Linux/Windows g++ compiler and profiler; Appendix D. Calling FORTRAN programs from C++; Appendix E. C++ coding standard; References.

  15. Activating Teacher Energy Through "Inquiry-Oriented" Teacher Education.

    ERIC Educational Resources Information Center

    Zeichner, Kenneth M.

    In an inquiry-oriented teacher education program, prospective teachers are encouraged to examine the origins and consequences of their actions and the settings in which they work. Many of the characteristics of the elementary student teaching program at the University of Wisconsin at Madison are similar to this approach. During the students' 15-week…

  16. Computed potential energy surfaces for chemical reactions

    NASA Technical Reports Server (NTRS)

    Walch, Stephen P.

    1994-01-01

    Quantum mechanical methods have been used to compute potential energy surfaces for chemical reactions. The reactions studied were among those believed to be important to the NASP and HSR programs and included the recombination of two H atoms with several different third bodies; the reactions in the thermal Zeldovich mechanism; the reactions of H atom with O2, N2, and NO; reactions involved in the thermal De-NOx process; and the reaction of CH(²Π) with N2 (leading to 'prompt NO'). These potential energy surfaces have been used to compute reaction rate constants and rates of unimolecular decomposition. An additional application was the calculation of transport properties of gases using a semiclassical approximation (and, in the case of interactions involving hydrogen, inclusion of quantum mechanical effects).
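    As a minimal illustration of how a computed barrier feeds into a rate constant, conventional transition state theory (without the tunneling corrections and partition-function ratios that the actual calculations include) gives:

```python
import math

# Physical constants (SI)
KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def tst_rate(barrier_kcal_mol, temp_k):
    """Conventional transition-state-theory rate constant (s^-1) for a
    unimolecular step: k = (kB*T/h) * exp(-Ea / (R*T)). Partition-function
    ratios are set to 1 here; real calculations evaluate them from the
    computed surface."""
    ea = barrier_kcal_mol * 4184.0  # kcal/mol -> J/mol
    return (KB * temp_k / H) * math.exp(-ea / (R * temp_k))

# Hypothetical 8.5 kcal/mol barrier evaluated at 1000 K
k = tst_rate(8.5, 1000.0)
```

    The exponential dependence on the barrier height is why accurate ab initio energetics matter so much for the predicted rates.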

  17. Energy consumption of personal computer workstations

    SciTech Connect

    Szydlowski, R.

    1995-12-01

    An important question for consideration is, "Are office equipment plug loads increasing?" Data taken by Pacific Northwest Laboratory (PNL) in May 1990 from the Forrestal Building, the U.S. Department of Energy (DOE) headquarters in Washington, DC, are disaggregated by end use, including plug loads, lights, HVAC, large dedicated computers, and elevators. This study was repeated in November 1993, and there was a 3.8%/yr increase in plug loads in a building of approximately 1.75 million sq ft. Subsequent to this measurement, the plug loads were measured continuously by PNL over a 10-month period from November 1993 through September 1994, and the results showed another increase of 3.9%, nearly the same rate as in the previous three years. The energy use of personal computers (PCs) was measured by setting up a mobile outlet module (MOM), a replacement for a strip outlet, with current transformers (CTs) and potential transformers. The MOM was connected to a set of dataloggers, allowing for the monitoring of up to four PCs at a time. The PCs were plugged in through the MOM to a C180 datalogger, the data were collected to a laptop, and the individual 24-hour profiles were then reduced to a standard profile. About 200 workstations were studied, including the PC, monitor, printer, modem, external disk drives, and CAD systems with their own peripherals. An additional collection of printers, photocopiers, facsimile machines, and monitor controllers was also monitored. The end result was a set of profiles for energy use during working hours for five different buildings. There was a wide variation in these profiles from daytime to nighttime, since 16 to 35% of the computers remain on at night. The reasons computers are left on at night vary, as do the attitudes of the people who use them. Another area of energy consumption concern is the type of PC, such as IBM- or Macintosh-compatible, and there are many different kinds of workstations.

  18. Online object oriented Monte Carlo computational tool for the needs of biomedical optics

    PubMed Central

    Doronin, Alexander; Meglinski, Igor

    2011-01-01

    Conceptual engineering design and optimization of laser-based imaging techniques and optical diagnostic systems used in the field of biomedical optics requires a clear understanding of the light-tissue interaction and peculiarities of localization of the detected optical radiation within the medium. The description of photon migration within the turbid tissue-like media is based on the concept of radiative transfer that forms a basis of Monte Carlo (MC) modeling. An opportunity of direct simulation of influence of structural variations of biological tissues on the probing light makes MC a primary tool for biomedical optics and optical engineering. Due to the diversity of optical modalities utilizing different properties of light and mechanisms of light-tissue interactions a new MC code is typically required to be developed for the particular diagnostic application. In current paper introducing an object oriented concept of MC modeling and utilizing modern web applications we present the generalized online computational tool suitable for the major applications in biophotonics. The computation is supported by NVIDEA CUDA Graphics Processing Unit providing acceleration of modeling up to 340 times. PMID:21991540
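    At its core, MC modeling of photon migration samples exponentially distributed free paths between interaction events. A deliberately minimal sketch (absorption-only slab, illustrative coefficients, nothing like the paper's GPU implementation) shows such an estimate converging to the analytic Beer-Lambert value:

```python
import math
import random

def mc_transmission(mu_a, thickness, n_photons=100000, seed=42):
    """Toy Monte Carlo: fraction of photons traversing an absorbing
    (non-scattering) slab. Free paths are exponential with mean 1/mu_a,
    so the estimate should approach the Beer-Lambert value exp(-mu_a*d)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_photons)
               if -math.log(rng.random()) / mu_a > thickness)
    return hits / n_photons

# mu_a * thickness = 1, so transmission should be close to exp(-1)
t = mc_transmission(mu_a=0.5, thickness=2.0)
```

    Full biomedical-optics codes add scattering, anisotropy, boundaries, and detector geometry on top of this same path-sampling kernel, which is the part that parallelizes well on a GPU.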

  19. Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    The following reports are presented on this project: a first-year progress report on Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; a second-year progress report on Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; An Extensible, Interchangeable and Sharable Database Model for Improving Multidisciplinary Aircraft Design; Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration; and Improving the Aircraft Design Process Using Web-based Modeling and Simulation.

  20. Energy consumption of personal computer workstations

    SciTech Connect

    Szydlowski, R.F.; Chvala, W.D. Jr.

    1994-02-01

    The explosive growth of the information age has had a profound effect on the appearance of today's office. Although the telephone still remains an important part of the information exchange and processing system within an office, other electronic devices are now considered required equipment within this environment. This office automation equipment includes facsimile machines, photocopiers, personal computers, printers, modems, and other peripherals. A recent estimate of the installed base indicated that 42 million personal computers and 7.3 million printers are in place, consuming 18.2 billion kWh/yr, and this installed base is growing (Luhn 1992). From a productivity standpoint, it can be argued that this equipment greatly improves the efficiency of those working in the office. But of primary concern to energy system designers, building managers, and electric utilities is the fact that this equipment requires electric energy. Although the impact of each incremental piece of equipment is small, the installation of thousands of devices per building has made office automation equipment the major contributor to electric consumption and demand growth in commercial buildings. Personal computers and associated equipment are the dominant part of office automation equipment. In some cases, this electric demand growth has caused office buildings' electric and cooling systems to overload.

  1. Orientation of Undergraduates toward Careers in the Computer and Information Sciences: Gender, Self-Efficacy and Social Support

    ERIC Educational Resources Information Center

    Rosson, Mary Beth; Carroll, John M.; Sinha, Hansa

    2011-01-01

    Researchers have been working to understand the factors that may be contributing to low rates of participation by women and other minorities in the computer and information sciences (CIS). We describe a multivariate investigation of male and female university students' orientation to CIS careers. We focus on the roles of "self-efficacy" and…

  2. The Effect of a Graph-Oriented Computer-Assisted Project-Based Learning Environment on Argumentation Skills

    ERIC Educational Resources Information Center

    Hsu, P. -S.; Van Dyke, M.; Chen, Y.; Smith, T. J.

    2015-01-01

    The purpose of this quasi-experimental study was to explore how seventh graders in a suburban school in the United States developed argumentation skills and science knowledge in a project-based learning environment that incorporated a graph-oriented, computer-assisted application. A total of 54 students (three classes) comprised this treatment…

  3. Computational materials design for energy applications

    NASA Astrophysics Data System (ADS)

    Ozolins, Vidvuds

    2013-03-01

    General adoption of sustainable energy technologies depends on the discovery and development of new high-performance materials. For instance, waste heat recovery and electricity generation via the solar thermal route require bulk thermoelectrics with a high figure of merit (ZT) and thermal stability at high temperatures. Energy recovery applications (e.g., regenerative braking) call for the development of rapidly chargeable systems for electrical energy storage, such as electrochemical supercapacitors. Similarly, use of hydrogen as vehicular fuel depends on the ability to store hydrogen at high volumetric and gravimetric densities, as well as on the ability to extract it at ambient temperatures at sufficiently rapid rates. We will discuss how first-principles computational methods based on quantum mechanics and statistical physics can drive the understanding, improvement and prediction of new energy materials. We will cover prediction and experimental verification of new earth-abundant thermoelectrics, transition metal oxides for electrochemical supercapacitors, and kinetics of mass transport in complex metal hydrides. Research has been supported by the US Department of Energy under grant Nos. DE-SC0001342, DE-SC0001054, DE-FG02-07ER46433, and DE-FC36-08GO18136.

  4. Prediction of the orientations of adsorbed protein using an empirical energy function with implicit solvation.

    PubMed

    Sun, Yu; Welsh, William J; Latour, Robert A

    2005-06-01

    When simulating protein adsorption behavior, decisions must first be made regarding how the protein should be oriented on the surface. To address this problem, we have developed a molecular simulation program that combines an empirical adsorption free energy function with an efficient configurational search method to calculate orientation-dependent adsorption free energies between proteins and functionalized surfaces. The configuration space is searched systematically using a quaternion rotation technique, and the adsorption free energy is evaluated using an empirical energy function with an efficient grid-based calculational method. In this paper, the developed method is applied to analyze the preferred orientations of a model protein, lysozyme, on various functionalized alkanethiol self-assembled monolayer (SAM) surfaces by the generation of contour graphs that relate adsorption free energy to adsorbed orientation, and the results are compared with experimental observations. As anticipated, the adsorbed orientation of lysozyme is predicted to be dependent on the discrete organization of the functional groups presented by the surface. Lysozyme, which is a positively charged protein, is predicted to adsorb on its 'side' on both hydrophobic and negatively charged surfaces. On surfaces with discrete positively charged sites, attractive interaction energies can also be obtained due to the presence of discrete local negative charges present on the lysozyme surface. In this case, 'end-on' orientations are preferred. Additionally, SAM surface models with mixed functionality suggest that the interactions between lysozyme and surfaces could be greatly enhanced if individual surface functional groups are able to access the catalytic cleft region of lysozyme, similar to ligand-receptor interactions. The contour graphs generated by this method can be used to identify low-energy orientations that can then be used as starting points for further simulations to investigate
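    The systematic orientation search can be illustrated with a deliberately tiny model: a two-bead "protein" rotated over a grid of angles above a uniformly charged plane, keeping the lowest-energy orientation. The energy function, charges, and one-dimensional angle sweep are all simplifications of the paper's quaternion-based, grid-accelerated method:

```python
import math

# Toy "protein": beads as (x, y, z, charge); hypothetical values with a
# positive end and a negative end, mimicking a charge dipole
beads = [(0.0, 0.0, 1.0, +1.0), (0.0, 0.0, -1.0, -1.0)]

def rotate_x(bead, angle):
    """Rotate a bead about the x-axis by the given angle (radians)."""
    x, y, z, q = bead
    return (x,
            y * math.cos(angle) - z * math.sin(angle),
            y * math.sin(angle) + z * math.cos(angle),
            q)

def adsorption_energy(angle, surface_charge=-1.0, height=2.0):
    """Empirical energy: each bead interacts with a charged plane at
    z = -height; the interaction scales as q_bead * q_surface / distance."""
    energy = 0.0
    for bead in beads:
        x, y, z, q = rotate_x(bead, angle)
        energy += q * surface_charge / (z + height)
    return energy

# Systematic scan over orientation, keeping the lowest-energy angle
angles = [i * 2.0 * math.pi / 72 for i in range(72)]
best = min(angles, key=adsorption_energy)
```

    The scan correctly turns the positive end of the dipole toward the negatively charged surface; the real method does the same search in full 3-D orientation space with a far richer free energy function.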

  5. Orienting the Neighborhood: A Subdivision Energy Analysis Tool; Preprint

    SciTech Connect

    Christensen, C.; Horowitz, S.

    2008-07-01

    This paper describes a new computerized Subdivision Energy Analysis Tool being developed to allow users to interactively design subdivision street layouts while receiving feedback about energy impacts based on user-specified building design variants and availability of roof surfaces for photovoltaic and solar water heating systems.

  6. Towards sustainable infrastructure management: knowledge-based service-oriented computing framework for visual analytics

    NASA Astrophysics Data System (ADS)

    Vatcha, Rashna; Lee, Seok-Won; Murty, Ajeet; Tolone, William; Wang, Xiaoyu; Dou, Wenwen; Chang, Remco; Ribarsky, William; Liu, Wanqiu; Chen, Shen-en; Hauser, Edd

    2009-05-01

    Infrastructure management and its associated processes are complex to understand and perform, which makes efficient, effective, and informed decisions hard to reach. Management involves a multi-faceted operation that requires robust data fusion, visualization, and decision making. In order to protect and build sustainable critical assets, we present our on-going multi-disciplinary large-scale project that establishes the Integrated Remote Sensing and Visualization (IRSV) system with a focus on supporting bridge structure inspection and management. This project involves specific expertise from civil engineers, computer scientists, geographers, and real-world practitioners from industry, local and federal government agencies. IRSV is being designed to accommodate the essential needs from the following aspects: 1) Better understanding and enforcement of a complex inspection process that can bridge the gap between evidence gathering and decision making through the implementation of an ontological knowledge engineering system; 2) Aggregation, representation and fusion of complex multi-layered heterogeneous data (i.e., infrared imaging, aerial photos, and ground-mounted LIDAR) with domain application knowledge to support a machine-understandable recommendation system; 3) Robust visualization techniques with large-scale analytical and interactive visualizations that support users' decision making; and 4) Integration of these needs through the flexible Service-oriented Architecture (SOA) framework to compose and provide services on demand. IRSV is expected to serve as a management and data visualization tool for construction deliverable assurance and infrastructure monitoring both periodically (annually, monthly, or even daily if needed) as well as after extreme events.

  7. Three-dimensional modeling and computational analysis of the human cornea considering distributed collagen fibril orientations.

    PubMed

    Pandolfi, Anna; Holzapfel, Gerhard A

    2008-12-01

    Experimental tests on human corneas reveal distinguished reinforcing collagen lamellar structures that may be well described by a structural constitutive model considering distributed collagen fibril orientations along the superior-inferior and the nasal-temporal meridians. A proper interplay between the material structure and the geometry guarantees the refractive function and defines the refractive properties of the cornea. We propose a three-dimensional computational model for the human cornea that is able to provide the refractive power by analyzing the structural mechanical response with the nonlinear regime and the effect the intraocular pressure has. For an assigned unloaded geometry we show how the distribution of the von Mises stress at the top surface of the cornea and through the corneal thickness and the refractive power depend on the material properties and the fibril dispersion. We conclude that a model for the human cornea must not disregard the peculiar collagen fibrillar structure, which equips the cornea with the unique biophysical, mechanical, and optical properties. PMID:19045535

  8. Computational fluid dynamics study of swimmer's hand velocity, orientation, and shape: contributions to hydrodynamics.

    PubMed

    Bilinauskaite, Milda; Mantha, Vishveshwar Rajendra; Rouboa, Abel Ilah; Ziliukas, Pranas; Silva, Antonio Jose

    2013-01-01

    The aim of this paper is to determine the hydrodynamic characteristics of scanned models of a swimmer's hand for various combinations of angle of attack, sweepback angle, and hand shape and velocity, simulating separate underwater arm-stroke phases of freestyle (front crawl) swimming. Four realistic 3D models of a swimmer's hand, corresponding to different combinations of separated/closed finger positions, were used to simulate different underwater front crawl phases. The fluid flow was simulated using FLUENT (ANSYS, PA, USA). Drag force and drag coefficient were calculated using computational fluid dynamics (CFD) in steady state. Results showed that the drag force and coefficient varied with flow velocity for all hand shapes, and variation was observed across hand positions corresponding to different stroke phases. The models of the hand with the thumb adducted and abducted generated the highest drag forces and drag coefficients. The current study suggests that realistic variation of both orientation angles led to higher values of drag, lift, and resultant coefficients and forces. To augment the resultant force, which drives the swimmer's propulsion, the swimmer should concentrate on effectively optimising the achievable hand areas during the crucial propulsive phases. PMID:23691493
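    The drag coefficient reported by such CFD studies is the standard nondimensionalization of the computed force. A sketch with illustrative numbers (not the paper's measured values) makes the relation explicit:

```python
def drag_coefficient(force_n, velocity_ms, area_m2, rho=998.0):
    """Drag coefficient from a computed or measured drag force:
    Cd = 2F / (rho * v^2 * A). rho defaults to water near 20 C; the
    force, velocity, and projected hand area here are illustrative."""
    return 2.0 * force_n / (rho * velocity_ms ** 2 * area_m2)

# Hypothetical hand: 40 N of drag at 2 m/s with 0.015 m^2 projected area
cd = drag_coefficient(force_n=40.0, velocity_ms=2.0, area_m2=0.015)
```

    Nondimensionalizing this way is what lets results at different flow velocities and hand shapes be compared on a common scale.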

  9. Input-output oriented computation algorithms for the control of large flexible structures

    NASA Technical Reports Server (NTRS)

    Minto, K. D.

    1989-01-01

    An overview is given of work in progress aimed at developing computational algorithms addressing two important aspects in the control of large flexible space structures; namely, the selection and placement of sensors and actuators, and the resulting multivariable control law design problem. The issue of sensor/actuator set selection is particularly crucial to obtaining a satisfactory control design, as clearly a poor choice will inherently limit the degree to which good control can be achieved. With regard to control law design, the researchers are driven by concerns stemming from the practical issues associated with eventual implementation of multivariable control laws, such as reliability, limit protection, multimode operation, sampling rate selection, processor throughput, etc. Naturally, the burden imposed by dealing with these aspects of the problem can be reduced by ensuring that the complexity of the compensator is minimized. Our approach to these problems is based on extensions to input/output oriented techniques that have proven useful in the design of multivariable control systems for aircraft engines. In particular, researchers are exploring the use of relative gain analysis and the condition number as a means of quantifying the process of sensor/actuator selection and placement for shape control of a large space platform.
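    Relative gain analysis, mentioned above as a way to quantify sensor/actuator selection, computes the relative gain array RGA = G .* inv(G)^T elementwise from the steady-state gain matrix. A minimal 2x2 sketch with hypothetical plant gains:

```python
def rga_2x2(g):
    """Relative gain array of a 2x2 steady-state gain matrix G:
    RGA = G .* inv(G).T (elementwise product). Diagonal entries near 1
    suggest good input-output pairings; entries far from 1 (or negative)
    flag strong interaction. Gains are hypothetical."""
    (a, b), (c, d) = g
    det = a * d - b * c
    # inv(G) = (1/det) [[d, -b], [-c, a]]; its transpose, elementwise:
    inv_t = [[d / det, -c / det], [-b / det, a / det]]
    return [[g[i][j] * inv_t[i][j] for j in range(2)] for i in range(2)]

rga = rga_2x2([[2.0, 0.5], [0.5, 1.0]])
```

    Each row and column of an RGA sums to 1, so a diagonal entry well above 1 (or a negative off-diagonal entry) immediately signals a loop interaction that a pairing choice must account for.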

  10. A novel task-oriented optimal design for P300-based brain-computer interfaces.

    PubMed

    Zhou, Zongtan; Yin, Erwei; Liu, Yang; Jiang, Jun; Hu, Dewen

    2014-10-01

    Objective. The number of items of a P300-based brain-computer interface (BCI) should be adjustable in accordance with the requirements of the specific tasks. To address this issue, we propose a novel task-oriented optimal approach aimed at increasing the performance of general P300 BCIs with different numbers of items. Approach. First, we proposed a stimulus presentation with variable dimensions (VD) paradigm as a generalization of the conventional single-character (SC) and row-column (RC) stimulus paradigms. Furthermore, an embedding design approach was employed for any given number of items. Finally, based on the score-P model of each subject, the VD flash pattern was selected by a linear interpolation approach for a certain task. Main results. The results indicate that the optimal BCI design consistently outperforms the conventional approaches, i.e., the SC and RC paradigms. Specifically, there is significant improvement in the practical information transfer rate for a large number of items. Significance. The results suggest that the proposed optimal approach would provide useful guidance in the practical design of general P300-based BCIs. PMID:25080373
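    The "practical information transfer rate" used to compare such paradigms is commonly the Wolpaw ITR. A sketch (assuming this standard definition, with hypothetical accuracy and trial timing) shows why a larger item set can pay off when accuracy holds up:

```python
import math

def itr_bits_per_min(n_items, accuracy, trial_sec):
    """Wolpaw information transfer rate (bits/min) for an N-item speller:
    bits/selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)),
    scaled by selections per minute. Inputs here are hypothetical."""
    p, n = accuracy, n_items
    bits = math.log2(n)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_sec

# Same accuracy and trial time, different numbers of items
small = itr_bits_per_min(36, 0.9, 4.0)
large = itr_bits_per_min(72, 0.9, 4.0)
```

    Doubling the item count raises the per-selection information content, so at equal accuracy the 72-item speller transfers more bits per minute; in practice accuracy tends to drop with more items, which is the trade-off the optimal design navigates.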

  11. Computational Fluid Dynamics Study of Swimmer's Hand Velocity, Orientation, and Shape: Contributions to Hydrodynamics

    PubMed Central

    Bilinauskaite, Milda; Mantha, Vishveshwar Rajendra; Rouboa, Abel Ilah; Ziliukas, Pranas; Silva, Antonio Jose

    2013-01-01

    The aim of this paper is to determine the hydrodynamic characteristics of scanned models of a swimmer's hand for various combinations of angle of attack, sweepback angle, and hand shape and velocity, simulating separate underwater arm-stroke phases of freestyle (front crawl) swimming. Four realistic 3D models of a swimmer's hand, corresponding to different combinations of separated/closed finger positions, were used to simulate different underwater front crawl phases. The fluid flow was simulated using FLUENT (ANSYS, PA, USA). Drag force and drag coefficient were calculated using computational fluid dynamics (CFD) in steady state. Results showed that the drag force and coefficient varied with flow velocity for all hand shapes, and variation was observed across hand positions corresponding to different stroke phases. The models of the hand with the thumb adducted and abducted generated the highest drag forces and drag coefficients. The current study suggests that realistic variation of both orientation angles led to higher values of drag, lift, and resultant coefficients and forces. To augment the resultant force, which drives the swimmer's propulsion, the swimmer should concentrate on effectively optimising the achievable hand areas during the crucial propulsive phases. PMID:23691493

  12. A novel task-oriented optimal design for P300-based brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Zhou, Zongtan; Yin, Erwei; Liu, Yang; Jiang, Jun; Hu, Dewen

    2014-10-01

    Objective. The number of items of a P300-based brain-computer interface (BCI) should be adjustable in accordance with the requirements of the specific tasks. To address this issue, we propose a novel task-oriented optimal approach aimed at increasing the performance of general P300 BCIs with different numbers of items. Approach. First, we proposed a stimulus presentation with variable dimensions (VD) paradigm as a generalization of the conventional single-character (SC) and row-column (RC) stimulus paradigms. Furthermore, an embedding design approach was employed for any given number of items. Finally, based on the score-P model of each subject, the VD flash pattern was selected by a linear interpolation approach for a certain task. Main results. The results indicate that the optimal BCI design consistently outperforms the conventional approaches, i.e., the SC and RC paradigms. Specifically, there is significant improvement in the practical information transfer rate for a large number of items. Significance. The results suggest that the proposed optimal approach would provide useful guidance in the practical design of general P300-based BCIs.

  13. Three energy computed tomography with synchrotron radiation

    SciTech Connect

    Menk, R.H.; Thomlinson, W.; Zhong, Z.; Charvet, A.M.; Arfelli, F. |; Chapman, L.

    1997-09-01

    Preliminary experiments for digital subtraction computed tomography (CT) at the K-edge of iodine (33.1 keV) were carried out at SMERF (Synchrotron Medical Research Facility X17B2) at the National Synchrotron Light Source, Brookhaven National Laboratory. The major goal was to evaluate the availability of this kind of imaging for in vivo neurological studies. Using the transvenous coronary angiography system, CT images of various samples and phantoms were taken simultaneously at two slightly different energies bracketing the K-absorption edge of iodine. The logarithmic subtraction of the two images resulted in the contrast enhancement of iodine filled structures. An additional CT image was taken at 99.57 keV (second harmonic of the fundamental wave). The third energy allowed the calculation of absolute iodine, tissue and bone images by means of a matrix inversion. A spatial resolution of 0.8 LP/mm was measured in single energy images and iodine concentrations down to 0.082 mg/ml in a 1/4 diameter detail were visible in the reconstructed subtraction image.
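    The three-energy decomposition works because the attenuation measured at each energy is a linear combination of the basis materials' contributions, giving a 3x3 system that can be inverted for iodine, tissue, and bone. A sketch with illustrative attenuation coefficients (note the iodine column jumping across the K-edge; all numbers are made up for the example, not the paper's calibration):

```python
def solve3(a, b):
    """Solve a 3x3 linear system a x = b by Cramer's rule."""
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(a)
    xs = []
    for col in range(3):
        m = [row[:] for row in a]
        for r in range(3):
            m[r][col] = b[r]
        xs.append(det3(m) / d)
    return xs

# Rows: energies (just below the iodine K-edge, just above it, 99.57 keV)
# Columns: attenuation per unit concentration of iodine, tissue, bone
A = [[6.6, 0.35, 0.58],
     [35.8, 0.36, 0.60],
     [1.9, 0.17, 0.19]]
# Synthesize measurements from known concentrations, then recover them
measured = [A[k][0] * 0.002 + A[k][1] * 1.0 + A[k][2] * 0.1
            for k in range(3)]
iodine, tissue, bone = solve3(A, measured)
```

    The large iodine jump between the first two rows is what makes the system well conditioned: without the K-edge contrast, the matrix would be nearly singular and small measurement noise would swamp the recovered iodine concentration.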

  14. Quantitatively identical orientation-dependent ionization energy and electron affinity of diindenoperylene

    SciTech Connect

    Han, W. N.; Yonezawa, K.; Makino, R.; Kato, K.; Hinderhofer, A.; Ueno, N.; Kera, S.; Murdey, R.; Shiraishi, R.; Yoshida, H.; Sato, N.

    2013-12-16

    Molecular orientation dependences of the ionization energy (IE) and the electron affinity (EA) of diindenoperylene (DIP) films were studied by using ultraviolet photoemission spectroscopy and inverse photoemission spectroscopy. The molecular orientation was controlled by preparing the DIP films on graphite and SiO2 substrates. The threshold IE and EA of DIP thin films were determined to be 5.81 and 3.53 eV for the film of flat-lying DIP orientation, respectively, and 5.38 and 3.13 eV for the film of standing DIP orientation, respectively. The result indicates that the IE and EA for the flat-lying film are larger by 0.4 eV and the frontier orbital states shift away from the vacuum level compared to the standing film. This rigid energy shift is ascribed to a surface-electrostatic potential produced by the intramolecular polar bond (>C⁻-H⁺) for standing orientation and π-electron tailing to vacuum for flat-lying orientation.

  15. Excitation energy migration in uniaxially oriented polymer films: A comparison between strongly and weakly organized systems

    NASA Astrophysics Data System (ADS)

    Bojarski, P.; Synak, A.; Kułak, L.; Baszanowska, E.; Kubicki, A.; Grajek, H.; Szabelski, M.

    2006-04-01

    The mechanism of multistep excitation energy migration in uniaxially oriented polymer films is discussed for strongly and weakly orientating dyes in a poly(vinyl alcohol) matrix. The comparison between the two types of systems is based on concentration depolarization of fluorescence, Monte Carlo simulations and linear dichroism data. It is found that the alignment of the transition dipole moments of fluorophores in the ordered matrix relative to the direction of polymer stretching has a strong effect on the concentration depolarization of fluorescence. In ordered matrices of flavomononucleotide and rhodamine 6G, concentration depolarization of fluorescence remains quite strong, whereas for linear carbocyanines it is very weak despite effective energy migration.

  16. A Crafts-Oriented Approach to Computing in High School: Introducing Computational Concepts, Practices, and Perspectives with Electronic Textiles

    ERIC Educational Resources Information Center

    Kafai, Yasmin B.; Lee, Eunkyoung; Searle, Kristin; Fields, Deborah; Kaplan, Eliot; Lui, Debora

    2014-01-01

    In this article, we examine the use of electronic textiles (e-textiles) for introducing key computational concepts and practices while broadening perceptions about computing. The starting point of our work was the design and implementation of a curriculum module using the LilyPad Arduino in a pre-AP high school computer science class. To…

  17. Computed potential energy surfaces for chemical reactions

    NASA Technical Reports Server (NTRS)

    Walch, Stephen P.

    1988-01-01

    The minimum energy path for the addition of a hydrogen atom to N2 is characterized in CASSCF/CCI calculations using the (4s3p2d1f/3s2p1d) basis set, with additional single point calculations at the stationary points of the potential energy surface using the (5s4p3d2f/4s3p2d) basis set. These calculations represent the most extensive set of ab initio calculations completed to date, yielding a zero-point-corrected barrier for HN2 dissociation of approximately 8.5 kcal mol⁻¹. The lifetime of the HN2 species is estimated from the calculated geometries and energetics using both conventional Transition State Theory and a method which utilizes an Eckart barrier to compute one-dimensional quantum mechanical tunneling effects. It is concluded that the lifetime of the HN2 species is very short, greatly limiting its role in both termolecular recombination reactions and combustion processes.
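The lifetime estimate above can be sketched with conventional transition state theory plus a Wigner tunneling correction, used here as a simpler stand-in for the paper's Eckart-barrier treatment. The imaginary barrier frequency is an assumed placeholder; only the ~8.5 kcal/mol barrier comes from the abstract.

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
NA = 6.02214076e23   # Avogadro's number, 1/mol

def tst_rate(barrier_kcal_mol, temp_k, imag_freq_hz):
    """k = kappa * (kB*T/h) * exp(-E0/kB*T); kappa is the Wigner correction."""
    e0 = barrier_kcal_mol * 4184.0 / NA                      # J per molecule
    kappa = 1.0 + (H * imag_freq_hz / (KB * temp_k)) ** 2 / 24.0
    return kappa * (KB * temp_k / H) * math.exp(-e0 / (KB * temp_k))

# 8.5 kcal/mol barrier (abstract); 3e13 Hz imaginary frequency is assumed.
k_diss = tst_rate(8.5, 300.0, 3.0e13)  # dissociation rate, 1/s
lifetime = 1.0 / k_diss                # sub-microsecond, i.e. "very short"
```

Even with a crude correction, the sub-microsecond lifetime illustrates why HN2 plays a limited role in recombination chemistry.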

  18. Fermilab Central Computing Facility: Energy conservation report and mechanical systems design optimization and cost analysis study

    SciTech Connect

    Krstulovich, S.F.

    1986-11-12

    This report is developed as part of the Fermilab Central Computing Facility Project Title II Design Documentation Update under the provisions of DOE Document 6430.1, Chapter XIII-21, Section 14, paragraph a. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis and should be considered a supplement to the Title I Design Report dated March 1986, wherein energy-related issues are discussed pertaining to building envelope and orientation as well as electrical systems design.

  19. An approach for measuring the spatial orientations of a computed-tomography simulation system.

    PubMed

    Wu, Meng Chia; Ramaseshan, Ramani

    2014-01-01

    The quality assurance tests for measuring the spatial orientations between the tabletop, the external patient positioning lasers, the couch longitudinal moving direction, and the imaging plane of a CT simulation system are complicated and time-consuming. We proposed a simple and efficient approach to acquiring the angular deviations of the spatial orientations between these components. An in-house cross-jig was used in this study. We found a relationship between the orientations of the jig's arms shown on the CT images and the orientations of the components in a CT simulator. We verified this relationship with 16 misalignment orientations of known errors, to simulate all possible deviation situations. Generally, the tabletop and the external laser system are mounted separately in a CT simulation system, the former on the couch rail and the latter on the wall and ceiling. They are independent of each other and cause different effects on CT images. We need only two scans to acquire the angular deviations of our system: i) by aligning the cross-jig with the tabletop, we can check the orientations between the tabletop, the couch longitudinal moving direction, and the imaging plane; ii) by aligning the cross-jig with the external axial lasers, we obtain the angular deviation between the lasers, the couch longitudinal moving direction, and the imaging plane. The CT simulator had been carefully examined by performing the QA procedures recommended by AAPM Task Group 66. The measurements of the spatial orientations using the proposed method agree well with the TG 66 recommendations. However, the time taken to perform the QA using our method is considerably shorter than with the method described in TG 66 (5 minutes versus 30 minutes). The deliberate misalignment tests with known errors were detected successfully by our in-house analysis program. The maximum difference between the known errors and the measured angles is only 0.07°. We determined that the relationship between the

  20. A portable, GUI-based, object-oriented client-server architecture for computer-based patient record (CPR) systems.

    PubMed

    Schleyer, T K

    1995-01-01

    Software applications for computer-based patient records require substantial development investments. Portable, open software architectures are one way to delay or avoid software application obsolescence. The Clinical Management System at Temple University School of Dentistry uses a portable, GUI-based, object-oriented client-server architecture. Two main criteria determined this approach: preservation of investment in software development and a smooth migration path to a Computer-based Patient Record. The application is separated into three layers: graphical user interface, database interface, and application functionality. Implementation with generic cross-platform development tools ensures maximum portability. PMID:7662879

  1. It takes a village: supporting inquiry- and equity-oriented computer science pedagogy through a professional learning community

    NASA Astrophysics Data System (ADS)

    Ryoo, Jean; Goode, Joanna; Margolis, Jane

    2015-10-01

    This article describes the importance that high school computer science teachers place on a teachers' professional learning community designed around an inquiry- and equity-oriented approach for broadening participation in computing. Using grounded theory to analyze four years of teacher surveys and interviews from the Exploring Computer Science (ECS) program in the Los Angeles Unified School District, this article describes how participating in professional development activities purposefully aimed at fostering a teachers' professional learning community helps ECS teachers make the transition to an inquiry-based classroom culture and break professional isolation. This professional learning community also provides experiences that challenge prevalent deficit notions and stereotypes about which students can or cannot excel in computer science.

  2. Surface-Energy-Anisotropy-Induced Orientation Effects on Rayleigh Instabilities in Sapphire

    SciTech Connect

    Santala, Melissa; Glaeser, Andreas M.

    2006-01-01

    Arrays of controlled-geometry, semi-infinite pore channels of systematically varied crystallographic orientation were introduced into undoped m-plane (101̄0) sapphire substrates using microfabrication techniques and ion-beam etching and subsequently internalized by solid-state diffusion bonding. A series of anneals at 1700°C caused the breakup of these channels into discrete pores via Rayleigh instabilities. In all cases, channels broke up with a characteristic wavelength larger than that expected for a material with isotropic surface energy, reflecting stabilization effects due to surface-energy anisotropy. The breakup wavelength and the time required for complete breakup varied significantly with channel orientation. For most orientations, the instability wavelength for channels of radius R was in the range of 13.2R-25R, and complete breakup occurred within 2-10 h. To first order, the anneal times for complete breakup scale with the square of the breakup wavelength. Channels oriented along a <112̄0> direction had a wavelength of approximately 139R, and required 468 h for complete breakup. Cross-sectional analysis of channels oriented along a <112̄0> direction showed the channel to be completely bounded by stable c(0001), r{1̄012}, and s{101̄1} facets.
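For comparison, the isotropic baseline the abstract refers to can be computed directly: for a cylindrical pore evolving by surface diffusion, the fastest-growing perturbation has the Nichols-Mullins wavelength 2√2·π·R, about 8.89R.

```python
import math

# Fastest-growing Rayleigh-instability wavelength for an isotropic cylinder
# evolving by surface diffusion (Nichols-Mullins): lambda_max = 2*sqrt(2)*pi*R.
def nichols_mullins_wavelength(radius):
    return 2.0 * math.sqrt(2.0) * math.pi * radius

lam = nichols_mullins_wavelength(1.0)  # ~8.89 in units of R
# The observed 13.2R-25R (and ~139R) breakup wavelengths in sapphire exceed
# this isotropic value, consistent with surface-energy-anisotropy stabilization.
```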

  3. Social Studies: Application Units. Course II, Teachers. Computer-Oriented Curriculum. REACT (Relevant Educational Applications of Computer Technology).

    ERIC Educational Resources Information Center

    Tecnica Education Corp., San Carlos, CA.

    This book is one of a series in Course II of the Relevant Educational Applications of Computer Technology (REACT) Project. It is designed to point out to teachers two of the major applications of computers in the social sciences: simulation and data analysis. The first section contains a variety of simulation units organized under the following…

  4. Local Orientational Order in Liquids Revealed by Resonant Vibrational Energy Transfer

    NASA Astrophysics Data System (ADS)

    Panman, M. R.; Shaw, D. J.; Ensing, B.; Woutersen, S.

    2014-11-01

    We demonstrate that local orientational ordering in a liquid can be observed in the decay of the vibrational anisotropy caused by resonant transfer of vibrational excitations between its constituent molecules. We show that the functional form of this decay is determined by the (distribution of) angles between the vibrating bonds of the molecules between which energy transfer occurs, and that the initial drop in the decay reflects the average angle between nearest neighbors. We use this effect to observe the difference in local orientational ordering in the two hydrogen-bonded liquids ethanol and N-methylacetamide.
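The angle dependence invoked here can be illustrated with the textbook anisotropy relation r(θ) = r0·P2(cos θ), with r0 = 0.4 for parallel absorption and emission dipoles. This is a hedged, generic relation, not the paper's full decay model.

```python
import math

# Anisotropy after excitation transfer to a neighbor whose transition dipole
# makes angle theta with the donor's: r(theta) = r0 * P2(cos theta).
def p2(x):
    """Second Legendre polynomial."""
    return 0.5 * (3.0 * x * x - 1.0)

def anisotropy(theta_deg, r0=0.4):
    return r0 * p2(math.cos(math.radians(theta_deg)))

r_parallel = anisotropy(0.0)     # 0.4: no depolarization for aligned dipoles
r_magic = anisotropy(54.7356)    # ~0: the "magic angle" erases anisotropy
```

The initial drop in the measured anisotropy decay thus encodes the typical nearest-neighbor angle θ.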

  5. Computer Simulation for Molecular Orientation of Vanadyl Phthalocyanine in Epitaxial Form

    NASA Astrophysics Data System (ADS)

    Tada, Hirokazu; Mashiko, Shinro

    1995-07-01

    Molecular orientation of vanadyl phthalocyanine (VOPc) adsorbed on KBr and KCl was studied by molecular mechanics simulation. A VOPc molecule with an oxygen atom oriented upward with respect to the substrate surface was found to be more stable than that oriented downward. The central vanadium atom preferred to stay on potassium cations rather than on halogen anions, which is contrary to our expectation. The lattices optimized in this study agree well with the experimental results. In the epitaxial form on KBr and KCl, the angle between the [100] axis of the substrates and the molecular axis passing through two bridge-nitrogen atoms was 39° and 45°, respectively. The dovetail molecular packing was observed on KCl, while some voids existed between molecules in the optimized packing on KBr.

  6. Stress and performance: do service orientation and emotional energy moderate the relationship?

    PubMed

    Smith, Michael R; Rasmussen, Jennifer L; Mills, Maura J; Wefald, Andrew J; Downey, Ronald G

    2012-01-01

    The current study examines the moderating effect of customer service orientation and emotional energy on the stress-performance relationship for 681 U.S. casual dining restaurant employees. Customer service orientation was hypothesized to moderate the stress-performance relationship for Front-of-House (FOH) workers. Emotional energy was hypothesized to moderate stress-performance for Back-of-House (BOH) workers. Contrary to expectations, customer service orientation failed to moderate the effects of stress on performance for FOH employees, but the results supported that customer service orientation is likely a mediator of the relationship. However, the hypothesis was supported for BOH workers: emotional energy was found to moderate the stress-performance relationship for these employees. This finding suggests that during times of high stress, meaningful, warm, and empathetic relationships are likely to impact BOH workers' ability to maintain performance. These findings have real-world implications in organizational practice, including highlighting the importance of developing positive and meaningful social interactions among workers and facilitating appropriate person-job fits. Doing so is likely to help in alleviating worker stress and is also likely to encourage worker performance. PMID:22122550

  7. Group-oriented coordination models for distributed client-server computing

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.; Hughes, Craig S.

    1994-01-01

    This paper describes group-oriented control models for distributed client-server interactions. These models transparently coordinate requests for services that involve multiple servers, such as queries across distributed databases. Specific capabilities include: decomposing and replicating client requests; dispatching request subtasks or copies to independent, networked servers; and combining server results into a single response for the client. The control models were implemented by combining request broker and process group technologies with an object-oriented communication middleware tool. The models are illustrated in the context of a distributed operations support application for space-based systems.

  8. Computational Fluid Dynamics Investigation of Human Aspiration in Low-Velocity Air: Orientation Effects on Mouth-Breathing Simulations

    PubMed Central

    Anthony, T. Renée

    2013-01-01

    Computational fluid dynamics was used to investigate particle aspiration efficiency in low-moving air typical of occupational settings (0.1–0.4 m s⁻¹). Fluid flow surrounding an inhaling humanoid form and particle trajectories traveling into the mouth were simulated for seven discrete orientations relative to the oncoming wind (0°, 15°, 30°, 60°, 90°, 135° and 180°). Three continuous inhalation velocities (1.81, 4.33, and 12.11 m s⁻¹), representing the mean inhalation velocity associated with sinusoidal at-rest, moderate, and heavy breathing (7.5, 20.8, and 50.3 l min⁻¹, respectively) were simulated. These simulations identified a decrease in aspiration efficiency below the inhalable particulate mass (IPM) criterion of 0.5 for large particles, with no aspiration of particles 100 µm and larger for at-rest breathing and no aspiration of particles 116 µm for moderate breathing, over all freestream velocities and orientations relative to the wind. For particles smaller than 100 µm, orientation-averaged aspiration efficiency exceeded the IPM criterion, with increased aspiration efficiency as freestream velocity decreased. Variability in aspiration efficiencies between velocities was low for small (<22 µm) particles, but increased with increasing particle size over the range of conditions studied. Orientation-averaged simulation estimates of aspiration efficiency agree with the linear form of the proposed linear low-velocity inhalable convention through 100 µm, based on laboratory studies using human mannequins. PMID:23316076
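The IPM criterion used as the benchmark above is the standard inhalable sampling convention, IPM(d) = 0.5·(1 + exp(−0.06·d)) for aerodynamic diameter d up to 100 µm:

```python
import math

# ACGIH/ISO inhalable particulate mass (IPM) sampling convention.
def ipm(d_um):
    """Inhalable fraction for aerodynamic diameter d_um (micrometers)."""
    if not 0 <= d_um <= 100:
        raise ValueError("convention is defined for 0-100 um")
    return 0.5 * (1.0 + math.exp(-0.06 * d_um))

# ipm(0) = 1.0 (all very fine particles inhaled); ipm(100) ~ 0.50,
# the 0.5 criterion the simulated aspiration efficiencies are compared against.
```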

  9. Improving scalability with loop transformations and message aggregation in parallel object-oriented frameworks for scientific computing

    SciTech Connect

    Bassetti, F.; Davis, K.; Quinlan, D.

    1998-09-01

    Application codes reliably achieve performance far less than the advertised capabilities of existing architectures, and this problem is worsening with increasingly-parallel machines. For large-scale numerical applications, stencil operations often impose the great part of the computational cost, and the primary sources of inefficiency are the costs of message passing and poor cache utilization. This paper proposes and demonstrates optimizations for stencil and stencil-like computations for both serial and parallel environments that ameliorate these sources of inefficiency. Achieving scalability, they believe, requires both algorithm design and compile-time support. The optimizations they present are automatable because the stencil-like computations are implemented at a high level of abstraction using object-oriented parallel array class libraries. These optimizations, which are beyond the capabilities of today's compilers, may be performed automatically by a preprocessor such as the one they are currently developing.

  10. Computing alignment and orientation of non-linear molecules at room temperatures using random phase wave functions

    NASA Astrophysics Data System (ADS)

    Kallush, Shimshon; Fleischer, Sharly; Ultrafast terahertz molecular dynamics Collaboration

    2015-05-01

    Quantum simulation of large open systems is a hard task that demands huge computation and memory costs. The rotational dynamics of non-linear molecules at high temperature under external fields is such an example. At room temperature, the initial density matrix populates ~10^4 rotational states, and the whole coupled Hilbert space can reach ~10^6 states. Simulation with either the direct density matrix or the full basis set of populated wavefunctions is impossible. We employ the random phase wave function method to represent the initial state and compute several time-dependent and time-independent observables such as the orientation and the alignment of the molecules. The error of the method was found to scale as N^(-1/2), where N is the number of wave function realizations employed. Scaling vs. the temperature was computed for weak and strong fields. As expected, the convergence of the method increases rapidly with the temperature and the field intensity.
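The core trick can be shown on a toy problem: averaging ⟨φ|A|φ⟩ over states whose components are random phases e^(iθ)/√n gives an unbiased estimate of Tr(A)/n, with statistical error shrinking as N^(-1/2). The observable below is an arbitrary symmetric matrix, not a molecular operator, and the sizes are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
B = rng.standard_normal((n, n))
A = B + B.T  # a symmetric toy "observable"

def random_phase_estimate(num_realizations):
    """Average <phi|A|phi> over random-phase states phi_k = e^{i theta_k}/sqrt(n)."""
    total = 0.0
    for _ in range(num_realizations):
        phi = np.exp(2j * np.pi * rng.random(n)) / np.sqrt(n)
        total += np.vdot(phi, A @ phi).real
    return total / num_realizations

exact = np.trace(A) / n
estimate = random_phase_estimate(4000)  # approaches `exact` as N grows
```

For the molecular problem, each realization is propagated like an ordinary wavefunction, so memory scales with the Hilbert-space dimension rather than its square, which is what makes room-temperature simulations tractable.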

  11. A Time Sequence-Oriented Concept Map Approach to Developing Educational Computer Games for History Courses

    ERIC Educational Resources Information Center

    Chu, Hui-Chun; Yang, Kai-Hsiang; Chen, Jing-Hong

    2015-01-01

    Concept maps have been recognized as an effective tool for students to organize their knowledge; however, in history courses, it is important for students to learn and organize historical events according to the time of their occurrence. Therefore, in this study, a time sequence-oriented concept map approach is proposed for developing a game-based…

  12. Gestalt Computing and the Study of Content-Oriented User Behavior on the Web

    ERIC Educational Resources Information Center

    Bandari, Roja

    2013-01-01

    Elementary actions online establish an individual's existence on the web and her/his orientation toward different issues. In this sense, actions truly define a user in spaces like online forums and communities and the aggregate of elementary actions shape the atmosphere of these online spaces. This observation, coupled with the unprecedented scale…

  13. Development of a Dynamically Configurable,Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: aerospace system and component representation using a hierarchical object-oriented component model, which enables the use of multimodels and enforces component interoperability; a collaborative software environment that streamlines the process of developing, sharing, and integrating aerospace design and analysis models; and development of a distributed infrastructure which enables Web-based exchange of models to simplify the collaborative design process and to support computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.

  14. Computational study to evaluate the birefringence of uniaxially oriented film of cellulose triacetate.

    PubMed

    Hayakawa, Daichi; Ueda, Kazuyoshi

    2015-01-30

    The intrinsic birefringence of a cellulose triacetate (CTA) film is evaluated using the polarizability of the monomer model of the CTA repeating unit, which is calculated using the density functional theory (DFT). Since the CTA monomer is known to have three rotational isomers, referred to as gg, gt, and tg, the intrinsic birefringence of these isomers is evaluated separately. The calculation indicates that the monomer CTA with gg and gt structures shows a negative intrinsic birefringence, whereas the monomer unit with a tg structure shows a positive intrinsic birefringence. By using these values, a model of the uniaxially elongated CTA film is constructed with a molecular dynamics simulation, and the orientation birefringence of the film model was evaluated. The result indicates that the film has negative orientation birefringence and that its value is in good agreement with experimental results. PMID:25498014
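The link between isomer-level intrinsic birefringence and the film's orientation birefringence is commonly written Δn = f·Δn0, where f is the Hermans orientation function. This is a hedged, generic relation; the numbers below are illustrative, not the paper's DFT results.

```python
# Orientation birefringence of a uniaxially drawn film: delta_n = f * delta_n0,
# with delta_n0 the intrinsic birefringence (per rotational isomer in the paper)
# and f the Hermans orientation function of the chain axes.
def hermans_f(mean_cos2_theta):
    """f = (3<cos^2 theta> - 1)/2: 1 for perfect alignment, 0 for isotropic."""
    return 0.5 * (3.0 * mean_cos2_theta - 1.0)

def orientation_birefringence(f, intrinsic_dn):
    return f * intrinsic_dn

f_aligned = hermans_f(1.0)         # 1.0: chains fully along the draw axis
f_random = hermans_f(1.0 / 3.0)    # 0.0: isotropic, no birefringence
dn_film = orientation_birefringence(0.3, -2.0e-3)  # negative, as for gg/gt CTA
```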

  15. Computational Examination of Orientation-Dependent Morphological Evolution during the Electrodeposition and Electrodissolution of Magnesium

    DOE PAGES Beta

    DeWitt, S.; Hahn, N.; Zavadil, K.; Thornton, K.

    2015-12-30

    Here a new model of electrodeposition and electrodissolution is developed and applied to the evolution of Mg deposits during anode cycling. The model captures Butler-Volmer kinetics, facet evolution, the spatially varying potential in the electrolyte, and the time-dependent electrolyte concentration. The model utilizes a diffuse interface approach, employing the phase field and smoothed boundary methods. Scanning electron microscope (SEM) images of magnesium deposited on a gold substrate show the formation of faceted deposits, often in the form of hexagonal prisms. Orientation-dependent reaction rate coefficients were parameterized using the experimental SEM images. Three-dimensional simulations of the growth of magnesium deposits yield deposit morphologies consistent with the experimental results. The simulations predict that the deposits become narrower and taller as the current density increases due to the depletion of the electrolyte concentration near the sides of the deposits. Increasing the distance between the deposits leads to increased depletion of the electrolyte surrounding the deposit. Two models relating the orientation-dependence of the deposition and dissolution reactions are presented. Finally, the morphology of the Mg deposit after one deposition-dissolution cycle is significantly different between the two orientation-dependence models, providing testable predictions that suggest the underlying physical mechanisms governing morphology evolution during deposition and dissolution.
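The Butler-Volmer kinetics the model captures relate current density to overpotential. A minimal sketch, with illustrative parameter values rather than the paper's fitted coefficients:

```python
import math

F = 96485.33212      # Faraday constant, C/mol
R_GAS = 8.314462618  # gas constant, J/(mol K)

def butler_volmer(eta, i0=1.0, alpha_a=0.5, alpha_c=0.5, temp=298.15):
    """Net current density at overpotential eta (V): anodic minus cathodic branch."""
    f = F / (R_GAS * temp)
    return i0 * (math.exp(alpha_a * f * eta) - math.exp(-alpha_c * f * eta))

i_dep = butler_volmer(-0.05)   # cathodic overpotential (deposition): negative current
i_dis = butler_volmer(+0.05)   # anodic overpotential (dissolution): positive current
```

In the phase-field model, an expression of this form sets the local interface velocity, with the exchange current density i0 made orientation-dependent to produce faceted growth.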

  17. Possible Applications of Computer Oriented Problem Solving Methods to Mathematics Education.

    ERIC Educational Resources Information Center

    Hunt, Earl B.; And Others

    This report consists of five separate papers. The first is an extensive review of the "state of the art" in computer simulation and artificial intelligence. This review states that artificial intelligence and computer simulation have accomplished a great deal, with particular attention to findings relevant to psychology. The second paper is an…

  18. Studying Computer Science in a Multidisciplinary Degree Programme: Freshman Students' Orientation, Knowledge, and Background

    ERIC Educational Resources Information Center

    Kautz, Karlheinz; Kofoed, Uffe

    2004-01-01

    Teachers at universities are facing an increasing disparity in students' prior IT knowledge and, at the same time, experience a growing disengagement of the students with regard to involvement in study activities. As computer science teachers in a joint programme in computer science and business administration, we made a number of similar…

  19. Adapting to a Computer-Oriented Society: The Leadership Role of Business and Liberal Arts Faculties.

    ERIC Educational Resources Information Center

    O'Gorman, David E.

    The need for higher education to take a proactive rather than a reactive stance in dealing with the impact of the computer is considered. The field of computerized video technology is briefly discussed. It is suggested that disparate groups such as the liberal arts and business faculties should cooperate to maximize the use of computer technology.…

  20. Secondary iris recognition method based on local energy-orientation feature

    NASA Astrophysics Data System (ADS)

    Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing

    2015-01-01

    This paper proposes a secondary iris recognition method based on local features. A two-dimensional Gabor filter extracts the energy-orientation feature (EOF) of the iris, and a first recognition pass using a similarity threshold splits the iris database into two categories: a correctly recognized class and a class still to be recognized. The former are accepted, while the latter are transformed by histogram into an energy-orientation histogram feature (EOHF) and recognized a second time using the chi-square distance. Experiments show that the proposed method achieves a higher correct recognition rate than comparable iris recognition algorithms.
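The second-stage matching metric is the chi-square distance between normalized histograms. A minimal sketch, with illustrative histograms rather than real EOHF data:

```python
# Chi-square distance between two normalized histograms: 0 for identical
# histograms, growing with bin-wise disagreement.
def chi_square_distance(h1, h2, eps=1e-12):
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

h_probe = [0.1, 0.4, 0.3, 0.2]     # hypothetical probe EOHF
h_gallery = [0.15, 0.35, 0.3, 0.2] # hypothetical enrolled EOHF
d_same = chi_square_distance(h_probe, h_probe)    # 0.0
d_diff = chi_square_distance(h_probe, h_gallery)  # > 0
```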

  1. Bringing Advanced Computational Techniques to Energy Research

    SciTech Connect

    Mitchell, Julie C

    2012-11-17

    Please find attached our final technical report for the BACTER Institute award. BACTER was created as a graduate and postdoctoral training program for the advancement of computational biology applied to questions of relevance to bioenergy research.

  2. Office of Fusion Energy computational review

    SciTech Connect

    Cohen, B.I.; Cohen, R.H.; Byers, J.A.

    1996-03-06

    The LLNL MFE Theory and Computations Program supports computational efforts in the following areas: (1) Magnetohydrodynamic equilibrium and stability; (2) Fluid and kinetic edge plasma simulation and modeling; (3) Kinetic and fluid core turbulent transport simulation; (4) Comprehensive tokamak modeling (CORSICA Project) - transport, MHD equilibrium and stability, edge physics, heating, turbulent transport, etc. and (5) Other: ECRH ray tracing, reflectometry, plasma processing. This report discusses algorithms and codes pertaining to these areas.

  3. An Application of the Market-Oriented Programming to Energy Trading Decision Method in Distributed Energy Management Systems

    NASA Astrophysics Data System (ADS)

    Yakire, Koji; Miyamoto, Toshiyuki; Kumagai, Sadatoshi; Mori, Kazuyuki; Kitamura, Shoichi; Yamamoto, Takaya

    Controlling CO2 emissions, a major driver of global warming, is one of the most important environmental challenges of the 21st century, making efficient supply and use of energy indispensable. We have proposed distributed energy management systems (DEMSs), in which optimal plans that minimize both costs and CO2 emissions are obtained through electrical and thermal energy trading. A DEMS consists of multiple entities, each seeking its own economic profit. In this paper, we propose a trading method that yields a competitive-equilibrium resource distribution by applying market-oriented programming (MOP) to DEMSs.
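Market-oriented programming generalizes the Walrasian auction: prices adjust until markets clear. A minimal tatonnement sketch for a single energy market, assuming simple linear demand and supply (illustrative only, not the paper's DEMS formulation):

```python
# Walrasian tatonnement: raise the price when demand exceeds supply, lower it
# otherwise, until the market clears.
def clear_market(a=100.0, b=2.0, c=3.0, price=1.0, lr=0.01, steps=10000):
    """Demand = a - b*p, supply = c*p; equilibrium at p* = a/(b+c)."""
    for _ in range(steps):
        excess = (a - b * price) - c * price  # excess demand at current price
        price += lr * excess
    return price

p_star = clear_market()  # converges toward 100 / (2 + 3) = 20
```

In a DEMS, each entity would report demand or supply from its own cost/emissions optimization at the announced price, and the equilibrium allocation emerges from the same iteration.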

  4. National Energy Research Scientific Computing Center (NERSC): Advancing the frontiers of computational science and technology

    SciTech Connect

    Hules, J.

    1996-11-01

    National Energy Research Scientific Computing Center (NERSC) provides researchers with high-performance computing tools to tackle science's biggest and most challenging problems. Founded in 1974 by DOE/ER, the Controlled Thermonuclear Research Computer Center was the first unclassified supercomputer center and was the model for those that followed. Over the years the center's name was changed to the National Magnetic Fusion Energy Computer Center and then to NERSC; it was relocated to LBNL. NERSC, one of the largest unclassified scientific computing resources in the world, is the principal provider of general-purpose computing services to DOE/ER programs: Magnetic Fusion Energy, High Energy and Nuclear Physics, Basic Energy Sciences, Health and Environmental Research, and the Office of Computational and Technology Research. NERSC users are a diverse community located throughout the US and in several foreign countries. This brochure describes: the NERSC advantage, its computational resources and services, future technologies, scientific resources, and computational science of scale (interdisciplinary research over a decade or longer; examples: combustion in engines, waste management chemistry, global climate change modeling).

  5. Applying analytic hierarchy process to assess healthcare-oriented cloud computing service systems.

    PubMed

    Liao, Wen-Hwa; Qiu, Wan-Li

    2016-01-01

    Numerous differences exist between the healthcare industry and other industries. Difficulties in the business operation of the healthcare industry have continually increased because of the volatility and importance of health care, changes to and requirements of health insurance policies, and the statuses of healthcare providers, which are typically considered not-for-profit organizations. Moreover, because of the financial risks associated with constant changes in healthcare payment methods and constantly evolving information technology, healthcare organizations must continually adjust their business operation objectives; therefore, cloud computing presents both a challenge and an opportunity. As a response to aging populations and the prevalence of the Internet in fast-paced contemporary societies, cloud computing can be used to facilitate the task of balancing the quality and costs of health care. To evaluate cloud computing service systems for use in health care, providing decision makers with a comprehensive assessment method for prioritizing decision-making factors is highly beneficial. Hence, this study applied the analytic hierarchy process, compared items related to cloud computing and health care, executed a questionnaire survey, and then classified the critical factors influencing healthcare cloud computing service systems on the basis of statistical analyses of the questionnaire results. The results indicate that the primary factor affecting the design or implementation of optimal cloud computing healthcare service systems is cost effectiveness, with the secondary factors being practical considerations such as software design and system architecture. PMID:27441149
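The analytic hierarchy process step described above derives priority weights from a pairwise comparison matrix via its principal eigenvector, checked with Saaty's consistency ratio. The 3x3 matrix below (cost effectiveness vs. software design vs. system architecture) is illustrative, not the paper's survey data:

```python
import numpy as np

def ahp_priorities(m):
    """Return (priority weights, consistency ratio) for a pairwise matrix."""
    vals, vecs = np.linalg.eig(m)
    k = np.argmax(vals.real)                   # principal eigenvalue lambda_max
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                               # normalized priority weights
    n = m.shape[0]
    ci = (vals[k].real - n) / (n - 1)          # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random index
    return w, ci / ri

# Hypothetical judgments: cost is 3x software design, 5x architecture, etc.
M = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])
weights, cr = ahp_priorities(M)  # cost dominates; CR < 0.1 means acceptable
```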

  6. Time-oriented hierarchical method for computation of principal components using subspace learning algorithm.

    PubMed

    Jankovic, Marko; Ogawa, Hidemitsu

    2004-10-01

    Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction, and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful for tracking slow changes in the correlations of the input data or for updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to neurons. The algorithm is obtained by modifying one of the most famous PSA learning algorithms, the Subspace Learning Algorithm (SLA). The modification is based on the Time-Oriented Hierarchical Method (TOHM), which uses two distinct time scales. On the faster time scale, the PSA algorithm is responsible for the "behavior" of all output neurons. On the slower scale, output neurons compete for the fulfillment of their "own interests"; on this scale, basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper we briefly analyze how (or why) the time-oriented hierarchical method can be used to transform any existing neural network PSA method into a PCA method. PMID:15593379
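
    The base update rule that the paper modifies, Oja's Subspace Learning Algorithm, can be sketched in a few lines of NumPy. This is a minimal illustration of the classic SLA on synthetic data, not the authors' TOHM-modified algorithm; the data dimensions, covariance, and learning rate are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 5-D samples whose variance is concentrated in the
# first two coordinates, so the true principal subspace is the x0-x1 plane.
n, d, k = 5000, 5, 2
C = np.diag([5.0, 3.0, 0.5, 0.2, 0.1])              # true covariance
X = rng.multivariate_normal(np.zeros(d), C, size=n)

# Oja's Subspace Learning Algorithm:
#   W <- W + eta * (x y^T - W y y^T),  with  y = W^T x
W = 0.1 * rng.standard_normal((d, k))
eta = 0.01
for _ in range(3):                                   # a few passes over the data
    for x in X:
        y = W.T @ x                                  # outputs of the k neurons
        W += eta * (np.outer(x, y) - W @ np.outer(y, y))

# W converges to an (approximately) orthonormal basis of the principal
# subspace; as the abstract notes, SLA finds the subspace rather than
# the individual eigenvectors, which is what the TOHM modification adds.
P = W @ W.T                                          # projector onto the learned subspace
```

    After training, `P` is close to the projector onto the first two coordinate axes, while the columns of `W` need not coincide with the individual eigenvectors.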

  7. Energy--What to Do until the Computer Comes.

    ERIC Educational Resources Information Center

    Johnston, Archie B.

    Drawing from Tallahassee Community College's (TCC's) experiences with energy conservation, this paper offers suggestions for reducing energy costs through computer-controlled systems and other means. After stating the energy problems caused by TCC's multi-zone heating and cooling system, the paper discusses the five-step process by which TCC…

  8. Energy measurement using flow computers and chromatography

    SciTech Connect

    Beeson, J.

    1995-12-01

    Arkla Pipeline Group (APG), along with most transmission companies, went to electronic flow measurement (EFM) to: (1) increase resolution and accuracy; (2) correct flow variables in real time; (3) increase the speed of data retrieval; (4) reduce capital expenditures; and (5) reduce operation and maintenance expenditures. Prior to EFM, mechanical seven-day charts were used, which yielded 800 pressure and differential pressure readings. EFM yields 1.2 million readings, a 1500-fold improvement in resolution and additional flow representation. The total system accuracy of the EFM system is 0.25%, compared with 2% for the chart system, which gives APG improved accuracy. A typical APG electronic measurement system includes a microprocessor-based flow computer, a telemetry communications package, and a gas chromatograph. Live relative density (specific gravity), BTU, CO₂, and N₂ values are updated from the chromatograph to the flow computer every six minutes, which provides accurate MMBTU computations. Because gas contract lengths have changed from years to months, and from a majority of direct sales to transports, both Arkla and its customers wanted access to actual volumes on a much more timely basis than charts allow. The new electronic system allows volumes and other system data to be retrieved continuously if the EFM unit is on Supervisory Control and Data Acquisition (SCADA), or daily if on dial-up telephone. Previously, because of chart integration, information was not available for four to six weeks. EFM costs much less than the combined costs of the telemetry transmitters, pressure and differential pressure chart recorders, and temperature chart recorder it replaces. APG will install this equipment on smaller-volume stations at a customer's expense. APG requires backup measurement on metering facilities this size; it could be another APG flow computer or chart recorder, or the other company's flow computer or chart recorder.
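
    The MMBTU computation the abstract refers to reduces to multiplying metered standard volume by the live heating value from the chromatograph. A minimal sketch of that per-interval calculation, with all numeric values invented for the illustration:

```python
# Illustrative energy calculation performed by a flow computer: metered
# standard volume times the live heating value from the chromatograph.
# All numeric values below are invented for the example.

def mmbtu(volume_mcf: float, heating_value_btu_scf: float) -> float:
    """Energy in MMBTU: 1 MCF = 1000 scf and 1 MMBTU = 1e6 BTU,
    so MCF * (BTU/scf) collapses to a single division by 1000."""
    return volume_mcf * heating_value_btu_scf / 1000.0

# Five six-minute intervals: metered volume and the chromatograph's
# heating-value update for each interval.
volumes_mcf = [10.2, 9.8, 10.5, 10.1, 9.9]
btu_scf = [1020.0, 1018.5, 1021.2, 1019.8, 1020.4]

total_energy = sum(mmbtu(v, hv) for v, hv in zip(volumes_mcf, btu_scf))
# total_energy is about 51.5 MMBTU for this half hour
```

    The six-minute chromatograph update in the abstract corresponds to the per-interval heating values used here; a real flow computer would first compute the standard volume itself from orifice measurements (e.g., per AGA Report No. 3), which this sketch takes as given.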

  9. Orientational dependence of the translational energy transfer in the scattering of oriented fluoroform and tert-butyl chloride molecules by a graphite(0001) surface

    NASA Astrophysics Data System (ADS)

    Ionov, Stanislav I.; LaVilla, Michael E.; Bernstein, Richard B.

    1990-11-01

    Time-of-flight distributions of beams of hexapole-oriented CHF3 and t-BuCl molecules scattered from a graphite (0001) surface have been measured for parallel vs antiparallel incident orientations of the molecular dipole with respect to the surface normal, over a range of surface temperatures 170≤Ts≤730 K. The observed difference in arrival times, Δtexp, for opposite initial orientations depends strongly on the degree of orientation of the incident molecules. In the analysis of the Δtexp data, we make use of the two-component model, which assumes that the scattered beams are composed of directly scattered and trapped/desorbed molecules. It is shown that in the common case of short residence times for the trapped molecules, the difference in arrival times for the directly scattered molecules, Δtdir, can be ascertained from the measured Δtexp. The magnitudes of the calculated Δtdir correspond to a strong orientation dependence in the translational energy transfer accompanying the direct scattering of CHF3 and t-BuCl by graphite (0001). The final translational energy of directly scattered molecules E' is found to be smaller for the collision of the H "end" of fluoroform with the graphite surface; for t-BuCl, E' is smaller for the Cl "end" collision. These are the orientations that also give rise to higher trapping probability. In the course of the present study, the residence times of t-BuCl on graphite (0001) have been measured over the surface temperature range 170

  10. COMPLEAT (Community-Oriented Model for Planning Least-Cost Energy Alternatives and Technologies): A planning tool for publicly owned electric utilities. [Community-Oriented Model for Planning Least-Cost Energy Alternatives and Technologies (Compleat)

    SciTech Connect

    Not Available

    1990-09-01

    COMPLEAT takes its name, as an acronym, from Community-Oriented Model for Planning Least-Cost Energy Alternatives and Technologies. It is an electric utility planning model designed for use principally by publicly owned electric utilities and agencies serving such utilities. As a model, COMPLEAT is significantly more full-featured and complex than called for in APPA's original plan and proposal to DOE. The additional complexity grew out of a series of discussions early in the development schedule, in which it became clear to APPA staff and advisors that the simplicity characterizing the original plan, while highly desirable in terms of utility applications, was not achievable if practical utility problems were to be addressed. The project team settled on Energy 20/20, an existing model developed by Dr. George Backus of Policy Assessment Associates, as the best candidate for the kinds of modifications and extensions that would be required. The remainder of the project effort was devoted to designing specific input data files, output files, and user screens, and to writing and testing the computer programs that would properly implement the desired features around Energy 20/20 as a core program. This report presents, in outline form, the features and user interface of COMPLEAT.

  11. MOUSE (MODULAR ORIENTED UNCERTAINTY SYSTEM): A COMPUTERIZED UNCERTAINTY ANALYSIS SYSTEM (FOR MICRO- COMPUTERS)

    EPA Science Inventory

    Environmental engineering calculations involving uncertainties; either in the model itself or in the data, are far beyond the capabilities of conventional analysis for any but the simplest of models. There exist a number of general-purpose computer simulation languages, using Mon...

  12. Interactive Computer Graphics for Performance-Structure-Oriented CAI. Technical Report No. 73.

    ERIC Educational Resources Information Center

    Rigney, Joseph W.; And Others

    Two different uses of interactive graphics in computer-assisted instruction are described. Interactive graphics may be used as substitutes for physical devices and operations. An example is simulation of operating on man/machine interfaces, substituting interactive graphics for controls, indicators, and indications. Interactive graphics may also…

  13. The Effectiveness of Instructional Orienting Activities in Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Kenny, Richard F.

    Research literature pertaining to the use of instructional organizers is reviewed, and a comparative analysis is made of their effectiveness with computer-based instruction (CBI). One of the earliest forms of instructional organizer is the advance organizer, first proposed by David Ausubel (1960, 1963) which is meant to facilitate the retention of…

  14. An overview of energy efficiency techniques in cluster computing systems

    SciTech Connect

    Valentini, Giorgio Luigi; Lassonde, Walter; Khan, Samee Ullah; Min-Allah, Nasro; Madani, Sajjad A.; Li, Juan; Zhang, Limin; Wang, Lizhe; Ghani, Nasir; Kolodziej, Joanna; Li, Hongxiang; Zomaya, Albert Y.; Xu, Cheng-Zhong; Balaji, Pavan; Vishnu, Abhinav; Pinel, Fredric; Pecero, Johnatan E.; Kliazovich, Dzmitry; Bouvry, Pascal

    2011-09-10

    Two major constraints demand more consideration for energy efficiency in cluster computing: (a) operational costs, and (b) system reliability. Increasing energy efficiency in cluster systems will reduce energy consumption and excess heat, lower operational costs, and improve system reliability. Based on the energy-power relationship, and the fact that energy consumption can be reduced with strategic power management, we focus in this survey on the characteristics of two main power management technologies: (a) static power management (SPM) systems that utilize low-power components to save energy, and (b) dynamic power management (DPM) systems that utilize software and power-scalable components to optimize energy consumption. We present the current state of the art in both SPM and DPM techniques, citing representative examples. The survey concludes with a brief discussion and some assumptions about possible future directions that could be explored to improve energy efficiency in cluster computing.

  15. POET (parallel object-oriented environment and toolkit) and frameworks for scientific distributed computing

    SciTech Connect

    Armstrong, R.; Cheung, A.

    1997-01-01

    Frameworks for parallel computing have recently become popular as a means for preserving parallel algorithms as reusable components. Frameworks for parallel computing in general, and POET in particular, focus on finding ways to orchestrate and facilitate cooperation between components that implement the parallel algorithms. Since performance is a key requirement for POET applications, CORBA or CORBA-like systems are eschewed for a SPMD message-passing architecture common to the world of distributed-parallel computing. Though the system is written in C++ for portability, the behavior of POET is more like a classical framework, such as Smalltalk. POET seeks to be a general platform for scientific parallel algorithm components which can be modified, linked, mixed and matched to a user's specification. The purpose of this work is to identify a means for parallel code reuse and to make parallel computing more accessible to scientists whose expertise is outside the field of parallel computing. The POET framework provides two things: (1) an object model for parallel components that allows cooperation without being restrictive; (2) services that allow components to access and manage user data and message-passing facilities, etc. This work has evolved through application of a series of real distributed-parallel scientific problems. The paper focuses on what is required for parallel components to cooperate and at the same time remain "black boxes" that users can drop into the frame without having to know the exquisite details of message-passing, data layout, etc. The paper walks through a specific example of a chemically reacting flow application. The example is implemented in POET and the authors identify component cooperation, usability and reusability in an anecdotal fashion.

  16. Computational Approaches for Understanding Energy Metabolism

    PubMed Central

    Shestov, Alexander A; Barker, Brandon; Gu, Zhenglong; Locasale, Jason W

    2013-01-01

    There has been a surge of interest in understanding the regulation of metabolic networks involved in disease in recent years. Quantitative models are increasingly being used to interrogate the metabolic pathways that are contained within this complex disease biology. At the core of this effort is the mathematical modeling of central carbon metabolism involving glycolysis and the citric acid cycle (referred to as energy metabolism). Here we discuss several approaches used to quantitatively model metabolic pathways relating to energy metabolism and discuss their formalisms, successes, and limitations. PMID:23897661

  17. Service-Oriented Architecture for NVO and TeraGrid Computing

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph; Miller, Craig; Williams, Roy; Steenberg, Conrad; Graham, Matthew

    2008-01-01

    The National Virtual Observatory (NVO) Extensible Secure Scalable Service Infrastructure (NESSSI) is a Web service architecture and software framework that enables Web-based astronomical data publishing and processing on grid computers such as the National Science Foundation's TeraGrid. Characteristics of this architecture include the following: (1) Services are created, managed, and upgraded by their developers, who are trusted users of computing platforms on which the services are deployed. (2) Service jobs can be initiated by means of Java or Python client programs run on a command line or with Web portals. (3) Access is granted within a graduated security scheme in which the size of a job that can be initiated depends on the level of authentication of the user.

  18. Computer Review Can Cut HVAC Energy Use

    ERIC Educational Resources Information Center

    McClure, Charles J. R.

    1974-01-01

    A computerized review of construction bidding documents, usually done by a consulting engineer, can reveal how much money it will cost to operate various alternative types of HVAC equipment over a school's lifetime. The review should include a computerized load calculation, energy systems flow diagram, control system analysis, and a computerized…

  19. Energy-scalable pulsed mid-IR source using orientation-patterned GaAs.

    PubMed

    French, Douglas; Peterson, Rita; Jovanovic, Igor

    2011-02-15

    Coherent mid-IR sources based on orientation-patterned GaAs (OPGaAs) are of significant interest in diverse scientific, medical, and military applications. The generation of long-wavelength mid-IR beams in OPGaAs using optical parametric oscillation exhibits limitations in the obtainable pulse energy and peak power. The master oscillator power amplifier concept is demonstrated in OPGaAs, by which a mid-IR source based on optical parametric oscillation can be scaled to high energy by amplification of the output of the optical parametric oscillator in an optical parametric amplifier (OPA). A fivefold increase in the pulse energy is obtained using this method by amplifying 3.85 μm pulses in an OPGaAs OPA pumped by a Tm,Ho:YLF Q-switched laser. PMID:21326434

  20. Orientation of bluff body for designing efficient energy harvesters from vortex-induced vibrations

    NASA Astrophysics Data System (ADS)

    Dai, H. L.; Abdelkefi, A.; Yang, Y.; Wang, L.

    2016-02-01

    The characteristics and performances of four distinct vortex-induced vibration (VIV) piezoelectric energy harvesters are experimentally investigated and compared. The difference between these VIV energy harvesters is the installation of the cylindrical bluff body at the tip of the cantilever beam with different orientations (bottom, top, horizontal, and vertical). Experiments show that the synchronization regions of the bottom, top, and horizontal configurations are almost the same at low wind speeds (around 1.5 m/s). The vertical configuration has the highest wind speed for synchronization (around 3.5 m/s) with the largest harvested power, which is explained by its highest natural frequency and the smallest coupled damping. The results lead to the conclusion that to design efficient VIV energy harvesters, the bluff body should be aligned with the beam for low wind speeds (<2 m/s) and perpendicular to the beam at high wind speeds (>2 m/s).

  1. Delta: An object-oriented finite element code architecture for massively parallel computers

    SciTech Connect

    Weatherby, J.R.; Schutt, J.A.; Peery, J.S.; Hogan, R.E.

    1996-02-01

    Delta is an object-oriented code architecture based on the finite element method which enables simulation of a wide range of engineering mechanics problems in a parallel processing environment. Written in C++, Delta is a natural framework for algorithm development and for research involving coupling of mechanics from different Engineering Science disciplines. To enhance flexibility and encourage code reuse, the architecture provides a clean separation of the major aspects of finite element programming. Spatial discretization, temporal discretization, and the solution of linear and nonlinear systems of equations are each implemented separately, independent from the governing field equations. Other attractive features of the Delta architecture include support for constitutive models with internal variables, reusable "matrix-free" equation solvers, and support for region-to-region variations in the governing equations and the active degrees of freedom. A demonstration code built from the Delta architecture has been used in two-dimensional and three-dimensional simulations involving dynamic and quasi-static solid mechanics, transient and steady heat transport, and flow in porous media.

  2. Computational physics at the National Energy Research Supercomputer Center

    SciTech Connect

    Mirin, A.A.

    1990-04-01

    The principal roles of the Computational Physics Group are (1) to develop efficient numerical algorithms, programming techniques, and applications software for current and future generations of supercomputers, (2) to develop advanced numerical models for the investigation of plasma phenomena and the simulation of contemporary magnetic fusion devices, and (3) to serve as a liaison between the Center and the user community, in particular, to provide NERSC with an application-oriented viewpoint and to provide the user community with expertise on the effective usage of the computers. In addition, many of our computer codes employ state-of-the-art algorithms that test the prototypical hardware and software features of the various computers. This document describes the activities of the Computational Physics Group and was prepared with the assistance of the various Group members. The first part contains overviews of a number of our important projects. The second section lists our important computational models. The third part provides a comprehensive list of our publications.

  3. Determination of dominant fibre orientations in fibre-reinforced high-strength concrete elements based on computed tomography scans

    NASA Astrophysics Data System (ADS)

    Vicente, Miguel A.; González, Dorys C.; Mínguez, Jesús

    2014-04-01

    Computed tomography (CT) is a nondestructive technique, based on the absorption of X-rays, that permits visualisation of the internal structure of materials at micron-range resolution. In this paper, the CT scan is used to determine the position and orientation of the fibres in steel fibre-reinforced high-strength concrete elements. The aim of this paper is to present a numerical procedure, automated through a MATLAB routine specially developed by the authors, which enables fast and reliable determination of the orientation and centre of gravity of each individual fibre. The procedure extends directly to any type of fibre-reinforced material, provided there is a wide difference between the density of the fibres and that of the matrix. The mathematical basis of the procedure is simple and robust; the result is a fast algorithm and a routine that is easy to use. In addition, the validation tests show that the error is almost zero. This algorithm can help industry incorporate CT technology into product quality control protocols.
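
    The geometric core of such a procedure can be sketched as follows (our NumPy construction, not the authors' MATLAB routine, and assuming the fibre voxels have already been segmented by density thresholding): the centre of gravity is the voxel-coordinate mean, and the fibre orientation is the principal axis of the coordinate cloud.

```python
import numpy as np

def fibre_axis(voxels):
    """voxels: (n, 3) coordinates of one segmented fibre.
    Returns the centre of gravity and the unit principal axis
    (leading right-singular vector of the centred coordinates)."""
    centroid = voxels.mean(axis=0)
    _, _, vt = np.linalg.svd(voxels - centroid, full_matrices=False)
    axis = vt[0]
    return centroid, axis / np.linalg.norm(axis)

# Synthetic straight fibre along (1, 1, 0)/sqrt(2), centred at (5, 2, 3),
# with slight voxelization jitter.
rng = np.random.default_rng(1)
t = np.linspace(-10.0, 10.0, 200)
d = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
pts = t[:, None] * d + np.array([5.0, 2.0, 3.0]) + rng.normal(scale=0.05, size=(200, 3))

c, a = fibre_axis(pts)
# c is close to (5, 2, 3); a is aligned (up to sign) with d
```

    A full pipeline would additionally label connected voxel components so each fibre is processed separately, but the per-fibre step reduces to this centroid-plus-SVD computation.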

  4. Computer simulation of stress-oriented nucleation and growth of {theta}{prime} precipitates in Al-Cu alloys

    SciTech Connect

    Li, D.Y.; Chen, L.Q.

    1998-05-01

    Many structural transformations result in several orientation variants whose volume fractions and distributions can be controlled by applied stresses during nucleation, growth, or coarsening. Depending on the type of stress and the coupling between the applied stress and the lattice misfit strain, the precipitate variants may be aligned parallel or perpendicular to the stress axis. This paper reports studies on the effect of applied stresses on nucleation and growth of coherent θ′ precipitates in Al-Cu alloys using computer simulations based on a diffuse-interface phase-field kinetic model. In this model, the orientational differences among precipitate variants are distinguished by non-conserved structural field variables, whereas the compositional difference between the precipitate and matrix is described by a conserved field variable. The temporal evolution of the spatially dependent field variables is determined by numerically solving the time-dependent Ginzburg-Landau (TDGL) equations for the structural variables and the Cahn-Hilliard diffusion equation for composition. Random noises were introduced in both the composition and the structural order parameter fields to simulate the nucleation of θ′ precipitates. It is demonstrated that although an applied stress affects the microstructural development of a two-phase alloy during both the nucleation and growth stages, applying stresses during the initial nucleation stage is most effective for producing anisotropic precipitate alignment.
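
    The non-conserved (TDGL) part of such a model can be illustrated with a minimal one-dimensional sketch, not the authors' code: a structural order parameter eta relaxing under d(eta)/dt = -L dF/d(eta) for a double-well bulk energy with a gradient penalty. All parameter values are invented for the example.

```python
import numpy as np

# 1-D TDGL (Allen-Cahn) relaxation of a non-conserved order parameter:
#   d(eta)/dt = -L * ( 2*W*eta*(1-eta)*(1-2*eta) - kappa * laplacian(eta) )
# for the double-well bulk energy W * eta^2 * (1-eta)^2.

N, dx, dt = 128, 1.0, 0.05
L_mob, W, kappa = 1.0, 1.0, 2.0

x = np.arange(N) * dx
eta = np.where(x < N * dx / 2, 1.0, 0.0)   # sharp step: variant | matrix

def laplacian(f):
    # second difference with periodic boundaries
    return (np.roll(f, 1) + np.roll(f, -1) - 2.0 * f) / dx**2

for _ in range(2000):
    dF = 2.0 * W * eta * (1.0 - eta) * (1.0 - 2.0 * eta) - kappa * laplacian(eta)
    eta = eta - dt * L_mob * dF

# The sharp step relaxes into smooth diffuse interfaces while the bulk
# regions stay at the well values eta = 1 and eta = 0.
```

    The full model in the paper couples several such structural equations to a Cahn-Hilliard equation for composition, and adds elastic and applied-stress energy terms; this sketch shows only the relaxational dynamics common to both.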

  5. Computed potential energy surfaces for chemical reactions

    NASA Technical Reports Server (NTRS)

    Walch, Stephen P.

    1990-01-01

    The objective was to obtain accurate potential energy surfaces (PES's) for a number of reactions which are important in the H/N/O combustion process. The interest in this is centered around the design of the SCRAM jet engine for the National Aerospace Plane (NASP), which was envisioned as an air-breathing hydrogen-burning vehicle capable of reaching velocities as large as Mach 25. Preliminary studies indicated that the supersonic flow in the combustor region of the scram jet engine required accurate reaction rate data for reactions in the H/N/O system, some of which were not readily available from experiment. The most important class of combustion reactions from the standpoint of the NASP project is radical recombination reactions, since these reactions result in most of the heat release in the combustion process. Theoretical characterizations of the potential energy surfaces for these reactions are presented and discussed.

  6. Region-oriented CT image representation for reducing computing time of Monte Carlo simulations

    SciTech Connect

    Sarrut, David; Guigues, Laurent

    2008-04-15

    Purpose. We propose a new method for efficient particle transportation in voxelized geometry for Monte Carlo simulations and describe its use for calculating dose distribution in CT images for radiation therapy. Material and methods. The proposed approach, based on an implicit volume representation named segmented volume, coupled with an adapted segmentation procedure and a distance map, allows us to minimize the number of boundary crossings, which slow down simulation. The method was implemented with the GEANT4 toolkit and compared to four other methods: one box per voxel, parameterized volumes, octree-based volumes, and nested parameterized volumes. For each representation, we compared dose distribution, time, and memory consumption. Results. The proposed method allows us to decrease computational time by up to a factor of 15, while keeping memory consumption low, and without any modification of the transportation engine. The speedup is related to the geometry complexity and the number of different materials used. We obtained an optimal number of steps by removing all unnecessary steps between adjacent voxels sharing a similar material; however, the cost of each step is increased. When the number of steps cannot be decreased enough, due, for example, to a large number of material boundaries, the method is not suitable. Conclusion. This feasibility study shows that optimizing the representation of an image in memory can increase computing efficiency. We used the GEANT4 toolkit, but other Monte Carlo simulation codes could potentially be used. The method introduces a tradeoff between speed and geometry accuracy, allowing gains in computational time. However, simulations with GEANT4 remain slow, and further work is needed to speed up the procedure while preserving the desired accuracy.

  7. A computer-oriented system for high-speed recording of operant behavior1

    PubMed Central

    Barry, Herbert; Kinnard, William J.; Watzman, Nathan; Buckley, Joseph P.

    1966-01-01

    A method is described by which large quantities of data, generated at high and variable rates from a large number of test boxes, are recorded on a single eight-channel punched paper tape. The data, which include a record of the occurrence time of each event in 1/10-sec units, are in a compact form, suitable for conversion to standard Hollerith punched card codes and for decoding and summarizing by a large digital computer. Experience with the system has demonstrated a high degree of accuracy and reliability, and low operating cost. PMID:5907829

  8. A Computer Modeling Tool for Comparing Novel ICD Electrode Orientations in Children and Adults

    PubMed Central

    Jolley, Matthew; Stinstra, Jeroen; Pieper, Steve; MacLeod, Rob; Brooks, Dana H.; Cecchin, Frank; Triedman, John K.

    2009-01-01

    Background ICD implants in children and patients with congenital heart disease are complicated by body size and anatomy. A variety of creative implant techniques have been utilized empirically in these groups on an ad hoc basis. Objective To rationalize ICD placement in special populations, we used subject-specific, image-based finite element models (FEMs) to compare electric fields and expected defibrillation thresholds (DFTs) using standard and novel electrode configurations. Methods FEMs were created by segmenting normal torso CT scans of subjects aged 2, 10, and 29 years and one adult with congenital heart disease into tissue compartments, meshing and assigning tissue conductivities. The FEMs were modified by interactive placement of ICD electrode models in clinically relevant electrode configurations, and metrics of relative defibrillation safety and efficacy calculated. Results Predicted DFTs for standard transvenous configurations were comparable to published results. While transvenous systems generally predicted lower DFTs, a variety of extracardiac orientations were also predicted to be comparably effective in children and adults. Significant trend effects on DFTs were associated with body size and electrode length. In many situations, small alterations in electrode placement and patient anatomy resulted in significant variation of predicted DFT. We also demonstrate patient specific use of this technique for optimization of electrode placement. Conclusions Image-based FEMs allow predictive modeling of defibrillation scenarios, and predict large changes in DFTs with clinically relevant variations of electrode placement. Extracardiac ICDs are predicted to be effective in both children and adults. This approach may aid both ICD development and patient-specific optimization of electrode placement. Further development and validation are needed for clinical or industrial utilization. PMID:18362024

  9. The Role of Computing in High-Energy Physics.

    ERIC Educational Resources Information Center

    Metcalf, Michael

    1983-01-01

    Examines present and future applications of computers in high-energy physics. Areas considered include high-energy physics laboratories, accelerators, detectors, networking, off-line analysis, software guidelines, event sizes and volumes, graphics applications, event simulation, theoretical studies, and future trends. (JN)

  10. EQUILIBRIUM AND NONEQUILIBRIUM FOUNDATIONS OF FREE ENERGY COMPUTATIONAL METHODS

    SciTech Connect

    C. JARZYNSKI

    2001-03-01

    Statistical mechanics provides a rigorous framework for the numerical estimation of free energy differences in complex systems such as biomolecules. This paper presents a brief review of the statistical mechanical identities underlying a number of techniques for computing free energy differences. Both equilibrium and nonequilibrium methods are covered.
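
    One of the nonequilibrium identities such reviews cover, the Jarzynski equality exp(-beta*dF) = <exp(-beta*W)>, can be checked numerically in a toy setting where the work distribution is Gaussian and the free energy difference is known in closed form. This is our illustrative sketch, not an example from the paper.

```python
import numpy as np

# Jarzynski equality:  exp(-beta * dF) = < exp(-beta * W) >.
# For Gaussian work values with mean mu and variance sigma^2 the
# identity has the closed form  dF = mu - beta * sigma**2 / 2,
# which the sample estimate should reproduce.

rng = np.random.default_rng(2)
beta, mu, sigma = 1.0, 2.0, 0.5

work = rng.normal(mu, sigma, size=200_000)            # nonequilibrium work samples
dF_jarzynski = -np.log(np.mean(np.exp(-beta * work))) / beta
dF_exact = mu - beta * sigma**2 / 2.0                 # 1.875 for these parameters
```

    Note that the average work (2.0 here) overestimates dF, as the second law requires; the exponential average recovers the exact value, though in practice it converges slowly when the work fluctuations are large.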

  11. Asymmetric energy flow in liquid alkylbenzenes: A computational study

    SciTech Connect

    Leitner, David M.; Pandey, Hari Datt

    2015-10-14

    Ultrafast IR-Raman experiments on substituted benzenes [B. C. Pein et al., J. Phys. Chem. B 117, 10898–10904 (2013)] reveal that energy can flow more efficiently in one direction along a molecule than in others. We carry out a computational study of energy flow in the three alkyl benzenes, toluene, isopropylbenzene, and t-butylbenzene, studied in these experiments, and find an asymmetry in the flow of vibrational energy between the two chemical groups of the molecule due to quantum mechanical vibrational relaxation bottlenecks, which give rise to a preferred direction of energy flow. We compare energy flow computed for all modes of the three alkylbenzenes over the relaxation time into the liquid with energy flow through the subset of modes monitored in the time-resolved Raman experiments and find qualitatively similar results when using the subset compared to all the modes.

  12. Tissue Characterization Using Energy-Selective Computed Tomography

    NASA Astrophysics Data System (ADS)

    Alvarez, Robert E.; Marshall, William H.; Lewis, Roger

    1981-07-01

    Energy-selective computed tomography has several important properties useful for in-vivo tissue characterization. Most importantly, it produces more information than conventional computed tomography. This information can be considered an added dimension which can be used to eliminate the ambiguities in conventional CT data. The noise in energy-selective computed tomography is also two-dimensional, and an uncorrelated coordinate system can be defined, which is needed for studying the capabilities of the technique for characterizing tissues. By using the calibration material basis set, the information from energy-selective CT can be extracted with extreme accuracy. Our preliminary experiments indicate that the technique is accurate enough to characterize the difference between gray and white matter. Most conventional systems have difficulty in distinguishing these materials, much less characterizing the reason for their differing attenuation. Thus energy-selective CT has the promise of providing extremely accurate tissue characterization based on physical properties.
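
    The calibration-material basis-set idea can be sketched as a two-material decomposition: the measured attenuation at two effective energies is modeled as a linear combination of two basis materials, and the coefficients follow from a 2x2 linear system. This is an illustrative construction with invented numbers, not the authors' calibration data.

```python
import numpy as np

# Two-material basis decomposition: model mu(E) = a1*f1(E) + a2*f2(E)
# and invert measurements at two effective energies for (a1, a2).
# F holds the basis attenuation values: rows = energies, columns = materials.
F = np.array([[0.40, 0.25],
              [0.30, 0.15]])        # invented calibration values

a_true = np.array([0.7, 0.5])       # composition of the "unknown" tissue
mu_measured = F @ a_true            # attenuation seen at the two energies

a_est = np.linalg.solve(F, mu_measured)
# a_est recovers a_true exactly in this noiseless illustration
```

    In the noisy case the estimated coefficients become correlated random variables, which is why the abstract emphasizes defining an uncorrelated coordinate system for the two-dimensional noise.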

  13. High energy charged particle optics computer programs

    SciTech Connect

    Carey, D.C.

    1980-09-01

    The computer programs TRANSPORT and TURTLE are described, with special emphasis on recent developments. TRANSPORT is a general matrix evaluation and fitting program. First- and second-order transfer matrix elements, including those contributing to time-of-flight differences, can be evaluated. Matrix elements of both orders can be fit, separately or simultaneously. Floor coordinates of the beam line may be calculated and included in any fits. Tables of the results of misalignments, including the effects of bilinear terms, can be produced. Fringe fields and pole face rotation angles of bending magnets may be included, and also adjusted automatically during the fitting process to produce rectangular magnets. A great variety of output options is available. TURTLE is a Monte Carlo program used to simulate beam line performance. It includes second-order terms and aperture constraints. Replaceable subroutines allow an unlimited variety of input beam distributions, scattering algorithms, histogrammed variables, and aperture shapes. Histograms of beam loss can also be produced. Rectangular zero-gradient bending magnets with proper circular trajectories, sagitta offsets, pole face rotation angles, and aperture constraints can be included. The effect of multipole components of quadrupoles up to 40 poles can be evaluated.
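    The first-order matrix formalism that TRANSPORT evaluates can be sketched in a few lines. This is a generic illustration of 2x2 transfer-matrix optics (drift plus thin-lens quadrupole), not code from TRANSPORT itself, and the element parameters are made up:

```python
# Generic first-order (2x2) transfer-matrix optics sketch in the spirit
# of TRANSPORT; not code from TRANSPORT itself, and the element values
# below are made up for illustration.

def drift(length):
    """Transfer matrix of a field-free drift in one transverse plane."""
    return [[1.0, length], [0.0, 1.0]]

def thin_quad(f):
    """Thin-lens quadrupole of focal length f (focusing for f > 0)."""
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def matmul(m1, m2):
    return [[sum(m1[i][k] * m2[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def track(m, x, xp):
    """Propagate a ray (position x, angle xp) through matrix m."""
    return (m[0][0] * x + m[0][1] * xp, m[1][0] * x + m[1][1] * xp)

# Drift-lens-drift with spacings equal to the focal length: a parallel
# incoming ray is brought to the axis (parallel-to-point focusing).
f = 2.0
line = matmul(drift(f), matmul(thin_quad(f), drift(f)))
x_out, xp_out = track(line, 1.0, 0.0)
```

    Chaining element matrices in traversal order is exactly the "matrix evaluation" step; fitting then adjusts element parameters until chosen matrix elements meet constraints.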

  14. Memory device for two-dimensional radiant energy array computers

    NASA Technical Reports Server (NTRS)

    Schaefer, D. H.; Strong, J. P., III (Inventor)

    1977-01-01

    A memory device for two-dimensional radiant energy array computers was developed. The device stores digital information in an input array of radiant energy digital signals characterized by ordered rows and columns. It contains a radiant energy logic storing device having a pair of input surface locations for receiving a pair of separate radiant energy digital signal arrays, and an output surface location adapted to transmit a radiant energy digital signal array. A regenerative feedback device that couples one of the input surface locations to the output surface location in a manner that causes regenerative feedback is also included.

  15. Computed potential energy surfaces for chemical reactions

    NASA Technical Reports Server (NTRS)

    Walch, Stephen P.; Levin, Eugene

    1993-01-01

    A new global potential energy surface (PES) is being generated for O(3P) + H2 → OH + H. This surface is being fit using the rotated Morse oscillator method, which was used to fit the previous POL-CI surface. The new surface is expected to be more accurate and also includes a much more complete sampling of bent geometries. A new study has been undertaken of the reaction N + O2 → NO + O. The new studies have focused on the region of the surface near a possible minimum corresponding to the peroxy form of NOO. A large portion of the PES for this second reaction has been mapped out. Since state-to-state cross sections for the reaction are important in the chemistry of high-temperature air, these studies will probably be extended to permit generation of a new global potential for this reaction.
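    The rotated Morse oscillator fit mentioned above builds on the one-dimensional Morse function; in the rotated-Morse method the parameters become smooth functions of a swing angle. A minimal sketch of the underlying potential, with illustrative parameter values (not the fitted PES parameters):

```python
import math

# One-dimensional Morse potential, the ingredient underlying the rotated
# Morse oscillator fit; parameter values used below are illustrative,
# not the fitted PES parameters.

def morse(r, d_e, a, r_e):
    """V(r) = D_e * (1 - exp(-a * (r - r_e)))**2.

    d_e: well depth; a: range parameter; r_e: equilibrium distance.
    V(r_e) = 0, V(r -> infinity) -> d_e (dissociation limit), and the
    wall at r < r_e rises steeply above d_e.
    """
    return d_e * (1.0 - math.exp(-a * (r - r_e))) ** 2

# In the rotated-Morse approach, (d_e, a, r_e) become smooth functions
# of a swing angle that carries the cut from reactants to products.
```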

  16. Sink-oriented Dynamic Location Service Protocol for Mobile Sinks with an Energy Efficient Grid-Based Approach.

    PubMed

    Jeon, Hyeonjae; Park, Kwangjin; Hwang, Dae-Joon; Choo, Hyunseung

    2009-01-01

    Sensor nodes in wireless sensor networks (WSNs) transmit sensed information to the sink; they have limited power, computational capacity, and memory. As portable wireless devices increase in popularity, mechanisms that allow information to be efficiently obtained through mobile WSNs are of significant interest. However, a mobile sink introduces many challenges to data dissemination in large WSNs. For example, it is important to efficiently identify the locations of mobile sinks and to disseminate information from multi-source nodes to multi-mobile sinks. In particular, a stationary dissemination path may no longer be effective in mobile sink applications, due to sink mobility. In this paper, we propose a Sink-oriented Dynamic Location Service (SDLS) approach to handle sink mobility. In SDLS, we propose an Eight-Direction Anchor (EDA) system that acts as a location service server. EDA prevents intensive energy consumption at the border sensor nodes and thus provides energy balancing to all the sensor nodes. We then propose a Location-based Shortest Relay (LSR) scheme that efficiently forwards (or relays) data from a source node to a sink along a minimal-delay path. Our results demonstrate that SDLS not only provides an efficient and scalable location service, but also reduces the average data communication overhead in scenarios with multiple and moving sinks and sources. PMID:22573964

  17. ms-data-core-api: an open-source, metadata-oriented library for computational proteomics

    PubMed Central

    Perez-Riverol, Yasset; Uszkoreit, Julian; Sanchez, Aniel; Ternent, Tobias; del Toro, Noemi; Hermjakob, Henning; Vizcaíno, Juan Antonio; Wang, Rui

    2015-01-01

    Summary: The ms-data-core-api is a free, open-source library for developing computational proteomics tools and pipelines. The Application Programming Interface, written in Java, enables rapid tool creation by providing a robust, pluggable programming interface and common data model. The data model is based on controlled vocabularies/ontologies and captures the whole range of data types included in common proteomics experimental workflows, going from spectra to peptide/protein identifications to quantitative results. The library contains readers for three of the most used Proteomics Standards Initiative standard file formats: mzML, mzIdentML, and mzTab. In addition to mzML, it also supports other common mass spectra data formats: dta, ms2, mgf, pkl, apl (text-based), mzXML and mzData (XML-based). It can also be used to read PRIDE XML, the original format used by the PRIDE database, one of the world's leading proteomics resources. Finally, we present a set of algorithms and tools whose implementation illustrates the simplicity of developing applications using the library. Availability and implementation: The software is freely available at https://github.com/PRIDE-Utilities/ms-data-core-api. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: juan@ebi.ac.uk PMID:25910694

  18. Computational fluid dynamics investigation of human aspiration in low velocity air: orientation effects on nose-breathing simulations.

    PubMed

    Anderson, Kimberly R; Anthony, T Renée

    2014-06-01

    An understanding of how particles are inhaled into the human nose is important for developing samplers that measure biologically relevant estimates of exposure in the workplace. While previous computational mouth-breathing investigations of particle aspiration have been conducted in slow moving air, nose breathing still required exploration. Computational fluid dynamics was used to estimate nasal aspiration efficiency for an inhaling humanoid form in low velocity wind speeds (0.1-0.4 m s(-1)). Breathing was simplified as continuous inhalation through the nose. Fluid flow and particle trajectories were simulated over seven discrete orientations relative to the oncoming wind (0, 15, 30, 60, 90, 135, 180°). Sensitivities of the model simplification and methods were assessed, particularly the placement of the recessed nostril surface and the size of the nose. Simulations identified higher aspiration (13% on average) when compared to published experimental wind tunnel data. Significant differences in aspiration were identified between nose geometry, with the smaller nose aspirating an average of 8.6% more than the larger nose. Differences in fluid flow solution methods accounted for 2% average differences, on the order of methodological uncertainty. Similar trends to mouth-breathing simulations were observed including increasing aspiration efficiency with decreasing freestream velocity and decreasing aspiration with increasing rotation away from the oncoming wind. These models indicate nasal aspiration in slow moving air occurs only for particles <100 µm. PMID:24665111

  19. Computational Fluid Dynamics Investigation of Human Aspiration in Low Velocity Air: Orientation Effects on Nose-Breathing Simulations

    PubMed Central

    Anderson, Kimberly R.; Anthony, T. Renée

    2014-01-01

    An understanding of how particles are inhaled into the human nose is important for developing samplers that measure biologically relevant estimates of exposure in the workplace. While previous computational mouth-breathing investigations of particle aspiration have been conducted in slow moving air, nose breathing still required exploration. Computational fluid dynamics was used to estimate nasal aspiration efficiency for an inhaling humanoid form in low velocity wind speeds (0.1–0.4 m s−1). Breathing was simplified as continuous inhalation through the nose. Fluid flow and particle trajectories were simulated over seven discrete orientations relative to the oncoming wind (0, 15, 30, 60, 90, 135, 180°). Sensitivities of the model simplification and methods were assessed, particularly the placement of the recessed nostril surface and the size of the nose. Simulations identified higher aspiration (13% on average) when compared to published experimental wind tunnel data. Significant differences in aspiration were identified between nose geometry, with the smaller nose aspirating an average of 8.6% more than the larger nose. Differences in fluid flow solution methods accounted for 2% average differences, on the order of methodological uncertainty. Similar trends to mouth-breathing simulations were observed including increasing aspiration efficiency with decreasing freestream velocity and decreasing aspiration with increasing rotation away from the oncoming wind. These models indicate nasal aspiration in slow moving air occurs only for particles <100 µm. PMID:24665111

  20. Computational Study of Low Energy Nuclear Scattering from Metal Nuclei

    NASA Astrophysics Data System (ADS)

    Jaramillo, Danelle; Hira, Ajit; Pacheco, Jose; Salazar, Justin

    2014-03-01

    We continue our study of the interactions between different nuclear species with a computational investigation of the scattering of low-energy nuclei of H through F (Z <= 9) from palladium, nickel and other metals. First, a FORTRAN computer program was developed to compute stopping cross sections and scattering angles in Pd and other metals for the small nuclear projectiles, using Monte Carlo calculations. This code allows for different angles of incidence. Next, simulations were performed over the energy interval from 10 to 140 keV. The computational results thus obtained are compared with relevant experimental data. The data are further analyzed to identify periodic trends in terms of the atomic number of the projectile. Such studies have potential applications in nuclear physics and in nuclear medicine.
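    The Monte Carlo ingredient of such a code can be sketched as a multiple-scattering random walk: each collision deflects the projectile by a small random angle, so the accumulated deflection grows roughly as the square root of the number of collisions. This is a toy illustration under that assumption, not the authors' FORTRAN code, and the per-collision step width is arbitrary:

```python
import math
import random

# Toy multiple-scattering random walk (illustration only, not the
# authors' FORTRAN code; the per-collision step width is arbitrary).
# Each collision adds a small Gaussian deflection in two transverse
# directions, so the accumulated angle grows roughly as sqrt(N).

def scatter_angle(n_collisions, step_deg, seed=0):
    """Return the accumulated polar deflection after n_collisions."""
    rng = random.Random(seed)
    theta_x = theta_y = 0.0
    for _ in range(n_collisions):
        theta_x += rng.gauss(0.0, step_deg)
        theta_y += rng.gauss(0.0, step_deg)
    return math.hypot(theta_x, theta_y)

# Thicker targets (more collisions) give larger mean deflections.
mean_thin = sum(scatter_angle(100, 0.1, s) for s in range(200)) / 200
mean_thick = sum(scatter_angle(400, 0.1, s) for s in range(200)) / 200
```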

  1. Computational Analysis of Energy Pooling to Harvest Low-Energy Solar Energy in Organic Photovoltaic Devices

    NASA Astrophysics Data System (ADS)

    Lacount, Michael; Shaheen, Sean; Rumbles, Garry; van de Lagemaat, Jao; Hu, Nan; Ostrowski, Dave; Lusk, Mark

    2014-03-01

    Current photovoltaic devices typically do not utilize low-energy sunlight, leaving large sections of the solar spectrum untapped. It is possible, though, to absorb such radiation, generating low-energy excitons, and then pool them to create higher-energy excitons, which can result in an increase in efficiency. Calculating the rates at which such upconversion processes occur requires an accounting of all possible molecular quantum electrodynamics (QED) pathways. There are two paths associated with the upconversion. The cooperative mechanism involves a three-body interaction in which low-energy excitons are transferred sequentially onto an acceptor molecule. The accretive pathway requires that an exciton transfer its energy to a second exciton, which subsequently transfers its energy to the acceptor molecule. We have computationally modeled both pathways, obtaining rates using a combination of DFT and many-body Green's function theory. The simulation platform is exercised by considering upconversion events in a material composed of a high-energy-absorbing core of hexabenzocoronene (HBC) and low-energy-absorbing arms of oligothiophene. In addition, we estimate the rates of all competing processes in order to judge the relative efficiencies of the two mechanisms.

  2. Can computed crystal energy landscapes help understand pharmaceutical solids?

    PubMed

    Price, Sarah L; Braun, Doris E; Reutzel-Edens, Susan M

    2016-06-01

    Computational crystal structure prediction (CSP) methods can now be applied to the smaller pharmaceutical molecules currently in drug development. We review the recent uses of computed crystal energy landscapes for pharmaceuticals, concentrating on examples where they have been used in collaboration with industrial-style experimental solid form screening. There is a strong complementarity in aiding experiment to find and characterise practically important solid forms and understanding the nature of the solid form landscape. PMID:27067116

  3. Multiple Energy Computer Tomography (MECT) at the NSLS: Status report

    SciTech Connect

    Dilmanian, F.A.; Wu, X.Y.; Chen, Z.; Ren, B.; Slatkin, D.N.; Chapman, D.; Schleifer, M.; Staicu, F.A.; Thomlinson, W.

    1994-09-01

    The status of the Multiple Energy Computed Tomography (MECT) system, a synchrotron-based computed tomography (CT) system, is described. MECT, which uses monochromatic beams from the X17 superconducting wiggler beam line at the National Synchrotron Light Source (NSLS), will be used for imaging the human head and neck. An earlier prototype MECT produced images of phantoms and living rodents. This report summarizes the studies with the prototype and describes the design, construction, and test results of the clinical MECT system components.

  4. Opportunities for discovery: Theory and computation in Basic Energy Sciences

    SciTech Connect

    Harmon, Bruce; Kirby, Kate; McCurdy, C. William

    2005-01-11

    New scientific frontiers, recent advances in theory, and rapid increases in computational capabilities have created compelling opportunities for theory and computation to advance the scientific mission of the Office of Basic Energy Sciences (BES). The prospects for success in the experimental programs of BES will be enhanced by pursuing these opportunities. This report makes the case for an expanded research program in theory and computation in BES. The Subcommittee on Theory and Computation of the Basic Energy Sciences Advisory Committee was charged with identifying current and emerging challenges and opportunities for theoretical research within the scientific mission of BES, paying particular attention to how computing will be employed to enable that research. A primary purpose of the Subcommittee was to identify those investments that are necessary to ensure that theoretical research will have maximum impact in the areas of importance to BES, and to assure that BES researchers will be able to exploit the entire spectrum of computational tools, including leadership-class computing facilities. The Subcommittee's Findings and Recommendations are presented in Section VII of this report.

  5. Complete description of ionization energy and electron affinity in organic solids: Determining contributions from electronic polarization, energy band dispersion, and molecular orientation

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Yamada, Kazuto; Tsutsumi, Jun'ya; Sato, Naoki

    2015-08-01

    Ionization energy and electron affinity in organic solids are understood in terms of a single molecule perturbed by solid-state effects such as polarization energy, band dispersion, and molecular orientation as primary factors. However, no work has been done to determine the individual contributions experimentally. In this work, the electron affinities of thin films of pentacene and perfluoropentacene with different molecular orientations are determined to a precision of 0.1 eV using low-energy inverse photoemission spectroscopy. Based on the precisely determined electron affinities in the solid state together with the corresponding data of the ionization energies and other energy parameters, we quantitatively evaluate the contribution of these effects. It turns out that the bandwidth as well as the polarization energy contributes to the ionization energy and electron affinity in the solid state while the effect of the surface dipole is at most a few eV and does not vary with the molecular orientation. As a result, we conclude that the molecular orientation dependence of the ionization energy and electron affinity of organic solids originates from the orientation-dependent polarization energy in the film.

  6. Computing the Casimir energy using the point-matching method

    SciTech Connect

    Lombardo, F. C.; Mazzitelli, F. D.; Vazquez, M.; Villar, P. I.

    2009-09-15

    We use a point-matching approach to numerically compute the Casimir interaction energy for a waveguide formed by two perfect conductors of arbitrary cross section. We present the method and describe the procedure used to obtain the numerical results. First, our technique is tested on geometries with known solutions, such as concentric and eccentric cylinders. We then apply the point-matching technique to compute the Casimir interaction energy for new geometries, such as concentric corrugated cylinders and cylinders inside conductors with focal lines.

  7. Limits of Free Energy Computation for Protein-Ligand Interactions

    PubMed Central

    Merz, Kenneth M.

    2010-01-01

    A detailed error analysis is presented for the computation of protein-ligand interaction energies. In particular, we show that it is probable that even highly accurate computed binding free energies have errors that represent a large percentage of the target free energies of binding. This is due to the observation that the error of computed energies increases quasi-linearly with the number of interactions present in a protein-ligand complex. This principle is expected to hold true for any system that involves an ever-increasing number of inter- or intra-molecular interactions (e.g., ab initio protein folding). We introduce the concept of best-case scenario errors (BCSerrors) that can be routinely applied to docking and scoring exercises and used to provide error bars for the computed binding free energies. These BCSerrors form a basis by which one can evaluate the outcome of a docking and scoring exercise. Moreover, the resultant error analysis enables the formation of a hypothesis that defines the best direction to proceed in order to improve the scoring functions used in molecular docking studies. PMID:20467461
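    The error-growth argument can be caricatured in two lines (a simplified reading, not the paper's exact BCSerror formula): if each of n pairwise interaction energies carries an error sigma, independent errors add in quadrature (best case) while fully correlated errors add linearly.

```python
import math

# Caricature of the error-growth argument (a simplified reading, not
# the paper's exact BCSerror definition): with n interaction-energy
# terms each carrying an error sigma, independent errors add in
# quadrature (best case) while fully correlated errors add linearly.

def best_case_error(n_interactions, sigma_per_term):
    return math.sqrt(n_interactions) * sigma_per_term

def worst_case_error(n_interactions, sigma_per_term):
    return n_interactions * sigma_per_term

# Even a modest 0.1 kcal/mol per-term error over 100 contacts yields a
# best-case error bar of about 1 kcal/mol on the summed energy.
bce = best_case_error(100, 0.1)
wce = worst_case_error(100, 0.1)
```

    Either scaling grows without bound in the number of interactions, which is the paper's central point: more contacts mean larger unavoidable error bars.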

  8. Orientation dependences of surface morphologies and energies of iron-gallium alloys

    NASA Astrophysics Data System (ADS)

    Costa, Marcio; Wang, Hui; Hu, Jun; Wu, Ruqian; Na, Suok-Min; Chun, Hyunsuk; Flatau, Alison B.

    2016-05-01

    We investigated the surface energies of several low-index surfaces of D03-type FeGa alloys (Galfenol), using density functional theory (DFT) simulations and contact angle measurements. DFT calculations predict that (1) the Ga-covered (110) surface of Galfenol is more stable under Ga-rich conditions, while the Ga-covered (001) surface becomes more favorable under Ga-poor conditions; and (2) a full Ga overlayer tends to form on top of Galfenol surfaces regardless of their orientation; both predictions agree with experimental observation. We also studied Ga segregation in the bcc Fe matrix in order to explore the possibility of Ga precipitation away from Fe. We found that Fe-Ga separation is unlikely to occur, since Ga diffusion toward the surface is effectively self-stopped once the Ga overlayers form on the facets.

  9. Applications of dual energy computed tomography in abdominal imaging.

    PubMed

    Lestra, T; Mulé, S; Millet, I; Carsin-Vu, A; Taourel, P; Hoeffel, C

    2016-06-01

    Dual energy computed tomography (CT) is an imaging technique based on data acquisition at two different energy settings. Recent advances in CT have allowed data acquisition and almost simultaneous analysis of two X-ray spectra at different energy levels, resulting in novel developments in the field of abdominal imaging. The technique is widely used in cardiovascular imaging, especially for pulmonary embolism work-up, and is now also increasingly applied in abdominal imaging. With dual-energy CT it is possible to obtain virtual unenhanced images from monochromatic reconstructions as well as attenuation maps of different elements, thereby improving detection and characterization of a variety of renal, adrenal, hepatic and pancreatic abnormalities. Dual-energy CT can also provide information regarding urinary calculi composition. This article reviews and illustrates the different applications of dual-energy CT in routine abdominal imaging. PMID:26993967

  10. View southeast of computer controlled energy monitoring system. System replaced ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View southeast of computer controlled energy monitoring system. System replaced strip chart recorders and other instruments under the direct observation of the load dispatcher. - Thirtieth Street Station, Load Dispatch Center, Thirtieth & Market Streets, Railroad Station, Amtrak (formerly Pennsylvania Railroad Station), Philadelphia, Philadelphia County, PA

  11. Energy efficient hybrid computing systems using spin devices

    NASA Astrophysics Data System (ADS)

    Sharad, Mrigank

    Emerging spin devices like magnetic tunnel junctions (MTJs), spin valves and domain wall magnets (DWM) have opened new avenues for spin-based logic design. This work explored potential computing applications which can exploit such devices for higher energy efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin devices to gain large benefits at the system level. As an example, lateral spin valves (LSV) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque based devices possess several interesting properties that can be exploited for ultra-low power computation. The analog character of spin currents facilitates non-Boolean computation like majority evaluation, which can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ~20 mV, resulting in small computation power. Moreover, since nanomagnets inherently act as memory elements, these devices can facilitate integration of logic and memory in interesting ways. The spin-based neurons can be integrated with CMOS and other emerging devices, leading to different classes of neuromorphic/non-von-Neumann architectures. The spin-based designs involve `mixed-mode' processing and hence can provide very compact and ultra-low energy solutions for complex computation blocks, both digital and analog. Such low-power hybrid designs can be suitable for various data processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications, based on a device-circuit co-simulation framework, predict more than ~100x improvement in computation energy as compared to state-of-the-art CMOS designs, for optimal spin-device parameters.
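    The majority evaluation attributed to the magneto-metallic neurons can be captured behaviorally (device physics abstracted away; this is an illustration, not the thesis' device model): the net spin current is a weighted sum of bipolar inputs, and the output nanomagnet settles to the polarity of the majority.

```python
# Behavioral sketch of a majority-evaluating spin neuron (device physics
# abstracted away; an illustration, not the thesis' device model).
# The net spin current is a weighted sum of bipolar inputs, and the
# output nanomagnet settles to the polarity of that sum.

def spin_neuron(inputs, weights):
    """inputs: bipolar values (+1/-1); returns the settled magnet state."""
    net_spin_current = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net_spin_current > 0 else -1

# With unit weights the neuron reduces to a 3-input majority gate.
out_hi = spin_neuron([1, 1, -1], [1.0, 1.0, 1.0])
out_lo = spin_neuron([-1, 1, -1], [1.0, 1.0, 1.0])
```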

  12. Development of problem-oriented software packages for numerical studies and computer-aided design (CAD) of gyrotrons

    NASA Astrophysics Data System (ADS)

    Damyanova, M.; Sabchevski, S.; Zhelyazkov, I.; Vasileva, E.; Balabanova, E.; Dankov, P.; Malinov, P.

    2016-03-01

    Gyrotrons are the most powerful sources of coherent CW (continuous wave) radiation in the frequency range situated between the long-wavelength edge of the infrared light (far-infrared region) and the microwaves, i.e., in the region of the electromagnetic spectrum usually called the THz gap (or T-gap), since the output power of other devices (e.g., solid-state oscillators) operating in this interval is several orders of magnitude lower. In recent years, the unique capabilities of sub-THz and THz gyrotrons have opened the road to many novel and prospective applications in various physical studies and advanced high-power terahertz technologies. In this paper, we present the current status and functionality of the problem-oriented software packages (most notably GYROSIM and GYREOSS) used for numerical studies, computer-aided design (CAD) and optimization of gyrotrons for diverse applications. They consist of a hierarchy of codes specialized for the modelling and simulation of different subsystems of the gyrotrons (EOS, resonant cavity, etc.) and are based on adequate physical models, efficient numerical methods and algorithms.

  13. Requirements for supercomputing in energy research: The transition to massively parallel computing

    SciTech Connect

    Not Available

    1993-02-01

    This report discusses: the emergence of a practical path to TeraFlop computing and beyond; the requirements of energy research programs at DOE; implementation of a supercomputer production computing environment on massively parallel computers; and implementation of the user transition to massively parallel computing.

  14. An Atomistic Statistically Effective Energy Function for Computational Protein Design.

    PubMed

    Topham, Christopher M; Barbe, Sophie; André, Isabelle

    2016-08-01

    Shortcomings in the definition of effective free-energy surfaces of proteins are recognized to be a major contributory factor responsible for the low success rates of existing automated methods for computational protein design (CPD). The formulation of an atomistic statistically effective energy function (SEEF) suitable for a wide range of CPD applications and its derivation from structural data extracted from protein domains and protein-ligand complexes are described here. The proposed energy function comprises nonlocal atom-based and local residue-based SEEFs, which are coupled using a novel atom connectivity number factor to scale short-range, pairwise, nonbonded atomic interaction energies and a surface-area-dependent cavity energy term. This energy function was used to derive additional SEEFs describing the unfolded-state ensemble of any given residue sequence based on computed average energies for partially or fully solvent-exposed fragments in regions of irregular structure in native proteins. Relative thermal stabilities of 97 T4 bacteriophage lysozyme mutants were predicted from calculated energy differences for folded and unfolded states with an average unsigned error (AUE) of 0.84 kcal mol(-1) when compared to experiment. To demonstrate the utility of the energy function for CPD, further validation was carried out in tests of its capacity to recover cognate protein sequences and to discriminate native and near-native protein folds, loop conformers, and small-molecule ligand binding poses from non-native benchmark decoys. Experimental ligand binding free energies for a diverse set of 80 protein complexes could be predicted with an AUE of 2.4 kcal mol(-1) using an additional energy term to account for the loss in ligand configurational entropy upon binding. The atomistic SEEF is expected to improve the accuracy of residue-based coarse-grained SEEFs currently used in CPD and to extend the range of applications of extant atom-based protein statistical energy functions.

  15. Large Scale Computing and Storage Requirements for High Energy Physics

    SciTech Connect

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years.

  16. Computer simulated building energy consumption for verification of energy conservation measures in network facilities

    NASA Technical Reports Server (NTRS)

    Plankey, B.

    1981-01-01

    A computer program called ECPVER (Energy Consumption Program - Verification) was developed to simulate all energy loads for any number of buildings. The program computes simulated daily, monthly, and yearly energy consumption which can be compared with actual meter readings for the same time period. Such comparison can lead to validation of the model under a variety of conditions, allowing it to be used to predict future energy savings due to energy conservation measures. Predicted energy savings can then be compared with actual savings to verify the effectiveness of those energy conservation changes. This verification procedure is planned to be an important advancement in the Deep Space Network Energy Project, which seeks to reduce energy cost and consumption at all DSN Deep Space Stations.
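    The verification step described above amounts to comparing simulated and metered consumption series. A minimal sketch with hypothetical numbers (the function name, values, and 10% tolerance are illustrative inventions, not part of ECPVER):

```python
# Sketch of the verification step: compare simulated monthly consumption
# against meter readings and flag months whose relative deviation
# exceeds a tolerance. Function name, numbers and the 10% tolerance are
# hypothetical illustrations, not part of ECPVER.

def verify(simulated_kwh, metered_kwh, tolerance=0.10):
    """Return (month_index, relative_error) pairs exceeding tolerance."""
    flagged = []
    for month, (sim, met) in enumerate(zip(simulated_kwh, metered_kwh)):
        rel_err = abs(sim - met) / met
        if rel_err > tolerance:
            flagged.append((month, rel_err))
    return flagged

simulated = [1200.0, 1150.0, 980.0]
metered = [1180.0, 1400.0, 1000.0]
flagged_months = verify(simulated, metered)
```

    Months that pass validate the model; persistent flags after a conservation retrofit would instead be compared against the predicted savings.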

  17. Orientation dependences of surface morphologies and energies of iron-gallium alloys

    NASA Astrophysics Data System (ADS)

    Costa, Marcio; Wang, Hui; Hu, Jun; Wu, Ruqian; Na, Suok-Min; Chun, Hyunsuk; Flatau, Alison B.; University of California, Irvine Collaboration; University of Maryland Collaboration

    Magnetostrictive Fe-Ga alloys (Galfenol) are very promising rare-earth-free materials for applications in sensors, actuators, energy harvesters and spintronic devices. Investigation of the surface energies of Galfenol based on density functional theory (DFT) calculations and contact angle measurements may provide a fundamental understanding and guidance for further optimizing the performance of Galfenol. DFT calculations predict that the Ga-covered (110) surface of Galfenol is more stable under Ga-rich conditions, while the Ga-covered (001) surface becomes more favorable under Ga-poor conditions. Moreover, a full Ga overlayer tends to form on top of Galfenol surfaces regardless of their orientation; both predictions agree with experimental observation. Further studies of Ga segregation in the bcc Fe matrix demonstrate that Fe-Ga separation is unlikely to occur, since Ga diffusion toward the surface is effectively self-stopped once the Ga overlayers form on the facets. This work was supported by the National Science Foundation through the SUSCHEM-Collaborative Research program (Grant Numbers: DMR-1310494 at UCI and DMR-1310447 at UMD). Work at UCI was also supported by the ONR (Grant Number: N00014-13-1-0445).

  18. Dielectric energy of orientation in dead and living cells of Schizosaccharomyces pombe. Fitting of experimental results to a theoretical model.

    PubMed Central

    Asencor, F J; Santamaría, C; Iglesias, F J; Domínguez, A

    1993-01-01

    Using the experimental data obtained with killed cells of Schizosaccharomyces pombe (1), we have formulated a theoretical model that is able to predict cell orientation for microorganisms with ellipsoidal or cylindrical shapes as a function of the frequency of the electric field and of the conductivity of the external medium. In this model, comparison of the difference in potential energy for both orientations parallel-perpendicular with the thermal agitation energy allows one to interpret the intervals where these orientations occur. The model implies that the conductivity of the cytoplasm is slightly higher than that of the external medium. This assumption is easy to understand taking into account that not all the intracytoplasmic material is released to the exterior during cell death. PMID:8324197

  19. Publication patterns in HEP computing

    NASA Astrophysics Data System (ADS)

    Pia, M. G.; Basaglia, T.; Bell, Z. W.; Dressendorfer, P. V.

    2012-12-01

    An overview is presented of the evolution of computing-oriented publications in high energy physics following the start of LHC operation. Quantitative analyses are illustrated, which document the production of scholarly papers on computing-related topics by high energy physics experiments and core tools projects, and the citations they receive. Several scientometric indicators are analyzed to characterize the role of computing in high energy physics literature. Distinctive features of software-oriented and hardware-oriented scholarly publications are highlighted. Current patterns and trends are compared to the situation in previous generations of experiments.

  20. Energy consumption program: A computer model simulating energy loads in buildings

    NASA Technical Reports Server (NTRS)

    Stoller, F. W.; Lansing, F. L.; Chai, V. W.; Higgins, S.

    1978-01-01

    The JPL energy consumption computer program, developed as a useful tool in the ongoing building modification studies of the DSN energy conservation project, is described. The program simulates building heating and cooling loads and computes thermal and electric energy consumption and cost. The accuracy of computations is not sacrificed, however, since the results lie within a ±10 percent margin of readings from energy meters. The program is carefully structured to reduce both the user's time and running cost by requesting minimal information from the user and eliminating many time-consuming internal computational loops. Many unique features were added to handle two-level electronics control rooms not found in any other program.

  1. Providing a computing environment for a high energy physics workshop

    SciTech Connect

    Nicholls, J.

    1991-03-01

    Although computing facilities have been provided at conferences and workshops remote from the host institution for some years, the equipment provided has rarely been capable of providing for much more than simple editing and electronic mail over leased lines. This presentation describes the pioneering effort by the Computing Department/Division at Fermilab in providing a local computing facility with world-wide networking capability for the Physics at Fermilab in the 1990's workshop held in Breckenridge, Colorado, in August 1989, as well as the enhanced facilities provided for the 1990 Summer Study on High Energy Physics at Snowmass, Colorado, in June/July 1990. Issues discussed include type and sizing of the facilities, advance preparations, shipping, on-site support, as well as an evaluation of the value of the facility to the workshop participants.

  2. Approximating ground and excited state energies on a quantum computer

    NASA Astrophysics Data System (ADS)

    Hadfield, Stuart; Papageorgiou, Anargyros

    2015-04-01

    Approximating ground and a fixed number of excited state energies, or equivalently low-order Hamiltonian eigenvalues, is an important but computationally hard problem. Typically, the cost of classical deterministic algorithms grows exponentially with the number of degrees of freedom. Under general conditions, and using a perturbation approach, we provide a quantum algorithm that produces estimates of a constant number of different low-order eigenvalues. The algorithm relies on a set of trial eigenvectors, whose construction depends on the particular Hamiltonian properties. We illustrate our results by considering a special case of the time-independent Schrödinger equation. Our algorithm computes estimates of a constant number of different low-order eigenvalues, with bounded error and bounded success probability, at a cost polynomial in the relevant problem parameters. This extends our earlier results on algorithms for estimating the ground state energy. The technique we present is sufficiently general to apply to problems beyond the application studied in this paper.

  3. Massive affordable computing using ARM processors in high energy physics

    NASA Astrophysics Data System (ADS)

    Smith, J. W.; Hamilton, A.

    2015-05-01

    High Performance Computing is relevant in many applications around the world, particularly high energy physics. Experiments such as ATLAS, CMS, ALICE and LHCb generate huge amounts of data which need to be stored and analyzed at server farms located on site at CERN and around the world. Apart from the initial cost of setting up an effective server farm, the costs of power consumption and cooling are significant. The proposed solution to reduce costs without losing performance is to make use of ARM® processors found in nearly all smartphones and tablet computers. Their low power consumption, low cost and respectable processing speed make them an interesting choice for future large scale parallel data processing centers. Benchmarks on the CortexTM-A series of ARM® processors, including the HPL and PMBW suites, will be presented, and preliminary results from the PROOF benchmark in the context of high energy physics will be analyzed.

  4. Department of Energy: MICS (Mathematical, Information, and Computational Sciences Division). High performance computing and communications program

    SciTech Connect

    1996-06-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, "The DOE Program in HPCC"), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW). The information pointed to by the URL is updated frequently, and the interested reader is urged to access the WWW for the latest information.

  5. Department of Energy Mathematical, Information, and Computational Sciences Division: High Performance Computing and Communications Program

    SciTech Connect

    1996-11-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, The DOE Program in HPCC), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW).

  6. Energy Proportionality and Performance in Data Parallel Computing Clusters

    SciTech Connect

    Kim, Jinoh; Chou, Jerry; Rotem, Doron

    2011-02-14

    Energy consumption in datacenters has recently become a major concern due to rising operational costs and scalability issues. Recent solutions to this problem propose the principle of energy proportionality, i.e., the amount of energy consumed by the server nodes must be proportional to the amount of work performed. For data parallelism and fault tolerance purposes, most common file systems used in MapReduce-type clusters maintain a set of replicas for each data block. A covering set is a group of nodes that together contain at least one replica of the data blocks needed for performing computing tasks. In this work, we develop and analyze algorithms to maintain energy proportionality by discovering a covering set that minimizes energy consumption while placing the remaining nodes in low-power standby mode. Our algorithms can also discover covering sets in heterogeneous computing environments. In order to allow more data parallelism, we generalize our algorithms so that they can discover k-covering sets, i.e., sets of nodes that contain at least k replicas of the data blocks. Our experimental results show that we can achieve substantial energy savings without significant performance loss in diverse cluster configurations and working environments.
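    The covering-set idea above can be sketched as a greedy set-cover heuristic. This is a hypothetical illustration of the concept, not the authors' actual algorithm; the cluster layout and node names are made up.

    ```python
    # Greedy sketch of k-covering-set discovery (illustrative only):
    # repeatedly activate the node holding the most still-uncovered
    # block replicas until every block has at least k active replicas.
    # The remaining nodes could then enter low-power standby mode.

    def find_k_covering_set(replicas, k=1):
        """replicas: dict mapping node -> set of block ids it stores.
        Returns a set of nodes that together hold at least k replicas
        of every block."""
        blocks = set().union(*replicas.values())
        need = {b: k for b in blocks}          # replicas still required per block
        active, candidates = set(), dict(replicas)
        while any(c > 0 for c in need.values()):
            # pick the node covering the most still-needed blocks
            node = max(candidates,
                       key=lambda n: sum(1 for b in candidates[n] if need[b] > 0))
            gain = sum(1 for b in candidates[node] if need[b] > 0)
            if gain == 0:
                raise ValueError("not enough replicas for k=%d" % k)
            for b in candidates.pop(node):
                if need[b] > 0:
                    need[b] -= 1
            active.add(node)
        return active

    # Hypothetical 4-node cluster storing replicas of blocks 1-4.
    cluster = {"n1": {1, 2, 3}, "n2": {2, 3, 4}, "n3": {1, 4}, "n4": {3}}
    cover = find_k_covering_set(cluster, k=1)
    ```

    Greedy set cover is not guaranteed optimal, but it keeps the active node count small, which is the quantity driving energy consumption here.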

  7. IR Spectra and Bond Energies Computed Using DFT

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles; Andrews, Lester; Arnold, James (Technical Monitor)

    2000-01-01

    The combination of density functional theory (DFT) frequencies and infrared (IR) intensities and experimental spectra is a very powerful tool in the identification of molecules and ions. The computed and measured isotopic ratios make the identification much more secure than frequencies and intensities alone. This will be illustrated using several examples, such as Mn(CO)n and Mn(CO)n-. The accuracy of DFT metal-ligand bond energies will also be discussed.

  8. Thrifty: An Exascale Architecture for Energy Proportional Computing

    SciTech Connect

    Torrellas, Josep

    2014-12-23

    The objective of this project is to design key aspects of an exascale architecture called Thrifty that addresses the challenges of power/energy efficiency, resiliency, and performance in exascale systems. The project includes work on computer architecture (Josep Torrellas from University of Illinois), compilation (Daniel Quinlan from Lawrence Livermore National Laboratory), runtime and applications (Laura Carrington from University of California San Diego), and circuits (Wilfred Pinfold from Intel Corporation).

  9. KEYNOTE: Simulation, computation, and the Global Nuclear Energy Partnership

    NASA Astrophysics Data System (ADS)

    Reis, Victor, Dr.

    2006-01-01

    Dr. Victor Reis delivered the keynote talk at the closing session of the conference. The talk was forward looking and focused on the importance of advanced computing for large-scale nuclear energy goals such as Global Nuclear Energy Partnership (GNEP). Dr. Reis discussed the important connections of GNEP to the Scientific Discovery through Advanced Computing (SciDAC) program and the SciDAC research portfolio. In the context of GNEP, Dr. Reis talked about possible fuel leasing configurations, strategies for their implementation, and typical fuel cycle flow sheets. A major portion of the talk addressed lessons learnt from ‘Science Based Stockpile Stewardship’ and the Accelerated Strategic Computing Initiative (ASCI) initiative and how they can provide guidance for advancing GNEP and SciDAC goals. Dr. Reis’s colorful and informative presentation included international proverbs, quotes and comments, in tune with the international flavor that is part of the GNEP philosophy and plan. He concluded with a positive and motivating outlook for peaceful nuclear energy and its potential to solve global problems. An interview with Dr. Reis, addressing some of the above issues, is the cover story of Issue 2 of the SciDAC Review and available at http://www.scidacreview.org This summary of Dr. Reis’s PowerPoint presentation was prepared by Institute of Physics Publishing, the complete PowerPoint version of Dr. Reis’s talk at SciDAC 2006 is given as a multimedia attachment to this summary.

  10. Crystallographic preferred orientations may develop in nanocrystalline materials on fault planes due to surface energy interactions

    NASA Astrophysics Data System (ADS)

    Toy, Virginia G.; Mitchell, Thomas M.; Druiventak, Anthony; Wirth, Richard

    2015-09-01

    A layer of substantially noncrystalline material, composed of partially annealed nanopowder with local melt, was experimentally generated by comminution during ˜1.5 mm total slip at ˜2.5 × 10-6 m s-1, Pconf ˜ 0.5 GPa, and 450°C or 600°C, on saw cut surfaces in novaculite. The partially annealed nanopowder comprises angular grains mostly 5-200 nm diameter in a variably dense packing arrangement. A sharp transition from wall rock to partially annealed nanopowder illustrates that the nanopowder effectively localizes shear, consistent with generation of nanoparticles during initial fragmentation, not by progressive grain size reduction. Dislocation densities in nanopowder grains or immediate wall rock are not significantly high, but there are planar plastic defects spaced at 5-200 nm parallel to the host quartz grain's basal plane. We propose these plastic defects developed into through-going fractures to generate nanocrystals. The partially annealed nanopowder has a crystallographic preferred orientation (CPO) that we hypothesize developed due to surface energy interactions to maximize coincident site lattices (CSL) during annealing. This mechanism may also have generated CPOs recently described in micro/nanocrystalline calcite fault gouges.

  11. New developments in the multiscale hybrid energy density computational method

    NASA Astrophysics Data System (ADS)

    Min, Sun; Shanying, Wang; Dianwu, Wang; Chongyu, Wang

    2016-01-01

    Further developments in the hybrid multiscale energy density method are proposed on the basis of our previous papers. The key points are as follows. (i) The theoretical method for the determination of the weight parameter in the energy coupling equation of transition region in multiscale model is given via constructing underdetermined equations. (ii) By applying the developed mathematical method, the weight parameters have been given and used to treat some problems in homogeneous charge density systems, which are directly related with multiscale science. (iii) A theoretical algorithm has also been presented for treating non-homogeneous systems of charge density. The key to the theoretical computational methods is the decomposition of the electrostatic energy in the total energy of density functional theory for probing the spanning characteristic at atomic scale, layer by layer, by which the choice of chemical elements and the defect complex effect can be understood deeply. (iv) The numerical computational program and design have also been presented. Project supported by the National Basic Research Program of China (Grant No. 2011CB606402) and the National Natural Science Foundation of China (Grant No. 51071091).

  12. Energy and time determine scaling in biological and computer designs.

    PubMed

    Moses, Melanie; Bezerra, George; Edwards, Benjamin; Brown, James; Forrest, Stephanie

    2016-08-19

    Metabolic rate in animals and power consumption in computers are analogous quantities that scale similarly with size. We analyse vascular systems of mammals and on-chip networks of microprocessors, where natural selection and human engineering, respectively, have produced systems that minimize both energy dissipation and delivery times. Using a simple network model that simultaneously minimizes energy and time, our analysis explains empirically observed trends in the scaling of metabolic rate in mammals and power consumption and performance in microprocessors across several orders of magnitude in size. Just as the evolutionary transitions from unicellular to multicellular animals in biology are associated with shifts in metabolic scaling, our model suggests that the scaling of power and performance will change as computer designs transition to decentralized multi-core and distributed cyber-physical systems. More generally, a single energy-time minimization principle may govern the design of many complex systems that process energy, materials and information. This article is part of the themed issue 'The major synthetic evolutionary transitions'. PMID:27431524

  13. Preferred orientation in carbon and boron nitride: Does a thermodynamic theory of elastic strain energy get it right?

    SciTech Connect

    McCarty, K. F.

    1999-09-01

    We address whether the elastic strain-energy theory (minimizing the Gibbs energy of a stressed crystal) of McKenzie and co-workers [D. R. McKenzie and M. M. M. Bilek, J. Vac. Sci. Technol. A 16, 2733 (1998)] adequately explains the preferred orientation observed in carbon and BN films. In the formalism, the Gibbs energy of the cubic materials diamond and cubic boron nitride includes the strain that occurs when the phases form, through specific structural transformations, from graphitic precursors. This treatment violates the thermodynamic requirement that the Gibbs energy be a path-independent state function. If the cubic phases are treated using the same (path-independent) formalism applied to the graphitic materials, the crystallographic orientation of lowest Gibbs energy is not that observed experimentally. For graphitic (hexagonal) carbon and BN, an elastic strain approach seems inappropriate because the compressive stresses in energetically deposited films are orders of magnitude higher than the elastic limit of the materials. Furthermore, using the known elastic constants of either ordered or disordered graphitic materials, the theory does not predict the orientation observed by experiment. © 1999 American Vacuum Society.

  14. Computing at the leading edge: Research in the energy sciences

    SciTech Connect

    Mirin, A.A.; Van Dyke, P.T.

    1994-02-01

    The purpose of this publication is to highlight selected scientific challenges that have been undertaken by the DOE Energy Research community. The high quality of the research reflected in these contributions underscores the growing importance both of the Grand Challenge scientific efforts sponsored by DOE and of the related supporting technologies that the National Energy Research Supercomputer Center (NERSC) and other facilities are able to provide. The continued improvement of the computing resources available to DOE scientists is prerequisite to ensuring their future progress in solving the Grand Challenges. Titles of articles included in this publication include: the numerical tokamak project; static and animated molecular views of a tumorigenic chemical bound to DNA; toward a high-performance climate systems model; modeling molecular processes in the environment; lattice Boltzmann models for flow in porous media; parallel algorithms for modeling superconductors; parallel computing at the Superconducting Super Collider Laboratory; the advanced combustion modeling environment; adaptive methodologies for computational fluid dynamics; lattice simulations of quantum chromodynamics; simulating high-intensity charged-particle beams for the design of high-power accelerators; electronic structure and phase stability of random alloys.

  15. Exascale for Energy: The Role of Exascale Computing in Energy Security

    SciTech Connect

    Authors, Various

    2010-07-15

    How will the United States satisfy energy demand in a tightening global energy marketplace while, at the same time, reducing greenhouse gas emissions? Exascale computing, expected to be available within the next eight to ten years, may play a crucial role in answering that question by enabling a paradigm shift from test-based to science-based design and engineering. Computational modeling of complete power generation systems and engines, based on scientific first principles, will accelerate the improvement of existing energy technologies and the development of new transformational technologies by pre-selecting the designs most likely to be successful for experimental validation, rather than relying on trial and error. The predictive understanding of complex engineered systems made possible by computational modeling will also reduce construction and operations costs, optimize performance, and improve safety. Exascale computing will make possible fundamentally new approaches to quantifying the uncertainty of safety and performance engineering. This report discusses potential contributions of exascale modeling in four areas of energy production and distribution: nuclear power, combustion, the electrical grid, and renewable sources of energy, which include hydrogen fuel, bioenergy conversion, photovoltaic solar energy, and wind turbines. Examples of current research are taken from projects funded by the U.S. Department of Energy (DOE) Office of Science at universities and national laboratories, with a special focus on research conducted at Lawrence Berkeley National Laboratory.

  16. Energy-resolved computed tomography: first experimental results

    NASA Astrophysics Data System (ADS)

    Shikhaliev, Polad M.

    2008-10-01

    First experimental results with energy-resolved computed tomography (CT) are reported. The contrast-to-noise ratio (CNR) in CT has been improved with x-ray energy weighting for the first time. Further, x-ray energy weighting improved the CNR in material decomposition CT when applied to CT projections prior to dual-energy subtraction. The existing CT systems use an energy (charge) integrating x-ray detector that provides a signal proportional to the energy of the x-ray photon. Thus, the x-ray photons with lower energies are scored less than those with higher energies. This underestimates the contribution of lower energy photons that would provide higher contrast. The highest CNR can be achieved if the x-ray photons are scored by a factor that increases as the x-ray energy decreases. This could be performed by detecting each x-ray photon separately and measuring its energy. The energy selective CT data could then be saved, and any weighting factor could be applied digitally to a detected x-ray photon. The CT system includes a photon counting detector with linear arrays of pixels made from cadmium zinc telluride (CZT) semiconductor. A cylindrical phantom with 10.2 cm diameter made from tissue-equivalent material was used for CT imaging. The phantom included contrast elements representing calcifications, iodine, adipose and glandular tissue. The x-ray tube voltage was 120 kVp. The energy selective CT data were acquired and used to generate energy-weighted and material-selective CT images. The energy-weighted and material decomposition CT images were generated using a single CT scan at a fixed x-ray tube voltage. For material decomposition the x-ray spectrum was digitally split into low- and high-energy parts and dual-energy subtraction was applied. The x-ray energy weighting resulted in CNR improvement of calcifications and iodine by a factor of 1.40 and 1.63, respectively, as compared to conventional charge integrating CT. The x-ray energy weighting was also applied
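    The energy-weighting idea described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the bin energies, counts, and weighting exponent are assumptions chosen only to show how low-energy photons get emphasized.

    ```python
    # Sketch of projection-based x-ray energy weighting for a
    # photon-counting detector (illustrative; bin layout and the
    # weighting exponent are assumptions, not values from the paper).
    # Lower-energy photons carry more contrast, so they receive larger
    # weights before the projection value is formed.

    def weighted_projection(counts_per_bin, bin_energies_keV, exponent=3.0):
        """counts_per_bin: photon counts in each energy bin of one pixel.
        Weights ~ E^-exponent emphasize low-energy photons; by contrast,
        a charge-integrating detector effectively weights each photon
        by its energy E."""
        weights = [e ** -exponent for e in bin_energies_keV]
        return sum(w * c for w, c in zip(weights, counts_per_bin))

    # Hypothetical four-bin measurement for one detector pixel.
    counts = [120, 300, 260, 90]
    energies = [35.0, 55.0, 75.0, 100.0]
    signal = weighted_projection(counts, energies)
    ```

    With this weighting, a photon at 35 keV contributes far more to the projection than one at 100 keV, which is the mechanism behind the CNR gain reported for calcifications and iodine.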

  17. Computational predictions of energy materials using density functional theory

    NASA Astrophysics Data System (ADS)

    Jain, Anubhav; Shin, Yongwoo; Persson, Kristin A.

    2016-01-01

    In the search for new functional materials, quantum mechanics is an exciting starting point. The fundamental laws that govern the behaviour of electrons have the possibility, at the other end of the scale, to predict the performance of a material for a targeted application. In some cases, this is achievable using density functional theory (DFT). In this Review, we highlight DFT studies predicting energy-related materials that were subsequently confirmed experimentally. The attributes and limitations of DFT for the computational design of materials for lithium-ion batteries, hydrogen production and storage materials, superconductors, photovoltaics and thermoelectric materials are discussed. In the future, we expect that the accuracy of DFT-based methods will continue to improve and that growth in computing power will enable millions of materials to be virtually screened for specific applications. Thus, these examples represent a first glimpse of what may become a routine and integral step in materials discovery.

  18. Dictionary-based image denoising for dual energy computed tomography

    NASA Astrophysics Data System (ADS)

    Mechlem, Korbinian; Allner, Sebastian; Mei, Kai; Pfeiffer, Franz; Noël, Peter B.

    2016-03-01

    Compared to conventional computed tomography (CT), dual energy CT allows for improved material decomposition by conducting measurements at two distinct energy spectra. Since radiation exposure is a major concern in clinical CT, there is a need for tools to reduce the noise level in images while preserving diagnostic information. One way to achieve this goal is the application of image-based denoising algorithms after an analytical reconstruction has been performed. We have developed a modified dictionary denoising algorithm for dual energy CT aimed at exploiting the high spatial correlation between images obtained from different energy spectra. Both the low- and high-energy images are partitioned into small patches which are subsequently normalized. Combined patches with improved signal-to-noise ratio are formed by a weighted addition of corresponding normalized patches from both images. Assuming that corresponding low- and high-energy image patches are related by a linear transformation, the signal in both patches is added coherently while noise is neglected. Conventional dictionary denoising is then performed on the combined patches. Compared to conventional dictionary denoising and bilateral filtering, our algorithm achieved superior performance in terms of qualitative and quantitative image quality measures. We demonstrate, in simulation studies, that this approach can produce 2D histograms of the high- and low-energy reconstructions which are characterized by significantly improved material features and separation. Moreover, in comparison to other approaches that attempt denoising without simultaneously using both energy signals, superior similarity to the ground truth can be found with our proposed algorithm.
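    The combined-patch step described above can be sketched as follows. This is an illustrative reimplementation of the idea, not the authors' code; the patch size, weights, and synthetic data are assumptions.

    ```python
    import numpy as np

    # Sketch of forming a combined patch from corresponding low- and
    # high-energy patches (illustrative only): each patch is normalized
    # to zero mean / unit norm, then added with weights, so correlated
    # signal adds coherently while independent noise partially cancels.
    # Dictionary denoising would then run on the combined patches.

    def combine_patches(patch_low, patch_high, w_low=0.5, w_high=0.5):
        def normalize(p):
            p = p - p.mean()
            n = np.linalg.norm(p)
            return p / n if n > 0 else p
        return w_low * normalize(patch_low) + w_high * normalize(patch_high)

    # Synthetic example: two patches sharing structure via a linear
    # transformation, each corrupted by independent noise.
    rng = np.random.default_rng(0)
    structure = rng.standard_normal((8, 8))
    low = structure + 0.3 * rng.standard_normal((8, 8))
    high = 2.0 * structure + 0.3 * rng.standard_normal((8, 8))
    combined = combine_patches(low, high)
    ```

    Because normalization removes the linear scaling between the two energy channels, the shared structure aligns in both normalized patches and the combined patch tracks it more closely than either noisy input alone.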

  19. Vertically Oriented Arrays of ReS2 Nanosheets for Electrochemical Energy Storage and Electrocatalysis.

    PubMed

    Gao, Jian; Li, Lu; Tan, Jiawei; Sun, Hao; Li, Baichang; Idrobo, Juan Carlos; Singh, Chandra Veer; Lu, Toh-Ming; Koratkar, Nikhil

    2016-06-01

    Transition-metal dichalcogenide (TMD) nanolayers show potential as high-performance catalysts in energy conversion and storage devices. Synthetic TMDs produced by chemical-vapor deposition (CVD) methods tend to grow parallel to the growth substrate. Here, we show that with the right precursors and appropriate tuning of the CVD growth conditions, ReS2 nanosheets can be made to orient perpendicular to the growth substrate. This accomplishes two important objectives; first, it drastically increases the wetted or exposed surface area of the ReS2 sheets, and second, it exposes the sharp edges and corners of the ReS2 sheets. We show that these structural features of the vertically grown ReS2 sheets can be exploited to significantly improve their performance as polysulfide immobilizers and electrochemical catalysts in lithium-sulfur (Li-S) batteries and in hydrogen evolution reactions (HER). After 300 cycles, the specific capacity of the Li-S battery with vertical ReS2 catalyst is retained above 750 mA h g⁻¹, with only ∼0.063% capacity decay per cycle, much better than the baseline battery (without ReS2), which shows ∼0.184% capacity decay per cycle under the same test conditions. As a HER catalyst, the vertical ReS2 provides very small onset overpotential (<100 mV) and an exceptional exchange-current density (∼67.6 μA/cm²), which is vastly superior to the baseline electrode without ReS2. PMID:27187173

  20. Single-exposure dual-energy computed radiography.

    PubMed

    Stewart, B K; Huang, H K

    1990-01-01

    This paper focuses on analysis and development of a single-exposure dual-energy digital radiographic method using computed radiography (Fuji FCR-101 storage phosphor system). A detector sandwich consisting of storage phosphor imaging plates and an interdetector filter is used. The goal of this process is to provide a simple dual-energy method using typical plane-projection radiographic equipment and techniques. This approach exploits the transparency of the storage phosphor plates, using radiographic information that would be otherwise lost, to provide energy selective information essentially as a by-product of the radiographic examination. In order to effectively make use of the large dynamic range of the storage phosphor imaging plates (10,000:1), a computed radiography image reading mode of fixed analog-to-digital converter gain and variable photomultiplier sensitivity provides image data which can be related to relative incident exposure for export to the decomposition algorithm. Scatter rejection requirements necessitated crossed 12:1 grids for a field size of 36 x 36 cm. Optimal technique parameters, obtained from computer simulation by minimizing the aluminum- and Plexiglas-equivalent image uncertainty at constant absorbed dose, were: 100 kVp with a 0.15-mm-thick tin (Sn) interdetector filter for the lung field. This yields a surface exposure of 23 mR and a surface absorbed dose of 0.26 mGy for a 23-cm-thick chest. Clinical application in evaluation of the solitary pulmonary nodule is discussed, along with an image set demonstrating this application. PMID:2233574
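    The aluminum/Plexiglas basis decomposition mentioned above can be sketched as a per-pixel 2x2 linear inversion. This is a toy illustration only; the effective attenuation coefficients below are made-up numbers, not calibrated values from the paper.

    ```python
    import numpy as np

    # Toy sketch of dual-energy basis-material decomposition
    # (illustrative; coefficients are assumptions).  With one log-signal
    # per energy, each is modeled as a linear combination of the two
    # basis-material thicknesses, and the 2x2 system is inverted.

    # MU[energy][material]: assumed effective attenuation per cm
    MU = np.array([[0.50, 0.20],    # low-energy plate:  [aluminum, Plexiglas]
                   [0.30, 0.15]])   # high-energy plate: [aluminum, Plexiglas]

    def decompose(log_low, log_high):
        """Given -ln(I/I0) at the two energies, return the equivalent
        (aluminum, Plexiglas) thicknesses in cm."""
        return np.linalg.solve(MU, np.array([log_low, log_high]))

    # Forward-project known thicknesses, then recover them.
    t_true = np.array([1.0, 3.0])          # cm of each basis material
    log_low, log_high = MU @ t_true
    t_est = decompose(log_low, log_high)
    ```

    In practice the energy-dependent system response would be calibrated rather than written down, and noise makes the inversion ill-conditioned when the two spectra overlap heavily, which is why the interdetector filter matters.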

  1. Computing model independent perturbations in dark energy and modified gravity

    SciTech Connect

    Battye, Richard A.; Pearson, Jonathan A. E-mail: jonathan.pearson@durham.ac.uk

    2014-03-01

    We present a methodology for computing model independent perturbations in dark energy and modified gravity. This is done from the Lagrangian for perturbations, by showing how field content, symmetries, and physical principles are often sufficient ingredients for closing the set of perturbed fluid equations. The fluid equations close once "equations of state for perturbations" are identified: these are linear combinations of fluid and metric perturbations which construct gauge invariant entropy and anisotropic stress perturbations for broad classes of theories. Our main results are the proof of the equation of state for perturbations presented in a previous paper, and the development of the required calculational tools.

  2. PRaVDA: High Energy Physics towards proton Computed Tomography

    NASA Astrophysics Data System (ADS)

    Price, T.

    2016-07-01

    Proton radiotherapy is an increasingly popular modality for treating cancers of the head and neck, and in paediatrics. To maximise the potential of proton radiotherapy it is essential to know the distribution, and more importantly the proton stopping powers, of the body tissues between the proton beam and the tumour. A stopping power map could be measured directly, and uncertainties in the treatment vastly reduced, if the patient were imaged with protons instead of conventional x-rays. Here we outline the application of technologies developed for High Energy Physics to provide clinical-quality proton Computed Tomography, thereby reducing range uncertainties and enhancing the treatment of cancer.

  3. Computed Potential Energy Surfaces and Minimum Energy Pathway for Chemical Reactions

    NASA Technical Reports Server (NTRS)

    Walch, Stephen P.; Langhoff, S. R. (Technical Monitor)

    1994-01-01

    Computed potential energy surfaces are often required for computation of such observables as rate constants as a function of temperature, product branching ratios, and other detailed properties. We have found that computation of the stationary points/reaction pathways using CASSCF/derivative methods, followed by use of the internally contracted CI method with the Dunning correlation consistent basis sets to obtain accurate energetics, gives useful results for a number of chemically important systems. Applications to complex reactions leading to NO and soot formation in hydrocarbon combustion are discussed.

  4. EDITORIAL: Optical orientation Optical orientation

    NASA Astrophysics Data System (ADS)

    SAME ADDRESS *, Yuri; Landwehr, Gottfried

    2008-11-01

    priority of the discovery in the literature, which was partly caused by the existence of the Iron Curtain. I had already enjoyed contact with Boris in the 1980s when the two volumes of Landau Level Spectroscopy were being prepared [2]. He was one of the pioneers of magneto-optics in semiconductors. In the 1950s the band structure of germanium and silicon was investigated by magneto-optical methods, mainly in the United States. No excitonic effects were observed and the band structure parameters were determined without taking account of excitons. However, working with cuprous oxide, which is a direct semiconductor with a relatively large energy gap, Zakharchenya and his co-worker Seysan showed that in order to obtain correct band structure parameters, it is necessary to take excitons into account [3]. Around 1970 Boris started work on optical orientation. Early work by Hanle in Germany in the 1920s on the depolarization of luminescence in mercury vapour by a transverse magnetic field was not appreciated for a long time. Only in the late 1940s did Kastler and co-workers in Paris begin a systematic study of optical pumping, which led to the award of a Nobel prize. The ideas of optical pumping were first applied to solid state physics by Georges Lampel in 1968. He demonstrated optical orientation of free carriers in silicon. The detection method was nuclear magnetic resonance; optically oriented free electrons dynamically polarized the 29Si nuclei of the host lattice. The first optical detection of spin orientation was demonstrated with the III-V semiconductor GaSb by Parsons. Due to the various interaction mechanisms of spins with their environment, the effects occurring in semiconductors are naturally more complex than those in atoms. Optical detection is now the preferred method to detect spin alignment in semiconductors. 
The orientation of spins in crystals pumped with circularly polarized light is deduced from the degree of circular polarization of the recombination

  5. Electrolytes induce long-range orientational order and free energy changes in the H-bond network of bulk water

    PubMed Central

    Chen, Yixing; Okur, Halil I.; Gomopoulos, Nikolaos; Macias-Romero, Carlos; Cremer, Paul S.; Petersen, Poul B.; Tocci, Gabriele; Wilkins, David M.; Liang, Chungwen; Ceriotti, Michele; Roke, Sylvie

    2016-01-01

    Electrolytes interact with water in many ways: changing dipole orientation, inducing charge transfer, and distorting the hydrogen-bond network in the bulk and at interfaces. Numerous experiments and computations have detected short-range perturbations that extend up to three hydration shells around individual ions. We report a multiscale investigation of the bulk and surface of aqueous electrolyte solutions that extends from the atomic scale (using atomistic modeling) to nanoscopic length scales (using bulk and interfacial femtosecond second harmonic measurements) to the macroscopic scale (using surface tension experiments). Electrolytes induce orientational order at concentrations starting at 10 μM that causes nonspecific changes in the surface tension of dilute electrolyte solutions. Aside from ion-dipole interactions, collective hydrogen-bond interactions are crucial and explain the observed difference of a factor of 6 between light water and heavy water. PMID:27152357

  6. Study of the orientation and energy partition of three-jet events in hadronic Z0 decays

    NASA Astrophysics Data System (ADS)

    Abe, K.; Abe, K.; Abt, I.; Akagi, T.; Allen, N. J.; Ash, W. W.; Aston, D.; Baird, K. G.; Baltay, C.; Band, H. R.; Barakat, M. B.; Baranko, G.; Bardon, O.; Barklow, T.; Bashindzhagyan, G. L.; Bazarko, A. O.; Ben-David, R.; Benvenuti, A. C.; Bilei, G. M.; Bisello, D.; Blaylock, G.; Bogart, J. R.; Bolton, T.; Bower, G. R.; Brau, J. E.; Breidenbach, M.; Bugg, W. M.; Burke, D.; Burnett, T. H.; Burrows, P. N.; Busza, W.; Calcaterra, A.; Caldwell, D. O.; Calloway, D.; Camanzi, B.; Carpinelli, M.; Cassell, R.; Castaldi, R.; Castro, A.; Cavalli-Sforza, M.; Chou, A.; Church, E.; Cohn, H. O.; Coller, J. A.; Cook, V.; Cotton, R.; Cowan, R. F.; Coyne, D. G.; Crawford, G.; D'oliveira, A.; Damerell, C. J.; Daoudi, M.; de Sangro, R.; de Simone, P.; dell'orso, R.; Dervan, P. J.; Dima, M.; Dong, D. N.; Du, P. Y.; Dubois, R.; Eisenstein, B. I.; Elia, R.; Etzion, E.; Falciai, D.; Fan, C.; Fero, M. J.; Frey, R.; Furuno, K.; Gillman, T.; Gladding, G.; Gonzalez, S.; Hallewell, G. D.; Hart, E. L.; Hasan, A.; Hasegawa, Y.; Hasuko, K.; Hedges, S.; Hertzbach, S. S.; Hildreth, M. D.; Huber, J.; Huffer, M. E.; Hughes, E. W.; Hwang, H.; Iwasaki, Y.; Jackson, D. J.; Jacques, P.; Jaros, J.; Johnson, A. S.; Johnson, J. R.; Johnson, R. A.; Junk, T.; Kajikawa, R.; Kalelkar, M.; Kang, H. J.; Karliner, I.; Kawahara, H.; Kendall, H. W.; Kim, Y.; King, M. E.; King, R.; Kofler, R. R.; Krishna, N. M.; Kroeger, R. S.; Labs, J. F.; Langston, M.; Lath, A.; Lauber, J. A.; Leith, D. W.; Lia, V.; Liu, M. X.; Liu, X.; Loreti, M.; Lu, A.; Lynch, H. L.; Ma, J.; Mancinelli, G.; Manly, S.; Mantovani, G.; Markiewicz, T. W.; Maruyama, T.; Massetti, R.; Masuda, H.; Mazzucato, E.; McKemey, A. K.; Meadows, B. T.; Messner, R.; Mockett, P. M.; Moffeit, K. C.; Mours, B.; Muller, D.; Nagamine, T.; Narita, S.; Nauenberg, U.; Neal, H.; Nussbaum, M.; Ohnishi, Y.; Osborne, L. S.; Panvini, R. S.; Park, H.; Pavel, T. J.; Peruzzi, I.; Piccolo, M.; Piemontese, L.; Pieroni, E.; Pitts, K. T.; Plano, R. 
J.; Prepost, R.; Prescott, C. Y.; Punkar, G. D.; Quigley, J.; Ratcliff, B. N.; Reeves, T. W.; Reidy, J.; Rensing, P. E.; Rizzo, T. G.; Rochester, L. S.; Rowson, P. C.; Russell, J. J.; Saxton, O. H.; Schalk, T.; Schindler, R. H.; Schumm, B. A.; Sen, S.; Serbo, V. V.; Shaevitz, M. H.; Shank, J. T.; Shapiro, G.; Sherden, D. J.; Shmakov, K. D.; Simopoulos, C.; Sinev, N. B.; Smith, S. R.; Snyder, J. A.; Stamer, P.; Steiner, H.; Steiner, R.; Strauss, M. G.; Su, D.; Suekane, F.; Sugiyama, A.; Suzuki, S.; Swartz, M.; Szumilo, A.; Takahashi, T.; Taylor, F. E.; Torrence, E.; Trandafir, A. I.; Turk, J. D.; Usher, T.; Va'vra, J.; Vannini, C.; Vella, E.; Venuti, J. P.; Verdier, R.; Verdini, P. G.; Wagner, S. R.; Waite, A. P.; Watts, S. J.; Weidemann, A. W.; Weiss, E. R.; Whitaker, J. S.; White, S. L.; Wickens, F. J.; Williams, D. A.; Williams, D. C.; Williams, S. H.; Willocq, S.; Wilson, R. J.; Wisniewski, W. J.; Woods, M.; Word, G. B.; Wyss, J.; Yamamoto, R. K.; Yamartino, J. M.; Yang, X.; Yellin, S. J.; Young, C. C.; Yuta, H.; Zapalac, G.; Zdarko, R. W.; Zeitlin, C.; Zhou, J.

    1997-03-01

    We have measured the distributions of the jet energies in e+e- → qq̄g events, and of the three orientation angles of the event plane, using hadronic Z0 decays collected in the SLD experiment at SLAC. We find that the data are well described by perturbative QCD incorporating vector gluons. We have also studied models of scalar and tensor gluon production and find them to be incompatible with our data.

  7. Energy Conservation Using Dynamic Voltage Frequency Scaling for Computational Cloud.

    PubMed

    Florence, A Paulin; Shanthi, V; Simon, C B Sunil

    2016-01-01

    Cloud computing is a new technology which supports resource sharing on a "Pay as you go" basis around the world. It provides various services such as SaaS, IaaS, and PaaS. Computation is part of IaaS, and all computational requests must be served efficiently with optimal power utilization in the cloud. Recently, various algorithms have been developed to reduce power consumption, and the Dynamic Voltage and Frequency Scaling (DVFS) scheme has also been used in this context. In this paper we devise a methodology which analyzes the behavior of a given cloud request and identifies the type of algorithm associated with it. Once the type of algorithm is identified, its time complexity is calculated from its asymptotic notation. Using a best-fit strategy the appropriate host is identified and the incoming job is allocated to it. From the measured time complexity the required clock frequency of the host is determined, and the CPU frequency is scaled up or down accordingly using the DVFS scheme, saving up to 55% of total energy consumption. PMID:27239551

  8. Energy Conservation Using Dynamic Voltage Frequency Scaling for Computational Cloud

    PubMed Central

    Florence, A. Paulin; Shanthi, V.; Simon, C. B. Sunil

    2016-01-01

    Cloud computing is a new technology which supports resource sharing on a “Pay as you go” basis around the world. It provides various services such as SaaS, IaaS, and PaaS. Computation is part of IaaS, and all computational requests must be served efficiently with optimal power utilization in the cloud. Recently, various algorithms have been developed to reduce power consumption, and the Dynamic Voltage and Frequency Scaling (DVFS) scheme has also been used in this context. In this paper we devise a methodology which analyzes the behavior of a given cloud request and identifies the type of algorithm associated with it. Once the type of algorithm is identified, its time complexity is calculated from its asymptotic notation. Using a best-fit strategy the appropriate host is identified and the incoming job is allocated to it. From the measured time complexity the required clock frequency of the host is determined, and the CPU frequency is scaled up or down accordingly using the DVFS scheme, saving up to 55% of total energy consumption. PMID:27239551
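A minimal sketch of the scheduling idea described in the abstract: estimate the operation count from the job's asymptotic complexity, pick a host by best fit, then choose the lowest DVFS frequency that still meets the deadline. Names, frequency levels, and capacities are hypothetical; the paper's actual implementation is not reproduced here:

```python
import math

def pick_dvfs_level(levels_hz, ops, deadline_s, cycles_per_op=1.0):
    """Lowest available clock frequency that still meets the deadline
    (None if even the fastest level is too slow)."""
    need = ops * cycles_per_op / deadline_s
    for f in sorted(levels_hz):
        if f >= need:
            return f
    return None

def best_fit_host(free_capacity, job_size):
    """Best-fit allocation: the host whose free capacity exceeds the
    job size by the smallest margin."""
    fits = [(cap - job_size, host) for host, cap in free_capacity.items()
            if cap >= job_size]
    return min(fits)[1] if fits else None

# An O(n log n) request on n = 1e6 items with a 1 s deadline needs only
# ~2e7 cycles/s, so the lowest DVFS level (1.2 GHz here) suffices.
n = 1_000_000
ops = n * math.log2(n)
freq = pick_dvfs_level([2.4e9, 1.2e9, 1.8e9], ops, 1.0)
host = best_fit_host({"h1": 8, "h2": 2, "h3": 4}, 2)
```

Running at the lowest adequate frequency is where the energy saving comes from, since dynamic power grows superlinearly with clock rate.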

  9. Computational design of RNAs with complex energy landscapes.

    PubMed

    Höner zu Siederdissen, Christian; Hammer, Stefan; Abfalter, Ingrid; Hofacker, Ivo L; Flamm, Christoph; Stadler, Peter F

    2013-12-01

    RNA has become an integral building material in synthetic biology. Dominated by their secondary structures, which can be computed efficiently, RNA molecules are amenable not only to in vitro and in vivo selection, but also to rational, computation-based design. While the inverse folding problem of constructing an RNA sequence with a prescribed ground-state structure has received considerable attention for nearly two decades, there have been few efforts to design RNAs that can switch between distinct prescribed conformations. We introduce a user-friendly tool for designing RNA sequences that fold into multiple target structures. The underlying algorithm makes use of a combination of graph coloring and heuristic local optimization to find sequences whose energy landscapes are dominated by the prescribed conformations. A flexible interface allows the specification of a wide range of design goals. We demonstrate that bi- and tri-stable "switches" can be designed easily with moderate computational effort for the vast majority of compatible combinations of desired target structures. RNAdesign is freely available under the GPL-v3 license. PMID:23818234
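The graph-coloring step can be illustrated with a toy sketch restricted to G-C pairs: the union of the pairings from all target structures forms a dependency graph, and a 2-coloring of each connected component yields nucleotides compatible with every target. RNAdesign itself also permits A-U and G-U pairs and adds heuristic local optimization on the full energy model; everything below is a simplified illustration with hypothetical names:

```python
from collections import deque

def pairs(db):
    """Base pairs from a dot-bracket string."""
    stack, out = [], []
    for i, c in enumerate(db):
        if c == '(':
            stack.append(i)
        elif c == ')':
            out.append((stack.pop(), i))
    return out

def color_sequence(structures):
    """Assign G/C so every pairing in every target structure is
    Watson-Crick; returns None if the union graph has an odd cycle
    (impossible with pure G-C pairs)."""
    n = len(structures[0])
    adj = [set() for _ in range(n)]
    for s in structures:
        for i, j in pairs(s):
            adj[i].add(j); adj[j].add(i)
    seq = [None] * n
    for start in range(n):
        if seq[start] is not None or not adj[start]:
            continue
        seq[start] = 'G'
        q = deque([start])
        while q:
            i = q.popleft()
            want = 'C' if seq[i] == 'G' else 'G'
            for j in adj[i]:
                if seq[j] is None:
                    seq[j] = want; q.append(j)
                elif seq[j] != want:
                    return None      # odd cycle: no pure G-C coloring
    return ''.join(c or 'A' for c in seq)   # unpaired positions default to A

designed = color_sequence(["((..))", "(.())."])   # two target structures
```

Allowing G-U wobble pairs, as the real tool does, relaxes the 2-coloring constraint and makes many more structure combinations compatible.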

  10. Aiding Design of Wave Energy Converters via Computational Simulations

    NASA Astrophysics Data System (ADS)

    Jebeli Aqdam, Hejar; Ahmadi, Babak; Raessi, Mehdi; Tootkaboni, Mazdak

    2015-11-01

    With the increasing interest in renewable energy sources, wave energy converters will continue to gain attention as a viable alternative to current electricity production methods. It is therefore crucial to develop computational tools for the design and analysis of wave energy converters. A successful design requires balance between the design performance and cost. Here an analytical solution is used for the approximate analysis of interactions between a flap-type wave energy converter (WEC) and waves. The method is verified using other flow solvers and experimental test cases. Then the model is used in conjunction with a powerful heuristic optimization engine, Charged System Search (CSS), to explore the WEC design space. CSS is inspired by the behavior of charged particles. It searches the design space by treating candidate solutions as charged particles and moving them according to Coulomb's law of electrostatics and Newton's laws of motion to find the global optimum. Finally the impacts of changes in different design parameters on the power take-off of the superior WEC designs are investigated. National Science Foundation, CBET-1236462.
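A rough sketch of the Charged System Search idea on a stand-in objective: candidate solutions act as charged particles, charge grows with fitness, better particles attract worse ones via a Coulomb-like force, and positions follow a damped Newtonian update. The real study couples CSS to a hydrodynamic WEC model; this simplified update rule and its parameters are assumptions for illustration:

```python
import random

def charged_system_search(f, bounds, n=12, iters=150, seed=1):
    """Toy CSS minimizer: Coulomb-like attraction toward fitter
    particles, damped velocities, and a slowly shrinking step."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    best_x, best_f = None, float("inf")
    for it in range(iters):
        fit = [f(p) for p in pos]
        lo_f, hi_f = min(fit), max(fit)
        if lo_f < best_f:
            best_f = lo_f
            best_x = list(pos[fit.index(lo_f)])
        spread = (hi_f - lo_f) or 1e-12
        charge = [(hi_f - fi) / spread for fi in fit]  # best particle: q = 1
        shrink = 1.0 - it / iters                      # anneal the step size
        for i in range(n):
            force = [0.0] * dim
            for j in range(n):
                if fit[j] < fit[i]:                    # only better ones attract
                    d = [pos[j][k] - pos[i][k] for k in range(dim)]
                    r = max(sum(x * x for x in d) ** 0.5, 1e-9)
                    for k in range(dim):
                        force[k] += charge[j] * d[k] / r
            for k in range(dim):
                vel[i][k] = 0.5 * vel[i][k] + shrink * force[k]
                lo, hi = bounds[k]
                pos[i][k] = min(max(pos[i][k] + vel[i][k], lo), hi)
    return best_x, best_f

# Stand-in objective (the real one would be WEC power take-off from the
# hydrodynamic model): minimize a shifted sphere function on [-5, 5]^2.
x_best, f_best = charged_system_search(lambda p: sum((x - 1) ** 2 for x in p),
                                       [(-5.0, 5.0)] * 2)
```

The attraction-only rule means the incumbent best particle stays put while the rest of the swarm contracts around and past it, ratcheting the best-found design toward the optimum.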

  11. Computational assessment of several hydrogen-free high energy compounds.

    PubMed

    Tan, Bisheng; Huang, Ming; Long, Xinping; Li, Jinshan; Fan, Guijuan

    2016-01-01

    Tetrazino-tetrazine-tetraoxide (TTTO) is an attractive high energy compound, but unfortunately it has not yet been synthesized experimentally. Isomerization of TTTO leads to five isomers. Bond-separation energies were employed to compare the global stability of the six compounds; isomer 1 has the highest bond-separation energy (1204.6 kJ/mol), compared with TTTO (1151.2 kJ/mol). Thermodynamic properties of the six compounds were calculated theoretically, including standard formation enthalpies (solid and gaseous), standard fusion, vaporization, and sublimation enthalpies, lattice energies, and normal melting and boiling points. Their detonation performances were also computed, including detonation heat (Q, cal/g), detonation velocity (D, km/s), detonation pressure (P, GPa), and impact sensitivity (h50, cm); compared with TTTO (Q=1311.01 J/g, D=9.228 km/s, P=40.556 GPa, h50=12.7 cm), isomer 5 exhibits better detonation performance (Q=1523.74 J/g, D=9.389 km/s, P=41.329 GPa, h50=28.4 cm). PMID:26705845
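Detonation velocity and pressure of the kind quoted above are commonly estimated with the empirical Kamlet-Jacobs relations for CHNO explosives. A sketch with approximate textbook inputs for RDX as a sanity check; these numbers are illustrative and are not the paper's own data:

```python
def kamlet_jacobs(n_gas, m_gas, q_cal_g, rho):
    """Empirical Kamlet-Jacobs estimates for a CHNO explosive.
    n_gas: moles of gaseous detonation products per gram of explosive;
    m_gas: mean molar mass of those products (g/mol);
    q_cal_g: detonation heat (cal/g); rho: loading density (g/cm^3).
    Returns (D in km/s, P in GPa)."""
    phi = n_gas * m_gas ** 0.5 * q_cal_g ** 0.5
    d = 1.01 * phi ** 0.5 * (1 + 1.30 * rho)
    p = 1.558 * rho ** 2 * phi
    return d, p

# Approximate RDX inputs (rho ~ 1.80 g/cm^3) should reproduce the
# well-known D ~ 8.8 km/s and P ~ 34 GPa.
d_rdx, p_rdx = kamlet_jacobs(0.0338, 27.2, 1501, 1.80)
```

The strong density dependence (D grows linearly, P quadratically with rho) is why predicted crystal density matters as much as detonation heat in screening candidates.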

  12. Computer simulations of glasses: the potential energy landscape

    NASA Astrophysics Data System (ADS)

    Raza, Zamaan; Alling, Björn; Abrikosov, Igor A.

    2015-07-01

    We review the current state of research on glasses, discussing the theoretical background and computational models employed to describe them. This article focuses on the use of the potential energy landscape (PEL) paradigm to account for the phenomenology of glassy systems, and the way in which it can be applied in simulations and the interpretation of their results. This article provides a broad overview of the rich phenomenology of glasses, followed by a summary of the theoretical frameworks developed to describe this phenomenology. We discuss the background of the PEL in detail, the onerous task of how to generate computer models of glasses, various methods of analysing numerical simulations, and the literature on the most commonly used model systems. Finally, we tackle the problem of how to distinguish a good glass former from a good crystal former from an analysis of the PEL. In summarising the state of the potential energy landscape picture, we develop the foundations for new theoretical methods that allow the ab initio prediction of the glass-forming ability of new materials by analysis of the PEL.

  13. Computer simulations of glasses: the potential energy landscape.

    PubMed

    Raza, Zamaan; Alling, Björn; Abrikosov, Igor A

    2015-07-29

    We review the current state of research on glasses, discussing the theoretical background and computational models employed to describe them. This article focuses on the use of the potential energy landscape (PEL) paradigm to account for the phenomenology of glassy systems, and the way in which it can be applied in simulations and the interpretation of their results. This article provides a broad overview of the rich phenomenology of glasses, followed by a summary of the theoretical frameworks developed to describe this phenomenology. We discuss the background of the PEL in detail, the onerous task of how to generate computer models of glasses, various methods of analysing numerical simulations, and the literature on the most commonly used model systems. Finally, we tackle the problem of how to distinguish a good glass former from a good crystal former from an analysis of the PEL. In summarising the state of the potential energy landscape picture, we develop the foundations for new theoretical methods that allow the ab initio prediction of the glass-forming ability of new materials by analysis of the PEL. PMID:26139691

  14. Analyzing high energy physics data using database computing: Preliminary report

    NASA Technical Reports Server (NTRS)

    Baden, Andrew; Day, Chris; Grossman, Robert; Lifka, Dave; Lusk, Ewing; May, Edward; Price, Larry

    1991-01-01

    A proof of concept system is described for analyzing high energy physics (HEP) data using database computing. The system is designed to scale up to the size required for HEP experiments at the Superconducting SuperCollider (SSC) lab. These experiments will require collecting and analyzing approximately 10 to 100 million 'events' per year during proton colliding beam collisions. Each 'event' consists of a set of vectors with a total length of approx. one megabyte. This represents an increase of approx. 2 to 3 orders of magnitude in the amount of data accumulated by present HEP experiments. The system is called the HEPDBC System (High Energy Physics Database Computing System). At present, the Mark 0 HEPDBC System is completed, and can produce analysis of HEP experimental data approx. an order of magnitude faster than current production software on data sets of approx. 1 GB. The Mark 1 HEPDBC System is currently undergoing testing and is designed to analyze data sets 10 to 100 times larger.

  15. Computed Potential Energy Surfaces and Minimum Energy Pathways for Chemical Reactions

    NASA Technical Reports Server (NTRS)

    Walch, Stephen P.; Langhoff, S. R. (Technical Monitor)

    1994-01-01

    Computed potential energy surfaces are often required for computation of such parameters as rate constants as a function of temperature, product branching ratios, and other detailed properties. For some dynamics methods, global potential energy surfaces are required. In this case, it is necessary to obtain the energy at a complete sampling of all the possible arrangements of the nuclei, which are energetically accessible, and then a fitting function must be obtained to interpolate between the computed points. In other cases, characterization of the stationary points and the reaction pathway connecting them is sufficient. These properties may be readily obtained using analytical derivative methods. We have found that computation of the stationary points/reaction pathways using CASSCF/derivative methods, followed by use of the internally contracted CI method to obtain accurate energetics, gives useful results for a number of chemically important systems. The talk will focus on a number of applications including global potential energy surfaces, H + O2, H + N2, O(3p) + H2, and reaction pathways for complex reactions, including reactions leading to NO and soot formation in hydrocarbon combustion.
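The fit-and-interpolate step can be illustrated in one dimension, with a toy Morse curve standing in for expensive ab initio points; the paper's surfaces are many-dimensional and use far more sophisticated fitting functions than the piecewise-linear interpolant sketched here:

```python
import math, bisect

def morse(r, de=4.7, a=1.9, re=0.74):
    """Toy 1-D 'computed' surface: a Morse potential (eV, Angstrom)."""
    return de * (1 - math.exp(-a * (r - re))) ** 2

# Stand-in for expensive ab initio calculations: the surface is only
# known on a grid of sampled geometries.
grid = [0.5 + 0.02 * i for i in range(101)]      # r in [0.5, 2.5]
energies = [morse(r) for r in grid]

def fitted(r):
    """Fitting function interpolating between the computed points."""
    i = min(max(bisect.bisect_right(grid, r) - 1, 0), len(grid) - 2)
    t = (r - grid[i]) / (grid[i + 1] - grid[i])
    return (1 - t) * energies[i] + t * energies[i + 1]

# The interpolant reproduces the surface between the computed points:
err = max(abs(fitted(r) - morse(r)) for r in [0.61, 0.975, 1.333, 2.111])
```

In many dimensions the same idea applies, but the "complete sampling of all energetically accessible arrangements" grows combinatorially, which is why reaction-pathway characterization is often preferred when the dynamics method allows it.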

  16. Orientational Coherent Effects of High-Energy Particles in a LiNbO3 Crystal

    NASA Astrophysics Data System (ADS)

    Bagli, E.; Guidi, V.; Mazzolari, A.; Bandiera, L.; Germogli, G.; Sytov, A. I.; De Salvador, D.; Argiolas, A.; Bazzan, M.; Carnera, A.; Berra, A.; Bolognini, D.; Lietti, D.; Prest, M.; Vallazza, E.

    2015-07-01

    A bent lithium niobate strip was exposed to a 400-GeV/c proton beam at the external lines of CERN Super Proton Synchrotron to probe its capabilities versus coherent interactions of the particles with the crystal such as channeling and volume reflection. Lithium niobate (LiNbO3) exhibits an interplanar electric field comparable to that of Silicon (Si) and remarkable piezoelectric properties, which could be exploited for the realization of piezo-actuated devices for the control of high-energy particle beams. In contrast to Si and germanium (Ge), LiNbO3 shows an intriguing effect; in spite of a low channeling efficiency (3%), the volume reflection maintains a high deflection efficiency (83%). Such discrepancy was ascribed to the high concentration (10^4 per cm2) of dislocations in our sample, which was obtained from a commercial wafer. Indeed, it has been theoretically shown that a channeling efficiency comparable with that of Si or Ge would be attained with a crystal at low defect concentration (less than ten per cm2). To better understand the role of dislocations on volume reflection, we have worked out computer simulation via dynecharm++ Monte Carlo code to study the effect of dislocations on volume reflection. The results of the simulations agree with experimental records, demonstrating that volume reflection is more robust than channeling in the presence of dislocations.

  17. Orientational Coherent Effects of High-Energy Particles in a LiNbO3 Crystal.

    PubMed

    Bagli, E; Guidi, V; Mazzolari, A; Bandiera, L; Germogli, G; Sytov, A I; De Salvador, D; Argiolas, A; Bazzan, M; Carnera, A; Berra, A; Bolognini, D; Lietti, D; Prest, M; Vallazza, E

    2015-07-01

    A bent lithium niobate strip was exposed to a 400-GeV/c proton beam at the external lines of CERN Super Proton Synchrotron to probe its capabilities versus coherent interactions of the particles with the crystal such as channeling and volume reflection. Lithium niobate (LiNbO3) exhibits an interplanar electric field comparable to that of Silicon (Si) and remarkable piezoelectric properties, which could be exploited for the realization of piezo-actuated devices for the control of high-energy particle beams. In contrast to Si and germanium (Ge), LiNbO3 shows an intriguing effect; in spite of a low channeling efficiency (3%), the volume reflection maintains a high deflection efficiency (83%). Such discrepancy was ascribed to the high concentration (10^4 per cm2) of dislocations in our sample, which was obtained from a commercial wafer. Indeed, it has been theoretically shown that a channeling efficiency comparable with that of Si or Ge would be attained with a crystal at low defect concentration (less than ten per cm2). To better understand the role of dislocations on volume reflection, we have worked out computer simulation via dynecharm++ Monte Carlo code to study the effect of dislocations on volume reflection. The results of the simulations agree with experimental records, demonstrating that volume reflection is more robust than channeling in the presence of dislocations. PMID:26182106

  18. Adolescent girls' energy expenditure during dance simulation active computer gaming.

    PubMed

    Fawkner, Samantha G; Niven, Alisa; Thin, Alasdair G; Macdonald, Mhairi J; Oakes, Jemma R

    2010-01-01

    The objective of this study was to determine the energy expended and intensity of physical activity achieved by adolescent girls while playing on a dance simulation game. Twenty adolescent girls were recruited from a local secondary school. Resting oxygen uptake (VO2) and heart rate were analysed while sitting quietly and subsequently during approximately 30 min of game play, with 10 min at each of three increasing levels of difficulty. Energy expenditure was predicted from VO2 at rest and during game play at three levels of play, from which the metabolic equivalents (METs) of game playing were derived. Mean +/- standard deviation energy expenditure for levels 1, 2, and 3 was 3.63 +/- 0.58, 3.65 +/- 0.54, and 4.14 +/- 0.71 kcal/min respectively, while mean activity for each level of play was at least of moderate intensity (>3 METs). Dance simulation active computer games provide an opportunity for most adolescent girls to exercise at moderate intensity. Therefore, regular playing might contribute to daily physical activity recommendations for good health in this at-risk population. PMID:20013462
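The conversions behind figures like these are standard physiology: 1 MET is defined as an oxygen uptake of 3.5 ml/kg/min, and each litre of O2 consumed corresponds to roughly 4.85 kcal for a mixed diet. A small sketch with illustrative numbers, not the study's measurements:

```python
def mets(vo2_ml_kg_min):
    """Metabolic equivalents: 1 MET is defined as 3.5 ml O2/kg/min."""
    return vo2_ml_kg_min / 3.5

def kcal_per_min(vo2_l_min, kcal_per_litre_o2=4.85):
    """Energy expenditure from oxygen uptake; ~4.85 kcal per litre of O2
    for a mixed diet (the exact factor depends on substrate use)."""
    return vo2_l_min * kcal_per_litre_o2

# An illustrative 50 kg girl using 0.75 L O2/min burns ~3.6 kcal/min
# at ~4.3 METs -- consistent with the study's 'moderate intensity'
# (>3 METs) finding for the harder game levels.
ee = kcal_per_min(0.75)
intensity = mets(0.75 * 1000 / 50)      # 15 ml/kg/min
```

Because METs normalize by body mass while kcal/min does not, two players at the same game level can share an intensity classification yet differ in absolute energy expenditure.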

  19. Power/energy use cases for high performance computing.

    SciTech Connect

    Laros, James H.,; Kelly, Suzanne M; Hammond, Steven; Elmore, Ryan; Munch, Kristin

    2013-12-01

    Power and Energy have been identified as a first order challenge for future extreme scale high performance computing (HPC) systems. In practice the breakthroughs will need to be provided by the hardware vendors. But to make the best use of the solutions in an HPC environment, periodic tuning by facility operators and software components will likely be required. This document describes the actions and interactions needed to maximize power resources. It strives to cover the entire operational space that an HPC system occupies. The descriptions are presented as formal use cases, as documented in the Unified Modeling Language Specification [1]. The document is intended to provide a common understanding to the HPC community of the necessary management and control capabilities. Assuming a common understanding can be achieved, the next step will be to develop a set of Application Programming Interfaces (APIs) that hardware vendors and software developers could use to steer power consumption.

  20. Novel clinical applications of dual energy computed tomography.

    PubMed

    Kraśnicki, Tomasz; Podgórski, Przemysław; Guziński, Maciej; Czarnecka, Anna; Tupikowski, Krzysztof; Garcarek, Jerzy; Sąsiadek, Marek

    2012-01-01

    Dual energy CT (DECT) was conceived at the very beginning of the computed tomography era; however, the first DECT scanner was developed only in 2006. Nowadays there are three different types of DECT available: dual-source CT with 80(100) kVp and 140 kVp tubes (Siemens Medical Solution); a dual-layer multi-detector scanner with acquisition at 120 or 140 kVp (Philips Healthcare); and a CT unit with one rapid kVp-switching source and a new detector based on gemstone scintillator materials (GE Healthcare). This article describes the physical background and principles of DECT imaging as well as applications of this innovative method in routine clinical practice (renal stone differentiation, pulmonary perfusion, neuroradiology, and metallic implant imaging). The particular applications are illustrated by cases from the authors' material. PMID:23457140

  1. Direct computation of general chemical energy differences: Application to ionization potentials, excitation, and bond energies

    SciTech Connect

    Beste, Ariana; Harrison, Robert J; Yanai, Takeshi

    2006-01-01

    Chemists are mainly interested in energy differences. In contrast, most quantum chemical methods yield the total energy which is a large number compared to the difference and has therefore to be computed to a higher relative precision than would be necessary for the difference alone. Hence, it is desirable to compute energy differences directly, thereby avoiding the precision problem. Whenever it is possible to find a parameter which transforms smoothly from an initial to a final state, the energy difference can be obtained by integrating the energy derivative with respect to that parameter (cf. thermodynamic integration or adiabatic connection methods). If the dependence on the parameter is predominantly linear, accurate results can be obtained by single-point integration. In density functional theory (DFT) and Hartree-Fock, we applied the formalism to ionization potentials, excitation energies, and chemical bond breaking. Example calculations for ionization potentials and excitation energies showed that accurate results could be obtained with a linear estimate. For breaking bonds, we introduce a non-geometrical parameter which gradually turns the interaction between two fragments of a molecule on. The interaction changes the potentials used to determine the orbitals as well as constraining the orbitals to be orthogonal.

  2. Direct computation of general chemical energy differences: Application to ionization potentials, excitation, and bond energies

    NASA Astrophysics Data System (ADS)

    Beste, A.; Harrison, R. J.; Yanai, T.

    2006-08-01

    Chemists are mainly interested in energy differences. In contrast, most quantum chemical methods yield the total energy which is a large number compared to the difference and has therefore to be computed to a higher relative precision than would be necessary for the difference alone. Hence, it is desirable to compute energy differences directly, thereby avoiding the precision problem. Whenever it is possible to find a parameter which transforms smoothly from an initial to a final state, the energy difference can be obtained by integrating the energy derivative with respect to that parameter (cf. thermodynamic integration or adiabatic connection methods). If the dependence on the parameter is predominantly linear, accurate results can be obtained by single-point integration. In density functional theory and Hartree-Fock, we applied the formalism to ionization potentials, excitation energies, and chemical bond breaking. Example calculations for ionization potentials and excitation energies showed that accurate results could be obtained with a linear estimate. For breaking bonds, we introduce a nongeometrical parameter which gradually turns the interaction between two fragments of a molecule on. The interaction changes the potentials used to determine the orbitals as well as the constraint on the orbitals to be orthogonal.
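The single-point integration is easy to see in a toy example: for any E(lambda) that is at most quadratic in the parameter, the derivative evaluated at lambda = 1/2 integrates exactly to Delta E, which is why a "predominantly linear" dependence of the derivative suffices. This is a sketch of the integration idea only, not the authors' DFT implementation:

```python
def delta_e_single_point(dE_dlam, lam=0.5):
    """Single-point (midpoint) estimate of
    Delta E = integral_0^1 dE/dlambda dlambda.
    Exact whenever E(lambda) is at most quadratic, i.e. whenever the
    lambda-dependence of the derivative is linear."""
    return dE_dlam(lam)

# Toy E(lambda) = 2 + 3*lambda + 0.5*lambda^2 connecting an initial
# (lambda = 0) and final (lambda = 1) state: Delta E = E(1) - E(0) = 3.5.
dE = lambda lam: 3.0 + 1.0 * lam          # derivative of the toy E
estimate = delta_e_single_point(dE)       # midpoint rule: 3.0 + 0.5 = 3.5
```

Note the precision argument from the abstract: the estimate never touches the (large) total energies E(0) and E(1), only the (small) derivative, so no catastrophic cancellation occurs.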

  3. Quantification of Hepatic Steatosis With Dual-Energy Computed Tomography

    PubMed Central

    Artz, Nathan S.; Hines, Catherine D.G.; Brunner, Stephen T.; Agni, Rashmi M.; Kühn, Jens-Peter; Roldan-Alzate, Alejandro; Chen, Guang-Hong; Reeder, Scott B.

    2012-01-01

    Objective The aim of this study was to compare dual-energy computed tomography (DECT) and magnetic resonance imaging (MRI) for fat quantification using tissue triglyceride concentration and histology as references in an animal model of hepatic steatosis. Materials and Methods This animal study was approved by our institution's Research Animal Resource Center. After validation of DECT and MRI using a phantom consisting of different triglyceride concentrations, a leptin-deficient obese mouse model (ob/ob) was used for this study. Twenty mice were divided into 3 groups based on expected levels of hepatic steatosis: low (n = 6), medium (n = 7), and high (n = 7) fat. After MRI at 3 T, a DECT scan was immediately performed. The caudate lobe of the liver was harvested and analyzed for triglyceride concentration using a colorimetric assay. The left lateral lobe was also extracted for histology. Magnetic resonance imaging fat-fraction (FF) and DECT measurements (attenuation, fat density, and effective atomic number) were compared with triglycerides and histology. Results Phantom results demonstrated excellent correlation between triglyceride content and each of the MRI and DECT measurements (r2 ≥ 0.96, P ≤ 0.003). In vivo, however, excellent triglyceride correlation was observed only with attenuation (r2 = 0.89, P < 0.001) and MRI-FF (r2 = 0.92, P < 0.001). Strong correlation existed between attenuation and MRI-FF (r2 = 0.86, P < 0.001). Nonlinear correlation with histology was also excellent for attenuation and MRI-FF. Conclusions Dual-energy computed tomography (CT) data generated by the current Gemstone Spectral Imaging analysis tool do not improve the accuracy of fat quantification in the liver beyond what CT attenuation can already provide. Furthermore, MRI may provide an excellent reference standard for liver fat quantification when validating new CT or DECT methods in human subjects. PMID:22836309

  4. Flux Solitons Studied for Energy-Conserving Reversible Computing

    NASA Astrophysics Data System (ADS)

    Osborn, Kevin D.; Wustmann, Waltraut

    2015-03-01

    On-chip logic is desired for controlling superconducting qubits. Since qubits are very sensitive to photon field noise, it is desirable to develop an energy-conserving reversible logic, i.e. one which can compute without substantial energy dissipation or applied drive fields. With this goal in mind, simulations on discretized long Josephson junctions (DLJJs) have been performed, where the flux soliton is studied as a potential information carrier. Undriven soliton propagation is studied as a function of discreteness, dissipation, and uncertainty in the junction critical current. The perturbing parameters are low in the simulations such that the solitons fit well to an ideal Sine-Gordon soliton. Surprisingly, using realizable parameters a single flux soliton in a DLJJ is found to travel hundreds of Josephson penetration depths without backscattering in the absence of a driving force. In addition, even with a non-ideal launch, solitons are found to propagate predictably such that they show potential for synchronous routing into reversible logic gates.
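
For context, the ideal sine-Gordon kink against which the simulated solitons are fitted has the standard textbook closed form (general sine-Gordon theory, written in units of the Josephson penetration depth; this is not an equation quoted from the abstract):

```latex
\varphi(x,t) = 4\arctan\!\left[\exp\!\left(\frac{x - u t}{\sqrt{1 - u^2}}\right)\right],
\qquad |u| < 1,
```

where \varphi is the superconducting phase difference along the junction and u is the soliton velocity normalized to the characteristic (Swihart) velocity; the Lorentz-like factor \sqrt{1-u^2} contracts the soliton width at high speed.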

  5. Array files for computational chemistry: MP2 energies.

    PubMed

    Ford, Alan R; Janowski, Tomasz; Pulay, Peter

    2007-05-01

    A simple message-passing implementation for distributed disk storage, called array files (AF), is described. It is designed primarily for parallelizing computational chemistry applications but it should be useful for any application that handles large amounts of data stored on disk. AF allows transparent distributed storage and access of large data files. An AF consists of a set of logically related records, i.e., blocks of data. It is assumed that the records have the typical dimension of matrices in quantum chemical calculations, i.e., they range from 0.1 to approximately 32 MB in size. The individual records are not striped over nodes; each record is stored on a single node. As a simple application, second-order Møller-Plesset (MP2) energies have been implemented using AF. The AF implementation approaches the efficiency of the hand-coded program. MP2 is relatively simple to parallelize but for more complex applications, such as Coupled Cluster energies, the AF system greatly simplifies the programming effort. PMID:17299726
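
The whole-record (non-striped) placement described above can be sketched as follows. This is a hypothetical illustration of the placement idea, not the actual AF implementation; the class and method names are invented:

```python
# Minimal sketch of an array-file record-to-node mapping (hypothetical API):
# each record lives wholly on one node, assigned round-robin, rather than
# being striped across nodes as in many parallel file systems.

class ArrayFile:
    def __init__(self, n_nodes):
        self.n_nodes = n_nodes
        self.records = {}          # record index -> (node, data)

    def node_for(self, record_id):
        # Whole-record placement: record i resides on node i mod n_nodes.
        return record_id % self.n_nodes

    def put(self, record_id, data):
        self.records[record_id] = (self.node_for(record_id), data)

    def get(self, record_id):
        node, data = self.records[record_id]
        return node, data

af = ArrayFile(n_nodes=4)
for i in range(8):
    af.put(i, [float(i)] * 3)    # stand-in for a matrix block
print(af.get(5)[0])              # record 5 lives on node 1
```

Keeping each record on a single node means a quantum-chemistry kernel can fetch one matrix block with a single message, which is the access pattern MP2 integral transformations favor.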

  6. Orientation dependent size effects in single crystalline anisotropic nanoplates with regard to surface energy

    NASA Astrophysics Data System (ADS)

    Assadi, Abbas; Salehi, Manouchehr; Akhlaghi, Mehdi

    2015-07-01

    In this work, the size-dependent behavior of single crystalline normal and auxetic anisotropic nanoplates is discussed with consideration of material surface stresses via a generalized model. Bending of pressurized nanoplates and their fundamental resonant frequency are discussed for different crystallographic directions and anisotropy degrees. It is explained that the orientation effects are considerable when the nanoplates' edges are pinned, but for clamped nanoplates the anisotropy effect may be ignored. The size effects are highest when the simply supported nanoplates are parallel to the [110] direction, but as the anisotropy gets higher, the size effects are reduced. The orientation effect is also discussed for the possibility of self-instability in nanoplates. The results in simpler cases are compared with previous experiments for nanowires, but with a correction factor. There are still some open questions for future studies.

  7. A primer on the energy efficiency of computing

    SciTech Connect

    Koomey, Jonathan G.

    2015-03-30

    The efficiency of computing at peak output has increased rapidly since the dawn of the computer age. This paper summarizes some of the key factors affecting the efficiency of computing in all usage modes. While there is still great potential for improving the efficiency of computing devices, we will need to alter how we do computing in the next few decades because we are finally approaching the limits of current technologies.

  8. A primer on the energy efficiency of computing

    NASA Astrophysics Data System (ADS)

    Koomey, Jonathan G.

    2015-03-01

    The efficiency of computing at peak output has increased rapidly since the dawn of the computer age. This paper summarizes some of the key factors affecting the efficiency of computing in all usage modes. While there is still great potential for improving the efficiency of computing devices, we will need to alter how we do computing in the next few decades because we are finally approaching the limits of current technologies.

  9. Application of an object-oriented programming paradigm in three-dimensional computer modeling of mechanically active gastrointestinal tissues.

    PubMed

    Rashev, P Z; Mintchev, M P; Bowes, K L

    2000-09-01

    The aim of this study was to develop a novel three-dimensional (3-D) object-oriented modeling approach incorporating knowledge of the anatomy, electrophysiology, and mechanics of externally stimulated excitable gastrointestinal (GI) tissues and emphasizing the "stimulus-response" principle of extracting the modeling parameters. The modeling method used clusters of class hierarchies representing GI tissues from three perspectives: 1) anatomical; 2) electrophysiological; and 3) mechanical. We elaborated on the first four phases of the object-oriented system development life-cycle: 1) analysis; 2) design; 3) implementation; and 4) testing. Generalized cylinders were used for the implementation of 3-D tissue objects modeling the cecum, the descending colon, and the colonic circular smooth muscle tissue. The model was tested using external neural electrical tissue excitation of the descending colon with virtual implanted electrodes and the stimulating current density distributions over the modeled surfaces were calculated. Finally, the tissue deformations invoked by electrical stimulation were estimated and represented by a mesh-surface visualization technique. PMID:11026595
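
The three-perspective class-cluster idea can be sketched as below. This is our own illustration of the pattern, not the authors' code; all class names, parameters, and values are hypothetical:

```python
# Hypothetical sketch: one tissue object composes anatomical,
# electrophysiological, and mechanical views, mirroring the three
# class-hierarchy clusters described in the abstract.

class AnatomicalView:
    def __init__(self, shape):          # e.g., a generalized cylinder
        self.shape = shape

class ElectrophysiologicalView:
    def __init__(self, threshold_mA):   # excitation threshold (illustrative)
        self.threshold_mA = threshold_mA

class MechanicalView:
    def __init__(self, stiffness):      # passive tissue stiffness (illustrative)
        self.stiffness = stiffness

class GITissue:
    """One tissue object, three coupled perspectives."""
    def __init__(self, name, shape, threshold_mA, stiffness):
        self.name = name
        self.anatomy = AnatomicalView(shape)
        self.electro = ElectrophysiologicalView(threshold_mA)
        self.mechanics = MechanicalView(stiffness)

    def responds_to(self, stimulus_mA):
        # "Stimulus-response" principle: excitation above threshold
        # triggers a mechanical response.
        return stimulus_mA >= self.electro.threshold_mA

colon = GITissue("descending colon", "generalized cylinder", 4.0, 1.2)
print(colon.responds_to(5.0))   # True
```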

  10. Structural and orientation effects on electronic energy transfer between silicon quantum dots with dopants and with silver adsorbates

    SciTech Connect

    Vinson, N.; Freitag, H.; Micha, D. A.

    2014-06-28

    Starting from the atomic structure of silicon quantum dots (QDs), and utilizing ab initio electronic structure calculations within the Förster resonance energy transfer (FRET) treatment, a model has been developed to characterize electronic excitation energy transfer between QDs. Electronic energy transfer rates, K{sub EET}, between selected identical pairs of crystalline silicon quantum dots systems, either bare, doped with Al or P, or adsorbed with Ag and Ag{sub 3}, have been calculated and analyzed to extend previous work on light absorption by QDs. The effects of their size and relative orientation on energy transfer rates for each system have also been considered. Using time-dependent density functional theory and the hybrid functional HSE06, the FRET treatment was employed to model electronic energy transfer rates within the dipole-dipole interaction approximation. Calculations with adsorbed Ag show that: (a) addition of Ag increases rates up to 100 times, (b) addition of Ag{sub 3} increases rates up to 1000 times, (c) collinear alignment of permanent dipoles increases transfer rates by an order of magnitude compared to parallel orientation, and (d) smaller QD-size increases transfer due to greater electronic orbitals overlap. Calculations with dopants show that: (a) p-type and n-type dopants enhance energy transfer up to two orders of magnitude, (b) surface-doping with P and center-doping with Al show the greatest rates, and (c) K{sub EET} is largest for collinear permanent dipoles when the dopant is on the outer surface and for parallel permanent dipoles when the dopant is inside the QD.
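
The dipole-dipole approximation invoked here has the standard Förster form (textbook expression, not quoted from the paper); the orientation factor \kappa makes explicit why collinear and parallel dipole alignments give different rates:

```latex
K_{EET} \propto \frac{\kappa^2\,|\boldsymbol{\mu}_D|^2\,|\boldsymbol{\mu}_A|^2}{R^6},
\qquad
\kappa = \hat{\mu}_D\cdot\hat{\mu}_A
       - 3\,(\hat{\mu}_D\cdot\hat{R})\,(\hat{\mu}_A\cdot\hat{R}),
```

where R is the donor-acceptor separation; \kappa^2 = 4 for collinear dipoles versus \kappa^2 = 1 for parallel side-by-side dipoles, consistent with the faster collinear transfer reported above.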

  11. Role of laser energy density on growth of highly oriented topological insulator Bi2Se3 thin films

    NASA Astrophysics Data System (ADS)

    Chaturvedi, P.; Saha, B.; Saha, D.; Ganguly, S.

    2016-05-01

    Topological insulators (TIs) are very promising in the field of nanoelectronics due to their exotic properties. Bismuth selenide, a 3D topological insulator, is considered the reference TI owing to its simple band structure and large bandgap. However, the presence of unintentional doping, which masks the metallic surface states, is still a major concern. In this work, we report the effect of laser energy density on the growth of highly oriented and stoichiometric thin films of Bi2Se3 by pulsed laser deposition (PLD). Structural characterization by X-ray diffraction (XRD) and Raman spectroscopy confirms the c-axis orientation and good crystallinity of the films. An atomic force microscopy (AFM) study shows an increase in average grain size and rms roughness (from 3.1 nm to 5.1 nm) as the laser energy density decreases. A compositional study by X-ray reflectivity (XRR) measurement is found to be in agreement with the AFM results. Energy-dispersive x-ray spectroscopy (EDS) measurements confirm the desired stoichiometry of the samples.

  12. A study of the orientation and energy partition of three-jet events in hadronic Z{sup 0} decays

    SciTech Connect

    The SLD Collaboration

    1995-07-01

    Using hadronic Z{sup 0} decays collected in the SLD experiment at SLAC, we have measured the distributions of the jet energies in e{sup +}e{sup -}{yields} Z{sup 0}{yields} three-jet events and of the three orientation angles of the event plane. We find that these distributions are well described by perturbative QCD incorporating vector gluons. We have also compared our data with models of scalar and tensor gluon production, and discuss limits on the relative contributions of these particles to three-jet production in e{sup +}e{sup -} annihilation.

  13. Object-oriented design and implementation of CFDLab: a computer-assisted learning tool for fluid dynamics using dual reciprocity boundary element methodology

    NASA Astrophysics Data System (ADS)

    Friedrich, J.

    1999-08-01

    As lecturers, our main concern and goal is to develop more attractive and efficient ways of communicating up-to-date scientific knowledge to our students and to facilitate an in-depth understanding of physical phenomena. Computer-based instruction is very promising to help both teachers and learners in their difficult task, which involves complex cognitive psychological processes. This complexity is reflected in high demands on the design and implementation methods used to create computer-assisted learning (CAL) programs. Due to their concepts, flexibility, maintainability and extended library resources, object-oriented modeling techniques are very suitable for producing this type of pedagogical tool. Computational fluid dynamics (CFD) not only enjoys a growing importance in today's research, but is also very powerful for teaching and learning fluid dynamics. For this purpose, an educational PC program for university level called 'CFDLab 1.1' for Windows™ was developed with an interactive graphical user interface (GUI) for multitasking and point-and-click operations. It uses the dual reciprocity boundary element method as a versatile numerical scheme; thanks to its simple pre- and post-processing, it can handle a variety of relevant two-dimensional governing equations on personal computers, including the Laplace, Poisson, diffusion, and transient convection-diffusion equations.

  14. Exploring the controls of soil biogeochemistry in a restored coastal wetland using object-oriented computer simulations of uptake kinetics and thermodynamic optimization in batch reactors

    NASA Astrophysics Data System (ADS)

    Payn, R. A.; Helton, A. M.; Poole, G.; Izurieta, C.; Bernhardt, E. S.; Burgin, A. J.

    2012-12-01

    Many hypotheses have been proposed to predict patterns of biogeochemical redox reactions based on the availability of electron donors and acceptors and the thermodynamic theory of chemistry. Our objective was to develop a computer model that would allow us to test various alternatives of these hypotheses against data gathered from soil slurry batch reactors, experimental soil perfusion cores, and in situ soil profile observations from the restored Timberlake Wetland in coastal North Carolina, USA. Software requirements to meet this objective included the ability to rapidly develop and compare different hypothetical formulations of kinetic and thermodynamic theory, and the ability to easily change the list of potential biogeochemical reactions used in the optimization scheme. For future work, we also required an object pattern that could easily be coupled with an existing soil hydrologic model. These requirements were met using Network Exchange Objects (NEO), our recently developed object-oriented distributed modeling framework that facilitates simulations of multiple interacting currencies moving through network-based systems. An initial implementation of the object pattern was developed in NEO based on maximizing growth of the microbial community from available dissolved organic carbon. We then used this implementation to build a modeling system for comparing results across multiple simulated batch reactors with varied initial solute concentrations, varied biogeochemical parameters, or varied optimization schemes. Among heterotrophic aerobic and anaerobic reactions, we have found that this model reasonably predicts the use of terminal electron acceptors in simulated batch reactors, where reactions with higher energy yields occur before reactions with lower energy yields. However, among the aerobic reactions, we have also found this model predicts dominance of chemoautotrophs (e.g., nitrifiers) when their electron donor (e.g., ammonium) is abundant, despite the
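
The thermodynamic-ordering idea tested above can be sketched as follows. This is a hedged illustration, not the NEO implementation; the reaction names are standard, but the energy yields and the 1:1 donor/acceptor stoichiometry are simplifications chosen for the example:

```python
# Terminal electron acceptors consumed in order of decreasing free-energy
# yield (most negative dG first). Values are illustrative, not measured.

reactions = [
    ("aerobic respiration", "O2",  -501.0),
    ("denitrification",     "NO3", -476.0),
    ("sulfate reduction",   "SO4", -102.0),
]

def consume(donor_pool, acceptors, reactions):
    """Spend electron donor on reactions ordered by energy yield."""
    used = []
    for name, acceptor, dG in sorted(reactions, key=lambda r: r[2]):
        if donor_pool <= 0 or acceptors.get(acceptor, 0) <= 0:
            continue
        amount = min(donor_pool, acceptors[acceptor])  # simplistic 1:1 stoichiometry
        acceptors[acceptor] -= amount
        donor_pool -= amount
        used.append((name, amount))
    return used, donor_pool

order, leftover = consume(2.5, {"O2": 1.0, "NO3": 1.0, "SO4": 5.0}, reactions)
print(order)   # aerobic respiration exhausts O2, then NO3, then part of SO4
```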

  15. A stoichiometric calibration method for dual energy computed tomography.

    PubMed

    Bourque, Alexandra E; Carrier, Jean-François; Bouchard, Hugo

    2014-04-21

    The accuracy of radiotherapy dose calculation relies crucially on patient composition data. The computed tomography (CT) calibration methods based on the stoichiometric calibration of Schneider et al (1996 Phys. Med. Biol. 41 111-24) are the most reliable to determine electron density (ED) with commercial single energy CT scanners. Along with the recent developments in dual energy CT (DECT) commercial scanners, several methods were published to determine ED and the effective atomic number (EAN) for polyenergetic beams without the need for CT calibration curves. This paper intends to show that with a rigorous definition of the EAN, the stoichiometric calibration method can be successfully adapted to DECT with significant accuracy improvements with respect to the literature without the need for spectrum measurements or empirical beam hardening corrections. Using a theoretical framework of ICRP human tissue compositions and the XCOM photon cross sections database, the revised stoichiometric calibration method yields Hounsfield unit (HU) predictions within less than ±1.3 HU of the theoretical HU calculated from XCOM data averaged over the spectra used (e.g., 80 kVp, 100 kVp, 140 kVp and 140/Sn kVp). A fit of mean excitation energy (I-value) data as a function of EAN is provided in order to determine the ion stopping power of human tissues from ED-EAN measurements. Analysis of the calibration phantom measurements with the Siemens SOMATOM Definition Flash dual source CT scanner shows that the present formalism yields mean absolute errors of (0.3 ± 0.4)% and (1.6 ± 2.0)% on ED and EAN, respectively. For ion therapy, the mean absolute errors for calibrated I-values and proton stopping powers (216 MeV) are (4.1 ± 2.7)% and (0.5 ± 0.4)%, respectively. In all clinical situations studied, the uncertainties in ion ranges in water for therapeutic energies are found to be less than 1.3 mm, 0.7 mm and 0.5 mm for protons, helium and carbon ions respectively, using a
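
The kind of rigorous EAN definition the method depends on can be written with electron-fraction weights (standard stoichiometric form; the exponent m is fitted to the relevant photon-energy range and is not a value taken from the paper):

```latex
Z_{\text{eff}} = \left( \sum_i \lambda_i\, Z_i^{\,m} \right)^{1/m},
\qquad
\lambda_i = \frac{\omega_i Z_i / A_i}{\sum_j \omega_j Z_j / A_j},
```

where \omega_i, Z_i, and A_i are the mass fraction, atomic number, and atomic mass of element i.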

  16. A stoichiometric calibration method for dual energy computed tomography

    NASA Astrophysics Data System (ADS)

    Bourque, Alexandra E.; Carrier, Jean-François; Bouchard, Hugo

    2014-04-01

    The accuracy of radiotherapy dose calculation relies crucially on patient composition data. The computed tomography (CT) calibration methods based on the stoichiometric calibration of Schneider et al (1996 Phys. Med. Biol. 41 111-24) are the most reliable to determine electron density (ED) with commercial single energy CT scanners. Along with the recent developments in dual energy CT (DECT) commercial scanners, several methods were published to determine ED and the effective atomic number (EAN) for polyenergetic beams without the need for CT calibration curves. This paper intends to show that with a rigorous definition of the EAN, the stoichiometric calibration method can be successfully adapted to DECT with significant accuracy improvements with respect to the literature without the need for spectrum measurements or empirical beam hardening corrections. Using a theoretical framework of ICRP human tissue compositions and the XCOM photon cross sections database, the revised stoichiometric calibration method yields Hounsfield unit (HU) predictions within less than ±1.3 HU of the theoretical HU calculated from XCOM data averaged over the spectra used (e.g., 80 kVp, 100 kVp, 140 kVp and 140/Sn kVp). A fit of mean excitation energy (I-value) data as a function of EAN is provided in order to determine the ion stopping power of human tissues from ED-EAN measurements. Analysis of the calibration phantom measurements with the Siemens SOMATOM Definition Flash dual source CT scanner shows that the present formalism yields mean absolute errors of (0.3 ± 0.4)% and (1.6 ± 2.0)% on ED and EAN, respectively. For ion therapy, the mean absolute errors for calibrated I-values and proton stopping powers (216 MeV) are (4.1 ± 2.7)% and (0.5 ± 0.4)%, respectively. In all clinical situations studied, the uncertainties in ion ranges in water for therapeutic energies are found to be less than 1.3 mm, 0.7 mm and 0.5 mm for protons, helium and carbon ions respectively, using a generic

  17. Task-oriented training with computer gaming in people with rheumatoid arthritis or osteoarthritis of the hand: study protocol of a randomized controlled pilot trial

    PubMed Central

    2013-01-01

    Background Significant restriction in the ability to participate in home, work and community life results from pain, fatigue, joint damage, stiffness and reduced joint range of motion and muscle strength in people with rheumatoid arthritis or osteoarthritis of the hand. With modest evidence on the therapeutic effectiveness of conventional hand exercises, a task-oriented training program via real life object manipulations has been developed for people with arthritis. An innovative, computer-based gaming platform that allows a broad range of common objects to be seamlessly transformed into therapeutic input devices through instrumentation with a motion-sense mouse has also been designed. Personalized objects are selected to target specific training goals such as graded finger mobility, strength, endurance or fine/gross dexterous functions. The movements and object manipulation tasks that replicate common situations in everyday living will then be used to control and play any computer game, making practice challenging and engaging. Methods/Design The ongoing study is a 6-week, single-center, parallel-group, equally allocated and assessor-blinded pilot randomized controlled trial. Thirty people with rheumatoid arthritis or osteoarthritis affecting the hand will be randomized to receive either conventional hand exercises or the task-oriented training. The purpose is to determine a preliminary estimation of therapeutic effectiveness and feasibility of the task-oriented training program. Performance based and self-reported hand function, and exercise compliance are the study outcomes. Changes in outcomes (pre to post intervention) within each group will be assessed by paired Student t test or Wilcoxon signed-rank test and between groups (control versus experimental) post intervention using unpaired Student t test or Mann–Whitney U test. Discussion The study findings will inform decisions on the feasibility, safety and completion rate and will also provide preliminary

  18. Solar energy harvesting in the epicuticle of the oriental hornet (Vespa orientalis).

    PubMed

    Plotkin, Marian; Hod, Idan; Zaban, Arie; Boden, Stuart A; Bagnall, Darren M; Galushko, Dmitry; Bergman, David J

    2010-12-01

    The Oriental hornet worker correlates its digging activity with solar insolation. Solar radiation passes through the epicuticle, which exhibits a grating-like structure, and continues to pass through layers of the exo-endocuticle until it is absorbed by the pigment melanin in the brown-colored cuticle or xanthopterin in the yellow-colored cuticle. The correlation between digging activity and the ability of the cuticle to absorb part of the solar radiation implies that the Oriental hornet may harvest parts of the solar radiation. In this study, we explore this intriguing possibility by analyzing the biophysical properties of the cuticle. We use rigorous coupled wave analysis simulations to show that the cuticle surfaces are structured to reduce reflectance and act as diffraction gratings to trap light and increase the amount absorbed in the cuticle. A dye-sensitized solar cell (DSSC) was constructed in order to show the ability of xanthopterin to serve as a light-harvesting molecule. PMID:21052618

  19. Effect of the interplanetary magnetic field orientation and intensity in the mass and energy deposition on the Hermean surface

    NASA Astrophysics Data System (ADS)

    Varela, J.; Pantellini, F.; Moncuquet, M.

    2016-09-01

    The aim of the present study is to simulate the interaction between the solar wind and the Hermean magnetosphere. We use the MHD code PLUTO in spherical coordinates with an axisymmetric multipolar expansion of the Hermean magnetic field, to perform a set of simulations with different interplanetary magnetic field orientations and intensities. We fix the hydrodynamic parameters of the solar wind to study the distortions driven by the interplanetary magnetic field in the topology of the Hermean magnetosphere, leading to variations of the mass and energy deposition distributions, the integrated mass deposition, the oval aperture, the area covered by open magnetic field lines and the regions of efficient particle sputtering on the planet surface. The simulations show a correlation between the reconnection regions and the local maxima of plasma inflow and energy deposition on the planet surface.

  20. Final rotational state distributions from NO(vi = 11) in collisions with Au(111): the magnitude of vibrational energy transfer depends on orientation in molecule-surface collisions.

    PubMed

    Krüger, Bastian C; Bartels, Nils; Wodtke, Alec M; Schäfer, Tim

    2016-06-01

    When NO molecules collide at a Au(111) surface, their interaction is controlled by several factors; especially important are the molecules' orientation with respect to the surface (N-first vs. O-first) and their distance of closest approach. In fact, the former may control the latter as N-first orientations are attractive and O-first orientations are repulsive. In this work, we employ electric fields to control the molecules' incidence orientation in combination with rotational rainbow scattering detection. Specifically, we report final rotational state distributions of oriented NO(vi = 11) molecules scattered from Au(111) for final vibrational states between vf = 4 and 11. For O-first collisions, the interaction potential is highly repulsive preventing the close approach and scattering results in high-J rainbows. By contrast, these rainbows are not seen for the more intimate collisions possible for attractive N-first orientations. In this way, we reveal the influence of orientation and the distance of closest approach on vibrational relaxation of NO(vi = 11) in collisions with a Au(111) surface. We also elucidate the influence of steering forces which cause the O-first oriented molecules to rotate to an N-first orientation during their approach to the surface. The experiments show that when NO collides at the surface with the N-atom first, on average more than half of the initial vibrational energy is lost; whereas O-first oriented collisions lose much less vibrational energy. These observations qualitatively confirm theoretical predictions of electronically non-adiabatic NO interactions at Au(111). PMID:27193070

  1. Bessel Fourier Orientation Reconstruction (BFOR): An Analytical Diffusion Propagator Reconstruction for Hybrid Diffusion Imaging and Computation of q-Space Indices

    PubMed Central

    Hosseinbor, A. Pasha; Chung, Moo K.; Wu, Yu-Chien; Alexander, Andrew L.

    2012-01-01

    The ensemble average propagator (EAP) describes the 3D average diffusion process of water molecules, capturing both its radial and angular contents. The EAP can thus provide richer information about complex tissue microstructure properties than the orientation distribution function (ODF), an angular feature of the EAP. Recently, several analytical EAP reconstruction schemes for multiple q-shell acquisitions have been proposed, such as diffusion propagator imaging (DPI) and spherical polar Fourier imaging (SPFI). In this study, a new analytical EAP reconstruction method is proposed, called Bessel Fourier orientation reconstruction (BFOR), whose solution is based on heat equation estimation of the diffusion signal for each shell acquisition, and is validated on both synthetic and real datasets. A significant portion of the paper is dedicated to comparing BFOR, SPFI, and DPI using hybrid, non-Cartesian sampling for multiple b-value acquisitions. Ways to mitigate the effects of Gibbs ringing on EAP reconstruction are also explored. In addition to analytical EAP reconstruction, the aforementioned modeling bases can be used to obtain rotationally invariant q-space indices of potential clinical value, an avenue which has not yet been thoroughly explored. Three such measures are computed: zero-displacement probability (Po), mean squared displacement (MSD), and generalized fractional anisotropy (GFA). PMID:22963853

  2. An Energy-Efficient, Application-Oriented Control Algorithm for MAC Protocols in WSN

    NASA Astrophysics Data System (ADS)

    Li, Deliang; Peng, Fei; Qian, Depei

    Energy efficiency has been a main concern in wireless sensor networks, where the Medium Access Control (MAC) protocol plays an important role. However, current MAC protocols designed for energy saving have seldom considered multiple applications coexisting in a WSN with varying traffic load dynamics and different QoS requirements. In this paper, we propose an adaptive control algorithm at the MAC layer to promote energy efficiency. We focus on the tradeoff between collisions and control overhead as a reflection of traffic load, and propose to balance this tradeoff under the constraints of QoS options. We integrate the algorithm into S-MAC and verify it on the NS-2 platform. The results demonstrate that the algorithm achieves an observable improvement in energy performance while meeting the QoS requirements of different coexisting applications, in comparison with S-MAC.
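
The collision/overhead balancing idea can be sketched with a simple contention-window controller. This is our own illustration of the tradeoff, not the authors' algorithm; the window bounds stand in for the QoS constraint and all values are assumed:

```python
# Grow the contention window when collisions dominate (reduce collision
# energy waste); shrink it when control overhead dominates (reduce idle
# listening), within QoS-motivated bounds.

CW_MIN, CW_MAX = 16, 1024

def adapt_cw(cw, collision_rate, overhead_rate):
    if collision_rate > overhead_rate:
        cw = min(cw * 2, CW_MAX)      # back off: too many collisions
    elif overhead_rate > collision_rate:
        cw = max(cw // 2, CW_MIN)     # tighten: control cost dominates
    return cw

cw = 64
cw = adapt_cw(cw, collision_rate=0.30, overhead_rate=0.10)   # -> 128
cw = adapt_cw(cw, collision_rate=0.05, overhead_rate=0.20)   # -> 64
print(cw)
```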

  3. Saving Energy and Money: A Lesson in Computer Power Management

    ERIC Educational Resources Information Center

    Lazaros, Edward J.; Hua, David

    2012-01-01

    In this activity, students will develop an understanding of the economic impact of technology by estimating the cost savings of power management strategies in the classroom. Students will learn how to adjust computer display settings to influence the impact that the computer has on the financial burden to the school. They will use mathematics to…
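
The cost-savings arithmetic the lesson builds on can be illustrated as below. The wattages, hours, and electricity price are assumed numbers for the example, not figures from the lesson:

```python
# Annual energy and cost saved by letting a classroom of computers sleep
# instead of idling. All inputs are illustrative assumptions.

def annual_savings(n_computers, idle_watts, sleep_watts,
                   idle_hours_per_day, school_days, price_per_kwh):
    delta_kw = (idle_watts - sleep_watts) / 1000.0       # kW saved per machine
    kwh = delta_kw * idle_hours_per_day * school_days * n_computers
    return kwh, kwh * price_per_kwh

kwh, dollars = annual_savings(30, 60.0, 2.0, 4.0, 180, 0.12)
print(f"{kwh:.0f} kWh, ${dollars:.2f}")   # 1253 kWh, $150.34
```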

  4. 1988 International Conference on Computer Processing of Chinese and Oriental Languages, Toronto, Canada, Aug. 29-Sept. 1, 1988, Proceedings

    SciTech Connect

    Not Available

    1988-01-01

    Papers on the technologies and applications in computer processing of Chinese and East Asian languages are presented, including papers on character recognition, input and output, natural language, and speech recognition and intelligent systems. Specific topics include a Chinese expert system tool for meteorological forecasting, keyboard designs for Chinese character entry, fuzzy recognition of characters, methods for on-line handwritten character recognition, Chinese word processing programs, electronic dictionaries, Japanese and Chinese text generation, and methods for Mandarin syllable and consonant recognition. Additional topics include classification of Chinese characters by radicals, Japanese document readers, character recognition by stroke order codes, clustering of machine-printed characters, Chinese language indexing systems, a neural network approach for Chinese information retrieval, writing tools for Japanese documents on a PC, speech recognition of Cantonese, and a data base retrieval system for technical periodicals.

  5. Object-Oriented Plant Species Classification for Estimating Energy Balance of Evapotranspiration

    NASA Astrophysics Data System (ADS)

    Mariotto, I.; Gutschick, V. P.

    2012-12-01

    Remote-sensing (RS) measurements of evapotranspiration (ET) require accurate estimates of surface roughness and, hence, of plant cover and height. RS imagery at spatial resolutions coarser than that of individual plants (trees, shrubs, grass patches) yields low accuracy in such roughness estimates in heterogeneous terrain, even with inverse modeling of multiangle and multispectral imagery (such imagery is also commonly costly and of low temporal and spatial coverage). A solution is imagery with high spatial resolution, such as low-altitude photography obtained with, e.g., unmanned aerial systems. While these measurements must be performed in campaigns that are inherently limited in total area covered, they offer excellent ground-truthing. In such a campaign, we have performed ground identifications of all major species, including height and crown area, on a 300 m by 300 m area of desert grassland (Jornada Experimental Range near Las Cruces, NM, USA). We obtained aerial imagery which was then processed to species identifications via object-oriented classification, using automated feature extraction software (Feature Analyst in ArcGIS). Plant species were classified with an overall accuracy of 80.4%. Linear regressions of plant height on plant diameter for each major species yielded mean canopy height over an arbitrary image pixel. Over and above improving the accuracy of ET estimates, the study offers ecological information on community structure and on its possible relation to the spatial distributions of ET and total water availability.

  6. OpenGeoSys: Performance-Oriented Computational Methods for Numerical Modeling of Flow in Large Hydrogeological Systems

    NASA Astrophysics Data System (ADS)

    Naumov, D.; Fischer, T.; Böttcher, N.; Watanabe, N.; Walther, M.; Rink, K.; Bilke, L.; Shao, H.; Kolditz, O.

    2014-12-01

    OpenGeoSys (OGS) is a scientific open source code for numerical simulation of thermo-hydro-mechanical-chemical processes in porous and fractured media. Its basic concept is to provide a flexible numerical framework for solving multi-field problems in geoscience and hydrology, e.g., CO2 storage, geothermal power plant forecast simulation, salt water intrusion, and water resources management. Advances in computational mathematics have revolutionized the variety and nature of the problems that environmental scientists and engineers can address today, and intensive code development in recent years now enables the solution of much larger numerical problems and applications. However, solving environmental processes along the water cycle at large scales, such as for complete catchments or reservoirs, remains a computationally challenging task. We have therefore started a new OGS code development with a focus on execution speed and parallelization. In the new version, a local data structure concept improves instruction and data cache performance by tightly bundling data with an element-wise numerical integration loop. Dedicated analysis methods enable the investigation of memory-access patterns in the local and global assembler routines, which leads to further data structure optimization for an additional performance gain. The concept is presented together with a technical code analysis of the recent development and a large case study including transient flow simulation in the unsaturated/saturated zone of the Thuringian Syncline, Germany. The analysis is performed on a high-resolution mesh (up to 50M elements) with embedded fault structures.

  7. Surface-Parallel Sensor Orientation for Assessing Energy Balance Components on Mountain Slopes

    NASA Astrophysics Data System (ADS)

    Serrano-Ortiz, P.; Sánchez-Cañete, E. P.; Olmo, F. J.; Metzger, S.; Pérez-Priego, O.; Carrara, A.; Alados-Arboledas, L.; Kowalski, A. S.

    2016-03-01

    The consistency of eddy-covariance measurements is often evaluated in terms of the degree of energy balance closure. Even over sloping terrain, instrumentation for measuring energy balance components is commonly installed horizontally, i.e. perpendicular to the geo-potential gradient. Subsequently, turbulent fluxes of sensible and latent heat are rotated perpendicular to the mean streamlines using tilt-correction algorithms. However, net radiation (Rn) and soil heat fluxes (G) are treated differently, and typically only Rn is corrected to account for slope. With an applied case study, we show and argue several advantages of installing sensors surface-parallel to measure surface-normal Rn and G. For a 17 % south-west-facing slope, our results show that horizontal installation results in hysteresis in the energy balance closure and errors of up to 25 %. Finally, we propose an approximation to estimate the surface-normal Rn when only vertical Rn measurements are available.

  8. The updated algorithm of the Energy Consumption Program (ECP): A computer model simulating heating and cooling energy loads in buildings

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.; Strain, D. M.; Chai, V. W.; Higgins, S.

    1979-01-01

    The Energy Consumption Computer Program was developed to simulate building heating and cooling loads and to compute thermal and electric energy consumption and cost. This article reports on the new algorithms and modifications made in an effort to widen the areas of application. The program structure was rewritten accordingly to refine and advance the building model and to further reduce processing time and cost. The program is noted for its very low cost and ease of use compared to other available codes. The accuracy of computations is not sacrificed, however, since the results are expected to lie within ±10% of actual energy meter readings.

  9. Effectiveness of Conceptual Change Text-Oriented Instruction on Students' Understanding of Energy in Chemical Reactions

    ERIC Educational Resources Information Center

    Tastan, Ozgecan; Yalcinkaya, Eylem; Boz, Yezdan

    2008-01-01

    The aim of this study is to compare the effectiveness of conceptual change text instruction (CCT) in the context of energy in chemical reactions. The subjects of the study were 60 10th-grade students at a high school, who were in two different classes and taught by the same teacher. One of the classes was randomly selected as the experimental…

  10. National Energy Research Scientific Computing Center 2007 Annual Report

    SciTech Connect

    Hules, John A.; Bashor, Jon; Wang, Ucilia; Yarris, Lynn; Preuss, Paul

    2008-10-23

    This report presents highlights of the research conducted on NERSC computers in a variety of scientific disciplines during the year 2007. It also reports on changes and upgrades to NERSC's systems and services as well as activities of NERSC staff.

  11. Providing a computing environment for a high energy physics workshop

    SciTech Connect

    Andrews, C.; Butler, J.; Carter, T.; DeMar, P.; Fagan, D.; Gibbons, R.; Grigaliunas, V.; Haibeck, M.; Haring, P.; Horvath, C.; Hughart, N.; Johnstad, H.; Jones, S.; Kreymer, A.; LeBrun, P.; Lego, A.; Leninger, M.; Loebel, L.; McNamara, S.; Nguyen, T.; Nicholls, J.; O'Reilly, C.; Pabrai, U.; Pfister, J.; Ritchie, D.; Roberts, L.; Sazama, C.; Wohlt, D. ); Carven, R. (Wiscons

    1989-12-01

    Although computing facilities have been provided at conferences and workshops remote from the host institution for some years, the equipment provided has rarely been capable of providing for much more than simple editing and electronic mail. This report documents the effort involved in providing a local computing facility with world-wide networking capability for a physics workshop so that we and others can benefit from the knowledge gained through the experience.

  12. A comparison of the growth modes of (100)- and (110)-oriented CrO2 films through the calculation of surface and interface energies

    NASA Astrophysics Data System (ADS)

    Chetry, K. B.; Sims, H.; Butler, W. H.; Gupta, A.

    2011-12-01

    The mechanism leading to different growth modes of (100)- and (110)-oriented CrO2 films on a TiO2 substrate has been investigated using first-principles calculations based on density functional theory (DFT). The surface energies of (100)- and (110)-oriented CrO2 and TiO2 structures were calculated within a three-dimensional slab model. The convergence of the surface energy was studied with respect to the interslab vacuum distance and the thickness of the slab. A sandwich geometry was used to study the interface energy between CrO2 and TiO2. These results shed light on published experimental results on the epitaxial growth of CrO2 on (100)- and (110)-oriented TiO2 substrates.

  13. Effects of excluded volume and correlated molecular orientations on Förster resonance energy transfer in liquid water

    SciTech Connect

    Yang, Mino

    2014-04-14

    Förster theory for the survival probability of excited chromophores is generalized to include the effects of excluded volume and orientation correlation in the molecular distribution. An analytical expression for survival probability was derived and written in terms of a few simple elementary functions. Because of the excluded volume, the survival probability exhibits exponential decay at early times and stretched exponential decay at later times. Experimental schemes to determine the size of the molecular excluded volume are suggested. With the present generalization of theory, we analyzed vibrational resonance energy transfer kinetics in neat water. Excluded volume effects prove to be important and slow down the kinetics at early times. The majority of intermolecular resonance energy transfer was found to occur with exponential kinetics, as opposed to the stretched exponential behavior predicted by Förster theory. Quantum yields of intra-molecular vibrational relaxation, intra-, and intermolecular energy transfer were calculated to be 0.413, 0.167, and 0.420, respectively.

  14. Effectiveness of Conceptual Change Text-oriented Instruction on Students' Understanding of Energy in Chemical Reactions

    NASA Astrophysics Data System (ADS)

    Taştan, Özgecan; Yalçınkaya, Eylem; Boz, Yezdan

    2008-10-01

    The aim of this study is to compare the effectiveness of conceptual change text instruction (CCT) with that of traditional instruction in the context of energy in chemical reactions. The subjects of the study were 60 10th-grade students at a high school, who were in two different classes and taught by the same teacher. One of the classes was randomly selected as the experimental group, in which CCT instruction was applied, and the other as the control group, in which the traditional teaching method was used. The data were obtained through the use of the Energy Concept Test (ECT), the Attitude Scale towards Chemistry (ASC) and the Science Process Skill Test (SPST). In order to find out the effect of the conceptual change text on students' learning of the energy concept, independent-sample t-tests, ANCOVA (analysis of covariance) and ANOVA (analysis of variance) were used. Results revealed that there was a statistically significant mean difference between the experimental and control groups in terms of students' ECT total mean scores; however, there was no statistically significant difference between the groups in terms of students' attitudes towards chemistry. These findings suggest that conceptual change text instruction enhances students' understanding and achievement.

  15. Energy Efficient Biomolecular Simulations with FPGA-based Reconfigurable Computing

    SciTech Connect

    Hampton, Scott S; Agarwal, Pratul K

    2010-05-01

    Reconfigurable computing (RC) is being investigated as a hardware solution for improving time-to-solution for biomolecular simulations. A number of popular molecular dynamics (MD) codes are used to study various aspects of biomolecules. These codes are now capable of simulating nanosecond time-scale trajectories per day on conventional microprocessor-based hardware, but biomolecular processes often occur at the microsecond time-scale or longer. A wide gap exists between the desired and achievable simulation capability; therefore, there is considerable interest in alternative algorithms and hardware for improving the time-to-solution of MD codes. The fine-grain parallelism provided by Field Programmable Gate Arrays (FPGA) combined with their low power consumption make them an attractive solution for improving the performance of MD simulations. In this work, we use an FPGA-based coprocessor to accelerate the compute-intensive calculations of LAMMPS, a popular MD code, achieving up to 5.5-fold speed-up on the non-bonded force computations of the particle mesh Ewald method and up to 2.2-fold speed-up in overall time-to-solution, and potentially an increase by a factor of 9 in power-performance efficiencies for the pair-wise computations. The results presented here provide an example of the multi-faceted benefits to an application in a heterogeneous computing environment.

  16. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    SciTech Connect

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  17. Grain-oriented segmentation of images of porous structures using ray casting and curvature energy minimization.

    PubMed

    Lee, H-G; Choi, M-K; Lee, S-C

    2015-02-01

    We segment an image of a porous structure by successively identifying individual grains, using a process that requires no manual initialization. Adaptive thresholding is used to extract an incomplete edge map from the image. Then, seed points are created on a rectangular grid. Rays are cast from each point to identify the local grain. The grain with the best shape is selected by energy minimization, and the grain is used to update the edge map. This is repeated until all the grains have been recognized. Tests on scanning electron microscope images of titanium oxide and aluminium oxide show that the process achieves better results than five other contour detection techniques. PMID:25430498

  18. Large Scale Computing and Storage Requirements for Fusion Energy Sciences: Target 2017

    SciTech Connect

    Gerber, Richard

    2014-05-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,500 users working on some 650 projects that involve nearly 600 codes in a wide variety of scientific disciplines. In March 2013, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Fusion Energy Sciences (FES) held a review to characterize High Performance Computing (HPC) and storage requirements for FES research through 2017. This report is the result.

  19. Lattice computations for high energy and nuclear physics

    NASA Astrophysics Data System (ADS)

    Jansen, K.

    2013-08-01

    An overview is given on present lattice field theory computations. We demonstrate the progress obtained in the field due to algorithmic, conceptual and supercomputer advances. We discuss as particular examples Higgs boson mass bounds in lattice Higgs-Yukawa models and the baryon spectrum, the anomalous magnetic moment of the muon and nuclear physics for lattice QCD. We emphasize a number of major challenges lattice field theory is still facing and estimate the computational cost for simulations at physical values of the pion mass.

  20. PNNL Data-Intensive Computing for a Smarter Energy Grid

    ScienceCinema

    Carol Imhoff; Zhenyu (Henry) Huang; Daniel Chavarria

    2012-12-31

    The Middleware for Data-Intensive Computing (MeDICi) Integration Framework, an integrated platform to solve data analysis and processing needs, supports PNNL research on the U.S. electric power grid. MeDICi is enabling the development of visualizations of grid operations and vulnerabilities, with the goal of near-real-time analysis to aid operators in preventing and mitigating grid failures.

  1. An accurate and efficient computation method of the hydration free energy of a large, complex molecule

    NASA Astrophysics Data System (ADS)

    Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori

    2015-05-01

    The hydration free energy (HFE) is a crucially important physical quantity for discussing various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, a huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as the sum of half the ensemble-averaged solute-water pair interaction energy and the water reorganization term, which mainly reflects the excluded-volume effect. Since the former term can readily be computed through an MD simulation of the system composed of solute and water, an efficient computation of the latter term leads to a reduction of the computational load. We demonstrate that the water reorganization term can be calculated quantitatively using the morphometric approach (MA), which expresses the term as a linear combination of four geometric measures of the solute, with the corresponding coefficients determined by the energy representation (ER) method. Since the MA enables us to finish the computation of the water reorganization term in less than 0.1 s once the coefficients are determined, its use provides an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method with a substantial reduction of the computational load.
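    The decomposition described in this abstract can be sketched in a few lines: half the averaged solute-water pair interaction energy plus a morphometric linear combination over four geometric measures. The function name, measure values and coefficients below are hypothetical placeholders, not values from the paper (whose coefficients come from an ER fit):

```python
def hydration_free_energy(e_uv_avg, measures, coeffs):
    """HFE = <E_uv>/2 + water-reorganization term, where the
    reorganization term is the morphometric linear combination
    sum_i c_i * m_i over four geometric measures of the solute."""
    if len(measures) != 4 or len(coeffs) != 4:
        raise ValueError("the morphometric approach uses four measures")
    reorg = sum(c * m for c, m in zip(coeffs, measures))
    return e_uv_avg / 2.0 + reorg

# Illustrative placeholder numbers (not results from the paper):
hfe = hydration_free_energy(
    e_uv_avg=-200.0,                        # ensemble-averaged pair energy, kcal/mol
    measures=(5000.0, 1800.0, 120.0, 4.0),  # four geometric measures of the solute
    coeffs=(0.01, 0.02, -0.1, 1.5),         # hypothetical ER-fitted coefficients
)
print(f"HFE = {hfe:.1f} kcal/mol")
```

    Once the four coefficients are fixed, evaluating this sum is trivially cheap, which is the point of the approach: the expensive part is reduced to a single MD average.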

  2. Computer Reduction Of Aerial Thermograms For Large Scale Energy Audits

    NASA Astrophysics Data System (ADS)

    Hazard, William R.

    1981-01-01

    A 32 kilobyte microcomputer is used for merging radiant (IR) temperatures of roof sections and building enclosures with meteorological data to produce per unit Energy Intensity Factors (EIFs) that are required for Comprehensive Energy planning. The EIFs can also be used as building blocks for a low cost RCS-type energy audit that has been shown to approximate the DOE model audit in terms of accuracy and completeness. The Type I or "Interactive Energy Audit" utilizes EIFs that are calculated from diffuse density levels of aerial IR recordings, supplemented by resident-supplied information concerning structural characteristics of a house and energy life-style of its occupants. Results of a statistical comparison between ASHRAE-based and IR audits of 175 single family homes in Garland, Texas show that, on the average, the aerial based heat loss estimates fall within a 10 percent error envelope around the true BTUH losses 90 percent of the time. The combination of an aerial infrared picture and an Interactive Energy Audit print-out have proven effective in (a) providing homeowners with the information they want from an energy audit; (b) persuading them to take appropriate remedial weatherization actions, and (c) screening out the homes that do not need a Class A audit, thereby eliminating the cost and bother of an on-site inspection.

  3. Computation of Adsorption Energies of Some Interstellar Species

    NASA Astrophysics Data System (ADS)

    Sil, Milan; Chakrabarti, Sandip Kumar; Das, Ankan; Majumdar, Liton; Gorai, Prasanta; Etim, Emmanuel; Arunan, Elangannan

    2016-07-01

    Adsorption energies of surface species are crucial for the chemical complexity of interstellar grain mantles. The aim of this work is to study the variation of adsorption energies depending upon the nature of the adsorbent. We use silicate and carbonaceous grains as the adsorbents. For silicate grains, we use very simple crystalline ones, namely Enstatite (MgSiO_3)_n, Ferrosilite (FeSiO_3)_n, Forsterite (Mg_2SiO_4)_n and Fayalite (Fe_2SiO_4)_n. We use n = 1, 2, 4, 8 to study the variation of adsorption energies with increasing cluster size. For the carbonaceous grain, we use Coronene (a polycyclic aromatic hydrocarbon surface). The adsorption energies of all these species are calculated by means of quantum chemical calculations using self-consistent density functional theory (DFT). The MPWB1K hybrid meta-functional is employed, since it has proven useful for systems with weak interactions such as van der Waals interactions. Optimizations are carried out with MPWB1K/6-311g(d) and MPWB1K/6-311g(d,p), and a comparison of adsorption energies for these two basis sets is discussed. We use the crystalline structure of the adsorbent. The adsorbate is placed at different sites of the grain at a suitable distance. The adsorption energy of a species on the grain surface is defined as E_ads = E_ss - (E_surface + E_species), where E_ads is the adsorption energy, E_ss is the optimized energy of the species placed at a suitable distance from the grain surface, and E_surface and E_species are the optimized energies of the surface and the species, respectively.
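    The adsorption-energy definition in this abstract reduces to a simple difference of total energies. As a minimal sketch (the numeric energies below are made-up placeholders, not DFT results from this work):

```python
def adsorption_energy(e_ss, e_surface, e_species):
    """Adsorption energy as defined in the abstract:
    E_ads = E_ss - (E_surface + E_species).
    A negative value indicates favorable (exothermic) adsorption."""
    return e_ss - (e_surface + e_species)

# Placeholder total energies in eV (illustrative only, not DFT results):
e_ads = adsorption_energy(e_ss=-1250.75, e_surface=-1200.10, e_species=-50.25)
print(f"E_ads = {e_ads:.2f} eV")
```

    The sign convention matters: because all three inputs are total energies of optimized structures, a bound adsorbate makes the combined system lower in energy than its separated parts, giving E_ads < 0.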

  4. A Cross-Cultural Study of the Effect of a Graph-Oriented Computer-Assisted Project-Based Learning Environment on Middle School Students' Science Knowledge and Argumentation Skills

    ERIC Educational Resources Information Center

    Hsu, P.-S.; Van Dyke, M.; Chen, Y.; Smith, T. J.

    2016-01-01

    The purpose of this mixed-methods study was to explore how seventh graders in a suburban school in the United States and sixth graders in an urban school in Taiwan developed argumentation skills and science knowledge in a project-based learning environment that incorporated a graph-oriented, computer-assisted application (GOCAA). A total of 42…

  5. Career Oriented Mathematics, Teacher's Manual. [Includes Mastering Computational Skill: A Use-Based Program; Owning an Automobile and Driving as a Career; Retail Sales; Measurement; and Area-Perimeter.

    ERIC Educational Resources Information Center

    Mahaffey, Michael L.; McKillip, William D.

    This manual is designed for teachers using the Career Oriented Mathematics units on owning an automobile and driving as a career, retail sales, measurement, and area-perimeter. The volume begins with a discussion of the philosophy and scheduling of the program which is designed to improve students' attitudes and ability in computation by…

  6. Computation of hyperfine energies of hydrogen, deuterium and tritium quantum dots

    NASA Astrophysics Data System (ADS)

    Çakır, Bekir; Özmen, Ayhan; Yakar, Yusuf

    2016-01-01

    The hyperfine energies and hyperfine constants of the ground and excited states of hydrogen, deuterium and tritium quantum dots (QDs) are calculated. Quantum genetic algorithm (QGA) and Hartree-Fock-Roothaan (HFR) methods are employed to calculate the unperturbed wave functions and energy eigenvalues. The results show that in the medium and strong confinement regions the hyperfine energy and hyperfine constant are strongly affected by dot radius, impurity charge, electron spin orientation, impurity spin and impurity magnetic moment. Besides, at all dot radii, the hyperfine splittings and hyperfine constants of the confined hydrogen and tritium atoms are approximately equal to each other, and they are greater than those of the confined deuterium atom.

  7. Computational Design of 2D materials for Energy Applications

    NASA Astrophysics Data System (ADS)

    Sun, Qiang

    2015-03-01

    Since the successful synthesis of graphene, tremendous efforts have been devoted to two-dimensional monolayers such as boron nitride (BN), silicene and MoS2. These 2D materials exhibit a large variety of physical and chemical properties with unprecedented applications. Here we report our recent studies of computational design of 2D materials for fuel cell applications which include hydrogen storage, CO2 capture, CO conversion and O2 reduction.

  8. Orientation dependant charge transfer at fullerene/Zn-phthalocyanine (C60/ZnPc) interface: Implications for energy level alignment and photovoltaic properties

    NASA Astrophysics Data System (ADS)

    Javaid, Saqib; Javed Akhtar, M.

    2016-08-01

    Recently, experimental results have shown that the photovoltaic properties of fullerene (C60)/phthalocyanine based devices improve considerably as the molecular orientation is changed from edge-on to face-on. In this work, we have studied the impact of molecular orientation on C60/ZnPc interfacial properties, focusing particularly on the experimentally observed face-on and edge-on configurations, using density functional theory based simulations. The results show that the interfacial electronic properties are strongly anisotropic: the direction of charge transfer and the interface dipole change as the molecular orientation is switched. As a result of the orientation-dependent interface dipole, the difference between the acceptor LUMO and the donor HOMO increases as the orientation is changed from edge-on to face-on, suggesting a consequent increase in open-circuit voltage (VOC). Moreover, adsorption and electronic properties indicate that the interfacial interactions are much stronger in the face-on configuration, which should further facilitate the charge-separation process. These findings elucidate the energy level alignment at the C60/ZnPc interface and identify the interface dipole as the origin of the orientation dependence of VOC.

  9. High Energy Physics Computer Networking: Report of the HEPNET Review Committee

    SciTech Connect

    Not Available

    1988-06-01

    This paper discusses the computer networks available to high energy physics facilities for transmission of data. Topics covered in this paper are: Existing and planned networks and HEPNET requirements. (LSP)

  10. Computing and Systems Applied in Support of Coordinated Energy, Environmental, and Climate Planning

    EPA Science Inventory

    This talk focuses on how Dr. Loughlin is applying Computing and Systems models, tools and methods to more fully understand the linkages among energy systems, environmental quality, and climate change. Dr. Loughlin will highlight recent and ongoing research activities, including: ...

  11. Development of a Learning-Oriented Computer Assisted Instruction Designed to Improve Skills in the Clinical Assessment of the Nutritional Status: A Pilot Evaluation

    PubMed Central

    García de Diego, Laura; Cuervo, Marta; Martínez, J. Alfredo

    2015-01-01

    Computer assisted instruction (CAI) is an effective tool for evaluating and training students and professionals. In this article we present a learning-oriented CAI, which has been developed for students and health professionals to acquire and retain new knowledge through practice. A two-phase pilot evaluation was conducted, involving 8 nutrition experts and 30 postgraduate students, respectively. In each training session, the software guides users through an integral evaluation of a patient's nutritional status and helps them to implement actions. The program incorporates clinical tools that can be used to recognize a patient's possible needs, improve clinical reasoning and develop professional skills. Among them are assessment questionnaires and evaluation criteria, cardiovascular risk charts, clinical guidelines and photographs of various diseases. This CAI is a complete software package, easy to use and versatile, aimed at clinical specialists, medical staff, scientists, educators and clinical students, and can be used as a learning tool. This application constitutes an advanced method for students and health professionals to accomplish nutritional assessments, combining theoretical and empirical issues that can be implemented in their academic curriculum. PMID:25978456

  12. Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing

    PubMed Central

    Palmer, Tim N.; O’Shea, Michael

    2015-01-01

    How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete. PMID:26528173

  13. Computational Research Challenges and Opportunities for the Optimization of Fossil Energy Power Generation System

    SciTech Connect

    Zitney, S.E.

    2007-06-01

    Emerging fossil energy power generation systems must operate with unprecedented efficiency and near-zero emissions, while optimizing profitably amid cost fluctuations for raw materials, finished products, and energy. To help address these challenges, the fossil energy industry will have to rely increasingly on the use of advanced computational tools for modeling and simulating complex process systems. In this paper, we present the computational research challenges and opportunities for the optimization of fossil energy power generation systems across the plant lifecycle, from process synthesis and design to plant operations. We also look beyond the plant gates to discuss research challenges and opportunities for enterprise-wide optimization, including planning, scheduling, and supply chain technologies.

  14. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    ERIC Educational Resources Information Center

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. The method for calculating chemical equilibrium, the computer program used to solve equilibrium problems, and applications of the method are also included. (HM)
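    As a minimal illustration of the idea in this record (not the program it describes), the sketch below finds the equilibrium of the textbook dissociation N2O4 ⇌ 2 NO2 by scanning the total Gibbs energy over the reaction extent, with the material balance built into the mole numbers. The standard reaction Gibbs energy is a rounded textbook value at 298 K, used here only for illustration:

```python
import math

R = 8.314  # molar gas constant, J/(mol·K)

def gibbs(x, T=298.15, dG0=4730.0):
    """Relative total Gibbs energy for N2O4 <-> 2 NO2 at 1 bar as a
    function of the reaction extent x (mol of N2O4 reacted, starting
    from 1 mol N2O4). dG0 is the standard reaction Gibbs energy in
    J/mol (rounded textbook value). Mole numbers encode the material
    balance; ideal-gas mixing terms use mole fractions n/n_tot."""
    n_n2o4, n_no2 = 1.0 - x, 2.0 * x
    n_tot = n_n2o4 + n_no2
    g = x * dG0  # standard-state contribution relative to x = 0
    for n in (n_n2o4, n_no2):
        if n > 0:
            g += R * T * n * math.log(n / n_tot)  # mixing term
    return g

# Locate the minimum by a simple scan over the physically allowed extents:
xs = [i / 10000 for i in range(1, 10000)]
x_eq = min(xs, key=gibbs)
print(f"equilibrium extent x = {x_eq:.3f}")
```

    At the minimum, dG/dx = 0 reproduces the familiar condition ΔG° + RT ln Q = 0, so the scan lands at the same composition one would get from the equilibrium constant.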

  15. APPLICATIONS OF COMPUTER GRAPHICS TO INTEGRATED ENVIRONMENTAL ASSESSMENTS OF ENERGY SYSTEMS

    EPA Science Inventory

    This report summarizes the first two years of research designed to demonstrate applications of computer graphics to environmental analyses associated with the evaluation of impacts from development of conventional energy systems. The work emphasizes the use of storage-tube comput...

  16. On-line computer system for use with low- energy nuclear physics experiments is reported

    NASA Technical Reports Server (NTRS)

    Gemmell, D. S.

    1969-01-01

    Computer program handles data from low-energy nuclear physics experiments which utilize the ND-160 pulse-height analyzer and the PHYLIS computing system. The program allows experimenters to choose from about 50 different basic data-handling functions and to prescribe the order in which these functions will be performed.

  17. Energy Use and Power Levels in New Monitors and Personal Computers

    SciTech Connect

    Roberson, Judy A.; Homan, Gregory K.; Mahajan, Akshay; Nordman, Bruce; Webber, Carrie A.; Brown, Richard E.; McWhinney, Marla; Koomey, Jonathan G.

    2002-07-23

    Our research was conducted in support of the EPA ENERGY STAR Office Equipment program, whose goal is to reduce the amount of electricity consumed by office equipment in the U.S. The most energy-efficient models in each office equipment category are eligible for the ENERGY STAR label, which consumers can use to identify and select efficient products. As the efficiency of each category improves over time, the ENERGY STAR criteria need to be revised accordingly. The purpose of this study was to provide reliable data on the energy consumption of the newest personal computers and monitors that the EPA can use to evaluate revisions to current ENERGY STAR criteria as well as to improve the accuracy of ENERGY STAR program savings estimates. We report the results of measuring the power consumption and power management capabilities of a sample of new monitors and computers. These results will be used to improve estimates of program energy savings and carbon emission reductions, and to inform revisions of the ENERGY STAR criteria for these products. Our sample consists of 35 monitors and 26 computers manufactured between July 2000 and October 2001; it includes cathode ray tube (CRT) and liquid crystal display (LCD) monitors, Macintosh and Intel-architecture computers, desktop and laptop computers, and integrated computer systems, in which power consumption of the computer and monitor cannot be measured separately. For each machine we measured power consumption when off, on, and in each low-power level. We identify trends in, and opportunities to reduce, power consumption in new personal computers and monitors. Our results include a trend among monitor manufacturers to provide a single very low low-power level, well below the current ENERGY STAR criteria for sleep power consumption. These very low sleep power results mean that energy consumed when monitors are off or in active use has become more important in terms of its contribution to overall unit energy consumption (UEC).

  18. Department of Energy research in utilization of high-performance computers

    SciTech Connect

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-08-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models, the execution of which is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex, and consequently it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure.

  19. Monte Carlo Computational Modeling of the Energy Dependence of Atomic Oxygen Undercutting of Protected Polymers

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; Stueber, Thomas J.; Norris, Mary Jo

    1998-01-01

    A Monte Carlo computational model has been developed which simulates atomic oxygen attack of protected polymers at defect sites in the protective coatings. The parameters defining how atomic oxygen interacts with polymers and protective coatings as well as the scattering processes which occur have been optimized to replicate experimental results observed from protected polyimide Kapton on the Long Duration Exposure Facility (LDEF) mission. Computational prediction of atomic oxygen undercutting at defect sites in protective coatings for various arrival energies was investigated. The atomic oxygen undercutting energy dependence predictions enable one to predict mass loss that would occur in low Earth orbit, based on lower energy ground laboratory atomic oxygen beam systems. Results of computational model prediction of undercut cavity size as a function of energy and defect size will be presented to provide insight into expected in-space mass loss of protected polymers with protective coating defects based on lower energy ground laboratory testing.
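    A toy sketch in the spirit of such a model (not the authors' code; the geometry, reaction probability and scattering rule are invented for illustration) tracks atoms entering through a coating defect and either reacting with the underlying polymer or scattering sideways within the growing cavity. Lower reaction probability stands in for lower arrival energy and yields a wider, shallower undercut:

```python
import random

def undercut_cavity(width, defect_x, n_atoms, reaction_prob, seed=0):
    """Toy 2-D Monte Carlo sketch of atomic-oxygen undercutting at a
    coating defect: each atom enters at the defect column and, per step,
    either reacts (eroding one polymer cell) with probability
    reaction_prob or scatters to a neighboring column. Returns the
    eroded depth per column."""
    rng = random.Random(seed)
    depth = [0] * width
    for _ in range(n_atoms):
        x = defect_x
        while True:
            if rng.random() < reaction_prob:
                depth[x] += 1  # atom reacts: one polymer cell removed
                break
            # atom scatters sideways within the cavity (clamped at edges)
            x = min(width - 1, max(0, x + rng.choice((-1, 1))))
    return depth

cavity = undercut_cavity(width=11, defect_x=5, n_atoms=500, reaction_prob=0.3)
print(cavity)  # erosion centered on the defect, spreading with scattering
```

    With reaction_prob=1.0 every atom reacts on arrival and all erosion is confined to the defect column, mimicking a highly reactive (high-energy) beam; smaller values let the cavity undercut laterally beneath the intact coating.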

  20. Energy Drain by Computers Stifles Efforts at Cost Control

    ERIC Educational Resources Information Center

    Keller, Josh

    2009-01-01

    The high price of storing and processing data is hurting colleges and universities across the country. In response, some institutions are embracing greener technologies to keep costs down and help the environment. But compared with other industries, colleges and universities have been slow to understand the problem and to adopt energy-saving…

  1. Surface modifications and optical variations of (−1 1 1) lattice oriented CuO nanofilms for solar energy applications

    SciTech Connect

    Dhanasekaran, V.; Mahalingam, T.

    2013-09-01

    Graphical abstract: - Highlights: • The films are grown using a low cost SILAR method. • The pH value is found to play a significant role in the property of the resulting films. • The fabrication of band pass filters between 450 nm and 1000 nm is envisaged. • Electrical conductivity and optical band gap values were found to be 68.1 × 10{sup −3} Ω{sup −1} cm{sup −1} and 1.08 eV. • Coating may aid the small band of frequencies could pave way for enhancing the efficiency. - Abstract: This paper reports on the preparation and characterization of Successive Ionic Layer by Adsorption and Reaction (SILAR) grown CuO thin films. The films were deposited onto glass substrates at various solution pH values. The thickness of the film is increased with increase of solution pH values. X-ray diffraction analysis revealed that the prepared films exhibited the monoclinic structure with (−1 1 1) predominant orientation. The optimized pH value is 11 ± 0.1. The microstructure, morphology, optical and electrical properties are studied and reported. The transmission spectra (T) at normal incidence revealed that the films exhibit indirect transitions and may be tailored for passing selected bands of frequencies in visible near IR range. The activation energy is estimated to be about 0.29 eV.

  2. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    NASA Astrophysics Data System (ADS)

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    2013-12-01

    Recent years have witnessed growing interest in brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful yet energy-efficient cognitive-computing hardware, computing devices beyond CMOS may need to be explored. The suitability of such devices for this field of computing depends strongly on how closely their physical characteristics match the essential computing primitives employed in such models. In this work, we discuss the rationale for applying emerging spin-torque devices to bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and "thresholding" operation of an artificial neuron with high energy efficiency. Comparison with a CMOS-based analog circuit model of a neuron shows that "spin-neurons" (spin-based circuit models of neurons) can achieve more than two orders of magnitude lower energy and more than three orders of magnitude lower energy-delay product. Spin-neurons are therefore an attractive option for the neuromorphic computers of the future.
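The summing-and-thresholding primitive that the abstract maps onto spin-torque switches is the standard artificial-neuron operation. A purely behavioral sketch, with no device physics modeled; the function name, weights, currents, and threshold are hypothetical:

```python
def spin_neuron_fire(input_currents, weights, threshold_current):
    """Behavioral model of the analog summing-and-thresholding step:
    weighted input currents are summed, and the device 'fires'
    (magnetization switches) when the net current reaches the
    switching threshold. Names and values are hypothetical."""
    net_current = sum(w * i for w, i in zip(weights, input_currents))
    return 1 if net_current >= threshold_current else 0
```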

  3. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    SciTech Connect

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    2013-12-21

    Recent years have witnessed growing interest in brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful yet energy-efficient cognitive-computing hardware, computing devices beyond CMOS may need to be explored. The suitability of such devices for this field of computing depends strongly on how closely their physical characteristics match the essential computing primitives employed in such models. In this work, we discuss the rationale for applying emerging spin-torque devices to bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and “thresholding” operation of an artificial neuron with high energy efficiency. Comparison with a CMOS-based analog circuit model of a neuron shows that “spin-neurons” (spin-based circuit models of neurons) can achieve more than two orders of magnitude lower energy and more than three orders of magnitude lower energy-delay product. Spin-neurons are therefore an attractive option for the neuromorphic computers of the future.

  4. Computed rotational rainbows from realistic potential energy surfaces

    SciTech Connect

    Gianturco, F.A.; Palma, A.

    1985-08-01

    The quantal IOS approximation is employed here to study interference structures in the rotationally inelastic, state-to-state differential cross sections for polar diatomic targets (LiH, FH, and CO) interacting with He atoms. Quite realistic expressions are used to describe the relevant potential energy surfaces (PES), taken from previous works that tested them against accurate experimental findings for total and partial differential cross sections. Specific features such as short-range anisotropy and well depth, long-range attractive regions, and the overall range of action of each potential employed are analyzed and discussed in relation to their influence on the appearance of rotational rainbows and on the possible observation of cross-section extrema in rotational energy distributions.

  5. Energy-Efficient Computational Chemistry: Comparison of x86 and ARM Systems.

    PubMed

    Keipert, Kristopher; Mitra, Gaurav; Sunriyal, Vaibhav; Leang, Sarom S; Sosonkina, Masha; Rendell, Alistair P; Gordon, Mark S

    2015-11-10

    The computational efficiency and energy-to-solution of several applications using the GAMESS quantum chemistry suite of codes are evaluated for 32-bit and 64-bit ARM-based computers and compared to an x86 machine. The x86 system completes all benchmark computations more quickly than either ARM system and is the best choice to minimize time to solution. The ARM64 and ARM32 computational performances are similar to each other for Hartree-Fock and density functional theory energy calculations. However, for memory-intensive second-order perturbation theory energy and gradient computations, the lower ARM32 read/write memory bandwidth results in computation times as much as 86% longer than on the ARM64 system. The ARM32 system is more energy efficient than the x86 and ARM64 CPUs for all benchmarked methods, while the ARM64 CPU is more energy efficient than the x86 CPU for some core counts and molecular sizes. PMID:26574303

  6. Factors Affecting Energy Barriers for Pyramidal Inversion in Amines and Phosphines: A Computational Chemistry Lab Exercise

    ERIC Educational Resources Information Center

    Montgomery, Craig D.

    2013-01-01

    An undergraduate exercise in computational chemistry that investigates the energy barrier for pyramidal inversion of amines and phosphines is presented. Semiempirical calculations (PM3) of the ground-state and transition-state energies for NR¹R²R³ and PR¹R²R³ allow…

  7. A Computer-Based Dialogue for Deriving Energy Conservation for Motion in One-Dimension. A Computer Simulation for the Study of Waves.

    ERIC Educational Resources Information Center

    Bork, Alfred M.; And Others

    Two computer programs are described, with the development and implementation of the first program described in some detail. This is a student-computer dialogue for beginning or intermediate physics classes entitled "A Computer-Based Dialogue for Deriving Energy Conservation for Motion in One-Dimension." A portion of the flowchart is included,…

  8. Large Scale Computing and Storage Requirements for Basic Energy Sciences Research

    SciTech Connect

    Gerber, Richard; Wasserman, Harvey

    2011-03-31

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility supporting research within the Department of Energy's Office of Science. NERSC provides high-performance computing (HPC) resources to approximately 4,000 researchers working on about 400 projects. In addition to hosting large-scale computing facilities, NERSC provides the support and expertise scientists need to effectively and efficiently use HPC systems. In February 2010, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Basic Energy Sciences (BES) held a workshop to characterize HPC requirements for BES research through 2013. The workshop was part of NERSC's legacy of anticipating users' future needs and deploying the necessary resources to meet these demands. Workshop participants reached a consensus on several key findings, in addition to achieving the workshop's goal of collecting and characterizing computing requirements. The key requirements for scientists conducting research in BES are: (1) Larger allocations of computational resources; (2) Continued support for standard application software packages; (3) Adequate job turnaround time and throughput; and (4) Guidance and support for using future computer architectures. This report expands upon these key points and presents others. Several 'case studies' are included as significant representative samples of the needs of science teams within BES. Research teams' scientific goals, computational methods of solution, current and 2013 computing requirements, and special software and support needs are summarized in these case studies. Also included are researchers' strategies for computing in the highly parallel, 'multi-core' environment that is expected to dominate HPC architectures over the next few years. NERSC has strategic plans and initiatives already underway that address key workshop findings. This report includes a brief summary of those relevant to issues

  9. Positive-energy D-bar method for acoustic tomography: a computational study

    NASA Astrophysics Data System (ADS)

    de Hoop, M. V.; Lassas, M.; Santacesaria, M.; Siltanen, S.; Tamminen, J. P.

    2016-02-01

    A new computational method for reconstructing a potential from the Dirichlet-to-Neumann (DN) map at positive energy is developed. The method is based on D-bar techniques and works in the absence of exceptional points—in particular, if the potential is small enough compared to the energy. Numerical tests reveal exceptional points for perturbed, radial potentials. Reconstructions for several potentials are computed using simulated DN maps with and without added noise. The new reconstruction method is shown to work well for energy values between 10⁻⁵ and 5, with smaller values giving better results.

  10. Material characterization of dual-energy computed tomographic data using polar coordinates.

    PubMed

    Havla, Lukas; Peller, Michael; Cyran, Clemens; Nikolaou, Konstantin; Reiser, Maximilian; Dietrich, Olaf

    2015-01-01

    The purpose of this study was to evaluate a new dual-energy computed tomographic postprocessing approach on the basis of the transformation of dual-energy radiodensity data into polar coordinates. Given 2 corresponding dual-energy computed tomographic images, the attenuation data D(U1), D(U2) in Hounsfield units of both tube voltages (U1,U2) were transformed for each voxel to polar coordinates: r (distance to the radiodensity coordinate origin) is an approximate measure of electron density and φ (angle to the abscissa) differentiates between materials. PMID:25279847
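The transformation described above can be sketched directly. `dual_energy_polar` is a hypothetical helper name; any additional scaling the authors apply to approximate electron density is omitted:

```python
import math

def dual_energy_polar(d_u1, d_u2):
    """Map one voxel's pair of attenuation values D(U1), D(U2) (in
    Hounsfield units) to polar coordinates: r, the distance to the
    radiodensity origin, approximates electron density, while phi,
    the angle to the abscissa (in degrees), separates materials.
    The function name and any further scaling are assumptions."""
    r = math.hypot(d_u1, d_u2)
    phi = math.degrees(math.atan2(d_u2, d_u1))
    return r, phi
```

Materials with similar electron density but different effective atomic number then separate along `phi` rather than `r`.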

  11. Ab Initio Computation of the Energies of Circular Quantum Dots

    SciTech Connect

    Lohne, M. Pedersen; Hagen, Gaute; Hjorth-Jensen, M.; Kvaal, S.; Pederiva, F.

    2011-01-01

    We perform coupled-cluster and diffusion Monte Carlo calculations of the energies of circular quantum dots up to 20 electrons. The coupled-cluster calculations include triples corrections and a renormalized Coulomb interaction defined for a given number of low-lying oscillator shells. Using such a renormalized Coulomb interaction brings the coupled-cluster calculations with triples correlations in excellent agreement with the diffusion Monte Carlo calculations. This opens up perspectives for doing ab initio calculations for much larger systems of electrons.

  12. Computer aided optimal design of compressed air energy storage systems

    NASA Astrophysics Data System (ADS)

    Ahrens, F. W.; Sharma, A.; Ragsdell, K. M.

    1980-07-01

    An automated procedure for the design of Compressed Air Energy Storage (CAES) systems is presented. The procedure relies upon modern nonlinear programming algorithms, decomposition theory, and numerical models of the various system components. Two modern optimization methods are employed: BIAS, a Method of Multipliers code, and OPT, a Generalized Reduced Gradient code. The procedure is demonstrated by the design of a CAES facility employing the Media, Illinois Galesville aquifer as the reservoir. The methods employed produced significant reductions in capital and operating costs and in the number of aquifer wells required.

  13. Theoretical studies of potential energy surfaces and computational methods.

    SciTech Connect

    Shepard, R.

    2006-01-01

    This project involves the development, implementation, and application of theoretical methods for the calculation and characterization of potential energy surfaces (PES) involving molecular species that occur in hydrocarbon combustion. These potential energy surfaces require an accurate and balanced treatment of reactants, intermediates, and products. Most of our work focuses on general multiconfiguration self-consistent-field (MCSCF) and multireference single- and double-excitation configuration interaction (MRSDCI) methods. In contrast to the more common single-reference electronic structure methods, this approach is capable of describing accurately molecular systems that are highly distorted away from their equilibrium geometries, including reactant, fragment, and transition-state geometries, and of describing regions of the potential surface that are associated with electronic wave functions of widely varying nature. The MCSCF reference wave functions are designed to be sufficiently flexible to describe qualitatively the changes in the electronic structure over the broad range of molecular geometries of interest. The necessary mixing of ionic, covalent, and Rydberg contributions, along with the appropriate treatment of the different electron-spin components (e.g. closed shell, high-spin open-shell, low-spin open shell, radical, diradical, etc.) of the wave functions are treated correctly at this level. Further treatment of electron correlation effects is included using large scale multireference CI wave functions, particularly including the single and double excitations relative to the MCSCF reference space. This leads to the most flexible and accurate large-scale MRSDCI wave functions that have been used to date in global PES studies.

  14. Theoretical studies of potential energy surfaces and computational methods

    SciTech Connect

    Shepard, R.

    1993-12-01

    This project involves the development, implementation, and application of theoretical methods for the calculation and characterization of potential energy surfaces involving molecular species that occur in hydrocarbon combustion. These potential energy surfaces require an accurate and balanced treatment of reactants, intermediates, and products. This difficult challenge is met with general multiconfiguration self-consistent-field (MCSCF) and multireference single- and double-excitation configuration interaction (MRSDCI) methods. In contrast to the more common single-reference electronic structure methods, this approach is capable of describing accurately molecular systems that are highly distorted away from their equilibrium geometries, including reactant, fragment, and transition-state geometries, and of describing regions of the potential surface that are associated with electronic wave functions of widely varying nature. The MCSCF reference wave functions are designed to be sufficiently flexible to describe qualitatively the changes in the electronic structure over the broad range of geometries of interest. The necessary mixing of ionic, covalent, and Rydberg contributions, along with the appropriate treatment of the different electron-spin components (e.g. closed shell, high-spin open-shell, low-spin open shell, radical, diradical, etc.) of the wave functions, are treated correctly at this level. Further treatment of electron correlation effects is included using large scale multireference CI wave functions, particularly including the single and double excitations relative to the MCSCF reference space. This leads to the most flexible and accurate large-scale MRSDCI wave functions that have been used to date in global PES studies.

  15. Structural models of zebrafish (Danio rerio) NOD1 and NOD2 NACHT domains suggest differential ATP binding orientations: insights from computational modeling, docking and molecular dynamics simulations.

    PubMed

    Maharana, Jitendra; Sahoo, Bikash Ranjan; Bej, Aritra; Jena, Itishree; Parida, Arunima; Sahoo, Jyoti Ranjan; Dehury, Budheswar; Patra, Mahesh Chandra; Martha, Sushma Rani; Balabantray, Sucharita; Pradhan, Sukanta Kumar; Behera, Bijay Kumar

    2015-01-01

    Nucleotide-binding oligomerization domain-containing protein 1 (NOD1) and NOD2 are cytosolic pattern recognition receptors playing pivotal roles in innate immune signaling. NOD1 and NOD2 recognize the bacterial peptidoglycan derivatives iE-DAP and MDP, respectively, and undergo conformational alteration and ATP-dependent self-oligomerization of the NACHT domain, followed by downstream signaling. The lack of structural information on the NACHT domain limits our understanding of the NOD-mediated signaling mechanism. Here, we predicted the structure of the NACHT domain of both NOD1 and NOD2 from the model organism zebrafish (Danio rerio) using computational methods. Our study highlighted the differential ATP binding modes in NOD1 and NOD2. In NOD1, the γ-phosphate of ATP faced toward the central nucleotide binding cavity, as in NLRC4, whereas in NOD2 the cavity was occupied by the adenine moiety. The conserved 'Lysine' at Walker A formed hydrogen bonds (H-bonds), and Aspartic acid (Walker B) formed an electrostatic interaction with ATP. At Sensor 1, Arg328 of NOD1 exhibited an H-bond with ATP, whereas the corresponding Arg404 of NOD2 did not. The 'Proline' of the GxP motif (Pro386 of NOD1 and Pro464 of NOD2) interacted with the adenine moiety, and His511 at Sensor 2 of NOD1 interacted with the γ-phosphate group of ATP. In contrast, His579 of NOD2 interacted with the adenine moiety in a relatively inverted orientation. Our findings are well supplemented by the molecular interaction of ATP with NLRC4 and are consistent with mutagenesis data reported for human, indicating an evolutionarily shared NOD signaling mechanism. Together, this study provides novel insights into the ATP binding mechanism and highlights the differential ATP binding modes in zebrafish NOD1 and NOD2. PMID:25811192

  16. Structural Models of Zebrafish (Danio rerio) NOD1 and NOD2 NACHT Domains Suggest Differential ATP Binding Orientations: Insights from Computational Modeling, Docking and Molecular Dynamics Simulations

    PubMed Central

    Maharana, Jitendra; Sahoo, Bikash Ranjan; Bej, Aritra; Sahoo, Jyoti Ranjan; Dehury, Budheswar; Patra, Mahesh Chandra; Martha, Sushma Rani; Balabantray, Sucharita; Pradhan, Sukanta Kumar; Behera, Bijay Kumar

    2015-01-01

    Nucleotide-binding oligomerization domain-containing protein 1 (NOD1) and NOD2 are cytosolic pattern recognition receptors playing pivotal roles in innate immune signaling. NOD1 and NOD2 recognize the bacterial peptidoglycan derivatives iE-DAP and MDP, respectively, and undergo conformational alteration and ATP-dependent self-oligomerization of the NACHT domain, followed by downstream signaling. The lack of structural information on the NACHT domain limits our understanding of the NOD-mediated signaling mechanism. Here, we predicted the structure of the NACHT domain of both NOD1 and NOD2 from the model organism zebrafish (Danio rerio) using computational methods. Our study highlighted the differential ATP binding modes in NOD1 and NOD2. In NOD1, the γ-phosphate of ATP faced toward the central nucleotide binding cavity, as in NLRC4, whereas in NOD2 the cavity was occupied by the adenine moiety. The conserved ‘Lysine’ at Walker A formed hydrogen bonds (H-bonds), and Aspartic acid (Walker B) formed an electrostatic interaction with ATP. At Sensor 1, Arg328 of NOD1 exhibited an H-bond with ATP, whereas the corresponding Arg404 of NOD2 did not. The ‘Proline’ of the GxP motif (Pro386 of NOD1 and Pro464 of NOD2) interacted with the adenine moiety, and His511 at Sensor 2 of NOD1 interacted with the γ-phosphate group of ATP. In contrast, His579 of NOD2 interacted with the adenine moiety in a relatively inverted orientation. Our findings are well supplemented by the molecular interaction of ATP with NLRC4 and are consistent with mutagenesis data reported for human, indicating an evolutionarily shared NOD signaling mechanism. Together, this study provides novel insights into the ATP binding mechanism and highlights the differential ATP binding modes in zebrafish NOD1 and NOD2. PMID:25811192

  17. A Variable Refrigerant Flow Heat Pump Computer Model in EnergyPlus

    SciTech Connect

    Raustad, Richard A.

    2013-01-01

    This paper provides an overview of the variable refrigerant flow heat pump computer model included with the Department of Energy's EnergyPlus™ whole-building energy simulation software. The mathematical model for a variable refrigerant flow heat pump operating in cooling or heating mode and a detailed model for the variable refrigerant flow direct-expansion (DX) cooling coil are described in detail.

  18. Impact of office productivity cloud computing on energy consumption and greenhouse gas emissions.

    PubMed

    Williams, Daniel R; Tang, Yinshan

    2013-05-01

    Cloud computing is usually regarded as being energy efficient and thus emitting less greenhouse gas (GHG) than traditional forms of computing. When the energy consumption of Microsoft's cloud computing Office 365 (O365) and traditional Office 2010 (O2010) software suites was tested and modeled, some cloud services were found to consume more energy than the traditional form. The model developed in this research took into consideration the energy consumption at the three main stages of data transmission: data center, network, and end-user device. Comparable products from each suite were selected, and activities were defined for each product to represent a different computing type. Microsoft provided highly confidential data for the data center stage, while the networking and user device stages were measured directly. A new measurement and software apportionment approach was defined and utilized, allowing the power consumption of cloud services to be directly measured at the user device stage. Results indicated that cloud computing is more energy efficient for Excel and Outlook, which consumed less energy and emitted less GHG than their standalone counterparts. The power consumption of the cloud-based Outlook (8%) and Excel (17%) was lower than that of their traditional counterparts. However, the power consumption of the cloud version of Word was 17% higher than its traditional equivalent. A third, mixed access method was also measured for Word, which emitted 5% more GHG than the traditional version. It is evident that cloud computing may not provide a unified way forward to reduce energy consumption and GHG. Direct conversion from the standalone package to the cloud provision platform can now consider energy and GHG emissions at the software development and cloud service design stage using the methods described in this research. PMID:23548097
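The three-stage accounting used in the study (data center, network, end-user device) amounts to a simple additive energy model. A minimal sketch; the function name and all parameter values are illustrative placeholders, not the study's measured (or confidential) figures:

```python
def task_energy_wh(datacenter_wh, data_mb, network_wh_per_mb,
                   device_power_w, device_hours):
    """Energy for one computing task summed over the model's three
    stages: data center, network transfer, and end-user device.
    Returns watt-hours; all parameter values here are illustrative,
    not the study's measured figures."""
    network_wh = data_mb * network_wh_per_mb
    device_wh = device_power_w * device_hours
    return datacenter_wh + network_wh + device_wh
```

Comparing cloud and standalone variants of the same activity then reduces to evaluating this sum with each variant's stage parameters.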

  19. Computation of the Gibbs free energy difference between polymorphs

    NASA Astrophysics Data System (ADS)

    Sinkovits, Daniel W.; Kumar, Sanat K.

    2015-03-01

    Semi-crystalline polymers commonly crystallize into several different polymorphs; for example, the alpha and beta phases of isotactic polypropylene. While it is possible to favor particular polymorphs by kinetic means, such as with varying degrees of supercooling or through the use of different solvents in solution casting, we focus on the question of thermodynamic stability; that is, which polymorph possesses the lowest Gibbs free energy for a given temperature and pressure. We implement a version of the Bennett Acceptance Ratio method and find phase diagrams for several polymers. We also demonstrate agreement with phonon analysis in the quasi-harmonic approximation. The advantages and drawbacks of these methods will be discussed. Multidisciplinary University Research Initiative (MURI).

  20. Whose Orientations?

    ERIC Educational Resources Information Center

    Gutoff, Joshua

    2010-01-01

    This article presents the author's response to Jon A. Levisohn's article entitled "A Menu of Orientations in the Teaching of Rabbinic Literature." While the "menu" Levisohn describes in his groundbreaking work on orientations to the teaching of rabbinic texts will almost certainly be refined over time, even as it stands this article should be of…

  1. Orienteering injuries

    PubMed Central

    Folan, Jean M.

    1982-01-01

    At the Irish National Orienteering Championships in 1981 a survey of the injuries occurring over the two days of competition was carried out. Of 285 individual competitors there was a percentage injury rate of 5.26%. The article discusses the injuries and aspects of safety in orienteering. PMID:7159815

  2. Reverse energy partitioning-An efficient algorithm for computing the density of states, partition functions, and free energy of solids.

    PubMed

    Do, Hainam; Wheatley, Richard J

    2016-08-28

    A robust, model-free Monte Carlo simulation method is proposed to address the challenge of computing the classical density of states and partition function of solids. Starting from the minimum configurational energy, the algorithm partitions the entire energy range in the increasing-energy direction ("upward") into subdivisions whose integrated density of states is known. When combined with the density of states computed from the "downward" energy partitioning approach [H. Do, J. D. Hirst, and R. J. Wheatley, J. Chem. Phys. 135, 174105 (2011)], the equilibrium thermodynamic properties can be evaluated at any temperature and in any phase. The method is illustrated for the Lennard-Jones system and can readily be extended to other molecular systems and clusters for which the structures are known. PMID:27586913
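Once the density of states is known, equilibrium properties at any temperature follow from standard canonical sums. A sketch of that downstream evaluation step for a discrete spectrum (with k_B = 1); this is not the authors' partitioning algorithm itself:

```python
import math

def canonical_properties(energies, degeneracies, beta):
    """Canonical-ensemble quantities from a discrete density of
    states: partition function Z, mean energy <E>, and Helmholtz
    free energy F = -ln(Z)/beta, with k_B = 1. Energies are shifted
    by the ground state for numerical stability."""
    e0 = min(energies)
    boltz = [g * math.exp(-beta * (e - e0))
             for e, g in zip(energies, degeneracies)]
    z_shifted = sum(boltz)
    e_mean = sum(e * w for e, w in zip(energies, boltz)) / z_shifted
    z = z_shifted * math.exp(-beta * e0)
    f = e0 - math.log(z_shifted) / beta
    return z, e_mean, f
```

A two-level system (levels 0 and 1, nondegenerate) gives Z = 1 + e^(−β), a convenient analytic check.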

  3. Solar energy conversion systems engineering and economic analysis radiative energy input/thermal electric output computation. Volume III

    SciTech Connect

    Russo, G.

    1982-09-01

    Presented are the direct energy flux analytical model, an analysis of the results, and a brief description of a non-steady-state model of a thermal solar energy conversion system implemented in the SIRR2 code, together with the coupling of SIRR2 to CIRR2, which computes the global solar flux on a collector. It is shown how the CIRR2 and, especially, the SIRR2 codes may be used for the proper design of a solar collector system. (LEW)
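Codes of this kind typically build on an isotropic-sky (Liu-Jordan type) decomposition of the flux on a tilted collector into beam, sky-diffuse, and ground-reflected terms. A simplified sketch, not CIRR2 itself; the function name and parameters are assumptions:

```python
import math

def tilted_irradiance(i_beam_h, i_diffuse_h, cos_incidence, cos_zenith,
                      tilt_deg, ground_albedo=0.2):
    """Isotropic-sky (Liu-Jordan form) estimate of total irradiance
    on a tilted collector: horizontal beam irradiance scaled by the
    incidence/zenith cosine ratio, plus sky-diffuse and
    ground-reflected view-factor terms. Irradiances in W/m^2."""
    beta = math.radians(tilt_deg)
    r_b = max(cos_incidence, 0.0) / max(cos_zenith, 1e-6)  # beam tilt factor
    beam = i_beam_h * r_b
    sky = i_diffuse_h * (1.0 + math.cos(beta)) / 2.0
    ground = ((i_beam_h + i_diffuse_h) * ground_albedo
              * (1.0 - math.cos(beta)) / 2.0)
    return beam + sky + ground
```

At zero tilt the sky view factor is 1 and the ground term vanishes, so the result reduces to the global horizontal irradiance, a convenient sanity check.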

  4. Orientation-Preserving Rod Elements for Real-Time Thin-Shell Simulation.

    PubMed

    Zhang, Nan; Qu, Huamin; Sweet, Robert

    2011-06-01

    We propose a new computation model for simulating elastic thin shells at interactive rates. Existing graphical simulation methods are mostly based on dihedral angle energy functions, which need to compute the first order and second order partial derivatives with respect to current vertex positions as bending forces and stiffness matrices. The symbolic derivatives are complicated in nonisometric element deformations. To simplify computing the derivatives, instead of directly constructing the dihedral angle energy, we use the orientation change energy of mesh edges. A continuum-mechanics-based orientation-preserving rod element model is developed to provide the bending forces. The advantage of our method is simple bending force and stiffness matrix computation, since in the rod model, we apply a novel incremental construction of the deformation gradient tensor to linearize both tensile and orientation deformations. Consequently, our model is efficient, easy to implement, and supports both quadrilateral and triangle meshes. It also treats shells and plates uniformly. PMID:20548108

  5. EDITORIAL: Optical orientation

    NASA Astrophysics Data System (ADS)

    Yuri; Landwehr, Gottfried

    2008-11-01

    priority of the discovery in the literature, which was partly caused by the existence of the Iron Curtain. I had already enjoyed contact with Boris in the 1980s when the two volumes of Landau Level Spectroscopy were being prepared [2]. He was one of the pioneers of magneto-optics in semiconductors. In the 1950s the band structure of germanium and silicon was investigated by magneto-optical methods, mainly in the United States. No excitonic effects were observed, and the band structure parameters were determined without taking account of excitons. However, working with cuprous oxide, which is a direct semiconductor with a relatively large energy gap, Zakharchenya and his co-worker Seysan showed that in order to obtain correct band structure parameters, it is necessary to take excitons into account [3]. Around 1970 Boris started work on optical orientation. Early work by Hanle in Germany in the 1920s on the depolarization of luminescence in mercury vapour by a transverse magnetic field was not appreciated for a long time. Only in the late 1940s did Kastler and co-workers in Paris begin a systematic study of optical pumping, which led to the award of a Nobel prize. The ideas of optical pumping were first applied to solid state physics by Georges Lampel in 1968. He demonstrated optical orientation of free carriers in silicon. The detection method was nuclear magnetic resonance; optically oriented free electrons dynamically polarized the 29Si nuclei of the host lattice. The first optical detection of spin orientation was demonstrated with the III-V semiconductor GaSb by Parsons. Due to the various interaction mechanisms of spins with their environment, the effects occurring in semiconductors are naturally more complex than those in atoms. Optical detection is now the preferred method to detect spin alignment in semiconductors.
The orientation of spins in crystals pumped with circularly polarized light is deduced from the degree of circular polarization of the recombination

  6. Orientation-dependent energy level alignment and film growth of 2,7-dioctyl[1]benzothieno[3,2-b]benzothiophene (C8-BTBT) on HOPG

    NASA Astrophysics Data System (ADS)

    Lyu, Lu; Niu, Dongmei; Xie, Haipeng; Cao, Ningtong; Zhang, Hong; Zhang, Yuhe; Liu, Peng; Gao, Yongli

    2016-01-01

    Combining ultraviolet photoemission spectroscopy, X-ray photoemission spectroscopy, atomic force microscopy, and X-ray diffraction measurements, we performed a systematic investigation of the correlation among energy level alignment, film growth, and molecular orientation of 2,7-dioctyl[1]benzothieno[3,2-b]benzothiophene (C8-BTBT) on highly oriented pyrolytic graphite. The molecules lie down in the first layer and then stand up from the second layer onward. The ionization potential shows a sharp decrease from the lying-down region to the standing-up region. When C8-BTBT molecules start standing up, unconventional band-bending-like energy level shifts are observed as the film thickness increases. These shifts are ascribed to a gradual decrease of the molecular tilt angle about the substrate normal with increasing film thickness.

  7. A biomolecular implementation of logically reversible computation with minimal energy dissipation.

    PubMed

    Klein, J P; Leete, T H; Rubin, H

    1999-10-01

    Energy dissipation associated with logic operations imposes a fundamental physical limit on computation and is generated by the entropic cost of information erasure, which is a consequence of irreversible logic elements. We show how to encode information in DNA and use DNA amplification to implement a logically reversible gate that comprises a complete set of operators capable of universal computation. We also propose a method using this design to connect, or 'wire', these gates together in a biochemical fashion to create a logic network, allowing complex parallel computations to be executed. The architecture of the system permits highly parallel operations and has properties that resemble well known genetic regulatory systems. PMID:10636026
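    The abstract does not specify the gate construction; as a hedged illustration of the logical property being exploited (not the authors' DNA encoding), the Fredkin (controlled-swap) gate is a standard reversible gate that is universal for Boolean logic: it is bijective on its inputs, so no information is erased, and AND and NOT can be recovered by fixing ancilla inputs.

```python
def fredkin(c, a, b):
    """Fredkin (controlled-swap) gate: if the control bit c is 1, swap a and b.
    The map is bijective on (c, a, b), so it is logically reversible and
    in principle incurs no entropic cost of information erasure."""
    return (c, b, a) if c else (c, a, b)

# Reversibility: the gate is its own inverse, so applying it twice
# restores any input triple.
for bits in [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]:
    assert fredkin(*fredkin(*bits)) == bits

# Universality sketch: AND and NOT emerge by fixing ancilla inputs.
def AND(x, y):
    return fredkin(x, 0, y)[1]   # second output is x AND y

def NOT(x):
    return fredkin(x, 1, 0)[1]   # second output is NOT x
```

    In the paper's scheme the analogous operations are carried out by DNA amplification steps; the sketch above only illustrates the logical reversibility that makes minimal-dissipation computation possible.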

  8. Cloud computing for energy management in smart grid - an application survey

    NASA Astrophysics Data System (ADS)

    Naveen, P.; Kiing Ing, Wong; Kobina Danquah, Michael; Sidhu, Amandeep S.; Abu-Siada, Ahmed

    2016-03-01

    The smart grid is an emerging energy system in which information technology, tools, and techniques are applied to make the grid run more efficiently. It possesses demand response capacity to help balance electrical consumption with supply. The challenges and opportunities of emerging and future smart grids can be addressed by cloud computing. To focus on these requirements, we provide an in-depth survey of cloud computing applications for energy management in the smart grid architecture. In this survey, we present an outline of the current state of research on smart grid development. We also propose a model of cloud-based economic power dispatch for the smart grid.
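    The abstract does not detail the proposed dispatch model; as a hedged sketch of the underlying optimization (the generator cost coefficients and demand figure below are hypothetical), classic economic dispatch minimizes total quadratic generation cost subject to meeting demand, which at the optimum equalizes the units' incremental costs (lambda iteration):

```python
def economic_dispatch(units, demand, tol=1e-6):
    """Classic lambda-iteration economic dispatch.
    Each unit has quadratic cost C(P) = a + b*P + c*P**2, so the optimum
    equalizes incremental costs: dC/dP = b + 2*c*P = lambda for all units.
    Generation limits are ignored in this minimal sketch."""
    lo, hi = 0.0, 1000.0   # bracket for the system incremental cost (hypothetical)
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        # total output if every unit runs at incremental cost lam
        total = sum((lam - b) / (2 * c) for (a, b, c) in units)
        if total < demand:
            lo = lam
        else:
            hi = lam
    lam = 0.5 * (lo + hi)
    return [(lam - b) / (2 * c) for (a, b, c) in units], lam

# Two hypothetical generators and a 500 MW demand.
units = [(100.0, 10.0, 0.01), (120.0, 12.0, 0.02)]
dispatch, lam = economic_dispatch(units, 500.0)
```

    In a cloud setting the same iteration distributes naturally: each unit evaluates its own optimal output for a broadcast lambda, and only the totals are aggregated.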

  9. Computer control of the energy output of a klystron in the SLC

    SciTech Connect

    Jobe, R.K.; Browne, M.J.; Flores, M.; Phinney, N.; Schwarz, H.D.; Sheppard, J.C.

    1987-02-01

    Hardware and software have been developed to permit computer control of the output of high power klystrons on a pulsed basis. Control of the klystron output is accomplished by varying the input drive via a pulsed rf attenuator. Careful power calibrations permit accurate calculation of the available energy, as seen by the beam, over the full range of the klystron output. The ability to control precisely the energy output allows for energy feed-forward as well as energy feedback applications. Motivation for this work has been the need to adjust the energy of beams launched into various regions of the SLC. Vernier klystrons play a crucial role in the energy delivered from the SLC injector, linac, and positron source. This paper discusses the hardware development, energy calculations, and software implementation. Operational results are presented.

  10. Analyzing Orientations

    NASA Astrophysics Data System (ADS)

    Ruggles, Clive L. N.

    Archaeoastronomical field survey typically involves the measurement of structural orientations (i.e., orientations along and between built structures) in relation to the visible landscape and particularly the surrounding horizon. This chapter focuses on the process of analyzing the astronomical potential of oriented structures, whether in the field or as a desktop appraisal, with the aim of establishing the archaeoastronomical "facts". It does not address questions of data selection (see instead Chap. 25, "Best Practice for Evaluating the Astronomical Significance of Archaeological Sites", 10.1007/978-1-4614-6141-8_25) or interpretation (see Chap. 24, "Nature and Analysis of Material Evidence Relevant to Archaeoastronomy", 10.1007/978-1-4614-6141-8_22). The main necessity is to determine the azimuth, horizon altitude, and declination in the direction "indicated" by any structural orientation. Normally, there are a range of possibilities, reflecting the various errors and uncertainties in estimating the intended (or, at least, the constructed) orientation, and in more formal approaches an attempt is made to assign a probability distribution extending over a spread of declinations. These probability distributions can then be cumulated in order to visualize and analyze the combined data from several orientations, so as to identify any consistent astronomical associations that can then be correlated with the declinations of particular astronomical objects or phenomena at any era in the past. The whole process raises various procedural and methodological issues and does not proceed in isolation from the consideration of corroborative data, which is essential in order to develop viable cultural interpretations.
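    The central computation the chapter describes, turning a measured azimuth and horizon altitude into a declination, follows the standard spherical-astronomy transformation sin δ = sin φ sin h + cos φ cos h cos A (azimuth A from north, horizon altitude h, site latitude φ). A minimal sketch, omitting atmospheric refraction and the error/probability modelling discussed above:

```python
import math

def declination(azimuth_deg, altitude_deg, latitude_deg):
    """Declination indicated by a structural orientation, from the standard
    transformation sin(dec) = sin(lat)sin(alt) + cos(lat)cos(alt)cos(az).
    Azimuth is measured from north; atmospheric refraction is ignored."""
    A, h, phi = (math.radians(x) for x in (azimuth_deg, altitude_deg, latitude_deg))
    sin_dec = math.sin(phi) * math.sin(h) + math.cos(phi) * math.cos(h) * math.cos(A)
    return math.degrees(math.asin(sin_dec))

# Due east (A = 90 deg) with a level horizon gives declination ~ 0 deg
# at any latitude, i.e. an equinoctial sunrise alignment.
print(declination(90.0, 0.0, 51.2))
```

    In a full analysis each measured orientation would yield a spread of such declinations, reflecting the measurement uncertainties the chapter discusses.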

  11. Computational Plasma Physics at the Bleeding Edge: Simulating Kinetic Turbulence Dynamics in Fusion Energy Sciences

    NASA Astrophysics Data System (ADS)

    Tang, William

    2013-04-01

    Advanced computing is generally recognized to be an increasingly vital tool for accelerating progress in scientific research in the 21st century. The imperative is to translate the rapid advances in supercomputing power, together with the emergence of effective new algorithms and computational methodologies, into corresponding increases in the physics fidelity and the performance of the scientific codes used to model complex physical systems. If properly validated against experimental measurements and verified with mathematical tests and computational benchmarks, these codes can provide more reliable predictive capability for the behavior of complex systems, including fusion-energy-relevant high temperature plasmas. The magnetic fusion energy research community has made excellent progress in developing advanced codes for which computer run-time and problem size scale very well with the number of processors on massively parallel supercomputers. A good example is the effective use of the full power of modern leadership-class computational platforms, from the terascale to the petascale and beyond, to produce nonlinear particle-in-cell simulations which have accelerated progress in understanding the nature of plasma turbulence in magnetically confined high temperature plasmas. Illustrative results provide great encouragement for being able to include increasingly realistic dynamics in extreme-scale computing campaigns to enable predictive simulations with unprecedented physics fidelity. Some illustrative examples will be presented of algorithmic progress in the magnetic fusion energy sciences in dealing with the low-memory-per-core challenges of extreme-scale computing on the current top three supercomputers worldwide. These include advanced CPU systems (such as the IBM Blue Gene/Q system and the Fujitsu K machine) as well as the GPU-CPU hybrid system (Titan).

  12. Computational chemistry for graphene-based energy applications: progress and challenges

    NASA Astrophysics Data System (ADS)

    Hughes, Zak E.; Walsh, Tiffany R.

    2015-04-01

    Research in graphene-based energy materials is a rapidly growing area. Many graphene-based energy applications involve interfacial processes. To enable advances in the design of these energy materials, such that their operation, economy, efficiency, and durability are at least comparable with those of fossil-fuel-based alternatives, connections between the molecular-scale structure and function of these interfaces are needed. While it is experimentally challenging to resolve this interfacial structure, molecular simulation and computational chemistry can help bridge these gaps. In this Review, we summarise recent progress in the application of computational chemistry to graphene-based materials for fuel cells, batteries, photovoltaics and supercapacitors. We also outline both the bright prospects and the emerging challenges these techniques face for application to graphene-based energy materials in the future.

  13. Developing an orientation program.

    PubMed

    Edwards, K

    1999-01-01

    When the local area experienced tremendous growth and change, the radiology department at Maury Hospital in Columbia, Tennessee looked seriously at its orientation process in preparation for hiring additional personnel. It was an appropriate time for the department to review its orientation process and to develop a manual to serve as both a tool for supervisors and an ongoing reference for new employees. To gather information for the manual, supervisors were asked to identify information they considered vital for new employees to know concerning the daily operations of the department, its policies and procedures, the organizational structure of the hospital, and hospital and departmental computer systems. That information became the basis of the orientation manual, and provided an introduction to the hospital and radiology department; the structure of the organization; an overview of the radiology department; personnel information; operating procedures and computer systems; and various policies and procedures. With the manual complete, the radiology department concentrated on an orientation process that would meet the needs of supervisors who said they had trouble remembering the many details necessary to teach new employees. A pre-orientation checklist was developed, which contained the many details supervisors must handle between the time an employee is hired and arrives for work. The next step was the creation of a checklist for use by the supervisor during a new employee's first week on the job. A final step in the hospital's orientation program is to have each new employee evaluate the entire orientation process. That information is then used to update and revise the manual. PMID:10346648

  14. The Clinical Impact of Accurate Cystine Calculi Characterization Using Dual-Energy Computed Tomography

    PubMed Central

    Haley, William E.; Ibrahim, El-Sayed H.; Qu, Mingliang; Cernigliaro, Joseph G.; Goldfarb, David S.; McCollough, Cynthia H.

    2015-01-01

    Dual-energy computed tomography (DECT) has recently been suggested as the imaging modality of choice for kidney stones due to its ability to provide information on stone composition. Standard postprocessing of the dual-energy images accurately identifies uric acid stones, but not other types. Cystine stones can be identified from DECT images when analyzed with advanced postprocessing. This case report describes clinical implications of accurate diagnosis of cystine stones using DECT. PMID:26688770

  16. Energy conservation and analysis and evaluation. [specifically at Slidell Computer Complex

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The survey assembled information and made recommendations aimed at conserving utilities and reducing energy use at the Slidell Computer Complex. Specific items included: (1) scheduling and controlling the use of gas and electricity, (2) building modifications to reduce energy use, (3) replacement of old, inefficient equipment, (4) modifications to control systems, (5) evaluation of economizer cycles in HVAC systems, and (6) corrective settings for thermostats, ductstats, and other temperature and pressure control devices.

  17. Roles of deformation and orientation in heavy-ion collisions induced by light deformed nuclei at intermediate energy

    SciTech Connect

    Cao, X. G.; Zhang, G. Q.; Cai, X. Z.; Ma, Y. G.; Guo, W.; Chen, J. G.; Tian, W. D.; Fang, D. Q.; Wang, H. W.

    2010-06-15

    The reaction dynamics of axisymmetric deformed 24Mg + 24Mg collisions has been investigated systematically with an isospin-dependent quantum molecular dynamics model. It is found that different deformations and orientations result in markedly different reaction dynamics. We reveal that observables such as the nuclear stopping power (R), the multiplicity of fragments, and the elliptic flow are very sensitive to the initial deformations and orientations. There exists an eccentricity scaling of the elliptic flow in central body-body collisions with different deformations. In addition, the tip-tip and body-body configurations turn out to be two extreme cases of the central reaction dynamics.

  18. Passive orientation apparatus

    DOEpatents

    Spletzer, Barry L.; Fischer, Gary J.; Martinez, Michael A.

    2001-01-01

    An apparatus that can return a payload to a known orientation after unknown motion, without requiring external power or complex mechanical systems. The apparatus comprises a faceted cage that causes the system to rest in a stable position and orientation after arbitrary motion. A gimbal is mounted with the faceted cage and holds the payload, allowing the payload to move relative to the stable faceted cage. The payload is thereby placed in a known orientation by the interaction of gravity with the geometry of the faceted cage, the mass of the system, and the motion of the payload and gimbal. No additional energy, control, or mechanical actuation is required. The apparatus is suitable for use in applications requiring positioning of a payload to a known orientation after arbitrary or uncontrolled motion, including remote sensing and mobile robot applications.

  19. Computationally efficient characterization of potential energy surfaces based on fingerprint distances

    NASA Astrophysics Data System (ADS)

    Schaefer, Bastian; Goedecker, Stefan

    2016-07-01

    An analysis of the network defined by the potential energy minima of multi-atomic systems and their connectivity via reaction pathways that go through transition states allows us to understand important characteristics like thermodynamic, dynamic, and structural properties. Unfortunately, computing the transition states and reaction pathways in addition to the significant energetically low-lying local minima is a computationally demanding task. Here we introduce a computationally efficient method that is based on a combination of the minima hopping global optimization method and the insight that uphill barriers tend to increase with increasing structural distance between the educt and product states. This method allows us to replace the exact connectivity information and transition state energies with alternative and approximate concepts. Without adding any significant cost to the minima hopping global optimization approach, this method allows us to generate an approximate network of the minima, their connectivity, and a rough measure of the energy needed for their interconversion. This can be used to obtain a first qualitative idea of important physical and chemical properties by means of a disconnectivity graph analysis. Besides the physical insight obtained by such an analysis, the gained knowledge can be used to decide whether or not it is worthwhile to invest computational resources in an exact computation of the transition states and the reaction pathways. Furthermore, it is demonstrated that the method presented here can be used to find physically reasonable interconversion pathways that are promising input pathways for methods like transition path sampling or discrete path sampling.
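    As a hedged sketch of how such an approximate minima network can feed a disconnectivity-graph analysis (the barrier estimates below are hypothetical, standing in for minima-hopping output), minima can be merged into superbasins below a rising energy threshold with a union-find pass over the approximate barriers:

```python
class DisjointSet:
    """Minimal union-find structure for grouping connected minima."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i
    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)

def superbasins(n_minima, barriers, threshold):
    """Group minima whose connecting (approximate) barrier energies lie
    below `threshold`; returns a list of superbasins (sets of minima).
    Sweeping `threshold` upward traces out a disconnectivity graph."""
    ds = DisjointSet(n_minima)
    for (i, j), e_barrier in barriers.items():
        if e_barrier <= threshold:
            ds.union(i, j)
    groups = {}
    for i in range(n_minima):
        groups.setdefault(ds.find(i), set()).add(i)
    return list(groups.values())

# Hypothetical 4-minimum network; barrier heights estimated from
# fingerprint distances rather than exact transition state searches.
barriers = {(0, 1): 0.3, (1, 2): 0.9, (2, 3): 0.4}
# Below a 0.5 threshold, minima {0, 1} and {2, 3} form separate superbasins.
```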

  20. Evaluation of Emerging Energy-Efficient Heterogeneous Computing Platforms for Biomolecular and Cellular Simulation Workloads

    PubMed Central

    Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus

    2016-01-01

    Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922
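    Per-kernel energy figures of the kind described above reduce to integrating time-stamped power samples over each kernel's execution window; a minimal trapezoidal-rule sketch (the sampling rate and power trace are hypothetical):

```python
def energy_joules(timestamps, power_watts):
    """Integrate sampled power (W) over time (s) with the trapezoidal rule.
    Returns the energy in joules for the sampled interval."""
    assert len(timestamps) == len(power_watts) >= 2
    return sum(0.5 * (power_watts[k] + power_watts[k + 1])
               * (timestamps[k + 1] - timestamps[k])
               for k in range(len(timestamps) - 1))

# Hypothetical 1 kHz power samples spanning a 2 s GPU kernel drawing a
# constant 35 W, which integrates to roughly 70 J.
ts = [k / 1000.0 for k in range(2001)]
pw = [35.0 for _ in ts]
print(energy_joules(ts, pw))
```

    The instrumentation's temporal resolution matters here: samples must be dense relative to kernel duration, or the integral attributes energy to the wrong code region.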

  1. PREFACE: International Conference on Computing in High Energy and Nuclear Physics (CHEP'07)

    NASA Astrophysics Data System (ADS)

    Sobie, Randall; Tafirout, Reda; Thomson, Jana

    2007-07-01

    The 2007 International Conference on Computing in High Energy and Nuclear Physics (CHEP) was held on 2-7 September 2007 in Victoria, British Columbia, Canada. CHEP is a major series of international conferences for physicists and computing professionals from the High Energy and Nuclear Physics community, Computer Science, and Information Technology. The CHEP conference provides an international forum to exchange information on computing experience and needs for the community, and to review recent, ongoing, and future activities. The CHEP'07 conference had close to 500 attendees with a program that included plenary sessions of invited oral presentations, a number of parallel sessions comprising oral and poster presentations, and an industrial exhibition. Conference tracks covered topics in Online Computing, Event Processing, Software Components, Tools and Databases, Software Tools and Information Systems, Computing Facilities, Production Grids and Networking, Grid Middleware and Tools, Distributed Data Analysis and Information Management and Collaborative Tools. The conference included a successful whale-watching excursion involving over 200 participants and a banquet at the Royal British Columbia Museum. The next CHEP conference will be held in Prague in March 2009. We would like to thank the sponsors of the conference and the staff at the TRIUMF Laboratory and the University of Victoria who made CHEP'07 a success. Randall Sobie and Reda Tafirout CHEP'07 Conference Chairs

  2. The use of symbolic computation in radiative, energy, and neutron transport calculations. Final report

    SciTech Connect

    Frankel, J.I.

    1997-09-01

    This investigation used symbolic manipulation in developing analytical methods and general computational strategies for solving both linear and nonlinear, regular and singular integral and integro-differential equations which appear in radiative and mixed-mode energy transport. Contained in this report are seven papers which present the technical results as individual modules.

  3. Federal High Performance Computing and Communications Program. The Department of Energy Component.

    ERIC Educational Resources Information Center

    Department of Energy, Washington, DC. Office of Energy Research.

    This report, profusely illustrated with color photographs and other graphics, elaborates on the Department of Energy (DOE) research program in High Performance Computing and Communications (HPCC). The DOE is one of seven agency programs within the Federal Research and Development Program working on HPCC. The DOE HPCC program emphasizes research in…

  4. Energy-efficient building design and operation: The role of computer technology

    SciTech Connect

    Brambley, M.R.

    1990-09-01

    Computer technology provides many opportunities to improve the energy performance of commercial buildings throughout the entire building life cycle. We are faced with developing those technologies to put the results of many years of buildings research into the hands of building owners, designers, and operators. This report discusses both the philosophical and technological aspects associated with this topic.

  5. Experiments on exactly computing non-linear energy transfer rate in MASNUM-WAM

    NASA Astrophysics Data System (ADS)

    Jiang, Xingjie; Wang, Daolong; Gao, Dalu; Zhang, Tingting

    2016-07-01

    The Webb-Resio-Tracy (WRT) method for exact computation of the non-linear energy transfer rate was implemented in MASNUM-WAM, a third-generation wave model that solves the discrete spectral balance equation. In this paper, we describe the transformation of the spectral space in the original WRT method. Four numerical procedures were developed in which the acceleration techniques of the original WRT method, such as geometric scaling, pre-calculation, and grid searching, are all reorganized. A series of numerical experiments, including two simulations based on real data, was performed. The implementation was shown to work in both the serial and parallel versions of the wave model, and a comparison of computation times showed that some of the developed procedures offer good efficiency. With exact computation of non-linear energy transfer, MASNUM-WAM can now be used to perform numerical experiments for research purposes, which augurs well for further development of the model.

  6. General purpose computational tools for simulation and analysis of medium-energy backscattering spectra

    NASA Astrophysics Data System (ADS)

    Weller, Robert A.

    1999-06-01

    This paper describes a suite of computational tools for general-purpose ion-solid calculations, which has been implemented in the platform-independent computational environment Mathematica®. Although originally developed for medium energy work (beam energies < 300 keV), they are suitable for general, classical, non-relativistic calculations. Routines are available for stopping power, Rutherford and Lenz-Jensen (screened) cross sections, sputtering yields, small-angle multiple scattering, and back-scattering-spectrum simulation and analysis. Also included are a full range of supporting functions, as well as easily accessible atomic mass and other data on all the stable isotopes in the periodic table. The functions use common calling protocols, recognize elements and isotopes by symbolic names and, wherever possible, return symbolic results for symbolic inputs, thereby facilitating further computation. A new paradigm for the representation of backscattering spectra is introduced.
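    Among the routines listed, the unscreened Rutherford cross section has a simple closed form, dσ/dΩ = (Z₁Z₂e²/4E)² / sin⁴(θ/2); a hedged sketch in Python rather than the paper's Mathematica (the beam parameters in the example are illustrative only):

```python
import math

# Coulomb constant times e^2 in convenient units: e^2/(4*pi*eps0) = 1.43996 eV*nm
E2_EV_NM = 1.43996

def rutherford_dcs(z1, z2, energy_ev, theta_deg):
    """Unscreened Rutherford differential cross section (nm^2/sr) in the
    centre-of-mass frame: (z1*z2*e^2 / (4E))^2 / sin^4(theta/2)."""
    theta = math.radians(theta_deg)
    a = z1 * z2 * E2_EV_NM / (4.0 * energy_ev)
    return (a / math.sin(theta / 2.0) ** 2) ** 2

# Illustrative values: 100 keV He (Z=2) backscattered from Si (Z=14) at 150 deg.
print(rutherford_dcs(2, 14, 100e3, 150.0))
```

    At the medium energies the paper targets, screening corrections (e.g. Lenz-Jensen) reduce this value noticeably at small angles, which is why the toolkit provides screened cross sections as well.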

  7. TU-A-12A-08: Computing Longitudinal Material Changes in Bone Metastases Using Dual Energy Computed Tomography

    SciTech Connect

    Schmidtlein, CR; Hwang, S; Veeraraghavan, H; Fehr, D; Humm, J; Deasy, J

    2014-06-15

    Purpose: This study demonstrates a methodology for tracking changes in metastatic bone disease using trajectories in material basis space in serial dual energy computed tomography (DECT) studies. Methods: This study includes patients with bone metastases from breast cancer who had clinical surveillance CT scans using a General Electric CT750HD in dual energy mode. A radiologist defined regions of interest (ROI) for bone metastasis, normal bone, and marrow across the serial DECT scans. Our approach employs a Radon transform to forward-project the basis images, namely water and iodine, into sinogram space. These data are then repartitioned into fat/bone and effective density/Z image pairs using assumed energy spectra for the x-ray energies. This approach both helps remove negative material densities and avoids adding spectrum-hardening artifacts. The new basis data sets were then reconstructed via filtered back-projection to create new material basis pair images. The trajectories of these pairs were then plotted in the new basis space, providing a means to both visualize and quantitatively measure changes in the material properties of the tumors. Results: ROI containing radiologist-defined metastatic bone disease showed well-defined trajectories in both fat/bone and effective density/Z space. ROI containing radiologist-defined normal bone and marrow did not exhibit any discernible trajectories and were stable from scan to scan. Conclusions: The preliminary results show that changes in material composition and effective density/Z image pairs were seen primarily in metastases and not in normal tissue. This study indicates that with routine clinical DECT it may be possible to monitor the therapy response of bone metastases, because healing or worsening bone metastases change the material composition of bone. Additional studies are needed to further validate these results and to test for their correlation with outcome.

  8. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    SciTech Connect

    Diachin, L F; Garaizar, F X; Henson, V E; Pope, G

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand its role as a NEAMS user facility.

  9. Biological Sex, Sex-Role Identity, and the Spectrum of Computing Orientations: A Re-Appraisal at the End of the 90s.

    ERIC Educational Resources Information Center

    Charlton, John P.

    1999-01-01

    Describes a study of undergraduates at Bolton Institute (England) that investigated biological sex, psychological masculinity and femininity, computer comfort, computer engagement, and computer over-use. Discusses the role of applications in determining sex differences, and explains findings that imply that some reduction of sex asymmetries in…

  10. Analytic computation of energy derivatives - Relationships among partial derivatives of a variationally determined function

    NASA Technical Reports Server (NTRS)

    King, H. F.; Komornicki, A.

    1986-01-01

    Formulas are presented relating the Taylor series expansion coefficients of three functions of several variables: the energy of the trial wave function (W), the energy computed using the optimized variational wave function (E), and the response function (lambda), under certain conditions. Partial derivatives of lambda are obtained through solution of a recursive system of linear equations, and solution through order n yields derivatives of E through order 2n + 1, extending Pulay's application of Wigner's 2n + 1 rule to partial derivatives in coupled perturbation theory. An examination of numerical accuracy shows that the usual two-term second derivative formula is less stable than an alternative four-term formula, and that previous claims that energy derivatives are stationary properties of the wave function are fallacious. The results have application to quantum theoretical methods for the computation of derivative properties such as infrared frequencies and intensities.
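    The stationarity question the abstract addresses rests on the basic variational fact that the energy itself is stationary: a first-order error in the wave function produces only a second-order error in E. A hedged numerical sketch with a small hypothetical 2×2 Hamiltonian (not from the paper):

```python
def rayleigh(H, c):
    """Rayleigh quotient E(c) = c^T H c / c^T c for a real symmetric H."""
    Hc = [sum(H[i][j] * c[j] for j in range(len(c))) for i in range(len(c))]
    return sum(ci * hi for ci, hi in zip(c, Hc)) / sum(ci * ci for ci in c)

# Hypothetical Hamiltonian with exact ground state (1, 1)/sqrt(2), energy 1.
H = [[2.0, -1.0], [-1.0, 2.0]]
e0 = rayleigh(H, [1.0, 1.0])

# A first-order error eps in the wave function gives an O(eps^2) energy error;
# here E(eps) = (1 + 3*eps^2)/(1 + eps^2), so the error is ~ 2*eps^2.
for eps in (1e-1, 1e-2, 1e-3):
    print(eps, rayleigh(H, [1.0 + eps, 1.0 - eps]) - e0)
```

    The paper's point is that this comfortable quadratic protection does not carry over to energy *derivatives*, which is why stable derivative formulas need the care it describes.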

  11. Simple prescription for computing the interparticle potential energy for D-dimensional gravity systems

    NASA Astrophysics Data System (ADS)

    Accioly, Antonio; Helayël-Neto, José; Barone, F. E.; Herdy, Wallace

    2015-02-01

    A straightforward prescription for computing the D-dimensional potential energy of gravitational models, which is strongly based on the Feynman path integral, is built up. Using this method, the static potential energy for the interaction of two masses is found in the context of D-dimensional higher-derivative gravity models, and its behavior is analyzed afterwards in both the ultraviolet and infrared regimes. As a consequence, two new gravity systems in which the potential energy is finite at the origin, respectively in D = 5 and D = 6, are found. Since the aforementioned prescription is equivalent to that based on the marriage between quantum mechanics (to leading order, i.e., in the first Born approximation) and the nonrelativistic limit of quantum field theory, and bearing in mind that the latter relies basically on the calculation of the nonrelativistic Feynman amplitude (M_NR), a trivial expression for computing M_NR is obtained from our prescription as an added bonus.
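    In one common convention (prefactors and sign conventions vary between references, so this is a sketch rather than the paper's exact expression), the Born-approximation equivalence mentioned above takes the form of a Fourier transform of the nonrelativistic amplitude over the D-1 spatial momentum components:

```latex
% Born-approximation relation between the static potential energy and the
% nonrelativistic Feynman amplitude (normalization convention-dependent)
E(\mathbf{r}) \;\propto\; \int \frac{d^{\,D-1}q}{(2\pi)^{D-1}}\,
    e^{i\mathbf{q}\cdot\mathbf{r}}\, \mathcal{M}_{\mathrm{NR}}(\mathbf{q})
```

    The ultraviolet and infrared behavior analyzed in the paper then corresponds to the large-q and small-q regions of this integral, respectively.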

  12. Applied & Computational MathematicsChallenges for the Design and Control of Dynamic Energy Systems

    SciTech Connect

    Brown, D L; Burns, J A; Collis, S; Grosh, J; Jacobson, C A; Johansen, H; Mezic, I; Narayanan, S; Wetter, M

    2011-03-10

    The Energy Independence and Security Act of 2007 (EISA) was passed with the goal 'to move the United States toward greater energy independence and security.' Energy security and independence cannot be achieved unless the United States addresses the issue of energy consumption in the building sector and significantly reduces energy consumption in buildings. Commercial and residential buildings account for approximately 40% of U.S. energy consumption and emit 50% of CO2 emissions in the U.S., which is more than twice the total energy consumption of the entire U.S. automobile and light truck fleet. A 50%-80% improvement in building energy efficiency in both new construction and in retrofitting existing buildings could significantly reduce U.S. energy consumption and mitigate climate change. Reaching these aggressive building efficiency goals will not happen without significant Federal investments in areas of computational and mathematical sciences. Applied and computational mathematics are required to enable the development of algorithms and tools to design, control and optimize energy efficient buildings. The challenge has been issued by the U.S. Secretary of Energy, Dr. Steven Chu (emphasis added): 'We need to do more transformational research at DOE including computer design tools for commercial and residential buildings that enable reductions in energy consumption of up to 80 percent with investments that will pay for themselves in less than 10 years.' On July 8-9, 2010 a team of technical experts from industry, government and academia were assembled in Arlington, Virginia to identify the challenges associated with developing and deploying new computational methodologies and tools that will address building energy efficiency. These experts concluded that investments in fundamental applied and computational mathematics will be required to build enabling technology that can be used to realize the target of 80% reductions in energy consumption.
In addition the

  13. Computational insight into the catalytic implication of head/tail-first orientation of arachidonic acid in human 5-lipoxygenase: consequences for the positional specificity of oxygenation.

    PubMed

    Saura, Patricia; Maréchal, Jean-Didier; Masgrau, Laura; Lluch, José M; González-Lafont, Àngels

    2016-08-17

    In the present work we have combined homology modeling, protein-ligand dockings, quantum mechanics/molecular mechanics calculations and molecular dynamics simulations to generate human 5-lipoxygenase (5-LOX):arachidonic acid (AA) complexes consistent with the 5-lipoxygenating activity (which implies hydrogen abstraction at the C7 position). Our results suggest that both the holo and the apo forms of human Stable 5-LOX could accommodate AA in a productive form for 5-lipoxygenation. The former, in a tail-first orientation, with the AA carboxylate end interacting with Lys409, gives the desired structures with C7 close to the Fe-OH(-) cofactor and suitable barrier heights for H7 abstraction. Only when using the apo form structure, a head-first orientation with the AA carboxylate close to His600 (a residue recently proposed as essential for AA positioning) is obtained in the docking calculations. However, the calculated barrier heights for this head-first orientation are in principle consistent with 5-LOX specificity, but also with 12/8 regioselectivity. Finally, long MD simulations give support to the recent hypothesis that the Phe177 + Tyr181 pair needs to close the active site access during the chemical reaction, and suggest that in the case of a head-first orientation Phe177 may be the residue interacting with the AA carboxylate. PMID:27489112

  14. Updated energy budgets for neural computation in the neocortex and cerebellum

    PubMed Central

    Howarth, Clare; Gleeson, Padraig; Attwell, David

    2012-01-01

    The brain's energy supply determines its information processing power, and generates functional imaging signals. The energy use on the different subcellular processes underlying neural information processing has been estimated previously for the grey matter of the cerebral and cerebellar cortex. However, these estimates need reevaluating following recent work demonstrating that action potentials in mammalian neurons are much more energy efficient than was previously thought. Using this new knowledge, this paper provides revised estimates for the energy expenditure on neural computation in a simple model for the cerebral cortex and a detailed model of the cerebellar cortex. In cerebral cortex, most signaling energy (50%) is used on postsynaptic glutamate receptors, 21% is used on action potentials, 20% on resting potentials, 5% on presynaptic transmitter release, and 4% on transmitter recycling. In the cerebellar cortex, excitatory neurons use 75% and inhibitory neurons 25% of the signaling energy, and most energy is used on information processing by non-principal neurons: Purkinje cells use only 15% of the signaling energy. The majority of cerebellar signaling energy use is on the maintenance of resting potentials (54%) and postsynaptic receptors (22%), while action potentials account for only 17% of the signaling energy use. PMID:22434069

  15. Computer usage and national energy consumption: Results from a field-metering study

    SciTech Connect

    Desroches, Louis-Benoit; Fuchs, Heidi; Greenblatt, Jeffery; Pratt, Stacy; Willem, Henry; Claybaugh, Erin; Beraki, Bereket; Nagaraju, Mythri; Price, Sarah; Young, Scott

    2014-12-01

    The electricity consumption of miscellaneous electronic loads (MELs) in the home has grown in recent years, and is expected to continue rising. Consumer electronics, in particular, are characterized by swift technological innovation, with varying impacts on energy use. Desktop and laptop computers make up a significant share of MELs electricity consumption, but their national energy use is difficult to estimate, given uncertainties around shifting user behavior. This report analyzes usage data from 64 computers (45 desktop, 11 laptop, and 8 unknown) collected in 2012 as part of a larger field monitoring effort of 880 households in the San Francisco Bay Area, and compares our results to recent values from the literature. We find that desktop computers are used for an average of 7.3 hours per day (median = 4.2 h/d), while laptops are used for a mean of 4.8 hours per day (median = 2.1 h/d). The results for laptops are likely underestimated since they can be charged in other, unmetered outlets. Average unit annual energy consumption (AEC) for desktops is estimated to be 194 kWh/yr (median = 125 kWh/yr), and for laptops 75 kWh/yr (median = 31 kWh/yr). We estimate national annual energy consumption for desktop computers to be 20 TWh. National annual energy use for laptops is estimated to be 11 TWh, markedly higher than previous estimates, likely reflecting laptops drawing more power in On mode in addition to greater market penetration. This result for laptops, however, carries relatively higher uncertainty compared to desktops. Different study methodologies and definitions, changing usage patterns, and uncertainty about how consumers use computers must be considered when interpreting our results with respect to existing analyses. 
Finally, as energy consumption in On mode is predominant, we outline several energy savings opportunities: improved power management (defaulting to low-power modes after periods of inactivity as well as power scaling), matching the rated power
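
    The national totals quoted above follow from multiplying unit AEC by the installed stock. A rough sanity check of that arithmetic (the stock counts below are illustrative assumptions chosen to reproduce the reported totals, not values from the study):

    ```python
    # Rough sanity check: national AEC = unit AEC x installed stock.
    # Stock counts are illustrative assumptions, not values from the study.
    desktop_aec_kwh = 194        # mean unit annual energy consumption (kWh/yr)
    laptop_aec_kwh = 75

    desktop_stock = 103e6        # assumed installed desktops (illustrative)
    laptop_stock = 147e6         # assumed installed laptops (illustrative)

    desktop_twh = desktop_aec_kwh * desktop_stock / 1e9   # kWh -> TWh
    laptop_twh = laptop_aec_kwh * laptop_stock / 1e9

    print(round(desktop_twh, 1))   # -> 20.0
    print(round(laptop_twh, 1))    # -> 11.0
    ```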

  16. Energy scaling advantages of resistive memory crossbar based computation and its application to sparse coding

    DOE PAGESBeta

    Agarwal, Sapan; Quach, Tu -Thach; Parekh, Ojas; DeBenedictis, Erik P.; James, Conrad D.; Marinella, Matthew J.; Aimone, James B.

    2016-01-06

    In this study, the exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
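
    The two kernels can be written down directly; a pure-Python sketch (a real crossbar evaluates these in the analog domain, with a conductance matrix G playing the role of the N × N array; this only shows the arithmetic):

    ```python
    # Sketch of the two crossbar kernels: parallel read (vector-matrix
    # multiply) and parallel write (rank-1 update). Illustrative only.

    def parallel_read(G, v):
        """Vector-matrix multiply: output currents i = v . G (one read)."""
        n = len(G)
        return [sum(v[r] * G[r][c] for r in range(n)) for c in range(n)]

    def rank1_update(G, a, b, lr=1.0):
        """Parallel write: G += lr * outer(a, b) (one write)."""
        n = len(G)
        for r in range(n):
            for c in range(n):
                G[r][c] += lr * a[r] * b[c]
        return G

    G = [[0.0, 1.0], [2.0, 0.0]]
    print(parallel_read(G, [1.0, 1.0]))   # -> [2.0, 1.0]
    rank1_update(G, [1.0, 0.0], [0.0, 1.0])
    print(G)                              # -> [[0.0, 2.0], [2.0, 0.0]]
    ```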

  17. Computation of Bond Dissociation Energies for Removal of Nitrogen Dioxide Groups in Certain Aliphatic Nitro Compounds

    NASA Astrophysics Data System (ADS)

    Shao, Ju-Xiang; Cheng, Xin-Lu; Yang, Xiang-Dong; Xiang, Shi-Kai

    2006-04-01

    Bond dissociation energies for removal of nitrogen dioxide groups in 10 aliphatic nitro compounds, including nitromethane, nitroethylene, nitroethane, dinitromethane, 1-nitropropane, 2-nitropropane, 1-nitrobutane, 2-methyl-2-nitropropane, nitropentane, and nitrohexane, are calculated using the highly accurate complete basis set method (CBS-Q) and three hybrid density functional theory (DFT) methods, B3LYP, B3PW91, and B3P86, with the 6-31G** basis set. By comparing the computed bond dissociation energies with experimental results, we find that the B3LYP/6-31G** and B3PW91/6-31G** methods cannot predict satisfactory bond dissociation energies (BDEs). However, B3P86/6-31G** and CBS-Q computations give BDEs in excellent agreement with the experimental data. Nevertheless, since CBS-Q computational demands increase rapidly with the number of atoms in the molecule, calculations on larger molecules soon become prohibitively expensive. Therefore, we suggest the B3P86/6-31G** method as a reliable method for computing the BDEs for removal of the NO2 groups in aliphatic nitro compounds.
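
    The quantity being computed is the energy of homolytic C-N cleavage, R-NO2 → R• + NO2•: BDE = E(R•) + E(NO2•) − E(R-NO2). A minimal sketch of that bookkeeping (the three electronic energies below are placeholders chosen to give a value of the right order of magnitude, not actual B3P86/6-31G** results):

    ```python
    # BDE for C-N homolysis: R-NO2 -> R. + NO2.
    # BDE = E(R.) + E(NO2.) - E(R-NO2), converted from hartree to kcal/mol.
    # The three electronic energies are illustrative placeholders.
    HARTREE_TO_KCAL = 627.5095

    def bde_kcal(e_parent, e_radical, e_no2):
        return (e_radical + e_no2 - e_parent) * HARTREE_TO_KCAL

    e_parent, e_radical, e_no2 = -245.00, -39.90, -205.00
    print(round(bde_kcal(e_parent, e_radical, e_no2), 1))   # -> 62.8
    ```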

  18. Energy Scaling Advantages of Resistive Memory Crossbar Based Computation and Its Application to Sparse Coding.

    PubMed

    Agarwal, Sapan; Quach, Tu-Thach; Parekh, Ojas; Hsia, Alexander H; DeBenedictis, Erik P; James, Conrad D; Marinella, Matthew J; Aimone, James B

    2015-01-01

    The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning. PMID:26778946

  19. Energy Scaling Advantages of Resistive Memory Crossbar Based Computation and Its Application to Sparse Coding

    PubMed Central

    Agarwal, Sapan; Quach, Tu-Thach; Parekh, Ojas; Hsia, Alexander H.; DeBenedictis, Erik P.; James, Conrad D.; Marinella, Matthew J.; Aimone, James B.

    2016-01-01

    The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning. PMID:26778946

  20. An energy investigation into 1D/2D oriented-attachment assemblies of 1D Ag nanocrystals.

    PubMed

    Lv, Weiqiang; Yang, Xuemei; Wang, Wei; Niu, Yinghua; Liu, Zhongping; He, Weidong

    2014-09-15

    In the field of oriented-attachment crystal growth, one-dimensional nanocrystals are frequently employed as building blocks to synthesize two-dimensional or large-aspect-ratio one-dimensional nanocrystals. Despite recent extensive experimental advances, the underlying inter-particle interaction in the synthesis still remains elusive. In this report, using Ag as a platform, we investigate the van der Waals interactions associated with the side-by-side and end-to-end assemblies of one-dimensional nanorods. The size, aspect ratio, and inter-particle separation of the Ag precursor nanorods are found to have dramatically different impacts on the van der Waals interactions in the two types of assemblies. Our work facilitates the fundamental understanding of the oriented-attachment assembling mechanism based on one-dimensional nanocrystals. PMID:24954815

  1. Industrial Orientation.

    ERIC Educational Resources Information Center

    Rasor, Leslie; Brooks, Valerie

    These eight modules for an industrial orientation class were developed by a project to design an interdisciplinary program of basic skills training for disadvantaged students in a Construction Technology Program (see Note). The Drafting module overviews drafting career opportunities, job markets, salaries, educational requirements, and basic…

  2. A 4-cylinder Stirling engine computer program with dynamic energy equations

    NASA Technical Reports Server (NTRS)

    Daniele, C. J.; Lorenzo, C. F.

    1983-01-01

    A computer program for simulating the steady state and transient performance of a four cylinder Stirling engine is presented. The thermodynamic model includes both continuity and energy equations and linear momentum terms (flow resistance). Each working space between the pistons is broken into seven control volumes. Drive dynamics and vehicle load effects are included. The model contains 70 state variables. Also included in the model are piston rod seal leakage effects. The computer program includes a model of a hydrogen supply system, from which hydrogen may be added to the system to accelerate the engine. Flow charts are provided.

  3. Computing the universe: how large-scale simulations illuminate galaxies and dark energy

    NASA Astrophysics Data System (ADS)

    O'Shea, Brian

    2015-04-01

    High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these are structures that operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and whose complexity and nonlinearity often defy analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.

  4. Eliminating beam-hardening artifacts in high-energy industrial computed tomography(ICT)

    NASA Astrophysics Data System (ADS)

    Kang, Kejun; Zhao, Ziran; Chen, Zhiqiang; Zhang, Li

    2004-10-01

    Beam-hardening is caused by the filtering of a polychromatic X-ray beam by the objects in the scan field. In the industrial field, both the X-ray source and the attenuation characteristics of the materials differ from those in the medical field, so methods that work in the medical field cannot give satisfactory results here. The authors have developed software, named the simulative tomographic machine (STM) platform, designed to simulate the procedure of high-energy ICT scanning; it also serves as a platform for developing data-processing algorithms. Using the STM platform, this paper presents an efficient correction technique that eliminates beam-hardening artifacts in high-energy ICT. The new algorithm is based on the following facts: the attenuation coefficient of each substance is precisely known; the polychromatic spectrum of the accelerator can be computed with the Monte Carlo (MC) method; and the total photon interaction cross-section of most inspected objects can be treated as constant in the energy region between 1.5 and 9 MeV. The monochromatic projection can be computed from the polychromatic projection with an iterative algorithm, so we can reconstruct a clean image from the projection made only by high-energy photons.
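
    For a single material of known attenuation, the correction idea can be sketched as a linearization: invert the polychromatic projection to an equivalent thickness, then re-project at one reference energy. The spectral weights and attenuation values below are illustrative placeholders, not an accelerator spectrum computed by Monte Carlo:

    ```python
    import math

    # Beam-hardening linearization sketch for one known material.
    weights = [0.5, 0.3, 0.2]     # assumed spectral weights (sum to 1)
    mus = [0.06, 0.05, 0.045]     # assumed attenuation coefficients (1/mm)
    MU_REF = 0.05                 # reference (monochromatic) coefficient

    def poly_projection(t_mm):
        """-ln of polychromatic transmission through thickness t."""
        return -math.log(sum(w * math.exp(-mu * t_mm)
                             for w, mu in zip(weights, mus)))

    def thickness_from_projection(p, lo=0.0, hi=1000.0, iters=60):
        """Invert poly_projection by bisection (it is monotonic in t)."""
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if poly_projection(mid) < p:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    p_poly = poly_projection(40.0)            # "measured" projection
    t_est = thickness_from_projection(p_poly)
    p_mono = MU_REF * t_est                   # corrected monochromatic value
    print(round(t_est, 3))                    # recovers ~40 mm
    ```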

  5. Reducing Vehicle Weight and Improving U.S. Energy Efficiency Using Integrated Computational Materials Engineering

    NASA Astrophysics Data System (ADS)

    Joost, William J.

    2012-09-01

    Transportation accounts for approximately 28% of U.S. energy consumption with the majority of transportation energy derived from petroleum sources. Many technologies such as vehicle electrification, advanced combustion, and advanced fuels can reduce transportation energy consumption by improving the efficiency of cars and trucks. Lightweight materials are another important technology that can improve passenger vehicle fuel efficiency by 6-8% for each 10% reduction in weight while also making electric and alternative vehicles more competitive. Despite the opportunities for improved efficiency, widespread deployment of lightweight materials for automotive structures is hampered by technology gaps most often associated with performance, manufacturability, and cost. In this report, the impact of reduced vehicle weight on energy efficiency is discussed with a particular emphasis on quantitative relationships determined by several researchers. The most promising lightweight materials systems are described along with a brief review of the most significant technical barriers to their implementation. For each material system, the development of accurate material models is critical to support simulation-intensive processing and structural design for vehicles; improved models also contribute to an integrated computational materials engineering (ICME) approach for addressing technical barriers and accelerating deployment. The value of computational techniques is described by considering recent ICME and computational materials science success stories with an emphasis on applying problem-specific methods.
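
    As a quick illustration of the quoted sensitivity, the 6-8% fuel-efficiency gain per 10% weight reduction can be scaled linearly (using the 7% midpoint; both the midpoint and the linearity are simplifying assumptions for illustration):

    ```python
    # Linear model of the quoted sensitivity: ~7% efficiency gain
    # per 10% vehicle weight reduction (illustrative simplification).
    def efficiency_gain(weight_reduction, gain_per_10pct=0.07):
        """Fractional fuel-efficiency gain for a fractional weight cut."""
        return gain_per_10pct * (weight_reduction / 0.10)

    print(round(efficiency_gain(0.10), 3))   # -> 0.07  (7% at 10% lighter)
    print(round(efficiency_gain(0.25), 3))   # -> 0.175 (17.5% at 25% lighter)
    ```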

  6. Beam hardening artifact reduction using dual energy computed tomography: implications for myocardial perfusion studies

    PubMed Central

    Carrascosa, Patricia; Cipriano, Silvina; De Zan, Macarena; Deviggiano, Alejandro; Capunay, Carlos; Cury, Ricardo C.

    2015-01-01

    Background Myocardial computed tomography perfusion (CTP) using conventional single energy (SE) imaging is influenced by the presence of beam hardening artifacts (BHA), occasionally resembling perfusion defects and commonly observed at the left ventricular posterobasal wall (PB). We therefore sought to explore the ability of dual energy (DE) CTP to attenuate the presence of BHA. Methods Consecutive patients without history of coronary artery disease who were referred for computed tomography coronary angiography (CTCA) due to atypical chest pain and a normal stress-rest SPECT and had absence or mild coronary atherosclerosis constituted the study population. The study group was acquired using DE and the control group using SE imaging. Results Demographical characteristics were similar between groups, as well as the heart rate and the effective radiation dose. Myocardial signal density (SD) levels were evaluated in 280 basal segments among the DE group (140 PB segments for each energy level from 40 to 100 keV; and 140 reference segments), and in 40 basal segments (at the same locations) among the SE group. Among the DE group, myocardial SD levels and myocardial SD ratio evaluated at the reference segment were higher at low energy levels, with significantly lower SD levels at increasing energy levels. Myocardial signal-to-noise ratio was not significantly influenced by the energy level applied, although 70 keV was identified as the energy level with the best overall signal-to-noise ratio. Significant differences were identified between the PB segment and the reference segment among the lower energy levels, whereas at ≥70 keV myocardial SD levels were similar. Compared to DE reconstructions at the best energy level (70 keV), SE acquisitions showed no significant differences overall regarding myocardial SD levels among the reference segments. Conclusions BHA that influence the assessment of myocardial perfusion can be attenuated using DE at 70 keV or higher. PMID

  7. Grain Boundary Plane Orientation Fundamental Zones and Structure-Property Relationships

    PubMed Central

    Homer, Eric R.; Patala, Srikanth; Priedeman, Jonathan L.

    2015-01-01

    Grain boundary plane orientation is a profoundly important determinant of character in polycrystalline materials that is not well understood. This work demonstrates how boundary plane orientation fundamental zones, which capture the natural crystallographic symmetries of a grain boundary, can be used to establish structure-property relationships. Using the fundamental zone representation, trends in computed energy, excess volume at the grain boundary, and temperature-dependent mobility naturally emerge and show a strong dependence on the boundary plane orientation. Analysis of common misorientation axes even suggests broader trends of grain boundary energy as a function of misorientation angle and plane orientation. Due to the strong structure-property relationships that naturally emerge from this work, boundary plane fundamental zones are expected to simplify analysis of both computational and experimental data. This standardized representation has the potential to significantly accelerate research in the topologically complex and vast five-dimensional phase space of grain boundaries. PMID:26498715

  8. Grain boundary plane orientation fundamental zones and structure-property relationships

    DOE PAGESBeta

    Homer, Eric R.; Patala, Srikanth; Priedeman, Jonathan L.

    2015-10-26

    Grain boundary plane orientation is a profoundly important determinant of character in polycrystalline materials that is not well understood. This work demonstrates how boundary plane orientation fundamental zones, which capture the natural crystallographic symmetries of a grain boundary, can be used to establish structure-property relationships. Using the fundamental zone representation, trends in computed energy, excess volume at the grain boundary, and temperature-dependent mobility naturally emerge and show a strong dependence on the boundary plane orientation. Analysis of common misorientation axes even suggests broader trends of grain boundary energy as a function of misorientation angle and plane orientation. Due to the strong structure-property relationships that naturally emerge from this work, boundary plane fundamental zones are expected to simplify analysis of both computational and experimental data. This standardized representation has the potential to significantly accelerate research in the topologically complex and vast five-dimensional phase space of grain boundaries.

  9. Grain boundary plane orientation fundamental zones and structure-property relationships

    SciTech Connect

    Homer, Eric R.; Patala, Srikanth; Priedeman, Jonathan L.

    2015-10-26

    Grain boundary plane orientation is a profoundly important determinant of character in polycrystalline materials that is not well understood. This work demonstrates how boundary plane orientation fundamental zones, which capture the natural crystallographic symmetries of a grain boundary, can be used to establish structure-property relationships. Using the fundamental zone representation, trends in computed energy, excess volume at the grain boundary, and temperature-dependent mobility naturally emerge and show a strong dependence on the boundary plane orientation. Analysis of common misorientation axes even suggests broader trends of grain boundary energy as a function of misorientation angle and plane orientation. Due to the strong structure-property relationships that naturally emerge from this work, boundary plane fundamental zones are expected to simplify analysis of both computational and experimental data. This standardized representation has the potential to significantly accelerate research in the topologically complex and vast five-dimensional phase space of grain boundaries.

  10. Incorporating excluded solvent volume and physical dipoles for computing solvation free energy.

    PubMed

    Yang, Pei-Kun

    2015-07-01

    The solvation free energy described using the Born equation depends on the solute charge, solute radius, and solvent dielectric constant. However, the dielectric polarization derived from Gauss's law used in the Born equation differs from that obtained from molecular dynamics simulations. Therefore, the adjustment of Born radii is insufficient for fitting the solvation free energy to various solute conformations. In order to mimic the dielectric polarization surrounding a solute in molecular dynamics simulations, the water molecule in the first coordination shell is modeled as a physical dipole in a van der Waals sphere, and the intermediate water is treated as a bulk solvent. The electric dipole of the first-shell water is modeled as positive and negative surface charge layers with fixed charge magnitudes, but with variable separation distance as derived from the distributions of hydrogen and oxygen atoms of water dictated by their orientational distribution functions. An equation that describes the solvation free energy of ions using this solvent scheme with a TIP3P water model is derived, and the values of the solvation free energies of ions estimated from this derived equation are found to be similar to those obtained from the experimental data. PMID:26113115
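
    For reference, the Born expression the abstract starts from can be evaluated directly. The minimal sketch below uses SI constants and an assumed, illustrative Born radius for Na+ (the radius is not a fitted value):

    ```python
    import math

    # Born solvation free energy:
    # dG = -(z^2 e^2 N_A) / (8 pi eps0 a) * (1 - 1/eps_r)
    E = 1.602176634e-19      # elementary charge (C)
    EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)
    NA = 6.02214076e23       # Avogadro constant (1/mol)

    def born_dg_kj_per_mol(z, radius_nm, eps_r=78.4):
        a = radius_nm * 1e-9  # nm -> m
        return -(z**2 * E**2 * NA) / (8 * math.pi * EPS0 * a) \
               * (1 - 1/eps_r) / 1000.0

    # Na+ with an assumed Born radius of 0.17 nm (illustrative):
    print(round(born_dg_kj_per_mol(1, 0.17), 1))
    ```

    The result is a few hundred kJ/mol of stabilization, the right order of magnitude for monovalent ion hydration, though as the abstract argues, tuning the radius alone cannot reproduce the polarization seen in simulations.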

  11. Computational model for noncontact atomic force microscopy: energy dissipation of cantilever.

    PubMed

    Senda, Yasuhiro; Blomqvist, Janne; Nieminen, Risto M

    2016-09-21

    We propose a computational model for noncontact atomic force microscopy (AFM) in which the atomic force between the cantilever tip and the surface is calculated using a molecular dynamics method, and the macroscopic motion of the cantilever is modeled by an oscillating spring. The movement of atoms in the tip and surface is connected with the oscillating spring using a recently developed coupling method. In this computational model, the oscillation energy is dissipated, as observed in AFM experiments. We attribute this dissipation to the hysteresis and nonconservative properties of the interatomic force that acts between the atoms in the tip and sample surface. The dissipation rate strongly depends on the parameters used in the computational model. PMID:27420398

  12. Visualization of flaws within heavy section ultrasonic test blocks using high energy computed tomography

    SciTech Connect

    House, M.B.; Ross, D.M.; Janucik, F.X.; Friedman, W.D.; Yancey, R.N.

    1996-05-01

    The feasibility of high energy computed tomography (9 MeV) to detect volumetric and planar discontinuities in large pressure vessel mock-up blocks was studied. The data supplied by the manufacturer of the test blocks on the intended flaw geometry were compared to manual, contact ultrasonic test and computed tomography test data. Subsequently, a visualization program was used to construct fully three-dimensional morphological information enabling interactive data analysis on the detected flaws. Density isosurfaces show the relative shape and location of the volumetric defects within the mock-up blocks. Such a technique may be used to qualify personnel or newly developed ultrasonic test methods without the associated high cost of destructive evaluation. Data is presented showing the capability of the volumetric data analysis program to overlay the computed tomography and destructive evaluation (serial metallography) data for a direct, three-dimensional comparison.

  13. Computing the energy of a water molecule using multideterminants: A simple, efficient algorithm

    NASA Astrophysics Data System (ADS)

    Clark, Bryan K.; Morales, Miguel A.; McMinis, Jeremy; Kim, Jeongnim; Scuseria, Gustavo E.

    2011-12-01

    Quantum Monte Carlo (QMC) methods such as variational Monte Carlo and fixed node diffusion Monte Carlo depend heavily on the quality of the trial wave function. Although Slater-Jastrow wave functions are the most commonly used variational ansatz in electronic structure, more sophisticated wave functions are critical to ascertaining new physics. One such wave function is the multi-Slater-Jastrow wave function which consists of a Jastrow function multiplied by the sum of Slater determinants. In this paper we describe a method for working with these wave functions in QMC codes that is easy to implement, efficient both in computational speed as well as memory, and easily parallelized. The computational cost scales quadratically with particle number making this scaling no worse than the single determinant case and linear with the total number of excitations. Additionally, we implement this method and use it to compute the ground state energy of a water molecule.

  14. Computing the energy of a water molecule using multideterminants: a simple, efficient algorithm.

    PubMed

    Clark, Bryan K; Morales, Miguel A; McMinis, Jeremy; Kim, Jeongnim; Scuseria, Gustavo E

    2011-12-28

    Quantum Monte Carlo (QMC) methods such as variational Monte Carlo and fixed node diffusion Monte Carlo depend heavily on the quality of the trial wave function. Although Slater-Jastrow wave functions are the most commonly used variational ansatz in electronic structure, more sophisticated wave functions are critical to ascertaining new physics. One such wave function is the multi-Slater-Jastrow wave function which consists of a Jastrow function multiplied by the sum of Slater determinants. In this paper we describe a method for working with these wave functions in QMC codes that is easy to implement, efficient both in computational speed as well as memory, and easily parallelized. The computational cost scales quadratically with particle number making this scaling no worse than the single determinant case and linear with the total number of excitations. Additionally, we implement this method and use it to compute the ground state energy of a water molecule. PMID:22225142

  15. Computational model for noncontact atomic force microscopy: energy dissipation of cantilever

    NASA Astrophysics Data System (ADS)

    Senda, Yasuhiro; Blomqvist, Janne; Nieminen, Risto M.

    2016-09-01

    We propose a computational model for noncontact atomic force microscopy (AFM) in which the atomic force between the cantilever tip and the surface is calculated using a molecular dynamics method, and the macroscopic motion of the cantilever is modeled by an oscillating spring. The movement of atoms in the tip and surface is connected with the oscillating spring using a recently developed coupling method. In this computational model, the oscillation energy is dissipated, as observed in AFM experiments. We attribute this dissipation to the hysteresis and nonconservative properties of the interatomic force that acts between the atoms in the tip and sample surface. The dissipation rate strongly depends on the parameters used in the computational model.

  16. Industrial Technology Orientation Curriculum Guide.

    ERIC Educational Resources Information Center

    Illinois State Board of Education, Springfield. Dept. of Adult, Vocational and Technical Education.

    The four courses in this guide were designed to meet the specifications for the career orientation level of Illinois' Education for Employment Curriculum Model. These orientation-level courses can be taken by high school students in any sequence: (1) communication technology; (2) energy utilization technology; (3) production technology; and (4)…

  17. Computationally efficient characterization of potential energy surfaces based on fingerprint distances.

    PubMed

    Schaefer, Bastian; Goedecker, Stefan

    2016-07-21

    An analysis of the network defined by the potential energy minima of multi-atomic systems and their connectivity via reaction pathways that go through transition states allows us to understand important characteristics like thermodynamic, dynamic, and structural properties. Unfortunately computing the transition states and reaction pathways in addition to the significant energetically low-lying local minima is a computationally demanding task. We here introduce a computationally efficient method that is based on a combination of the minima hopping global optimization method and the insight that uphill barriers tend to increase with increasing structural distances of the educt and product states. This method allows us to replace the exact connectivity information and transition state energies with alternative and approximate concepts. Without adding any significant additional cost to the minima hopping global optimization approach, this method allows us to generate an approximate network of the minima, their connectivity, and a rough measure for the energy needed for their interconversion. This can be used to obtain a first qualitative idea on important physical and chemical properties by means of a disconnectivity graph analysis. Besides the physical insight obtained by such an analysis, the gained knowledge can be used to make a decision if it is worthwhile or not to invest computational resources for an exact computation of the transition states and the reaction pathways. Furthermore it is demonstrated that the here presented method can be used for finding physically reasonable interconversion pathways that are promising input pathways for methods like transition path sampling or discrete path sampling. PMID:27448868

  18. SIVEH: numerical computing simulation of wireless energy-harvesting sensor nodes.

    PubMed

    Sanchez, Antonio; Blanc, Sara; Climent, Salvador; Yuste, Pedro; Ors, Rafael

    2013-01-01

    The paper presents a numerical energy-harvesting model for sensor nodes, SIVEH (Simulator I-V for EH), based on I-V hardware tracking. I-V tracking is demonstrated to be more accurate than traditional energy modeling techniques when some of the components exhibit different power dissipation at different operating voltages or drawn currents. SIVEH numerical computing allows fast simulation of long periods of time (days, weeks, months, or years) using real solar radiation curves. Moreover, SIVEH modeling has been enhanced with dynamic adjustment of the sleep time rate while seeking energy-neutral operation. This paper presents the model description, a functional verification, and a critical comparison with the classic energy approach. PMID:24008287
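The sleep-rate adjustment toward energy-neutral operation mentioned above can be illustrated with a coarse discrete-time battery model (a toy sketch, not the SIVEH I-V tracking model; all power figures and the 0.01 controller step are invented):

```python
def simulate(harvest_w, active_w=0.05, sleep_w=0.0001, dt=1.0,
             capacity_j=100.0, battery_j=50.0, duty=0.5):
    """Coarse energy-harvesting node simulation (illustrative, not SIVEH).
    harvest_w: sequence of harvested power samples [W]; dt: step [s].
    The duty cycle is nudged each step toward energy-neutral operation."""
    history = []
    for p_in in harvest_w:
        p_out = duty * active_w + (1.0 - duty) * sleep_w
        battery_j += (p_in - p_out) * dt
        battery_j = min(max(battery_j, 0.0), capacity_j)
        # crude energy-neutral controller: spend more when charging, less when draining
        duty = min(max(duty + 0.01 * (1 if p_in > p_out else -1), 0.01), 1.0)
        history.append(battery_j)
    return history, duty
```

Feeding the loop a long real irradiance trace, as SIVEH does, amounts to running this update over days or months of samples.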

  19. A computation method of dual-material separation based on dual-energy CT imaging

    NASA Astrophysics Data System (ADS)

    Zou, Jing; Chen, Ming; Zhao, Jintao; Lv, Hanyu; Hu, Xiaodong

    2015-10-01

    The dual-energy x-ray technique, which combines two radiographs acquired at two different kilovoltages, can improve the identification of an object's composition compared with regular CT, or at least improve image contrast. Dual-energy equations can be easily written and solved for an ideally monochromatic x-ray source and a perfect detector, but they become complex when polychromatic x-ray sources, detector sensitivity, and system non-linearity are considered. In this paper, a new dual-energy algorithm employing the basis material decomposition method was investigated for improving material separation capability. Studies using computer-simulated data were performed to validate and evaluate the algorithm. The preliminary results show that, with the proposed algorithm, separated "material-specific" images of a dual-material object could be obtained. Monochromatic images can also be synthesized at any desired energy, which can enhance image contrast in comparison with conventionally reconstructed images.
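In the ideal monochromatic case the dual-energy equations reduce to a 2x2 linear system for the two basis-material thicknesses, which the polychromatic algorithm generalizes. A minimal sketch (the attenuation coefficients are placeholders, not measured values):

```python
def decompose(aL, aH, mu1L, mu1H, mu2L, mu2H):
    """Solve the ideal monochromatic dual-energy equations for two
    basis-material thicknesses t1, t2:
        aL = mu1L*t1 + mu2L*t2   (low-energy log attenuation)
        aH = mu1H*t1 + mu2H*t2   (high-energy log attenuation)
    via Cramer's rule."""
    det = mu1L * mu2H - mu2L * mu1H
    if abs(det) < 1e-12:
        raise ValueError("basis materials are not distinguishable")
    t1 = (aL * mu2H - mu2L * aH) / det
    t2 = (mu1L * aH - aL * mu1H) / det
    return t1, t2
```

With a polychromatic source the left-hand sides become nonlinear spectral integrals, which is why calibration-based or iterative schemes replace this direct solve.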

  20. Effects of Gendered Language on Gender Stereotyping in Computer-Mediated Communication: The Moderating Role of Depersonalization and Gender-Role Orientation

    ERIC Educational Resources Information Center

    Lee, Eun-Ju

    2007-01-01

    This experiment examined which situational and dispositional features moderate the effects of linguistic gender cues on gender stereotyping in anonymous, text-based computer-mediated communication. Participants played a trivia game via computer with an ostensible partner whose comments represented either prototypically masculine or feminine…

  1. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    SciTech Connect

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. They present Synergia's design principles and its performance on HPC platforms.

  2. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE PAGESBeta

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. They present Synergia's design principles and its performance on HPC platforms.

  3. Straightforward prescription for computing the interparticle potential energy related to D-dimensional electromagnetic models

    NASA Astrophysics Data System (ADS)

    Accioly, Antonio; Helayël-Neto, José; Barone, F. E.; Barone, F. A.; Gaete, Patricio

    2014-11-01

    A simple expression for calculating the interparticle potential energy of D-dimensional electromagnetic models is obtained via the Feynman path integral. This prescription converts the hard task of computing this potential into a trivial algebraic exercise. Since this method is equivalent to one based on merging quantum mechanics (to leading order, i.e., in the first Born approximation) with the nonrelativistic limit of quantum field theory, and keeping in mind that the latter relies basically on the computation of the nonrelativistic Feynman amplitude (M_NR), a trivial expression for calculating M_NR is obtained from the aforementioned prescription as an added bonus. To test the efficacy and simplicity of the method, the D-dimensional interparticle potential energy is found for a well-known extension of the standard model in which massless electrodynamics U(1)_QED is coupled to a hidden sector U(1)_h, as well as for Lee-Wick electrodynamics.

  4. Computational method for relative binding energies of enzyme-substrate complexes.

    PubMed

    Zhang, T; Koshland, D E

    1996-02-01

    A computational method for estimating the relative binding free energies of enzyme-substrate complexes is described that combines electrostatic and solvation models with X-ray crystallographic data. The polar contribution is evaluated by the Poisson-Boltzmann equation. The nonpolar contribution is evaluated from solvent transfer data and surface area calculations. This algorithm was used to calculate the relative binding energies of 63 pairs of nine different mutant proteins with seven different substituted R-malate substrates of Escherichia coli isocitrate dehydrogenase. Comparison of calculated values with the experimentally observed values shows a high degree of correlation. PMID:8745413

  5. Dual-Energy Computed Tomography Characterization of Urinary Calculi: Basic Principles, Applications and Concerns.

    PubMed

    Mansouri, Mohammad; Aran, Shima; Singh, Ajay; Kambadakone, Avinash R; Sahani, Dushyant V; Lev, Michael H; Abujudeh, Hani H

    2015-01-01

    Dual-energy computed tomography (DECT) is based on obtaining 2 data sets with different peak kilovoltages from the same anatomical region, and on material decomposition based on attenuation differences at different energy levels. Several DECT technologies are available, such as dual-source CT, the fast kilovoltage-switching method, and the sandwich-detector technique. Calculi are detectable using iodine subtraction techniques. DECT also helps characterize renal stone composition, and advanced postprocessing applications enable differentiation of various renal stone types. Calculation of water content using spectral imaging is useful in diagnosing urinary obstruction. PMID:26183068

  6. Computation studies into architecture and energy transfer properties of photosynthetic units from filamentous anoxygenic phototrophs

    SciTech Connect

    Linnanto, Juha Matti; Freiberg, Arvi

    2014-10-06

    We have used different computational methods to study the structural architecture, light-harvesting, and energy transfer properties of the photosynthetic unit of filamentous anoxygenic phototrophs. Due to the huge number of atoms in the photosynthetic unit, a combination of atomistic and coarse-grained methods was used for electronic structure calculations. The calculations reveal that the light energy absorbed by the peripheral chlorosome antenna complex transfers efficiently via the baseplate and the core B808–866 antenna complexes to the reaction center complex, in general agreement with the present understanding of this complex system.

  7. Verification of a VRF Heat Pump Computer Model in EnergyPlus

    SciTech Connect

    Nigusse, Bereket; Raustad, Richard

    2013-06-01

    This paper provides verification results of the EnergyPlus variable refrigerant flow (VRF) heat pump computer model using manufacturer's performance data. The paper provides an overview of the VRF model, presents the verification methodology, and discusses the results. The verification provides a quantitative comparison of full- and part-load performance to manufacturer's data in cooling-only and heating-only modes of operation. The VRF heat pump computer model uses dual-range biquadratic performance curves to represent capacity and Energy Input Ratio (EIR) as a function of indoor and outdoor air temperatures, and dual-range quadratic performance curves as a function of part-load ratio to model part-load performance. These performance curves are generated directly from manufacturer's published performance data. The verification compared the simulation output directly to manufacturer's performance data, and found that the dual-range equation-fit VRF heat pump computer model predicts the manufacturer's performance data very well over a wide range of indoor and outdoor temperatures and part-load conditions. The predicted capacity and electric power deviations are comparable to equation-fit HVAC computer models commonly used for packaged and split unitary HVAC equipment.
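The dual-range curve structure described above follows the standard EnergyPlus equation-fit form: a biquadratic modifier in the two temperatures and a quadratic modifier in part-load ratio. A minimal sketch (the coefficient lists are placeholders, not fitted manufacturer data):

```python
def biquadratic(c, x, y):
    """EnergyPlus-style biquadratic: c0 + c1*x + c2*x^2 + c3*y + c4*y^2 + c5*x*y."""
    return c[0] + c[1]*x + c[2]*x*x + c[3]*y + c[4]*y*y + c[5]*x*y

def quadratic(c, plr):
    """EnergyPlus-style quadratic part-load modifier: c0 + c1*plr + c2*plr^2."""
    return c[0] + c[1]*plr + c[2]*plr*plr

def vrf_cooling_power(rated_cap_w, rated_eir, cap_ft, eir_ft, eir_plr,
                      t_in_wb, t_out_db, plr):
    """Available capacity [W] and electric power [W] at off-rated conditions:
    capacity and EIR are scaled by temperature curves, power by the PLR curve."""
    cap = rated_cap_w * biquadratic(cap_ft, t_in_wb, t_out_db)
    power = cap * rated_eir * biquadratic(eir_ft, t_in_wb, t_out_db) * quadratic(eir_plr, plr)
    return cap, power
```

In the actual model, two sets of such coefficients cover the low and high ranges of temperature and part-load ratio, which is what "dual-range" refers to.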

  8. Computer code to predict the heat of explosion of high energy materials.

    PubMed

    Muthurajan, H; Sivabalan, R; Pon Saravanan, N; Talawar, M B

    2009-01-30

    The computational approach to the thermochemical changes involved in the explosion of high energy materials (HEMs) vis-à-vis their molecular structure aids HEMs chemists and engineers in predicting important thermodynamic parameters such as the heat of explosion. Such computer-aided design is useful for predicting the performance of a given HEM as well as for conceiving futuristic high energy molecules with significant potential in the field of explosives and propellants. The software code LOTUSES, developed by the authors, predicts various characteristics of HEMs such as explosion products (including balanced explosion reactions), density, velocity of detonation, CJ pressure, etc. The new computational approach described in this paper allows the prediction of the heat of explosion (ΔHe) without any experimental data for different HEMs, with results comparable to experimental values reported in the literature. The new algorithm, which does not require any complex input parameters, is incorporated in LOTUSES (version 1.5) and the results are presented in this paper. Linear regression analysis of all data points yields the correlation coefficient R² = 0.9721 with the linear equation y = 0.9262x + 101.45. The correlation coefficient of 0.9721 reveals that the computed values are in good agreement with experimental values and useful for rapid hazard assessment of energetic materials. PMID:18513863
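The reported fit (y = 0.9262x + 101.45, R² = 0.9721) is an ordinary least-squares regression of predicted against experimental heats of explosion. The computation behind such a validation can be sketched as:

```python
def linear_fit(xs, ys):
    """Least-squares slope, intercept, and coefficient of determination R^2."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1.0 - ss_res / ss_tot
    return slope, intercept, r2
```

Applied to pairs of (computed, experimental) heats of explosion, a slope near 1 and R² near 1 indicate good agreement, as reported in the abstract.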

  9. Interdisciplinary Team-Teaching Experience for a Computer and Nuclear Energy Course for Electrical and Computer Engineering Students

    ERIC Educational Resources Information Center

    Kim, Charles; Jackson, Deborah; Keiller, Peter

    2016-01-01

    A new, interdisciplinary, team-taught course has been designed to educate students in Electrical and Computer Engineering (ECE) so that they can respond to global and urgent issues concerning computer control systems in nuclear power plants. This paper discusses our experience and assessment of the interdisciplinary computer and nuclear energy…

  10. Absolute Binding Free Energy Calculations: On the Accuracy of Computational Scoring of Protein-ligand Interactions

    PubMed Central

    Singh, Nidhi; Warshel, Arieh

    2010-01-01

    Calculating absolute binding free energies is a challenging task. Reliable estimates of binding free energies should provide a guide for rational drug design and a deeper understanding of the correlation between protein structure and function. Further applications include identifying novel molecular scaffolds and optimizing lead compounds in computer-aided drug design. Available options for evaluating absolute binding free energies range from the rigorous but expensive free energy perturbation, to the microscopic Linear Response Approximation (LRA/β version) and its variants including the Linear Interaction Energy (LIE), to the more approximate and considerably faster scaled Protein Dipoles Langevin Dipoles method (PDLD/S-LRA version), as well as the less rigorous Molecular Mechanics Poisson-Boltzmann/Surface Area (MM/PBSA) and Generalized Born/Surface Area (MM/GBSA) methods, down to the less accurate scoring functions. There is a need to assess the performance of these approaches in terms of computer time and reliability. We present a comparative study of the LRA/β, the LIE, the PDLD/S-LRA/β, and the more widely used MM/PBSA, and assess their abilities to estimate absolute binding energies. The LRA and LIE methods perform reasonably well but require specialized parameterization for the non-electrostatic term. On average, the PDLD/S-LRA/β performs effectively. Our assessment of the MM/PBSA is less optimistic: this approach appears to provide erroneous estimates of the absolute binding energies due to its incorrect entropies and problematic treatment of electrostatic energies. Overall, the PDLD/S-LRA/β appears to offer an appealing option for the final stages of massive screening approaches. PMID:20186976

  11. A digital computer simulation and study of a direct-energy-transfer power-conditioning system

    NASA Technical Reports Server (NTRS)

    Burns, W. W., III; Owen, H. A., Jr.; Wilson, T. G.; Rodriguez, G. E.; Paulkovich, J.

    1974-01-01

    A digital computer simulation technique for studying composite power-conditioning systems was applied to a spacecraft direct-energy-transfer power-processing system. The results obtained duplicate actual system performance with considerable accuracy. The validity of the approach, and its usefulness in studying aspects of system performance such as steady-state characteristics and transient responses to severely varying operating conditions, are demonstrated experimentally.

  12. Computed Tomography Number Measurement Consistency Under Different Beam Hardening Conditions: Comparison Between Dual-Energy Spectral Computed Tomography and Conventional Computed Tomography Imaging in Phantom Experiment

    PubMed Central

    He, Tian; Qian, Xiaojun; Zhai, Renyou; Yang, Zongtao

    2015-01-01

    Purpose To compare computed tomography (CT) number measurement consistency under different beam-hardening conditions between dual-energy spectral CT and conventional CT imaging in a phantom experiment. Materials and Methods A phantom with 8 cells in the periphery region and 1 cell in the central region was used. The 8 conditioning tubes in the periphery region were each filled with 1 of 3 iodine solutions to simulate different beam-hardening conditions: 0 mg/mL for no beam hardening (NBH), 20 mg/mL for weak beam hardening (WBH), and 50 mg/mL for severe beam hardening (SBH). A test tube filled with 0, 0.1, 0.5, 1, 2, 5, 10, 20, or 50 mg/mL iodine solution was placed in the central cell alternately. The phantom was scanned in conventional CT mode at 80, 100, 120, and 140 kVp and in dual-energy spectral CT mode. For spectral CT, 11 monochromatic image sets from 40 to 140 keV at intervals of 10 keV were reconstructed. The CT number shift caused by beam hardening was evaluated by measuring the CT number difference (ΔCT) with and without beam hardening, using the following formulas: ΔCT_WBH = |CT_WBH − CT_NBH| and ΔCT_SBH = |CT_SBH − CT_NBH|. Data were compared with 1-way analysis of variance. Results Under both WBH and SBH conditions, the CT number shifts in all monochromatic image sets were less than those for polychromatic images (all P < 0.001). Under the WBH condition, the maximum CT number shift was less than 6 Hounsfield units (HU) for monochromatic spectral CT images at all energy levels; under the SBH condition, only monochromatic images at 70 keV and 80 keV had CT number shifts of less than 6 HU. Conclusion Dual-energy spectral CT imaging provided more accurate CT number measurement than conventional CT under various beam-hardening conditions. The optimal keV level for monochromatic spectral CT images with the most accurate CT number measurement depends on the severity of the beam-hardening condition. PMID:26196347
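The ΔCT evaluation above amounts to comparing each beam-hardening condition against the no-beam-hardening baseline per energy bin and selecting the monochromatic level with the smallest shift. A sketch (the CT numbers are invented, not the study's data):

```python
def ct_shift(ct_cond, ct_nbh):
    """Per-energy-bin CT number shift relative to the no-beam-hardening scan.
    Both arguments map keV level -> measured CT number (HU)."""
    return {kev: abs(ct_cond[kev] - ct_nbh[kev]) for kev in ct_nbh}

def optimal_kev(ct_cond, ct_nbh):
    """Energy level whose monochromatic image shows the smallest shift."""
    shifts = ct_shift(ct_cond, ct_nbh)
    return min(shifts, key=shifts.get)
```

Running this over all reconstructed keV levels reproduces the study's selection logic: the optimal level is the one whose ΔCT stays smallest under the given beam-hardening severity.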

  13. Compare Energy Use in Variable Refrigerant Flow Heat Pumps Field Demonstration and Computer Model

    SciTech Connect

    Sharma, Chandan; Raustad, Richard

    2013-06-01

    Variable Refrigerant Flow (VRF) heat pumps are often regarded as energy efficient air-conditioning systems which offer electricity savings as well as reduction in peak electric demand while providing improved individual zone setpoint control. One of the key advantages of VRF systems is minimal duct losses which provide significant reduction in energy use and duct space. However, there is limited data available to show their actual performance in the field. Since VRF systems are increasingly gaining market share in the US, it is highly desirable to have more actual field performance data of these systems. An effort was made in this direction to monitor VRF system performance over an extended period of time in a US national lab test facility. Due to increasing demand by the energy modeling community, an empirical model to simulate VRF systems was implemented in the building simulation program EnergyPlus. This paper presents the comparison of energy consumption as measured in the national lab and as predicted by the program. For increased accuracy in the comparison, a customized weather file was created by using measured outdoor temperature and relative humidity at the test facility. Other inputs to the model included building construction, VRF system model based on lab measured performance, occupancy of the building, lighting/plug loads, and thermostat set-points etc. Infiltration model inputs were adjusted in the beginning to tune the computer model and then subsequent field measurements were compared to the simulation results. Differences between the computer model results and actual field measurements are discussed. The computer generated VRF performance closely resembled the field measurements.

  14. Outer Membrane Protein Folding and Topology from a Computational Transfer Free Energy Scale.

    PubMed

    Lin, Meishan; Gessmann, Dennis; Naveed, Hammad; Liang, Jie

    2016-03-01

    Knowledge of the transfer free energy of amino acids from aqueous solution to a lipid bilayer is essential for understanding membrane protein folding and for predicting membrane protein structure. Here we report a computational approach that can calculate the folding free energy of the transmembrane region of outer membrane β-barrel proteins (OMPs) by combining an empirical energy function with a reduced discrete state space model. We quantitatively analyzed the transfer free energies of 20 amino acid residues at the center of the lipid bilayer of OmpLA. Our results are in excellent agreement with the experimentally derived hydrophobicity scales. We further exhaustively calculated the transfer free energies of 20 amino acids at all positions in the TM region of OmpLA. We found that the asymmetry of the Gram-negative bacterial outer membrane as well as the TM residues of an OMP determine its functional fold in vivo. Our results suggest that the folding process of an OMP is driven by the lipid-facing residues in its hydrophobic core, and its NC-IN topology is determined by the differential stabilities of OMPs in the asymmetrical outer membrane. The folding free energy is further reduced by lipid A and assisted by general depth-dependent cooperativities that exist between polar and ionizable residues. Moreover, the context-dependency of transfer free energies at specific positions in OmpLA predicts regions important for protein function as well as structural anomalies. Our computational approach is fast, efficient and applicable to any OMP. PMID:26860422

  15. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    NASA Astrophysics Data System (ADS)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need to couple two different temporal scales: in hydrosystem modelling, monthly simulation steps are typically adopted, yet a faithful representation of the energy balance (i.e., energy production vs. demand) requires a much finer resolution (e.g., hourly). Another drawback is the increase of control variables, constraints and objectives due to the simultaneous modelling of the two parallel fluxes (i.e., water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use long, synthetically generated input time series in order to assess the system performance in terms of reliability and risk with satisfactory accuracy. 
To address these issues, we propose an effective and efficient modelling framework, the key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step and solving each local sub-problem through very fast linear network programming algorithms; and (c) the substantial
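Objective (b), solving a linearized allocation sub-problem at each time step, can be caricatured by a merit-order dispatch; the actual framework uses linear network programming rather than this greedy sketch, and all names and numbers below are invented:

```python
def dispatch_hour(demand_mwh, sources):
    """Allocate one hour of energy demand across sources in merit order.
    sources: list of (name, available_mwh, cost_per_mwh).
    Returns (allocation dict, unmet demand). A stand-in for the linear
    network-programming sub-problem solved at each time step."""
    allocation = {}
    remaining = demand_mwh
    for name, avail, cost in sorted(sources, key=lambda s: s[2]):
        take = min(avail, remaining)
        allocation[name] = take
        remaining -= take
        if remaining <= 0:
            break
    return allocation, max(remaining, 0.0)
```

In the coupled water-energy setting, each hourly dispatch would additionally be constrained by the monthly water allocation, which is exactly the two-timescale coupling the abstract highlights.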

  16. A Novel Cost Based Model for Energy Consumption in Cloud Computing

    PubMed Central

    Horri, A.; Dastghaibyfard, Gh.

    2015-01-01

    Cloud data centers consume enormous amounts of electrical energy. To support green cloud computing, providers also need to minimize cloud infrastructure energy consumption while maintaining QoS. In this study, an energy consumption model is proposed for the time-shared policy in the virtualization layer of cloud environments. The cost and energy usage of the time-shared policy were modeled in the CloudSim simulator based upon results obtained from a real system, and the proposed model was then evaluated in different scenarios. The proposed model considers cache interference costs, which are based upon the size of the data. The model was implemented in the CloudSim simulator, and the simulation results indicate that the energy consumption may be considerable and that it can vary with parameters such as the quantum parameter, data size, and the number of VMs on a host. Measured results validate the model and demonstrate that there is a tradeoff between energy consumption and QoS in the cloud environment. PMID:25705716
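The dependence on the quantum, data size, and VM count described above can be sketched with a toy host-energy function (not the paper's formulation; the linear power model and the per-switch cache penalty are assumptions):

```python
def host_energy_j(duration_s, p_idle_w, p_busy_w, vm_loads, quantum_s,
                  cache_penalty_j_per_mb, data_mb):
    """Toy time-shared host energy model: idle power plus
    utilization-proportional power, plus a cache-interference cost
    paid at every VM context switch, scaled by the working-set size."""
    util = min(sum(vm_loads), 1.0)
    dynamic_j = (p_idle_w + (p_busy_w - p_idle_w) * util) * duration_s
    # a smaller quantum means more context switches, hence more interference
    switches = (duration_s / quantum_s) * max(len(vm_loads) - 1, 0)
    interference_j = switches * cache_penalty_j_per_mb * data_mb
    return dynamic_j + interference_j
```

The sketch reproduces the qualitative tradeoff from the abstract: shrinking the quantum improves responsiveness (QoS) but raises the interference term, so total energy grows with the number of co-located VMs and the data size.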

  17. A novel cost based model for energy consumption in cloud computing.

    PubMed

    Horri, A; Dastghaibyfard, Gh

    2015-01-01

    Cloud data centers consume enormous amounts of electrical energy. To support green cloud computing, providers also need to minimize cloud infrastructure energy consumption while maintaining QoS. In this study, an energy consumption model is proposed for the time-shared policy in the virtualization layer of cloud environments. The cost and energy usage of the time-shared policy were modeled in the CloudSim simulator based upon results obtained from a real system, and the proposed model was then evaluated in different scenarios. The proposed model considers cache interference costs, which are based upon the size of the data. The model was implemented in the CloudSim simulator, and the simulation results indicate that the energy consumption may be considerable and that it can vary with parameters such as the quantum parameter, data size, and the number of VMs on a host. Measured results validate the model and demonstrate that there is a tradeoff between energy consumption and QoS in the cloud environment. PMID:25705716

  18. Network Computing Infrastructure to Share Tools and Data in Global Nuclear Energy Partnership

    NASA Astrophysics Data System (ADS)

    Kim, Guehee; Suzuki, Yoshio; Teshima, Naoya

    CCSE/JAEA (Center for Computational Science and e-Systems/Japan Atomic Energy Agency) integrated a prototype system of a network computing infrastructure for sharing tools and data to support the U.S.-Japan collaboration in GNEP (Global Nuclear Energy Partnership). We focused on three technical issues in applying our information-processing infrastructure: accessibility, security, and usability. In designing the prototype system, we integrated and improved both network and Web technologies. For accessibility, we adopted SSL-VPN (Secure Sockets Layer-Virtual Private Network) technology for access beyond firewalls. For security, we developed an authentication gateway based on the PKI (Public Key Infrastructure) authentication mechanism, set a fine-grained access control policy for shared tools and data, and used a shared-key encryption method to protect tools and data against leakage to third parties. For usability, we chose Web browsers as the user interface and developed a Web application providing functions to support the sharing of tools and data. Using the WebDAV (Web-based Distributed Authoring and Versioning) function, users can manipulate shared tools and data through a Windows-like folder environment. We implemented the prototype system on AEGIS (Atomic Energy Grid Infrastructure), a Grid infrastructure for atomic energy research developed by CCSE/JAEA. The prototype system was applied for trial use in the first period of GNEP.

  19. Assessing the accuracy of the isotropic periodic sum method through Madelung energy computation.

    PubMed

    Ojeda-May, Pedro; Pu, Jingzhi

    2014-04-28

    We tested the isotropic periodic sum (IPS) method for computing Madelung energies of ionic crystals. The performance of the method, both in its nonpolar (IPSn) and polar (IPSp) forms, was compared with that of the zero-charge and Wolf potentials [D. Wolf, P. Keblinski, S. R. Phillpot, and J. Eggebrecht, J. Chem. Phys. 110, 8254 (1999)]. The results show that the IPSn and IPSp methods converge the Madelung energy to its reference value with an average deviation of ~10^-4 and ~10^-7 energy units, respectively, for a cutoff range of 18-24a (a/2 being the nearest-neighbor ion separation). However, minor oscillations were detected for the IPS methods when deviations of the computed Madelung energies were plotted on a logarithmic scale as a function of the cutoff distance. To remove such oscillations, we introduced a modified IPSn potential in which both the local-region and long-range electrostatic terms are damped, in analogy to the Wolf potential. With the damped-IPSn potential, a smoother convergence was achieved. In addition, we observed a better agreement between the damped-IPSn and IPSp methods, which suggests that damping the IPSn potential is in effect similar to adding a screening potential in IPSp. PMID:24784252
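For reference, the Wolf potential against which IPS is compared can be written down in a few lines: a damped, shifted real-space pair sum plus a self/neutralization term. The sketch below estimates the NaCl Madelung constant (α and the cutoff are illustrative choices, not the values from the paper):

```python
import math

def madelung_wolf(alpha=0.25, r_cut=12.0):
    """Estimate the NaCl Madelung constant (~1.7476) with the Wolf method.
    Nearest-neighbour spacing is 1; charges alternate as (-1)^(i+j+k)."""
    n = int(r_cut) + 1
    shift = math.erfc(alpha * r_cut) / r_cut  # shifts the pair term to 0 at r_cut
    pair_sum = 0.0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                if i == j == k == 0:
                    continue
                r = math.sqrt(i * i + j * j + k * k)
                if r >= r_cut:
                    continue
                sign = -1.0 if (i + j + k) % 2 else 1.0  # alternating charges
                pair_sum += sign * (math.erfc(alpha * r) / r - shift)
    # Wolf self/neutralization correction for the central ion
    return -pair_sum + shift + 2.0 * alpha / math.sqrt(math.pi)
```

With α·r_cut around 3 the damped sum converges to within roughly 10^-3 of the reference value 1.7476, which is the kind of convergence-versus-cutoff behavior the study plots for IPS and Wolf.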

  20. Strong orientational coordinates and orientational order parameters for symmetric objects

    NASA Astrophysics Data System (ADS)

    Haji-Akbari, Amir; Glotzer, Sharon C.

    2015-12-01

    Recent advancements in the synthesis of anisotropic macromolecules and nanoparticles have spurred an immense interest in theoretical and computational studies of self-assembly. The cornerstone of such studies is the role of shape in self-assembly and in inducing complex order. The problem of identifying different types of order that can emerge in such systems can, however, be challenging. Here, we revisit the problem of quantifying orientational order in systems of building blocks with non-trivial rotational symmetries. We first propose a systematic way of constructing orientational coordinates for such symmetric building blocks. We call the arising tensorial coordinates strong orientational coordinates (SOCs) as they fully and exclusively specify the orientation of a symmetric object. We then use SOCs to describe and quantify local and global orientational order, and spatiotemporal orientational correlations in systems of symmetric building blocks. The SOCs and the orientational order parameters developed in this work are not only useful in performing and analyzing computer simulations of symmetric molecules or particles, but can also be utilized for the efficient storage of rotational information in long trajectories of evolving many-body systems.

  1. Computer Modeling VRF Heat Pumps in Commercial Buildings using EnergyPlus

    SciTech Connect

    Raustad, Richard

    2013-06-01

    Variable Refrigerant Flow (VRF) heat pumps are increasingly used in commercial buildings in the United States. Monitored energy use of field installations has shown, in some cases, savings exceeding 30% compared to conventional heating, ventilating, and air-conditioning (HVAC) systems. A simulation study was conducted to identify the installation or operational characteristics that lead to energy savings for VRF systems. The study used the Department of Energy's EnergyPlus building simulation software and four reference building models. Computer simulations were performed in eight U.S. climate zones. The baseline reference HVAC system incorporated packaged single-zone direct-expansion cooling with gas heating (PSZ-AC) or variable-air-volume systems (VAV with reheat). An alternate baseline HVAC system using a heat pump (PSZ-HP) was included for some buildings to directly compare gas and electric heating results. These baseline systems were compared to a VRF heat pump model to identify differences in energy use. VRF systems combine multiple indoor units with one or more outdoor unit(s). These systems move refrigerant between the outdoor and indoor units, which eliminates the need for duct work in most cases. Since many applications install duct work in unconditioned spaces, this leads to installation differences between VRF systems and conventional HVAC systems. To characterize installation differences, a duct heat gain model was included to identify the energy impacts of installing ducts in unconditioned spaces. The configuration of variable refrigerant flow heat pumps will ultimately eliminate or significantly reduce energy use due to duct heat transfer. Fan energy is also studied to identify savings associated with non-ducted VRF terminal units. VRF systems incorporate a variable-speed compressor which may lead to operational differences compared to single-speed compression systems. 
To characterize operational differences, the computer model performance curves used
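The duct heat gain effect this study isolates can be approximated with a steady-state UA model: supply air running through an unconditioned plenum picks up Q = U·A·ΔT, a load a ductless VRF terminal avoids entirely. A rough sketch with assumed (hypothetical) duct properties, not values from the study:

```python
def duct_heat_gain_w(u_value, area_m2, t_plenum_c, t_supply_c):
    """Steady-state heat gain (W) of a supply duct in an unconditioned
    space: Q = U * A * (T_plenum - T_supply)."""
    return u_value * area_m2 * (t_plenum_c - t_supply_c)

# Hypothetical duct run: U = 1.5 W/(m^2 K), 20 m^2 surface,
# 35 C attic air around a duct carrying 13 C supply air.
q = duct_heat_gain_w(1.5, 20.0, 35.0, 13.0)  # extra cooling load in W
```

Every watt of such gain must be removed again by the cooling coil, which is the installation difference the duct heat gain model quantifies.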

  2. Quantitative material decomposition using spectral computed tomography with an energy-resolved photon-counting detector

    NASA Astrophysics Data System (ADS)

    Lee, Seungwan; Choi, Yu-Na; Kim, Hee-Joung

    2014-09-01

    Dual-energy computed tomography (CT) techniques have been used to decompose materials and characterize tissues according to their physical and chemical compositions. However, these techniques are hampered by the limitations of conventional x-ray detectors operated in charge integrating mode. Energy-resolved photon-counting detectors provide spectral information from polychromatic x-rays using multiple energy thresholds. These detectors allow simultaneous acquisition of data in different energy ranges without spectral overlap, resulting in more efficient material decomposition and quantification for dual-energy CT. In this study, a pre-reconstruction dual-energy CT technique based on volume conservation was proposed for three-material decomposition. The technique was combined with iterative reconstruction algorithms by using a ray-driven projector in order to improve the quality of decomposition images and reduce radiation dose. A spectral CT system equipped with a CZT-based photon-counting detector was used to implement the proposed dual-energy CT technique. We obtained dual-energy images of calibration and three-material phantoms consisting of low atomic number materials from the optimal energy bins determined by Monte Carlo simulations. The material decomposition process was accomplished by both the proposed and post-reconstruction dual-energy CT techniques. Linear regression and normalized root-mean-square error (NRMSE) analyses were performed to evaluate the quantitative accuracy of decomposition images. The calibration accuracy of the proposed dual-energy CT technique was higher than that of the post-reconstruction dual-energy CT technique, with fitted slopes of 0.97-1.01 and NRMSEs of 0.20-4.50% for all basis materials. In the three-material phantom study, the proposed dual-energy CT technique decreased the NRMSEs of measured volume fractions by factors of 0.17-0.28 compared to the post-reconstruction dual-energy CT technique. It was concluded that the
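The volume-conservation decomposition described above reduces, per voxel, to a small linear system: the measured attenuation in two energy bins plus the constraint that the three volume fractions sum to one gives three equations in three unknowns. A minimal sketch (the attenuation coefficients below are invented for illustration, not calibrated values from the study):

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm) of three basis
# materials in a low and a high energy bin -- illustrative values only.
mu_low  = np.array([0.40, 0.25, 0.18])
mu_high = np.array([0.25, 0.18, 0.15])

def decompose(meas_low, meas_high):
    """Volume fractions f from two energy-bin measurements plus the
    volume-conservation constraint sum(f) = 1."""
    A = np.vstack([mu_low, mu_high, np.ones(3)])
    b = np.array([meas_low, meas_high, 1.0])
    return np.linalg.solve(A, b)

# A voxel that is a 50/30/20 mixture is recovered from its attenuation.
f_true = np.array([0.5, 0.3, 0.2])
f = decompose(mu_low @ f_true, mu_high @ f_true)
```

In practice the measurements are noisy reconstructed attenuation values, which is why the study couples this step to iterative reconstruction rather than solving it on raw images.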

  3. Quantitative material decomposition using spectral computed tomography with an energy-resolved photon-counting detector.

    PubMed

    Lee, Seungwan; Choi, Yu-Na; Kim, Hee-Joung

    2014-09-21

    Dual-energy computed tomography (CT) techniques have been used to decompose materials and characterize tissues according to their physical and chemical compositions. However, these techniques are hampered by the limitations of conventional x-ray detectors operated in charge integrating mode. Energy-resolved photon-counting detectors provide spectral information from polychromatic x-rays using multiple energy thresholds. These detectors allow simultaneous acquisition of data in different energy ranges without spectral overlap, resulting in more efficient material decomposition and quantification for dual-energy CT. In this study, a pre-reconstruction dual-energy CT technique based on volume conservation was proposed for three-material decomposition. The technique was combined with iterative reconstruction algorithms by using a ray-driven projector in order to improve the quality of decomposition images and reduce radiation dose. A spectral CT system equipped with a CZT-based photon-counting detector was used to implement the proposed dual-energy CT technique. We obtained dual-energy images of calibration and three-material phantoms consisting of low atomic number materials from the optimal energy bins determined by Monte Carlo simulations. The material decomposition process was accomplished by both the proposed and post-reconstruction dual-energy CT techniques. Linear regression and normalized root-mean-square error (NRMSE) analyses were performed to evaluate the quantitative accuracy of decomposition images. The calibration accuracy of the proposed dual-energy CT technique was higher than that of the post-reconstruction dual-energy CT technique, with fitted slopes of 0.97-1.01 and NRMSEs of 0.20-4.50% for all basis materials. In the three-material phantom study, the proposed dual-energy CT technique decreased the NRMSEs of measured volume fractions by factors of 0.17-0.28 compared to the post-reconstruction dual-energy CT technique. It was concluded that the

  4. Methods of determining loads and fiber orientations in anisotropic non-crystalline materials using energy flux deviation

    NASA Technical Reports Server (NTRS)

    Prosser, William H. (Inventor); Kriz, Ronald D. (Inventor); Fitting, Dale W. (Inventor)

    1993-01-01

    An ultrasonic wave is applied to an anisotropic sample material in an initial direction and an angle of flux deviation of the ultrasonic wave front is measured from this initial direction. This flux deviation angle is induced by the unknown applied load. The flux shift is determined between this flux deviation angle and a previously determined angle of flux deviation of an ultrasonic wave applied to a similar anisotropic reference material under an initial known load condition. This determined flux shift is then compared to a plurality of flux shifts of a similarly tested, similar anisotropic reference material under a plurality of respective, known load conditions, whereby the load applied to the particular anisotropic sample material is determined. A related method is disclosed for determining the fiber orientation from known loads and a determined flux shift.

  5. Low-energy electron diffraction study of potassium adsorbed on single-crystal graphite and highly oriented pyrolytic graphite

    SciTech Connect

    Ferralis, N.; Diehl, R.D.; Pussi, K.; Lindroos, M.; Finberg, S.E.; Smerdon, J.; McGrath, R.

    2004-12-15

Potassium adsorption on graphite has been a model system for the understanding of the interaction of alkali metals with surfaces. The geometries of the (2x2) structure of potassium on both single-crystal graphite (SCG) and highly oriented pyrolytic graphite (HOPG) were investigated for various preparation conditions for graphite temperatures between 55 and 140 K. In all cases, the geometry was found to consist of K atoms in the hollow sites on top of the surface. The K-graphite average perpendicular spacing is 2.79 ± 0.03 Å, corresponding to an average C-K distance of 3.13 ± 0.03 Å, and the spacing between graphite planes is consistent with the bulk spacing of 3.35 Å. No evidence was observed for a sublayer of potassium. The results of dynamical LEED studies for the clean SCG and HOPG surfaces indicate that the surface structures of both are consistent with the truncated bulk structure of graphite.

  6. Computational and experimental methodology for site-matched investigations of the influence of mineral mass fraction and collagen orientation on the axial indentation modulus of lamellar bone☆

    PubMed Central

    Spiesz, Ewa M.; Reisinger, Andreas G.; Kaminsky, Werner; Roschger, Paul; Pahr, Dieter H.; Zysset, Philippe K.

    2013-01-01

Relationships between mineralization, collagen orientation and indentation modulus were investigated in bone structural units from the mid-shaft of human femora using a site-matched design. Mineral mass fraction, collagen fibril angle and indentation moduli were measured in registered anatomical sites using backscattered electron imaging, polarized light microscopy and nanoindentation, respectively. Theoretical indentation moduli were calculated with a homogenization model from the quantified mineral densities and mean collagen fibril orientations. The average indentation moduli predicted based on local mineralization and collagen fiber arrangement were not significantly different from the average measured experimentally with nanoindentation (p=0.9). Surprisingly, no substantial correlation of the measured indentation moduli with tissue mineralization and/or collagen fiber arrangement was found. Nano-porosity, micro-damage, collagen cross-links, non-collagenous proteins or other parameters affect the indentation measurements. Additional testing/simulation methods need to be considered to properly understand the variability of indentation moduli, beyond the mineralization and collagen arrangement in bone structural units. PMID:23994944

  7. Energy in Perspective: An Orientation Conference for Educators. Proceedings of a Conference (Tempe, Arizona, June 7-11, 1976).

    ERIC Educational Resources Information Center

    McKlveen, John W., Ed.

    The conference goal was to provide educators with knowledge and motivation about energy in order to establish an awareness of it in their classrooms. Speakers were from universities, research laboratories, utilities, government agencies, and private businesses. Coal, gas and oil, geothermal and solar sources of energy in Arizona were each…

  8. Computer simulation of energy use, greenhouse gas emissions, and process economics of the fluid milk process.

    PubMed

    Tomasula, P M; Yee, W C F; McAloon, A J; Nutter, D W; Bonnaillie, L M

    2013-05-01

Energy-savings measures have been implemented in fluid milk plants to lower energy costs and the energy-related carbon dioxide (CO2) emissions. Although these measures have resulted in reductions in steam, electricity, compressed air, and refrigeration use of up to 30%, a benchmarking framework is necessary to examine the implementation of process-specific measures that would lower energy use, costs, and CO2 emissions even further. In this study, using information provided by the dairy industry and equipment vendors, a customizable model of the fluid milk process was developed for use in process design software to benchmark the electrical and fuel energy consumption and CO2 emissions of current processes. It may also be used to test the feasibility of new processing concepts to lower energy and CO2 emissions with calculation of new capital and operating costs. The accuracy of the model in predicting total energy usage of the entire fluid milk process and the pasteurization step was validated using available literature and industry energy data. Computer simulation of small (40.0 million L/yr), medium (113.6 million L/yr), and large (227.1 million L/yr) processing plants predicted the carbon footprint of milk, defined as grams of CO2 equivalents (CO2e) per kilogram of packaged milk, to within 5% of the value of 96 g of CO2e/kg of packaged milk obtained in an industry-conducted life cycle assessment and also showed, in agreement with the same study, that plant size had no effect on the carbon footprint of milk but that larger plants were more cost effective in producing milk. Analysis of the pasteurization step showed that increasing the percentage regeneration of the pasteurizer from 90 to 96% would lower its thermal energy use by almost 60% and that implementation of partial homogenization would lower electrical energy use and CO2e emissions of homogenization by 82 and 5.4%, respectively. It was also demonstrated that implementation of steps to lower non
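The regeneration result quoted in the abstract follows from a simple heat balance: with heat-recovery fraction R, the heater supplies only (1 - R) of the sensible duty m·cp·ΔT, so going from 90% to 96% regeneration cuts heater duty by (1 - 0.04/0.10) = 60%. A back-of-envelope check (the cp and temperature values are illustrative assumptions, not figures from the study):

```python
def heater_duty(mass_kg, cp, dT, regen):
    """Thermal duty (J) to heat a mass through dT when a fraction
    `regen` (0..1) of the sensible heat is recovered by regeneration."""
    return mass_kg * cp * dT * (1.0 - regen)

cp, dT = 3930.0, 68.0          # J/(kg*K) for milk, ~4 C -> 72 C (assumed)
q90 = heater_duty(1.0, cp, dT, 0.90)
q96 = heater_duty(1.0, cp, dT, 0.96)
reduction = 1.0 - q96 / q90    # fractional drop in thermal energy use
```

The 60% reduction is independent of the assumed cp and ΔT, since they cancel in the ratio.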

  9. Geology and mineral and energy resources, Roswell Resource Area, New Mexico; an interactive computer presentation

    USGS Publications Warehouse

    Tidball, Ronald R.; Bartsch-Winkler, S. B.

    1995-01-01

    This Compact Disc-Read Only Memory (CD-ROM) contains a program illustrating the geology and mineral and energy resources of the Roswell Resource Area, an administrative unit of the U.S. Bureau of Land Management in east-central New Mexico. The program enables the user to access information on the geology, geochemistry, geophysics, mining history, metallic and industrial mineral commodities, hydrocarbons, and assessments of the area. The program was created with the display software, SuperCard, version 1.5, by Aldus. The program will run only on a Macintosh personal computer. This CD-ROM was produced in accordance with Macintosh HFS standards. The program was developed on a Macintosh II-series computer with system 7.0.1. The program is a compiled, executable form that is nonproprietary and does not require the presence of the SuperCard software.

  10. Global energy minima of molecular clusters computed in polynomial time with semidefinite programming.

    PubMed

    Kamarchik, Eugene; Mazziotti, David A

    2007-12-14

The global energy minima of pure and binary molecular clusters with 5-12 particles interacting pairwise are computed in polynomial time as a function of only the two-particle reduced density function (2-RDF). We derive linear matrix inequalities from the classical analogue of quantum N-representability constraints to ensure that the 2-RDF represents realistic N-particle configurations. The 2-RDF reformulation relaxes a combinatorial optimization into a convex optimization that scales polynomially in computer time. Clusters are optimized with a code for large-scale semidefinite programming developed for the quantum representability problem [D. A. Mazziotti, Phys. Rev. Lett. 93, 213001 (2004); doi:10.1103/PhysRevLett.93.213001]. PMID:18233446

  11. Dual-energy computed tomography (DECT) in emergency radiology: basic principles, techniques, and limitations.

    PubMed

    Aran, Shima; Shaqdan, Khalid W; Abujudeh, Hani H

    2014-08-01

    Recent advances in computed tomography (CT) technology allow for acquisition of two CT datasets with different X-ray spectra. There are different dual-energy computed tomography (DECT) technical approaches such as: the dual-source CT, the fast kilovoltage-switching method, and the sandwich detectors technique. There are various postprocessing algorithms that are available to provide clinically relevant spectral information. There are several clinical applications of DECT that are easily accessible in the emergency setting. In this review article, we aim to provide the emergency radiologist with a discussion on how this new technology works and how some of its applications can be useful in the emergency room setting. PMID:24676736

  12. A network of spiking neurons for computing sparse representations in an energy efficient way

    PubMed Central

    Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B.

    2013-01-01

    Computing sparse redundant representations is an important problem both in applied mathematics and neuroscience. In many applications, this problem must be solved in an energy efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating via low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, such operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We compare the numerical performance of HDA with existing algorithms and show that in the asymptotic regime the representation error of HDA decays with time, t, as 1/t. We show that HDA is stable against time-varying noise, specifically, the representation error decays as 1/t for Gaussian white noise. PMID:22920853
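The dynamics the paper describes (gradient-descent-like steps on internal variables, competition via the dictionary's Gram matrix, thresholded outputs) are closely related to locally competitive sparse-coding networks. The non-spiking sketch below illustrates that family of dynamics; it is not the authors' exact HDA algorithm, and the dictionary and signal are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
Phi = rng.normal(size=(20, 50))
Phi /= np.linalg.norm(Phi, axis=0)        # unit-norm dictionary atoms

s = np.zeros(50)
s[[3, 17, 40]] = 1.0                      # a 3-sparse ground truth
x = Phi @ s                               # observed signal

lam, dt = 0.1, 0.1
G = Phi.T @ Phi - np.eye(50)              # lateral inhibition weights
drive = Phi.T @ x
u = np.zeros(50)                          # internal (membrane-like) variables

def soft(u):
    """Soft threshold: the quantized/thresholded external variable."""
    return np.where(np.abs(u) > lam, u - lam * np.sign(u), 0.0)

for _ in range(500):
    a = soft(u)
    u += dt * (drive - u - G @ a)         # leaky integration + competition
a = soft(u)
err = np.linalg.norm(x - Phi @ a) / np.linalg.norm(x)
```

At the fixed point this computes a sparse (LASSO-like) representation of x; the spiking HDA of the paper replaces the analog outputs with quantized events communicated over low-bandwidth channels.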

  13. Capillary Instability of a Planar Jet: A Free Energy-Based Computational Model.

    NASA Astrophysics Data System (ADS)

    Nadiga, Balu

    1996-11-01

The capillary instability of an initially high-Reynolds-number planar liquid jet in an ambient gaseous phase is studied. Linear analysis of the problem will be followed by comparisons of computational results to the analytical estimates. The computational model (B.T. Nadiga & S. Zaleski, European Journal of Mechanics B: Fluids, to appear (1996)) is based on the van der Waals-Cahn-Hilliard free energy (J.D. van der Waals, Z. Phys. Chem., 13, 657 (1894); English translation in J. Stat. Phys., 20, 197) for an interface---something well known in the field of nonequilibrium thermodynamics, but seldom used in fluid dynamic modeling of interfaces. The advantage of such a model is that the ensuing volumetric nature of the interfacial stress term results in a simple and robust interface-capturing scheme.

  14. Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments

    PubMed Central

    Kadima, Hubert; Granado, Bertrand

    2013-01-01

    We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, the cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on the optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and present the hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate in different voltage supply levels by sacrificing clock frequencies. This multiple voltage involves a compromise between the quality of schedules and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach. PMID:24319361
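The DVFS trade-off the scheduler exploits can be illustrated with the usual CMOS model: dynamic power scales as P = C·V²·f while a task's runtime scales as 1/f, so a task's energy is C·V²·cycles and dropping voltage with frequency saves energy at the cost of a longer schedule. A two-point comparison with illustrative constants (not values from the paper):

```python
def task_energy_and_time(cycles, freq_hz, volt, cap=1e-9):
    """Dynamic energy (J) and runtime (s) of a task under the
    P = C*V^2*f CMOS model; `cap` is an assumed switched capacitance."""
    time = cycles / freq_hz
    power = cap * volt**2 * freq_hz
    return power * time, time             # energy simplifies to C*V^2*cycles

e_hi, t_hi = task_energy_and_time(2e9, 2.0e9, 1.2)  # fast, high voltage
e_lo, t_lo = task_energy_and_time(2e9, 1.0e9, 0.9)  # slow, low voltage
# Halving frequency doubles runtime, but the (0.9/1.2)^2 voltage factor
# cuts energy to ~56% -- the compromise the multi-objective PSO explores.
```
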

  15. Measuring and tuning energy efficiency on large scale high performance computing platforms.

    SciTech Connect

    Laros, James H., III

    2011-08-01

    Recognition of the importance of power in the field of High Performance Computing, whether it be as an obstacle, expense or design consideration, has never been greater and more pervasive. While research has been conducted on many related aspects, there is a stark absence of work focused on large scale High Performance Computing. Part of the reason is the lack of measurement capability currently available on small or large platforms. Typically, research is conducted using coarse methods of measurement such as inserting a power meter between the power source and the platform, or fine grained measurements using custom instrumented boards (with obvious limitations in scale). To collect the measurements necessary to analyze real scientific computing applications at large scale, an in-situ measurement capability must exist on a large scale capability class platform. In response to this challenge, we exploit the unique power measurement capabilities of the Cray XT architecture to gain an understanding of power use and the effects of tuning. We apply these capabilities at the operating system level by deterministically halting cores when idle. At the application level, we gain an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale (thousands of nodes), while simultaneously collecting current and voltage measurements on the hosting nodes. We examine the effects of both CPU and network bandwidth tuning and demonstrate energy savings opportunities of up to 39% with little or no impact on run-time performance. Capturing scale effects in our experimental results was key. Our results provide strong evidence that next generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components, such as the network, to achieve energy efficient performance.

  16. [Application of information technology in orthodontics. 1. Necessity, possibilities and prospects for use of task-orientated computers, for example the PC 1715].

    PubMed

    Reinhardt, H; Ifert, F; Müller, A; Schneider, P

    1989-07-01

    There is an urgent need to make use of modern information technology with fourth generation computers, which is also relevant to orthodontic practice. Properties of the PC 1715--small, user-friendly and cost-effective--make it seem particularly suited to this purpose. Possible areas of application and technical parameters are described. PMID:2636488

  17. Aspect-Oriented Programming

    NASA Technical Reports Server (NTRS)

    Elrad, Tzilla (Editor); Filman, Robert E. (Editor); Bader, Atef (Editor)

    2001-01-01

Computer science has experienced an evolution in programming languages and systems from the crude assembly and machine codes of the earliest computers through concepts such as formula translation, procedural programming, structured programming, functional programming, logic programming, and programming with abstract data types. Each of these steps in programming technology has advanced our ability to achieve clear separation of concerns at the source code level. Currently, the dominant programming paradigm is object-oriented programming - the idea that one builds a software system by decomposing a problem into objects and then writing the code of those objects. Such objects abstract together behavior and data into a single conceptual and physical entity. Object-orientation is reflected in the entire spectrum of current software development methodologies and tools - we have OO methodologies, analysis and design tools, and OO programming languages. Writing complex applications such as graphical user interfaces, operating systems, and distributed applications while maintaining comprehensible source code has been made possible with OOP. Success at developing simpler systems leads to aspirations for greater complexity. Object orientation is a clever idea, but has certain limitations. We are now seeing that many requirements do not decompose neatly into behavior centered on a single locus. Object technology has difficulty localizing concerns involving global constraints and pandemic behaviors, appropriately segregating concerns, and applying domain-specific knowledge. Post-object programming (POP) mechanisms that look to increase the expressiveness of the OO paradigm are a fertile arena for current research. Examples of POP technologies include domain-specific languages, generative programming, generic programming, constraint languages, reflection and metaprogramming, feature-oriented development, views/viewpoints, and asynchronous message brokering. (Czarnecki and

  18. Implementing the Data Center Energy Productivity Metric in a High Performance Computing Data Center

    SciTech Connect

    Sego, Landon H.; Marquez, Andres; Rawson, Andrew; Cader, Tahir; Fox, Kevin M.; Gustafson, William I.; Mundy, Christopher J.

    2013-06-30

    As data centers proliferate in size and number, the improvement of their energy efficiency and productivity has become an economic and environmental imperative. Making these improvements requires metrics that are robust, interpretable, and practical. We discuss the properties of a number of the proposed metrics of energy efficiency and productivity. In particular, we focus on the Data Center Energy Productivity (DCeP) metric, which is the ratio of useful work produced by the data center to the energy consumed performing that work. We describe our approach for using DCeP as the principal outcome of a designed experiment using a highly instrumented, high-performance computing data center. We found that DCeP was successful in clearly distinguishing different operational states in the data center, thereby validating its utility as a metric for identifying configurations of hardware and software that would improve energy productivity. We also discuss some of the challenges and benefits associated with implementing the DCeP metric, and we examine the efficacy of the metric in making comparisons within a data center and between data centers.
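DCeP as defined above is simply useful work out over energy in, so comparing operational states is a ratio comparison. A toy calculation (the workload counts and energy figures are invented for illustration):

```python
def dcep(useful_work_units, energy_kwh):
    """Data Center Energy Productivity: useful work produced divided by
    the energy consumed performing that work."""
    return useful_work_units / energy_kwh

# Two hypothetical configurations of the same data center running the
# same workload mix over the same measurement window:
baseline = dcep(useful_work_units=1200.0, energy_kwh=400.0)
tuned    = dcep(useful_work_units=1200.0, energy_kwh=320.0)
```

The hard part in practice, which the paper discusses, is defining "useful work units" consistently enough that such ratios are comparable across configurations or between data centers.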

  19. Influence of Finite Element Software on Energy Release Rates Computed Using the Virtual Crack Closure Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Goetze, Dirk; Ransom, Jonathon (Technical Monitor)

    2006-01-01

Strain energy release rates were computed along straight delamination fronts of Double Cantilever Beam, End-Notched Flexure and Single Leg Bending specimens using the Virtual Crack Closure Technique (VCCT). The results were based on finite element analyses using ABAQUS and ANSYS and were calculated from the finite element results using the same post-processing routine to assure a consistent procedure. Mixed-mode strain energy release rates obtained from post-processing finite element results were in good agreement for all element types used and all specimens modeled. Compared to previous studies, the models made of solid twenty-node hexahedral elements and solid eight-node incompatible mode elements yielded excellent results. For both codes, models made of standard brick elements and elements with reduced integration did not correctly capture the distribution of the energy release rate across the width of the specimens for the models chosen. The results suggested that element types with similar formulation yield matching results independent of the finite element software used. For comparison, mixed-mode strain energy release rates were also calculated within ABAQUS/Standard using the VCCT for ABAQUS add-on. For all specimens modeled, mixed-mode strain energy release rates obtained from ABAQUS finite element results using post-processing were almost identical to results calculated using the VCCT for ABAQUS add-on.
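The post-processing step behind such comparisons is simple: in 2D VCCT the mode-I energy release rate combines the nodal force at the crack tip with the relative opening displacement of the node pair just behind it, G_I = F·Δv / (2·Δa·b), for crack-tip element length Δa and specimen width b. A minimal sketch with made-up nodal values (not results from the paper):

```python
def vcct_mode_I(f_tip_n, dv_behind_m, da_m, width_m):
    """Mode-I energy release rate (J/m^2) via the virtual crack closure
    technique for 2D four-node elements: G_I = F * dv / (2 * da * b)."""
    return f_tip_n * dv_behind_m / (2.0 * da_m * width_m)

# Hypothetical finite element output: 120 N tip force, 20 um opening
# displacement one element behind the tip, 1 mm elements, 25 mm width.
G_I = vcct_mode_I(f_tip_n=120.0, dv_behind_m=2.0e-5, da_m=1.0e-3,
                  width_m=0.025)
```

Running the same routine on output from different codes, as the paper does, isolates element-formulation effects from post-processing differences.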

  20. A computer simulation appraisal of non-residential low energy cooling systems in California

    SciTech Connect

    Bourassa, Norman; Haves, Philip; Huang, Joe

    2002-05-17

    An appraisal of the potential performance of different Low Energy Cooling (LEC) systems in nonresidential buildings in California is being conducted using computer simulation. The paper presents results from the first phase of the study, which addressed the systems that can be modeled, with the DOE-2.1E simulation program. The following LEC technologies were simulated as variants of a conventional variable-air-volume system with vapor compression cooling and mixing ventilation in the occupied spaces: Air-side indirect and indirect/direct evaporative pre-cooling. Cool beams. Displacement ventilation. Results are presented for four populous climates, represented by Oakland, Sacramento, Pasadena and San Diego. The greatest energy savings are obtained from a combination of displacement ventilation and air-side indirect/direct evaporative pre-cooling. Cool beam systems have the lowest peak demand but do not reduce energy consumption significantly because the reduction in fan energy is offse t by a reduction in air-side free cooling. Overall, the results indicate significant opportunities for LEC technologies to reduce energy consumption and demand in nonresidential new construction and retrofit.

  1. Orienting hypnosis.

    PubMed

    Hope, Anna E; Sugarman, Laurence I

    2015-01-01

    This article presents a new frame for understanding hypnosis and its clinical applications. Despite great potential to transform health and care, hypnosis research and clinical integration is impaired in part by centuries of misrepresentation and ignorance about its demonstrated efficacy. The authors contend that advances in the field are primarily encumbered by the lack of distinct boundaries and definitions. Here, hypnosis, trance, and mind are all redefined and grounded in biological, neurological, and psychological phenomena. Solutions are proposed for boundary and language problems associated with hypnosis. The biological role of novelty stimulating an orienting response that, in turn, potentiates systemic plasticity forms the basis for trance. Hypnosis is merely the skill set that perpetuates and influences trance. This formulation meshes with many aspects of Milton Erickson's legacy and Ernest Rossi's recent theory of mind and health. Implications of this hypothesis for clinical skills, professional training, and research are discussed. PMID:25928677

  2. Measured energy savings and performance of power-managed personal computers and monitors

    SciTech Connect

    Nordman, B.; Piette, M.A.; Kinney, K.

    1996-08-01

Personal computers and monitors are estimated to use 14 billion kWh/year of electricity, with power management potentially saving $600 million/year by the year 2000. The effort to capture these savings is led by the US Environmental Protection Agency's Energy Star program, which specifies a 30W maximum demand for the computer and for the monitor when in a 'sleep' or idle mode. In this paper the authors discuss measured energy use and estimated savings for power-managed (Energy Star compliant) PCs and monitors. They collected electricity use measurements of six power-managed PCs and monitors in their office and five from two other research projects. The devices are diverse in machine type, use patterns, and context. The analysis method estimates the time spent in each system operating mode (off, low-, and full-power) and combines these with real power measurements to derive hours of use per mode, energy use, and energy savings. Three schedules are explored in the 'As-operated,' 'Standardized,' and 'Maximum' savings estimates. Energy savings are established by comparing the measurements to a baseline with power management disabled. As-operated energy savings for the eleven PCs and monitors ranged from zero to 75 kWh/year. Under the standard operating schedule (on 20% of nights and weekends), the savings are about 200 kWh/year. An audit of power management features and configurations for several dozen Energy Star machines found only 11% of CPUs fully enabled and about two thirds of monitors were successfully power managed. The highest priority for greater power management savings is to enable monitors, as opposed to CPUs, since they are generally easier to configure, less likely to interfere with system operation, and have greater savings. The difficulties in properly configuring PCs and monitors are the largest current barrier to achieving the savings potential from power management.
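The analysis method described (hours per operating mode combined with measured power per mode, compared against a no-power-management baseline) is a weighted sum. A sketch with hypothetical measurements, not the paper's data:

```python
def annual_energy_kwh(hours_per_mode, watts_per_mode):
    """Annual energy (kWh) from hours spent in each operating mode
    (off / low-power / full-power) and measured power draw per mode."""
    return sum(h * w for h, w in zip(hours_per_mode, watts_per_mode)) / 1000.0

watts = (1.0, 8.0, 70.0)       # assumed off / sleep / on power of a monitor
managed  = annual_energy_kwh((5260, 1500, 2000), watts)
baseline = annual_energy_kwh((5260, 0, 3500), watts)   # sleep time spent on
savings  = baseline - managed                           # kWh/year
```

The same arithmetic explains why enabling monitor power management dominates: the on/sleep power gap is large and the idle hours are many.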

  3. Low-energy light bulbs, computers, tablets and the blue light hazard.

    PubMed

    O'Hagan, J B; Khazova, M; Price, L L A

    2016-02-01

    The introduction of low energy lighting and the widespread use of computer and mobile technologies have changed the exposure of human eyes to light. Occasional claims that the light sources with emissions containing blue light may cause eye damage raise concerns in the media. The aim of the study was to determine if it was appropriate to issue advice on the public health concerns. A number of sources were assessed and the exposure conditions were compared with international exposure limits, and the exposure likely to be received from staring at a blue sky. None of the sources assessed approached the exposure limits, even for extended viewing times. PMID:26768920

  4. Using an iterative eigensolver to compute vibrational energies with phase-spaced localized basis functions

    SciTech Connect

Brown, James; Carrington, Tucker

    2015-07-28

    Although phase-space localized Gaussians are themselves poor basis functions, they can be used to effectively contract a discrete variable representation basis [A. Shimshovitz and D. J. Tannor, Phys. Rev. Lett. 109, 070402 (2012)]. This works despite the fact that elements of the Hamiltonian and overlap matrices labelled by discarded Gaussians are not small. By formulating the matrix problem as a regular (i.e., not a generalized) matrix eigenvalue problem, we show that it is possible to use an iterative eigensolver to compute vibrational energy levels in the Gaussian basis.
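The payoff of casting the problem as a regular eigenvalue problem is that standard iterative (Lanczos-type) eigensolvers apply directly. The toy below illustrates that step on a 1D harmonic oscillator discretized on a grid and solved with scipy's `eigsh`; it stands in for, and does not reproduce, the phase-space Gaussian basis of the paper:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# 1D harmonic oscillator, hbar = m = omega = 1: H = -0.5 d^2/dx^2 + 0.5 x^2
n, L = 600, 12.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
kin = -0.5 * diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2
H = kin + diags(0.5 * x**2)

# Lanczos iteration for the lowest vibrational levels
# (exact values: 0.5, 1.5, 2.5)
levels = np.sort(eigsh(H, k=3, which='SA', return_eigenvectors=False))
```

Only matrix-vector products with H are needed, which is what makes iterative eigensolvers attractive when the (contracted) basis is large.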

  5. Using an iterative eigensolver to compute vibrational energies with phase-space localized basis functions.

    PubMed

    Brown, James; Carrington, Tucker

    2015-07-28

    Although phase-space localized Gaussians are themselves poor basis functions, they can be used to effectively contract a discrete variable representation basis [A. Shimshovitz and D. J. Tannor, Phys. Rev. Lett. 109, 070402 (2012)]. This works despite the fact that elements of the Hamiltonian and overlap matrices labelled by discarded Gaussians are not small. By formulating the matrix problem as a regular (i.e., not a generalized) matrix eigenvalue problem, we show that it is possible to use an iterative eigensolver to compute vibrational energy levels in the Gaussian basis. PMID:26233104

  6. Using an iterative eigensolver to compute vibrational energies with phase-space localized basis functions

    NASA Astrophysics Data System (ADS)

    Brown, James; Carrington, Tucker

    2015-07-01

    Although phase-space localized Gaussians are themselves poor basis functions, they can be used to effectively contract a discrete variable representation basis [A. Shimshovitz and D. J. Tannor, Phys. Rev. Lett. 109, 070402 (2012)]. This works despite the fact that elements of the Hamiltonian and overlap matrices labelled by discarded Gaussians are not small. By formulating the matrix problem as a regular (i.e., not a generalized) matrix eigenvalue problem, we show that it is possible to use an iterative eigensolver to compute vibrational energy levels in the Gaussian basis.
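    As a minimal illustration of the key numerical point, that a regular symmetric eigenvalue problem can be handed to an iterative (Krylov) eigensolver needing only matrix-vector products, here is a sketch using SciPy on a toy grid Hamiltonian rather than the paper's phase-space Gaussian basis:

    ```python
    # Toy example (not the paper's method): lowest vibrational levels of a 1-D
    # harmonic oscillator, H = -(1/2) d^2/dx^2 + (1/2) x^2 in atomic units,
    # discretized by finite differences. Exact levels are E_n = n + 1/2.
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import eigsh

    n = 500
    x = np.linspace(-8.0, 8.0, n)
    dx = x[1] - x[0]
    kinetic = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) * (-0.5 / dx**2)
    potential = diags(0.5 * x**2)
    H = (kinetic + potential).tocsr()

    # 'SA' requests the smallest algebraic eigenvalues; the Lanczos iteration
    # touches H only through matrix-vector products.
    levels, _ = eigsh(H, k=4, which="SA")
    print(np.round(levels, 3))  # approximately [0.5, 1.5, 2.5, 3.5]
    ```

    The same call pattern applies to any symmetric matrix in a regular (non-generalized) eigenvalue problem, which is precisely why the reformulation in the abstract matters.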

  7. Multilayered perceptron neural networks to compute energy losses in magnetic cores

    NASA Astrophysics Data System (ADS)

    Kucuk, Ilker

    2006-12-01

    This paper presents a new approach based on multilayered perceptrons (MLPs) to compute the specific energy losses of toroidal wound cores built from 3% SiFe 0.27 mm thick M4, 0.1 and 0.08 mm thin gauge electrical steel strips. The MLP has been trained by a back-propagation and extended delta-bar-delta learning algorithm. The results obtained by using the MLP model were compared with a commonly used conventional method. The comparison has shown that the proposed model improved loss estimation with respect to the conventional method.
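    A toy sketch of the general technique (a one-hidden-layer perceptron trained by plain back-propagation, not the paper's network or its measured data); the Steinmetz-style training data are synthetic and purely illustrative:

    ```python
    # Illustrative MLP regression: map (frequency, peak flux density) to a
    # specific-loss target. Data are synthetic, from a Steinmetz-style formula.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(0.1, 1.0, size=(200, 2))            # columns: f, B (normalized)
    y = (X[:, 0] ** 1.6 * X[:, 1] ** 2).reshape(-1, 1)  # synthetic core loss

    # One hidden layer of tanh units, linear output
    W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
    lr = 0.05

    def forward(X):
        h = np.tanh(X @ W1 + b1)
        return h, h @ W2 + b2

    _, pred0 = forward(X)
    mse0 = float(np.mean((pred0 - y) ** 2))

    for _ in range(2000):                    # plain back-propagation loop
        h, pred = forward(X)
        err = pred - y                       # dL/dpred up to a constant factor
        gW2 = h.T @ err / len(X); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)     # tanh derivative
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

    _, pred = forward(X)
    mse = float(np.mean((pred - y) ** 2))
    print(f"MSE before/after training: {mse0:.4f} -> {mse:.4f}")
    ```

    The paper's extended delta-bar-delta rule adapts the learning rate per weight; the fixed `lr` here is a simplification.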

  8. User's manual: Computer-aided design programs for inductor-energy-storage dc-to-dc electronic power converters

    NASA Technical Reports Server (NTRS)

    Huffman, S.

    1977-01-01

    Detailed instructions on the use of two computer-aided-design programs for designing the energy storage inductor for single winding and two winding dc to dc converters are provided. Step by step procedures are given to illustrate the formatting of user input data. The procedures are illustrated by eight sample design problems which include the user input and the computer program output.

  9. Computer simulation to predict energy use, greenhouse gas emissions and costs for production of fluid milk using alternative processing methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Computer simulation is a useful tool for benchmarking the electrical and fuel energy consumption and water use in a fluid milk plant. In this study, a computer simulation model of the fluid milk process based on high temperature short time (HTST) pasteurization was extended to include models for pr...

  10. Computational Study of Environmental Effects on Torsional Free Energy Surface of N-Acetyl-N'-methyl-L-alanylamide Dipeptide

    ERIC Educational Resources Information Center

    Carlotto, Silvia; Zerbetto, Mirco

    2014-01-01

    We propose an articulated computational experiment in which both quantum mechanics (QM) and molecular mechanics (MM) methods are employed to investigate environment effects on the free energy surface for the backbone dihedral angles rotation of the small dipeptide N-Acetyl-N'-methyl-L-alanylamide. This computation exercise is appropriate for an…

  11. Dual-energy computed tomography for detection of coronary artery disease

    PubMed Central

    Danad, Ibrahim; Ó Hartaigh, Bríain; Min, James K.

    2016-01-01

    Recent technological advances in computed tomography (CT) have fulfilled the prerequisites for the cardiac application of dual-energy CT (DECT) imaging. By exploiting the unique characteristics of materials when exposed to two different x-ray energies, DECT holds great promise for the diagnosis and management of coronary artery disease. It allows for the assessment of myocardial perfusion to discern the hemodynamic significance of coronary disease and possesses high accuracy for the detection and characterization of coronary plaques, while facilitating reductions in radiation dose. As such, DECT has enabled cardiac CT to advance beyond the mere detection of coronary stenosis, expanding its role in the evaluation and management of coronary atherosclerosis. PMID:26549789

  12. Real Gas Computation Using an Energy Relaxation Method and High-Order WENO Schemes

    NASA Technical Reports Server (NTRS)

    Montarnal, Philippe; Shu, Chi-Wang

    1998-01-01

    In this paper, we use a recently developed energy relaxation theory by Coquel and Perthame and high order weighted essentially non-oscillatory (WENO) schemes to simulate the Euler equations of real gas. The main idea is an energy decomposition into two parts: one part is associated with a simpler pressure law and the other part (the nonlinear deviation) is convected with the flow. A relaxation process is performed for each time step to ensure that the original pressure law is satisfied. The necessary characteristic decomposition for the high order WENO schemes is performed on the characteristic fields based on the first part. The algorithm only calls for the original pressure law once per grid point per time step, without the need to compute its derivatives or any Riemann solvers. Both one and two dimensional numerical examples are shown to illustrate the effectiveness of this approach.

  13. Lookup tables to compute high energy cosmic ray induced atmospheric ionization and changes in atmospheric chemistry

    SciTech Connect

    Atri, Dimitra; Melott, Adrian L.; Thomas, Brian C. E-mail: melott@ku.edu

    2010-05-01

    A variety of events such as gamma-ray bursts and supernovae may expose the Earth to an increased flux of high-energy cosmic rays, with potentially important effects on the biosphere. Existing atmospheric chemistry software does not have the capability of incorporating the effects of substantial cosmic ray flux above 10 GeV. An atmospheric code, the NASA-Goddard Space Flight Center two-dimensional (latitude, altitude) time-dependent atmospheric model (NGSFC), is used to study atmospheric chemistry changes. Using CORSIKA, we have created tables that can be used to compute high energy cosmic ray (10 GeV–1 PeV) induced atmospheric ionization and also, with the use of the NGSFC code, can be used to simulate the resulting atmospheric chemistry changes. We discuss the tables, their uses, weaknesses, and strengths.

  14. HEPMath 1.4: A mathematica package for semi-automatic computations in high energy physics

    NASA Astrophysics Data System (ADS)

    Wiebusch, Martin

    2015-10-01

    This article introduces the Mathematica package HEPMath which provides a number of utilities and algorithms for High Energy Physics computations in Mathematica. Its functionality is similar to packages like FormCalc or FeynCalc, but it takes a more complete and extensible approach to implementing common High Energy Physics notations in the Mathematica language, in particular those related to tensors and index contractions. It also provides a more flexible method for the generation of numerical code which is based on new features for C code generation in Mathematica. In particular it can automatically generate Python extension modules which make the compiled functions callable from Python, thus eliminating the need to write any code in a low-level language like C or Fortran. It also contains seamless interfaces to LHAPDF, FeynArts, and LoopTools.

  15. The potential of computed crystal energy landscapes to aid solid-form development.

    PubMed

    Price, Sarah L; Reutzel-Edens, Susan M

    2016-06-01

    Solid-form screening to identify all solid forms of an active pharmaceutical ingredient (API) has become increasingly important in ensuring the quality by design of pharmaceutical products and their manufacturing processes. However, despite considerable enlargement of the range of techniques that have been shown capable of producing novel solid forms, it is possible that practically important forms might not be found in the short timescales currently allowed for solid-form screening. Here, we report on the state-of-the-art use of computed crystal energy landscapes to complement pharmaceutical solid-form screening. We illustrate how crystal energy landscapes can help establish molecular-level understanding of the crystallization behavior of APIs and enhance the ability of solid-form screening to facilitate pharmaceutical development. PMID:26851154

  16. Dual-energy computed tomography for detection of coronary artery disease.

    PubMed

    Danad, Ibrahim; Ó Hartaigh, Bríain; Min, James K

    2015-12-01

    Recent technological advances in computed tomography (CT) have fulfilled the prerequisites for the cardiac application of dual-energy CT (DECT) imaging. By exploiting the unique characteristics of materials when exposed to two different x-ray energies, DECT holds great promise for the diagnosis and management of coronary artery disease. It allows for the assessment of myocardial perfusion to discern the hemodynamic significance of coronary disease and possesses high accuracy for the detection and characterization of coronary plaques, while facilitating reductions in radiation dose. As such, DECT has enabled cardiac CT to advance beyond the mere detection of coronary stenosis, expanding its role in the evaluation and management of coronary atherosclerosis. PMID:26549789

  17. PREFACE: International Conference on Computing in High Energy and Nuclear Physics (CHEP 2010)

    NASA Astrophysics Data System (ADS)

    Lin, Simon C.; Shen, Stella; Neufeld, Niko; Gutsche, Oliver; Cattaneo, Marco; Fisk, Ian; Panzer-Steindel, Bernd; Di Meglio, Alberto; Lokajicek, Milos

    2011-12-01

    The International Conference on Computing in High Energy and Nuclear Physics (CHEP) was held at Academia Sinica in Taipei from 18-22 October 2010. CHEP is a major series of international conferences for physicists and computing professionals from the worldwide High Energy and Nuclear Physics community, Computer Science, and Information Technology. The CHEP conference provides an international forum to exchange information on computing progress and needs for the community, and to review recent, ongoing and future activities. CHEP conferences are held at roughly 18 month intervals, alternating between Europe, Asia, America and other parts of the world. Recent CHEP conferences have been held in Prague, Czech Republic (2009); Victoria, Canada (2007); Mumbai, India (2006); Interlaken, Switzerland (2004); San Diego, California (2003); Beijing, China (2001); and Padova, Italy (2000). CHEP 2010 was organized by the Academia Sinica Grid Computing Centre. There was an International Advisory Committee (IAC) setting the overall themes of the conference, a Programme Committee (PC) responsible for the content, as well as a Conference Secretariat responsible for the conference infrastructure. There were over 500 attendees, with a program that included plenary sessions of invited speakers, a number of parallel sessions comprising around 260 oral and 200 poster presentations, and industrial exhibitions. We thank all the presenters for the excellent scientific content of their contributions to the conference. Conference tracks covered topics on Online Computing; Event Processing; Software Engineering, Data Stores, and Databases; Distributed Processing and Analysis; Computing Fabrics and Networking Technologies; Grid and Cloud Middleware; and Collaborative Tools. The conference included excursions to various attractions in Northern Taiwan, including Sanhsia Tsu Shih Temple, Yingko, Chiufen Village, the Northeast Coast National Scenic Area, Keelung, Yehliu Geopark, and Wulai Aboriginal Village.

  18. Software Aspects of IEEE Floating-Point Computations for Numerical Applications in High Energy Physics

    SciTech Connect

    2010-05-11

    Floating-point computations are at the heart of much of the computing done in high energy physics. The correctness, speed and accuracy of these computations are of paramount importance. The lack of any of these characteristics can mean the difference between new, exciting physics and an embarrassing correction. This talk will examine practical aspects of IEEE 754-2008 floating-point arithmetic as encountered in HEP applications. After describing the basic features of IEEE floating-point arithmetic, the presentation will cover: common hardware implementations (SSE, x87); techniques for improving the accuracy of summation, multiplication and data interchange; compiler options for gcc and icc affecting floating-point operations; and hazards to be avoided. About the speaker: Jeffrey M Arnold is a Senior Software Engineer in the Intel Compiler and Languages group at Intel Corporation. He has been part of the Digital->Compaq->Intel compiler organization for nearly 20 years; part of that time, he worked on both low- and high-level math libraries. Prior to that, he was in the VMS Engineering organization at Digital Equipment Corporation. In the late 1980s, Jeff spent 2½ years at CERN as part of the CERN/Digital Joint Project. In 2008, he returned to CERN to spend 10 weeks working with CERN/openlab. Since that time, he has returned to CERN multiple times to teach at openlab workshops and consult with various LHC experiments. Jeff received his Ph.D. in physics from Case Western Reserve University.
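    One of the summation-accuracy techniques this talk covers can be illustrated briefly; the example below is a generic sketch of compensated (Kahan) summation, not material taken from the talk itself:

    ```python
    # Kahan (compensated) summation carries a correction term so that tiny
    # addends are not lost against a large accumulator in IEEE arithmetic.

    def naive_sum(values):
        total = 0.0
        for v in values:
            total += v              # each 1e-16 is rounded away against 1.0
        return total

    def kahan_sum(values):
        total = 0.0
        c = 0.0                     # running compensation for lost low-order bits
        for v in values:
            y = v - c               # the addend, corrected by what was lost before
            t = total + y
            c = (t - total) - y     # the part of y that did not make it into t
            total = t
        return total

    values = [1.0] + [1e-16] * 1_000_000
    print(naive_sum(values))        # 1.0 -- the million tiny terms vanish
    print(kahan_sum(values))        # ~1.0000000001 -- their sum is recovered
    ```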

  19. Computing the energy of a water molecule using multideterminants: A simple, efficient algorithm

    SciTech Connect

    Clark, Bryan K.; Morales, Miguel A; Mcminis, Jeremy; Kim, Jeongnim; Scuseria, Gustavo E

    2011-01-01

    Quantum Monte Carlo (QMC) methods such as variational Monte Carlo and fixed node diffusion Monte Carlo depend heavily on the quality of the trial wave function. Although Slater-Jastrow wave functions are the most commonly used variational ansatz in electronic structure, more sophisticated wave functions are critical to ascertaining new physics. One such wave function is the multi-Slater-Jastrow wave function, which consists of a Jastrow function multiplied by the sum of Slater determinants. In this paper we describe a method for working with these wave functions in QMC codes that is easy to implement, efficient both in computational speed as well as memory, and easily parallelized. The computational cost scales quadratically with particle number, making this scaling no worse than the single determinant case, and linearly with the total number of excitations. Additionally, we implement this method and use it to compute the ground state energy of a water molecule. © 2011 American Institute of Physics. [doi:10.1063/1.3665391]

  20. Method for computing marginal costs associated with on-site energy technologies

    SciTech Connect

    Bright, R.; Davitian, H.

    1980-08-01

    A method for calculating long-run marginal costs for an electric utility is described. The method is especially suitable for computing the marginal costs associated with the use of small on-site energy technologies, i.e., cogenerators, solar heating and hot water systems, wind generators, etc., which are interconnected with electric utilities. In particular, both the costs a utility avoids when power is delivered to it from a facility with an on-site generator and the marginal cost to the utility of supplementary power sold to the facility can be calculated. A utility capacity expansion model is used to compute changes in the utility's costs when loads are modified by the use of the on-site technology. Changes in capacity-related costs and production costs are thus computed in an internally consistent manner. The variable nature of the generation/load pattern of the on-site technology is treated explicitly. The method yields several measures of utility costs that can be used to develop rates based on marginal avoided costs for on-site technologies as well as marginal cost rates for conventional utility customers.
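    The avoided-cost idea can be sketched as a difference of system costs with and without the on-site generator's output; the supply curve and load profile below are invented for illustration and stand in for the report's full capacity-expansion model:

    ```python
    # Simplified avoided-cost sketch (not the report's model): the value of
    # on-site generation is the change in utility production cost when the load
    # it serves is removed. All numbers are assumed for illustration.

    def system_cost(loads_mw, marginal_cost):
        """Total hourly production cost ($) given loads (MW) and a $/MWh curve."""
        return sum(marginal_cost(l) * l for l in loads_mw)

    def marginal_cost(load_mw):
        # Stylized supply curve: cheap base load, costlier peaking units.
        return 20.0 + 0.05 * load_mw        # $/MWh rises with system load

    base_load  = [800, 900, 1100, 1000]     # hourly system load, MW
    onsite_gen = [0, 50, 80, 30]            # on-site output, MW (e.g. cogen/solar)
    net_load   = [l - g for l, g in zip(base_load, onsite_gen)]

    avoided = system_cost(base_load, marginal_cost) - system_cost(net_load, marginal_cost)
    print(f"avoided cost: ${avoided:,.0f}")
    ```

    Because the marginal-cost curve slopes upward, generation delivered during high-load hours avoids more cost per MWh, which is the "variable generation/load pattern" effect the abstract highlights.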

  1. Software Aspects of IEEE Floating-Point Computations for Numerical Applications in High Energy Physics

    ScienceCinema

    None

    2011-10-06

    Floating-point computations are at the heart of much of the computing done in high energy physics. The correctness, speed and accuracy of these computations are of paramount importance. The lack of any of these characteristics can mean the difference between new, exciting physics and an embarrassing correction. This talk will examine practical aspects of IEEE 754-2008 floating-point arithmetic as encountered in HEP applications. After describing the basic features of IEEE floating-point arithmetic, the presentation will cover: common hardware implementations (SSE, x87); techniques for improving the accuracy of summation, multiplication and data interchange; compiler options for gcc and icc affecting floating-point operations; and hazards to be avoided. About the speaker: Jeffrey M Arnold is a Senior Software Engineer in the Intel Compiler and Languages group at Intel Corporation. He has been part of the Digital->Compaq->Intel compiler organization for nearly 20 years; part of that time, he worked on both low- and high-level math libraries. Prior to that, he was in the VMS Engineering organization at Digital Equipment Corporation. In the late 1980s, Jeff spent 2½ years at CERN as part of the CERN/Digital Joint Project. In 2008, he returned to CERN to spend 10 weeks working with CERN/openlab. Since that time, he has returned to CERN multiple times to teach at openlab workshops and consult with various LHC experiments. Jeff received his Ph.D. in physics from Case Western Reserve University.

  2. A Computational Approach for Model Update of an LS-DYNA Energy Absorbing Cell

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Jackson, Karen E.; Kellas, Sotiris

    2008-01-01

    NASA and its contractors are working on structural concepts for absorbing impact energy of aerospace vehicles. Recently, concepts in the form of multi-cell honeycomb-like structures designed to crush under load have been investigated for both space and aeronautics applications. Efforts to understand these concepts are progressing from tests of individual cells to tests of systems with hundreds of cells. Because of fabrication irregularities, geometry irregularities, and material properties uncertainties, the problem of reconciling analytical models, in particular LS-DYNA models, with experimental data is a challenge. A first look at the correlation results between single cell load/deflection data with LS-DYNA predictions showed problems which prompted additional work in this area. This paper describes a computational approach that uses analysis of variance, deterministic sampling techniques, response surface modeling, and genetic optimization to reconcile test with analysis results. Analysis of variance provides a screening technique for selection of critical parameters used when reconciling test with analysis. In this study, complete ignorance of the parameter distribution is assumed and, therefore, the value of any parameter within the range that is computed using the optimization procedure is considered to be equally likely. Mean values from tests are matched against LS-DYNA solutions by minimizing the square error using a genetic optimization. The paper presents the computational methodology along with results obtained using this approach.

  3. Can a dual-energy computed tomography predict unsuitable stone components for extracorporeal shock wave lithotripsy?

    PubMed Central

    Ahn, Sung Hoon; Oh, Tae Hoon

    2015-01-01

    Purpose To assess the potential of dual-energy computed tomography (DECT) to identify urinary stone components, particularly uric acid and calcium oxalate monohydrate, which are unsuitable for extracorporeal shock wave lithotripsy (ESWL). Materials and Methods This clinical study included 246 patients who underwent removal of urinary stones and an analysis of stone components between November 2009 and August 2013. All patients received preoperative DECT using two energy values (80 kVp and 140 kVp). Hounsfield units (HU) were measured and matched to the stone component. Results Significant differences in HU values were observed between uric acid and nonuric acid stones at the 80 and 140 kVp energy values (p<0.001). All uric acid stones were red on color-coded DECT images, whereas 96.3% of the nonuric acid stones were blue. Patients with calcium oxalate stones were divided into two groups according to the amount of monohydrate (calcium oxalate monohydrate group: monohydrate≥90%, calcium oxalate dihydrate group: monohydrate<90%). Significant differences in HU values were detected between the two groups at both energy values (p<0.001). Conclusions DECT improved the characterization of urinary stone components and was a useful method for identifying uric acid and calcium oxalate monohydrate stones, which are unsuitable for ESWL. PMID:26366277
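    The dual-energy discrimination idea reduces to comparing attenuation at the two tube voltages: low-Z uric acid stones attenuate similarly at 80 and 140 kVp, while calcium-bearing stones attenuate markedly more at 80 kVp. The cutoff in this sketch is invented for illustration, not a threshold from the study:

    ```python
    # Schematic dual-energy stone classifier. The ratio cutoff is hypothetical;
    # it is not a value reported in this study.

    def classify_stone(hu_80kvp, hu_140kvp, ratio_cutoff=1.1):
        """Low HU(80)/HU(140) ratio suggests uric acid; a high ratio suggests
        a calcium-bearing (non-uric acid) stone."""
        ratio = hu_80kvp / hu_140kvp
        return "uric acid" if ratio < ratio_cutoff else "non-uric acid"

    print(classify_stone(420, 400))   # ratio 1.05 -> uric acid
    print(classify_stone(900, 600))   # ratio 1.50 -> non-uric acid
    ```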

  4. Dual- and Multi-Energy Computed Tomography: Principles, Technical Approaches, and Clinical Applications

    PubMed Central

    McCollough, Cynthia; Leng, Shuai; Yu, Lifeng; Fletcher, Joel G.

    2015-01-01

    In x-ray computed tomography (CT), materials having different elemental compositions can be represented by identical pixel values in a CT image (i.e. CT number values), depending on the materials’ mass density. Thus, the differentiation and classification of different tissue types and contrast agents can be extremely challenging. In dual-energy CT (DECT), an additional attenuation measurement is obtained with a second x-ray spectrum (i.e. a second “energy”), allowing the differentiation of multiple materials. Alternatively, this allows quantification of the mass density of two or three materials in a mixture with known elemental composition. Recent advances in the use of energy-resolving, photon-counting detectors for CT imaging suggest the ability to acquire data in multiple energy bins, which is expected to further improve the signal-to-noise ratio for material-specific imaging. In this work, the underlying motivation and physical principles of dual- or multi-energy CT are reviewed and each of the current technical approaches described. In addition, current and evolving clinical applications are introduced. PMID:26302388

  5. Thermodynamic analysis of five compressed-air energy-storage cycles. [Using CAESCAP computer code

    SciTech Connect

    Fort, J. A.

    1983-03-01

    One important aspect of the Compressed-Air Energy-Storage (CAES) Program is the evaluation of alternative CAES plant designs. The thermodynamic performance of the various configurations is particularly critical to the successful demonstration of CAES as an economically feasible energy-storage option. A computer code, the Compressed-Air Energy-Storage Cycle-Analysis Program (CAESCAP), was developed in 1982 at the Pacific Northwest Laboratory. This code was designed specifically to calculate overall thermodynamic performance of proposed CAES-system configurations. The results of applying this code to the analysis of five CAES plant designs are presented in this report. The designs analyzed were: conventional CAES; adiabatic CAES; hybrid CAES; pressurized fluidized-bed CAES; and direct coupled steam-CAES. Inputs to the code were based on published reports describing each plant cycle. For each cycle analyzed, CAESCAP calculated the thermodynamic station conditions and individual-component efficiencies, as well as overall cycle-performance-parameter values. These data were then used to diagram the availability and energy flow for each of the five cycles. The resulting diagrams graphically illustrate the overall thermodynamic performance inherent in each plant configuration, and enable a more accurate and complete understanding of each design.

  6. Computational modelling of protein interactions: energy minimization for the refinement and scoring of association decoys.

    PubMed

    Dibrov, Alexander; Myal, Yvonne; Leygue, Etienne

    2009-12-01

    The prediction of protein-protein interactions based on independently obtained structural information for each interacting partner remains an important challenge in computational chemistry. Procedures where hypothetical interaction models (or decoys) are generated, then ranked using a biochemically relevant scoring function, have been garnering interest as an avenue for addressing such challenges. The program PatchDock has been shown to produce reasonable decoys for modeling the association between pig alpha-amylase and the VH-domains of a camelid antibody raised against it. We designed a biochemically relevant method by which PatchDock decoys could be ranked in order to separate near-native structures from false positives. Several thousand steps of energy minimization were used to simulate induced fit within the otherwise rigid decoys and to rank them. We applied a partial free energy function to rank each of the binding modes, improving discrimination between near-native structures and false positives. Sorting decoys according to strain energy increased the proportion of near-native decoys near the bottom of the ranked list. Additionally, we propose a novel method which utilizes regression analysis for the selection of minimization convergence criteria and provides approximation of the partial free energy function as the number of algorithmic steps approaches infinity. PMID:19774465

  7. Computing dispersive, polarizable, and electrostatic shifts of excitation energy in supramolecular systems: PTCDI crystal.

    PubMed

    Megow, Jörg

    2016-09-01

    The gas-to-crystal shift denotes the shift of electronic excitation energies, i.e., the difference between ground and excited state energies, for a molecule transferred from the gas to the bulk phase. The contributions to the gas-to-crystal shift comprise electrostatic as well as inductive polarization and dispersive energy shifts of the molecular excitation energies due to interaction with environmental molecules. For the example of 3,4,9,10-perylene-tetracarboxylic-diimide (PTCDI) bulk, the contributions to the gas-to-crystal shift are investigated. In the present work, electrostatic interaction is calculated via Coulomb interaction of partial charges, while inductive and dispersive interactions are obtained using respective sum-over-states expressions. The coupling of higher transition densities for the first 4500 excited states of PTCDI was computed using transition partial charges based on an atomistic model of PTCDI bulk obtained from molecular dynamics simulations. As a result it is concluded that, for the investigated model system of a PTCDI crystal, the gas-to-crystal shift is dominated by dispersive interaction. PMID:27608991

  8. Computer simulation of an alternate-energy-based, high-density brooding facility

    SciTech Connect

    Simmons, J.D.

    1986-01-01

    A computer model was developed to simulate a poultry brooding facility characterized by high-density cage or floor brooding, environmental housing, ventilation heat recovery, solar energy collection, and biogas generation. Repeated simulations revealed the following: (1) Solar collection and ventilation heat recovery could reduce fossil fuel use by 12 and 91%, respectively. Combining solar collection and heat recovery may reduce fossil fuel use by only an additional 1.5%. (2) Methane generation can provide more energy on a yearly basis than is required for supplemental heat for brooding. Seasonal energy demands do not match supplies from methane generation and shortages may occur in winter as well as excesses in summer. A digester operated in the thermophilic temperature range produces more net energy than one operated in the mesophilic range. (3) Operating expenses for the simulated cage facility exceeded conventional brooding. (4) Relative humidity patterns of certain areas create the need for complex controls to properly maintain the internal environment. (5) Feed and fuel account for nearly 100% of the operating expenses of brooding. Controlling heat and ventilation with a microprocessor may be the only way to optimize the environment of a broiler brooding facility.

  9. PREFACE: 21st International Conference on Computing in High Energy and Nuclear Physics (CHEP2015)

    NASA Astrophysics Data System (ADS)

    Sakamoto, H.; Bonacorsi, D.; Ueda, I.; Lyon, A.

    2015-12-01

    The International Conference on Computing in High Energy and Nuclear Physics (CHEP) is a major series of international conferences intended to attract physicists and computing professionals to discuss recent developments and trends in software and computing for their research communities. Experts from the high energy and nuclear physics, computer science, and information technology communities attend CHEP events. This conference series provides an international forum to exchange experiences and the needs of a wide community, and to present and discuss recent, ongoing, and future activities. At the beginning of the successful series of CHEP conferences in 1985, the latest developments in embedded systems, networking, and vector and parallel processing were presented in Amsterdam. The software and computing ecosystem has massively evolved since then, and along this path each CHEP event has marked a step further. A vibrant community of experts on a wide range of different high-energy and nuclear physics experiments, as well as technology explorers and industry contacts, attends and discusses the present and future challenges, and shapes the future of an entire community. In such a rapidly evolving area, aiming to capture the state of the art in software and computing through a collection of proceedings papers in a journal is a big challenge. Due to the large attendance, the final papers appear in the journal a few months after the conference is over. Additionally, the contributions often report about studies at very heterogeneous statuses, namely studies that are completed, or are just started, or yet to be done. It is not uncommon that by the time a specific paper appears in the journal some of the work is over a year old, or the investigation actually happened in different directions and with different methodologies than originally presented at the conference just a few months before. And by the time the proceedings appear in journal form, new ideas and explorations have

  10. Assessing energy efficiencies and greenhouse gas emissions under bioethanol-oriented paddy rice production in northern Japan.

    PubMed

    Koga, Nobuhisa; Tajima, Ryosuke

    2011-03-01

    To establish energetically and environmentally viable paddy rice-based bioethanol production systems in northern Japan, it is important to implement appropriately selected agronomic practice options during the rice cultivation step. In this context, effects of rice variety (conventional vs. high-yielding) and rice straw management (return to vs. removal from the paddy field) on energy inputs from fuels and consumption of materials, greenhouse gas emissions (fuel and material consumption-derived CO(2) emissions as well as paddy soil CH(4) and N(2)O emissions), and ethanol yields were assessed. The estimated ethanol yield from the high-yielding rice variety, "Kita-aoba", was 2.94 kL ha(-1), a 32% increase over the conventional rice variety, "Kirara 397". Under conventional rice production in northern Japan (conventional rice variety and straw returned to the paddy), raising seedlings, mechanical field operations, transportation of harvested unhulled brown rice, and consumption of materials (seeds, fertilizers, biocides and agricultural machinery) amounted to 28.5 GJ ha(-1) in energy inputs. The total energy input was increased by 14% by using the high-yielding variety and straw removal, owing to increased requirements for fuels in harvesting and transporting harvested rice as well as in collecting, loading and transporting rice straw. In terms of energy efficiency, the variation among rice variety and straw management scenarios was small (28.5-32.6 GJ ha(-1) or 10.1-14.0 MJ L(-1)). Meanwhile, CO(2)-equivalent greenhouse gas emissions varied considerably from scenario to scenario, as straw management had significant impacts on CH(4) emissions from paddy soils. When rice straw was incorporated into the soil, total CO(2)-equivalent greenhouse gas emissions for "Kirara 397" and "Kita-aoba" were 25.5 and 28.2 Mg CO(2) ha(-1), respectively; however, these emissions were reduced notably for the two varieties when rice straw

  11. Model of the anisotropic behavior of doubly oriented and non-oriented materials using coenergy: Application to a large generator

    SciTech Connect

    Mekhiche, M.; Pera, T.; Marechal, Y.

    1995-05-01

    The anisotropic and nonlinear behavior of doubly oriented and non-oriented sheets is modeled using the coenergy density. These models have been implemented in a finite element computation. A large generator has been modeled, and the advantages of doubly oriented sheets over conventional non-oriented ones are shown.

  12. Protein:Ligand binding free energies: A stringent test for computational protein design.

    PubMed

    Druart, Karen; Palmai, Zoltan; Omarjee, Eyaz; Simonson, Thomas

    2016-02-01

    A computational protein design method is extended to allow Monte Carlo simulations where two ligands are titrated into a protein binding pocket, yielding binding free energy differences. These provide a stringent test of the physical model, including the energy surface and sidechain rotamer definition. As a test, we consider tyrosyl-tRNA synthetase (TyrRS), which has been extensively redesigned experimentally. We consider its specificity for its substrate l-tyrosine (l-Tyr), compared to the analogs d-Tyr, p-acetyl-, and p-azido-phenylalanine (ac-Phe, az-Phe). We simulate l- and d-Tyr binding to TyrRS and six mutants, and compare the structures and binding free energies to a more rigorous "MD/GBSA" procedure: molecular dynamics with explicit solvent for structures and a Generalized Born + Surface Area model for binding free energies. Next, we consider l-Tyr, ac- and az-Phe binding to six other TyrRS variants. The titration results are sensitive to the precise rotamer definition, which involves a short energy minimization for each sidechain pair to help relax bad contacts induced by the discrete rotamer set. However, when designed mutant structures are rescored with a standard GBSA energy model, results agree well with the more rigorous MD/GBSA. As a third test, we redesign three amino acid positions in the substrate coordination sphere, with either l-Tyr or d-Tyr as the ligand. For two, we obtain good agreement with experiment, recovering the wildtype residue when l-Tyr is the ligand and a d-Tyr specific mutant when d-Tyr is the ligand. For the third, we recover His with either ligand, instead of wildtype Gln. PMID:26503829

  13. PREFACE: International Conference on Computing in High Energy and Nuclear Physics (CHEP'09)

    NASA Astrophysics Data System (ADS)

    Gruntorad, Jan; Lokajicek, Milos

    2010-11-01

    The 17th International Conference on Computing in High Energy and Nuclear Physics (CHEP) was held on 21-27 March 2009 in Prague, Czech Republic. CHEP is a major series of international conferences for physicists and computing professionals from the worldwide High Energy and Nuclear Physics community, Computer Science, and Information Technology. The CHEP conference provides an international forum to exchange information on computing experience and needs for the community, and to review recent, ongoing and future activities. Recent conferences were held in Victoria, Canada in 2007, Mumbai, India in 2006, Interlaken, Switzerland in 2004, San Diego, USA in 2003, Beijing, China in 2001, and Padua, Italy in 2000. The CHEP'09 conference had 600 attendees, with a program that included plenary sessions of invited oral presentations, a number of parallel sessions comprising 200 oral and 300 poster presentations, and an industrial exhibition. We thank all the presenters for the excellent scientific content of their contributions to the conference. Conference tracks covered topics on Online Computing, Event Processing, Software Components, Tools and Databases, Hardware and Computing Fabrics, Grid Middleware and Networking Technologies, Distributed Processing and Analysis, and Collaborative Tools. The conference included excursions to Prague and other Czech cities and castles, and a banquet held at the Zofin palace in Prague. The next CHEP conference will be held in Taipei, Taiwan on 18-22 October 2010. We would like to thank the Ministry of Education, Youth and Sports of the Czech Republic and the EU ACEOLE project for supporting the conference, as well as the commercial sponsors, the International Advisory Committee, and the Local Organizing Committee members representing the five collaborating Czech institutions: Jan Gruntorad (co-chair), CESNET, z.s.p.o., Prague; Andrej Kugler, Nuclear Physics Institute AS CR v.v.i., Rez; Rupert Leitner, Charles University in Prague, Faculty of Mathematics and

  14. Effects of molecular dipole orientation on the exciton binding energy of CH3NH3PbI3

    NASA Astrophysics Data System (ADS)

    Motta, Carlo; Mandal, Pankaj; Sanvito, Stefano

    2016-07-01

    We present a simple interacting tight-binding model for excitons, which is used to investigate the dependence of the exciton binding energy of CH3NH3PbI3 on the disorder induced by molecular motion at room temperature. The model is fitted to the electronic structure of CH3NH3PbI3 using data from density-functional theory and Born-Oppenheimer ab initio molecular dynamics, and it is solved in the mean-field approximation. When a finite-size analysis is performed to extract the energetics of the excitons at experimental concentrations, we find that disorder in general reduces the binding energy by about 10%. This suggests that the excitonic properties of CH3NH3PbI3 largely depend on the electronic structure of the PbI3 inorganic lattice.

  15. Energy Management of the Multi-Mission Space Exploration Vehicle Using a Goal-Oriented Control System

    NASA Technical Reports Server (NTRS)

    Braman, Julia M. B.; Wagner, David A.

    2010-01-01

    Safe human exploration in space missions requires careful management of limited resources such as breathable air and stored electrical energy. Daily activities for astronauts must be carefully planned with respect to such resources, and usage must be monitored as activities proceed to ensure that they can be completed while maintaining safe resource margins. Such planning and monitoring can be complex because they depend on models of resource usage, the activities being planned, and uncertainties. This paper describes a system, and the technology behind it, for energy management of the NASA-Johnson Space Center's Multi-Mission Space Exploration Vehicle (SEV) that provides, in an onboard advisory mode, situational awareness to astronauts and real-time guidance to mission operators. This new capability was evaluated during this year's Desert RATS (Research and Technology Studies) planetary exploration analog test in Arizona. This software aided ground operators and crew members in modifying the day's activities based on the real-time execution of the plan and on energy data received from the rovers.

  16. Refining the Resolution of Future Energy-Water Projection through High Performance Computing (Invited)

    NASA Astrophysics Data System (ADS)

    Kao, S.; Naz, B.; Ashfaq, M.; Mei, R.

    2013-12-01

    With the advance of high performance computing and more abundant historic observations, the resolution and accuracy of hydro-climate projections can now be efficiently improved. Based on the Coupled Model Intercomparison Project Phase 5 (CMIP5) climate projections, a series of hydro-climatic models and datasets, including Regional Climate Models, the Variable Infiltration Capacity (VIC) hydrologic model, historic runoff-generation relationships and a national hydropower dataset, are jointly utilized to project future hydropower production in various U.S. regions. To refine spatial resolution and reduce modeling uncertainty, particular effort was focused on calibrating the VIC hydrologic model at 4-km spatial resolution. Driven by 1980-2008 DAYMET meteorological observations (bias-adjusted using the PRISM dataset), the simulated VIC total runoff (baseflow + surface runoff) was calibrated to U.S. Geological Survey WaterWatch monthly runoff observations at 2107 hydrologic subbasins (HUC8s) in the conterminous U.S. Each HUC8 was subdivided into 16, 32, or 48 computation units for parallel computing. The simulation was conducted on Oak Ridge National Laboratory's Titan supercomputer, a Cray XK7 system with 18,688 computational nodes, each equipped with four quad-core CPUs and two GPU cards. To date, ~2.5 million CPU-hours (i.e., the number of CPUs multiplied by the average hours used by each CPU) have been used to improve the modeling performance for most of the HUC8s. Using the calibrated model, hydro-climate projections will be produced for various dynamically-downscaled CMIP5 simulations and will be utilized to project seasonal and monthly hydropower production for various U.S. regions. It is expected that with reduced modeling uncertainty, the regional water budget can be more accurately estimated, eventually leading to better simulation and allocation of limited water resources under the climate, energy, and water nexus.
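    The CPU-hour accounting defined parenthetically above is simple arithmetic; a minimal sketch (the 2500 x 1000 split of the quoted ~2.5 million CPU-hours is purely illustrative, not reported in the abstract):

```python
def cpu_hours(n_cpus: int, avg_hours_per_cpu: float) -> float:
    """CPU-hours as defined in the abstract: the number of CPUs
    multiplied by the average hours used by each CPU."""
    return n_cpus * avg_hours_per_cpu

# Illustrative only: 2500 CPUs running 1000 hours on average would
# account for the ~2.5 million CPU-hours quoted above.
total = cpu_hours(2500, 1000.0)
```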

  17. Oriental noodles.

    PubMed

    Hou, G

    2001-01-01

    Oriental noodles have been consumed for thousands of years and remain an important part of the diet of many Asians. There is a wide variety of noodles in Asia, with many local variations as a result of differences in culture, climate, region and a host of other factors. In this article noodle classification, formulation, processing and evaluation are reviewed, with emphasis on eight major types. Wheat quality requirements, basic flour specifications, ingredient functions, and production variables are identified for different noodles. In the evaluation of flour for noodle making, three key quality attributes are considered: processability, noodle color and texture. Noodle processing behavior is particularly important in modern industrial production. Each noodle type has its own unique color and texture characteristics. Flour color, protein content, ash content, yellow pigment and polyphenol oxidase activity are important factors responsible for noodle color. Starch characteristics and protein content and quality play major roles in governing the texture of cooked noodles. However, the relative importance of starch and protein varies considerably with noodle type. Starch pasting quality is the primary trait determining the eating quality of Japanese and Korean noodles, which are characterized by a soft and elastic texture, while protein quantity and strength are very important to Chinese-type noodles, which require a firm bite and chewy texture. Other factors, such as ingredients added to the noodle formula and processing variables used during noodle preparation, also affect cooked noodle texture. PMID:11285682

  18. A user oriented computer program for the analysis of microwave mixers, and a study of the effects of the series inductance and diode capacitance on the performance of some simple mixers

    NASA Technical Reports Server (NTRS)

    Siegel, P. H.; Kerr, A. R.

    1979-01-01

    A user oriented computer program for analyzing microwave and millimeter wave mixers with a single Schottky barrier diode of known I-V and C-V characteristics is described. The program first performs a nonlinear analysis to determine the diode conductance and capacitance waveforms produced by the local oscillator. A small signal linear analysis is then used to find the conversion loss, port impedances, and input noise temperature of the mixer. Thermal noise from the series resistance of the diode and shot noise from the periodically pumped current in the diode conductance are considered. The effects of the series inductance and diode capacitance on the performance of some simple mixer circuits using a conventional Schottky diode, a Schottky diode in which there is no capacitance variation, and a Mott diode are studied. It is shown that the parametric effects of the voltage dependent capacitance of a conventional Schottky diode may be either detrimental or beneficial depending on the diode and circuit parameters.

  19. Planning and drilling geothermal energy extraction hole EE-2: a precisely oriented and deviated hole in hot granitic rock

    SciTech Connect

    Helmick, C.; Koczan, S.; Pettitt, R.

    1982-04-01

    During the preceding work (Phase I) of the Hot Dry Rock (HDR) Geothermal Energy Project at Fenton Hill, two holes were drilled to a depth of nearly 3048 m (10,000 ft) and connected by a vertical hydraulic fracture. In this phase, water was pumped through the underground reservoir for approximately 417 days, producing an energy equivalent of 3 to 5 MW(t). Energy Extraction Hole No. 2 (EE-2) is the first of two deep holes that will be used in the Engineering-Resource Development System (Phase II) of the ongoing HDR Project of the Los Alamos National Laboratory. This phase of the work consists of drilling two parallel boreholes, inclined in their lower, open-hole sections at 35° to the vertical and separated by a vertical distance of 366 m (1200 ft) between the inclined parts of the drill holes. The holes will be connected by a series of vertical, hydraulically produced fractures in the Precambrian granitic rock complex. EE-2 was drilled to a depth of 4660 m (15,289 ft), where the bottom-hole temperature is approximately 320°C (608°F). Directional drilling techniques were used to control the azimuth and deviation of the hole. Upgrading of the temperature capability of existing hardware, and development of new equipment, was necessary to complete the drilling of the hole in the extremely hot, hard, and abrasive granitic formation. The drilling history and the problems with bits, directional tools, tubular goods, cementing, and logging are described. A discussion of the problems and recommendations for overcoming them are also presented.

  20. Protocols Utilizing Constant pH Molecular Dynamics to Compute pH-Dependent Binding Free Energies

    PubMed Central

    2015-01-01

    In protein–ligand binding, the electrostatic environments of the two binding partners may vary significantly in bound and unbound states, which may lead to protonation changes upon binding. In cases where ligand binding results in a net uptake or release of protons, the free energy of binding is pH-dependent. Nevertheless, conventional free energy calculations and molecular docking protocols typically do not rigorously account for changes in protonation that may occur upon ligand binding. To address these shortcomings, we present a simple methodology based on Wyman’s binding polynomial formalism to account for the pH dependence of binding free energies and demonstrate its use on cucurbit[7]uril (CB[7]) host–guest systems. Using constant pH molecular dynamics and a reference binding free energy that is taken either from experiment or from thermodynamic integration computations, the pH-dependent binding free energy is determined. This computational protocol accurately captures the large pKa shifts observed experimentally upon CB[7]:guest association and reproduces experimental binding free energies at different levels of pH. We show that incorrect assignment of fixed protonation states in free energy computations can give errors of >2 kcal/mol in these host–guest systems. Use of the methods presented here avoids such errors, thus suggesting their utility in computing proton-linked binding free energies for protein–ligand complexes. PMID:25134690
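    The binding-polynomial correction described above can be sketched for the simplest case: a single titratable site whose pKa shifts upon binding. This is a minimal illustration, not the authors' full constant-pH protocol; the function names, pKa values, reference free energy, and the one-site polynomial P = 1 + 10^(pKa - pH) are assumptions for the sketch.

```python
import math

R_KCAL = 0.0019872  # gas constant in kcal/(mol K)

def site_polynomial(pka: float, ph: float) -> float:
    """Single-site binding polynomial, P = 1 + 10^(pKa - pH)."""
    return 1.0 + 10.0 ** (pka - ph)

def dg_bind(ph, dg_ref, pka_free, pka_bound, temp=298.15):
    """pH-dependent binding free energy (kcal/mol) for one titratable
    site whose pKa shifts from pka_free to pka_bound upon binding.
    dg_ref is a reference binding free energy (e.g. from experiment
    or thermodynamic integration, as in the abstract)."""
    rt = R_KCAL * temp
    correction = -rt * math.log(site_polynomial(pka_bound, ph)
                                / site_polynomial(pka_free, ph))
    return dg_ref + correction
```

    With this sign convention, an upward pKa shift upon binding makes binding more favorable at low pH; the correction vanishes when the pKa is unshifted or when the pH is far above both pKa values.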

  1. U.S. Department of Energy Office of Inspector General report on audit of selected aspects of the unclassified computer security program at a DOE headquarters computing facility

    SciTech Connect

    1995-07-31

    The purpose of this audit was to evaluate the effectiveness of the unclassified computer security program at the Germantown Headquarters Administrative Computer Center (Center). The Department of Energy (DOE) relies on the application systems at the Germantown Headquarters Administrative Computer Center to support its financial, payroll and personnel, security, and procurement functions. The review was limited to an evaluation of the administrative, technical, and physical safeguards governing utilization of the unclassified computer system which hosts many of the Department's major application systems. The audit identified weaknesses in the Center's computer security program that increased the risk of unauthorized disclosure or loss of sensitive data. Specifically, the authors found that (1) access to sensitive data was not limited to individuals who had a need for the information, and (2) accurate and complete information was not maintained on the inventory of tapes at the Center. Furthermore, the risk of unauthorized disclosure and loss of sensitive data was increased because other controls, such as physical security, had not been adequately implemented at the Center. Management generally agreed with the audit conclusions and recommendations, and initiated a number of actions to improve computer security at the Center.

  2. Orientation control of cold zone annealed Block copolymer films on tunable gradient surface energy substrates using combinatorial methods

    NASA Astrophysics Data System (ADS)

    Kulkarni, Manish; Singh, Gurpreet; Karim, Alamgir

    2012-02-01

    Microphase morphologies of poly(styrene)-block-poly(methyl methacrylate) (PS-PMMA) block copolymer (BCP) films coated on various tunable surface energy gradient (SEG) substrates were compared. Substrates were prepared by coating a silane self-assembled monolayer (SAM) and a hydrophobic sol-gel based layer of silica (xerogel) on quartz, then exposing them to UV-ozone radiation on an accelerating stage that oxidizes the surface to generate the SEG. Combinatorial thickness gradient samples of the BCP film were prepared by flow coating the BCP solution orthogonal to the SEG. Samples were annealed using a novel cold zone annealing (CZA) method with a sharp thermal gradient (50 °C/mm) to obtain highly ordered BCP morphologies. The effects of CZA annealing rate and film thickness on BCP morphologies were compared for the SAM-treated and untreated quartz as well as the xerogel substrates. It was observed that BCP films coated on the untreated quartz substrates exhibited hexagonally packed perpendicular cylindrical morphologies, whereas a higher area fraction of parallel cylinders was observed for SEG xerogel substrates at higher surface energies (>40 mJ/m^2). BCP 2D surface morphologies, studied using AFM, were confirmed by GISAXS to extend into the interior of the film (3D).

  3. Computations of absolute solvation free energies of small molecules using explicit and implicit solvent model.

    SciTech Connect

    Shivakumar, D.; Deng, Y.; Roux, B.; Biosciences Division; Univ. of Chicago

    2009-01-01

    Accurate determination of absolute solvation free energy plays a critical role in numerous areas of biomolecular modeling and drug discovery. A quantitative representation of ligand and receptor desolvation, in particular, is an essential component of current docking and scoring methods. Furthermore, the partitioning of a drug between aqueous and nonpolar solvents is one of the important factors considered in pharmacokinetics. In this study, the absolute hydration free energy for a set of 239 neutral ligands spanning diverse chemical functional groups commonly found in drugs and drug-like candidates is calculated using the molecular dynamics free energy perturbation method (FEP/MD) with explicit water molecules, and compared to experimental data as well as its counterparts obtained using implicit solvent models. The hydration free energies are calculated from explicit solvent simulations using a staged FEP procedure permitting a separation of the total free energy into polar and nonpolar contributions. The nonpolar component is further decomposed into attractive (dispersive) and repulsive (cavity) components using the Weeks-Chandler-Andersen (WCA) separation scheme. To increase the computational efficiency, all of the FEP/MD simulations are generated using a mixed explicit/implicit solvent scheme with a relatively small number of explicit TIP3P water molecules, in which the influence of the remaining bulk is incorporated via the spherical solvent boundary potential (SSBP). The performances of two fixed-charge force fields designed for small organic molecules, the General Amber force field (GAFF) and the all-atom CHARMm-MSI, are compared. Because of the crucial role of electrostatics in solvation free energy, the results from various commonly used charge-generation models based on semiempirical (AM1-BCC) and QM calculations (charge fitting using ChelpG and RESP) are compared. In addition, the solvation free energies of the test set are also calculated using

  4. Quantitative material analysis by dual-energy computed tomography for industrial NDT applications

    NASA Astrophysics Data System (ADS)

    Nachtrab, F.; Weis, S.; Keßling, P.; Sukowski, F.; Haßler, U.; Fuchs, T.; Uhlmann, N.; Hanke, R.

    2011-05-01

    Dual-energy computed tomography (DECT) is an established method in the field of medical CT to obtain quantitative information on a material of interest instead of mean attenuation coefficients only. In the field of industrial X-ray imaging, dual-energy techniques have been used to solve special problems on a case-by-case basis rather than as a standard tool. Our goal is to develop an easy-to-use dual-energy solution that can be handled by the average industrial operator without the need for a specialist. We are aiming at providing dual-energy CT as a measurement tool for those cases where qualitative images are not enough and one needs additional quantitative information (e.g. mass density ρ and atomic number Z) about the sample at hand. Our solution is based on an algorithm proposed by Heismann et al. (2003) [1] for application in medical CT. As input data this algorithm needs two CT data sets, one with low (LE) and one with high effective energy (HE). A first order linearization is applied to the raw data, and two volumes are reconstructed thereafter. The dual-energy analysis is done voxel by voxel, using a pre-calculated function F(Z) that encodes the parameters of the low and high energy measurement (such as tube voltage, filtration and detector sensitivity). As a result, two volume data sets are obtained, one providing information about the mass density ρ in each voxel, the other providing the effective atomic number Z of the material therein. One main difference between medical and industrial CT is that the range of materials that can be contained in a sample is much wider and can cover the whole range of elements, from hydrogen to uranium. Heismann's algorithm is limited to the range of elements Z=1-30, because for Z>30 the function F(Z) as given by Heismann is no longer bijective. While this still seems very suitable for medical application, it is not enough to cover the complete range of industrial applications. We therefore investigated the
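    The voxel-wise analysis described above reduces to inverting a pre-calculated monotonic function F(Z) given the two measured attenuations. A minimal sketch with an invented lookup table (the F(Z) values below are hypothetical; real values depend on tube voltage, filtration and detector sensitivity, as the abstract notes):

```python
# Hypothetical, monotonically increasing lookup table F(Z) = mu_LE/mu_HE
# for Z = 1..8. Real values must be pre-calculated for the actual
# measurement parameters.
F_TABLE = [(1, 1.02), (2, 1.05), (3, 1.10), (4, 1.17),
           (5, 1.26), (6, 1.37), (7, 1.50), (8, 1.65)]

def effective_z(mu_le: float, mu_he: float) -> float:
    """Invert F(Z) by piecewise-linear interpolation to get the
    effective atomic number of a voxel from its low- and high-energy
    attenuation values (clamped at the table ends)."""
    r = mu_le / mu_he
    fs = [f for _, f in F_TABLE]
    if r <= fs[0]:
        return float(F_TABLE[0][0])
    if r >= fs[-1]:
        return float(F_TABLE[-1][0])
    for (z0, f0), (z1, f1) in zip(F_TABLE, F_TABLE[1:]):
        if f0 <= r <= f1:
            return z0 + (z1 - z0) * (r - f0) / (f1 - f0)
```

    Monotonicity of F(Z) is exactly the bijectivity requirement discussed above: beyond Z=30 the inversion would become ambiguous.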

  5. Multiferroic nanomagnetic logic: Hybrid spintronics-straintronic paradigm for ultra-low energy computing

    NASA Astrophysics Data System (ADS)

    Salehi Fashami, Mohammad

    Excessive energy dissipation in CMOS devices during switching is the primary threat to continued downscaling of computing devices in accordance with Moore's law. In the quest for alternatives to traditional transistor-based electronics, nanomagnet-based computing [1, 2] is emerging as an attractive alternative since: (i) nanomagnets are intrinsically more energy-efficient than transistors due to the correlated switching of spins [3], and (ii) unlike transistors, magnets have no leakage and hence no standby power dissipation. However, large energy dissipation in the clocking circuit appears to be a barrier to the realization of ultra-low-power logic devices with such nanomagnets. To alleviate this issue, we propose the use of a hybrid spintronics-straintronics or straintronic nanomagnetic logic (SML) paradigm. This uses a piezoelectric layer elastically coupled to an elliptically shaped magnetostrictive nanomagnetic layer for logic [4-6], memory [7-8], and other information processing [9-10] applications that could potentially be 2-3 orders of magnitude more energy efficient than current CMOS-based devices. This dissertation focuses on studying the feasibility, performance and reliability of such nanomagnetic logic circuits by simulating the nanoscale magnetization dynamics of dipole-coupled nanomagnets clocked by stress. Specifically, the topics addressed are: 1. Theoretical study of multiferroic nanomagnetic arrays laid out in specific geometric patterns to implement a "logic wire" for unidirectional information propagation and a universal logic gate [4-6]. 2. Monte Carlo simulations of the magnetization trajectories in a simple system of dipole-coupled nanomagnets and a NAND gate, described by the Landau-Lifshitz-Gilbert (LLG) equations and simulated in the presence of random thermal noise, to understand the dynamic switching error [11, 12] in such devices. 3. Arriving at a lower bound for energy dissipation as a function of switching error [13] for a
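    The magnetization dynamics referred to in point 2 follow the Landau-Lifshitz-Gilbert equation. A minimal single-macrospin sketch with damping, but without the thermal noise, dipole coupling, and stress terms used in the dissertation (reduced units, explicit Euler with renormalization; all parameters are chosen for illustration):

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def llg_relax(m, h, alpha=0.1, gamma=1.0, dt=0.01, steps=5000):
    """Damped Landau-Lifshitz dynamics of a single macrospin m (unit
    vector) in a constant effective field h (reduced units):
        dm/dt = -gamma m x h - alpha*gamma m x (m x h).
    The damping term relaxes m toward the field direction."""
    for _ in range(steps):
        mxh = cross(m, h)
        mxmxh = cross(m, mxh)
        m = tuple(mi + dt * (-gamma * p - alpha * gamma * q)
                  for mi, p, q in zip(m, mxh, mxmxh))
        n = math.sqrt(sum(c * c for c in m))   # renormalize |m| = 1
        m = tuple(c / n for c in m)
    return m
```

    Starting from a moment perpendicular to the field, the spin precesses while the damping term pulls it toward alignment, which is the relaxation behavior the stress-clocking scheme exploits.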

  6. Short-range stabilizing potential for computing energies and lifetimes of temporary anions with extrapolation methods

    SciTech Connect

    Sommerfeld, Thomas; Ehara, Masahiro

    2015-01-21

    The energy of a temporary anion can be computed by adding a stabilizing potential to the molecular Hamiltonian, increasing the stabilization until the temporary state is turned into a bound state, and then further increasing the stabilization until enough bound-state energies have been collected that they can be extrapolated back to vanishing stabilization. The lifetime can be obtained from the same data, but only if the extrapolation is done through analytic continuation of the momentum as a function of the square root of a shifted stabilizing parameter. This method is known as analytic continuation of the coupling constant, and it requires, at least in principle, that the bound-state input data be computed with a short-range stabilizing potential. In the context of molecules and ab initio packages, long-range Coulomb stabilizing potentials are, however, far more convenient and have been used in the past with some success, although the error introduced by the long-range nature of the stabilizing potential remains unknown. Here, we introduce a soft-Voronoi box potential that can serve as a short-range stabilizing potential. The difference between a Coulomb and the new stabilization is analyzed in detail for a one-dimensional model system as well as for the ²Πᵤ resonance of CO₂⁻, and in both cases the extrapolation results are compared to independently computed resonance parameters: from complex scaling for the model, and from complex absorbing potential calculations for CO₂⁻. It is important to emphasize that for both the model and CO₂⁻, all three sets of results have, respectively, been obtained with the same electronic structure method and basis set, so that the theoretical description of the continuum can be directly compared. The new soft-Voronoi-box-based extrapolation is then used to study the influence of the sizes of the diffuse and valence basis sets on the computed resonance parameters.
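    The extrapolation step described above can be sketched with synthetic data: bound-state momenta k = i*kappa(lambda) are interpolated as a polynomial in x = sqrt(lambda - lambda0) and then evaluated at the complex x corresponding to vanishing stabilization (lambda = 0). The model kappa(lambda), branch point lambda0, and coefficients below are invented for illustration, and the sign conventions are only schematic:

```python
import cmath

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through the
    points (xs[i], ys[i]) at a (possibly complex) argument x."""
    total = 0.0 + 0.0j
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = complex(yi)
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def accc_continue(lams, kappas, lam0):
    """Analytic continuation of the coupling constant (sketch):
    treat k(lambda) = i*kappa as a polynomial in x = sqrt(lambda - lam0)
    and continue it back to lambda = 0, where x becomes complex."""
    xs = [cmath.sqrt(l - lam0) for l in lams]
    ks = [1j * k for k in kappas]
    x0 = cmath.sqrt(0.0 - lam0)          # complex branch-point argument
    k = lagrange_eval(xs, ks, x0)
    e = k * k / 2.0                       # E = k^2/2 in atomic units
    return e.real, -2.0 * e.imag          # resonance position, width
```

    For the linear test model kappa = 0.4 + 0.25*sqrt(lambda - 0.5), the continuation yields a complex energy whose real part is the resonance position and whose imaginary part gives the width.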

  7. Computational methods for reactive transport modeling: A Gibbs energy minimization approach for multiphase equilibrium calculations

    NASA Astrophysics Data System (ADS)

    Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg

    2016-02-01

    We present a numerical method for multiphase chemical equilibrium calculations based on a Gibbs energy minimization approach. The method can accurately and efficiently determine the stable phase assemblage at equilibrium independently of the type of phases and species that constitute the chemical system. We have successfully applied our chemical equilibrium algorithm in reactive transport simulations to demonstrate its effective use in computationally intensive applications. We used FEniCS to solve the governing partial differential equations of mass transport in porous media using finite element methods on unstructured meshes. Our equilibrium calculations were benchmarked against GEMS3K, the numerical kernel of the geochemical package GEMS. This allowed us to compare our results with a well-established Gibbs energy minimization algorithm, as well as their performance on every mesh node, at every time step of the transport simulation. The benchmark shows that our novel chemical equilibrium algorithm is accurate, robust, and efficient for reactive transport applications, and that it is an improvement over the Gibbs energy minimization algorithm used in GEMS3K. The proposed chemical equilibrium method has been implemented in Reaktoro, a unified framework for modeling chemically reactive systems, which is now used as an alternative numerical kernel of GEMS.
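    Gibbs energy minimization can be illustrated with a deliberately tiny system: an ideal A <-> B isomerization with one mole total, where the mass-balance constraint is eliminated by substitution and the minimum is found by golden-section search. This toy is not the algorithm implemented in Reaktoro or GEMS3K; the standard chemical potentials below are invented for the sketch.

```python
import math

RT = 2.479  # kJ/mol at 298.15 K

def gibbs(n_a, mu0_a=0.0, mu0_b=5.0):
    """Total Gibbs energy (kJ) of an ideal A <-> B mixture with one
    mole total: n_a mol of A and 1 - n_a mol of B. Mole fractions
    equal mole numbers because the total is fixed at 1."""
    n_b = 1.0 - n_a
    return (n_a * mu0_a + n_b * mu0_b
            + RT * (n_a * math.log(n_a) + n_b * math.log(n_b)))

def minimize_gibbs(lo=1e-9, hi=1.0 - 1e-9, tol=1e-12):
    """Golden-section search for the equilibrium n_a (the Gibbs
    energy is strictly convex in n_a, so the search is valid)."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    while b - a > tol:
        c, d = b - phi * (b - a), a + phi * (b - a)
        if gibbs(c) < gibbs(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)
```

    The minimizer recovers the textbook equilibrium ratio n_B/n_A = exp(-ΔG0/RT), which is the consistency check any Gibbs-minimization kernel must pass on this system.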

  8. Principles and Clinical Application of Dual-energy Computed Tomography in the Evaluation of Cerebrovascular Disease.

    PubMed

    Hsu, Charlie Chia-Tsong; Kwan, Gigi Nga Chi; Singh, Dalveer; Pratap, Jit; Watkins, Trevor William

    2016-01-01

    Dual-energy computed tomography (DECT) simultaneously acquires images at two X-ray energy levels, at both high- and low-peak voltages (kVp). The material attenuation difference obtained from the two X-ray energies can be processed by software to analyze material decomposition and to create additional image datasets, namely, virtual noncontrast, virtual contrast also known as iodine overlay, and bone/calcium subtraction images. DECT has a vast array of clinical applications in imaging cerebrovascular diseases, which includes: (1) Identification of active extravasation of iodinated contrast in various types of intracranial hemorrhage; (2) differentiation between hemorrhagic transformation and iodine staining in acute ischemic stroke following diagnostic and/or therapeutic catheter angiography; (3) identification of culprit lesions in intra-axial hemorrhage; (4) calcium subtraction from atheromatous plaque for the assessment of plaque morphology and improved quantification of luminal stenosis; (5) bone subtraction to improve the depiction of vascular anatomy with more clarity, especially at the skull base; (6) metal artifact reduction utilizing virtual monoenergetic reconstructions for improved luminal assessment postaneurysm coiling or clipping. We discuss the physical principles of DECT and review the clinical applications of DECT for the evaluation of cerebrovascular diseases. PMID:27512615

  9. Complementary contrast media for metal artifact reduction in dual-energy computed tomography.

    PubMed

    Lambert, Jack W; Edic, Peter M; FitzGerald, Paul F; Torres, Andrew S; Yeh, Benjamin M

    2015-07-01

    Metal artifacts have been a problem associated with computed tomography (CT) since its introduction. Recent techniques to mitigate this problem have included utilization of high-energy (keV) virtual monochromatic spectral (VMS) images, produced via dual-energy CT (DECT). A problem with these high-keV images is that contrast enhancement provided by all commercially available contrast media is severely reduced. Contrast agents based on higher atomic number elements can maintain contrast at the higher energy levels where artifacts are reduced. This study evaluated three such candidate elements: bismuth, tantalum, and tungsten, as well as two conventional contrast elements: iodine and barium. A water-based phantom with vials containing these five elements in solution, as well as different artifact-producing metal structures, was scanned with a DECT scanner capable of rapid operating voltage switching. In the VMS datasets, substantial reductions in the contrast were observed for iodine and barium, which suffered from contrast reductions of 97% and 91%, respectively, at 140 versus 40 keV. In comparison under the same conditions, the candidate agents demonstrated contrast enhancement reductions of only 20%, 29%, and 32% for tungsten, tantalum, and bismuth, respectively. At 140 versus 40 keV, metal artifact severity was reduced by 57% to 85% depending on the phantom configuration. PMID:26839905

  10. Principles and Clinical Application of Dual-energy Computed Tomography in the Evaluation of Cerebrovascular Disease

    PubMed Central

    Hsu, Charlie Chia-Tsong; Kwan, Gigi Nga Chi; Singh, Dalveer; Pratap, Jit; Watkins, Trevor William

    2016-01-01

    Dual-energy computed tomography (DECT) simultaneously acquires images at two X-ray energy levels, at both high- and low-peak voltages (kVp). The material attenuation difference obtained from the two X-ray energies can be processed by software to analyze material decomposition and to create additional image datasets, namely, virtual noncontrast, virtual contrast also known as iodine overlay, and bone/calcium subtraction images. DECT has a vast array of clinical applications in imaging cerebrovascular diseases, which includes: (1) Identification of active extravasation of iodinated contrast in various types of intracranial hemorrhage; (2) differentiation between hemorrhagic transformation and iodine staining in acute ischemic stroke following diagnostic and/or therapeutic catheter angiography; (3) identification of culprit lesions in intra-axial hemorrhage; (4) calcium subtraction from atheromatous plaque for the assessment of plaque morphology and improved quantification of luminal stenosis; (5) bone subtraction to improve the depiction of vascular anatomy with more clarity, especially at the skull base; (6) metal artifact reduction utilizing virtual monoenergetic reconstructions for improved luminal assessment postaneurysm coiling or clipping. We discuss the physical principles of DECT and review the clinical applications of DECT for the evaluation of cerebrovascular diseases. PMID:27512615

  11. Computer simulation program for medium-energy ion scattering and Rutherford backscattering spectrometry

    NASA Astrophysics Data System (ADS)

    Nishimura, Tomoaki

    2016-03-01

    A computer simulation program for ion scattering and its graphical user interface (MEISwin) has been developed. Using this program, researchers have analyzed medium-energy ion scattering and Rutherford backscattering spectrometry at Ritsumeikan University since 1998, and at Rutgers University since 2007. The main features of the program are as follows: (1) stopping power can be chosen from five datasets spanning several decades (from 1977 to 2011), (2) straggling can be chosen from two datasets, (3) spectral shape can be selected as Gaussian or exponentially modified Gaussian, (4) scattering cross sections can be selected as Coulomb or screened, (5) simulations adopt the resonant elastic scattering cross section of 16O(4He, 4He)16O, (6) pileup simulation for RBS spectra is supported, (7) natural and specific isotope abundances are supported, and (8) the charge fraction can be chosen from three patterns (fixed, energy-dependent, and ion fraction with charge-exchange parameters for medium-energy ion scattering). This study demonstrates and discusses the simulations and their results.
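As one concrete example of the cross-section options such a program offers, the unscreened Coulomb (Rutherford) differential cross section can be evaluated directly; the sketch below is a simplified illustration (non-relativistic, target recoil neglected), not MEISwin's implementation:

```python
import math

E2 = 1.44  # e^2 / (4*pi*eps0) in MeV*fm

def rutherford_dcs(z1, z2, energy_mev, theta_deg):
    """Rutherford differential cross section dsigma/dOmega in mb/sr for a
    projectile of charge z1 on a target of charge z2, at the given lab
    energy (MeV) and scattering angle (degrees)."""
    theta = math.radians(theta_deg)
    a = z1 * z2 * E2 / (4.0 * energy_mev)              # fm
    return 10.0 * a * a / math.sin(theta / 2.0) ** 4   # 1 fm^2/sr = 10 mb/sr
```

For 1 MeV 4He (z=2) on oxygen (z=8) at 90 degrees this gives about 1.3 b/sr; screened cross sections reduce the yield at small angles, where atomic electrons shield the nuclear charge.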

  12. Grand Challenges of Advanced Computing for Energy Innovation Report from the Workshop Held July 31-August 2, 2012

    SciTech Connect

    Larzelere, Alex R.; Ashby, Steven F.; Christensen, Dana C.; Crawford, Dona L.; Khaleel, Mohammad A.; John, Grosh; Stults, B. Ray; Lee, Steven L.; Hammond, Steven W.; Grover, Benjamin T.; Neely, Rob; Dudney, Lee Ann; Goldstein, Noah C.; Wells, Jack; Peltz, Jim

    2013-03-06

    On July 31-August 2 of 2012, the U.S. Department of Energy (DOE) held a workshop entitled Grand Challenges of Advanced Computing for Energy Innovation. This workshop built on three earlier workshops that clearly identified the potential for the Department and its national laboratories to enable energy innovation. The specific goal of the workshop was to identify the key challenges that the nation must overcome to apply the full benefit of taxpayer-funded advanced computing technologies to U.S. energy innovation in the ways that the country produces, moves, stores, and uses energy. Perhaps more importantly, the workshop also developed a set of recommendations to help the Department overcome those challenges. These recommendations provide an action plan for what the Department can do in the coming years to improve the nation’s energy future.

  13. Design oriented structural analysis

    NASA Technical Reports Server (NTRS)

    Giles, Gary L.

    1994-01-01

    Desirable characteristics and benefits of design oriented analysis methods are described and illustrated by presenting a synoptic description of the development and uses of the Equivalent Laminated Plate Solution (ELAPS) computer code. ELAPS is a design oriented structural analysis method which is intended for use in the early design of aircraft wing structures. Model preparation is minimized by using a few large plate segments to model the wing box structure. Computational efficiency is achieved by using a limited number of global displacement functions that encompass all segments over the wing planform. Coupling with other codes is facilitated since the output quantities such as deflections and stresses are calculated as continuous functions over the plate segments. Various aspects of the ELAPS development are discussed including the analytical formulation, verification of results by comparison with finite element analysis results, coupling with other codes, and calculation of sensitivity derivatives. The effectiveness of ELAPS for multidisciplinary design application is illustrated by describing its use in design studies of high speed civil transport wing structures.

  14. PREFACE: International Conference on Computing in High Energy and Nuclear Physics (CHEP 2010)

    NASA Astrophysics Data System (ADS)

    Lin, Simon C.; Shen, Stella; Neufeld, Niko; Gutsche, Oliver; Cattaneo, Marco; Fisk, Ian; Panzer-Steindel, Bernd; Di Meglio, Alberto; Lokajicek, Milos

    2011-12-01

    The International Conference on Computing in High Energy and Nuclear Physics (CHEP) was held at Academia Sinica in Taipei from 18-22 October 2010. CHEP is a major series of international conferences for physicists and computing professionals from the worldwide High Energy and Nuclear Physics community, Computer Science, and Information Technology. The CHEP conference provides an international forum to exchange information on computing progress and needs for the community, and to review recent, ongoing and future activities. CHEP conferences are held at roughly 18-month intervals, alternating between Europe, Asia, America and other parts of the world. Recent CHEP conferences have been held in Prague, Czech Republic (2009); Victoria, Canada (2007); Mumbai, India (2006); Interlaken, Switzerland (2004); San Diego, California (2003); Beijing, China (2001); and Padova, Italy (2000). CHEP 2010 was organized by the Academia Sinica Grid Computing Centre. There was an International Advisory Committee (IAC) setting the overall themes of the conference, a Programme Committee (PC) responsible for the content, as well as a Conference Secretariat responsible for the conference infrastructure. There were over 500 attendees with a program that included plenary sessions of invited speakers, a number of parallel sessions comprising around 260 oral and 200 poster presentations, and industrial exhibitions. We thank all the presenters for the excellent scientific content of their contributions to the conference. Conference tracks covered topics on Online Computing, Event Processing, Software Engineering, Data Stores and Databases, Distributed Processing and Analysis, Computing Fabrics and Networking Technologies, Grid and Cloud Middleware, and Collaborative Tools. The conference included excursions to various attractions in Northern Taiwan, including Sanhsia Tsu Shih Temple, Yingko, Chiufen Village, the Northeast Coast National Scenic Area, Keelung, Yehliu Geopark, and Wulai Aboriginal Village.

  15. PREFACE: 21st International Conference on Computing in High Energy and Nuclear Physics (CHEP2015)

    NASA Astrophysics Data System (ADS)

    Sakamoto, H.; Bonacorsi, D.; Ueda, I.; Lyon, A.

    2015-12-01

    The International Conference on Computing in High Energy and Nuclear Physics (CHEP) is a major series of international conferences intended to attract physicists and computing professionals to discuss recent developments and trends in software and computing for their research communities. Experts from the high energy and nuclear physics, computer science, and information technology communities attend CHEP events. This conference series provides an international forum to exchange experiences and the needs of a wide community, and to present and discuss recent, ongoing, and future activities. At the beginning of the successful series of CHEP conferences in 1985, the latest developments in embedded systems, networking, and vector and parallel processing were presented in Amsterdam. The software and computing ecosystem has massively evolved since then, and along this path each CHEP event has marked a step further. A vibrant community of experts on a wide range of different high-energy and nuclear physics experiments, as well as technology explorers and industry contacts, attend and discuss the present and future challenges, and shape the future of an entire community. In such a rapidly evolving area, aiming to capture the state of the art in software and computing through a collection of proceedings papers in a journal is a big challenge. Due to the large attendance, the final papers appear in the journal a few months after the conference is over. Additionally, the contributions often report on studies at very heterogeneous stages: studies that are completed, just started, or yet to be done. It is not uncommon that by the time a specific paper appears in the journal some of the work is over a year old, or the investigation actually proceeded in different directions and with different methodologies than originally presented at the conference just a few months before. And by the time the proceedings appear in journal form, new ideas and explorations have

  16. Field-orientation dependence of low-energy quasiparticle excitations in the heavy-electron superconductor UBe13.

    PubMed

    Shimizu, Yusei; Kittaka, Shunichiro; Sakakibara, Toshiro; Haga, Yoshinori; Yamamoto, Etsuji; Amitsuka, Hiroshi; Tsutsumi, Yasumasa; Machida, Kazushige

    2015-04-10

    Low-energy quasiparticle excitations in the superconducting (SC) state of UBe13 were studied by means of specific-heat (C) measurements in a rotating field. Quite unexpectedly, the magnetic-field dependence of C(H) is linear in H with no angular dependence at low fields in the SC state, implying that the gap is fully open over the Fermi surfaces, in stark contrast to previous expectations. In addition, a characteristic cubic anisotropy of C(H) was observed above 2 T with a maximum (minimum) for H∥[001] ([111]) within the (11̄0) plane, in the normal as well as in the SC states. This oscillation possibly originates from the anisotropic response of the heavy quasiparticle bands, and might be a key to understanding the unusual properties of UBe13. PMID:25910153

  17. De novo Cloning and Annotation of Genes Associated with Immunity, Detoxification and Energy Metabolism from the Fat Body of the Oriental Fruit Fly, Bactrocera dorsalis

    PubMed Central

    Yang, Wen-Jia; Yuan, Guo-Rui; Cong, Lin; Xie, Yi-Fei; Wang, Jin-Jun

    2014-01-01

    The oriental fruit fly, Bactrocera dorsalis, is a destructive pest in tropical and subtropical areas. In this study, we performed transcriptome-wide analysis of the fat body of B. dorsalis and obtained more than 59 million sequencing reads, which were assembled into 27,787 unigenes with an average length of 591 bp. Among them, 17,442 (62.8%) unigenes matched known proteins in the NCBI database. The assembled sequences were further annotated with gene ontology, cluster of orthologous group terms, and Kyoto encyclopedia of genes and genomes. In-depth analysis was performed to identify genes putatively involved in immunity, detoxification, and energy metabolism. Many new genes were identified including serpins, peptidoglycan recognition proteins and defensins, which were potentially linked to immune defense. Many detoxification genes were identified, including cytochrome P450s, glutathione S-transferases and ATP-binding cassette (ABC) transporters. Many new transcripts possibly involved in energy metabolism, including fatty acid desaturases, lipases, alpha amylases, and trehalose-6-phosphate synthases, were identified. Moreover, we randomly selected some genes to examine their expression patterns in different tissues by quantitative real-time PCR, which indicated that some genes exhibited fat body-specific expression in B. dorsalis. The identification of numerous transcripts in the fat body of B. dorsalis laid the foundation for future studies on the functions of these genes. PMID:24710118

  18. A Monte Carlo simulation study of the effect of energy windows in computed tomography images based on an energy-resolved photon counting detector

    NASA Astrophysics Data System (ADS)

    Lee, Seung-Wan; Choi, Yu-Na; Cho, Hyo-Min; Lee, Young-Jin; Ryu, Hyun-Ju; Kim, Hee-Joung

    2012-08-01

    The energy-resolved photon counting detector provides the spectral information that can be used to generate images. The novel imaging methods, including the K-edge imaging, projection-based energy weighting imaging and image-based energy weighting imaging, are based on the energy-resolved photon counting detector and can be realized by using various energy windows or energy bins. The location and width of the energy windows or energy bins are important because these techniques generate an image using the spectral information defined by the energy windows or energy bins. In this study, the reconstructed images acquired with K-edge imaging, projection-based energy weighting imaging and image-based energy weighting imaging were simulated using the Monte Carlo simulation. The effect of energy windows or energy bins was investigated with respect to the contrast, coefficient-of-variation (COV) and contrast-to-noise ratio (CNR). The three images were compared with respect to the CNR. We modeled the x-ray computed tomography system based on the CdTe energy-resolved photon counting detector and polymethylmethacrylate phantom, which have iodine, gadolinium and blood. To acquire K-edge images, the lower energy thresholds were fixed at K-edge absorption energy of iodine and gadolinium and the energy window widths were increased from 1 to 25 bins. The energy weighting factors optimized for iodine, gadolinium and blood were calculated from 5, 10, 15, 19 and 33 energy bins. We assigned the calculated energy weighting factors to the images acquired at each energy bin. In K-edge images, the contrast and COV decreased, when the energy window width was increased. The CNR increased as a function of the energy window width and decreased above the specific energy window width. When the number of energy bins was increased from 5 to 15, the contrast increased in the projection-based energy weighting images. There is a little difference in the contrast, when the number of energy bin is
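The figures of merit used above can be computed from region-of-interest statistics. A sketch with typical definitions; conventions for contrast, COV, and CNR vary between papers, so these particular formulas are an assumption:

```python
import statistics

def roi_metrics(target_roi, background_roi):
    """Contrast, coefficient-of-variation (COV) and contrast-to-noise
    ratio (CNR) for a target ROI against a background ROI."""
    s_t = statistics.fmean(target_roi)
    s_b = statistics.fmean(background_roi)
    sigma_b = statistics.pstdev(background_roi)  # background noise
    contrast = abs(s_t - s_b) / s_b
    cov = sigma_b / s_b
    cnr = abs(s_t - s_b) / sigma_b
    return contrast, cov, cnr

# Hypothetical pixel values from an iodine ROI and a phantom background
contrast, cov, cnr = roi_metrics([110, 112, 108, 110], [100, 102, 98, 100])
```

Widening an energy window admits more counts (lower COV) but dilutes the spectral contrast, which is why the CNR in the abstract peaks at an intermediate window width.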

  19. Analysis of colliding nuclear matter in terms of symmetry energy and cross-section using computational method

    SciTech Connect

    Sharma, Arun; Bharti, Arun; Gautam, Sakshi

    2015-08-28

    Here we perform a systematic study to extract information on colliding nuclear matter via the symmetry energy and the nucleon-nucleon cross section in the fragmentation of asymmetric colliding nuclei (16O + 80,84,92Br) in the energy range of 50-200 MeV/nucleon. The simulations are carried out using the isospin-dependent quantum molecular dynamics (IQMD) computational approach for central collisions. Our study reveals that the fragmentation pattern of neutron-rich colliding nuclei is sensitive to the symmetry energy at lower incident energies, whereas the isospin dependence of the nucleon-nucleon cross section becomes dominant for reactions at higher incident energies.

  20. Monochromatic energy computed tomography image for active intestinal hemorrhage: A model investigation

    PubMed Central

    Liu, Wen-Dong; Wu, Xing-Wang; Hu, Jun-Mei; Wang, Bin; Liu, Bin

    2015-01-01

    AIM: To investigate the value of computed tomography (CT) spectral imaging in the evaluation of intestinal hemorrhage. METHODS: Seven blood flow rates were simulated in vitro. Energy spectral CT and mixed-energy CT scans were performed for each rate (0.5, 0.4, 0.3, 0.2, 0.1, 0.05 and 0.025 mL/min). The detection rates and the contrast-to-noise ratios (CNRs) of the contrast agent extravasation regions were compared between the two scanning methods in the arterial phase (AP) and the portal venous phase (PVP). Comparisons of the CNR values between the PVP and the AP were made for each energy level and carried out using a completely random t test. A χ2 test was used to compare the detection rates obtained from the two scanning methods. RESULTS: The total detection rates for energy spectral CT and mixed-energy CT in the AP were 88.57% (31/35) and 65.71% (23/35), respectively, and the difference was significant (χ2 = 5.185, P = 0.023); the total detection rates in the PVP were 100.00% (35/35) and 91.4% (32/35), respectively, and the difference was not significant (χ2 = 1.393, P = 0.238). In the AP, the CNR of the contrast agent extravasation regions was 3.58 ± 2.09 on the mixed-energy CT images, but the CNRs were 8.78 ± 7.21 and 8.83 ± 6.75 at 50 and 60 keV, respectively, on the single-energy CT images, which were significantly different (3.58 ± 2.09 vs 8.78 ± 7.21, P = 0.031; 3.58 ± 2.09 vs 8.83 ± 6.75, P = 0.029). In the PVP, the differences in CNR between the monochromatic images at 40, 50 and 60 keV and the polychromatic energy images were significant (19.35 ± 10.89 vs 11.68 ± 6.38, P = 0.010; 20.82 ± 11.26 vs 11.68 ± 6.38, P = 0.001; 20.63 ± 10.07 vs 11.68 ± 6.38, P = 0.001). The CNRs at the different energy levels in the AP and the PVP were significantly different (t = -2.415, -2.380, -2.575, -2.762, -2.945, -3.157, -3.996 and -3.189). CONCLUSION: Monochromatic energy imaging spectral CT is superior to polychromatic energy images for
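The χ2 comparison of detection rates can be reproduced from the reported counts. A stdlib-only sketch using the Pearson statistic without continuity correction, which matches the quoted χ2 = 5.185 for the arterial-phase table:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for the 2x2 table
    [[a, b], [c, d]], plus its p-value for 1 degree of freedom."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2.0))  # survival function of chi2(1)
    return chi2, p

# Arterial phase: spectral CT detected 31/35, mixed-energy CT 23/35
chi2, p = chi2_2x2(31, 4, 23, 12)
print(round(chi2, 3), round(p, 3))  # 5.185 0.023
```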

  1. Computing Clinically Relevant Binding Free Energies of HIV-1 Protease Inhibitors

    PubMed Central

    2014-01-01

    The use of molecular simulation to estimate the strength of macromolecular binding free energies is becoming increasingly widespread, with goals ranging from lead optimization and enrichment in drug discovery to personalizing or stratifying treatment regimes. In order to realize the potential of such approaches to predict new results, not merely to explain previous experimental findings, it is necessary that the methods used are reliable and accurate, and that their limitations are thoroughly understood. However, the computational cost of atomistic simulation techniques such as molecular dynamics (MD) has meant that until recently little work has focused on validating and verifying the available free energy methodologies, with the consequence that many of the results published in the literature are not reproducible. Here, we present a detailed analysis of two of the most popular approximate methods for calculating binding free energies from molecular simulations, molecular mechanics Poisson–Boltzmann surface area (MMPBSA) and molecular mechanics generalized Born surface area (MMGBSA), applied to the nine FDA-approved HIV-1 protease inhibitors. Our results show that the values obtained from replica simulations of the same protease–drug complex, differing only in initially assigned atom velocities, can vary by as much as 10 kcal mol–1, which is greater than the difference between the best and worst binding inhibitors under investigation. Despite this, analysis of ensembles of simulations producing 50 trajectories of 4 ns duration leads to well converged free energy estimates. For seven inhibitors, we find that with correctly converged normal mode estimates of the configurational entropy, we can correctly distinguish inhibitors in agreement with experimental data for both the MMPBSA and MMGBSA methods and thus have the ability to rank the efficacy of binding of this selection of drugs to the protease (no account is made for free energy penalties associated with
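The ensemble idea above, averaging many replica estimates so the standard error of the mean shrinks as 1/sqrt(n) even when individual replicas scatter widely, can be sketched as follows (the per-replica ΔG values are hypothetical):

```python
import random
import statistics

def ensemble_estimate(replica_dgs):
    """Ensemble mean of per-replica binding free energy estimates and
    the standard error of that mean."""
    mean = statistics.fmean(replica_dgs)
    sem = statistics.stdev(replica_dgs) / len(replica_dgs) ** 0.5
    return mean, sem

# Hypothetical per-replica MMPBSA estimates (kcal/mol): single trajectories
# scatter over ~10 kcal/mol, but the mean over 50 replicas is well converged.
random.seed(0)
replicas = [-45.0 + random.uniform(-5.0, 5.0) for _ in range(50)]
mean, sem = ensemble_estimate(replicas)
```

With 50 replicas the standard error is roughly seven times smaller than the single-replica spread, which is the sense in which the ensemble estimates in the study are "well converged".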

  2. Recent Advances in Cardiac Computed Tomography: Dual Energy, Spectral and Molecular CT Imaging

    PubMed Central

    Danad, Ibrahim; Fayad, Zahi A.; Willemink, Martin J.; Min, James K.

    2015-01-01

    Computed tomography (CT) evolved into a powerful diagnostic tool and it is impossible to imagine current clinical practice without CT imaging. Due to its widespread availability, ease of clinical application, superb sensitivity for detection of CAD, and non-invasive nature, CT has become a valuable tool within the armamentarium of the cardiologist. In the last few years, numerous technological advances in CT have occurred—including dual energy CT (DECT), spectral CT and CT-based molecular imaging. By harnessing the advances in technology, cardiac CT has advanced beyond the mere evaluation of coronary stenosis to an imaging modality tool that permits accurate plaque characterization, assessment of myocardial perfusion and even probing of molecular processes that are involved in coronary atherosclerosis. Novel innovations in CT contrast agents and pre-clinical spectral CT devices have paved the way for CT-based molecular imaging. PMID:26068288

  3. Computational mechanics for geosystems management to support the energy and natural resources mission.

    SciTech Connect

    Stone, Charles Michael

    2010-07-01

    U.S. energy needs - minimizing climate change, mining and extraction technologies, safe waste disposal - require the ability to simulate, model, and predict the behavior of subsurface systems. The authors propose development of a coupled thermal, hydrological, mechanical, and chemical (THMC) modeling capability for massively parallel applications that can address these critical needs. The goal and expected outcome of this research is a state-of-the-art, extensible simulation capability, based upon SIERRA Mechanics, to address multiphase, multicomponent reactive transport coupled to nonlinear geomechanics in heterogeneous (geologic) porous materials. The THMC code provides a platform for integrating research in numerical mathematics and algorithms for chemically reactive multiphase systems with computer science research in adaptive coupled solution control and framework architecture.

  4. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    SciTech Connect

    Nash, T.

    1989-05-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXES. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described. 6 figs.

  5. Comparative analysis of economic models in selected solar energy computer programs

    NASA Astrophysics Data System (ADS)

    Powell, J. W.; Barnes, K. A.

    1982-01-01

    The economic evaluation models in five computer programs widely used for analyzing solar energy systems (F-CHART 3.0, F-CHART 4.0, SOLCOST, BLAST, and DOE-2) are compared. Differences in analysis techniques and assumptions among the programs are assessed from the point of view of consistency with the Federal requirements for life cycle costing (10 CFR Part 436), effect on predicted economic performance and optimal system size, ease of use, and general applicability to diverse system types and building types. The FEDSOL program, developed by the National Bureau of Standards specifically to meet the Federal life cycle cost requirements, serves as a basis for the comparison. Results of the study are illustrated in test cases of two different types of Federally owned buildings: a single-family residence and a low-rise office building.
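The core of any such comparison is the life-cycle cost itself: capital cost plus future energy costs discounted to present value. A deliberately simplified sketch of that calculation (not the full 10 CFR Part 436 procedure, which adds maintenance, replacement, and salvage terms):

```python
def life_cycle_cost(initial_cost, annual_energy_cost, years, discount_rate,
                    escalation_rate=0.0):
    """Present-value life-cycle cost: capital cost plus annual energy costs
    discounted to present value, with optional fuel-price escalation.
    A simplified sketch, not the full 10 CFR Part 436 methodology."""
    lcc = initial_cost
    for t in range(1, years + 1):
        cost_t = annual_energy_cost * (1.0 + escalation_rate) ** t
        lcc += cost_t / (1.0 + discount_rate) ** t
    return lcc

# Hypothetical comparison: conventional system vs a solar system with higher
# capital cost but lower annual energy cost, over a 20-year study period.
conventional = life_cycle_cost(8000.0, 1200.0, 20, 0.07)
solar = life_cycle_cost(15000.0, 500.0, 20, 0.07)
```

Differences among the five programs show up precisely in how such terms are defined, which is why their predicted optimal system sizes can diverge.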

  6. FRETView: a computer program to simplify the process of obtaining fluorescence resonance energy transfer parameters.

    PubMed

    Stevens, Nathan; Dyer, Joanne; Martí, Angel A; Solomon, Marissa; Turro, Nicholas J

    2007-08-01

    The process of modeling the fluorescence resonance energy transfer (FRET) process for a donor-acceptor pair can be rather challenging, yet few computer programs exist that allow such modeling to be done with relative ease. In order to address this, we have developed a Java-based program, FRETView, which allows numerous FRET parameters to be obtained with just a few mouse clicks. Being a Java-based program, it runs equally well on all major operating systems, such as Windows, Mac OS X, Linux, and Solaris. The program allows the user to effortlessly input pertinent information about the donor-acceptor pair, including the absorption and/or emission spectra, and outputs the calculated FRET parameters in table format, as well as graphical plots. PMID:17668122
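The central FRET relation such a tool evaluates is the sixth-power dependence of transfer efficiency on donor-acceptor distance. A minimal sketch of that relation and its inverse (not FRETView's actual code):

```python
def fret_efficiency(r, r0):
    """FRET transfer efficiency for donor-acceptor distance r and
    Forster radius r0 (same length units)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

def fret_distance(efficiency, r0):
    """Invert the efficiency to recover the donor-acceptor distance."""
    return r0 * (1.0 / efficiency - 1.0) ** (1.0 / 6.0)

# At r == r0 the transfer efficiency is exactly 50%.
print(fret_efficiency(50.0, 50.0))  # 0.5
```

The Forster radius itself depends on the donor-acceptor spectral overlap, which is why the program takes absorption and emission spectra as inputs.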

  7. New Applications of Cardiac Computed Tomography: Dual-Energy, Spectral, and Molecular CT Imaging.

    PubMed

    Danad, Ibrahim; Fayad, Zahi A; Willemink, Martin J; Min, James K

    2015-06-01

    Computed tomography (CT) has evolved into a powerful diagnostic tool, and it is impossible to imagine current clinical practice without CT imaging. Because of its widespread availability, ease of clinical application, superb sensitivity for the detection of coronary artery disease, and noninvasive nature, CT has become a valuable tool within the armamentarium of cardiologists. In the past few years, numerous technological advances in CT have occurred, including dual-energy CT, spectral CT, and CT-based molecular imaging. By harnessing the advances in technology, cardiac CT has advanced beyond the mere evaluation of coronary stenosis to an imaging tool that permits accurate plaque characterization, assessment of myocardial perfusion, and even probing of molecular processes that are involved in coronary atherosclerosis. Novel innovations in CT contrast agents and pre-clinical spectral CT devices have paved the way for CT-based molecular imaging. PMID:26068288

  8. Computing energy spectra for quantum systems using the Feynman-Kac path integral method

    NASA Astrophysics Data System (ADS)

    Rejcek, J. M.; Fazleev, N. G.

    2009-10-01

    We use group theory considerations and properties of a continuous path to define a failure tree numerical procedure for calculating the lowest energy eigenvalues for quantum systems using the Feynman-Kac path integral method. Within this method the solution of the imaginary time Schrödinger equation is approximated by random walk simulations on a discrete grid constrained only by symmetry considerations of the Hamiltonian. The required symmetry constraints on random walk simulations are associated with a given irreducible representation and are found by identifying the eigenvalues for the irreducible representations corresponding to the symmetric or antisymmetric eigenfunctions for each group operator. The numerical method is applied to compute the eigenvalues of the ground and excited states of the hydrogen and helium atoms.
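The random-walk idea can be illustrated with a toy diffusion Monte Carlo estimate for the 1D harmonic oscillator: walkers diffuse freely, carry Feynman-Kac weights from the potential, and branch or die, with a trial energy adjusted to keep the population steady. This sketch omits the paper's symmetry constraints and discrete grid entirely:

```python
import math
import random

def dmc_ground_state(steps=1500, dt=0.01, n_target=400, seed=1):
    """Toy diffusion Monte Carlo estimate of the ground-state energy of the
    1D harmonic oscillator (V = x^2/2, units hbar = m = omega = 1, exact
    E0 = 0.5). A sketch of the Feynman-Kac random-walk idea only."""
    rng = random.Random(seed)
    walkers = [rng.gauss(0.0, 1.0) for _ in range(n_target)]
    e_trial = sum(0.5 * x * x for x in walkers) / len(walkers)
    e_sum, n_avg = 0.0, 0
    for step in range(steps):
        new = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))           # free diffusion
            w = math.exp(-(0.5 * x * x - e_trial) * dt)  # Feynman-Kac weight
            new.extend([x] * int(w + rng.random()))      # stochastic branching
        walkers = new or [rng.gauss(0.0, 1.0)]
        e_trial += 0.05 * math.log(n_target / len(walkers))  # population control
        if step >= steps // 2:                           # average after burn-in
            e_sum += e_trial
            n_avg += 1
    return e_sum / n_avg
```

The returned average should land near the exact value 0.5; the paper's contribution is selecting excited states as well, by restricting the walks to symmetry sectors of the Hamiltonian.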

  9. Computer-assisted qualitative and quantitative analyses of energy-related complex mixtures

    SciTech Connect

    Stamoudis, V.C.; Picel, K.C.

    1985-10-24

    Recent advances in the efficiency of gas chromatography (GC) columns and improvements in instrument hardware and computer software have facilitated rapid and accurate analysis of complex organic mixtures. By applying manufacturer-supplied software (calibrated-peak methods) and custom software based on retention indices (RI) (Demirgian, 1984; Stamoudis and Demirgian, 1985), most of the classes of chemicals in these mixtures can be rapidly analyzed both qualitatively and quantitatively. Sample prefractionation is essential because it produces simpler mixtures for GC analysis, and it separates constituents by chemical class, which aids automated identification. In the analysis of any new material, existing sample preparation procedures are validated for the material or modified to produce well-resolved chemical class fractions. Representative samples and their subfractions are characterized by GC/mass spectrometry (GC/MS) before analysis by computer-assisted GC. During our studies of the toxicological interactions of chemicals in complex mixtures, we have isolated, subfractionated, and characterized the neutral components of a variety of energy-related materials. Here we present chemical characterization and mutagenicity data of selected fractions from three coal-gasification by-product tars, two from pilot-plant gasifiers, and one from a commercial scale gasifier, and analogous data for aromatic subfractions from two additional pilot gasifiers, as well as one from the commercial gasifier. 22 refs., 3 figs., 2 tabs.

  10. FireProt: Energy- and Evolution-Based Computational Design of Thermostable Multiple-Point Mutants

    PubMed Central

    Sebestova, Eva; Bendl, Jaroslav; Khare, Sagar; Chaloupkova, Radka; Prokop, Zbynek; Brezovsky, Jan; Baker, David; Damborsky, Jiri

    2015-01-01

    There is great interest in increasing proteins’ stability to enhance their utility as biocatalysts, therapeutics, diagnostics and nanomaterials. Directed evolution is a powerful, but experimentally strenuous approach. Computational methods offer attractive alternatives. However, due to the limited reliability of predictions and potentially antagonistic effects of substitutions, only single-point mutations are usually predicted in silico, experimentally verified and then recombined in multiple-point mutants. Thus, substantial screening is still required. Here we present FireProt, a robust computational strategy for predicting highly stable multiple-point mutants that combines energy- and evolution-based approaches with smart filtering to identify additive stabilizing mutations. FireProt’s reliability and applicability was demonstrated by validating its predictions against 656 mutations from the ProTherm database. We demonstrate that thermostability of the model enzymes haloalkane dehalogenase DhaA and γ-hexachlorocyclohexane dehydrochlorinase LinA can be substantially increased (ΔTm = 24°C and 21°C) by constructing and characterizing only a handful of multiple-point mutants. FireProt can be applied to any protein for which a tertiary structure and homologous sequences are available, and will facilitate the rapid development of robust proteins for biomedical and biotechnological applications. PMID:26529612

  11. FireProt: Energy- and Evolution-Based Computational Design of Thermostable Multiple-Point Mutants.

    PubMed

    Bednar, David; Beerens, Koen; Sebestova, Eva; Bendl, Jaroslav; Khare, Sagar; Chaloupkova, Radka; Prokop, Zbynek; Brezovsky, Jan; Baker, David; Damborsky, Jiri

    2015-11-01

    There is great interest in increasing proteins' stability to enhance their utility as biocatalysts, therapeutics, diagnostics and nanomaterials. Directed evolution is a powerful, but experimentally strenuous approach. Computational methods offer attractive alternatives. However, due to the limited reliability of predictions and potentially antagonistic effects of substitutions, only single-point mutations are usually predicted in silico, experimentally verified and then recombined in multiple-point mutants. Thus, substantial screening is still required. Here we present FireProt, a robust computational strategy for predicting highly stable multiple-point mutants that combines energy- and evolution-based approaches with smart filtering to identify additive stabilizing mutations. FireProt's reliability and applicability was demonstrated by validating its predictions against 656 mutations from the ProTherm database. We demonstrate that thermostability of the model enzymes haloalkane dehalogenase DhaA and γ-hexachlorocyclohexane dehydrochlorinase LinA can be substantially increased (ΔTm = 24°C and 21°C) by constructing and characterizing only a handful of multiple-point mutants. FireProt can be applied to any protein for which a tertiary structure and homologous sequences are available, and will facilitate the rapid development of robust proteins for biomedical and biotechnological applications. PMID:26529612

  12. On the performance of large Gaussian basis sets for the computation of total atomization energies

    NASA Technical Reports Server (NTRS)

    Martin, J. M. L.

    1992-01-01

    The total atomization energies of a number of molecules have been computed using an augmented coupled-cluster method and (5s4p3d2f1g) and (4s3p2d1f) atomic natural orbital (ANO) basis sets, as well as the correlation consistent valence triple zeta plus polarization (cc-pVTZ) and correlation consistent valence quadruple zeta plus polarization (cc-pVQZ) basis sets. The performance of ANO and correlation consistent basis sets is comparable throughout, although the latter can result in significant CPU time savings. Whereas the inclusion of g functions has significant effects on the computed Sigma D(e) values, chemical accuracy is still not reached for molecules involving multiple bonds. A Gaussian-1 (G1) type correction lowers the error, but not much beyond the accuracy of the G1 model itself. Using separate corrections for sigma bonds, pi bonds, and valence pairs brings down the mean absolute error to less than 1 kcal/mol for the spdf basis sets, and to about 0.5 kcal/mol for the spdfg basis sets. Some conclusions on the success of the Gaussian-1 and Gaussian-2 models are drawn.
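
    The bond-type correction scheme mentioned above amounts to adding an empirical term per sigma bond, per pi bond, and per valence lone pair to the raw computed atomization energy. A sketch follows; the correction coefficients are illustrative placeholders, not the fitted values from the paper.

    ```python
    # Hypothetical correction coefficients in kcal/mol per sigma bond,
    # pi bond, and valence lone pair (placeholders, not fitted values).
    C_SIGMA, C_PI, C_PAIR = 0.9, 1.6, 0.3

    def corrected_atomization_energy(raw_sum_de, n_sigma, n_pi, n_pairs):
        """Apply bond-type corrections to a raw computed Sigma D(e) (kcal/mol)."""
        return raw_sum_de + n_sigma * C_SIGMA + n_pi * C_PI + n_pairs * C_PAIR
    ```

    For N2 (one sigma bond, two pi bonds, two lone pairs), a raw value of 220.0 kcal/mol would be corrected to 220.0 + 0.9 + 3.2 + 0.6 = 224.7 kcal/mol under these placeholder coefficients.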

  13. Use of multidimensional fluorescence resonance energy transfer to establish the orientation of cholecystokinin docked at the type A cholecystokinin receptor.

    PubMed

    Harikumar, Kaleeckal G; Gao, Fan; Pinon, Delia I; Miller, Laurence J

    2008-09-01

    Fluorescence resonance energy transfer (FRET) represents a powerful tool to establish relative distances between donor and acceptor fluorophores. By utilizing several donors situated in distinct positions within a docked full agonist ligand and several acceptors distributed at distinct sites within its receptor, multiple interdependent dimensions can be determined. These can provide a unique method to establish or confirm three-dimensional structure of the molecular complex. In this work, we have utilized full agonist analogues of cholecystokinin (CCK) with Aladan distributed throughout the pharmacophore in positions 24, 29, and 33, along with receptor constructs derivatized with Alexa546 at positions 94, 102, 204, and 341 in the helical bundle and first, second, and third extracellular loops, respectively. These provided 12 FRET distances to overlay on working models of the CCK-occupied receptor. These established that the carboxyl terminus of CCK resides at the external surface of the lipid bilayer, adjacent to the receptor amino-terminal tail, rather than being inserted into the helical bundle. They also provide important experimentally derived constraints for understanding spatial relationships between the docked ligand and the flexible extracellular loop regions. Multidimensional FRET provides a new independent method to establish and refine structural insights into ligand-receptor complexes. PMID:18700727
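
    The distance constraints in the study above follow from the Förster relation: transfer efficiency E = 1 / (1 + (r / R0)**6), so a measured efficiency yields the donor-acceptor separation r = R0 * (1/E - 1)**(1/6). A minimal sketch, with an illustrative Förster radius rather than the actual value for the Aladan/Alexa546 pair:

    ```python
    def fret_distance(efficiency, r0=50.0):
        """Donor-acceptor distance (same units as r0) from FRET efficiency.

        r0 is the Foerster radius, the separation at which E = 0.5;
        the default of 50 Angstroms is a placeholder, not the measured
        radius for any particular fluorophore pair.
        """
        if not 0.0 < efficiency < 1.0:
            raise ValueError("efficiency must lie strictly between 0 and 1")
        return r0 * (1.0 / efficiency - 1.0) ** (1.0 / 6.0)
    ```

    At E = 0.5 the distance equals R0 by construction; higher efficiencies correspond to shorter separations, which is what makes a set of efficiencies from multiple donor-acceptor pairs usable as geometric constraints on a docked complex.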

  14. Use of Multidimensional Fluorescence Resonance Energy Transfer To Establish the Orientation of Cholecystokinin Docked at the Type A Cholecystokinin Receptor†

    PubMed Central

    Harikumar, Kaleeckal G.; Gao, Fan; Pinon, Delia I.; Miller, Laurence J.

    2013-01-01

    Fluorescence resonance energy transfer (FRET) represents a powerful tool to establish relative distances between donor and acceptor fluorophores. By utilizing several donors situated in distinct positions within a docked full agonist ligand and several acceptors distributed at distinct sites within its receptor, multiple interdependent dimensions can be determined. These can provide a unique method to establish or confirm three-dimensional structure of the molecular complex. In this work, we have utilized full agonist analogues of cholecystokinin (CCK) with Aladan distributed throughout the pharmacophore in positions 24, 29, and 33, along with receptor constructs derivatized with Alexa546 at positions 94, 102, 204, and 341 in the helical bundle and first, second, and third extracellular loops, respectively. These provided 12 FRET distances to overlay on working models of the CCK-occupied receptor. These established that the carboxyl terminus of CCK resides at the external surface of the lipid bilayer, adjacent to the receptor amino-terminal tail, rather than being inserted into the helical bundle. They also provide important experimentally derived constraints for understanding spatial relationships between the docked ligand and the flexible extracellular loop regions. Multidimensional FRET provides a new independent method to establish and refine structural insights into ligand–receptor complexes. PMID:18700727

  15. Human Computer Interactions in Next-Generation of Aircraft Smart Navigation Management Systems: Task Analysis and Architecture under an Agent-Oriented Methodological Approach

    PubMed Central

    Canino-Rodríguez, José M.; García-Herrero, Jesús; Besada-Portas, Juan; Ravelo-García, Antonio G.; Travieso-González, Carlos; Alonso-Hernández, Jesús B.

    2015-01-01

    The limited efficiency of current air traffic systems will require a next generation of Smart Air Traffic Systems (SATS) that relies on current technological advances. This challenge means a transition toward a new paradigm of navigation and air-traffic procedures, where pilots and air traffic controllers perform and coordinate their activities according to new roles and technological supports. The design of new Human-Computer Interactions (HCI) for performing these activities is a key element of SATS. However, efforts to develop such tools need to be informed by a parallel characterization of hypothetical air traffic scenarios compatible with current ones. This paper focuses on airborne HCI within SATS, where cockpit inputs come from aircraft navigation systems, the surrounding traffic situation, controllers’ indications, etc. The HCI is thus intended to enhance situation awareness and decision-making in the cockpit. This work considers SATS as a large-scale distributed system operating under uncertainty in a dynamic environment. Therefore, a multi-agent-systems-based approach is well suited for modeling such an environment. We demonstrate that current methodologies for designing multi-agent systems are a useful tool to characterize HCI. We specifically illustrate how the selected methodological approach provides enough guidelines to obtain a cockpit HCI design that complies with future SATS specifications. PMID:25746092
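
    The multi-agent structure described above can be sketched in miniature: a cockpit HCI agent fuses the latest messages from navigation, traffic, and controller agents into a single situation-awareness view for the pilot. The class and message names below are purely illustrative, not taken from the paper's architecture.

    ```python
    class CockpitHCIAgent:
        """Toy agent that aggregates inputs from other agents (hypothetical)."""

        def __init__(self):
            self.situation = {}

        def receive(self, source, message):
            """Store the most recent message from each source agent."""
            self.situation[source] = message

        def awareness(self):
            """Return a merged snapshot of all inputs for cockpit display."""
            return dict(self.situation)
    ```

    In a full agent-oriented design, each source would itself be an agent with its own goals and protocols; this sketch only shows the fusion point that the HCI exposes to the pilot.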

  16. Human computer interactions in next-generation of aircraft smart navigation management systems: task analysis and architecture under an agent-oriented methodological approach.

    PubMed

    Canino-Rodríguez, José M; García-Herrero, Jesús; Besada-Portas, Juan; Ravelo-García, Antonio G; Travieso-González, Carlos; Alonso-Hernández, Jesús B

    2015-01-01

    The limited efficiency of current air traffic systems will require a next generation of Smart Air Traffic Systems (SATS) that relies on current technological advances. This challenge means a transition toward a new paradigm of navigation and air-traffic procedures, where pilots and air traffic controllers perform and coordinate their activities according to new roles and technological supports. The design of new Human-Computer Interactions (HCI) for performing these activities is a key element of SATS. However, efforts to develop such tools need to be informed by a parallel characterization of hypothetical air traffic scenarios compatible with current ones. This paper focuses on airborne HCI within SATS, where cockpit inputs come from aircraft navigation systems, the surrounding traffic situation, controllers' indications, etc. The HCI is thus intended to enhance situation awareness and decision-making in the cockpit. This work considers SATS as a large-scale distributed system operating under uncertainty in a dynamic environment. Therefore, a multi-agent-systems-based approach is well suited for modeling such an environment. We demonstrate that current methodologies for designing multi-agent systems are a useful tool to characterize HCI. We specifically illustrate how the selected methodological approach provides enough guidelines to obtain a cockpit HCI design that complies with future SATS specifications. PMID:25746092

  17. A New Generation of Networks and Computing Models for High Energy Physics in the LHC Era

    NASA Astrophysics Data System (ADS)

    Newman, H.

    2011-12-01

    Wide area networks of increasing end-to-end capacity and capability are vital for every phase of high energy physicists' work. Our bandwidth usage, and the typical capacity of the major national backbones and intercontinental links used by our field have progressed by a factor of several hundred times over the past decade. With the opening of the LHC era in 2009-10 and the prospects for discoveries in the upcoming LHC run, the outlook is for a continuation or an acceleration of these trends using next generation networks over the next few years. Responding to the need to rapidly distribute and access datasets of tens to hundreds of terabytes drawn from multi-petabyte data stores, high energy physicists working with network engineers and computer scientists are learning to use long range networks effectively on an increasing scale, and aggregate flows reaching the 100 Gbps range have been observed. The progress of the LHC, and the unprecedented ability of the experiments to produce results rapidly using worldwide distributed data processing and analysis has sparked major, emerging changes in the LHC Computing Models, which are moving from the classic hierarchical model designed a decade ago to more agile peer-to-peer-like models that make more effective use of the resources at Tier2 and Tier3 sites located throughout the world. A new requirements working group has gauged the needs of Tier2 centers, and charged the LHCOPN group that runs the network interconnecting the LHC Tier1s with designing a new architecture interconnecting the Tier2s. As seen from the perspective of ICFA's Standing Committee on Inter-regional Connectivity (SCIC), the Digital Divide that separates physicists in several regions of the developing world from those in the developed world remains acute, although many countries have made major advances through the rapid installation of modern network infrastructures. A case in point is Africa, where a new round of undersea cables promises to transform

  18. Measurement of breast tissue composition with dual energy cone-beam computed tomography: A postmortem study

    SciTech Connect

    Ding Huanjun; Ducote, Justin L.; Molloi, Sabee

    2013-06-15

    Purpose: To investigate the feasibility of a three-material compositional measurement of water, lipid, and protein content of breast tissue with dual kVp cone-beam computed tomography (CT) for diagnostic purposes. Methods: Simulations were performed on a flat panel-based computed tomography system with a dual kVp technique in order to guide the selection of experimental acquisition parameters. The expected errors induced by using the proposed calibration materials were also estimated by simulation. Twenty pairs of postmortem breast samples were imaged with a flat-panel based dual kVp cone-beam CT system, followed by image-based material decomposition using calibration data obtained from a three-material phantom consisting of water, vegetable oil, and polyoxymethylene plastic. The tissue samples were then chemically decomposed into their respective water, lipid, and protein contents after imaging to allow direct comparison with data from dual energy decomposition. Results: Guided by results from simulation, the beam energies for the dual kVp cone-beam CT system were selected to be 50 and 120 kVp with the mean glandular dose divided equally between each exposure. The simulation also suggested that the use of polyoxymethylene as the calibration material for the measurement of pure protein may introduce an error of -11.0%. However, the tissue decomposition experiments, which employed a calibration phantom made out of water, oil, and polyoxymethylene, exhibited strong correlation with data from the chemical analysis. The average root-mean-square percentage error for water, lipid, and protein contents was 3.58% as compared with chemical analysis. Conclusions: The results of this study suggest that the water, lipid, and protein contents can be accurately measured using dual kVp cone-beam CT. The tissue compositional information may improve the sensitivity and specificity for breast cancer diagnosis.
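
    The image-based three-material decomposition described above can be sketched as a small linear solve: at each voxel, the low- and high-kVp CT values are modeled as linear combinations of the calibration materials' values, with the three fractions constrained to sum to one. The calibration numbers below are made-up placeholders, not measured values for water, oil, and polyoxymethylene.

    ```python
    import numpy as np

    # Calibration matrix: rows are (low-kVp value, high-kVp value, unit-sum
    # constraint); columns are (water, lipid, protein surrogate).
    # All attenuation values are hypothetical placeholders.
    A = np.array([
        [0.25, 0.20, 0.30],   # attenuation at 50 kVp (illustrative)
        [0.20, 0.18, 0.24],   # attenuation at 120 kVp (illustrative)
        [1.00, 1.00, 1.00],   # fractions sum to one
    ])

    def decompose(mu_low, mu_high):
        """Solve for (water, lipid, protein) volume fractions at one voxel."""
        b = np.array([mu_low, mu_high, 1.0])
        return np.linalg.solve(A, b)
    ```

    In practice the system is ill-conditioned because the two energies provide only weakly independent information, which is one reason careful choice of beam energies (here 50 and 120 kVp) and calibration materials matters.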

  19. Measurement of breast tissue composition with dual energy cone-beam computed tomography: A postmortem study

    PubMed Central

    Ding, Huanjun; Ducote, Justin L.; Molloi, Sabee

    2013-01-01

    Purpose: To investigate the feasibility of a three-material compositional measurement of water, lipid, and protein content of breast tissue with dual kVp cone-beam computed tomography (CT) for diagnostic purposes. Methods: Simulations were performed on a flat panel-based computed tomography system with a dual kVp technique in order to guide the selection of experimental acquisition parameters. The expected errors induced by using the proposed calibration materials were also estimated by simulation. Twenty pairs of postmortem breast samples were imaged with a flat-panel based dual kVp cone-beam CT system, followed by image-based material decomposition using calibration data obtained from a three-material phantom consisting of water, vegetable oil, and polyoxymethylene plastic. The tissue samples were then chemically decomposed into their respective water, lipid, and protein contents after imaging to allow direct comparison with data from dual energy decomposition. Results: Guided by results from simulation, the beam energies for the dual kVp cone-beam CT system were selected to be 50 and 120 kVp with the mean glandular dose divided equally between each exposure. The simulation also suggested that the use of polyoxymethylene as the calibration material for the measurement of pure protein may introduce an error of −11.0%. However, the tissue decomposition experiments, which employed a calibration phantom made out of water, oil, and polyoxymethylene, exhibited strong correlation with data from the chemical analysis. The average root-mean-square percentage error for water, lipid, and protein contents was 3.58% as compared with chemical analysis. Conclusions: The results of this study suggest that the water, lipid, and protein contents can be accurately measured using dual kVp cone-beam CT. The tissue compositional information may improve the sensitivity and specificity for breast cancer diagnosis. PMID:23718593

  20. Soft computing analysis of the possible correlation between temporal and energy release patterns in seismic activity

    NASA Astrophysics Data System (ADS)

    Konstantaras, Anthony; Katsifarakis, Emmanouil; Artzouxaltzis, Xristos; Makris, John; Vallianatos, Filippos; Varley, Martin

    2010-05-01

    This paper is a preliminary investigation of the possible correlation of temporal and energy release patterns of seismic activity involving the preparation processes of consecutive sizeable seismic events [1,2]. The background idea is that during periods of low-level seismic activity, stress processes in the crust accumulate energy at the seismogenic area whilst larger seismic events act as a decongesting mechanism releasing considerable energy [3,4]. A dynamic algorithm is being developed aiming to identify and cluster pre- and post- seismic events to the main earthquake following on research carried out by Zubkov [5] and Dobrovolsky [6,7]. This clustering technique along with energy release equations dependent on Richter's scale [8,9] allow for an estimate to be drawn regarding the amount of the energy being released by the seismic sequence. The above approach is being implemented as a monitoring tool to investigate the behaviour of the underlying energy management system by introducing this information to various neural [10,11] and soft computing models [1,12,13,14]. The incorporation of intelligent systems aims towards the detection and simulation of the possible relationship between energy release patterns and time-intervals among consecutive sizeable earthquakes [1,15]. Anticipated successful training of the imported intelligent systems may result in a real-time, on-line processing methodology [1,16] capable to dynamically approximate the time-interval between the latest and the next forthcoming sizeable seismic event by monitoring the energy release process in a specific seismogenic area. Indexing terms: pattern recognition, long-term earthquake precursors, neural networks, soft computing, earthquake occurrence intervals References [1] Konstantaras A., Vallianatos F., Varley M.R. and Makris J. P.: 'Soft computing modelling of seismicity in the southern Hellenic arc', IEEE Geoscience and Remote Sensing Letters, vol. 5 (3), pp. 323-327, 2008 [2] Eneva M. and
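
    The "energy release equations dependent on Richter's scale" mentioned above can be sketched with the standard Gutenberg-Richter magnitude-energy relation, log10(E) = 1.5*M + 4.8 (E in joules), summing the energies of clustered pre- and post-seismic events around a main shock. This is a generic textbook relation, not necessarily the exact formulation used in the paper.

    ```python
    def event_energy_joules(magnitude):
        """Radiated seismic energy (joules) from a Richter-scale magnitude,
        using the Gutenberg-Richter relation log10(E) = 1.5*M + 4.8."""
        return 10.0 ** (1.5 * magnitude + 4.8)

    def cluster_energy(magnitudes):
        """Total energy released by a sequence of clustered seismic events."""
        return sum(event_energy_joules(m) for m in magnitudes)
    ```

    Because of the 1.5 exponent, each whole-magnitude step releases about 31.6 times more energy, which is why a single sizeable event dominates the energy budget of any cluster of smaller fore- and aftershocks.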