2010-07-01
Cloud computing, an emerging form of computing in which users have access to scalable, on-demand capabilities that are provided through Internet... cloud computing, (2) the information security implications of using cloud computing services in the Federal Government, and (3) federal guidance and... efforts to address information security when using cloud computing. The complete report is titled Information Security: Federal Guidance Needed to
NASA Astrophysics Data System (ADS)
Chonacky, Norman; Winch, David
2008-04-01
There is substantial evidence of a need to make computation an integral part of the undergraduate physics curriculum. This need is consistent with data from surveys in both the academy and the workplace, and has been reinforced by two years of exploratory efforts by a group of physics faculty for whom computation is a special interest. We have examined past and current efforts at reform and a variety of strategic, organizational, and institutional issues involved in any attempt to broadly transform existing practice. We propose a set of guidelines for development based on this past work and discuss our vision of computationally integrated physics.
The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diachin, L F; Garaizar, F X; Henson, V E
2009-10-12
In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.
ERIC Educational Resources Information Center
Peng, Jacob C.
2009-01-01
The author investigated whether students' effort in working on homework problems was affected by their need for cognition, their perception of the system, and their computer efficacy when instructors used an online system to collect accounting homework. Results showed that individual intrinsic motivation and computer efficacy are important factors…
NASA Astrophysics Data System (ADS)
Yoon, S.
2016-12-01
To define the geodetic reference frame using GPS data collected by the Continuously Operating Reference Stations (CORS) network, historical GPS data needs to be reprocessed regularly. Reprocessing the GPS data collected by up to 2000 CORS sites over the last two decades requires substantial computational resources. At the National Geodetic Survey (NGS), one reprocessing was completed in 2011, and the second reprocessing is currently underway. The first reprocessing effort used in-house computing resources; the current second reprocessing effort is using an outsourced cloud computing platform. In this presentation, the data processing strategy at NGS is outlined, along with the effort to parallelize the data processing procedure in order to maximize the benefit of cloud computing. The time and cost savings realized by the cloud computing approach will also be discussed.
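The station-day solutions in such a reprocessing campaign are independent of one another, which is what makes a cloud platform attractive. The sketch below illustrates that embarrassingly parallel pattern in Python; the station list, day range, and the process_station_day() helper are hypothetical placeholders, not NGS software.

```python
# Minimal sketch of fanning independent (station, day) GPS solutions out to a
# pool of workers; process_station_day() is a hypothetical stand-in for one
# reprocessing job (fetch observations, run the solution, return a summary).
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def process_station_day(station, day):
    """Placeholder for a single station-day solution."""
    return (station, day, "ok")

stations = ["ALGO", "GODE", "PIE1"]   # illustrative subset of CORS sites
days = range(1, 366)                  # one year of day-of-year indices

if __name__ == "__main__":
    jobs = list(product(stations, days))
    station_args = [s for s, _ in jobs]
    day_args = [d for _, d in jobs]
    with ProcessPoolExecutor(max_workers=32) as pool:
        results = list(pool.map(process_station_day, station_args, day_args))
    print(f"completed {len(results)} station-day solutions")
```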
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clouse, C. J.; Edwards, M. J.; McCoy, M. G.
2015-07-07
Through its Advanced Scientific Computing (ASC) and Inertial Confinement Fusion (ICF) code development efforts, Lawrence Livermore National Laboratory (LLNL) provides a world-leading numerical simulation capability for the National HED/ICF program in support of the Stockpile Stewardship Program (SSP). In addition, the ASC effort provides the high performance computing platform capabilities upon which these codes are run. LLNL remains committed to, and will work with, the national HED/ICF program community to help ensure numerical simulation needs are met and to make those capabilities available, consistent with programmatic priorities and available resources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Judith Alice; Long, Kevin Nicholas
2018-05-01
Sylgard® 184/Glass Microballoon (GMB) potting material is currently used in many NW systems. Analysts need a macroscale constitutive model that can predict material behavior under complex loading and damage evolution. To address this need, ongoing modeling and experimental efforts have focused on study of damage evolution in these materials. Micromechanical finite element simulations that resolve individual GMB and matrix components promote discovery and better understanding of the material behavior. With these simulations, we can study the role of the GMB volume fraction, time-dependent damage, behavior under confined vs. unconfined compression, and the effects of partial damage. These simulations are challenging and push the boundaries of capability even with the high performance computing tools available at Sandia. We summarize the major challenges and the current state of this modeling effort, as an exemplar of micromechanical modeling needs that can motivate advances in future computing efforts.
Montague, P. Read; Dolan, Raymond J.; Friston, Karl J.; Dayan, Peter
2013-01-01
Computational ideas pervade many areas of science and have an integrative explanatory role in neuroscience and cognitive science. However, computational depictions of cognitive function have had surprisingly little impact on the way we assess mental illness because diseases of the mind have not been systematically conceptualized in computational terms. Here, we outline goals and nascent efforts in the new field of computational psychiatry, which seeks to characterize mental dysfunction in terms of aberrant computations over multiple scales. We highlight early efforts in this area that employ reinforcement learning and game theoretic frameworks to elucidate decision-making in health and disease. Looking forwards, we emphasize a need for theory development and large-scale computational phenotyping in human subjects. PMID:22177032
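As a point of reference for the reinforcement-learning framing mentioned above (and not a claim about any specific model in the review), many computational-psychiatry studies build on the temporal-difference prediction error and value update:

```latex
% Temporal-difference prediction error and value update; \alpha is a learning
% rate and \gamma a discount factor.
\delta_t = r_t + \gamma\, V(s_{t+1}) - V(s_t), \qquad
V(s_t) \leftarrow V(s_t) + \alpha\, \delta_t
```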
Refocusing the Vision: The Future of Instructional Technology
ERIC Educational Resources Information Center
Pence, Harry E.; McIntosh, Steven
2011-01-01
Two decades ago, many campuses mobilized a major effort to deal with a clear problem: faculty and students needed access to desktop computing technologies. Now the situation is much more complex. Responding to the current challenges, like mobile computing and social networking, will be more difficult but equally important. There is a clear need for…
Implementing Equal Access Computer Labs.
ERIC Educational Resources Information Center
Clinton, Janeen; And Others
This paper discusses the philosophy followed in Palm Beach County to adapt computer literacy curriculum, hardware, and software to meet the needs of all children. The Department of Exceptional Student Education and the Department of Instructional Computing Services cooperated in planning strategies and coordinating efforts to implement equal…
[Earth Science Technology Office's Computational Technologies Project
NASA Technical Reports Server (NTRS)
Fischer, James (Technical Monitor); Merkey, Phillip
2005-01-01
This grant supported the effort to characterize the problem domain of the Earth Science Technology Office's Computational Technologies Project, to engage the Beowulf Cluster Computing Community as well as the High Performance Computing Research Community so that we can predict the applicability of said technologies to the scientific community represented by the CT project and formulate long term strategies to provide the computational resources necessary to attain the anticipated scientific objectives of the CT project. Specifically, the goal of the evaluation effort is to use the information gathered over the course of the Round-3 investigations to quantify the trends in scientific expectations, the algorithmic requirements and capabilities of high-performance computers to satisfy this anticipated need.
[Earth and Space Sciences Project Services for NASA HPCC
NASA Technical Reports Server (NTRS)
Merkey, Phillip
2002-01-01
This grant supported the effort to characterize the problem domain of the Earth Science Technology Office's Computational Technologies Project, to engage the Beowulf Cluster Computing Community as well as the High Performance Computing Research Community so that we can predict the applicability of said technologies to the scientific community represented by the CT project and formulate long term strategies to provide the computational resources necessary to attain the anticipated scientific objectives of the CT project. Specifically, the goal of the evaluation effort is to use the information gathered over the course of the Round-3 investigations to quantify the trends in scientific expectations, the algorithmic requirements and capabilities of high-performance computers to satisfy this anticipated need.
Large Scale Computing and Storage Requirements for High Energy Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerber, Richard A.; Wasserman, Harvey
2010-11-24
The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes a section that describes efforts already underway or planned at NERSC that address requirements collected at the workshop. NERSC has many initiatives in progress that address key workshop findings and are aligned with NERSC's strategic plans.
Continuous quality improvement and medical informatics: the convergent synergy.
Werth, G R; Connelly, D P
1992-01-01
Continuous quality improvement (CQI) and medical informatics specialists need to converge their efforts to create synergy for improving health care. Health care CQI needs medical informatics' expertise and technology to build the information systems needed to manage health care organizations according to quality improvement principles. Medical informatics needs CQI's philosophy and methods to build health care information systems that can evolve to meet the changing needs of clinicians and other stakeholders. This paper explores the philosophical basis for convergence of CQI and medical informatics efforts, and then examines a clinical computer workstation development project that is applying a combined approach.
Issues in undergraduate education in computational science and high performance computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marchioro, T.L. II; Martin, D.
1994-12-31
The ever-increasing need for mathematical and computational literacy within society and among members of the work force has generated enormous pressure to revise and improve the teaching of related subjects throughout the curriculum, particularly at the undergraduate level. The Calculus Reform movement is perhaps the best known example of an organized initiative in this regard. The UCES (Undergraduate Computational Engineering and Science) project, an effort funded by the Department of Energy and administered through the Ames Laboratory, is sponsoring an informal and open discussion of the salient issues confronting efforts to improve and expand the teaching of computational science as a problem-oriented, interdisciplinary approach to scientific investigation. Although the format is open, the authors hope to consider pertinent questions such as: (1) How can faculty and research scientists obtain the recognition necessary to further excellence in teaching the mathematical and computational sciences? (2) What sort of educational resources--both hardware and software--are needed to teach computational science at the undergraduate level? Are traditional procedural languages sufficient? Are PCs enough? Are massively parallel platforms needed? (3) How can electronic educational materials be distributed in an efficient way? Can they be made interactive in nature? How should such materials be tied to the World Wide Web and the growing "Information Superhighway"?
An efficient method for hybrid density functional calculation with spin-orbit coupling
NASA Astrophysics Data System (ADS)
Wang, Maoyuan; Liu, Gui-Bin; Guo, Hong; Yao, Yugui
2018-03-01
In first-principles calculations, hybrid functionals are often used to improve accuracy over local exchange-correlation functionals. A drawback is that evaluating the hybrid functional requires significantly more computing effort. When spin-orbit coupling (SOC) is taken into account, the non-collinear spin structure increases the computing effort by at least eight times. As a result, hybrid functional calculations with SOC are intractable in most cases. In this paper, we present an approximate solution to this problem by developing an efficient method based on a mixed linear combination of atomic orbitals (LCAO) scheme. We demonstrate the power of this method using several examples and show that the results compare very well with those of direct hybrid functional calculations with SOC, yet the method only requires a computing effort similar to that without SOC. The presented technique provides a good balance between computing efficiency and accuracy, and it can be extended to magnetic materials.
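For context (a generic textbook form, not necessarily the specific functional used in this work), a global hybrid functional mixes a fraction of exact exchange into a semilocal functional, and it is the nonlocal exact-exchange term that drives the extra cost noted above:

```latex
% Generic global hybrid (PBE0-type) exchange-correlation energy; a is the
% exact-exchange mixing fraction (a = 1/4 in PBE0).
E_{xc}^{\mathrm{hyb}} = a\, E_{x}^{\mathrm{HF}} + (1 - a)\, E_{x}^{\mathrm{DFT}} + E_{c}^{\mathrm{DFT}}
```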
Computational Models of Anterior Cingulate Cortex: At the Crossroads between Prediction and Effort.
Vassena, Eliana; Holroyd, Clay B; Alexander, William H
2017-01-01
In the last two decades the anterior cingulate cortex (ACC) has become one of the most investigated areas of the brain. Extensive neuroimaging evidence suggests countless functions for this region, ranging from conflict and error coding, to social cognition, pain and effortful control. In response to this burgeoning amount of data, a proliferation of computational models has tried to characterize the neurocognitive architecture of ACC. Early seminal models provided a computational explanation for a relatively circumscribed set of empirical findings, mainly accounting for EEG and fMRI evidence. More recent models have focused on ACC's contribution to effortful control. In parallel to these developments, several proposals attempted to explain within a single computational framework a wider variety of empirical findings that span different cognitive processes and experimental modalities. Here we critically evaluate these modeling attempts, highlighting the continued need to reconcile the array of disparate ACC observations within a coherent, unifying framework.
NASA Technical Reports Server (NTRS)
1993-01-01
This video documents efforts at NASA Langley Research Center to improve safety and economy in aircraft. Featured are the cockpit weather information needs computer system, which relays real time weather information to the pilot, and efforts to improve techniques to detect structural flaws and corrosion, such as the thermal bond inspection system.
Parallel computing for automated model calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.
2002-07-29
Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, magnitude and timing of stream flow peak). An automated calibration process that allows real-time updating of data/models, allowing scientists to focus effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data, and output only a small amount of statistical information for each calibration run. A typical auto calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two Auto Calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed computing, cross-platform environment. They allow incorporation of 'smart' calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
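The pattern described, many independent model runs with small inputs and only summary statistics returned, is captured by the hedged Python sketch below; run_model() and misfit() are hypothetical placeholders, not the authors' calibration software.

```python
# Sketch of the embarrassingly parallel auto-calibration pattern: run the model
# many times with varied parameters, keep only summary statistics per run.
import random
from concurrent.futures import ProcessPoolExecutor

def run_model(params):
    """Hypothetical stand-in for one natural-resources model run."""
    peak_flow = 100.0 * params["roughness"] + random.uniform(-1.0, 1.0)
    return {"params": params, "peak_flow": peak_flow}

def misfit(stats, observed_peak=105.0):
    """Single-objective misfit; a real tool would weigh multiple objectives."""
    return abs(stats["peak_flow"] - observed_peak)

if __name__ == "__main__":
    trials = [{"roughness": random.uniform(0.5, 1.5)} for _ in range(10_000)]
    with ProcessPoolExecutor() as pool:
        summaries = list(pool.map(run_model, trials))
    best = min(summaries, key=misfit)
    print("best parameter set:", best["params"])
```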
Reducing obesity will require involvement of all sectors of society.
Hill, James O; Peters, John C; Blair, Steven N
2015-02-01
We need all sectors of society involved in reducing obesity. The food industry's effort to reduce energy intake as part of the Healthy Weight Commitment Foundation is a significant step in the right direction and should be recognized as such by the public health community. We also need to get organizations that promote physical inactivity, such as computer, automobile, and entertainment industries, to become engaged in efforts to reduce obesity. © 2014 The Obesity Society.
Application Portable Parallel Library
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott
1995-01-01
Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also includes heterogeneous collection of networked computers.) Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.
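APPL itself is a C/Fortran library; as a hedged modern analogue of the same idea (a single message-passing interface that lets one program run unchanged on different parallel machines), the Python sketch below uses MPI through mpi4py. It is not APPL's API.

```python
# Not APPL's API: an MPI (mpi4py) analogue of the same portability idea, where
# one message-passing program runs unchanged on different parallel machines.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Root hands a small work description to every other process.
    for dest in range(1, comm.Get_size()):
        comm.send({"task": dest}, dest=dest, tag=0)
else:
    task = comm.recv(source=0, tag=0)
    print(f"rank {rank} received {task}")
```

Such a program would be launched with, for example, `mpiexec -n 4 python example.py` (the filename is hypothetical).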
ERIC Educational Resources Information Center
Decker, Adrienne; Phelps, Andrew; Egert, Christopher A.
2017-01-01
This article explores the critical need to articulate computing as a creative discipline and the potential for gender and ethnic diversity that such efforts enable. By embracing a culture shift within the discipline and using games as a medium of discourse, we can engage students and faculty in a broader definition of computing. The transformative…
Validating the soil vulnerability index for a claypan watershed
USDA-ARS?s Scientific Manuscript database
Assessment studies of conservation efforts have shown that best management practices were not always implemented in the most vulnerable areas where they are most needed. While complex computer simulation models can be used to identify these areas, resources needed for using such models are beyond re...
Dealing with Diversity in Computational Cancer Modeling
Johnson, David; McKeever, Steve; Stamatakos, Georgios; Dionysiou, Dimitra; Graf, Norbert; Sakkalis, Vangelis; Marias, Konstantinos; Wang, Zhihui; Deisboeck, Thomas S.
2013-01-01
This paper discusses the need for interconnecting computational cancer models from different sources and scales within clinically relevant scenarios to increase the accuracy of the models and speed up their clinical adaptation, validation, and eventual translation. We briefly review current interoperability efforts drawing upon our experiences with the development of in silico models for predictive oncology within a number of European Commission Virtual Physiological Human initiative projects on cancer. A clinically relevant scenario, addressing brain tumor modeling that illustrates the need for coupling models from different sources and levels of complexity, is described. General approaches to enabling interoperability using XML-based markup languages for biological modeling are reviewed, concluding with a discussion on efforts towards developing cancer-specific XML markup to couple multiple component models for predictive in silico oncology. PMID:23700360
Texas Education Computer Cooperative (TECC) Products and Services.
ERIC Educational Resources Information Center
Texas Education Agency, Austin.
Designed to broaden awareness of the Texas Education Computer Cooperative (TECC) and its efforts to develop products and services which can be most useful to schools, this publication provides an outline of the need, purpose, organization, and funding of TECC and descriptions of 19 projects. The project descriptions are organized by fiscal year in…
An Analysis of Cloud Computing with Amazon Web Services for the Atmospheric Science Data Center
NASA Astrophysics Data System (ADS)
Gleason, J. L.; Little, M. M.
2013-12-01
NASA science and engineering efforts rely heavily on compute and data handling systems. The nature of NASA science data is such that it is not restricted to NASA users; instead, it is widely shared across a globally distributed user community including scientists, educators, policy decision makers, and the public. Therefore, NASA science computing is a candidate use case for cloud computing, where compute resources are outsourced to an external vendor. Amazon Web Services (AWS) is a commercial cloud computing service developed to use excess computing capacity at Amazon, and it potentially provides an alternative to costly and potentially underutilized dedicated acquisitions whenever NASA scientists or engineers require additional data processing. AWS aims to provide a simplified avenue for NASA scientists and researchers to share large, complex data sets with external partners and the public. AWS has been used extensively by JPL for a wide range of computing needs and was previously tested on a NASA Agency basis during the Nebula testing program. Its ability to support the needs of the Langley Science Directorate must be evaluated by integrating it with real-world operational needs across NASA, along with the associated maturity that such integration would bring. The strengths and weaknesses of this architecture and its ability to support general science and engineering applications have been demonstrated during the previous testing. The Langley Office of the Chief Information Officer, in partnership with the Atmospheric Science Data Center (ASDC), has established a pilot business interface to utilize AWS cloud computing resources on an organization- and project-level, pay-per-use model. This poster discusses an effort to evaluate the feasibility of the pilot business interface from a project-level perspective by specifically using a processing scenario involving the Clouds and the Earth's Radiant Energy System (CERES) project.
NASA Astrophysics Data System (ADS)
Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.
1989-03-01
The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence, the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real-time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and, using a neurally based computing substrate, it can complete all necessary visual tasks in real-time.
Brain transcriptome atlases: a computational perspective.
Mahfouz, Ahmed; Huisman, Sjoerd M H; Lelieveldt, Boudewijn P F; Reinders, Marcel J T
2017-05-01
The immense complexity of the mammalian brain is largely reflected in the underlying molecular signatures of its billions of cells. Brain transcriptome atlases provide valuable insights into gene expression patterns across different brain areas throughout the course of development. Such atlases allow researchers to probe the molecular mechanisms which define neuronal identities, neuroanatomy, and patterns of connectivity. Despite the immense effort put into generating such atlases, to answer fundamental questions in neuroscience, an even greater effort is needed to develop methods to probe the resulting high-dimensional multivariate data. We provide a comprehensive overview of the various computational methods used to analyze brain transcriptome atlases.
An Overview of Recent Developments in Computational Aeroelasticity
NASA Technical Reports Server (NTRS)
Bennett, Robert M.; Edwards, John W.
2004-01-01
The motivation for Computational Aeroelasticity (CA) and the elements of one type of the analysis or simulation process are briefly reviewed. The need for streamlining and improving the overall process to reduce elapsed time and improve overall accuracy is discussed. Further effort is needed to establish the credibility of the methodology, obtain experience, and to incorporate the experience base to simplify the method for future use. Experience with the application of a variety of Computational Aeroelasticity programs is summarized for the transonic flutter of two wings, the AGARD 445.6 wing and a typical business jet wing. There is a compelling need for a broad range of additional flutter test cases for further comparisons. Some existing data sets that may offer CA challenges are presented.
Expanding Computer Science Education in Schools: Understanding Teacher Experiences and Challenges
ERIC Educational Resources Information Center
Yadav, Aman; Gretter, Sarah; Hambrusch, Susanne; Sands, Phil
2017-01-01
The increased push for teaching computer science (CS) in schools in the United States requires training a large number of new K-12 teachers. The current efforts to increase the number of CS teachers have predominantly focused on training teachers from other content areas. In order to support these beginning CS teachers, we need to better…
Using a cloud to replenish parched groundwater modeling efforts.
Hunt, Randall J; Luchette, Joseph; Schreuder, Willem A; Rumbaugh, James O; Doherty, John; Tonkin, Matthew J; Rumbaugh, Douglas B
2010-01-01
Groundwater models can be improved by introduction of additional parameter flexibility and simultaneous use of soft-knowledge. However, these sophisticated approaches have high computational requirements. Cloud computing provides unprecedented access to computing power via the Internet to facilitate the use of these techniques. A modeler can create, launch, and terminate "virtual" computers as needed, paying by the hour, and save machine images for future use. Such cost-effective and flexible computing power empowers groundwater modelers to routinely perform model calibration and uncertainty analysis in ways not previously possible.
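The create/launch/terminate pattern described above looks roughly like the following when expressed with boto3, AWS's current Python SDK (which postdates this paper); the image ID, instance type, and region are placeholders, and this is not the authors' workflow.

```python
# Hedged sketch of creating, using, and terminating "virtual" computers on
# demand with boto3; all identifiers below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a small fleet of workers from a saved machine image.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image with the model preinstalled
    InstanceType="c5.xlarge",
    MinCount=1,
    MaxCount=8,
)
instance_ids = [inst["InstanceId"] for inst in response["Instances"]]

# ... dispatch calibration or uncertainty-analysis runs to the workers ...

# Terminate the workers when the batch finishes so hourly billing stops.
ec2.terminate_instances(InstanceIds=instance_ids)
```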
Using a cloud to replenish parched groundwater modeling efforts
Hunt, Randall J.; Luchette, Joseph; Schreuder, Willem A.; Rumbaugh, James O.; Doherty, John; Tonkin, Matthew J.; Rumbaugh, Douglas B.
2010-01-01
Groundwater models can be improved by introduction of additional parameter flexibility and simultaneous use of soft-knowledge. However, these sophisticated approaches have high computational requirements. Cloud computing provides unprecedented access to computing power via the Internet to facilitate the use of these techniques. A modeler can create, launch, and terminate “virtual” computers as needed, paying by the hour, and save machine images for future use. Such cost-effective and flexible computing power empowers groundwater modelers to routinely perform model calibration and uncertainty analysis in ways not previously possible.
Multimedia architectures: from desktop systems to portable appliances
NASA Astrophysics Data System (ADS)
Bhaskaran, Vasudev; Konstantinides, Konstantinos; Natarajan, Balas R.
1997-01-01
Future desktop and portable computing systems will have as their core an integrated multimedia system. Such a system will seamlessly combine digital video, digital audio, computer animation, text, and graphics. Furthermore, such a system will allow for mixed-media creation, dissemination, and interactive access in real time. Multimedia architectures that need to support these functions have traditionally required special display and processing units for the different media types. This approach tends to be expensive and is inefficient in its use of silicon. Furthermore, such media-specific processing units are unable to cope with the fluid nature of the multimedia market wherein the needs and standards are changing and system manufacturers may demand a single component media engine across a range of products. This constraint has led to a shift towards providing a single-component multimedia specific computing engine that can be integrated easily within desktop systems, tethered consumer appliances, or portable appliances. In this paper, we review some of the recent architectural efforts in developing integrated media systems. We primarily focus on two efforts, namely the evolution of multimedia-capable general purpose processors and a more recent effort in developing single component mixed media co-processors. Design considerations that could facilitate the migration of these technologies to a portable integrated media system also are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, S.A.
In a computing landscape with a plethora of different hardware architectures and supporting software systems ranging from compilers to operating systems, there is an obvious and strong need for a philosophy of software development that lends itself to the design and construction of portable code systems. The current efforts to standardize software bear witness to this need. SABrE is an effort to implement a software development environment which is itself portable and promotes the design and construction of portable applications. SABrE does not include such important tools as editors and compilers. Well-built tools of that kind are readily available across virtually all computer platforms. The areas that SABrE addresses are at a higher level, involving issues such as data portability, portable inter-process communication, and graphics. These blocks of functionality have particular significance to the kind of code development done at LLNL. That is partly why the general computing community has not supplied us with these tools already. This is another key feature of the software development environments which we must recognize. The general computing community cannot and should not be expected to produce all of the tools which we require.
Singular Perturbations and Time-Scale Methods in Control Theory: Survey 1976-1982.
1982-12-01
established in the 1960s, when they first became a means for simplified computation of optimal trajectories. It was soon recognized that singular... null-space of P(ao). The asymptotic values of the invariant zeros and associated invariant-zero directions as ε → 0 are the values computed from the... 7. WEAK COUPLING AND TIME SCALES. The need for model simplification with a reduction (or distribution) of computational effort is
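Although the abstract above is fragmentary, the object of such surveys is the standard singularly perturbed (two-time-scale) model, reproduced here for reference:

```latex
% Standard two-time-scale system; as \varepsilon \to 0 the fast dynamics collapse
% onto the quasi-steady-state manifold and yield the reduced slow model.
\dot{x} = f(x, z, \varepsilon), \qquad \varepsilon\, \dot{z} = g(x, z, \varepsilon),
\qquad 0 = g(\bar{x}, \bar{z}, 0) \;\Rightarrow\; \bar{z} = h(\bar{x}),
\quad \dot{\bar{x}} = f(\bar{x}, h(\bar{x}), 0)
```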
Strategic directions of computing at Fermilab
NASA Astrophysics Data System (ADS)
Wolbers, Stephen
1998-05-01
Fermilab computing has changed a great deal over the years, driven by the demands of the Fermilab experimental community to record and analyze larger and larger datasets, by the desire to take advantage of advances in computing hardware and software, and by the advances coming from the R&D efforts of the Fermilab Computing Division. The strategic directions of Fermilab Computing continue to be driven by the needs of the experimental program. The current fixed-target run will produce over 100 TBytes of raw data and systems must be in place to allow the timely analysis of the data. The collider run II, beginning in 1999, is projected to produce of order 1 PByte of data per year. There will be a major change in methodology and software language as the experiments move away from FORTRAN and into object-oriented languages. Increased use of automation and the reduction of operator-assisted tape mounts will be required to meet the needs of the large experiments and large data sets. Work will continue on higher-rate data acquisition systems for future experiments and projects. R&D projects will be pursued as necessary to provide software, tools, or systems which cannot be purchased or acquired elsewhere. A closer working relation with other high energy laboratories will be pursued to reduce duplication of effort and to allow effective collaboration on many aspects of HEP computing.
Candidate R&D Thrusts for the Software Technology Initiative.
1981-05-01
computer-aided design and manufacturing efforts provide examples of multiple representations and multiple manipulation modes. R&D difficulties exist in... farfetched, but the potential payoffs are enormous. References: Birk, J., and R. Kelley. Research Needed to Advance the State of Knowledge in Robotics. In... and specification languages would be beneficial. This R&D effort may also result in fusion with management tools with which an acquisition manager
Computing at the speed limit (supercomputers)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernhard, R.
1982-07-01
The author discusses how unheralded efforts in the United States, mainly in universities, have removed major stumbling blocks to building cost-effective superfast computers for scientific and engineering applications within five years. These computers would have sustained speeds of billions of floating-point operations per second (flops), whereas with the fastest machines today the top sustained speed is only 25 million flops, with bursts to 160 megaflops. Cost-effective superfast machines can be built because of advances in very large-scale integration and the special software needed to program the new machines. VLSI greatly reduces the cost per unit of computing power. The development of such computers would come at an opportune time. Although the US leads the world in large-scale computer technology, its supremacy is now threatened, not surprisingly, by the Japanese. Publicized reports indicate that the Japanese government is funding a cooperative effort by commercial computer manufacturers to develop superfast computers, about 1000 times faster than modern supercomputers. The US computer industry, by contrast, has balked at attempting to boost computer power so sharply because of the uncertain market for the machines and the failure of similar projects in the past to show significant results.
Computational Aerodynamic Simulations of a Spacecraft Cabin Ventilation Fan Design
NASA Technical Reports Server (NTRS)
Tweedt, Daniel L.
2010-01-01
Quieter working environments for astronauts are needed if future long-duration space exploration missions are to be safe and productive. Ventilation and payload cooling fans are known to be dominant sources of noise, with the International Space Station being a good case in point. To address this issue cost effectively, early attention to fan design, selection, and installation has been recommended, leading to an effort by NASA to examine the potential for small-fan noise reduction by improving fan aerodynamic design. As a preliminary part of that effort, the aerodynamics of a cabin ventilation fan designed by Hamilton Sundstrand has been simulated using computational fluid dynamics codes, and the computed solutions analyzed to quantify various aspects of the fan aerodynamics and performance. Four simulations were performed at the design rotational speed: two at the design flow rate and two at off-design flow rates. Following a brief discussion of the computational codes, various aerodynamic- and performance-related quantities derived from the computed flow fields are presented along with relevant flow field details. The results show that the computed fan performance is in generally good agreement with stated design goals.
The ACI-REF Program: Empowering Prospective Computational Researchers
NASA Astrophysics Data System (ADS)
Cuma, M.; Cardoen, W.; Collier, G.; Freeman, R. M., Jr.; Kitzmiller, A.; Michael, L.; Nomura, K. I.; Orendt, A.; Tanner, L.
2014-12-01
The ACI-REF program, Advanced Cyberinfrastructure - Research and Education Facilitation, represents a consortium of academic institutions seeking to further advance the capabilities of their respective campus research communities through an extension of the personal connections and educational activities that underlie the unique and often specialized cyberinfrastructure at each institution. This consortium currently includes Clemson University, Harvard University, University of Hawai'i, University of Southern California, University of Utah, and University of Wisconsin. Working together in a coordinated effort, the consortium is dedicated to the adoption of models and strategies which leverage the expertise and experience of its members with a goal of maximizing the impact of each institution's investment in research computing. The ACI-REFs (facilitators) are tasked with making connections and building bridges between the local campus researchers and the many different providers of campus, commercial, and national computing resources. Through these bridges, ACI-REFs assist researchers from all disciplines in understanding their computing and data needs and in mapping these needs to existing capabilities or providing assistance with development of these capabilities. From the Earth sciences perspective, we will give examples of how this assistance improved methods and workflows in geophysics, geography and atmospheric sciences. We anticipate that this effort will expand the number of researchers who become self-sufficient users of advanced computing resources, allowing them to focus on making research discoveries in a more timely and efficient manner.
ERIC Educational Resources Information Center
Willemssen, Joel C.
This document provides testimony on the U.S. Department of Education's efforts to ensure that its computer systems supporting critical student financial aid activities will be able to process information reliably through the turn of the century. After providing some background information, the statement recaps prior findings and the actions that…
Clinical nursing informatics. Developing tools for knowledge workers.
Ozbolt, J G; Graves, J R
1993-06-01
Current research in clinical nursing informatics is proceeding along three important dimensions: (1) identifying and defining nursing's language and structuring its data; (2) understanding clinical judgment and how computer-based systems can facilitate and not replace it; and (3) discovering how well-designed systems can transform nursing practice. A number of efforts are underway to find and use language that accurately represents nursing and that can be incorporated into computer-based information systems. These efforts add to understanding nursing problems, interventions, and outcomes, and provide the elements for databases from which nursing's costs and effectiveness can be studied. Research on clinical judgment focuses on how nurses (perhaps with different levels of expertise) assess patient needs, set goals, and plan and deliver care, as well as how computer-based systems can be developed to aid these cognitive processes. Finally, investigators are studying not only how computers can help nurses with the mechanics and logistics of processing information but also and more importantly how access to informatics tools changes nursing care.
Blended Learning: An Innovative Approach
ERIC Educational Resources Information Center
Lalima; Dangwal, Kiran Lata
2017-01-01
Blended learning is an innovative concept that embraces the advantages of both traditional teaching in the classroom and ICT-supported learning, including both offline learning and online learning. It has scope for collaborative learning, constructive learning, and computer assisted learning (CAI). Blended learning needs rigorous efforts, right…
NASA Technical Reports Server (NTRS)
Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.
2002-01-01
The rapid increase in available computational power over the last decade has enabled higher resolution flow simulations and more widespread use of unstructured grid methods for complex geometries. While much of this effort has been focused on steady-state calculations in the aerodynamics community, the need to accurately predict off-design conditions, which may involve substantial amounts of flow separation, points to the need to efficiently simulate unsteady flow fields. Accurate unsteady flow simulations can easily require several orders of magnitude more computational effort than a corresponding steady-state simulation. For this reason, techniques for improving the efficiency of unsteady flow simulations are required in order to make such calculations feasible in the foreseeable future. The purpose of this work is to investigate possible reductions in computer time due to the choice of an efficient time-integration scheme from a series of schemes differing in the order of time-accuracy, and by the use of more efficient techniques to solve the nonlinear equations which arise while using implicit time-integration schemes. This investigation is carried out in the context of a two-dimensional unstructured mesh laminar Navier-Stokes solver.
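As one concrete example of the family of implicit schemes of differing temporal order discussed above (a standard scheme, not a claim about the paper's specific choice), the second-order backward-difference (BDF2) discretization of du/dt = R(u) requires a nonlinear solve for u^{n+1} at every physical time step, which is exactly the cost that efficient nonlinear solution techniques target:

```latex
% BDF2 implicit time integration for du/dt = R(u); each step requires solving a
% nonlinear system for u^{n+1}.
\frac{3\,u^{n+1} - 4\,u^{n} + u^{n-1}}{2\,\Delta t} = R\!\left(u^{n+1}\right)
```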
Overview of heat transfer and fluid flow problem areas encountered in Stirling engine modeling
NASA Technical Reports Server (NTRS)
Tew, Roy C., Jr.
1988-01-01
NASA Lewis Research Center has been managing Stirling engine development programs for over a decade. In addition to contractual programs, this work has included in-house engine testing and development of engine computer models. Attempts to validate Stirling engine computer models with test data have demonstrated that engine thermodynamic losses need better characterization. Various Stirling engine thermodynamic losses and efforts that are underway to characterize these losses are discussed.
An Artificial Neural Network-Based Decision-Support System for Integrated Network Security
2014-09-01
group that they need to know in order to make team-based decisions in real-time environments, (c) Employ secure cloud computing services to host mobile... out-of-the-loop syndrome and create complexity creep. As a result, full automation efforts can lead to inappropriate decision-making despite a
PREPARING FOR EXASCALE: ORNL Leadership Computing Application Requirements and Strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joubert, Wayne; Kothe, Douglas B; Nam, Hai Ah
2009-12-01
In 2009 the Oak Ridge Leadership Computing Facility (OLCF), a U.S. Department of Energy (DOE) facility at the Oak Ridge National Laboratory (ORNL) National Center for Computational Sciences (NCCS), elicited petascale computational science requirements from leading computational scientists in the international science community. This effort targeted science teams whose projects received large computer allocation awards on OLCF systems. A clear finding of this process was that in order to reach their science goals over the next several years, multiple projects will require computational resources in excess of an order of magnitude more powerful than those currently available. Additionally, for the longer term, next-generation science will require computing platforms of exascale capability in order to reach DOE science objectives over the next decade. It is generally recognized that achieving exascale in the proposed time frame will require disruptive changes in computer hardware and software. Processor hardware will become necessarily heterogeneous and will include accelerator technologies. Software must undergo the concomitant changes needed to extract the available performance from this heterogeneous hardware. This disruption portends to be substantial, not unlike the change to the message passing paradigm in the computational science community over 20 years ago. Since technological disruptions take time to assimilate, we must aggressively embark on this course of change now, to ensure that science applications and their underlying programming models are mature and ready when exascale computing arrives. This includes initiation of application readiness efforts to adapt existing codes to heterogeneous architectures, support of relevant software tools, and procurement of next-generation hardware testbeds for porting and testing codes. The 2009 OLCF requirements process identified numerous actions necessary to meet this challenge: (1) Hardware capabilities must be advanced on multiple fronts, including peak flops, node memory capacity, interconnect latency, interconnect bandwidth, and memory bandwidth. (2) Effective parallel programming interfaces must be developed to exploit the power of emerging hardware. (3) Science application teams must now begin to adapt and reformulate application codes to the new hardware and software, typified by hierarchical and disparate layers of compute, memory and concurrency. (4) Algorithm research must be realigned to exploit this hierarchy. (5) When possible, mathematical libraries must be used to encapsulate the required operations in an efficient and useful way. (6) Software tools must be developed to make the new hardware more usable. (7) Science application software must be improved to cope with the increasing complexity of computing systems. (8) Data management efforts must be readied for the larger quantities of data generated by larger, more accurate science models. Requirements elicitation, analysis, validation, and management comprise a difficult and inexact process, particularly in periods of technological change. Nonetheless, the OLCF requirements modeling process is becoming increasingly quantitative and actionable, as the process becomes more developed and mature, and the process this year has identified clear and concrete steps to be taken.
This report discloses (1) the fundamental science case driving the need for the next generation of computer hardware, (2) application usage trends that illustrate the science need, (3) application performance characteristics that drive the need for increased hardware capabilities, (4) resource and process requirements that make the development and deployment of science applications on next-generation hardware successful, and (5) summary recommendations for the required next steps within the computer and computational science communities.
Generic approach to access barriers in dehydrogenation reactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Liang; Vilella, Laia; Abild-Pedersen, Frank
The introduction of linear energy correlations, which explicitly relate adsorption energies of reaction intermediates and activation energies in heterogeneous catalysis, has proven to be a key component in the computational search for new and promising catalysts. A simple linear approach to estimate activation energies still requires a significant computational effort. To simplify this process and at the same time incorporate the need for enhanced complexity of reaction intermediates, we generalize a recently proposed approach that evaluates transition state energies based entirely on bond-order conservation arguments. Here, we show that similar variation of the local electronic structure along the reaction coordinate introduces a set of general functions that accurately defines the transition state energy and are transferable to other reactions with similar bonding nature. With such an approach, more complex reaction intermediates can be targeted with an insignificant increase in computational effort and without loss of accuracy.
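The "linear energy correlations" referred to above are typified by Brønsted-Evans-Polanyi-type relations, which tie the activation energy of a reaction to the energy of its intermediates:

```latex
% Br{\o}nsted--Evans--Polanyi-type linear correlation: E_a is the activation
% energy, \Delta E the reaction (intermediate adsorption) energy, and \alpha,
% \beta are constants fitted for a family of similar reactions.
E_a = \alpha\, \Delta E + \beta
```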
Generic approach to access barriers in dehydrogenation reactions
Yu, Liang; Vilella, Laia; Abild-Pedersen, Frank
2018-03-08
The introduction of linear energy correlations, which explicitly relate adsorption energies of reaction intermediates and activation energies in heterogeneous catalysis, has proven to be a key component in the computational search for new and promising catalysts. A simple linear approach to estimate activation energies still requires a significant computational effort. To simplify this process and at the same time incorporate the need for enhanced complexity of reaction intermediates, we generalize a recently proposed approach that evaluates transition state energies based entirely on bond-order conservation arguments. Here, we show that similar variation of the local electronic structure along the reaction coordinate introduces a set of general functions that accurately defines the transition state energy and are transferable to other reactions with similar bonding nature. With such an approach, more complex reaction intermediates can be targeted with an insignificant increase in computational effort and without loss of accuracy.
Formal Representations of Eligibility Criteria: A Literature Review
Weng, Chunhua; Tu, Samson W.; Sim, Ida; Richesson, Rachel
2010-01-01
Standards-based, computable knowledge representations for eligibility criteria are increasingly needed to provide computer-based decision support for automated research participant screening, clinical evidence application, and clinical research knowledge management. We surveyed the literature and identified five aspects of eligibility criteria knowledge representations that contribute to the various research and clinical applications: the intended use of computable eligibility criteria, the classification of eligibility criteria, the expression language for representing eligibility rules, the encoding of eligibility concepts, and the modeling of patient data. We consider three of them (expression language, codification of eligibility concepts, and patient data modeling), to be essential constructs of a formal knowledge representation for eligibility criteria. The requirements for each of the three knowledge constructs vary for different use cases, which therefore should inform the development and choice of the constructs toward cost-effective knowledge representation efforts. We discuss the implications of our findings for standardization efforts toward sharable knowledge representation of eligibility criteria. PMID:20034594
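As a toy illustration of how the three constructs (an expression language, coded eligibility concepts, and a patient data model) fit together, not an example drawn from any particular standard, a single criterion might be made computable as follows; the concept code and field names are hypothetical.

```python
# Toy computable eligibility criterion combining a coded concept, a rule
# expression, and a minimal patient data model; codes and fields are
# illustrative only, not from a specific standard.
from dataclasses import dataclass, field

@dataclass
class Patient:
    age: int
    condition_codes: set = field(default_factory=set)  # terminology codes

DIABETES_CODE = "73211009"  # illustrative concept code for diabetes mellitus

def eligible(patient: Patient) -> bool:
    """Computable form of: adults (age >= 18) with a recorded diabetes diagnosis."""
    return patient.age >= 18 and DIABETES_CODE in patient.condition_codes

print(eligible(Patient(age=54, condition_codes={"73211009"})))  # True
```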
Computer-Aided Fabrication of Integrated Circuits
1989-09-30
baseline CMOS process. One result of this effort was the identification of several residual bugs in the PATRAN graphics processor. The vendor promises... virtual memory. The internal NuBus architecture uses a 32-bit LISP processor running at 10 megahertz (100 ns clock period). An Ethernet controller is... For different patterns, we need different masks for the photo step, and for different micro-structures of the wafers, we need different etching
NASA Technical Reports Server (NTRS)
1990-01-01
NASA's Space Station Freedom Program (SSFP) planning efforts have identified a need for a payload training simulator system to serve as both a training facility and as a demonstrator to validate operational concepts. The envisioned MSFC Payload Training Complex (PTC) required to meet this need will train the Space Station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. The Simulation Computer System (SCS) is the computer hardware, software, and workstations that will support the Payload Training Complex at MSFC. The purpose of this SCS Study is to investigate issues related to the SCS, alternative requirements, simulator approaches, and state-of-the-art technologies to develop candidate concepts and designs.
Space Station Simulation Computer System (SCS) study for NASA/MSFC. Phased development plan
NASA Technical Reports Server (NTRS)
1990-01-01
NASA's Space Station Freedom Program (SSFP) planning efforts have identified a need for a payload training simulator system to serve as both a training facility and as a demonstrator to validate operational concepts. The envisioned MSFC Payload Training Complex (PTC) required to meet this need will train the Space Station payload scientists, station scientists and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. The Simulation Computer System (SCS) is made up of computer hardware, software, and workstations that will support the Payload Training Complex at MSFC. The purpose of this SCS Study is to investigate issues related to the SCS, alternative requirements, simulator approaches, and state-of-the-art technologies to develop candidate concepts and designs.
NASA Technical Reports Server (NTRS)
1990-01-01
NASA's Space Station Freedom Program (SSFP) planning efforts have identified a need for a payload training simulator system to serve as both a training facility and as a demonstrator to validate operational concepts. The envisioned MSFC Payload Training Complex (PTC) required to meet this need will train the Space Station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. The Simulation Computer System (SCS) is made up of the computer hardware, software, and workstations that will support the Payload Training Complex at MSFC. The purpose of this SCS Study is to investigate issues related to the SCS, alternative requirements, simulator approaches, and state-of-the-art technologies to develop candidate concepts and designs.
Space Station Simulation Computer System (SCS) study for NASA/MSFC. Operations concept report
NASA Technical Reports Server (NTRS)
1990-01-01
NASA's Space Station Freedom Program (SSFP) planning efforts have identified a need for a payload training simulator system to serve as both a training facility and as a demonstrator to validate operational concepts. The envisioned MSFC Payload Training Complex (PTC) required to meet this need will train the Space Station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. The Simulation Computer System (SCS) is made up of computer hardware, software, and workstations that will support the Payload Training Complex at MSFC. The purpose of this SCS Study is to investigate issues related to the SCS, alternative requirements, simulator approaches, and state-of-the-art technologies to develop candidate concepts and designs.
Autonomic Computing: Panacea or Poppycock?
NASA Technical Reports Server (NTRS)
Sterritt, Roy; Hinchey, Mike
2005-01-01
Autonomic Computing arose out of a need for a means to cope with the rapidly growing complexity of integrating, managing, and operating computer-based systems, as well as a need to reduce the total cost of ownership of today's systems. Autonomic Computing (AC) as a discipline was proposed by IBM in 2001, with the vision to develop self-managing systems. As the name implies, the influence for the new paradigm is the human body's autonomic system, which regulates vital bodily functions such as heart rate, body temperature, and blood flow, all without conscious effort. The vision is to create selfware through self-* properties. The initial set of properties, in terms of objectives, was self-configuring, self-healing, self-optimizing and self-protecting, along with attributes of self-awareness, self-monitoring and self-adjusting. This self-* list has grown: self-anticipating, self-critical, self-defining, self-destructing, self-diagnosis, self-governing, self-organized, self-reflecting, and self-simulation, for instance.
1988-11-01
Manufacturing System 22 4. Similar Parts Based on Shape or Manufacturing Process 24 5. Projected Annual Unit Robot Sales and Installed Base Through 1992 30 6. U.S...effort needed to perform the personnel, product design, marketing and advertising, and finance tasks of the firm. Level III controls the resource...planning and accounting functions of the firm. Systems at this level support purchasing, accounts payable, accounts receivable, master scheduling and sales
DoD's Efforts to Consolidate Data Centers Need Improvement
2016-03-29
Consolidation Initiative, February 26, 2010. Green IT minimizes the negative environmental impact of IT operations by ensuring that computers and computer-related...objectives for consolidating data centers. DoD's objectives were to: • reduce cost; • reduce environmental impact; • improve efficiency and service levels...number of DoD data centers. ...information in DCIM, the DoD CIO did not confirm whether those changes would impact DoD's
Benchmark problems and solutions
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.
1995-01-01
The scientific committee, after careful consideration, adopted six categories of benchmark problems for the workshop. These problems do not cover all the important computational issues relevant to Computational Aeroacoustics (CAA). The deciding factor in limiting the number of categories to six was the amount of effort needed to solve these problems. For reference purposes, the benchmark problems are provided here. They are followed by the exact or approximate analytical solutions. At present, an exact solution for the Category 6 problem is not available.
A study of workstation computational performance for real-time flight simulation
NASA Technical Reports Server (NTRS)
Maddalon, Jeffrey M.; Cleveland, Jeff I., II
1995-01-01
With recent advances in microprocessor technology, some have suggested that modern workstations provide enough computational power to properly operate a real-time simulation. This paper presents the results of a computational benchmark, based on actual real-time flight simulation code used at Langley Research Center, which was executed on various workstation-class machines. The benchmark was executed on different machines from several companies including: CONVEX Computer Corporation, Cray Research, Digital Equipment Corporation, Hewlett-Packard, Intel, International Business Machines, Silicon Graphics, and Sun Microsystems. The machines are compared by their execution speed, computational accuracy, and porting effort. The results of this study show that the raw computational power needed for real-time simulation is now offered by workstations.
2013-07-01
applications introduced by third-party developers to connect to the Android operating system through an open software interface. This allows customers...Definition Multimedia Interface have been developed to address the need for standards for high-definition televisions and computer monitors. Perhaps
Assistive Technology Developments in Puerto Rico.
ERIC Educational Resources Information Center
Lizama, Mauricio A.; Mendez, Hector L.
Recent efforts to develop Spanish-based adaptations for alternate computer input devices are considered, as are their implications for Hispanics with disabilities and for the development of language sensitive devices worldwide. Emphasis is placed on the particular need to develop low-cost high technology devices for Puerto Rico and Latin America…
ERIC Educational Resources Information Center
Smith, Susan; Tedford, Rosalind; Womack, H. David
2001-01-01
Discusses benefits and drawbacks of a team approach to building a library Web site, based on experiences of redesigning the Web site at Wake Forest University's library. Considers the community context at Wake Forest, including laptop computers being issued to students, faculty, and staff; and support needed from library administrators. (LRW)
Removing the center from computing: biology's new mode of digital knowledge production.
November, Joseph
2011-06-01
This article shows how the USA's National Institutes of Health (NIH) helped to bring about a major shift in the way computers are used to produce knowledge and in the design of computers themselves as a consequence of its early 1960s efforts to introduce information technology to biologists. Starting in 1960, the NIH sought to reform the life sciences by encouraging researchers to make use of digital electronic computers, but despite generous federal support, biologists generally did not embrace the new technology. Initially the blame fell on biologists' lack of appropriate (i.e. digital) data for computers to process. However, when the NIH consulted MIT computer architect Wesley Clark about this problem, he argued that the computer's centralized nature posed an even greater challenge to potential biologist users than did its need for digital data. Clark convinced the NIH that if the agency hoped to effectively computerize biology, it would need to satisfy biologists' experimental and institutional needs by providing them the means to use a computer without going to a computing center. With NIH support, Clark developed the 1963 Laboratory Instrument Computer (LINC), a small, real-time interactive computer intended to be used inside the laboratory and controlled entirely by its biologist users. Once built, the LINC provided a viable alternative to the 1960s norm of large computers housed in computing centers. As such, the LINC not only became popular among biologists, but also served in later decades as an important precursor of today's computing norm in the sciences and far beyond, the personal computer.
Boundary formulations for sensitivity analysis without matrix derivatives
NASA Technical Reports Server (NTRS)
Kane, J. H.; Guru Prasad, K.
1993-01-01
A new hybrid approach to continuum structural shape sensitivity analysis employing boundary element analysis (BEA) is presented. The approach uses iterative reanalysis to obviate the need to factor perturbed matrices in the determination of surface displacement and traction sensitivities via a univariate perturbation/finite difference (UPFD) step. The UPFD approach makes it possible to immediately reuse existing subroutines for computation of BEA matrix coefficients in the design sensitivity analysis process. The reanalysis technique computes economical response of univariately perturbed models without factoring perturbed matrices. The approach provides substantial computational economy without the burden of a large-scale reprogramming effort.
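In a univariate perturbation/finite difference (UPFD) step of this general kind, the shape sensitivity of a response quantity u (a surface displacement or traction) with respect to a single design variable x_j can be approximated, roughly, as

    \frac{\partial u}{\partial x_j} \approx \frac{u(x_j + \Delta x_j) - u(x_j)}{\Delta x_j}

where u(x_j + \Delta x_j) is obtained here by iterative reanalysis of the univariately perturbed model rather than by factoring the perturbed BEA matrices. The notation is generic and not taken from the paper; it is meant only to show why existing coefficient-generation subroutines can be reused unchanged.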
Advanced flight computer. Special study
NASA Technical Reports Server (NTRS)
Coo, Dennis
1995-01-01
This report documents a special study to define a 32-bit, radiation-hardened, SEU-tolerant flight computer architecture, and to investigate current or near-term technologies and development efforts that contribute to the Advanced Flight Computer (AFC) design and development. An AFC processing node architecture is defined. Each node may consist of a multi-chip processor as needed. The modular, building-block approach uses VLSI technology and packaging methods that demonstrate a feasible AFC module in 1998 that meets the AFC goals. The defined architecture and approach demonstrate a clear low-risk, low-cost path to the 1998 production goal, with intermediate prototypes in 1996.
A Management System for Computer Performance Evaluation.
1981-12-01
PREFACE: As an installation manager of a Burroughs 3500 I encountered many problems concerning its...techniques to select, and finally, how do I organize the effort. As a manager I felt that I needed a reference or tool that would broaden my CPE
NASA Astrophysics Data System (ADS)
Baveye, Philippe C.; Pot, Valérie; Garnier, Patricia
2017-12-01
In the last decade, X-ray computed tomography (CT) has become widely used to characterize the geometry and topology of the pore space of soils and natural porous media. Regardless of the resolution of CT images, a fundamental problem associated with their use, for example as a starting point in simulation efforts, is that sub-resolution pores are not detected. Over the last few years, a particular type of modeling method, known as "Grey" or "Partial Bounce Back" Lattice-Boltzmann (LB), has been adopted by increasing numbers of researchers to try to account for sub-resolution pores in the modeling of water and solute transport in natural porous media. In this short paper, we assess the extent to which Grey LB methods indeed offer a workable solution to the problem at hand. We conclude that, in spite of significant computational advances, a major experimental hurdle related to the evaluation of the penetrability of sub-resolution pores is blocking the way ahead. This hurdle will need to be cleared before Grey LB can become a credible option in the microscale modeling of soils and sediments. A necessarily interdisciplinary effort, involving both modelers and experimentalists, is needed to clear the path forward.
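One common form of the partial bounce-back rule (a sketch only; notation and the specific closure vary between Grey LB variants) blends the standard post-collision distribution with its bounced-back counterpart according to a local solid fraction n_s \in [0, 1]:

    f_i(\mathbf{x} + \mathbf{c}_i \Delta t,\, t + \Delta t) = f_i^{*}(\mathbf{x}, t) + n_s \left[ f_{\bar{i}}^{*}(\mathbf{x}, t) - f_i^{*}(\mathbf{x}, t) \right]

where f_i^{*} is the post-collision distribution along lattice direction \mathbf{c}_i and \bar{i} denotes the opposite direction; n_s = 0 recovers ordinary LB flow and n_s = 1 recovers full bounce-back (an impermeable solid). Sub-resolution porosity enters through the voxel-wise choice of n_s, which is precisely the penetrability information the authors argue is experimentally hard to constrain.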
Aerothermal modeling program, phase 2. Element B: Flow interaction experiment
NASA Technical Reports Server (NTRS)
Nikjooy, M.; Mongia, H. C.; Murthy, S. N. B.; Sullivan, J. P.
1986-01-01
The goal of this effort was to improve the design process and the efficiency, life, and maintenance costs of the turbine engine hot section. Recently, there has been much emphasis on the need for improved numerical codes for the design of efficient combustors. The development of improved computational codes requires an experimentally obtained data base to be used as test cases for checking the accuracy of the computations. The purpose of Element B is to establish benchmark-quality velocity and scalar measurements of the flow interaction of circular jets with swirling flow typical of that in the dome region of an annular combustor. In addition to the detailed experimental effort, extensive computations of the swirling flows are to be compared with the measurements for the purpose of assessing the accuracy of current and advanced turbulence and scalar transport models.
High Performance, Dependable Multiprocessor
NASA Technical Reports Server (NTRS)
Ramos, Jeremy; Samson, John R.; Troxel, Ian; Subramaniyan, Rajagopal; Jacobs, Adam; Greco, James; Cieslewski, Grzegorz; Curreri, John; Fischer, Michael; Grobelny, Eric;
2006-01-01
With the ever increasing demand for higher bandwidth and processing capacity of today's space exploration, space science, and defense missions, the ability to efficiently apply commercial-off-the-shelf (COTS) processors for on-board computing is now a critical need. In response to this need, NASA's New Millennium Program office has commissioned the development of Dependable Multiprocessor (DM) technology for use in payload and robotic missions. The Dependable Multiprocessor technology is a COTS-based, power efficient, high performance, highly dependable, fault tolerant cluster computer. To date, Honeywell has successfully demonstrated a TRL4 prototype of the Dependable Multiprocessor [1], and is now working on the development of a TRL5 prototype. For the present effort Honeywell has teamed up with the University of Florida's High-performance Computing and Simulation (HCS) Lab, and together the team has demonstrated major elements of the Dependable Multiprocessor TRL5 system.
Middleware for big data processing: test results
NASA Astrophysics Data System (ADS)
Gankevich, I.; Gaiduchok, V.; Korkhov, V.; Degtyarev, A.; Bogdanov, A.
2017-12-01
Dealing with large volumes of data is resource-consuming work which is more and more often delegated not only to a single computer but also to a whole distributed computing system at once. As the number of computers in a distributed system increases, the amount of effort put into effective management of the system grows. When the system reaches some critical size, much effort should be put into improving its fault tolerance. It is difficult to estimate when some particular distributed system needs such facilities for a given workload, so instead they should be implemented in a middleware which works efficiently with a distributed system of any size. It is also difficult to estimate whether a volume of data is large or not, so the middleware should also work with data of any volume. In other words, the purpose of the middleware is to provide facilities that adapt a distributed computing system for a given workload. In this paper we introduce such a middleware appliance. Tests show that this middleware is well-suited for typical HPC and big data workloads and its performance is comparable with well-known alternatives.
Computational structural mechanics for engine structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1989-01-01
The computational structural mechanics (CSM) program at Lewis encompasses: (1) fundamental aspects for formulating and solving structural mechanics problems, and (2) development of integrated software systems to computationally simulate the performance/durability/life of engine structures. It is structured to mainly supplement, complement, and whenever possible replace, costly experimental efforts which are unavoidable during engineering research and development programs. Specific objectives include: (1) investigating the unique advantages of parallel and multiprocessor computing for reformulating and solving structural mechanics problems and for formulating and solving multidisciplinary mechanics problems, and (2) developing integrated structural system computational simulators for predicting structural performance, evaluating newly developed methods, and identifying and prioritizing improved or missing methods. Herein the CSM program is summarized with emphasis on the Engine Structures Computational Simulator (ESCS). Typical results obtained using ESCS are described to illustrate its versatility.
Progress of Stirling cycle analysis and loss mechanism characterization
NASA Technical Reports Server (NTRS)
Tew, R. C., Jr.
1986-01-01
An assessment of Stirling engine thermodynamic modeling and design codes shows a general deficiency; this deficiency is due to poor understanding of the fluid flow and heat transfer phenomena that occur in the oscillating flow and pressure level environment within the engines. Stirling engine thermodynamic loss mechanisms are listed. Several experimental and computational research efforts now underway to characterize various loss mechanisms are reviewed. The need for additional experimental rigs and rig upgrades is discussed. Recent developments and current efforts in Stirling engine thermodynamic modeling are also reviewed.
Computational Fact Checking by Mining Knowledge Graphs
ERIC Educational Resources Information Center
Shiralkar, Prashant
2017-01-01
Misinformation and rumors have become rampant on online social platforms with adverse consequences for the real world. Fact-checking efforts are needed to mitigate the risks associated with the spread of digital misinformation. However, the pace at which information is generated online limits the capacity to fact-check claims at the same rate…
Understanding and enhancing user acceptance of computer technology
NASA Technical Reports Server (NTRS)
Rouse, William B.; Morris, Nancy M.
1986-01-01
Technology-driven efforts to implement computer technology often encounter problems due to lack of acceptance or begrudging acceptance of the personnel involved. It is argued that individuals' acceptance of automation, in terms of either computerization or computer aiding, is heavily influenced by their perceptions of the impact of the automation on their discretion in performing their jobs. It is suggested that desired levels of discretion reflect needs to feel in control and achieve self-satisfaction in task performance, as well as perceptions of inadequacies of computer technology. Discussion of these factors leads to a structured set of considerations for performing front-end analysis, deciding what to automate, and implementing the resulting changes.
Functional structure and dynamics of the human nervous system
NASA Technical Reports Server (NTRS)
Lawrence, J. A.
1981-01-01
The status of an effort to define the directions that need to be taken in extending pilot models is reported. These models are needed to perform closed-loop (man-in-the-loop) feedback flight control system designs and to develop cockpit display requirements. The approach taken is to develop a hypothetical working model of the human nervous system by reviewing the current literature in neurology and psychology and to develop a computer model of this hypothetical working model.
A National Virtual Specimen Database for Early Cancer Detection
NASA Technical Reports Server (NTRS)
Crichton, Daniel; Kincaid, Heather; Kelly, Sean; Thornquist, Mark; Johnsey, Donald; Winget, Marcy
2003-01-01
Access to biospecimens is essential for enabling cancer biomarker discovery. The National Cancer Institute's (NCI) Early Detection Research Network (EDRN) comprises and integrates a large number of laboratories into a network in order to establish a collaborative scientific environment to discover and validate disease markers. The diversity of both the institutions and the collaborative focus has created the need for establishing cross-disciplinary teams focused on integrating expertise in biomedical research, computational and biostatistical methods, and computer science. Given the collaborative design of the network, the EDRN needed an informatics infrastructure. The Fred Hutchinson Cancer Research Center, the National Cancer Institute, and NASA's Jet Propulsion Laboratory (JPL) teamed up to build an informatics infrastructure creating a collaborative, science-driven research environment despite the geographic and morphological differences of the information systems that existed within the diverse network. EDRN investigators identified the need to share biospecimen data captured across the country and managed in disparate databases. As a result, the informatics team initiated an effort to create a virtual tissue database whereby scientists could search and locate details about specimens located at collaborating laboratories. Each database, however, was locally implemented and integrated into collection processes and methods unique to each institution. This meant that efforts to integrate databases needed to be done in a manner that did not require redesign or re-implementation of existing systems.
Seamless Digital Environment – Data Analytics Use Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxstrand, Johanna
Multiple research efforts in the U.S. Department of Energy Light Water Reactor Sustainability (LWRS) Program study the need for, and design of, an underlying architecture to support the increased amount and use of data in the nuclear power plant. More specifically, the three LWRS research efforts -- Digital Architecture for an Automated Plant; Automated Work Packages and Computer-Based Procedures for Field Workers; and Online Monitoring -- have all identified the need for a digital architecture and, more importantly, the need for a Seamless Digital Environment (SDE). An SDE provides a means to access multiple applications, gather the data points needed, conduct the analysis requested, and present the result to the user with minimal or no effort by the user. During the 2016 annual Nuclear Information Technology Strategic Leadership (NITSL) group meeting, the nuclear utilities identified the need for research focused on data analytics. The effort was to develop and evaluate use cases for data mining and analytics that employ information from plant sensors and databases to develop improved business analytics. The goal of the study is to research potential approaches to building an analytics solution for equipment reliability, on a small scale, focusing on either a single piece of equipment or a single system. The analytics solution will likely consist of a data integration layer, a predictive and machine learning layer, and a user interface layer that displays the output of the analysis in a straightforward, easy-to-consume manner. This report describes the use case study initiated by NITSL and conducted in a collaboration between Idaho National Laboratory, Arizona Public Service – Palo Verde Nuclear Generating Station, and NextAxiom Inc.
Automating the parallel processing of fluid and structural dynamics calculations
NASA Technical Reports Server (NTRS)
Arpasi, Dale J.; Cole, Gary L.
1987-01-01
The NASA Lewis Research Center is actively involved in the development of expert system technology to assist users in applying parallel processing to computational fluid and structural dynamic analysis. The goal of this effort is to eliminate the necessity for the physical scientist to become a computer scientist in order to effectively use the computer as a research tool. Programming and operating software utilities have previously been developed to solve systems of ordinary nonlinear differential equations on parallel scalar processors. Current efforts are aimed at extending these capabilities to systems of partial differential equations, that describe the complex behavior of fluids and structures within aerospace propulsion systems. This paper presents some important considerations in the redesign, in particular, the need for algorithms and software utilities that can automatically identify data flow patterns in the application program and partition and allocate calculations to the parallel processors. A library-oriented multiprocessing concept for integrating the hardware and software functions is described.
Galaxy CloudMan: delivering cloud compute clusters.
Afgan, Enis; Baker, Dannon; Coraor, Nate; Chapman, Brad; Nekrutenko, Anton; Taylor, James
2010-12-21
Widespread adoption of high-throughput sequencing has greatly increased the scale and sophistication of computational infrastructure needed to perform genomic research. An alternative to building and maintaining local infrastructure is "cloud computing", which, in principle, offers on demand access to flexible computational infrastructure. However, cloud computing resources are not yet suitable for immediate "as is" use by experimental biologists. We present a cloud resource management system that makes it possible for individual researchers to compose and control an arbitrarily sized compute cluster on Amazon's EC2 cloud infrastructure without any informatics requirements. Within this system, an entire suite of biological tools packaged by the NERC Bio-Linux team (http://nebc.nerc.ac.uk/tools/bio-linux) is available for immediate consumption. The provided solution makes it possible, using only a web browser, to create a completely configured compute cluster ready to perform analysis in less than five minutes. Moreover, we provide an automated method for building custom deployments of cloud resources. This approach promotes reproducibility of results and, if desired, allows individuals and labs to add or customize an otherwise available cloud system to better meet their needs. The expected knowledge and associated effort with deploying a compute cluster in the Amazon EC2 cloud is not trivial. The solution presented in this paper eliminates these barriers, making it possible for researchers to deploy exactly the amount of computing power they need, combined with a wealth of existing analysis software, to handle the ongoing data deluge.
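For readers curious about what such a system automates, the sketch below shows a generic way of programmatically requesting worker nodes on EC2 with the boto3 library; this is not CloudMan's own interface (CloudMan drives the equivalent steps from a web browser), and the AMI ID, instance type, key name, and tags are placeholders. Configured AWS credentials are assumed.

    # Generic sketch of starting a small compute cluster on EC2 with boto3.
    # All identifiers below are placeholders; this is not the CloudMan API.
    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder image carrying the analysis stack
        InstanceType="c5.xlarge",          # placeholder worker node size
        MinCount=4, MaxCount=4,            # request a fixed-size four-node cluster
        KeyName="my-keypair",              # placeholder SSH key pair name
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "cluster-worker"}],
        }],
    )
    print([i.id for i in instances])

The point of the CloudMan work is precisely that an experimental biologist never has to write or run code like this.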
A Computational Chemistry Database for Semiconductor Processing
NASA Technical Reports Server (NTRS)
Jaffe, R.; Meyyappan, M.; Arnold, J. O. (Technical Monitor)
1998-01-01
The concept of a 'virtual reactor' or 'virtual prototyping' has received much attention recently in the semiconductor industry. Commercial codes to simulate thermal CVD and plasma processes have become available to aid in equipment and process design efforts. The virtual prototyping effort would go nowhere if codes did not come with a reliable database of the chemical and physical properties of the gases involved in semiconductor processing. Commercial code vendors have no capability to generate such a database and instead leave the task of finding whatever is needed to the user. While individual investigations of interesting chemical systems continue at universities, there has not been any large-scale effort to create a database. In this presentation, we outline our efforts in this area, which focus on the following five areas: (1) thermal CVD reaction mechanisms and rate constants; (2) thermochemical properties; (3) transport properties; (4) electron-molecule collision cross sections; and (5) gas-surface interactions.
Exploring Midwives' Need and Intention to Adopt Electronic Integrated Antenatal Care.
Markam, Hosizah; Hochheiser, Harry; Kuntoro, Kuntoro; Notobroto, Hari Basuki
2018-01-01
Documentation requirements for the Indonesian integrated antenatal care (ANC) program suggest the need for electronic systems to address gaps in existing paper documentation practices. Our goals were to quantify midwives' documentation completeness in a primary healthcare center, understand documentation challenges, develop a tool, and assess intention to use the tool. We analyzed existing ANC records in a primary healthcare center in Bangkalan, East Java, and conducted interviews with stakeholders to understand needs for an electronic system in support of ANC. Development of the web-based Electronic Integrated ANC (e-iANC) system used the System Development Life Cycle method. Training on the use of the system was held in the computer laboratory for 100 midwives chosen from four primary healthcare centers in each of five regions. The Unified Theory of Acceptance and Use of Technology (UTAUT) questionnaire was used to assess their intention to adopt e-iANC. The midwives' intention to adopt e-iANC was significantly influenced by performance expectancy, effort expectancy and facilitating conditions. Age, education level, and computer literacy did not significantly moderate the effects of performance expectancy and effort expectancy on adoption intention. The UTAUT results indicated that the factors that might influence intention to adopt e-iANC are potentially addressable. Results suggest that e-iANC might well be accepted by midwives.
High-resolution PET [Positron Emission Tomography] for Medical Science Studies
DOE R&D Accomplishments Database
Budinger, T. F.; Derenzo, S. E.; Huesman, R. H.; Jagust, W. J.; Valk, P. E.
1989-09-01
One of the unexpected fruits of basic physics research and the computer revolution is the noninvasive imaging power available to today's physician. Technologies that were strictly the province of research scientists only a decade or two ago now serve as the foundations for such standard diagnostic tools as x-ray computer tomography (CT), magnetic resonance imaging (MRI), magnetic resonance spectroscopy (MRS), ultrasound, single photon emission computed tomography (SPECT), and positron emission tomography (PET). Furthermore, prompted by the needs of both the practicing physician and the clinical researcher, efforts to improve these technologies continue. This booklet endeavors to describe the advantages of achieving high resolution in PET imaging.
Turbine Blade and Endwall Heat Transfer Measured in NASA Glenn's Transonic Turbine Blade Cascade
NASA Technical Reports Server (NTRS)
Giel, Paul W.
2000-01-01
Higher operating temperatures increase the efficiency of aircraft gas turbine engines, but can also degrade internal components. High-pressure turbine blades just downstream of the combustor are particularly susceptible to overheating. Computational fluid dynamics (CFD) computer programs can predict the flow around the blades so that potential hot spots can be identified and appropriate cooling schemes can be designed. Various blade and cooling schemes can be examined computationally before any hardware is built, thus saving time and effort. Often though, the accuracy of these programs has been found to be inadequate for predicting heat transfer. Code and model developers need highly detailed aerodynamic and heat transfer data to validate and improve their analyses. The Transonic Turbine Blade Cascade was built at the NASA Glenn Research Center at Lewis Field to help satisfy the need for this type of data.
CFD validation needs for advanced concepts at Northrop Corporation
NASA Technical Reports Server (NTRS)
George, Michael W.
1987-01-01
Information is given in viewgraph form on the Computational Fluid Dynamics (CFD) Workshop held July 14 - 16, 1987. Topics covered include the philosophy of CFD validation, current validation efforts, the wing-body-tail Euler code, F-20 Euler simulated oil flow, and Euler Navier-Stokes code validation for 2D and 3D nozzle afterbody applications.
Technology in New York's Classrooms: One Key To Improving Educational Outcomes.
ERIC Educational Resources Information Center
Public Policy Inst., Albany, NY.
This report discusses the potential for computers and other educational technologies to aid in school restructuring in New York State. Issues involved in such a restructuring are examined and include: (1) the need for a common effort in creating a state program; (2) current practices and current realities in the classroom; (3) current uses of such…
New directions for Artificial Intelligence (AI) methods in optimum design
NASA Technical Reports Server (NTRS)
Hajela, Prabhat
1989-01-01
Developments and applications of artificial intelligence (AI) methods in the design of structural systems is reviewed. Principal shortcomings in the current approach are emphasized, and the need for some degree of formalism in the development environment for such design tools is underscored. Emphasis is placed on efforts to integrate algorithmic computations in expert systems.
ERIC Educational Resources Information Center
Abildinova, Gulmira M.; Alzhanov, Aitugan K.; Ospanova, Nazira N.; Taybaldieva, Zhymatay; Baigojanova, Dametken S.; Pashovkin, Nikita O.
2016-01-01
Nowadays, when there is a need to introduce various innovations into the educational process, most efforts are aimed at simplifying the learning process. To that end, electronic textbooks, testing systems and other software is being developed. Most of them are intended to run on personal computers with limited mobility. Smart education is…
Program Helps Generate Boundary-Element Mathematical Models
NASA Technical Reports Server (NTRS)
Goldberg, R. K.
1995-01-01
Composite Model Generation-Boundary Element Method (COM-GEN-BEM) computer program significantly reduces the time and effort needed to construct boundary-element mathematical models of continuous-fiber composite materials at the micromechanical (constituent) scale. Generates boundary-element models compatible with the BEST-CMS boundary-element code for analysis of the micromechanics of composite materials. Written in PATRAN Command Language (PCL).
Galaxy CloudMan: delivering cloud compute clusters
2010-01-01
Background Widespread adoption of high-throughput sequencing has greatly increased the scale and sophistication of computational infrastructure needed to perform genomic research. An alternative to building and maintaining local infrastructure is “cloud computing”, which, in principle, offers on demand access to flexible computational infrastructure. However, cloud computing resources are not yet suitable for immediate “as is” use by experimental biologists. Results We present a cloud resource management system that makes it possible for individual researchers to compose and control an arbitrarily sized compute cluster on Amazon’s EC2 cloud infrastructure without any informatics requirements. Within this system, an entire suite of biological tools packaged by the NERC Bio-Linux team (http://nebc.nerc.ac.uk/tools/bio-linux) is available for immediate consumption. The provided solution makes it possible, using only a web browser, to create a completely configured compute cluster ready to perform analysis in less than five minutes. Moreover, we provide an automated method for building custom deployments of cloud resources. This approach promotes reproducibility of results and, if desired, allows individuals and labs to add or customize an otherwise available cloud system to better meet their needs. Conclusions The expected knowledge and associated effort with deploying a compute cluster in the Amazon EC2 cloud is not trivial. The solution presented in this paper eliminates these barriers, making it possible for researchers to deploy exactly the amount of computing power they need, combined with a wealth of existing analysis software, to handle the ongoing data deluge. PMID:21210983
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pittman, Jeffery P.; Cassidy, Stephen R.; Mosey, Whitney LC
2013-07-31
Pacific Northwest National Laboratory (PNNL) and the Pacific Northwest Site Office (PNSO) have recently completed an effort to identify the current state of the campus and gaps that exist with regard to space needs, facilities, and infrastructure. This effort has been used to establish a campus strategy to ensure PNNL is ready to further the United States (U.S.) Department of Energy (DOE) mission. Ten-year business projections and the impacts on space needs were assessed and incorporated into the long-term facility plans. In identifying and quantifying the space needs for PNNL, the following categories were addressed: Multi-purpose Programmatic (wet chemistry and imaging laboratory space), Strategic (Systems Engineering and Computation Analytics, and Collaboration space), Remediation (space to offset the loss of the Research Technology Laboratory [RTL] Complex due to decontamination and demolition), and Optimization (the exit of older and less cost-effective facilities). The findings of the space assessment indicate a need for wet chemistry space, imaging space, and strategic space associated with systems engineering and collaboration.
Cloud computing in medical imaging.
Kagadis, George C; Kloukinas, Christos; Moore, Kevin; Philbin, Jim; Papadimitroulas, Panagiotis; Alexakos, Christos; Nagy, Paul G; Visvikis, Dimitris; Hendee, William R
2013-07-01
Over the past century technology has played a decisive role in defining, driving, and reinventing procedures, devices, and pharmaceuticals in healthcare. Cloud computing has been introduced only recently but is already one of the major topics of discussion in research and clinical settings. The provision of extensive, easily accessible, and reconfigurable resources such as virtual systems, platforms, and applications with low service cost has caught the attention of many researchers and clinicians. Healthcare researchers are moving their efforts to the cloud, because they need adequate resources to process, store, exchange, and use large quantities of medical data. This Vision 20/20 paper addresses major questions related to the applicability of advanced cloud computing in medical imaging. The paper also considers security and ethical issues that accompany cloud computing.
The influence of a game-making project on male and female learners' attitudes to computing
NASA Astrophysics Data System (ADS)
Robertson, Judy
2013-03-01
There is a pressing need for gender inclusive approaches to engage young people in computer science. A recent popular approach has been to harness learners' enthusiasm for computer games to motivate them to learn computer science concepts through game authoring. This article describes a study in which 992 learners across 13 schools took part in a game-making project. It provides evidence from 225 pre-test and post-test questionnaires on how learners' attitudes to computing changed during the project, as well as qualitative reflections from the class teachers on how the project affected their learners. Results indicate that girls did not enjoy the experience as much as boys, and that in fact, the project may make pupils less inclined to study computing in the future. This has important implications for future efforts to engage young people in computing.
Code of Federal Regulations, 2014 CFR
2014-01-01
... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...
Code of Federal Regulations, 2012 CFR
2012-01-01
... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...
Code of Federal Regulations, 2013 CFR
2013-01-01
... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...
Code of Federal Regulations, 2011 CFR
2011-01-01
... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...
Code of Federal Regulations, 2010 CFR
2010-01-01
... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...
Discounting the value of safety: effects of perceived risk and effort.
Sigurdsson, Sigurdur O; Taylor, Matthew A; Wirth, Oliver
2013-09-01
Although falls from heights remain the most prevalent cause of fatalities in the construction industry, factors impacting safety-related choices associated with work at heights are not completely understood. Better tools are needed to identify and study the factors influencing safety-related choices and decision making. Using a computer-based task within a behavioral economics paradigm, college students were presented a choice between two hypothetical scenarios that differed in working height and the effort associated with retrieving and donning a safety harness. Participants were instructed to choose the scenario in which they were more likely to wear the safety harness. Based on choice patterns, switch points were identified, indicating when the perceived risk in both scenarios was equivalent. Switch points were a systematic function of working height and effort, and the quantified relation between perceived risk and effort was described well by a hyperbolic equation. Choice patterns revealed that the perceived risk of working at heights decreased as the effort to retrieve and don a safety harness increased. Results contribute to the development of a computer-based procedure for assessing risk discounting within a behavioral economics framework. Such a procedure can be used as a research tool to study factors that influence safety-related decision making, with a goal of informing more effective prevention and intervention strategies.
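A hyperbolic form commonly used in behavioral-economic discounting studies (shown here generically; the paper's exact parameterization may differ) is

    V = \frac{A}{1 + kE}

where V is the discounted (perceived) value at the switch point, A is the undiscounted value, E is the effort required to retrieve and don the harness, and k is a fitted discounting-rate parameter. A larger k means the perceived risk of working at height, and hence the inclination to use the harness, falls off more steeply as effort grows.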
Numerical Uncertainty Quantification for Radiation Analysis Tools
NASA Technical Reports Server (NTRS)
Anderson, Brooke; Blattnig, Steve; Clowdsley, Martha
2007-01-01
Recently, a new emphasis has been placed on engineering applications of space radiation analyses, and thus a systematic effort of Verification, Validation and Uncertainty Quantification (VV&UQ) of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. There are two sources of uncertainty in geometric discretization addressed in this paper that need to be quantified in order to understand the total uncertainty in estimating space radiation exposures. One source of uncertainty is in ray tracing: as the number of rays increases, the associated uncertainty decreases, but the computational expense increases. Thus, a cost-benefit analysis optimizing computational time versus uncertainty is needed and is addressed in this paper. The second source of uncertainty results from the interpolation over the dose vs. depth curves that is needed to determine the radiation exposure. The question, then, is how many shield thicknesses are needed to get an accurate result, so convergence testing is performed to quantify the uncertainty associated with interpolating over different shield thickness spatial grids.
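A minimal sketch of the two numerical studies described above is given below; the dose-versus-depth samples and the stand-in for ray-traced thicknesses are made up for illustration and are not the actual analysis tools or data.

    # Sketch of (1) interpolating a dose-vs-depth curve sampled at a few shield
    # thicknesses and (2) checking convergence of the estimate with ray count.
    # All numerical values are illustrative only.
    import numpy as np

    # (1) Dose interpolation over a coarse grid of shield thicknesses.
    depths = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0])    # sampled thicknesses (g/cm^2)
    doses = np.array([50.0, 30.0, 22.0, 14.0, 9.0, 6.0])   # dose at each thickness (arbitrary units)
    ray_thicknesses = np.array([0.7, 3.2, 8.5, 15.0])      # thicknesses seen along traced rays
    exposure = np.interp(ray_thicknesses, depths, doses).mean()  # error depends on grid density

    # (2) Convergence with ray count: refine until the estimate changes by < 1%.
    def estimate_exposure(n_rays, rng=np.random.default_rng(0)):
        t = rng.uniform(0.0, 20.0, n_rays)                 # stand-in for ray-traced thicknesses
        return float(np.interp(t, depths, doses).mean())

    prev, n = estimate_exposure(100), 100
    while True:
        n *= 2
        cur = estimate_exposure(n)
        if abs(cur - prev) / prev < 0.01:
            break
        prev = cur
    print(n, cur)

The same pattern, doubling the ray count until the exposure estimate stabilizes, is the kind of cost-benefit trade the abstract refers to: more rays reduce uncertainty but raise computational expense.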
NASA Astrophysics Data System (ADS)
McKee, Shawn;
2017-10-01
Networks have played a critical role in high-energy physics (HEP), enabling us to access and effectively utilize globally distributed resources to meet the needs of our physicists. Because of their importance in enabling our grid computing infrastructure, many physicists have taken leading roles in research and education (R&E) networking, participating in, and even convening, network-related meetings and research programs with the broader networking community worldwide. This has led to HEP benefiting from excellent global networking capabilities for little to no direct cost. However, as other science domains ramp up their need for similar networking, it becomes less clear that this situation will continue unchanged. What this means for ATLAS in particular needs to be understood. ATLAS has evolved its computing model since the LHC started, based upon its experience with using globally distributed resources. The most significant theme of those changes has been increased reliance upon, and use of, its networks. We will report on a number of networking initiatives in ATLAS including participation in the global perfSONAR network monitoring and measuring efforts of WLCG and OSG, the collaboration with the LHCOPN/LHCONE effort, the integration of network awareness into PanDA, the use of the evolving ATLAS analytics framework to better understand our networks, and the changes in our DDM system to allow remote access to data. We will also discuss new efforts underway that are exploring the inclusion and use of software-defined networks (SDN) and how ATLAS might benefit from:
• Orchestration and optimization of distributed data access and data movement.
• Better control of workflows, end to end.
• Enabling prioritization of time-critical vs. normal tasks.
• Improvements in the efficiency of resource usage.
ERIC Educational Resources Information Center
Lynch, Rebecca A.; Steen, M. Dale; Pritchard, Todd J.; Buzzell, Paul R.; Pintauro, Stephen J.
2008-01-01
More than 76 million persons become ill from foodborne pathogens in the United States each year. To reduce these numbers, food safety education efforts need to be targeted at not only adults, but school children as well. The middle school grades are ideal for integrating food safety education into the curriculum while simultaneously contributing…
Matthew J. Gregory; Zhiqiang Yang; David M. Bell; Warren B. Cohen; Sean Healey; Janet L. Ohmann; Heather M. Roberts
2015-01-01
Mapping vegetation and landscape change at fine spatial scales is needed to inform natural resource and conservation planning, but such maps are expensive and time-consuming to produce. For Landsat-based methodologies, mapping efforts are hampered by the daunting task of manipulating multivariate data for millions to billions of pixels. The advent of cloud-based...
Portent of Heine's Reciprocal Square Root Identity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohl, H W
Precise efforts in theoretical astrophysics are needed to fully understand the mechanisms that govern the structure, stability, dynamics, formation, and evolution of differentially rotating stars. Direct computation of the physical attributes of a star can be facilitated by the use of highly compact azimuthal and separation angle Fourier formulations of the Green's functions for the linear partial differential equations of mathematical physics.
A Socio-Technical Approach to Preventing, Mitigating, and Recovering from Ransomware Attacks.
Sittig, Dean F; Singh, Hardeep
2016-01-01
Recently there have been several high-profile ransomware attacks involving hospitals around the world. Ransomware is intended to damage or disable a user's computer unless the user makes a payment. Once the attack has been launched, users have three options: 1) try to restore their data from backup; 2) pay the ransom; or 3) lose their data. In this manuscript, we discuss a socio-technical approach to address ransomware and outline four overarching steps that organizations can undertake to secure an electronic health record (EHR) system and the underlying computing infrastructure. First, health IT professionals need to ensure adequate system protection by correctly installing and configuring computers and networks that connect them. Next, the health care organizations need to ensure more reliable system defense by implementing user-focused strategies, including simulation and training on correct and complete use of computers and network applications. Concomitantly, the organization needs to monitor computer and application use continuously in an effort to detect suspicious activities and identify and address security problems before they cause harm. Finally, organizations need to respond adequately to and recover quickly from ransomware attacks and take actions to prevent them in future. We also elaborate on recommendations from other authoritative sources, including the National Institute of Standards and Technology (NIST). Similar to approaches to address other complex socio-technical health IT challenges, the responsibility of preventing, mitigating, and recovering from these attacks is shared between health IT professionals and end-users.
A Socio-Technical Approach to Preventing, Mitigating, and Recovering from Ransomware Attacks
Singh, Hardeep
2016-01-01
Summary Recently there have been several high-profile ransomware attacks involving hospitals around the world. Ransomware is intended to damage or disable a user’s computer unless the user makes a payment. Once the attack has been launched, users have three options: 1) try to restore their data from backup; 2) pay the ransom; or 3) lose their data. In this manuscript, we discuss a socio-technical approach to address ransomware and outline four overarching steps that organizations can undertake to secure an electronic health record (EHR) system and the underlying computing infrastructure. First, health IT professionals need to ensure adequate system protection by correctly installing and configuring computers and networks that connect them. Next, the health care organizations need to ensure more reliable system defense by implementing user-focused strategies, including simulation and training on correct and complete use of computers and network applications. Concomitantly, the organization needs to monitor computer and application use continuously in an effort to detect suspicious activities and identify and address security problems before they cause harm. Finally, organizations need to respond adequately to and recover quickly from ransomware attacks and take actions to prevent them in future. We also elaborate on recommendations from other authoritative sources, including the National Institute of Standards and Technology (NIST). Similar to approaches to address other complex socio-technical health IT challenges, the responsibility of preventing, mitigating, and recovering from these attacks is shared between health IT professionals and end-users. PMID:27437066
Medical privacy protection based on granular computing.
Wang, Da-Wei; Liau, Churn-Jung; Hsu, Tsan-Sheng
2004-10-01
Based on granular computing methodology, we propose two criteria to quantitatively measure privacy invasion. The total cost criterion measures the effort needed for a data recipient to find private information. The average benefit criterion measures the benefit a data recipient obtains when he received the released data. These two criteria remedy the inadequacy of the deterministic privacy formulation proposed in Proceedings of Asia Pacific Medical Informatics Conference, 2000; Int J Med Inform 2003;71:17-23. Granular computing methodology provides a unified framework for these quantitative measurements and previous bin size and logical approaches. These two new criteria are implemented in a prototype system Cellsecu 2.0. Preliminary system performance evaluation is conducted and reviewed.
Turbomachinery CFD on parallel computers
NASA Technical Reports Server (NTRS)
Blech, Richard A.; Milner, Edward J.; Quealy, Angela; Townsend, Scott E.
1992-01-01
The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. Particularly, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers in this area is summarized. Parallel turbomachinery CFD research at the NASA Lewis Research Center is then highlighted. These efforts are focused on implementing the average-passage turbomachinery model on MIMD, distributed memory parallel computers. Performance results are given for inviscid, single blade row and viscous, multistage applications on several parallel computers, including networked workstations.
Methods for Prediction of High-Speed Reacting Flows in Aerospace Propulsion
NASA Technical Reports Server (NTRS)
Drummond, J. Philip
2014-01-01
Research to develop high-speed airbreathing aerospace propulsion systems was underway in the late 1950s. A major part of the effort involved the supersonic combustion ramjet, or scramjet, engine. Work had also begun to develop computational techniques for solving the equations governing the flow through a scramjet engine. However, scramjet technology and the computational methods to assist in its evolution would remain apart for another decade. The principal barrier was that the computational methods needed for engine evolution lacked the computer technology required for solving the discrete equations resulting from the numerical methods. Even today, computer resources remain a major pacing item in overcoming this barrier. Significant advances have been made over the past 35 years, however, in modeling the supersonic chemically reacting flow in a scramjet combustor. To see how scramjet development and the required computational tools finally merged, we briefly trace the evolution of the technology in both areas.
Exploring Midwives' Need and Intention to Adopt Electronic Integrated Antenatal Care
Markam, Hosizah; Hochheiser, Harry; Kuntoro, Kuntoro; Notobroto, Hari Basuki
2018-01-01
Documentation requirements for the Indonesian integrated antenatal care (ANC) program suggest the need for electronic systems to address gaps in existing paper documentation practices. Our goals were to quantify midwives' documentation completeness in a primary healthcare center, understand documentation challenges, develop a tool, and assess intention to use the tool. We analyzed existing ANC records in a primary healthcare center in Bangkalan, East Java, and conducted interviews with stakeholders to understand needs for an electronic system in support of ANC. Development of the web-based Electronic Integrated ANC (e-iANC) system used the System Development Life Cycle method. Training on the use of the system was held in the computer laboratory for 100 midwives chosen from four primary healthcare centers in each of five regions. The Unified Theory of Acceptance and Use of Technology (UTAUT) questionnaire was used to assess their intention to adopt e-iANC. The midwives' intention to adopt e-iANC was significantly influenced by performance expectancy, effort expectancy and facilitating conditions. Age, education level, and computer literacy did not significantly moderate the effects of performance expectancy and effort expectancy on adoption intention. The UTAUT results indicated that the factors that might influence intention to adopt e-iANC are potentially addressable. Results suggest that e-iANC might well be accepted by midwives. PMID:29618961
Altan, Irem; Charbonneau, Patrick; Snell, Edward H.
2016-01-01
Crystallization is a key step in macromolecular structure determination by crystallography. While a robust theoretical treatment of the process is available, due to the complexity of the system, the experimental process is still largely one of trial and error. In this article, efforts in the field are discussed together with a theoretical underpinning using a solubility phase diagram. Prior knowledge has been used to develop tools that computationally predict the crystallization outcome and define mutational approaches that enhance the likelihood of crystallization. For the most part these tools are based on binary outcomes (crystal or no crystal), and the full information contained in an assembly of crystallization screening experiments is lost. The potential of this additional information is illustrated by examples where new biological knowledge can be obtained and where a target can be sub-categorized to predict which class of reagents provides the crystallization driving force. Computational analysis of crystallization requires complete and correctly formatted data. While massive crystallization screening efforts are under way, the data available from many of these studies are sparse. The potential for this data and the steps needed to realize this potential are discussed. PMID:26792536
Sharing Research Models: Using Software Engineering Practices for Facilitation
Bryant, Stephanie P.; Solano, Eric; Cantor, Susanna; Cooley, Philip C.; Wagener, Diane K.
2011-01-01
Increasingly, researchers are turning to computational models to understand the interplay of important variables on systems’ behaviors. Although researchers may develop models that meet the needs of their investigation, application limitations—such as nonintuitive user interface features and data input specifications—may limit the sharing of these tools with other research groups. By removing these barriers, other research groups that perform related work can leverage these work products to expedite their own investigations. The use of software engineering practices can enable managed application production and shared research artifacts among multiple research groups by promoting consistent models, reducing redundant effort, encouraging rigorous peer review, and facilitating research collaborations that are supported by a common toolset. This report discusses three established software engineering practices— the iterative software development process, object-oriented methodology, and Unified Modeling Language—and the applicability of these practices to computational model development. Our efforts to modify the MIDAS TranStat application to make it more user-friendly are presented as an example of how computational models that are based on research and developed using software engineering practices can benefit a broader audience of researchers. PMID:21687780
Advances in Cross-Cutting Ideas for Computational Climate Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, Esmond; Evans, Katherine J.; Caldwell, Peter
This report presents results from the DOE-sponsored workshop titled "Advancing X-Cutting Ideas for Computational Climate Science Workshop," known as AXICCS, held on September 12-13, 2016 in Rockville, MD. The workshop brought together experts in climate science, computational climate science, computer science, and mathematics to discuss interesting but unsolved science questions regarding climate modeling and simulation, promoted collaboration among the diverse scientists in attendance, and brainstormed about possible tools and capabilities that could be developed to help address them. Several research opportunities that the group felt could advance climate science significantly emerged from discussions at the workshop. These include (1) process-resolving models to provide insight into important processes and features of interest and inform the development of advanced physical parameterizations, (2) a community effort to develop and provide integrated model credibility, (3) including, organizing, and managing increasingly connected model components that increase model fidelity yet also complexity, and (4) treating Earth system models as one interconnected organism without numerical or data-based boundaries that limit interactions. The group also identified several cross-cutting advances in mathematics, computer science, and computational science that would be needed to enable one or more of these big ideas. It is critical to address the need for organized, verified, and optimized software, which enables the models to grow and continue to provide solutions in which the community can have confidence. Effectively utilizing the newest computer hardware enables simulation efficiency and the ability to handle output from increasingly complex and detailed models. This will be accomplished through hierarchical multiscale algorithms in tandem with new strategies for data handling, analysis, and storage. These big ideas and cross-cutting technologies for enabling breakthrough climate simulation advancements also need the "glue" of outreach and learning across the scientific domains to be successful. The workshop identified several strategies to allow productive, continuous engagement across those who have a broad knowledge of the various angles of the problem. Specific ideas to foster education and tools to make material progress were discussed. Examples include follow-on cross-cutting meetings that enable unstructured discussions of the types this workshop fostered. A concerted effort to recruit undergraduate and graduate students from all relevant domains and provide them experience, training, and networking across their immediate expertise is needed. This will broaden and expand their exposure to the future needs and solutions, and provide a pipeline of scientists with a diversity of knowledge and know-how. Providing real-world experience with subject matter experts from multiple angles may also motivate the students to attack these problems and even come up with the missing solutions.
A direct-to-drive neural data acquisition system
Kinney, Justin P.; Bernstein, Jacob G.; Meyer, Andrew J.; Barber, Jessica B.; Bolivar, Marti; Newbold, Bryan; Scholvin, Jorg; Moore-Kochlacs, Caroline; Wentz, Christian T.; Kopell, Nancy J.; Boyden, Edward S.
2015-01-01
Driven by the increasing channel count of neural probes, there is much effort being directed to creating increasingly scalable electrophysiology data acquisition (DAQ) systems. However, all such systems still rely on personal computers for data storage, and thus are limited by the bandwidth and cost of the computers, especially as the scale of recording increases. Here we present a novel architecture in which a digital processor receives data from an analog-to-digital converter, and writes that data directly to hard drives, without the need for a personal computer to serve as an intermediary in the DAQ process. This minimalist architecture may support exceptionally high data throughput, without incurring costs to support unnecessary hardware and overhead associated with personal computers, thus facilitating scaling of electrophysiological recording in the future. PMID:26388740
Computational methods in metabolic engineering for strain design.
Long, Matthew R; Ong, Wai Kit; Reed, Jennifer L
2015-08-01
Metabolic engineering uses genetic approaches to control microbial metabolism to produce desired compounds. Computational tools can identify new biological routes to chemicals and the changes needed in host metabolism to improve chemical production. Recent computational efforts have focused on exploring what compounds can be made biologically using native enzymes, heterologous enzymes, and/or enzymes with broad specificity. Additionally, computational methods have been developed to suggest different types of genetic modifications (e.g. gene deletion/addition or up/down regulation), as well as suggest strategies meeting different criteria (e.g. high yield, high productivity, or substrate co-utilization). Strategies to improve runtime performance have also been developed, which allow for more complex metabolic engineering strategies to be identified. Future incorporation of kinetic considerations will further improve strain design algorithms.
NASA Astrophysics Data System (ADS)
Seamon, E.; Gessler, P. E.; Flathers, E.
2015-12-01
The creation and use of large amounts of data in scientific investigations have become common practice. Data collection and analysis for large scientific computing efforts are not only increasing in volume and number; the methods and analysis procedures are also evolving toward greater complexity (Bell, 2009; Clarke, 2009; Maimon, 2010). In addition, the growth of diverse data-intensive scientific computing efforts (Soni, 2011; Turner, 2014; Wu, 2008) has demonstrated the value of supporting scientific data integration. Efforts to bridge the gap between these perspectives have been attempted, in varying degrees, with modular scientific computing analysis regimes implemented with a modest amount of success (Perez, 2009). This constellation of effects - 1) an increasing growth in the volume and amount of data, 2) a growing data-intensive science base that has challenging needs, and 3) disparate data organization and integration efforts - has created a critical gap. Namely, systems of scientific data organization and management typically do not effectively enable integrated data collaboration or data-intensive science-based communications. Our research efforts attempt to address this gap by developing a modular technology framework for data science integration efforts, with climate variation as the focus. The intention is that this model, if successful, could be generalized to other application areas. Our research aim focused on the design and implementation of a modular, deployable technology architecture for data integration. Developed using aspects of R, interactive Python, SciDB, THREDDS, JavaScript, and varied data mining and machine learning techniques, the Modular Data Response Framework (MDRF) was implemented to explore case scenarios for bio-climatic variation as they relate to Pacific Northwest ecosystem regions. Our preliminary results, using historical NETCDF climate data for calibration purposes across the inland Pacific Northwest region (Abatzoglou and Brown, 2011), show clear ecosystem shifts over a ten-year period (2001-2011), based on multiple supervised classifier methods for bioclimatic indicators.
Geoinformatics 2007: data to knowledge
Brady, Shailaja R.; Sinha, A. Krishna; Gundersen, Linda C.
2007-01-01
Geoinformatics is the term used to describe a variety of efforts to promote collaboration between the computer sciences and the geosciences to solve complex scientific questions. It refers to the distributed, integrated digital information system and working environment that provides innovative means for the study of the Earth systems, as well as other planets, through use of advanced information technologies. Geoinformatics activities range from major research and development efforts creating new technologies to provide high-quality, sustained production-level services for data discovery, integration and analysis, to small, discipline-specific efforts that develop earth science data collections and data analysis tools serving the needs of individual communities. The ultimate vision of Geoinformatics is a highly interconnected data system populated with high quality, freely available data, as well as a robust set of software for analysis, visualization, and modeling.
Analogy Mapping Development for Learning Programming
NASA Astrophysics Data System (ADS)
Sukamto, R. A.; Prabawa, H. W.; Kurniawati, S.
2017-02-01
Programming is an important skill for computer science students; however, many computer science students in Indonesia currently lack programming skills and information technology knowledge. This is at odds with the implementation of the ASEAN Economic Community (AEC) since the end of 2015, which requires qualified workers. This study supported the development of programming skills by mapping program code to visual analogies used as learning media. The developed media was based on state-machine and compiler principles and was implemented for the C programming language. The state of every basic construct in programming was successfully mapped to an analogy visualization.
Recurrence formulas for fully exponentially correlated four-body wave functions
NASA Astrophysics Data System (ADS)
Harris, Frank E.
2009-03-01
Formulas are presented for the recursive generation of four-body integrals in which the integrand consists of arbitrary integer powers (≥-1) of all the interparticle distances rij , multiplied by an exponential containing an arbitrary linear combination of all the rij . These integrals are generalizations of those encountered using Hylleraas basis functions and include all that are needed to make energy computations on the Li atom and other four-body systems with a fully exponentially correlated Slater-type basis of arbitrary quantum numbers. The only quantities needed to start the recursion are the basic four-body integral first evaluated by Fromm and Hill plus some easily evaluated three-body “boundary” integrals. The computational labor in constructing integral sets for practical computations is less than when the integrals are generated using explicit formulas obtained by differentiating the basic integral with respect to its parameters. Computations are facilitated by using a symbolic algebra program (MAPLE) to compute array index pointers and present syntactically correct FORTRAN source code as output; in this way it is possible to obtain error-free high-speed evaluations with minimal effort. The work can be checked by verifying sum rules the integrals must satisfy.
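The workflow described above, using a symbolic algebra program to differentiate a master integral with respect to its parameters and emit compilable source, can be illustrated with a small, hedged sketch. The example below uses Python/SymPy instead of MAPLE, and the toy closed form W(a, b, c) is purely illustrative; it is not the Fromm-Hill integral or the authors' actual code generator.

    # Hedged illustration: generate Fortran for parameter derivatives of a toy
    # "master integral" W(a, b, c); the real Fromm-Hill integral is far more complex.
    import sympy as sp

    a, b, c = sp.symbols('a b c', positive=True)
    W = 1 / (a * b * c * (a + b + c))   # toy closed form, stands in for the basic integral

    # Raising a power of an interparticle distance corresponds (schematically) to
    # differentiating with respect to the matching exponential parameter.
    dW_da = sp.simplify(-sp.diff(W, a))
    dW_dab = sp.simplify(sp.diff(W, a, b))

    for name, expr in [("dW_da", dW_da), ("dW_dab", dW_dab)]:
        # Emit free-form Fortran 95 assignments, analogous to the MAPLE-to-FORTRAN step.
        print(sp.fcode(expr, assign_to=name, standard=95, source_format='free'))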
An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.
Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei
2017-12-01
Big data, cloud computing, and high-performance computing (HPC) are on the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved a satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.
Static Schedulers for Embedded Real-Time Systems
1989-12-01
Because of the need for efficient scheduling algorithms in large-scale real-time systems, software engineers put a lot of effort into developing...provide static schedulers for the Embedded Real Time Systems with a single processor using the Ada programming language. The independent nonpreemptable...support the Computer Aided Rapid Prototyping for Embedded Real Time Systems so that we determine whether the system, as designed, meets the required
ERIC Educational Resources Information Center
Hackett, Judith C., Ed.; McLemore, Lisa Ann, Ed.
The conference reported in this proceedings brought together states' rural development policymakers in an effort to identify the programs and research needed to establish successful state policies. Topics of papers presented at the conference and included in the proceedings covered: (1) telecommunications and computer technology as sources of…
Planning Systems for Distributed Operations
NASA Technical Reports Server (NTRS)
Maxwell, Theresa G.
2002-01-01
This viewgraph representation presents an overview of the mission planning process involving distributed operations (such as the International Space Station (ISS)) and the computer hardware and software systems needed to support such an effort. Topics considered include: evolution of distributed planning systems, ISS distributed planning, the Payload Planning System (PPS), future developments in distributed planning systems, Request Oriented Scheduling Engine (ROSE) and Next Generation distributed planning systems.
Characterization of a Robotic Manipulator for Dynamic Wind Tunnel Applications
2015-03-26
further enhancements would need to be performed individually for each joint. This research effort focused on the improvement of the MTA wrist roll ... Measurement Unit (IMU), was used to validate the Euler angle output calculated by the MTA Computer using forward kinematics. Additionally, fast-response ... [Contents fragments: 3.7 Modeling the Wrist Roll Motor and Controller; 3.8 Proportional Control for Improved Performance]
Bruno Garza, J L; Eijckelhof, B H W; Johnson, P W; Raina, S M; Rynell, P W; Huysmans, M A; van Dieën, J H; van der Beek, A J; Blatter, B M; Dennerlein, J T
2012-01-01
This study, a part of the PRedicting Occupational biomechanics in OFfice workers (PROOF) study, investigated whether there are differences in field-measured forces, muscle efforts, postures, velocities and accelerations across computer activities. These parameters were measured continuously for 120 office workers performing their own work for two hours each. There were differences in nearly all forces, muscle efforts, postures, velocities and accelerations across keyboard, mouse and idle activities. Keyboard activities showed a 50% increase in the median right trapezius muscle effort when compared to mouse activities. Median shoulder rotation changed from 25 degrees internal rotation during keyboard use to 15 degrees external rotation during mouse use. Only keyboard use was associated with median ulnar deviations greater than 5 degrees. Idle activities led to the greatest variability observed in all muscle efforts and postures measured. In future studies, measurements of computer activities could be used to provide information on the physical exposures experienced during computer use. Practitioner Summary: Computer users may develop musculoskeletal disorders due to their force, muscle effort, posture and wrist velocity and acceleration exposures during computer use. We report that many physical exposures are different across computer activities. This information may be used to estimate physical exposures based on patterns of computer activities over time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steich, D J; Brugger, S T; Kallman, J S
2000-02-01
This final report describes our efforts on the Three-Dimensional Massively Parallel CEM Technologies LDRD project (97-ERD-009). Significant need exists for more advanced time domain computational electromagnetics modeling. Bookkeeping details and modifying inflexible software constitute a vast majority of the effort required to address such needs. The required effort escalates rapidly as problem complexity increases, for example, with hybrid meshes requiring hybrid numerics on massively parallel platforms (MPPs). This project attempts to alleviate the above limitations by investigating flexible abstractions for these numerical algorithms on MPPs using object-oriented methods, providing a programming environment that insulates physics from bookkeeping. The three major design iterations during the project, known as TIGER-I to TIGER-III, are discussed. Each version of TIGER is briefly discussed along with lessons learned during the development and implementation. An Application Programming Interface (API) of the object-oriented interface for TIGER-III is included in three appendices. The three appendices contain the Utilities, Entity-Attribute, and Mesh libraries developed during the project. The API libraries represent a snapshot of our latest attempt at insulating the physics from the bookkeeping.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johanna H Oxstrand; Katya L Le Blanc
The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts we are now exploring a less familiar application for computer-based procedures: field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how to best design the computer-based procedures to do this. The underlying philosophy in the research effort is “Stop – Start – Continue”, i.e. what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue was to conduct a baseline study where affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper-based procedure use which will help to identify desirable features for computer-based procedure prototypes. Affordances such as note taking, markups, sharing procedures between fellow coworkers, the use of multiple procedures at once, etc. were considered. The model describes which affordances associated with paper-based procedures should be transferred to computer-based procedures as well as what features should not be incorporated. The model also provides a means to identify what new features not present in paper-based procedures need to be added to the computer-based procedures to further enhance performance. The next step is to use the requirements and specifications to develop concepts and prototypes of computer-based procedures. User tests and other data collection efforts will be conducted to ensure that the real issues with field procedures and their usage are being addressed and solved in the best manner possible. This paper describes the baseline study, the construction of the model of procedure use, and the requirements and specifications for computer-based procedures that were developed based on the model. It also addresses how the model and the insights gained from it were used to develop concepts and prototypes for computer-based procedures.
Computers in mathematics: teacher-inservice training at a distance
NASA Astrophysics Data System (ADS)
Friedman, Edward A.; Jurkat, M. P.
1993-01-01
While research and experience show many advantages for incorporation of computer technology into secondary school mathematics instruction, less than 5 percent of the nation's teachers are actively using computers in their classrooms. This is the case even though mathematics teachers in grades 7 - 12 are often familiar with computer technology and have computers available to them in their schools. The implementation bottleneck is in-service teacher training and there are few models of effective implementation available for teachers to emulate. Stevens Institute of Technology has been active since 1988 in research and development efforts to incorporate computers into classroom use. We have found that teachers need to see examples of classroom experience with hardware and software and they need to have assistance as they experiment with applications of software and the development of lesson plans. High-bandwidth technology can greatly facilitate teacher training in this area through transmission of video documentaries, software discussions, teleconferencing, peer interactions, classroom observations, etc. We discuss the experience that Stevens has had with face-to-face teacher training as well as with satellite-based teleconferencing using one-way video and two-way audio. Included are reviews of analyses of this project by researchers from Educational Testing Service, Princeton University, and Bank Street School of Education.
Hypersonic Shock Wave Computations Using the Generalized Boltzmann Equation
NASA Astrophysics Data System (ADS)
Agarwal, Ramesh; Chen, Rui; Cheremisin, Felix G.
2006-11-01
Hypersonic shock structure in diatomic gases is computed by solving the Generalized Boltzmann Equation (GBE), where the internal and translational degrees of freedom are considered in the framework of quantum and classical mechanics, respectively [1]. The computational framework available for the standard Boltzmann equation [2] is extended by including both the rotational and vibrational degrees of freedom in the GBE. There are two main difficulties encountered in computation of high Mach number flows of diatomic gases with internal degrees of freedom: (1) a large velocity domain is needed for accurate numerical description of the distribution function, resulting in enormous computational effort in calculation of the collision integral, and (2) about 50 energy levels are needed for accurate representation of the rotational spectrum of the gas. Our methodology addresses these problems, and as a result the efficiency of calculations has increased by several orders of magnitude. The code has been validated by computing the shock structure in Nitrogen for Mach numbers up to 25 including the translational and rotational degrees of freedom. [1] Beylich, A., "An Interlaced System for Nitrogen Gas," Proc. of CECAM Workshop, ENS de Lyon, France, 2000. [2] Cheremisin, F., "Solution of the Boltzmann Kinetic Equation for High Speed Flows of a Rarefied Gas," Proc. of the 24th Int. Symp. on Rarefied Gas Dynamics, Bari, Italy, 2004.
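A rough, hedged estimate (not taken from the abstract) shows why the velocity-grid size and the roughly 50 rotational levels dominate the effort. Discretizing the distribution function on an N_v-point velocity grid with L internal-energy levels requires storing, per spatial cell,

    N_v \times L \;\approx\; 30^3 \times 50 \;\approx\; 1.4 \times 10^{6} \ \text{distribution values},

where the N_v = 30^3 grid is an assumed, illustrative size. Evaluating the collision integral then adds, for each stored value, an integration over the partner velocity and a sum over admissible level transitions, multiplying the work by further factors of N_v and L, which is why reduced velocity domains and efficient handling of the rotational spectrum matter so much.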
Issues to be resolved in Torrents—Future Revolutionised File Sharing
NASA Astrophysics Data System (ADS)
Thanekar, Sachin Arun
2010-11-01
Torrenting is a highly popular peer-to-peer file sharing activity that allows participants to send and receive files from other computers. Although it is advantageous compared to traditional client-server file sharing in terms of time, cost, and speed, it also has some drawbacks. Content unavailability, lack of anonymity, leechers, cheaters, and inconsistent download speed are the major problems to sort out. Efforts are needed to resolve these problems and make this a better application. Legal issues are also one of the major factors to consider. BitTorrent metafiles themselves do not store copyrighted data. Whether the publishers of BitTorrent metafiles violate copyrights by linking to copyrighted material is controversial. Various countries have taken legal action against websites that host BitTorrent trackers, e.g., Supernova.org and Torrentspy.
Planned development of a 3D computer based on free-space optical interconnects
NASA Astrophysics Data System (ADS)
Neff, John A.; Guarino, David R.
1994-05-01
Free-space optical interconnection has the potential to provide upwards of a million data channels between planes of electronic circuits. This may result in the planar board and backplane structures of today giving way to 3-D stacks of wafers or multi-chip modules interconnected via channels running perpendicular to the processor planes, thereby eliminating much of the packaging overhead. Three-dimensional packaging is very appealing for tightly coupled fine-grained parallel computing, where the need for massive numbers of interconnections is severely taxing the capabilities of the planar structures. This paper describes a coordinated effort by four research organizations to demonstrate an operational fine-grained parallel computer that achieves global connectivity through the use of free-space optical interconnects.
LaRC local area networks to support distributed computing
NASA Technical Reports Server (NTRS)
Riddle, E. P.
1984-01-01
The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, workstations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there was a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the workload on the central resources increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, workstations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.
Modeling Biodegradation and Reactive Transport: Analytical and Numerical Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Y; Glascoe, L
The computational modeling of the biodegradation of contaminated groundwater systems, accounting for biochemical reactions coupled to contaminant transport, is a valuable tool for both the field engineer/planner with limited computational resources and the expert computational researcher less constrained by time and computer power. There exist several analytical and numerical computer models that have been and are being developed to cover the practical needs put forth by users to fulfill this spectrum of computational demands. Generally, analytical models provide rapid and convenient screening tools running on very limited computational power, while numerical models can provide more detailed information with consequent requirements of greater computational time and effort. While these analytical and numerical computer models can provide accurate and adequate information to produce defensible remediation strategies, decisions based on inadequate modeling output or on over-analysis can have costly and risky consequences. In this chapter we consider both analytical and numerical modeling approaches to biodegradation and reactive transport. Both approaches are discussed and analyzed in terms of achieving bioremediation goals, recognizing that there is always a tradeoff between computational cost and the resolution of simulated systems.
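As a minimal illustration of the rapid analytical screening end of this spectrum, the sketch below evaluates a steady-state plug-flow solution with first-order biodegradation, C(x) = C0 exp(-λx/v). It is a generic textbook expression with made-up parameter values, not one of the specific analytical or numerical codes considered in the chapter.

    # Hedged example: steady-state 1D plug-flow transport with first-order decay,
    # C(x) = C0 * exp(-lam * x / v); dispersion and sorption are ignored.
    import numpy as np

    C0 = 10.0      # source concentration (mg/L), illustrative
    v = 0.1        # groundwater seepage velocity (m/day), illustrative
    lam = 0.01     # first-order biodegradation rate (1/day), illustrative

    x = np.linspace(0.0, 500.0, 11)      # distance downgradient (m)
    C = C0 * np.exp(-lam * x / v)        # analytical solution of v dC/dx = -lam * C

    for xi, ci in zip(x, C):
        print(f"x = {xi:6.1f} m   C = {ci:8.4f} mg/L")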
ISCB Ebola Award for Important Future Research on the Computational Biology of Ebola Virus
Karp, Peter D.; Berger, Bonnie; Kovats, Diane; Lengauer, Thomas; Linial, Michal; Sabeti, Pardis; Hide, Winston; Rost, Burkhard
2015-01-01
Speed is of the essence in combating Ebola; thus, computational approaches should form a significant component of Ebola research. As for the development of any modern drug, computational biology is uniquely positioned to contribute through comparative analysis of the genome sequences of Ebola strains as well as 3-D protein modeling. Other computational approaches to Ebola may include large-scale docking studies of Ebola proteins with human proteins and with small-molecule libraries, computational modeling of the spread of the virus, computational mining of the Ebola literature, and creation of a curated Ebola database. Taken together, such computational efforts could significantly accelerate traditional scientific approaches. In recognition of the need for important and immediate solutions from the field of computational biology against Ebola, the International Society for Computational Biology (ISCB) announces a prize for an important computational advance in fighting the Ebola virus. ISCB will confer the ISCB Fight against Ebola Award, along with a prize of US$2,000, at its July 2016 annual meeting (ISCB Intelligent Systems for Molecular Biology (ISMB) 2016, Orlando, Florida). PMID:26097686
Automated segmentation and dose-volume analysis with DICOMautomaton
NASA Astrophysics Data System (ADS)
Clark, H.; Thomas, S.; Moiseenko, V.; Lee, R.; Gill, B.; Duzenli, C.; Wu, J.
2014-03-01
Purpose: Exploration of historical data for regional organ dose sensitivity is limited by the effort needed to (sub-)segment large numbers of contours. A system has been developed which can rapidly perform autonomous contour sub-segmentation and generic dose-volume computations, substantially reducing the effort required for exploratory analyses. Methods: A contour-centric approach is taken which enables lossless, reversible segmentation and dramatically reduces computation time compared with voxel-centric approaches. Segmentation can be specified on a per-contour, per-organ, or per-patient basis, and can be performed along either an embedded plane or in terms of the contour's bounds (e.g., split organ into fractional-volume/dose pieces along any 3D unit vector). More complex segmentation techniques are available. Anonymized data from 60 head-and-neck cancer patients were used to compare dose-volume computations with Varian's Eclipse™ (Varian Medical Systems, Inc.). Results: Computed mean doses and dose-volume histograms agree strongly with Varian's Eclipse™. Contours which have been segmented can be injected back into patient data permanently and in a Digital Imaging and Communication in Medicine (DICOM)-conforming manner. Lossless segmentation persists across such injection, and remains fully reversible. Conclusions: DICOMautomaton allows researchers to rapidly, accurately, and autonomously segment large amounts of data into intricate structures suitable for analyses of regional organ dose sensitivity.
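The "split into fractional-volume pieces along any 3D unit vector" operation can be pictured with a simplified, point-based sketch: project contour vertices onto the chosen direction and cut at the percentile matching the requested fraction. This approximates the volume fraction by point count and is not DICOMautomaton's lossless, contour-centric algorithm; it is only an illustration.

    # Hedged sketch: partition a cloud of contour points into a "first fraction"
    # and "remainder" along an arbitrary unit vector. Volume fractions are
    # approximated by point count; this is NOT DICOMautomaton's algorithm.
    import numpy as np

    def split_by_fraction(points, direction, fraction):
        """Split Nx3 points at the plane perpendicular to `direction` so that
        roughly `fraction` of the points fall on the low-projection side."""
        d = np.asarray(direction, dtype=float)
        d /= np.linalg.norm(d)                       # ensure unit vector
        proj = points @ d                            # signed distance along direction
        cut = np.percentile(proj, 100.0 * fraction)  # plane position for requested fraction
        return points[proj <= cut], points[proj > cut]

    rng = np.random.default_rng(0)
    pts = rng.normal(size=(1000, 3))                 # stand-in for contour vertices
    inferior_half, superior_half = split_by_fraction(pts, direction=[0, 0, 1], fraction=0.5)
    print(len(inferior_half), len(superior_half))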
Toward a VPH/Physiome ToolKit.
Garny, Alan; Cooper, Jonathan; Hunter, Peter J
2010-01-01
The Physiome Project was officially launched in 1997 and has since brought together teams from around the world to work on the development of a computational framework for the modeling of the human body. At the European level, this effort is focused around patient-specific solutions and is known as the Virtual Physiological Human (VPH) Initiative. Such modeling is both multiscale (in space and time) and multiphysics. This, therefore, requires careful interaction and collaboration between the teams involved in the VPH/Physiome effort, if we are to produce computer models that are not only quantitative, but also integrative and predictive. In that context, several technologies and solutions are already available, developed both by groups involved in the VPH/Physiome effort, and by others. They address areas such as data handling/fusion, markup languages, model repositories, ontologies, tools (for simulation, imaging, data fitting, etc.), as well as grid, middleware, and workflow. Here, we provide an overview of resources that should be considered for inclusion in the VPH/Physiome ToolKit (i.e., the set of tools that addresses the needs and requirements of the Physiome Project and VPH Initiative) and discuss some of the challenges that we are still facing.
The life and death of ATR/sensor fusion and the hope for resurrection
NASA Astrophysics Data System (ADS)
Rogers, Steven K.; Sadowski, Charles; Bauer, Kenneth W.; Oxley, Mark E.; Kabrisky, Matthew; Rogers, Adam; Mott, Stephen D.
2008-04-01
For over half a century, scientists and engineers have worked diligently to advance computational intelligence. One application of interest is how computational intelligence can bring value to our war fighters. Automatic Target Recognition (ATR) and sensor fusion efforts have fallen far short of the desired capabilities. In this article we review the capabilities requested by war fighters. When compared to our current capabilities, it is easy to conclude that current Combat Identification (CID) as a Family of Systems (FoS) does a lousy job. The war fighter needed capable, operationalized ATR and sensor fusion systems ten years ago, but that did not happen. The article reviews the war fighter needs and the current state of the art. The article then concludes by looking forward to where we are headed to provide the capabilities required.
Towards data warehousing and mining of protein unfolding simulation data.
Berrar, Daniel; Stahl, Frederic; Silva, Candida; Rodrigues, J Rui; Brito, Rui M M; Dubitzky, Werner
2005-10-01
The prediction of protein structure and the precise understanding of protein folding and unfolding processes remain one of the greatest challenges in structural biology and bioinformatics. Computer simulations based on molecular dynamics (MD) are at the forefront of the effort to gain a deeper understanding of these complex processes. Currently, these MD simulations are usually on the order of tens of nanoseconds, generate a large amount of conformational data and are computationally expensive. More and more groups run such simulations and generate a myriad of data, which raises new challenges in managing and analyzing these data. Because of the vast range of proteins researchers want to study and simulate, the computational effort needed to generate data, the large data volumes involved, and the different types of analyses scientists need to perform, it is desirable to provide a public repository allowing researchers to pool and share protein unfolding data. To adequately organize, manage, and analyze the data generated by unfolding simulation studies, we designed a data warehouse system that is embedded in a grid environment to facilitate the seamless sharing of available computer resources and thus enable many groups to share complex molecular dynamics simulations on a more regular basis. To gain insight into the conformational fluctuations and stability of the monomeric forms of the amyloidogenic protein transthyretin (TTR), molecular dynamics unfolding simulations of the monomer of human TTR have been conducted. Trajectory data and meta-data of the wild-type (WT) protein and the highly amyloidogenic variant L55P-TTR represent the test case for the data warehouse. Web and grid services, especially pre-defined data mining services that can run on or 'near' the data repository of the data warehouse, are likely to play a pivotal role in the analysis of molecular dynamics unfolding data.
Historical data recording for process computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hale, J.C.; Sellars, H.L.
1981-11-01
Computers have been used to monitor and control chemical and refining processes for more than 15 years. During this time, there has been a steady growth in the variety and sophistication of the functions performed by these process computers. Early systems were limited to maintaining only current operating measurements, available through crude operator's consoles or noisy teletypes. The value of retaining a process history, that is, a collection of measurements over time, became apparent, and early efforts produced shift and daily summary reports. The need for improved process historians which record, retrieve and display process information has grown as process computers assume larger responsibilities in plant operations. This paper describes newly developed process historian functions that have been used on several of its in-house process monitoring and control systems in Du Pont factories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Connor, Carolyn Marie; Jacobson, Andree Lars; Bonnie, Amanda Marie
Sustainable and effective computing infrastructure depends critically on the skills and expertise of domain scientists and of committed and well-trained advanced computing professionals. But, in its ongoing High Performance Computing (HPC) work, Los Alamos National Laboratory noted a persistent shortage of well-prepared applicants, particularly for entry-level cluster administration, file systems administration, and high speed networking positions. Further, based upon recruiting efforts and interactions with universities graduating students in related majors of interest (e.g., computer science (CS)), there has been a long standing skillset gap, as focused training in HPC topics is typically lacking or absent in undergraduate and in even many graduate programs. Given that the effective operation and use of HPC systems requires specialized and often advanced training, that there is a recognized HPC skillset gap, and that there is intense global competition for computing and computational science talent, there is a long-standing and critical need for innovative approaches to help bridge the gap and create a well-prepared, next generation HPC workforce. Our paper places this need in the context of the HPC work and workforce requirements at Los Alamos National Laboratory (LANL) and presents one such innovative program conceived to address the need, bridge the gap, and grow an HPC workforce pipeline at LANL. The Computer System, Cluster, and Networking Summer Institute (CSCNSI) completed its 10th year in 2016. The story of the CSCNSI and its evolution is detailed below with a description of the design of its Boot Camp, and a summary of its success and some key factors that have enabled that success.
Teaching Computational Thinking: Deciding to Take Small Steps in a Curriculum
NASA Astrophysics Data System (ADS)
Madoff, R. D.; Putkonen, J.
2016-12-01
While computational thinking and reasoning are not necessarily the same as computer programming, programs such as MATLAB can provide the medium through which the logical and computational thinking at the foundation of science can be taught, learned, and experienced. And while math and computer anxiety are often discussed as critical obstacles to students' progress in their geoscience curriculum, it is here suggested that unfamiliarity with computational and logical reasoning is what poses a first stumbling block, in addition to the hurdle of expending the effort to learn how to translate a computational problem into the appropriate computer syntax in order to achieve the intended results. Because computational thinking is so vital for all fields, there is a need to introduce many students to it and to build support for it in the curriculum. This presentation focuses on elements to bring into the teaching of computational thinking that are intended as additions to learning MATLAB programming as a basic tool. Such elements include: highlighting a key concept, discussing a basic geoscience problem where the concept would show up, having the student draw or outline a sketch of what they think an operation needs to do in order to produce a desired result, and then finding the relevant syntax to work with. This iterative pedagogy simulates what someone with more experience in programming does, so it discloses the thinking process behind the black box of a result. Intended as only a very early stage introduction, advanced applications would need to be developed as students go through an academic program. The objective would be to expose and introduce computational thinking to majors and non-majors and to alleviate some of the math and computer anxiety so that students would choose to advance with programming or modeling, whether it is built into a 4-year curriculum or not.
Subjective analysis of energy-management projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, R.
The most successful energy conservation projects always reflect human effort to fine-tune engineering and technological improvements. Subjective analysis is a technique for predicting and measuring human interaction before a project begins. The examples of a subjective analysis for office buildings incorporate evaluative questions that are structured to produce numeric values for computer scoring. Each project would need to develop its own pertinent questions and determine appropriate values for the answers.
A Comparative Analysis of Computer End-User Support in the Air Force and Civilian Organizations
1991-12-01
This explanation implies a further stratification of end users based on the specific tasks they perform, a new model of application combinations, and a ... its support efforts to meet the needs of its end-user clientele more closely. [Figure 14: Test Model of Applications (INTEGRATED, VERBAL, ANALYTIC); contents fragments: The IC Model: IEM, Canada; Proliferation of ICs; Services; IC States]
Onboard processor technology review
NASA Technical Reports Server (NTRS)
Benz, Harry F.
1990-01-01
The general need and requirements for the onboard embedded processors necessary to control and manipulate data in spacecraft systems are discussed. The current known requirements are reviewed from a user perspective, based on current practices in the spacecraft development process. The current capabilities of available processor technologies are then discussed, and these are projected to the generation of spacecraft computers currently under identified, funded development. An appraisal is provided for the current national developmental effort.
Gaudi Evolution for Future Challenges
NASA Astrophysics Data System (ADS)
Clemencic, M.; Hegner, B.; Leggett, C.
2017-10-01
The LHCb Software Framework Gaudi was initially designed and developed almost twenty years ago, when computing was very different from today. It has also been used by a variety of other experiments, including ATLAS, Daya Bay, GLAST, HARP, LZ, and MINERVA. Although it has always been actively developed over these years, stability and backward compatibility have been favoured, reducing the possibilities of adopting new techniques, like multithreaded processing. R&D efforts like GaudiHive have, however, shown its potential to cope with the new challenges. In view of the approaching LHC second Long Shutdown and to prepare for the computing challenges of the Upgrade of the collider and the detectors, now is a perfect moment to review the design of Gaudi and plan future developments of the project. To do this, LHCb, ATLAS and the Future Circular Collider community joined efforts to bring Gaudi forward and prepare it for the upcoming needs of the experiments. We present here how Gaudi will evolve in the coming years and the long term development plans.
Lu, Qiongshi; Hu, Yiming; Sun, Jiehuan; Cheng, Yuwei; Cheung, Kei-Hoi; Zhao, Hongyu
2015-05-27
Identifying functional regions in the human genome is a major goal in human genetics. Great efforts have been made to functionally annotate the human genome either through computational predictions, such as genomic conservation, or high-throughput experiments, such as the ENCODE project. These efforts have resulted in a rich collection of functional annotation data of diverse types that need to be jointly analyzed for integrated interpretation and annotation. Here we present GenoCanyon, a whole-genome annotation method that performs unsupervised statistical learning using 22 computational and experimental annotations thereby inferring the functional potential of each position in the human genome. With GenoCanyon, we are able to predict many of the known functional regions. The ability of predicting functional regions as well as its generalizable statistical framework makes GenoCanyon a unique and powerful tool for whole-genome annotation. The GenoCanyon web server is available at http://genocanyon.med.yale.edu.
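The general flavor of unsupervised scoring from a matrix of functional annotations can be sketched with a two-component mixture model, reading the posterior probability of one component as a functional-potential score. This is only an analogy on simulated data; GenoCanyon's actual statistical model, its 22 annotations, and its fitting procedure differ.

    # Hedged analogy: score positions with the posterior of a 2-component
    # Gaussian mixture fit to annotation vectors (unsupervised); simulated data.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    n_sites, n_annot = 5000, 22
    background = rng.normal(0.0, 1.0, size=(n_sites, n_annot))
    functional = rng.normal(1.5, 1.0, size=(n_sites // 10, n_annot))
    X = np.vstack([background, functional])      # rows = positions, cols = annotations

    gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
    post = gmm.predict_proba(X)
    # Treat the component with the higher mean annotation signal as "functional".
    functional_comp = int(np.argmax(gmm.means_.mean(axis=1)))
    score = post[:, functional_comp]             # per-position functional potential
    print("mean score, background vs enriched:", score[:n_sites].mean(), score[n_sites:].mean())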
3DHZETRN: Inhomogeneous Geometry Issues
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.
2017-01-01
Historical methods for assessing radiation exposure inside complicated geometries for space applications were limited by computational constraints and lack of knowledge associated with nuclear processes occurring over a broad range of particles and energies. Various methods were developed and utilized to simplify geometric representations and enable coupling with simplified but efficient particle transport codes. Recent transport code development efforts, leading to 3DHZETRN, now enable such approximate methods to be carefully assessed to determine if past exposure analyses and validation efforts based on those approximate methods need to be revisited. In this work, historical methods of representing inhomogeneous spacecraft geometry for radiation protection analysis are first reviewed. Two inhomogeneous geometry cases, previously studied with 3DHZETRN and Monte Carlo codes, are considered with various levels of geometric approximation. Fluence, dose, and dose equivalent values are computed in all cases and compared. It is found that although these historical geometry approximations can induce large errors in neutron fluences up to 100 MeV, errors on dose and dose equivalent are modest (<10%) for the cases studied here.
Automated Performance Prediction of Message-Passing Parallel Programs
NASA Technical Reports Server (NTRS)
Block, Robert J.; Sarukkai, Sekhar; Mehra, Pankaj; Woodrow, Thomas S. (Technical Monitor)
1995-01-01
The increasing use of massively parallel supercomputers to solve large-scale scientific problems has generated a need for tools that can predict scalability trends of applications written for these machines. Much work has been done to create simple models that represent important characteristics of parallel programs, such as latency, network contention, and communication volume. But many of these methods still require substantial manual effort to represent an application in the model's format. The MK toolkit described in this paper is the result of an on-going effort to automate the formation of analytic expressions of program execution time, with a minimum of programmer assistance. In this paper we demonstrate the feasibility of our approach, by extending previous work to detect and model communication patterns automatically, with and without overlapped computations. The predictions derived from these models agree, within reasonable limits, with execution times of programs measured on the Intel iPSC/860 and Paragon. Further, we demonstrate the use of the toolkit in selecting optimal computational grain size and studying various scalability metrics.
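For context, analytic execution-time expressions of the kind such toolkits produce often take a latency-bandwidth form. The expression below is a generic illustration under assumed symbols (per-process computation time, message count M(p), latency alpha, per-byte cost beta, and an overlap credit); it is not necessarily the exact model the toolkit generates.

```latex
% Generic latency-bandwidth execution-time model (illustrative form only):
T(p) \approx T_{\mathrm{comp}}(p)
      + \sum_{i=1}^{M(p)} \bigl( \alpha + \beta\, m_i \bigr)
      - T_{\mathrm{overlap}}(p)
```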
Quasiparticle band structure of rocksalt-CdO determined using maximally localized Wannier functions.
Dixit, H; Lamoen, D; Partoens, B
2013-01-23
CdO in the rocksalt structure is an indirect band gap semiconductor. Thus, in order to determine its band gap one needs to calculate the complete band structure. However, in practice, the exact evaluation of the quasiparticle band structure for the large number of k-points which constitute the different symmetry lines in the Brillouin zone can be an extremely demanding task compared to the standard density functional theory (DFT) calculation. In this paper we report the full quasiparticle band structure of CdO using a plane-wave pseudopotential approach. In order to reduce the computational effort and time, we make use of maximally localized Wannier functions (MLWFs). The MLWFs offer a highly accurate method for interpolation of the DFT or GW band structure from a coarse k-point mesh in the irreducible Brillouin zone, resulting in a much reduced computational effort. The present paper discusses the technical details of the scheme along with the results obtained for the quasiparticle band gap and the electron effective mass.
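For readers unfamiliar with the technique, the standard Wannier-interpolation recipe can be summarized schematically as follows (generic notation; the paper's implementation details may differ): real-space matrix elements are built from the coarse-mesh quasiparticle Hamiltonian, Fourier-interpolated to arbitrary k-points along the symmetry lines, and then diagonalized.

```latex
% Schematic Wannier interpolation of a coarse-mesh (DFT or GW) Hamiltonian:
H_{mn}(\mathbf{R}) = \frac{1}{N_k}\sum_{\mathbf{k}}
   e^{-i\mathbf{k}\cdot\mathbf{R}}
   \bigl[ U^{\dagger}(\mathbf{k})\, H(\mathbf{k})\, U(\mathbf{k}) \bigr]_{mn},
\qquad
H^{W}_{mn}(\mathbf{k}') = \sum_{\mathbf{R}}
   e^{i\mathbf{k}'\cdot\mathbf{R}}\, H_{mn}(\mathbf{R})
```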
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Jong, Wibe A.; Walker, Andrew M.; Hanwell, Marcus D.
Background: Multidisciplinary integrated research requires the ability to couple the diverse sets of data obtained from a range of complex experiments and computer simulations. Integrating data requires semantically rich information. In this paper the generation of semantically rich data from the NWChem computational chemistry software is discussed within the Chemical Markup Language (CML) framework. Results: The NWChem computational chemistry software has been modified and coupled to the FoX library to write CML compliant XML data files. The FoX library was expanded to represent the lexical input files used by the computational chemistry software. Conclusions: The production of CML compliant XML files for the computational chemistry software NWChem can be relatively easily accomplished using the FoX library. A unified computational chemistry or CompChem convention and dictionary needs to be developed through a community-based effort. The long-term goal is to enable a researcher to do Google-style chemistry and physics searches.
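To make the idea concrete, a minimal sketch of what CML-style output for a computed property might look like is shown below, generated with Python's standard XML tooling. The element names and dictionary references are illustrative assumptions, not the exact CompChem convention the authors propose or the output emitted by FoX.

```python
# Illustrative sketch of emitting a CML-like XML fragment for a computed
# property; dictRef and unit names are placeholders, not a published convention.
import xml.etree.ElementTree as ET

module = ET.Element("module", attrib={"dictRef": "compchem:calculation"})
plist = ET.SubElement(module, "propertyList")
prop = ET.SubElement(plist, "property", attrib={"dictRef": "cc:totalEnergy"})
scalar = ET.SubElement(prop, "scalar", attrib={"units": "units:hartree"})
scalar.text = "-76.026"

print(ET.tostring(module, encoding="unicode"))
```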
Computational crystallization.
Altan, Irem; Charbonneau, Patrick; Snell, Edward H
2016-07-15
Crystallization is a key step in macromolecular structure determination by crystallography. While a robust theoretical treatment of the process is available, due to the complexity of the system, the experimental process is still largely one of trial and error. In this article, efforts in the field are discussed together with a theoretical underpinning using a solubility phase diagram. Prior knowledge has been used to develop tools that computationally predict the crystallization outcome and define mutational approaches that enhance the likelihood of crystallization. For the most part these tools are based on binary outcomes (crystal or no crystal), and the full information contained in an assembly of crystallization screening experiments is lost. The potential of this additional information is illustrated by examples where new biological knowledge can be obtained and where a target can be sub-categorized to predict which class of reagents provides the crystallization driving force. Computational analysis of crystallization requires complete and correctly formatted data. While massive crystallization screening efforts are under way, the data available from many of these studies are sparse. The potential for this data and the steps needed to realize this potential are discussed.
Combining Thermal And Structural Analyses
NASA Technical Reports Server (NTRS)
Winegar, Steven R.
1990-01-01
Computer code makes programs compatible so that stresses and deformations can be calculated. Paper describes a computer code combining thermal analysis with structural analysis. Called SNIP (for SINDA-NASTRAN Interfacing Program), the code provides an interface between a finite-difference thermal model of a system and a finite-element structural model when there is no node-to-element correlation between the models. It eliminates much manual work in converting temperature results of the SINDA (Systems Improved Numerical Differencing Analyzer) program into thermal loads for the NASTRAN (NASA Structural Analysis) program. Used to analyze concentrating reflectors for solar generation of electric power. Large thermal and structural models are needed to predict distortion of surface shapes, and SNIP saves considerable time and effort in combining the models.
Introduction to Computational Methods for Stability and Control (COMSAC)
NASA Technical Reports Server (NTRS)
Hall, Robert M.; Fremaux, C. Michael; Chambers, Joseph R.
2004-01-01
This Symposium is intended to bring together the often distinct cultures of the Stability and Control (S&C) community and the Computational Fluid Dynamics (CFD) community. The COMSAC program is itself a new effort by NASA Langley to accelerate the application of high-end CFD methodologies to the demanding job of predicting stability and control characteristics of aircraft. This talk is intended to set the stage by motivating the need for a program like COMSAC; it is not intended to give details of the program itself. The topics include: 1) S&C challenges; 2) Aero prediction methodology; 3) CFD applications; 4) NASA COMSAC planning; 5) Objectives of the symposium; and 6) Closing remarks.
NASA Astrophysics Data System (ADS)
Kruger, Scott; Shasharina, S.; Vadlamani, S.; McCune, D.; Holland, C.; Jenkins, T. G.; Candy, J.; Cary, J. R.; Hakim, A.; Miah, M.; Pletzer, A.
2010-11-01
As various efforts to integrate fusion codes proceed worldwide, standards for sharing data have emerged. In the U.S., the SWIM project has pioneered the development of the Plasma State, which has a flat hierarchy and is dominated by its use within 1.5D transport codes. The European Integrated Tokamak Modeling effort has developed a more ambitious data interoperability effort organized around the concept of Consistent Physical Objects (CPOs). CPOs have deep hierarchies, as needed by an effort that seeks to encompass all of fusion computing. Here, we discuss ideas for implementing data interoperability that is complementary to both the Plasma State and CPOs. By making use of attributes within the netCDF and HDF5 binary file formats, the goals of data interoperability can be achieved with a more informal approach. In addition, a file can be simultaneously interoperable with several standards at once. As an illustration of this approach, we discuss its application to the development of synthetic diagnostics that can be used for multiple codes.
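A minimal sketch of the attribute-based approach, assuming HDF5 via h5py and made-up attribute names (the actual conventions would be negotiated among the participating codes):

```python
# Sketch of attribute-based interoperability with HDF5; attribute names and
# the dual-convention tag below are illustrative assumptions.
import h5py
import numpy as np

with h5py.File("profile.h5", "w") as f:
    ne = f.create_dataset("electron_density", data=np.linspace(1e19, 5e19, 64))
    ne.attrs["units"] = "m^-3"
    ne.attrs["coordinate"] = "normalized_toroidal_flux"
    # Tag the file as conforming to more than one standard at once
    f.attrs["conventions"] = "PlasmaState-like, CPO-like"
```

Because the attributes travel with the datasets, a downstream synthetic-diagnostic tool can discover units and coordinates without relying on a single rigid global hierarchy.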
Robotic tape library system level testing at NSA: Present and planned
NASA Technical Reports Server (NTRS)
Shields, Michael F.
1994-01-01
In the presence of declining Defense budgets, increased pressure has been placed on the DOD to utilize Commercial Off the Shelf (COTS) solutions to incrementally solve a wide variety of our computer processing requirements. With the rapid growth in processing power, significant expansion of high performance networking, and the increased complexity of application data sets, the requirement for high performance, large capacity, reliable and secure, and most of all affordable robotic tape storage libraries has greatly increased. Additionally, the migration to a heterogeneous, distributed computing environment has further complicated the problem. With today's open system compute servers approaching yesterday's supercomputer capabilities, the need for affordable, reliable, secure Mass Storage Systems (MSS) has taken on an ever increasing importance to our processing center's ability to satisfy operational mission requirements. To that end, NSA has established an in-house capability to acquire, test, and evaluate COTS products. Its goal is to qualify a set of COTS MSS libraries, thereby achieving a modicum of standardization for robotic tape libraries which can satisfy our low, medium, and high performance file and volume serving requirements. In addition, NSA has established relations with other Government Agencies to complement this in-house effort and to maximize our research, testing, and evaluation work. While the preponderance of the effort is focused at the high end of the storage ladder, considerable effort will be expended this year and next on server-class or mid-range storage systems.
Holt, Thomas J; Bossler, Adam M
2012-09-01
Cybercrime has created substantial challenges for law enforcement, particularly at the local level. Most scholars and police administrators believe that patrol officers need to become more effective first responders to cybercrime calls. The evidence illustrates, however, that many patrol officers are neither adequately prepared nor strongly interested in taking an active role in addressing cybercrime at the local level. This study, therefore, examined the factors that predicted patrol officer interest in cybercrime training and investigations in two southeastern U.S. cities. The study specifically examined the relationship between demographics, cybercrime exposure, computer training, computer proficiency, Internet and cybercrime perceptions, and views on policing cybercrime with officer interest in cybercrime investigation training and conducting cybercrime investigations in the future. Officer views on policing cybercrime, particularly whether they valued cybercrime investigations and believed that cybercrime would dramatically change policing, along with their computer skills, were the strongest predictors of interest in cybercrime efforts. Officers who had received previous computer training were less interested in additional training and conducting investigations. These findings support the argument that more command and departmental meetings focusing on the value of investigating these types of crime need to be held in order to increase officer interest.
NASA Astrophysics Data System (ADS)
Marzari, Nicola
The last 30 years have seen the steady and exhilarating development of powerful quantum-simulation engines for extended systems, dedicated to the solution of the Kohn-Sham equations of density-functional theory, often augmented by density-functional perturbation theory, many-body perturbation theory, time-dependent density-functional theory, dynamical mean-field theory, and quantum Monte Carlo. Their implementation on massively parallel architectures, now leveraging also GPUs and accelerators, has started a massive effort in the prediction from first principles of many or of complex materials properties, leading the way to the exascale through the combination of HPC (high-performance computing) and HTC (high-throughput computing). Challenges and opportunities abound: complementing hardware and software investments and design; developing the materials' informatics infrastructure needed to encode knowledge into complex protocols and workflows of calculations; managing and curating data; resisting the complacency of believing that we have already reached the predictive accuracy needed for materials design, or a robust level of verification of the different quantum engines. In this talk I will provide an overview of these challenges, with the ultimate prize being the computational understanding, prediction, and design of properties and performance for novel or complex materials and devices.
NASA Astrophysics Data System (ADS)
Hayley, Kevin; Schumacher, J.; MacMillan, G. J.; Boutin, L. C.
2014-05-01
Expanding groundwater datasets collected by automated sensors, and improved groundwater databases, have caused a rapid increase in calibration data available for groundwater modeling projects. Improved methods of subsurface characterization have increased the need for model complexity to represent geological and hydrogeological interpretations. The larger calibration datasets and the need for meaningful predictive uncertainty analysis have both increased the degree of parameterization necessary during model calibration. Due to these competing demands, modern groundwater modeling efforts require a massive degree of parallelization in order to remain computationally tractable. A methodology for the calibration of highly parameterized, computationally expensive models using the Amazon EC2 cloud computing service is presented. The calibration of a regional-scale model of groundwater flow in Alberta, Canada, is provided as an example. The model covers a 30,865-km2 domain and includes 28 hydrostratigraphic units. Aquifer properties were calibrated to more than 1,500 static hydraulic head measurements and 10 years of measurements during industrial groundwater use. Three regionally extensive aquifers were parameterized (with spatially variable hydraulic conductivity fields), as was the aerial recharge boundary condition, leading to 450 adjustable parameters in total. The PEST-based model calibration was parallelized on up to 250 computing nodes located on Amazon's EC2 servers.
Cost analysis for computer supported multiple-choice paper examinations
Mandel, Alexander; Hörnlein, Alexander; Ifland, Marianus; Lüneburg, Edeltraud; Deckert, Jürgen; Puppe, Frank
2011-01-01
Introduction: Multiple-choice examinations are still fundamental for assessment in medical degree programs. In addition to content-related research, the optimization of the technical procedure is an important question. Medical examiners face three options: paper-based examinations with or without computer support, or completely electronic examinations. Critical aspects are the effort for formatting, the logistic effort during the actual examination, the quality, promptness, and effort of the correction, the time for making the documents available for inspection by the students, and the statistical analysis of the examination results. Methods: For the past three semesters, a computer program for the input and formatting of MC questions in medical and other paper-based examinations has been used and continuously improved at Wuerzburg University. In the winter semester (WS) 2009/10 eleven, in the summer semester (SS) 2010 twelve, and in WS 2010/11 thirteen medical examinations were conducted with the program and automatically evaluated. For the last two semesters the remaining manual workload was recorded. Results: The effort for formatting and subsequent analysis (including adjustments of the analysis) of an average examination with about 140 participants and about 35 questions was 5-7 hours for exams without complications in WS 2009/10, about 2 hours in SS 2010, and about 1.5 hours in WS 2010/11. Including exams with complications, the average time was about 3 hours per exam in SS 2010 and 2.67 hours in WS 2010/11. Discussion: For conventional multiple-choice exams, the computer-based formatting and evaluation of paper-based exams offers a significant time reduction for lecturers in comparison with the manual correction of paper-based exams; compared to purely electronically conducted exams it needs a much simpler technological infrastructure and fewer staff during the exam. PMID:22205913
Packaging strategies for printed circuit board components. Volume I, materials & thermal stresses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neilsen, Michael K.; Austin, Kevin N.; Adolf, Douglas Brian
2011-09-01
Decisions on material selections for electronics packaging can be quite complicated by the need to balance the criteria to withstand severe impacts yet survive deep thermal cycles intact. Many times, material choices are based on historical precedence, perhaps ignorant of whether those initial choices were carefully investigated or whether the requirements on the new component match those of previous units. The goal of this program focuses on developing both increased intuition for generic packaging guidelines and computational methodologies for optimizing packaging in specific components. Initial efforts centered on characterization of classes of materials common to packaging strategies and computational analyses of stresses generated during thermal cycling to identify strengths and weaknesses of various material choices. Future studies will analyze the same example problems incorporating the effects of curing stresses as needed and analyzing dynamic loadings to compare trends with the quasi-static conclusions.
NASA Technical Reports Server (NTRS)
Morgan, R. P.; Singh, J. P.; Rothenberg, D.; Robinson, B. E.
1975-01-01
The needs to be served, the subsectors in which the system might be used, the technology employed, and the prospects for future utilization of an educational telecommunications delivery system are described and analyzed. Educational subsectors are analyzed with emphasis on the current status and trends within each subsector. Issues which affect future development, and prospects for future use of media, technology, and large-scale electronic delivery within each subsector are included. Information on technology utilization is presented. Educational telecommunications services are identified and grouped into categories: public television and radio, instructional television, computer aided instruction, computer resource sharing, and information resource sharing. Technology based services, their current utilization, and factors which affect future development are stressed. The role of communications satellites in providing these services is discussed. Efforts to analyze and estimate future utilization of large-scale educational telecommunications are summarized. Factors which affect future utilization are identified. Conclusions are presented.
A Multifaceted Mathematical Approach for Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexander, F.; Anitescu, M.; Bell, J.
2012-03-07
Applied mathematics has an important role to play in developing the tools needed for the analysis, simulation, and optimization of complex problems. These efforts require the development of the mathematical foundations for scientific discovery, engineering design, and risk analysis based on a sound integrated approach for the understanding of complex systems. However, maximizing the impact of applied mathematics on these challenges requires a novel perspective on approaching the mathematical enterprise. Previous reports that have surveyed the DOE's research needs in applied mathematics have played a key role in defining research directions with the community. Although these reports have had significant impact, accurately assessing current research needs requires an evaluation of today's challenges against the backdrop of recent advances in applied mathematics and computing. To address these needs, the DOE Applied Mathematics Program sponsored a Workshop for Mathematics for the Analysis, Simulation and Optimization of Complex Systems on September 13-14, 2011. The workshop had approximately 50 participants from both the national labs and academia. The goal of the workshop was to identify new research areas in applied mathematics that will complement and enhance the existing DOE ASCR Applied Mathematics Program efforts that are needed to address problems associated with complex systems. This report describes recommendations from the workshop and subsequent analysis of the workshop findings by the organizing committee.
Robotics Technology Crosscutting Program. Technology summary
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The Robotics Technology Development Program (RTDP) is a needs-driven effort. A lengthy series of presentations and discussions at DOE sites considered critical to DOE's Environmental Restoration and Waste Management (EM) Programs resulted in a clear understanding of the robotics applications needed to resolve definitive problems at the sites. A detailed analysis of the resulting robotics needs assessment revealed several common threads running through the sites: Tank Waste Retrieval (TWR), Contaminant Analysis Automation (CAA), Mixed Waste Operations (MWO), and Decontamination and Dismantlement (D and D). The RTDP Group also realized that some of the technology development in these four areas had common (Crosscutting, CC) needs, for example, computer control and sensor interface protocols. Further, the OTD approach to the Research, Development, Demonstration, Testing, and Evaluation (RDDT and E) process urged an additional organizational breakdown between short-term (1-3 years) and long-term (3-5 years) efforts (Advanced Technology, AT). These factors led to the formation of the fifth application area for Crosscutting and Advanced Technology (CC and AT) development. The RTDP is thus organized around these application areas -- TWR, CAA, MWO, D and D, and CC and AT -- with the first four developing short-term applied robotics. An RTDP Five-Year Plan was developed for organizing the Program to meet the needs in these application areas.
History of the numerical aerodynamic simulation program
NASA Technical Reports Server (NTRS)
Peterson, Victor L.; Ballhaus, William F., Jr.
1987-01-01
The Numerical Aerodynamic Simulation (NAS) program has reached a milestone with the completion of the initial operating configuration of the NAS Processing System Network. This achievement is the first major milestone in the continuing effort to provide a state-of-the-art supercomputer facility for the national aerospace community and to serve as a pathfinder for the development and use of future supercomputer systems. The underlying factors that motivated the initiation of the program are first identified and then discussed. These include the emergence and evolution of computational aerodynamics as a powerful new capability in aerodynamics research and development, the computer power required for advances in the discipline, the complementary nature of computation and wind tunnel testing, and the need for the government to play a pathfinding role in the development and use of large-scale scientific computing systems. Finally, the history of the NAS program is traced from its inception in 1975 to the present time.
Exploring Effective Decision Making through Human-Centered and Computational Intelligence Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Kyungsik; Cook, Kristin A.; Shih, Patrick C.
Decision-making has long been studied to understand the psychological, cognitive, and social process of selecting an effective choice from alternative options. Its study has been extended from the personal level to the group and collaborative level, and many computer-aided decision-making systems have been developed to help people make the right decisions. There has been significant research growth in computational aspects of decision-making systems, yet comparatively little effort has gone into identifying and articulating user needs and requirements in assessing system outputs and the extent to which human judgments could be utilized for making accurate and reliable decisions. Our research focus is decision-making through human-centered and computational intelligence methods in a collaborative environment, and the objectives of this position paper are to bring our research ideas to the workshop, and to share and discuss these ideas.
Sustainable and Autonomic Space Exploration Missions
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Sterritt, Roy; Rouff, Christopher; Rash, James L.; Truszkowski, Walter
2006-01-01
Visions for future space exploration have long term science missions in sight, resulting in the need for sustainable missions. Survivability is a critical property of sustainable systems and may be addressed through autonomicity, an emerging paradigm for self-management of future computer-based systems based on inspiration from the human autonomic nervous system. This paper examines some of the ongoing research efforts to realize these survivable systems visions, with specific emphasis on developments in Autonomic Policies.
Latest Sensors and Data Acquisition Development Efforts at KSC
NASA Technical Reports Server (NTRS)
Perotti, Jose M.
2002-01-01
This viewgraph presentation summarizes the sensor characteristics required by consumers desiring access to space, a long-term plan developed at KSC (Kennedy Space Center) to identify promising technologies for NASA's own future sensor needs, and the characteristics of several smart sensors already developed. Also addressed are the computer hardware and architecture used to operate sensors, and generic testing capabilities. Consumers desire sensors which are lightweight, inexpensive, intelligent, and easy to use.
The NASA aircraft icing research program
NASA Technical Reports Server (NTRS)
Shaw, Robert J.; Reinmann, John J.
1990-01-01
The objective of the NASA aircraft icing research program is to develop and make available to industry icing technology to support the needs and requirements for all-weather aircraft designs. Research is being done for both fixed wing and rotary wing applications. The NASA program emphasizes technology development in two areas, advanced ice protection concepts and icing simulation. Reviewed here are the computer code development/validation, icing wind tunnel testing, and icing flight testing efforts.
HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing
Karimi, Ramin; Hajdu, Andras
2016-01-01
Comprehensive efforts toward low-cost sequencing in the past few years have led to the growth of complete genome databases. In parallel with this effort, and in response to a strong need, fast and cost-effective methods and applications have been developed to accelerate sequence analysis. Identification is the very first step of this task. Due to the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative universal identification method is highly desirable. As an alignment-free approach, DNA signatures have provided new opportunities for the rapid identification of species. In this paper, we present an effective pipeline, HTSFinder (high-throughput signature finder), with a corresponding k-mer generator, GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in arbitrarily selected target and nontarget databases. Hadoop and MapReduce, as parallel and distributed computing tools running on commodity hardware, are used in this pipeline. This approach brings the power of high-performance computing to ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genomes. The considerable number of detected unique and common DNA signatures of the target database brings opportunities to improve the identification process not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis. PMID:26884678
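The core k-mer counting step that the pipeline distributes with Hadoop/MapReduce can be sketched on a single node as follows; the function, parameters, and toy sequences are illustrative assumptions rather than the pipeline's actual code.

```python
# Single-node sketch of k-mer counting and signature selection (illustrative
# names and toy sequences; the real pipeline distributes this with MapReduce).
from collections import Counter

def count_kmers(sequence: str, k: int = 18) -> Counter:
    """Count all overlapping k-mers in one sequence (the 'map' side of the job)."""
    seq = sequence.upper()
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

genome_a = "ATGCGTACGTTAGCATGCGTACGTTAGC"
genome_b = "ATGCGTACGTTAGCAAAAAAAAAAAAAA"
counts_a, counts_b = count_kmers(genome_a, k=8), count_kmers(genome_b, k=8)

# Unique signatures: k-mers present in the target but absent from the non-target
unique_to_a = set(counts_a) - set(counts_b)
print(len(unique_to_a), "candidate signature k-mers")
```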
Image registration assessment in radiotherapy image guidance based on control chart monitoring.
Xia, Wenyao; Breen, Stephen L
2018-04-01
Image guidance with cone beam computed tomography in radiotherapy can guarantee the precision and accuracy of patient positioning prior to treatment delivery. During the image guidance process, operators need to expend considerable effort to evaluate image guidance quality before correcting a patient's position. This work proposes an image registration assessment method based on control chart monitoring to reduce the effort required of the operator. Using a control chart plotted from the daily registration scores of each patient, the proposed method can quickly detect both alignment errors and image quality inconsistency. Therefore, the proposed method can provide a clear guideline for operators to identify unacceptable image quality and unacceptable image registration with minimal effort. Experimental results demonstrate that, using control charts from a clinical database of 10 patients undergoing prostate radiotherapy, the proposed method can quickly identify out-of-control signals and find the special causes of out-of-control registration events.
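A minimal sketch of the underlying idea, assuming an individuals-style chart with three-sigma limits estimated from an initial baseline of fractions; the score values and limit rule here are illustrative, not the authors' exact formulation.

```python
# Sketch of a control chart over daily registration scores (toy data; the
# baseline length and 3-sigma rule are illustrative assumptions).
import numpy as np

def control_limits(scores, n_baseline=5):
    """Estimate the center line and 3-sigma limits from the first few fractions."""
    baseline = np.asarray(scores[:n_baseline], dtype=float)
    center, sigma = baseline.mean(), baseline.std(ddof=1)
    return center, center - 3 * sigma, center + 3 * sigma

daily_scores = [0.82, 0.80, 0.83, 0.81, 0.82, 0.79, 0.55, 0.81]  # toy data
center, lcl, ucl = control_limits(daily_scores)
flags = [i for i, s in enumerate(daily_scores) if not (lcl <= s <= ucl)]
print(f"center={center:.3f}, limits=({lcl:.3f}, {ucl:.3f}), out-of-control days={flags}")
```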
Using Computational Toxicology to Enable Risk-Based ...
Slide presentation at Drug Safety Gordon Research Conference 2016 on research efforts in NCCT to enable Computational Toxicology to support risk assessment.
EPA’s National Center for Computational Toxicology is engaged in high-profile research efforts to improve the ability to more efficiently and effectively prioritize and screen thousands of environmental chemicals for potential toxicity. A central component of these efforts invol...
Space station Simulation Computer System (SCS) study for NASA/MSFC. Volume 1: Overview and summary
NASA Technical Reports Server (NTRS)
1989-01-01
NASA's Space Station Freedom Program (SSFP) planning efforts have identified a need for a payload training simulator system to serve as both a training facility and as a demonstrator to validate operational concepts. The envisioned Marshall Space Flight Center (MSFC) Payload Training Complex (PTC) required to meet this need will train the space station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. The Simulation Computer System (SCS) is the computer hardware, software, and workstations that will support the Payload Training Complex at MSFC. The purpose of this SCS study is to investigate issues related to the SCS, alternative requirements, simulator approaches, and state-of-the-art technologies to develop candidate concepts and designs. This study was performed from August 1988 to October 1989. Thus, the results are based on the SSFP August 1989 baseline, i.e., the pre-Langley configuration/budget review (C/BR) baseline. Some terms, e.g., combined trainer, are being redefined. An overview of the study activities and a summary of study results are given here.
Costa - Introduction to 2015 Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costa, James E.
Just as Sandia National Laboratories has two major locations (NM and CA), along with a number of smaller facilities across the nation, so too are its scientific, engineering, and computing resources distributed. As a part of Sandia's Institutional Computing Program, CA site-based Sandia computer scientists and engineers have been providing mission and research staff with local CA-resident expertise on computing options while also focusing on two growing high performance computing research problems. The first is how to increase system resilience to failure as machines grow larger, more complex, and heterogeneous. The second is how to ensure that computer hardware and configurations are optimized for specialized data analytical mission needs within the overall Sandia computing environment, including the HPC subenvironment. All of these activities support the larger Sandia effort in accelerating the development and integration of high performance computing into national security missions. Sandia continues to both promote national R&D objectives, including the recent Presidential Executive Order establishing the National Strategic Computing Initiative, and work to ensure that the full range of computing services and capabilities are available for all mission responsibilities, from national security to energy to homeland defense.
Multicore job scheduling in the Worldwide LHC Computing Grid
NASA Astrophysics Data System (ADS)
Forti, A.; Pérez-Calero Yzquierdo, A.; Hartmann, T.; Alef, M.; Lahiff, A.; Templon, J.; Dal Pra, S.; Gila, M.; Skipsey, S.; Acosta-Silva, C.; Filipcic, A.; Walker, R.; Walker, C. J.; Traynor, D.; Gadrat, S.
2015-12-01
After the successful first run of the LHC, data taking is scheduled to restart in Summer 2015 with experimental conditions leading to increased data volumes and event complexity. In order to process the data generated in such a scenario and exploit the multicore architectures of current CPUs, the LHC experiments have developed parallelized software for data reconstruction and simulation. However, a good fraction of their computing effort is still expected to be executed as single-core tasks. Therefore, jobs with diverse resource requirements will be distributed across the Worldwide LHC Computing Grid (WLCG), making workload scheduling a complex problem in itself. In response to this challenge, the WLCG Multicore Deployment Task Force has been created in order to coordinate the joint effort from experiments and WLCG sites. The main objective is to ensure the convergence of approaches from the different LHC Virtual Organizations (VOs) to make the best use of the shared resources in order to satisfy their new computing needs, minimizing any inefficiency originated from the scheduling mechanisms, and without imposing unnecessary complexities in the way sites manage their resources. This paper describes the activities and progress of the Task Force related to the aforementioned topics, including experiences from key sites on how to best use different batch system technologies, the evolution of workload submission tools by the experiments, and the knowledge gained from scale tests of the different proposed job submission strategies.
Resource Balancing Control Allocation
NASA Technical Reports Server (NTRS)
Frost, Susan A.; Bodson, Marc
2010-01-01
Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the control effort. The paper discusses the alternative choice of using the l1 norm for minimization of the tracking error and a normalized l(infinity) norm, or sup norm, for minimization of the control effort. The algorithm computes the norm of the actuator deflections scaled by the actuator limits. Minimization of the control effort then translates into the minimization of the maximum actuator deflection as a percentage of its range of motion. The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are investigated through examples. In particular, the min-max criterion results in a type of resource balancing, where the resources are the control surfaces and the algorithm balances these resources to achieve the desired command. A study of the sensitivity of the algorithms to the data is presented, which shows that the normalized l(infinity) algorithm has the lowest sensitivity, although high sensitivities are observed whenever the limits of performance are reached.
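A minimal sketch of the min-max allocation posed as a linear program is shown below, assuming an attainable command and made-up effectiveness and limit values; it follows the general normalized sup-norm idea rather than the paper's exact formulation or solver.

```python
# Illustrative min-max (normalized sup-norm) control allocation as an LP.
# B, u_max, and d are made-up example values, not from the paper.
import numpy as np
from scipy.optimize import linprog

B = np.array([[1.0, 0.8, -0.5, 0.2],      # control effectiveness (3 moments x 4 surfaces)
              [0.0, 0.6,  0.6, -0.4],
              [0.3, -0.2, 0.1,  0.9]])
u_max = np.array([25.0, 25.0, 30.0, 20.0])  # deflection limits (deg)
d = np.array([10.0, -4.0, 6.0])             # desired moments

m = B.shape[1]
c = np.zeros(m + 1); c[-1] = 1.0            # minimize t = max_i |u_i| / u_max_i

# |u_i| <= t * u_max_i written as two inequality rows per actuator
A_ub = np.vstack([np.hstack([ np.eye(m), -u_max[:, None]]),
                  np.hstack([-np.eye(m), -u_max[:, None]])])
b_ub = np.zeros(2 * m)

A_eq = np.hstack([B, np.zeros((B.shape[0], 1))])  # enforce B u = d
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=d,
              bounds=[(None, None)] * m + [(0, None)], method="highs")
u, t = res.x[:m], res.x[-1]
print("deflections:", np.round(u, 2), " max usage fraction:", round(t, 3))
```

Because the objective is the common scale factor t, the solution balances the surfaces: a result with t <= 1 indicates the command is achievable without saturating any actuator.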
A CFD Database for Airfoils and Wings at Post-Stall Angles of Attack
NASA Technical Reports Server (NTRS)
Petrilli, Justin; Paul, Ryan; Gopalarathnam, Ashok; Frink, Neal T.
2013-01-01
This paper presents selected results from an ongoing effort to develop an aerodynamic database from Reynolds-Averaged Navier-Stokes (RANS) computational analysis of airfoils and wings at stall and post-stall angles of attack. The data obtained from this effort will be used for validation and refinement of a low-order post-stall prediction method developed at NCSU, and to fill existing gaps in high angle of attack data in the literature. Such data could have potential applications in post-stall flight dynamics, helicopter aerodynamics and wind turbine aerodynamics. An overview of the NASA TetrUSS CFD package used for the RANS computational approach is presented. Detailed results for three airfoils are presented to compare their stall and post-stall behavior. The results for finite wings at stall and post-stall conditions focus on the effects of taper-ratio and sweep angle, with particular attention to whether the sectional flows can be approximated using two-dimensional flow over a stalled airfoil. While this approximation seems reasonable for unswept wings even at post-stall conditions, significant spanwise flow on stalled swept wings precludes the use of two-dimensional data to model sectional flows on swept wings. Thus, further effort is needed in low-order aerodynamic modeling of swept wings at stalled conditions.
Eye-gaze and intent: Application in 3D interface control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schryver, J.C.; Goldberg, J.H.
1993-06-01
Computer interface control is typically accomplished with an input "device" such as a keyboard, mouse, trackball, etc. An input device translates a user's input actions, such as mouse clicks and key presses, into appropriate computer commands. To control the interface, the user must first convert intent into the syntax of the input device. A more natural means of computer control is possible when the computer can directly infer user intent, without need of intervening input devices. We describe an application of eye-gaze-contingent control of an interactive three-dimensional (3D) user interface. A salient feature of the user interface is natural input, with a heightened impression of controlling the computer directly by the mind. With this interface, input of rotation and translation are intuitive, whereas other abstract features, such as zoom, are more problematic to match with user intent. This paper describes successes with implementation to date, and ongoing efforts to develop a more sophisticated intent inferencing methodology.
Predictive Capability Maturity Model for computational modeling and simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.
2007-10-01
The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.
Space Station Simulation Computer System (SCS) study for NASA/MSFC. Concept document
NASA Technical Reports Server (NTRS)
1990-01-01
NASA's Space Station Freedom Program (SSFP) planning efforts have identified a need for a payload training simulator system to serve as both a training facility and as a demonstrator to validate operational concepts. The envisioned MSFC Payload Training Complex (PTC) required to meet this need will train the space station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. The Simulation Computer System (SCS) will support the Payload Training Complex at MSFC. The purpose of this SCS study is to investigate issues related to the SCS, alternative requirements, simulator approaches, and state-of-the-art technologies to develop candidate concepts and designs.
Progress on the Fabric for Frontier Experiments Project at Fermilab
NASA Astrophysics Data System (ADS)
Box, Dennis; Boyd, Joseph; Dykstra, Dave; Garzoglio, Gabriele; Herner, Kenneth; Kirby, Michael; Kreymer, Arthur; Levshina, Tanya; Mhashilkar, Parag; Sharma, Neha
2015-12-01
The FabrIc for Frontier Experiments (FIFE) project is an ambitious, major-impact initiative within the Fermilab Scientific Computing Division designed to lead the computing model for Fermilab experiments. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying needs and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of Open Science Grid solutions for high throughput computing, data management, database access, and collaboration within experiments. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating several services into experiment computing operations, including new job submission services, software and reference data distribution through CVMFS repositories, a flexible data transfer client, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion with additional projects will be discussed. FIFE has taken a leading role in the definition of the computing model for Fermilab experiments, aided in the design of computing for experiments beyond Fermilab, and will continue to define the future direction of high throughput computing for future physics experiments worldwide.
Certification of Completion of ASC FY08 Level-2 Milestone ID #2933
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lipari, D A
2008-06-12
This report documents the satisfaction of the completion criteria associated with ASC FY08 Milestone ID No. 2933: 'Deploy Moab resource management services on BlueGene/L'. Specifically, this milestone represents LLNL efforts to enhance both SLURM and Moab to extend Moab's capabilities to schedule and manage BlueGene/L, and increases portability of user scripts between ASC systems. The completion criteria for the milestone are the following: (1) Batch jobs can be specified, submitted to Moab, scheduled, and run on the BlueGene/L system; (2) Moab will be able to support the markedly increased scale in node count as well as the wiring geometry that is unique to BlueGene/L; and (3) Moab will also prepare and report statistics of job CPU usage just as it does for the current systems it supports. This document presents the completion evidence for both of the stated milestone certification methods: Completion evidence for this milestone will be in the form of (1) documentation--a report that certifies that the completion criteria have been met; and (2) user hand-off. As the selected Tri-Lab workload manager, Moab was chosen to replace LCRM as the enterprise-wide scheduler across Livermore Computing (LC) systems. While LCRM/SLURM successfully scheduled jobs on BG/L, the effort to replace LCRM with Moab on BG/L represented a significant challenge. Moab is a commercial product developed and sold by Cluster Resources, Inc. (CRI). Moab receives the users' batch job requests and dispatches these jobs to run on a specific cluster. SLURM is an open-source resource manager whose development is managed by members of the Integrated Computational Resource Management Group (ICRMG) within the Services and Development Division at LLNL. SLURM is responsible for launching and running jobs on an individual cluster. Replacing LCRM with Moab on BG/L required substantial changes to both Moab and SLURM. While the ICRMG could directly manage the SLURM development effort, the work to enhance Moab had to be done by Moab's vendor. Members of the ICRMG held many meetings with CRI developers to develop the design and specify the requirements for what Moab needed to do. Extensions to SLURM are used to run jobs on the BlueGene/L architecture. These extensions support the three-dimensional network topology unique to BG/L. While BG/L geometry support was already in SLURM, enhancements were needed to provide backfill capability and answer 'will-run' queries from Moab. For its part, the Moab architecture needed to be modified to interact with SLURM in a more coordinated way. It needed enhancements to support SLURM's shorthand notation for representing thousands of compute nodes and to report this information using Moab's existing status commands. The LCRM wrapper scripts that emulated LCRM commands also needed to be enhanced to support BG/L usage. The effort was successful, as Moab 5.2.2 and SLURM 1.3 were installed on the 106,496-node BG/L machine on May 21, 2008, and turned over to the users to run production.
The Man computer Interactive Data Access System: 25 Years of Interactive Processing.
NASA Astrophysics Data System (ADS)
Lazzara, Matthew A.; Benson, John M.; Fox, Robert J.; Laitsch, Denise J.; Rueden, Joseph P.; Santek, David A.; Wade, Delores M.; Whittaker, Thomas M.; Young, J. T.
1999-02-01
12 October 1998 marked the 25th anniversary of the Man computer Interactive Data Access System (McIDAS). On that date in 1973, McIDAS was first used operationally by scientists as a tool for data analysis. Over the last 25 years, McIDAS has undergone numerous architectural changes in an effort to keep pace with changing technology. In its early years, significant technological breakthroughs were required to achieve the functionality needed by atmospheric scientists. Today McIDAS is challenged by new Internet-based approaches to data access and data display. The history and impact of McIDAS, along with some of the lessons learned, are presented here.
General-Purpose Front End for Real-Time Data Processing
NASA Technical Reports Server (NTRS)
James, Mark
2007-01-01
FRONTIER is a computer program that functions as a front end for any of a variety of other software of both the artificial intelligence (AI) and conventional data-processing types. As used here, 'front end' signifies interface software needed for acquiring and preprocessing data and making the data available for analysis by the other software. FRONTIER is reusable in that it can be rapidly tailored to any such other software with minimum effort. Each component of FRONTIER is programmable and is executed in an embedded virtual machine. Each component can be reconfigured during execution. The virtual-machine implementation makes FRONTIER independent of the type of computing hardware on which it is executed.
Thermal analysis of radiometer containers for the 122m hoop column antenna concept
NASA Technical Reports Server (NTRS)
Dillon-Townes, L. A.
1986-01-01
A thermal analysis was conducted for the 122 Meter Hoop Column Antenna (HCA) Radiometer electronic package containers. The HCA radiometer containers were modeled using the computer aided graphics program, ANVIL 4000, and thermally simulated using two thermal programs, TRASYS and MITAS. The results of the analysis provided relationships between the absorptance-emittance ratio and the average surface temperature of the orbiting radiometer containers. These relationships can be used to specify the surface properties, absorptance and reflectance, of the radiometer containers. This is an initial effort in determining the passive thermal protection needs for the 122 m HCA radiometer containers. Several recommendations are provided which expand this effort so specific passive and active thermal protection systems can be defined and designed.
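As context for why the absorptance-emittance ratio governs the average temperature, a simplified radiative-balance relation (assuming an isothermal container, solar heating only, and no internal dissipation; this is a textbook approximation, not the MITAS/TRASYS model used in the analysis) is:

```latex
% Simplified radiative balance: absorbed solar power equals emitted IR power.
% S = solar flux, sigma = Stefan-Boltzmann constant,
% A_proj = sunlit projected area, A_surf = total radiating area.
\alpha\, S\, A_{\mathrm{proj}} = \varepsilon\, \sigma\, T^{4} A_{\mathrm{surf}}
\quad\Longrightarrow\quad
T \approx \left( \frac{\alpha}{\varepsilon}\,
      \frac{S\, A_{\mathrm{proj}}}{\sigma\, A_{\mathrm{surf}}} \right)^{1/4}
```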
Using Adaptive Mesh Refinement to Simulate Storm Surge
NASA Astrophysics Data System (ADS)
Mandli, K. T.; Dawson, C.
2012-12-01
Coastal hazards related to strong storms such as hurricanes and typhoons are one of the most frequently recurring and widespread hazards to coastal communities. Storm surges are among the most devastating effects of these storms, and their prediction and mitigation through numerical simulations is of great interest to coastal communities that need to plan for the subsequent rise in sea level during these storms. Unfortunately these simulations require a large amount of resolution in regions of interest to capture relevant effects, resulting in a computational cost that may be intractable. This problem is exacerbated in situations where a large number of similar runs is needed, such as in design of infrastructure or forecasting with ensembles of probable storms. One solution to address the problem of computational cost is to employ adaptive mesh refinement (AMR) algorithms. AMR functions by decomposing the computational domain into regions which may vary in resolution as time proceeds. Decomposing the domain as the flow evolves makes this class of methods effective at ensuring that computational effort is spent only where it is needed. AMR also allows for placement of computational resolution independent of user interaction and expectation of the dynamics of the flow as well as particular regions of interest such as harbors. The simulation of many different applications have only been made possible by using AMR-type algorithms, which have allowed otherwise impractical simulations to be performed for much less computational expense. Our work involves studying how storm surge simulations can be improved with AMR algorithms. We have implemented relevant storm surge physics in the GeoClaw package and tested how Hurricane Ike's surge into Galveston Bay and up the Houston Ship Channel compares to available tide gauge data. We will also discuss issues dealing with refinement criteria, optimal resolution and refinement ratios, and inundation.
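To illustrate how refinement criteria of the kind discussed here typically work, the sketch below flags cells where the surface-elevation gradient is steep or that fall inside a user-specified region of interest; this is a generic illustration with made-up tolerances, not GeoClaw's actual flagging interface or criteria.

```python
# Schematic AMR cell-flagging rule (illustrative only, not GeoClaw's API).
import numpy as np

def flag_cells(eta, x, y, grad_tol=0.05, regions_of_interest=()):
    """Flag cells whose surface-elevation gradient is large or that lie in a
    user-specified region (e.g., a harbor) that must stay finely resolved."""
    gy, gx = np.gradient(eta)                 # gradients along rows (y) and columns (x)
    flags = np.hypot(gx, gy) > grad_tol
    for (x0, x1, y0, y1) in regions_of_interest:
        flags |= (x >= x0) & (x <= x1) & (y >= y0) & (y <= y1)
    return flags

x, y = np.meshgrid(np.linspace(0, 100, 200), np.linspace(0, 100, 200))
eta = np.exp(-((x - 40) ** 2 + (y - 60) ** 2) / 50.0)   # toy surge bump
flags = flag_cells(eta, x, y, regions_of_interest=[(70, 90, 10, 30)])
print(flags.sum(), "cells flagged for refinement")
```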
1987-10-01
19 treated in interaction with each other and the hardware and software design. The authors point out some of the inadequacies in HP technologies and...life cycle costs recognition performance on secondary tasks effort/efficiency number of wins (gaming tasks) number of instructors needed amount of...student interacts with this material in real time via a terminal and display system. The computer performs many functions, such as diagnose student
Atomic Detail Visualization of Photosynthetic Membranes with GPU-Accelerated Ray Tracing
Vandivort, Kirby L.; Barragan, Angela; Singharoy, Abhishek; Teo, Ivan; Ribeiro, João V.; Isralewitz, Barry; Liu, Bo; Goh, Boon Chong; Phillips, James C.; MacGregor-Chatwin, Craig; Johnson, Matthew P.; Kourkoutis, Lena F.; Hunter, C. Neil
2016-01-01
The cellular process responsible for providing energy for most life on Earth, namely photosynthetic light-harvesting, requires the cooperation of hundreds of proteins across an organelle, involving length and time scales spanning several orders of magnitude over quantum and classical regimes. Simulation and visualization of this fundamental energy conversion process pose many unique methodological and computational challenges. We present, in two accompanying movies, light-harvesting in the photosynthetic apparatus found in purple bacteria, the so-called chromatophore. The movies are the culmination of three decades of modeling efforts, featuring the collaboration of theoretical, experimental, and computational scientists. We describe the techniques that were used to build, simulate, analyze, and visualize the structures shown in the movies, and we highlight cases where scientific needs spurred the development of new parallel algorithms that efficiently harness GPU accelerators and petascale computers. PMID:27274603
NASA Technical Reports Server (NTRS)
Demerdash, Nabeel A. O.; Wang, Ren-Hong
1988-01-01
The main purpose of this project is the development of computer-aided models for purposes of studying the effects of various design changes on the parameters and performance characteristics of the modified Lundell class of alternators (MLA) as components of a solar dynamic power system supplying electric energy needs in the forthcoming space station. Key to this modeling effort is the computation of magnetic field distribution in MLAs. Since the nature of the magnetic field is three-dimensional, the first step in the investigation was to apply the finite element method to discretize the volume, using the tetrahedron as the basic 3-D element. Details of the stator 3-D finite element grid are given. A preliminary look at the early stage of a 3-D rotor grid is presented.
ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics (CAA)
NASA Technical Reports Server (NTRS)
Hardin, Jay C. (Editor); Ristorcelli, J. Ray (Editor); Tam, Christopher K. W. (Editor)
1995-01-01
The proceedings of the Benchmark Problems in Computational Aeroacoustics Workshop held at NASA Langley Research Center are the subject of this report. The purpose of the Workshop was to assess the utility of a number of numerical schemes in the context of the unusual requirements of aeroacoustical calculations. The schemes were assessed from the viewpoint of dispersion and dissipation -- issues important to long time integration and long distance propagation in aeroacoustics. Also investigated was the effect of implementing different boundary conditions. The Workshop included a forum in which practical engineering problems related to computational aeroacoustics were discussed. This discussion took the form of a dialogue between an industrial panel and the workshop participants and was an effort to suggest the direction of evolution of this field in the context of current engineering needs.
A community effort to protect genomic data sharing, collaboration and outsourcing.
Wang, Shuang; Jiang, Xiaoqian; Tang, Haixu; Wang, Xiaofeng; Bu, Diyue; Carey, Knox; Dyke, Stephanie Om; Fox, Dov; Jiang, Chao; Lauter, Kristin; Malin, Bradley; Sofia, Heidi; Telenti, Amalio; Wang, Lei; Wang, Wenhao; Ohno-Machado, Lucila
2017-01-01
The human genome can reveal sensitive information and is potentially re-identifiable, which raises privacy and security concerns about sharing such data on wide scales. In 2016, we organized the third Critical Assessment of Data Privacy and Protection competition as a community effort to bring together biomedical informaticists, computer privacy and security researchers, and scholars in ethical, legal, and social implications (ELSI) to assess the latest advances on privacy-preserving techniques for protecting human genomic data. Teams were asked to develop novel protection methods for emerging genome privacy challenges in three scenarios: Track (1) data sharing through the Beacon service of the Global Alliance for Genomics and Health; Track (2) collaborative discovery of similar genomes between two institutions; and Track (3) data outsourcing to public cloud services. The latter two tracks represent continuing themes from our 2015 competition, while the former was new and a response to a recently established vulnerability. The winning strategy for Track 1 mitigated the privacy risk by hiding approximately 11% of the variation in the database while permitting around 160,000 queries, a significant improvement over the baseline. The winning strategies in Tracks 2 and 3 showed significant progress over the previous competition by achieving multiple orders of magnitude performance improvement in terms of computational runtime and memory requirements. The outcomes suggest that applying highly optimized privacy-preserving and secure computation techniques to safeguard genomic data sharing and analysis is useful. However, the results also indicate that further efforts are needed to refine these techniques into practical solutions.
NASA Astrophysics Data System (ADS)
Gerjuoy, Edward
2005-06-01
The security of messages encoded via the widely used RSA public key encryption system rests on the enormous computational effort required to find the prime factors of a large number N using classical (conventional) computers. In 1994 Peter Shor showed that for sufficiently large N, a quantum computer could perform the factoring with much less computational effort. This paper endeavors to explain, in a fashion comprehensible to the nonexpert, the RSA encryption protocol; the various quantum computer manipulations constituting the Shor algorithm; how the Shor algorithm performs the factoring; and the precise sense in which a quantum computer employing Shor's algorithm can be said to accomplish the factoring of very large numbers with less computational effort than a classical computer. It is made apparent that factoring N generally requires many successive runs of the algorithm. Our analysis reveals that the probability of achieving a successful factorization on a single run is about twice as large as commonly quoted in the literature.
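The classical half of the algorithm, recovering factors from the order r that the quantum subroutine estimates, can be sketched directly; it also makes clear why several runs may be needed (odd r, or a^(r/2) congruent to -1 mod N). In the toy sketch below the order is found by brute force, which is precisely the exponentially expensive step the quantum computer replaces, and N = 15 is chosen only for illustration.

```python
# Classical post-processing of Shor's algorithm: given the order r of a mod N,
# try to recover a nontrivial factor of N. The order() routine here is a
# brute-force classical stand-in for the quantum order-finding subroutine.
from math import gcd

def order(a, N):
    """Smallest r > 0 with a**r == 1 (mod N); only valid when gcd(a, N) == 1."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_step(a, N):
    """Attempt one run: return a nontrivial factor of N, or None if this run fails."""
    g = gcd(a, N)
    if g != 1:
        return g                      # lucky draw: a already shares a factor with N
    r = order(a, N)
    if r % 2 == 1:
        return None                   # odd order: this run fails, pick another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None                   # a**(r/2) == -1 (mod N): this run also fails
    return gcd(y - 1, N)              # gcd(y + 1, N) gives the complementary factor

N = 15
for a in (2, 7, 11, 13, 14):
    print(a, shor_classical_step(a, N))   # a = 14 illustrates a failed run
```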
Baele, Guy; Lemey, Philippe; Vansteelandt, Stijn
2013-03-06
Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model's marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. We here assess the original 'model-switch' path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model's marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation.
2013-01-01
Background Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model’s marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. Results We here assess the original ‘model-switch’ path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model’s marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. Conclusions We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation. PMID:23497171
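To make the estimator concrete, the sketch below applies stepping-stone sampling to a conjugate toy model (one Gaussian observation with a Gaussian prior) where the power posteriors can be sampled exactly and the true log marginal likelihood is known in closed form. It illustrates the general technique only, not the authors' phylogenetic implementation; the temperature schedule and sample sizes are arbitrary choices.

```python
# Stepping-stone estimate of a log marginal likelihood on a conjugate toy model:
# y ~ N(theta, 1), theta ~ N(0, 1), so each power posterior is Gaussian and the
# exact answer is log N(y; 0, 2).
import numpy as np

rng = np.random.default_rng(1)
y = 1.3                                    # a single synthetic observation
n_samples, K = 5000, 32                    # samples per rung, number of rungs
betas = np.linspace(0.0, 1.0, K + 1) ** 3  # concentrate rungs near beta = 0

def log_lik(theta):
    return -0.5 * np.log(2 * np.pi) - 0.5 * (y - theta) ** 2

def sample_power_posterior(beta, size):
    # prior N(0,1) times likelihood**beta is N(beta*y/(1+beta), 1/(1+beta))
    return rng.normal(beta * y / (1 + beta), np.sqrt(1.0 / (1 + beta)), size)

def log_mean_exp(x):
    m = x.max()
    return m + np.log(np.mean(np.exp(x - m)))

log_Z = 0.0
for b_prev, b_next in zip(betas[:-1], betas[1:]):
    theta = sample_power_posterior(b_prev, n_samples)
    log_Z += log_mean_exp((b_next - b_prev) * log_lik(theta))

exact = -0.5 * np.log(2 * np.pi * 2.0) - y ** 2 / 4.0   # y ~ N(0, 2) marginally
print(f"stepping-stone estimate: {log_Z:.4f}   exact: {exact:.4f}")
```

A log Bayes factor would then be the difference of two such estimates, whereas the model-switch variants discussed above estimate it along a single path connecting the two competing models.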
Fernando, Rohan L; Cheng, Hao; Golden, Bruce L; Garrick, Dorian J
2016-12-08
Two types of models have been used for single-step genomic prediction and genome-wide association studies that include phenotypes from both genotyped animals and their non-genotyped relatives. The two types are breeding value models (BVM) that fit breeding values explicitly and marker effects models (MEM) that express the breeding values in terms of the effects of observed or imputed genotypes. MEM can accommodate a wider class of analyses, including variable selection or mixture model analyses. The order of the equations that need to be solved and the inverses required in their construction vary widely, and thus the computational effort required depends upon the size of the pedigree, the number of genotyped animals and the number of loci. We present computational strategies to avoid storing large, dense blocks of the MME that involve imputed genotypes. Furthermore, we present a hybrid model that fits a MEM for animals with observed genotypes and a BVM for those without genotypes. The hybrid model is computationally attractive for pedigree files containing millions of animals with a large proportion of those being genotyped. We demonstrate the practicality of both the original MEM and the hybrid model using real data comprising 6,179,960 animals in the pedigree, 4,934,101 phenotypes, and 31,453 animals genotyped at 40,214 informative loci. To complete a single-trait analysis on a desktop computer with four graphics cards required about 3 h using the hybrid model to obtain both preconditioned conjugate gradient solutions and 42,000 Markov chain Monte Carlo (MCMC) samples of breeding values, which allowed making inferences from posterior means, variances and covariances. The MCMC sampling required one quarter of the effort when the hybrid model was used compared to the published MEM. We present a hybrid model that fits a MEM for animals with genotypes and a BVM for those without genotypes. Its practicality and considerable reduction in computing effort were demonstrated. This model can readily be extended to accommodate multiple traits, multiple breeds, maternal effects, and additional random effects such as polygenic residual effects.
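The iterative solver referred to above can be sketched generically: a Jacobi-preconditioned conjugate gradient loop of the kind commonly used to solve mixed model equations. The snippet below is a plain NumPy illustration, not the authors' multi-GPU implementation, and the small random symmetric positive-definite test system is fabricated for the example.

```python
# Generic preconditioned conjugate gradient (PCG) with a Jacobi preconditioner.
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=1000):
    M_inv = 1.0 / np.diag(A)                # Jacobi (diagonal) preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for it in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it + 1
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Usage on a fabricated SPD system standing in for the mixed model equations.
rng = np.random.default_rng(0)
G = rng.standard_normal((200, 200))
A = G @ G.T + 200 * np.eye(200)             # well-conditioned SPD test matrix
b = rng.standard_normal(200)
x, iters = pcg(A, b)
print(iters, np.linalg.norm(A @ x - b))     # iteration count and residual norm
```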
Portable parallel stochastic optimization for the design of aeropropulsion components
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Rhodes, G. S.
1994-01-01
This report presents the results of Phase 1 research to develop a methodology for performing large-scale Multi-disciplinary Stochastic Optimization (MSO) for the design of aerospace systems ranging from aeropropulsion components to complete aircraft configurations. The current research recognizes that such design optimization problems are computationally expensive, and require the use of either massively parallel or multiple-processor computers. The methodology also recognizes that many operational and performance parameters are uncertain, and that uncertainty must be considered explicitly to achieve optimum performance and cost. The objective of this Phase 1 research was to initialize the development of an MSO methodology that is portable to a wide variety of hardware platforms, while achieving efficient, large-scale parallelism when multiple processors are available. The first effort in the project was a literature review of available computer hardware, as well as review of portable, parallel programming environments. The first effort was to implement the MSO methodology for a problem using the portable parallel programming language, Parallel Virtual Machine (PVM). The third and final effort was to demonstrate the example on a variety of computers, including a distributed-memory multiprocessor, a distributed-memory network of workstations, and a single-processor workstation. Results indicate the MSO methodology can be well-applied towards large-scale aerospace design problems. Nearly perfect linear speedup was demonstrated for computation of optimization sensitivity coefficients on both a 128-node distributed-memory multiprocessor (the Intel iPSC/860) and a network of workstations (speedups of almost 19 times achieved for 20 workstations). Very high parallel efficiencies (75 percent for 31 processors and 60 percent for 50 processors) were also achieved for computation of aerodynamic influence coefficients on the Intel. Finally, the multi-level parallelization strategy that will be needed for large-scale MSO problems was demonstrated to be highly efficient. The same parallel code instructions were used on both platforms, demonstrating portability. There are many applications for which MSO can be applied, including NASA's High-Speed-Civil Transport, and advanced propulsion systems. The use of MSO will reduce design and development time and testing costs dramatically.
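The speedup and efficiency figures quoted above follow from the standard definitions, as the short calculation below shows; the underlying serial and parallel timings are not given in the abstract, so only the reported speedup and processor-count pairs are used.

```python
# Parallel efficiency = speedup / number of processors.
def efficiency(speedup, n_procs):
    return speedup / n_procs

# "speedups of almost 19 times achieved for 20 workstations"
print(f"{efficiency(19, 20):.0%} efficiency on the workstation network")

# Conversely, the reported efficiencies imply these effective speedups:
for eff, p in [(0.75, 31), (0.60, 50)]:
    print(f"{eff:.0%} efficiency on {p} processors -> speedup ~ {eff * p:.1f}x")
```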
Overview of Materials Qualification Needs for Metal Additive Manufacturing
NASA Astrophysics Data System (ADS)
Seifi, Mohsen; Salem, Ayman; Beuth, Jack; Harrysson, Ola; Lewandowski, John J.
2016-03-01
This overview highlights some of the key aspects regarding materials qualification needs across the additive manufacturing (AM) spectrum. AM technology has experienced considerable publicity and growth in the past few years with many successful insertions for non-mission-critical applications. However, to meet the full potential that AM has to offer, especially for flight-critical components (e.g., rotating parts, fracture-critical parts, etc.), qualification and certification efforts are necessary. While development of qualification standards will address some of these needs, this overview outlines some of the other key areas that will need to be considered in the qualification path, including various process-, microstructure-, and fracture-modeling activities in addition to integrating these with lifing activities targeting specific components. Ongoing work in the Advanced Manufacturing and Mechanical Reliability Center at Case Western Reserve University is focusing on fracture and fatigue testing to rapidly assess critical mechanical properties of some titanium alloys before and after post-processing, in addition to conducting nondestructive testing/evaluation using micro-computed tomography at General Electric. Process mapping studies are being conducted at Carnegie Mellon University while large area microstructure characterization and informatics (EBSD and BSE) analyses are being conducted at Materials Resources LLC to enable future integration of these efforts via an Integrated Computational Materials Engineering approach to AM. Possible future pathways for materials qualification are provided.
A semi-automatic annotation tool for cooking video
NASA Astrophysics Data System (ADS)
Bianco, Simone; Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo; Margherita, Roberto; Marini, Gianluca; Gianforme, Giorgio; Pantaleo, Giuseppe
2013-03-01
In order to create a cooking assistant application to guide the users in the preparation of the dishes relevant to their profile diets and food preferences, it is necessary to accurately annotate the video recipes, identifying and tracking the foods handled by the cook. These videos present particular annotation challenges such as frequent occlusions, food appearance changes, etc. Manually annotating the videos is a time-consuming, tedious and error-prone task. Fully automatic tools that integrate computer vision algorithms to extract and identify the elements of interest are not error free, and false positive and false negative detections need to be corrected in a post-processing stage. We present an interactive, semi-automatic tool for the annotation of cooking videos that integrates computer vision techniques under the supervision of the user. The annotation accuracy is increased with respect to completely automatic tools and the human effort is reduced with respect to completely manual ones. The performance and usability of the proposed tool are evaluated on the basis of the time and effort required to annotate the same video sequences.
Integrated modeling of advanced optical systems
NASA Astrophysics Data System (ADS)
Briggs, Hugh C.; Needels, Laura; Levine, B. Martin
1993-02-01
This poster session paper describes an integrated modeling and analysis capability being developed at JPL under funding provided by the JPL Director's Discretionary Fund and the JPL Control/Structure Interaction Program (CSI). The posters briefly summarize the program capabilities and illustrate them with an example problem. The computer programs developed under this effort will provide an unprecedented capability for integrated modeling and design of high performance optical spacecraft. The engineering disciplines supported include structural dynamics, controls, optics and thermodynamics. Such tools are needed in order to evaluate the end-to-end system performance of spacecraft such as OSI, POINTS, and SMMM. This paper illustrates the proof-of-concept tools that have been developed to establish the technology requirements and demonstrate the new features of integrated modeling and design. The current program also includes implementation of a prototype tool based upon the CAESY environment being developed under the NASA Guidance and Control Research and Technology Computational Controls Program. This prototype will be available late in FY-92. The development plan proposes a major software production effort to fabricate, deliver, support and maintain a national-class tool from FY-93 through FY-95.
A Data-Driven Framework for Incorporating New Tools for ...
This talk was given during the “Exposure-Based Toxicity Testing” session at the annual meeting of the International Society for Exposure Science. It provided an update on the state of the science and tools that may be employed in risk-based prioritization efforts. It outlined knowledge gained from the data provided using these high-throughput tools to assess chemical bioactivity and to predict chemical exposures and also identified future needs. It provided an opportunity to showcase ongoing research efforts within the National Exposure Research Laboratory and the National Center for Computational Toxicology within the Office of Research and Development to an international audience. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.
The Prediction of Drug-Disease Correlation Based on Gene Expression Data.
Cui, Hui; Zhang, Menghuan; Yang, Qingmin; Li, Xiangyi; Liebman, Michael; Yu, Ying; Xie, Lu
2018-01-01
The explosive growth of high-throughput experimental methods and resulting data yields both opportunity and challenge for selecting the correct drug to treat both a specific patient and their individual disease. Ideally, it would be useful and efficient if computational approaches could be applied to help achieve optimal drug-patient-disease matching, but current efforts have met with limited success. Current approaches have primarily utilized the measurable effect of a specific drug on target tissue or cell lines to identify the potential biological effect of such treatment. While these efforts have met with some level of success, there exists much opportunity for improvement. This specifically follows the observation that, for many diseases in light of actual patient response, there is increasing need for treatment with combinations of drugs rather than single drug therapies. Only a few previous studies have yielded computational approaches for predicting the synergy of drug combinations by analyzing high-throughput molecular datasets. However, these computational approaches focused on the characteristics of the drug itself, without fully accounting for disease factors. Here, we propose an algorithm to specifically predict synergistic effects of drug combinations on various diseases, by integrating the data characteristics of disease-related gene expression profiles with drug-treated gene expression profiles. We have demonstrated utility through its application to transcriptome data, including microarray and RNA-Seq data, and the drug-disease prediction results were validated using existing publications and drug databases. It is also applicable to other quantitative profiling data such as proteomics data. We also provide an interactive web interface to allow our Prediction of Drug-Disease method to be readily applied to user data. While our studies represent a preliminary exploration of this critical problem, we believe that the algorithm can provide the basis for further refinement towards addressing a large clinical need.
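A common starting point for this kind of analysis is to score a candidate drug (or combination) by how strongly its expression signature anti-correlates with the disease signature. The sketch below illustrates that generic idea only; it is not the authors' Prediction of Drug-Disease algorithm, and the fold-change vectors are invented.

```python
# Generic "signature reversal" score: higher means the drug-induced expression
# changes oppose the disease-induced changes more strongly.
import numpy as np

def reversal_score(disease_logfc, drug_logfc):
    """Negative Spearman-like rank correlation (ties broken arbitrarily)."""
    d = np.argsort(np.argsort(disease_logfc)).astype(float)   # ranks
    t = np.argsort(np.argsort(drug_logfc)).astype(float)
    d -= d.mean()
    t -= t.mean()
    return -float(d @ t / np.sqrt((d @ d) * (t @ t)))

disease = np.array([ 2.1, -1.4,  0.8,  3.0, -2.2])   # disease vs. normal (invented)
drug_a  = np.array([-1.9,  1.1, -0.5, -2.6,  1.8])   # reverses the signature
drug_b  = np.array([ 1.5, -0.9,  0.4,  2.2, -1.7])   # mimics the signature
combo   = 0.5 * (drug_a + drug_b)                     # naive additive combination
for name, sig in [("drug A", drug_a), ("drug B", drug_b), ("A+B", combo)]:
    print(name, round(reversal_score(disease, sig), 3))
```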
Visual Odometry for Autonomous Deep-Space Navigation Project
NASA Technical Reports Server (NTRS)
Robinson, Shane; Pedrotty, Sam
2016-01-01
Autonomous rendezvous and docking (AR&D) is a critical need for manned spaceflight, especially in deep space where communication delays essentially leave crews on their own for critical operations like docking. Previously developed AR&D sensors have been large, heavy, power-hungry, and may still require further development (e.g. Flash LiDAR). Other approaches to vision-based navigation are not computationally efficient enough to operate quickly on slower, flight-like computers. The key technical challenge for visual odometry is to adapt it from the current terrestrial applications it was designed for to function in the harsh lighting conditions of space. This effort leveraged Draper Laboratory’s considerable prior development and expertise, benefitting both parties. The algorithm Draper has created is unique from other pose estimation efforts as it has a comparatively small computational footprint (suitable for use onboard a spacecraft, unlike alternatives) and potentially offers accuracy and precision needed for docking. This presents a solution to the AR&D problem that only requires a camera, which is much smaller, lighter, and requires far less power than competing AR&D sensors. We have demonstrated the algorithm’s performance and ability to process ‘flight-like’ imagery formats with a ‘flight-like’ trajectory, positioning ourselves to easily process flight data from the upcoming ‘ISS Selfie’ activity and then compare the algorithm’s quantified performance to the simulated imagery. This will bring visual odometry beyond TRL 5, proving its readiness to be demonstrated as part of an integrated system. Once beyond TRL 5, visual odometry will be poised to be demonstrated as part of a system in an in-space demo where relative pose is critical, like Orion AR&D, ISS robotic operations, asteroid proximity operations, and more.
Proposed Directions for Research in Computer-Based Education.
ERIC Educational Resources Information Center
Waugh, Michael L.
Several directions for potential research efforts in the field of computer-based education (CBE) are discussed. (For the purposes of this paper, CBE is defined as any use of computers to promote learning with no intended inference as to the specific nature or organization of the educational application under discussion.) Efforts should be directed…
Rapid Prototyping of Hydrologic Model Interfaces with IPython
NASA Astrophysics Data System (ADS)
Farthing, M. W.; Winters, K. D.; Ahmadia, A. J.; Hesser, T.; Howington, S. E.; Johnson, B. D.; Tate, J.; Kees, C. E.
2014-12-01
A significant gulf still exists between the state of practice and state of the art in hydrologic modeling. Part of this gulf is due to the lack of adequate pre- and post-processing tools for newly developed computational models. The development of user interfaces has traditionally lagged several years behind the development of a particular computational model or suite of models. As a result, models with mature interfaces often lack key advancements in model formulation, solution methods, and/or software design and technology. Part of the problem has been a focus on developing monolithic tools to provide comprehensive interfaces for the entire suite of model capabilities. Such efforts require expertise in software libraries and frameworks for creating user interfaces (e.g., Tcl/Tk, Qt, and MFC). These tools are complex and require significant investment in project resources (time and/or money) to use. Moreover, providing the required features for the entire range of possible applications and analyses creates a cumbersome interface. For a particular site or application, the modeling requirements may be simplified or at least narrowed, which can greatly reduce the number and complexity of options that need to be accessible to the user. However, monolithic tools usually are not adept at dynamically exposing specific workflows. Our approach is to deliver highly tailored interfaces to users. These interfaces may be site and/or process specific. As a result, we end up with many, customized interfaces rather than a single, general-use tool. For this approach to be successful, it must be efficient to create these tailored interfaces. We need technology for creating quality user interfaces that is accessible and has a low barrier for integration into model development efforts. Here, we present efforts to leverage IPython notebooks as tools for rapid prototyping of site and application-specific user interfaces. We provide specific examples from applications in near-shore environments as well as levee analysis. We discuss our design decisions and methodology for developing customized interfaces, strategies for delivery of the interfaces to users in various computing environments, as well as implications for the design/implementation of simulation models.
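As a sense of how lightweight such a tailored interface can be, the sketch below wires a hypothetical model call to notebook controls using the standard ipywidgets interact API (assuming ipywidgets is available alongside IPython). The function run_levee_model and its parameters are invented placeholders standing in for a call into a real simulator.

```python
# Minimal site-specific notebook interface built with ipywidgets.
from ipywidgets import interact, FloatSlider, Dropdown

def run_levee_model(river_stage_m, hydraulic_conductivity_m_per_s, scenario):
    # Placeholder "model": a real interface would launch the simulator
    # (locally or on a remote HPC resource) and plot its results.
    head_drop = river_stage_m * (0.6 if scenario == "steady" else 0.8)
    seepage = hydraulic_conductivity_m_per_s * head_drop
    print(f"estimated seepage flux ~ {seepage:.2e} m/s ({scenario} case)")

interact(run_levee_model,
         river_stage_m=FloatSlider(min=0.0, max=10.0, step=0.5, value=4.0),
         hydraulic_conductivity_m_per_s=FloatSlider(min=1e-7, max=1e-4,
                                                    step=1e-7, value=1e-5,
                                                    readout_format=".1e"),
         scenario=Dropdown(options=["steady", "flood"], value="steady"))
```

Because the whole interface is a few lines in a notebook cell, swapping in a different site, parameter set, or workflow amounts to editing the function signature and widget ranges rather than rebuilding a monolithic GUI.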
Karp, Peter D; Berger, Bonnie; Kovats, Diane; Lengauer, Thomas; Linial, Michal; Sabeti, Pardis; Hide, Winston; Rost, Burkhard
2015-02-15
Speed is of the essence in combating Ebola; thus, computational approaches should form a significant component of Ebola research. As for the development of any modern drug, computational biology is uniquely positioned to contribute through comparative analysis of the genome sequences of Ebola strains and three-dimensional protein modeling. Other computational approaches to Ebola may include large-scale docking studies of Ebola proteins with human proteins and with small-molecule libraries, computational modeling of the spread of the virus, computational mining of the Ebola literature and creation of a curated Ebola database. Taken together, such computational efforts could significantly accelerate traditional scientific approaches. In recognition of the need for important and immediate solutions from the field of computational biology against Ebola, the International Society for Computational Biology (ISCB) announces a prize for an important computational advance in fighting the Ebola virus. ISCB will confer the ISCB Fight against Ebola Award, along with a prize of US$2000, at its July 2016 annual meeting (ISCB Intelligent Systems for Molecular Biology 2016, Orlando, FL).
Future fundamental combustion research for aeropropulsion systems
NASA Technical Reports Server (NTRS)
Mularz, E. J.
1985-01-01
Physical fluid mechanics, heat transfer, and chemical kinetic processes that occur in the combustion chamber of aeropropulsion systems were investigated. With the component requirements becoming more severe for future engines, the current design methodology needs new tools to obtain the optimum configuration in a reasonable design and development cycle. Research efforts in the last few years have been encouraging, but to achieve these benefits, further research into the fundamental aerothermodynamic processes of combustion is required. It is recommended that research continue in the areas of flame stabilization, combustor aerodynamics, heat transfer, multiphase flow and atomization, turbulent reacting flows, and chemical kinetics. Associated with each of these engineering sciences is the need for research into computational methods to accurately describe and predict these complex physical processes. Research needs in each of these areas are highlighted.
Can Clouds replace Grids? Will Clouds replace Grids?
NASA Astrophysics Data System (ADS)
Shiers, J. D.
2010-04-01
The world's largest scientific machine - comprising dual 27 km circular proton accelerators cooled to 1.9 K and located some 100 m underground - currently relies on major production Grid infrastructures for the offline computing needs of the 4 main experiments that will take data at this facility. After many years of sometimes difficult preparation the computing service has been declared "open" and ready to meet the challenges that will come shortly when the machine restarts in 2009. But the service is not without its problems: reliability - as seen by the experiments, as opposed to that measured by the official tools - still needs to be significantly improved. Prolonged downtimes or degradations of major services or even complete sites are still too common and the operational and coordination effort to keep the overall service running is probably not sustainable at this level. Recently "Cloud Computing" - in terms of pay-per-use fabric provisioning - has emerged as a potentially viable alternative but with rather different strengths and no doubt weaknesses too. Based on the concrete needs of the LHC experiments - where the total data volume that will be acquired over the full lifetime of the project, including the additional data copies that are required by the Computing Models of the experiments, approaches 1 Exabyte - we analyze the pros and cons of Grids versus Clouds. This analysis covers not only technical issues - such as those related to demanding database and data management needs - but also sociological aspects, which cannot be ignored, neither in terms of funding nor in the wider context of the essential but often overlooked role of science in society, education and economy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerber, Richard; Hack, James; Riley, Katherine
The mission of the U.S. Department of Energy Office of Science (DOE SC) is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security missions of the United States. To achieve these goals in today’s world requires investments in not only the traditional scientific endeavors of theory and experiment, but also in computational science and the facilities that support large-scale simulation and data analysis. The Advanced Scientific Computing Research (ASCR) program addresses these challenges in the Office of Science. ASCR’s mission is to discover, develop, and deploy computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to DOE. ASCR supports research in computational science, three high-performance computing (HPC) facilities — the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory and Leadership Computing Facilities at Argonne (ALCF) and Oak Ridge (OLCF) National Laboratories — and the Energy Sciences Network (ESnet) at Berkeley Lab. ASCR is guided by science needs as it develops research programs, computers, and networks at the leading edge of technologies. As we approach the era of exascale computing, technology changes are creating challenges for science programs in SC for those who need to use high performance computing and data systems effectively. Numerous significant modifications to today’s tools and techniques will be needed to realize the full potential of emerging computing systems and other novel computing architectures. To assess these needs and challenges, ASCR held a series of Exascale Requirements Reviews in 2015–2017, one with each of the six SC program offices, and a subsequent Crosscut Review that sought to integrate the findings from each. Participants at the reviews were drawn from the communities of leading domain scientists, experts in computer science and applied mathematics, ASCR facility staff, and DOE program managers in ASCR and the respective program offices. The purpose of these reviews was to identify mission-critical scientific problems within the DOE Office of Science (including experimental facilities) and determine the requirements for the exascale ecosystem that would be needed to address those challenges. The exascale ecosystem includes exascale computing systems, high-end data capabilities, efficient software at scale, libraries, tools, and other capabilities. This effort will contribute to the development of a strategic roadmap for ASCR compute and data facility investments and will help the ASCR Facility Division establish partnerships with Office of Science stakeholders. It will also inform the Office of Science research needs and agenda. The results of the six reviews have been published in reports available on the web at http://exascaleage.org/. This report presents a summary of the individual reports and of common and crosscutting findings, and it identifies opportunities for productive collaborations among the DOE SC program offices.
ERIC Educational Resources Information Center
Ashcraft, Catherine
2015-01-01
To date, girls and women are significantly underrepresented in computer science and technology. Concerns about this underrepresentation have sparked a wealth of educational efforts to promote girls' participation in computing, but these programs have demonstrated limited impact on reversing current trends. This paper argues that this is, in part,…
NASA Trapezoidal Wing Computations Including Transition and Advanced Turbulence Modeling
NASA Technical Reports Server (NTRS)
Rumsey, C. L.; Lee-Rausch, E. M.
2012-01-01
Flow about the NASA Trapezoidal Wing is computed with several turbulence models by using grids from the first High Lift Prediction Workshop in an effort to advance understanding of computational fluid dynamics modeling for this type of flowfield. Transition is accounted for in many of the computations. In particular, a recently-developed 4-equation transition model is utilized and works well overall. Accounting for transition tends to increase lift and decrease moment, which improves the agreement with experiment. Upper surface flap separation is reduced, and agreement with experimental surface pressures and velocity profiles is improved. The predicted shape of wakes from upstream elements is strongly influenced by grid resolution in regions above the main and flap elements. Turbulence model enhancements to account for rotation and curvature have the general effect of increasing lift and improving the resolution of the wing tip vortex as it convects downstream. However, none of the models improve the prediction of surface pressures near the wing tip, where more grid resolution is needed.
Brumberg, Jonathan S; Lorenz, Sean D; Galbraith, Byron V; Guenther, Frank H
2012-01-01
In this paper we present a framework for reducing the development time needed for creating applications for use in non-invasive brain-computer interfaces (BCI). Our framework is primarily focused on facilitating rapid software "app" development akin to current efforts in consumer portable computing (e.g. smart phones and tablets). This is accomplished by handling intermodule communication without direct user or developer implementation, instead relying on a core subsystem for communication of standard, internal data formats. We also provide a library of hardware interfaces for common mobile EEG platforms for immediate use in BCI applications. A use-case example is described in which a user with amyotrophic lateral sclerosis participated in an electroencephalography-based BCI protocol developed using the proposed framework. We show that our software environment is capable of running in real-time with updates occurring 50-60 times per second with limited computational overhead (5 ms system lag) while providing accurate data acquisition and signal analysis.
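The core-subsystem idea, in which modules exchange one standard internal message format through a central broker rather than calling each other directly, can be illustrated with a toy publish/subscribe bus. The sketch below shows that design pattern only; it is not the authors' framework or its APIs, and the topic names and message layout are invented.

```python
# Toy message bus: acquisition, signal-processing, and application modules
# communicate only through named topics and a single message dictionary format.
import time
from collections import defaultdict

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        message = {"topic": topic, "time": time.time(), "data": payload}
        for callback in self._subscribers[topic]:
            callback(message)

bus = MessageBus()

# "Signal processing" module: turns raw samples into a control feature.
bus.subscribe("eeg/raw", lambda m: bus.publish("eeg/feature",
                                               sum(m["data"]) / len(m["data"])))
# "App" module: reacts to features without knowing who produced them.
bus.subscribe("eeg/feature", lambda m: print(f"feature = {m['data']:+.3f}"))

# "Acquisition" module: pushes a few fake sample blocks at roughly 50 Hz.
for _ in range(5):
    bus.publish("eeg/raw", [0.1, -0.2, 0.05, 0.0])
    time.sleep(0.02)
```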
NASA Technical Reports Server (NTRS)
Hardman, R. R.; Mahan, J. R.; Smith, M. H.; Gelhausen, P. A.; Van Dalsem, W. R.
1991-01-01
The need for a validation technique for computational fluid dynamics (CFD) codes in STOVL applications has led to research efforts to apply infrared thermal imaging techniques to visualize gaseous flow fields. Specifically, a heated, free-jet test facility was constructed. The gaseous flow field of the jet exhaust was characterized using an infrared imaging technique in the 2 to 5.6 micron wavelength band as well as conventional pitot tube and thermocouple methods. These infrared images are compared to computer-generated images using the equations of radiative exchange based on the temperature distribution in the jet exhaust measured with the thermocouple traverses. Temperature and velocity measurement techniques, infrared imaging, and the computer model of the infrared imaging technique are presented and discussed. From the study, it is concluded that infrared imaging techniques coupled with the radiative exchange equations applied to CFD models are a valid method to qualitatively verify CFD codes used in STOVL applications.
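A basic building block of such synthetic infrared images is the blackbody radiance integrated over the camera's 2 to 5.6 micron band at each local temperature. The sketch below computes just that quantity from Planck's law; a full synthetic image would also require the gas emissivity and the line-of-sight radiative-exchange integral, and the temperatures used here are arbitrary examples rather than the measured jet data.

```python
# In-band blackbody radiance over the 2-5.6 micron window via Planck's law.
import numpy as np

H, C, K_B = 6.62607e-34, 2.99792e8, 1.38065e-23   # Planck, light speed, Boltzmann (SI)

def band_radiance(T, lam_lo=2.0e-6, lam_hi=5.6e-6, n=2000):
    """In-band blackbody radiance [W m^-2 sr^-1] by trapezoidal integration."""
    lam = np.linspace(lam_lo, lam_hi, n)
    spectral = (2.0 * H * C ** 2 / lam ** 5) / np.expm1(H * C / (lam * K_B * T))
    dlam = lam[1] - lam[0]
    return float(np.sum(0.5 * (spectral[:-1] + spectral[1:])) * dlam)

for T in (300.0, 500.0, 800.0):                    # ambient air vs. heated exhaust
    print(f"T = {T:5.0f} K -> L(2-5.6 um) = {band_radiance(T):8.2f} W/(m^2 sr)")
```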
Foley, Finbar; Rajagopalan, Srinivasan; Raghunath, Sushravya M; Boland, Jennifer M; Karwoski, Ronald A; Maldonado, Fabien; Bartholmai, Brian J; Peikert, Tobias
2016-01-01
Increased clinical use of chest high-resolution computed tomography results in increased identification of lung adenocarcinomas and persistent subsolid opacities. However, these lesions range from very indolent to extremely aggressive tumors. Clinically relevant diagnostic tools to noninvasively risk stratify and guide individualized management of these lesions are lacking. Research efforts investigating semiquantitative measures to decrease interrater and intrarater variability are emerging, and in some cases steps have been taken to automate this process. However, many such methods are still suboptimal, require validation, and are not yet clinically applicable. The computer-aided nodule assessment and risk yield software application represents a validated, automated, quantitative, and noninvasive tool for risk stratification of adenocarcinoma lung nodules. Computer-aided nodule assessment and risk yield correlates well with consensus histology and postsurgical patient outcomes, and therefore may help to guide individualized patient management, for example, in identification of nodules amenable to radiological surveillance, or in need of adjunctive therapy.
Information technology developments within the national biological information infrastructure
Cotter, G.; Frame, M.T.
2000-01-01
Looking out an office window or exploring a community park, one can easily see the tremendous challenges that biological information presents to the computer science community. Biological information varies in format and content depending on whether it pertains to a particular species (e.g., the Brown Tree Snake) or to a specific ecosystem, which often includes multiple species, land use characteristics, and geospatially referenced information. The complexity and uniqueness of each individual species or ecosystem do not easily lend themselves to today's computer science tools and applications. To address the challenges that the biological enterprise presents, the National Biological Information Infrastructure (NBII) (http://www.nbii.gov) was established in 1993. The NBII is designed to address these issues on a national scale within the United States, and through international partnerships abroad. This paper discusses current computer science efforts within the National Biological Information Infrastructure Program and future computer science research endeavors that are needed to address the ever-growing issues related to our Nation's biological concerns.
The Guide to Better Hospital Computer Decisions
Dorenfest, Sheldon I.
1981-01-01
A soon-to-be-published major study of hospital computer use entitled “The Guide to Better Hospital Computer Decisions” was conducted by my firm over the past 2½ years. The study required over twenty (20) man years of effort at a cost of over $300,000, and the six (6) volume final report provides more than 1,000 pages of data about how hospitals are and will be using computerized medical and business information systems. It describes the current status and future expectations for computer use in major application areas, such as, but not limited to, finance, admitting, pharmacy, laboratory, data collection and hospital or medical information systems. It also includes profiles of over 100 companies and other types of organizations providing data processing products and services to hospitals. In this paper, we discuss the need for the study, the specific objectives of the study, the methodology and approach taken to complete the study and a few major conclusions.
NASA Technical Reports Server (NTRS)
Ferzali, Wassim; Zacharakis, Vassilis; Upadhyay, Triveni; Weed, Dennis; Burke, Gregory
1995-01-01
The ICAO Aeronautical Mobile Communications Panel (AMCP) completed the drafting of the Aeronautical Mobile Satellite Service (AMSS) Standards and Recommended Practices (SARP's) and the associated Guidance Material and submitted these documents to the ICAO Air Navigation Commission (ANC) for ratification in May 1994. This effort encompassed an extensive, multi-national SARP's validation. As part of this activity, the US Federal Aviation Administration (FAA) sponsored an effort to validate the SARP's via computer simulation. This paper provides a description of this effort. Specifically, it describes: (1) the approach selected for the creation of a high-fidelity AMSS computer model; (2) the test traffic generation scenarios; and (3) the resultant AMSS performance assessment. More recently, the AMSS computer model was also used to provide AMSS performance statistics in support of the RTCA standardization activities. This paper describes this effort as well.
Large-scale detection of repetitions
Smyth, W. F.
2014-01-01
Combinatorics on words began more than a century ago with a demonstration that an infinitely long string with no repetitions could be constructed on an alphabet of only three letters. Computing all the repetitions (such as ⋯TTT⋯ or ⋯CGACGA⋯) in a given string x of length n is one of the oldest and most important problems of computational stringology, requiring Θ(n log n) time in the worst case. About a dozen years ago, it was discovered that repetitions can be computed as a by-product of the Θ(n)-time computation of all the maximal periodicities or runs in x. However, even though the computation is linear, it is also brute force: global data structures, such as the suffix array, the longest common prefix array and the Lempel–Ziv factorization, need to be computed in a preprocessing phase. Furthermore, all of this effort is required despite the fact that the expected number of runs in a string is generally a small fraction of the string length. In this paper, I explore the possibility that repetitions (perhaps also other regularities in strings) can be computed in a manner commensurate with the size of the output. PMID:24751872
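To make the notion of a run concrete, the brute-force sketch below enumerates maximal periodicities directly from the definition. It is quadratic and is emphatically not the linear-time suffix-array and Lempel-Ziv approach described above; it simply reports each maximal periodic span together with its smallest period.

```python
# Brute-force enumeration of maximal periodicities (runs) in a string.
def maximal_periodicities(x):
    """Return sorted (start, end, period) triples; end is exclusive."""
    n = len(x)
    best = {}                                 # (start, end) -> smallest period
    for p in range(1, n // 2 + 1):            # candidate periods
        i = 0
        while i + 2 * p <= n:
            j = i
            while j + p < n and x[j] == x[j + p]:
                j += 1
            length = j + p - i                # x[i : i + length] has period p
            if length >= 2 * p:
                best.setdefault((i, i + length), p)   # p ascends, so first is smallest
                i = j + 1                     # resume scanning after the mismatch
            else:
                i += 1
    return sorted((s, e, p) for (s, e), p in best.items())

s = "ATTTCGACGACGAT"
for start, end, period in maximal_periodicities(s):
    print(f"{s[start:end]!r} at [{start}:{end}], period {period}")
# Finds 'TTT' (period 1) and 'CGACGACGA' (period 3), the kinds of repetitions
# mentioned in the abstract.
```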
Development and application of dynamic simulations of a subsonic wind tunnel
NASA Technical Reports Server (NTRS)
Szuch, J. R.; Cole, G. L.; Seidel, R. C.; Arpasi, D. J.
1986-01-01
Efforts are currently underway at NASA Lewis to improve and expand ground test facilities and to develop supporting technologies to meet anticipated aeropropulsion research needs. Many of these efforts have been focused on a proposed rehabilitation of the Altitude Wind Tunnel (AWT). In order to ensure a technically sound design, an AWT modeling program (both analytical and physical) was initiated to provide input to the AWT final design process. This paper describes the approach taken to develop analytical, dynamic computer simulations of the AWT, and the use of these simulations as test-beds for: (1) predicting the dynamic response characteristics of the AWT, and (2) evaluating proposed AWT control concepts. Plans for developing a portable, real-time simulator for the AWT facility are also described.
A machine-learning apprentice for the completion of repetitive forms
NASA Technical Reports Server (NTRS)
Hermens, Leonard A.; Schlimmer, Jeffrey C.
1994-01-01
Forms of all types are used in businesses and government agencies, and most of them are filled in by hand. Yet much time and effort has been expended to automate form-filling by programming specific systems or computers. The high cost of programmers and other resources prohibits many organizations from benefiting from efficient office automation. A learning apprentice can be used for such repetitious form-filling tasks. In this paper, we establish the need for learning apprentices, describe a framework for such a system, explain the difficulties of form-filling, and present empirical results of a form-filling system used in our department from September 1991 to April 1992. The form-filling apprentice saves up to 87 percent in keystroke effort and correctly predicts nearly 90 percent of the values on the form.
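The flavor of such an apprentice can be conveyed with a toy frequency model that predicts each remaining field from previously completed forms, conditioned on what the user has already typed. This is only an illustration of the idea, not the learning method of the published system; the form fields and records are invented.

```python
# Toy form-filling predictor: most frequent value per field, preferring forms
# that agree with everything the user has entered so far.
from collections import Counter

class FormApprentice:
    def __init__(self):
        self.history = []                         # previously completed forms (dicts)

    def learn(self, completed_form):
        self.history.append(dict(completed_form))

    def predict(self, field, partial_form):
        matching = [f[field] for f in self.history
                    if field in f and all(f.get(k) == v for k, v in partial_form.items())]
        pool = matching or [f[field] for f in self.history if field in f]
        return Counter(pool).most_common(1)[0][0] if pool else None

app = FormApprentice()
app.learn({"department": "EECS", "building": "Dana Hall", "account": "12-3456"})
app.learn({"department": "EECS", "building": "Dana Hall", "account": "12-3456"})
app.learn({"department": "Physics", "building": "Webster", "account": "98-7654"})

print(app.predict("building", {"department": "EECS"}))    # -> 'Dana Hall'
print(app.predict("account", {"department": "Physics"}))  # -> '98-7654'
```

Accepted predictions cost the user a single confirmation keystroke instead of retyping the value, which is the source of the keystroke savings reported above.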
EASI: An electronic assistant for scientific investigation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schur, A.; Feller, D.; DeVaney, M.
1991-09-01
Although many automated tools support the productivity of professionals (engineers, managers, architects, secretaries, etc.), none specifically address the needs of the scientific researcher. The scientist's needs are complex and the primary activities are cognitive rather than physical. The individual scientist collects and manipulates large data sets, integrates, synthesizes, generates, and records information. The means to access and manipulate information are a critical determinant of the performance of the system as a whole. One hindrance in this process is the scientist's computer environment, which has changed little in the last two decades. Extensive time and effort is demanded from the scientist to learn to use the computer system. This paper describes how chemists' activities and interactions with information were abstracted into a common paradigm that meets the critical requirement of facilitating information access and retrieval. This paradigm was embodied in EASI, a working prototype that increased the productivity of the individual scientific researcher. 4 refs., 2 figs., 1 tab.
Twenty Five Years in Cheminformatics - A Career Path ...
Antony Williams is a Computational Chemist at the US Environmental Protection Agency in the National Center for Computational Toxicology. He has been involved in cheminformatics and the dissemination of chemical information for over twenty-five years. He has worked for a Fortune 500 company (Eastman Kodak), in two successful start-ups (ACD/Labs and ChemSpider), for the Royal Society of Chemistry (in publishing) and, now, at the EPA. Throughout his career path he has experienced multiple diverse work cultures and focused his efforts on understanding the needs of his employers and the often unrecognized needs of a larger community. Antony will provide a short overview of his career path and discuss the various decisions that helped motivate his change in career from professional spectroscopist to website host and innovator, to working for one of the world's foremost scientific societies and now for one of the most impactful government organizations in the world. Invited Presentation at ACS Spring meeting at CINF: Careers in Chemical Information session
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.; Liu, Youhua
2000-01-01
At the preliminary design stage of a wing structure, an efficient simulation, one needing little computation but yielding adequately accurate results for various response quantities, is essential in the search for an optimal design in a vast design space. In the present paper, methods using sensitivities up to second order, as well as the direct application of neural networks, are explored. The example problem is determining the natural frequencies of a wing given the shape variables of the structure. It is shown that when sensitivities cannot be obtained analytically, the finite difference approach is usually more reliable than a semi-analytical approach provided an appropriate step size is used. The use of second-order sensitivities is shown to yield much better results than the case where only the first-order sensitivities are used. When neural networks are trained to relate the wing natural frequencies to the shape variables, negligible computational effort is needed to accurately determine the natural frequencies of a new design.
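The two approximation ideas discussed above can be sketched on an analytic stand-in for the frequency response: central finite differences supply first- and second-order sensitivities, which feed first- and second-order Taylor predictions at a perturbed design. The function f below is fabricated for illustration; in the paper the response would come from the wing finite element model.

```python
# Finite-difference sensitivities and Taylor predictions for a fabricated
# smooth response f(v) standing in for "natural frequency vs. shape variables".
import numpy as np

def f(v):
    return 10.0 + 2.0 * v[0] ** 2 - 1.5 * v[0] * v[1] + 0.5 * np.sin(3 * v[1])

def fd_sensitivities(f, v0, h=1e-3):
    """Central-difference gradient and Hessian of f at design v0."""
    n = len(v0)
    grad = np.zeros(n)
    hess = np.zeros((n, n))
    f0 = f(v0)
    for i in range(n):
        ei = np.zeros(n); ei[i] = h
        grad[i] = (f(v0 + ei) - f(v0 - ei)) / (2 * h)
        hess[i, i] = (f(v0 + ei) - 2 * f0 + f(v0 - ei)) / h ** 2
        for j in range(i + 1, n):
            ej = np.zeros(n); ej[j] = h
            hess[i, j] = hess[j, i] = (f(v0 + ei + ej) - f(v0 + ei - ej)
                                       - f(v0 - ei + ej) + f(v0 - ei - ej)) / (4 * h ** 2)
    return f0, grad, hess

v0 = np.array([1.0, 0.5])
f0, g, H = fd_sensitivities(f, v0)
dv = np.array([0.2, -0.1])                   # a new design near v0
taylor1 = f0 + g @ dv
taylor2 = taylor1 + 0.5 * dv @ H @ dv
print(f"exact {f(v0 + dv):.4f}  1st-order {taylor1:.4f}  2nd-order {taylor2:.4f}")
```

The second-order prediction tracks the exact value more closely than the first-order one, which is the behavior the abstract reports for the wing frequencies.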
High performance TWT development for the microwave power module
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whaley, D.R.; Armstrong, C.M.; Groshart, G.
1996-12-31
Northrop Grumman's ongoing development of microwave power modules (MPM) provides microwave power at various power levels, frequencies, and bandwidths for a variety of applications. Present-day requirements for the vacuum power booster traveling wave tubes of the microwave power module are becoming increasingly demanding, necessitating further enhancement of tube performance. The MPM development program at Northrop Grumman is designed specifically to meet this need by construction and test of a series of new tubes aimed at verifying computation and reaching high efficiency design goals. Tubes under test incorporate several different helix designs, as well as varying electron gun and magnetic confinement configurations. Current efforts also include further development of state-of-the-art TWT modeling and computational methods at Northrop Grumman incorporating new, more accurate models into existing design tools and developing new tools to be used in all aspects of traveling wave tube design. Current status of the Northrop Grumman MPM TWT development program will be presented.
Zanin, Massimiliano; Chorbev, Ivan; Stres, Blaz; Stalidzans, Egils; Vera, Julio; Tieri, Paolo; Castiglione, Filippo; Groen, Derek; Zheng, Huiru; Baumbach, Jan; Schmid, Johannes A; Basilio, José; Klimek, Peter; Debeljak, Nataša; Rozman, Damjana; Schmidt, Harald H H W
2017-12-05
Systems medicine holds many promises, but has so far provided only a limited number of proofs of principle. To address this roadblock, possible barriers and challenges of translating systems medicine into clinical practice need to be identified and addressed. The members of the European Cooperation in Science and Technology (COST) Action CA15120 Open Multiscale Systems Medicine (OpenMultiMed) wish to engage the scientific community of systems medicine and multiscale modelling, data science, and computing to provide their feedback in a structured manner. This will result in follow-up white papers and open access resources to accelerate the clinical translation of systems medicine. © The Author 2017. Published by Oxford University Press.
Soriano, Elena; Marco-Contelles, José; Colmena, Inés; Gandía, Luis
2010-05-01
One of the most critical issues in the study of ligand-receptor interactions in drug design is knowledge of the bioactive conformation of the ligand. In this study, we describe a computational approach aimed at estimating the ability of epibatidine analogs to bind the neuronal nicotinic acetylcholine receptor (nAChR) and at gaining insight into the bioactive conformation. The protocol consists of a docking analysis and evaluation of pharmacophore parameters of the docked structures. On the basis of the biological data, the results reveal that the docking analysis is able to predict active ligands, whereas further efforts are needed to develop a suitable and solid pharmacophore model.
Atomic detail visualization of photosynthetic membranes with GPU-accelerated ray tracing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stone, John E.; Sener, Melih; Vandivort, Kirby L.
The cellular process responsible for providing energy for most life on Earth, namely, photosynthetic light-harvesting, requires the cooperation of hundreds of proteins across an organelle, involving length and time scales spanning several orders of magnitude over quantum and classical regimes. Simulation and visualization of this fundamental energy conversion process pose many unique methodological and computational challenges. In this paper, we present, in two accompanying movies, light-harvesting in the photosynthetic apparatus found in purple bacteria, the so-called chromatophore. The movies are the culmination of three decades of modeling efforts, featuring the collaboration of theoretical, experimental, and computational scientists. Finally, we describe the techniques that were used to build, simulate, analyze, and visualize the structures shown in the movies, and we highlight cases where scientific needs spurred the development of new parallel algorithms that efficiently harness GPU accelerators and petascale computers.
Multiple Embedded Processors for Fault-Tolerant Computing
NASA Technical Reports Server (NTRS)
Bolotin, Gary; Watson, Robert; Katanyoutanant, Sunant; Burke, Gary; Wang, Mandy
2005-01-01
A fault-tolerant computer architecture has been conceived in an effort to reduce vulnerability to single-event upsets (spurious bit flips caused by impingement of energetic ionizing particles or photons). As in some prior fault-tolerant architectures, the redundancy needed for fault tolerance is obtained by use of multiple processors in one computer. Unlike prior architectures, the multiple processors are embedded in a single field-programmable gate array (FPGA). What makes this new approach practical is the recent commercial availability of FPGAs that are capable of having multiple embedded processors. A working prototype (see figure) consists of two embedded IBM PowerPC 405 processor cores and a comparator built on a Xilinx Virtex-II Pro FPGA. This relatively simple instantiation of the architecture implements an error-detection scheme. A planned future version, incorporating four processors and two comparators, would correct some errors in addition to detecting them.
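As a rough illustration of the detection-versus-correction distinction described above, the sketch below (plain Python pseudocode, not the FPGA design) contrasts a two-result comparator, which can only flag a disagreement, with a three-result majority voter, which can mask a single faulty output once more than two redundant results exist. The planned four-processor, two-comparator configuration differs in detail; this is only a generic illustration of the logic.

```python
def compare_outputs(out_a, out_b):
    """Dual-processor comparator: detects a single-event upset when the two
    redundant cores disagree, but cannot tell which core is wrong."""
    return out_a == out_b  # True -> result trusted, False -> error detected

def vote(out_a, out_b, out_c):
    """Majority voter (triple modular redundancy): masks a single faulty
    result by returning the value produced by at least two cores."""
    if out_a == out_b or out_a == out_c:
        return out_a
    if out_b == out_c:
        return out_b
    raise RuntimeError("no majority - multiple upsets or a voter fault")
```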
Investigations into the shape-preserving interpolants using symbolic computation
NASA Technical Reports Server (NTRS)
Lam, Maria
1988-01-01
Shape representation is a central issue in computer graphics and computer-aided geometric design. Many physical phenomena involve curves and surfaces that are monotone (in some directions) or convex. The corresponding representation problem is: given monotone or convex data, find a monotone or convex interpolant. Standard interpolants need not be monotone or convex even though they may match monotone or convex data. Most methods of investigating this problem use quadratic splines or Hermite polynomials. In this investigation, a similar approach is adopted. These methods require derivative information at the given data points, and the key to the problem is the selection of the derivative values to be assigned to those points. Schemes for choosing derivatives were examined. Along the way, fitting the given data points by a conic section was also investigated as part of the effort to study shape-preserving quadratic splines.
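The central difficulty noted above is choosing derivative values at the data points so that the interpolant inherits the shape of the data. One classical recipe, shown below as a hedged illustration rather than the scheme investigated in the report, assigns each interior point the harmonic mean of the adjacent secant slopes and a zero slope wherever the data change direction (a Fritsch-Butland-style choice for Hermite-type interpolants).

```python
import numpy as np

def monotone_slopes(x, y):
    """Assign derivatives at the data points so that a Hermite-type
    interpolant preserves monotonicity: harmonic mean of neighbouring
    secant slopes, zero where the data have a local extremum."""
    d = np.diff(y) / np.diff(x)          # secant slope of each interval
    m = np.zeros(len(y))
    m[0], m[-1] = d[0], d[-1]            # one-sided choice at the endpoints
    for i in range(1, len(y) - 1):
        if d[i - 1] * d[i] > 0:          # data locally monotone
            m[i] = 2.0 * d[i - 1] * d[i] / (d[i - 1] + d[i])
        else:                            # sign change: flatten the slope
            m[i] = 0.0
    return m
```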
Atomic detail visualization of photosynthetic membranes with GPU-accelerated ray tracing
Stone, John E.; Sener, Melih; Vandivort, Kirby L.; ...
2015-12-12
The cellular process responsible for providing energy for most life on Earth, namely, photosynthetic light-harvesting, requires the cooperation of hundreds of proteins across an organelle, involving length and time scales spanning several orders of magnitude over quantum and classical regimes. Simulation and visualization of this fundamental energy conversion process pose many unique methodological and computational challenges. In this paper, we present, in two accompanying movies, light-harvesting in the photosynthetic apparatus found in purple bacteria, the so-called chromatophore. The movies are the culmination of three decades of modeling efforts, featuring the collaboration of theoretical, experimental, and computational scientists. Finally, we describe the techniques that were used to build, simulate, analyze, and visualize the structures shown in the movies, and we highlight cases where scientific needs spurred the development of new parallel algorithms that efficiently harness GPU accelerators and petascale computers.
Computing Properties of Hadrons, Nuclei and Nuclear Matter from Quantum Chromodynamics (LQCD)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Negele, John W.
Building on the success of two preceding generations of Scientific Discovery through Advanced Computing (SciDAC) projects, this grant supported the MIT component (P.I. John Negele) of a multi-institutional SciDAC-3 project that also included Brookhaven National Laboratory, the lead laboratory with P. I. Frithjof Karsch serving as Project Director, Thomas Jefferson National Accelerator Facility with P. I. David Richards serving as Co-director, University of Washington with P. I. Martin Savage, University of North Carolina with P. I. Rob Fowler, and College of William and Mary with P. I. Andreas Stathopoulos. Nationally, this multi-institutional project coordinated the software development effort that the nuclear physics lattice QCD community needs to ensure that lattice calculations can make optimal use of forthcoming leadership-class and dedicated hardware, including that at the national laboratories, and to exploit future computational resources in the Exascale era.
Geoinformatics 2008 - Data to Knowledge
Brady, Shailaja R.; Sinha, A. Krishna; Gundersen, Linda C.
2008-01-01
Geoinformatics is the term used to describe a variety of efforts to promote collaboration between the computer sciences and the geosciences to solve complex scientific questions. It refers to the distributed, integrated digital information system and working environment that provides innovative means for the study of the Earth systems, as well as other planets, through use of advanced information technologies. Geoinformatics activities range from major research and development efforts creating new technologies to provide high-quality, sustained production-level services for data discovery, integration and analysis, to small, discipline-specific efforts that develop earth science data collections and data analysis tools serving the needs of individual communities. The ultimate vision of Geoinformatics is a highly interconnected data system populated with high-quality, freely available data, as well as a robust set of software for analysis, visualization, and modeling. This volume is a collection of extended abstracts for oral papers presented at the Geoinformatics 2008 conference, June 11 and 13, 2008, in Potsdam, Germany.
Survey of Turbulence Models for the Computation of Turbulent Jet Flow and Noise
NASA Technical Reports Server (NTRS)
Nallasamy, N.
1999-01-01
The report presents an overview of jet noise computation utilizing computational fluid dynamics solutions of the turbulent jet flow field. The jet flow solution obtained with an appropriate turbulence model provides the turbulence characteristics needed for the computation of jet mixing noise. A brief account of turbulence models that are relevant for jet noise computation is presented. The jet flow solutions that have been directly used to calculate jet noise are first reviewed. Then, the turbulent jet flow studies that compute the turbulence characteristics that may be used for noise calculations are summarized. In particular, flow solutions obtained with the k-epsilon model, algebraic Reynolds stress models, and Reynolds stress transport equation models are reviewed. Since small-scale jet mixing noise predictions can be improved by utilizing anisotropic turbulence characteristics, turbulence models that can provide the Reynolds stress components must now be considered for jet flow computations. In this regard, algebraic stress models and Reynolds stress transport models are good candidates. Reynolds stress transport models involve more modeling and computational effort and time than algebraic stress models. Hence, it is recommended that an algebraic Reynolds stress model (ASM) be implemented in flow solvers to compute the Reynolds stress components.
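To make the distinction concrete: a two-equation k-epsilon model returns only k and epsilon, and the Reynolds shear stress must then be reconstructed through the isotropic Boussinesq eddy-viscosity relation, whereas algebraic or transport Reynolds-stress models provide the individual stress components directly. The snippet below is a generic illustration of that baseline relation with the standard model constant; it is not taken from the report.

```python
def reynolds_stress_boussinesq(k, eps, dudy, c_mu=0.09, rho=1.0):
    """Isotropic eddy-viscosity (k-epsilon) estimate of the turbulent shear
    stress -rho*<u'v'> in a thin shear layer with mean gradient dU/dy.
    Algebraic or transport Reynolds-stress models replace this single
    scalar relation with expressions for each stress component."""
    nu_t = c_mu * k ** 2 / eps      # eddy viscosity from k and epsilon
    return rho * nu_t * dudy        # -rho <u'v'> = rho * nu_t * dU/dy
```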
Progress on the FabrIc for Frontier Experiments project at Fermilab
Box, Dennis; Boyd, Joseph; Dykstra, Dave; ...
2015-12-23
The FabrIc for Frontier Experiments (FIFE) project is an ambitious, major-impact initiative within the Fermilab Scientific Computing Division designed to lead the computing model for Fermilab experiments. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying needs and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of Open Science Grid solutions for high throughput computing, data management, database access, and collaboration within an experiment. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating into experiment computing operations several services, including new job submission services, software and reference data distribution through CVMFS repositories, a flexible data transfer client, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion with additional projects will be discussed. FIFE has taken a leading role in the definition of the computing model for Fermilab experiments, aided in the design of computing for experiments beyond Fermilab, and will continue to define the future direction of high throughput computing for future physics experiments worldwide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Box, D.; Boyd, J.; Di Benedetto, V.
2016-01-01
The FabrIc for Frontier Experiments (FIFE) project is an initiative within the Fermilab Scientific Computing Division designed to steer the computing model for non-LHC Fermilab experiments across multiple physics areas. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying size, needs, and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of solutions for high throughput computing, data management, database access and collaboration management within an experiment. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid compute sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating into experiment computing operations several services including a common job submission service, software and reference data distribution through CVMFS repositories, flexible and robust data transfer clients, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion with additional projects will be discussed. FIFE has taken the leading role in defining the computing model for Fermilab experiments, aided in the design of experiments beyond those hosted at Fermilab, and will continue to define the future direction of high throughput computing for future physics experiments worldwide.
NASA Astrophysics Data System (ADS)
Bielski, Conrad; Lemoine, Guido; Syryczynski, Jacek
2009-09-01
High Performance Computing (HPC) hardware solutions such as grid computing and General Processing on a Graphics Processing Unit (GPGPU) are now accessible to users with general computing needs. Grid computing infrastructures in the form of computing clusters or blades are becoming commonplace, and GPGPU solutions that leverage the processing power of the video card are quickly being integrated into personal workstations. Our interest in these HPC technologies stems from the need to produce near real-time maps from a combination of pre- and post-event satellite imagery in support of post-disaster management. Faster processing provides a twofold gain in this situation: (1) critical information can be provided faster, and (2) more elaborate automated processing can be performed prior to providing the critical information. In our particular case, we test the use of the PANTEX index, which is based on analysis of image textural measures extracted using anisotropic, rotation-invariant GLCM statistics. The use of this index, applied in a moving window, has been shown to successfully identify built-up areas in remotely sensed imagery. Built-up index image masks are important input to the structuring of damage assessment interpretation because they help optimise the workload. The performance of computing the PANTEX workflow is compared on two different HPC hardware architectures: (1) a blade server with 4 blades, each having dual quad-core CPUs, and (2) a CUDA-enabled GPU workstation. The reference platform is a dual-CPU, quad-core workstation, and the total PANTEX workflow computing time is measured. Furthermore, as part of a qualitative evaluation, the differences in setting up and configuring the various hardware solutions and the related software coding effort are presented.
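As a rough sketch of the per-window computation being benchmarked, the code below builds grey-level co-occurrence matrices for several displacements and keeps the minimum contrast over directions, which is the rotation-invariant, anisotropic flavour of texture measure that the PANTEX index is based on. The window size, quantisation, displacements, and the use of contrast alone are illustrative assumptions, not the exact PANTEX definition.

```python
import numpy as np

def cooccurrence(q, dy, dx, levels):
    """Normalised grey-level co-occurrence matrix for displacement (dy, dx)."""
    h, w = q.shape
    y0, y1 = max(0, -dy), min(h, h - dy)
    x0, x1 = max(0, -dx), min(w, w - dx)
    a = q[y0:y1, x0:x1]                       # reference pixels
    b = q[y0 + dy:y1 + dy, x0 + dx:x1 + dx]   # displaced neighbours
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)
    return glcm / glcm.sum()

def contrast(P):
    """GLCM contrast: sum of P[i, j] * (i - j)^2."""
    i, j = np.indices(P.shape)
    return np.sum(P * (i - j) ** 2)

def pantex_like(window, levels=32):
    """Minimum GLCM contrast over several displacements inside one window,
    a rotation-invariant texture score in the spirit of the PANTEX index."""
    q = np.floor(window / (window.max() + 1e-9) * (levels - 1)).astype(int)
    offsets = [(0, 1), (1, 0), (1, 1), (1, -1)]
    return min(contrast(cooccurrence(q, dy, dx, levels)) for dy, dx in offsets)
```

Sliding this score over an image and thresholding it yields the kind of built-up mask described above; on a GPU the co-occurrence counting is what gets parallelised per window.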
Hahm, Hyeouk Chris; Lee, Jieha; Chiao, Christine; Valentine, Anne; Lê Cook, Benjamin
2016-12-01
This study examined associations between sexual orientation of Asian-American women and receipt of mental health care and unmet need for health care. Computer-assisted self-interviews were conducted with 701 unmarried Chinese-, Korean-, and Vietnamese-American women ages 18 to 35. Multivariate regression models examined whether lesbian and bisexual participants differed from exclusively heterosexual participants in use of mental health care and unmet need for health care. After the analyses controlled for mental health status and other covariates, lesbian and bisexual women were more likely than exclusively heterosexual women to have received any past-year mental health services and reported a greater unmet need for health care. Sexual-minority women were no more likely to have received minimally adequate care. Given the high rates of mental health problems among Asian-American sexual-minority women, efforts are needed to identify and overcome barriers to receipt of adequate mental health care and minimize unmet health care needs.
Current Grid operation and future role of the Grid
NASA Astrophysics Data System (ADS)
Smirnova, O.
2012-12-01
Grid-like technologies and approaches have become an integral part of HEP experiments. Some other scientific communities also use similar technologies for data-intensive computations. The distinct feature of Grid computing is the ability to federate heterogeneous resources of different ownership into a seamless infrastructure, accessible via a single log-on. Like other infrastructures of a similar nature, Grid functioning requires not only a technologically sound basis, but also reliable operation procedures, monitoring, and accounting. The two aspects, technological and operational, are closely related: the weaker the technology, the more burden falls on operations, and vice versa. As of today, Grid technologies are still evolving: at CERN alone, every LHC experiment uses its own Grid-like system. This inevitably creates a heavy load on operations. Infrastructure maintenance, monitoring, and incident response are done on several levels, from local system administrators to large international organisations, involving massive human effort worldwide. The necessity to commit substantial resources is one of the obstacles faced by smaller research communities when moving computing to the Grid. Moreover, most current Grid solutions were developed under significant influence of HEP use cases, and thus need additional effort to adapt them to other applications. The reluctance of many non-HEP researchers to use the Grid negatively affects the outlook for national Grid organisations, which strive to provide multi-science services. We started from a situation where Grid organisations were fused with HEP laboratories and national HEP research programmes; we hope to move towards a world where the Grid will ultimately reach the status of a generic public computing and storage service provider, and permanent national and international Grid infrastructures will be established. How far we will be able to advance along this path depends on us. If no standardisation and convergence efforts take place, the Grid will become limited to HEP; if, however, the current multitude of Grid-like systems converges to a generic, modular and extensible solution, the Grid will become true to its name.
Solving optimization problems on computational grids.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, S. J.; Mathematics and Computer Science
2001-05-01
Multiprocessor computing platforms, which have become more and more widely available since the mid-1980s, are now heavily used by organizations that need to solve very demanding computational problems. Parallel computing is now central to the culture of many research communities. Novel parallel approaches were developed for global optimization, network optimization, and direct-search methods for nonlinear optimization. Activity was particularly widespread in parallel branch-and-bound approaches for various problems in combinatorial and network optimization. As the cost of personal computers and low-end workstations has continued to fall, while the speed and capacity of processors and networks have increased dramatically, 'cluster' platforms have become popular in many settings. A somewhat different type of parallel computing platform known as a computational grid (alternatively, metacomputer) has arisen in comparatively recent times. Broadly speaking, this term refers not to a multiprocessor with identical processing nodes but rather to a heterogeneous collection of devices that are widely distributed, possibly around the globe. The advantage of such platforms is obvious: they have the potential to deliver enormous computing power. Just as obviously, however, the complexity of grids makes them very difficult to use. The Condor team, headed by Miron Livny at the University of Wisconsin, was among the pioneers in providing infrastructure for grid computations. More recently, the Globus project has developed technologies to support computations on geographically distributed platforms consisting of high-end computers, storage and visualization devices, and other scientific instruments. In 1997, we started the metaneos project as a collaborative effort between optimization specialists and the Condor and Globus groups. Our aim was to address complex, difficult optimization problems in several areas, designing and implementing the algorithms and the software infrastructure needed to solve these problems on computational grids. This article describes some of the results we have obtained during the first three years of the metaneos project. Our efforts have led to development of the runtime support library MW for implementing algorithms with a master-worker control structure on Condor platforms. This work is discussed here, along with work on algorithms and codes for integer linear programming, the quadratic assignment problem, and stochastic linear programming. Our experiences in the metaneos project have shown that cheap, powerful computational grids can be used to tackle large optimization problems of various types. In an industrial or commercial setting, the results demonstrate that one may not have to buy powerful computational servers to solve many of the large problems arising in areas such as scheduling, portfolio optimization, or logistics; the idle time on employee workstations (or, at worst, an investment in a modest cluster of PCs) may do the job. For the optimization research community, our results motivate further work on parallel, grid-enabled algorithms for solving very large problems of other types. The fact that very large problems can be solved cheaply allows researchers to better understand issues of 'practical' complexity and of the role of heuristics.
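The master-worker control structure mentioned above is easy to caricature on a single machine: a master holds a pool of independent work units (branch-and-bound nodes, scenario subproblems, and so on), farms them out to whatever workers are available, and combines the returned results. The sketch below uses Python's multiprocessing as a stand-in for the MW/Condor machinery; the task contents and functions are placeholders, not part of the MW library.

```python
from multiprocessing import Pool

def evaluate_subproblem(task):
    """Worker: solve one independent piece of the optimization, e.g. one
    branch-and-bound node or one scenario subproblem (placeholder body)."""
    lower_bound, data = task
    return lower_bound + sum(data)

if __name__ == "__main__":
    # Master: a pool of independent work units.
    tasks = [(i, [1.0, 2.0, 3.0]) for i in range(100)]
    # Workers pull tasks; MW/Condor would farm them out across a grid
    # of heterogeneous, possibly idle, machines instead of local cores.
    with Pool(processes=8) as pool:
        results = pool.map(evaluate_subproblem, tasks)
    best = min(results)          # master combines the partial results
    print(best)
```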
Clinical laboratory technician to clinical laboratory scientist articulation and distance learning.
Crowley, J R; Laurich, G A; Mobley, R C; Arnette, A H; Shaikh, A H; Martin, S M
1999-01-01
Laboratory workers and educators alike are challenged to support access to education that is current and provides opportunities for career advancement in the workplace. The clinical laboratory science (CLS) program at the Medical College of Georgia in Augusta developed a clinical laboratory technician (CLT) to CLS articulation option, expanded it through distance learning, and integrated computer-based learning technology into the educational process over a four-year period to address technician needs for access to education. Both positive and negative outcomes were realized through these efforts. Twenty-seven students entered the pilot articulation program, graduated, and took a CLS certification examination. Measured in terms of CLS certification, promotions, pay raises, and career advancement, the program described was a success. However, major problems were encountered related to the use of unfamiliar communication technology; administration of the program at distance sites; communication between educational institutions, students, and employers; and competition with CLT programs for internship sites. These problems must be addressed in future efforts to provide a successful distance learning program. Effective methods for meeting educational needs and career ladder expectations of CLTs and their employers are important to the overall quality and appeal of the profession. Educational technology that includes computer-aided instruction, multimedia, and telecommunications can provide powerful tools for education in general and CLT articulation in particular. Careful preparation and vigilant attention to reliable delivery methods as well as students' progress and outcomes are critical for an efficient, economically feasible, and educationally sound program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eyler, L L; Trent, D S; Budden, M J
During the course of the TEMPEST computer code development, a concurrent effort was conducted to assess the code's performance and the validity of computed results. The results of this work are presented in this document. The principal objective of this effort was to assure the code's computational correctness for a wide range of hydrothermal phenomena typical of fast breeder reactor applications. 47 refs., 94 figs., 6 tabs.
42 CFR 441.182 - Maintenance of effort: Computation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... SERVICES Inpatient Psychiatric Services for Individuals Under Age 21 in Psychiatric Facilities or Programs § 441.182 Maintenance of effort: Computation. (a) For expenditures for inpatient psychiatric services... total State Medicaid expenditures in the current quarter for inpatient psychiatric services and...
London, Nir; Ambroggio, Xavier
2014-02-01
Computational protein design efforts aim to create novel proteins and functions in an automated manner and, in the process, these efforts shed light on the factors shaping natural proteins. The focus of these efforts has progressed from the interior of proteins to their surface and the design of functions, such as binding or catalysis. Here we examine progress in the development of robust methods for the computational design of non-natural interactions between proteins and molecular targets such as other proteins or small molecules. This problem is referred to as the de novo computational design of interactions. Recent successful efforts in de novo enzyme design and the de novo design of protein-protein interactions open a path towards solving this problem. We examine the common themes in these efforts, and review recent studies aimed at understanding the nature of successes and failures in the de novo computational design of interactions. While several approaches culminated in success, the use of a well-defined structural model for a specific binding interaction in particular has emerged as a key strategy for a successful design, and is therefore reviewed with special consideration. Copyright © 2013 Elsevier Inc. All rights reserved.
Absolute versus relative intensity of physical activity in a dose-response context.
Shephard, R J
2001-06-01
To examine the importance of relative versus absolute intensities of physical activity in the context of population health. A standard computer-search of the literature was supplemented by review of extensive personal files. Consensus reports (Category D Evidence) have commonly recommended moderate rather than hard physical activity in the context of population health. Much of the available literature provides Category C Evidence. It has often confounded issues of relative intensity with absolute intensity or total weekly dose of exercise. In terms of cardiovascular health, there is some evidence for a threshold intensity of effort, perhaps as high as 6 METs, in addition to a minimum volume of physical activity. Decreases in blood pressure and prevention of stroke seem best achieved by moderate rather than high relative intensities of physical activity. Many aspects of metabolic health depend on the total volume of activity; moderate relative intensities of effort are more effective in mobilizing body fat, but harder relative intensities may help to increase energy expenditures postexercise. Hard relative intensities seem needed to augment bone density, but this may reflect an associated increase in volume of activity. Hard relative intensities of exercise induce a transient immunosuppression. The optimal intensity of effort, relative or absolute, for protection against various types of cancer remains unresolved. Acute effects of exercise on mood state also require further study; long-term benefits seem associated with a moderate rather than a hard relative intensity of effort. The importance of relative versus absolute intensity of effort depends on the desired health outcome, and many issues remain to be resolved. Progress will depend on more precise epidemiological methods of assessing energy expenditures and studies that equate total energy expenditures between differing relative intensities. There is a need to focus on gains in quality-adjusted life expectancy.
Automated Stitching of Microtubule Centerlines across Serial Electron Tomograms
Weber, Britta; Tranfield, Erin M.; Höög, Johanna L.; Baum, Daniel; Antony, Claude; Hyman, Tony; Verbavatz, Jean-Marc; Prohaska, Steffen
2014-01-01
Tracing microtubule centerlines in serial section electron tomography requires microtubules to be stitched across sections: lines from different sections need to be aligned, endpoints need to be matched at section boundaries to establish a correspondence between neighboring sections, and corresponding lines need to be connected across multiple sections. We present computational methods for these tasks: 1) An initial alignment is computed using a distance compatibility graph. 2) A fine alignment is then computed with a probabilistic variant of the iterative closest points algorithm, which we extended to handle the orientation of lines by introducing a periodic random variable to the probabilistic formulation. 3) Endpoint correspondence is established by formulating a matching problem in terms of a Markov random field and computing the best matching with belief propagation. Belief propagation is not generally guaranteed to converge to a minimum. We show how convergence can be achieved, nonetheless, with minimal manual input. In addition to stitching microtubule centerlines, the correspondence is also applied to transform and merge the electron tomograms. We applied the proposed methods to samples from the mitotic spindle in C. elegans, the meiotic spindle in X. laevis, and sub-pellicular microtubule arrays in T. brucei. The methods were able to stitch microtubules across section boundaries in good agreement with experts' opinions for the spindle samples. Results, however, were not satisfactory for the microtubule arrays. For certain experiments, such as an analysis of the spindle, the proposed methods can replace manual expert tracing and thus enable the analysis of microtubules over long distances with reasonable manual effort. PMID:25438148
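As a much-simplified stand-in for the Markov-random-field matching step described above, the sketch below pairs endpoint sets from two neighbouring, already-aligned sections by minimising total Euclidean distance with a linear assignment, rejecting pairs that are too far apart. Unlike the belief-propagation formulation in the paper, it ignores line orientation and the possibility that a microtubule genuinely terminates at the boundary; the threshold and names are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_endpoints(ends_a, ends_b, max_dist=50.0):
    """Simplified endpoint matching across one section boundary.

    ends_a, ends_b : (n, 2) and (m, 2) arrays of endpoint coordinates,
                     already transformed into a common frame.
    Returns index pairs (i, j) whose distance is below max_dist.
    """
    cost = cdist(ends_a, ends_b)                 # pairwise distances
    rows, cols = linear_sum_assignment(cost)     # optimal one-to-one pairing
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]
```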
Turbulence modeling of free shear layers for high performance aircraft
NASA Technical Reports Server (NTRS)
Sondak, Douglas
1993-01-01
In many flowfield computations, accuracy of the turbulence model employed is frequently a limiting factor in the overall accuracy of the computation. This is particularly true for complex flowfields such as those around full aircraft configurations. Free shear layers such as wakes, impinging jets (in V/STOL applications), and mixing layers over cavities are often part of these flowfields. Although flowfields have been computed for full aircraft, the memory and CPU requirements for these computations are often excessive. Additional computer power is required for multidisciplinary computations such as coupled fluid dynamics and conduction heat transfer analysis. Massively parallel computers show promise in alleviating this situation, and the purpose of this effort was to adapt and optimize CFD codes to these new machines. The objective of this research effort was to compute the flowfield and heat transfer for a two-dimensional jet impinging normally on a cool plate. The results of this research effort were summarized in an AIAA paper titled 'Parallel Implementation of the k-epsilon Turbulence Model'. Appendix A contains the full paper.
Biomedical engineering at Sandia National Laboratories
NASA Astrophysics Data System (ADS)
Zanner, Mary Ann
1994-12-01
The potential exists to reduce or control some aspects of the U.S. health care expenditure without compromising health care delivery by developing carefully selected technologies which impact favorably on the health care system. A focused effort to develop such technologies is underway at Sandia National Laboratories. As a DOE National Laboratory, Sandia possesses a wealth of engineering and scientific expertise that can be readily applied to this critical national need. Appropriate mechanisms currently exist to allow transfer of technology from the laboratory to the private sector. Sandia's Biomedical Engineering Initiative addresses the development of properly evaluated, cost-effective medical technologies through team collaborations with the medical community. Technology development is subjected to certain criteria including wide applicability, earlier diagnoses, increased efficiency, cost-effectiveness and dual-use. Examples of Sandia's medical technologies include a noninvasive blood glucose sensor, computer aided mammographic screening, noninvasive fetal oximetry and blood gas measurement, burn diagnostics and laser debridement, telerobotics and ultrasonic scanning for prosthetic devices. Sandia National Laboratories has the potential to aid in directing medical technology development efforts which emphasize health care needs, earlier diagnosis, cost containment and improvement of the quality of life.
Multi-school collaboration to develop and test nutrition computer modules for pediatric residents.
Roche, Patricia L; Ciccarelli, Mary R; Gupta, Sandeep K; Hayes, Barbara M; Molleston, Jean P
2007-09-01
The provision of essential nutrition-related content in US medical education has been deficient, despite efforts of the federal government and multiple professional organizations. Novel and efficient approaches are needed. A multi-department project was developed to create and pilot a computer-based compact disc instructional program covering the nutrition topics of oral rehydration therapy, calcium, and vitamins. The project was funded by an internal medical school grant, and the content of the modules was written by Department of Pediatrics faculty. The modules were built by School of Informatics faculty and students and were tested on a convenience sample of 38 pediatric residents in a randomized controlled trial performed by a registered dietitian/School of Health and Rehabilitation Sciences Master's degree candidate. The modules were reviewed for content by the pediatric faculty principal investigator and the registered dietitian/School of Health and Rehabilitation Sciences graduate student. Residents completed a pretest of nutrition knowledge and attitude toward nutrition and Web-based instruction. Half the group was given three programs (oral rehydration therapy, calcium, and vitamins) on compact disc for study over 6 weeks. Both study and control groups completed a posttest. Pre- and postintervention objective test results in the study vs control groups and attitudinal survey results before and after intervention in the study group were compared. The experimental group demonstrated significantly better posttrial objective test performance than the control group (P=0.0005). The study group tended toward improvement, whereas the control group performance declined substantially between pre- and posttests. Study group resident attitudes toward computer-based instruction improved. Use of these computer modules prompted almost half of the residents in the study group to independently pursue relevant nutrition-related information. This inexpensive, collaborative, multi-department effort to design a computer-based nutrition curriculum positively impacted both resident knowledge and attitudes.
NASA Technical Reports Server (NTRS)
Kraft, R. E.
1996-01-01
A computational method to predict modal reflection coefficients in cylindrical ducts has been developed based on the work of Homicz, Lordi, and Rehm, which uses the Wiener-Hopf method to account for the boundary conditions at the termination of a thin cylindrical pipe. The purpose of this study is to develop a computational routine to predict the reflection coefficients of higher-order acoustic modes impinging on the unflanged termination of a cylindrical duct. This effort was conducted under Task Order 5 of the NASA Lewis LET Program, Active Noise Control of Aircraft Engines: Feasibility Study, and will be used as part of the development of an integrated source noise, acoustic propagation, ANC actuator coupling, and control system algorithm simulation. The reflection coefficient prediction will be incorporated into an existing cylindrical duct modal analysis to account for the reflection of modes from the duct termination. This will provide a more accurate, rapid computational design tool for evaluating the effect of reflected waves on active noise control systems mounted in the duct, as well as a tool for the design of acoustic treatment in inlet ducts. As an active noise control system design tool, the method can be used as a preliminary to more accurate but more numerically intensive acoustic propagation models such as finite element methods. The resulting computer program has been shown to give reasonable results, some examples of which are presented. Reliable data for comparison are scarce, so complete checkout is difficult, and further checkout is needed over a wider range of system parameters. In future efforts the method will be adapted as a subroutine to the GEAE segmented cylindrical duct modal analysis program.
An efficient dynamic load balancing algorithm
NASA Astrophysics Data System (ADS)
Lagaros, Nikos D.
2014-01-01
In engineering problems, randomness and uncertainties are inherent. Robust design procedures, formulated in the framework of multi-objective optimization, have been proposed in order to take these sources of randomness and uncertainty into account. Such design procedures require orders of magnitude more computational effort than conventional analysis or optimum design processes, since a very large number of finite element analyses must be performed. It is therefore imperative to exploit the capabilities of available computing resources in order to deal with this kind of problem. In particular, parallel computing can be implemented at the level of metaheuristic optimization, by exploiting the inherent parallelism of the nondominated sorting evolution strategies method, as well as at the level of the repeated structural analyses required for assessing the behavioural constraints and for calculating the objective functions. In this study an efficient dynamic load balancing algorithm for optimum exploitation of available computing resources is proposed and, without loss of generality, is applied to computing the desired Pareto front. In such problems the computation of the complete Pareto front with feasible designs only constitutes a very challenging task. The proposed algorithm achieves nearly linear speedup, with parallel efficiency close to 100% relative to the sequential procedure.
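The dynamic (pull-based) idea can be illustrated with a shared work queue: each worker grabs the next candidate design as soon as it finishes the previous analysis, so faster processors automatically do more work and idle time stays low. The sketch below is a generic single-machine illustration of that principle, not the author's algorithm; the analysis function and task sizes are placeholders.

```python
from multiprocessing import Process, Queue

def run_structural_analysis(design):
    """Placeholder for one finite element analysis of a candidate design."""
    return sum(x * x for x in design)

def worker(tasks, results):
    """Pull analyses from the shared queue until a sentinel arrives, so fast
    workers automatically take on more work than slow ones."""
    while True:
        design = tasks.get()
        if design is None:                       # sentinel: no more work
            break
        results.put((design, run_structural_analysis(design)))

if __name__ == "__main__":
    tasks, results = Queue(), Queue()
    designs = [[float(i), i + 1.0] for i in range(200)]   # one generation
    for d in designs:
        tasks.put(d)
    n_workers = 4
    for _ in range(n_workers):
        tasks.put(None)                          # one sentinel per worker
    procs = [Process(target=worker, args=(tasks, results))
             for _ in range(n_workers)]
    for p in procs:
        p.start()
    evaluated = [results.get() for _ in designs] # master gathers objectives
    for p in procs:
        p.join()
```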
Computational thinking in life science education.
Rubinstein, Amir; Chor, Benny
2014-11-01
We join the increasing call to take computational education of life science students a step further, beyond teaching mere programming and employing existing software tools. We describe a new course, focusing on enriching the curriculum of life science students with abstract, algorithmic, and logical thinking, and exposing them to the computational "culture." The design, structure, and content of our course are influenced by recent efforts in this area, collaborations with life scientists, and our own instructional experience. Specifically, we suggest that an effective course of this nature should: (1) devote time to explicitly reflect upon computational thinking processes, resisting the temptation to drift to purely practical instruction, (2) focus on discrete notions, rather than on continuous ones, and (3) have basic programming as a prerequisite, so students need not be preoccupied with elementary programming issues. We strongly recommend that the mere use of existing bioinformatics tools and packages should not replace hands-on programming. Yet, we suggest that programming will mostly serve as a means to practice computational thinking processes. This paper deals with the challenges and considerations of such computational education for life science students. It also describes a concrete implementation of the course and encourages its use by others.
Ubiquitous computing in sports: A review and analysis.
Baca, Arnold; Dabnichki, Peter; Heller, Mario; Kornfeind, Philipp
2009-10-01
Ubiquitous (pervasive) computing is a term for a synergetic use of sensing, communication and computing. Pervasive use of computing has seen a rapid increase in the current decade. This development has propagated in applied sport science and everyday life. The work presents a survey of recent developments in sport and leisure with emphasis on technology and computational techniques. A detailed analysis on new technological developments is performed. Sensors for position and motion detection, and such for equipment and physiological monitoring are discussed. Aspects of novel trends in communication technologies and data processing are outlined. Computational advancements have started a new trend - development of smart and intelligent systems for a wide range of applications - from model-based posture recognition to context awareness algorithms for nutrition monitoring. Examples particular to coaching and training are discussed. Selected tools for monitoring rules' compliance and automatic decision-making are outlined. Finally, applications in leisure and entertainment are presented, from systems supporting physical activity to systems providing motivation. It is concluded that the emphasis in future will shift from technologies to intelligent systems that allow for enhanced social interaction as efforts need to be made to improve user-friendliness and standardisation of measurement and transmission protocols.
Roca, Josep; Cano, Isaac; Gomez-Cabrero, David; Tegnér, Jesper
2016-01-01
Systems medicine, using and adapting methods and approaches as developed within systems biology, promises to be essential in ongoing efforts of realizing and implementing personalized medicine in clinical practice and research. Here we review and critically assess these opportunities and challenges using our work on COPD as a case study. We find that there are significant unresolved biomedical challenges in how to unravel complex multifactorial components in disease initiation and progression producing different clinical phenotypes. Yet, while such a systems understanding of COPD is necessary, there are other auxiliary challenges that need to be addressed in concert with a systems analysis of COPD. These include information and communication technology (ICT)-related issues such as data harmonization, systematic handling of knowledge, computational modeling, and importantly their translation and support of clinical practice. For example, clinical decision-support systems need a seamless integration with new models and knowledge as systems analysis of COPD continues to develop. Our experience with clinical implementation of systems medicine targeting COPD highlights the need for a change of management including design of appropriate business models and adoption of ICT providing and supporting organizational interoperability among professional teams across healthcare tiers, working around the patient. In conclusion, in our hands the scope and efforts of systems medicine need to concurrently consider these aspects of clinical implementation, which inherently drives the selection of the most relevant and urgent issues and methods that need further development in a systems analysis of disease.
NASA Astrophysics Data System (ADS)
Perez, G. L.; Larour, E. Y.; Halkides, D. J.; Cheng, D. L. C.
2015-12-01
The Virtual Ice Sheet Laboratory (VISL) is a Cryosphere outreach effort by scientists at the Jet Propulsion Laboratory (JPL) in Pasadena, CA, Earth and Space Research (ESR) in Seattle, WA, and the University of California at Irvine (UCI), with the goal of providing interactive lessons for K-12 and college level students, while conforming to STEM guidelines. At the core of VISL is the Ice Sheet System Model (ISSM), an open-source project developed jointly at JPL and UCI whose main purpose is to model the evolution of the polar ice caps in Greenland and Antarctica. By using ISSM, VISL students have access to state-of-the-art modeling software that is being used to conduct scientific research by users all over the world. However, providing this functionality is by no means simple. The modeling of ice sheets in response to sea and atmospheric temperatures, among many other possible parameters, requires significant computational resources. Furthermore, this service needs to be responsive and capable of handling burst requests produced by classrooms of students. Cloud computing providers represent a burgeoning industry. With major investments by tech giants like Amazon, Google and Microsoft, it has never been easier or more affordable to deploy computational elements on-demand. This is exactly what VISL needs and ISSM is capable of. Moreover, this is a promising alternative to investing in expensive and rapidly devaluing hardware.
NASA Astrophysics Data System (ADS)
Stout, Jane G.; Blaney, Jennifer M.
2017-10-01
Research suggests growth mindset, or the belief that knowledge is acquired through effort, may enhance women's sense of belonging in male-dominated disciplines, like computing. However, other research indicates women who spend a great deal of time and energy in technical fields experience a low sense of belonging. The current study assessed the benefits of a growth mindset on women's (and men's) sense of intellectual belonging in computing, accounting for the amount of time and effort dedicated to academics. We define "intellectual belonging" as the sense that one is believed to be a competent member of the community. Whereas a stronger growth mindset was associated with stronger intellectual belonging for men, a growth mindset only boosted women's intellectual belonging when they did not work hard on academics. Our findings suggest, paradoxically, women may not benefit from a growth mindset in computing when they exert a lot of effort.
Vassena, Eliana; Deraeve, James; Alexander, William H
2017-10-01
Human behavior is strongly driven by the pursuit of rewards. In daily life, however, benefits mostly come at a cost, often requiring that effort be exerted to obtain potential benefits. Medial PFC (MPFC) and dorsolateral PFC (DLPFC) are frequently implicated in the expectation of effortful control, showing increased activity as a function of predicted task difficulty. Such activity partially overlaps with expectation of reward and has been observed both during decision-making and during task preparation. Recently, novel computational frameworks have been developed to explain activity in these regions during cognitive control, based on the principle of prediction and prediction error (predicted response-outcome [PRO] model [Alexander, W. H., & Brown, J. W. Medial prefrontal cortex as an action-outcome predictor. Nature Neuroscience, 14, 1338-1344, 2011], hierarchical error representation [HER] model [Alexander, W. H., & Brown, J. W. Hierarchical error representation: A computational model of anterior cingulate and dorsolateral prefrontal cortex. Neural Computation, 27, 2354-2410, 2015]). Despite the broad explanatory power of these models, it is not clear whether they can also accommodate effects related to the expectation of effort observed in MPFC and DLPFC. Here, we propose a translation of these computational frameworks to the domain of effort-based behavior. First, we discuss how the PRO model, based on prediction error, can explain effort-related activity in MPFC, by reframing effort-based behavior in a predictive context. We propose that MPFC activity reflects monitoring of motivationally relevant variables (such as effort and reward), by coding expectations and discrepancies from such expectations. Moreover, we derive behavioral and neural model-based predictions for healthy controls and clinical populations with impairments of motivation. Second, we illustrate the possible translation to effort-based behavior of the HER model, an extended version of PRO model based on hierarchical error prediction, developed to explain MPFC-DLPFC interactions. We derive behavioral predictions that describe how effort and reward information is coded in PFC and how changing the configuration of such environmental information might affect decision-making and task performance involving motivation.
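The core ingredients of this reframing, a learned outcome prediction updated by prediction errors and an effort-discounted value guiding engagement, can be caricatured in a few lines. The sketch below is only a toy illustration of those principles, not an implementation of the PRO or HER models (which use multiple response-outcome predictions and hierarchical error representations); the linear effort cost and parameter values are assumptions.

```python
def update_prediction(v, outcome, alpha=0.1):
    """Rescorla-Wagner-style update: the prediction error (outcome - v)
    drives learning of the expected outcome, the kind of quantity the
    PRO framework proposes MPFC monitors."""
    delta = outcome - v            # prediction error
    return v + alpha * delta, delta

def net_value(expected_reward, expected_effort, effort_cost=0.5):
    """Toy effort-discounted value used to decide whether to engage:
    expected reward minus a (here linear) cost of the predicted effort."""
    return expected_reward - effort_cost * expected_effort
```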
Status of Computational Aerodynamic Modeling Tools for Aircraft Loss-of-Control
NASA Technical Reports Server (NTRS)
Frink, Neal T.; Murphy, Patrick C.; Atkins, Harold L.; Viken, Sally A.; Petrilli, Justin L.; Gopalarathnam, Ashok; Paul, Ryan C.
2016-01-01
A concerted effort has been underway over the past several years to evolve computational capabilities for modeling aircraft loss-of-control under the NASA Aviation Safety Program. A principal goal has been to develop reliable computational tools for predicting and analyzing the non-linear stability & control characteristics of aircraft near stall boundaries affecting safe flight, and for utilizing those predictions for creating augmented flight simulation models that improve pilot training. Pursuing such an ambitious task with limited resources required the forging of close collaborative relationships with a diverse body of computational aerodynamicists and flight simulation experts to leverage their respective research efforts into the creation of NASA tools to meet this goal. Considerable progress has been made and work remains to be done. This paper summarizes the status of the NASA effort to establish computational capabilities for modeling aircraft loss-of-control and offers recommendations for future work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Srinath Vadlamani; Scott Kruger; Travis Austin
Extended magnetohydrodynamic (MHD) codes are used to model the large, slow-growing instabilities that are projected to limit the performance of the International Thermonuclear Experimental Reactor (ITER). The multiscale nature of the extended MHD equations requires an implicit approach. The current linear solvers needed for the implicit algorithm scale poorly because the resultant matrices are so ill-conditioned. A new solver is needed, especially one that scales to the petascale. The most successful scalable parallel solvers to date are multigrid solvers. Applying multigrid techniques to a set of equations whose fundamental modes are dispersive waves is a promising solution to CEMM problems. For Phase 1, we implemented multigrid preconditioners from the HYPRE project of the Center for Applied Scientific Computing at LLNL, via PETSc of the DOE SciDAC TOPS, for the real matrix systems of the extended MHD code NIMROD, which is one of the primary modeling codes of the OFES-funded Center for Extended Magnetohydrodynamic Modeling (CEMM) SciDAC. We implemented the multigrid solvers on the fusion test problem that allows for real matrix systems with success, and in the process learned about the details of NIMROD data structures and the difficulties of inverting NIMROD operators. The further success of this project will allow for efficient usage of future petascale computers at the National Leadership Facilities: Oak Ridge National Laboratory, Argonne National Laboratory, and the National Energy Research Scientific Computing Center. The project will be a collaborative effort between computational plasma physicists and applied mathematicians at Tech-X Corporation, applied mathematicians at Front Range Scientific Computations, Inc. (who are collaborators on the HYPRE project), and other computational plasma physicists involved with the CEMM project.
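For readers unfamiliar with the approach, the multigrid idea behind preconditioners such as those in HYPRE can be conveyed with a textbook two-grid cycle for the 1D Poisson problem: smooth the error, restrict the residual to a coarser grid, solve there, prolong the correction back, and smooth again. The sketch below is exactly that toy example and nothing like the ill-conditioned, wave-dominated NIMROD operators that make the actual project challenging.

```python
import numpy as np

def jacobi(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Weighted Jacobi smoothing for -u'' = f with zero Dirichlet boundaries."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid(u, f, h):
    """One two-grid cycle on a grid with an odd number of points: pre-smooth,
    restrict the residual, solve the coarse error equation directly, prolong
    the correction, post-smooth.  Recursing on the coarse solve instead of
    inverting it directly gives the multigrid V-cycle used in practice."""
    u = jacobi(u.copy(), f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2   # residual
    rc = r[::2]                                   # restriction by injection
    hc, nc = 2 * h, len(rc)
    A = (np.diag(2 * np.ones(nc - 2))
         - np.diag(np.ones(nc - 3), 1)
         - np.diag(np.ones(nc - 3), -1)) / hc**2  # coarse 1D Laplacian
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])       # coarse error equation
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)  # prolong
    return jacobi(u + e, f, h)
```

Applied repeatedly to, say, f(x) = pi^2 sin(pi x) on 65 grid points, each cycle reduces the error by a large, essentially grid-independent factor, which is the property that makes multigrid attractive as a preconditioner for much harder systems.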
Predictive Models for Semiconductor Device Design and Processing
NASA Technical Reports Server (NTRS)
Meyyappan, Meyya; Arnold, James O. (Technical Monitor)
1998-01-01
The device feature size continues to be on a downward trend with a simultaneous upward trend in wafer size to 300 mm. Predictive models are needed more than ever before for this reason. At NASA Ames, a Device and Process Modeling effort has been initiated recently with a view to address these issues. Our activities cover sub-micron device physics, process and equipment modeling, computational chemistry, and materials science. This talk will outline these efforts and emphasize the interaction among the various components. The device physics component is largely based on integrating quantum effects into device simulators. We have two parallel efforts, one based on a quantum mechanics approach and the second on a semiclassical hydrodynamics approach with quantum correction terms. Under the first approach, three different quantum simulators are being developed and compared: a nonequilibrium Green's function (NEGF) approach, a Wigner function approach, and a density matrix approach. In this talk, results using various codes will be presented. Our process modeling work focuses primarily on epitaxy and etching using first-principles models coupling reactor-level and wafer-level features. For the latter, we are using a novel approach based on Level Set theory. Sample results from this effort will also be presented.
Energy Landscapes of Folding Chromosomes
NASA Astrophysics Data System (ADS)
Zhang, Bin
The genome, the blueprint of life, contains nearly all the information needed to build and maintain an entire organism. A comprehensive understanding of the genome is of paramount interest to human health and will advance progress in many areas, including life sciences, medicine, and biotechnology. The overarching goal of my research is to understand the structure-dynamics-function relationships of the human genome. In this talk, I will present our efforts toward that goal, with a particular emphasis on studying the three-dimensional organization of the genome with multi-scale approaches. Specifically, I will discuss the reconstruction of genome structures at both interphase and metaphase by making use of data from chromosome conformation capture experiments. Computational modeling of the chromatin fiber at the atomistic level from first principles will also be presented as part of our effort to study genome structure from the bottom up.
On the need and use of models to explore the role of economic confidence: a survey.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprigg, James A.; Paez, Paul J.; Hand, Michael S.
2005-04-01
Empirical studies suggest that consumption is more sensitive to current income than suggested under the permanent income hypothesis, which raises questions regarding expectations for future income, risk aversion, and the role of economic confidence measures. This report surveys a body of fundamental economic literature as well as burgeoning computational modeling methods to support efforts to better anticipate cascading economic responses to terrorist threats and attacks. This is a three-part survey to support the incorporation of models of economic confidence into agent-based microeconomic simulations. We first review broad underlying economic principles related to this topic. We then review the economic principle of confidence and related empirical studies. Finally, we provide a brief survey of efforts and publications related to agent-based economic simulation.
Missed deadline notification in best-effort schedulers
NASA Astrophysics Data System (ADS)
Banachowski, Scott A.; Wu, Joel; Brandt, Scott A.
2003-12-01
It is common to run multimedia and other periodic, soft real-time applications on general-purpose computer systems. These systems use best-effort scheduling algorithms that cannot guarantee applications will receive responsive scheduling to meet deadline or timing requirements. We present a simple mechanism called Missed Deadline Notification (MDN) that allows applications to notify the system when they do not receive their desired level of responsiveness. Consisting of a single system call with no arguments, this simple interface allows the operating system to provide better support for soft real-time applications without any a priori information about their timing or resource needs. We implemented MDN in three different schedulers: Linux, BEST, and BeRate. We describe these implementations and their performance when running real-time applications and discuss policies to prevent applications from abusing MDN to gain extra resources.
Robotic disaster recovery efforts with ad-hoc deployable cloud computing
NASA Astrophysics Data System (ADS)
Straub, Jeremy; Marsh, Ronald; Mohammad, Atif F.
2013-06-01
Autonomous operation of search and rescue (SaR) robots is an ill-posed problem, further complicated by the dynamic disaster-recovery environment. In a typical SaR response scenario, responder robots will require different levels of processing capability during various parts of the response effort and will need to utilize multiple algorithms. Placing all of these capabilities onboard the robot precludes algorithm-specific performance optimization and results in mediocre performance. An architecture for an ad-hoc, deployable cloud environment suitable for use in a disaster response scenario is presented. Under this model, each service provider is optimized for its task and maintains a database of situation-relevant information. This service-oriented architecture (SOA 3.0) compliant framework also serves as an example of the efficient use of SOA 3.0 in an actual cloud application.
Collaborating and sharing data in epilepsy research.
Wagenaar, Joost B; Worrell, Gregory A; Ives, Zachary; Dümpelmann, Matthias; Litt, Brian; Schulze-Bonhage, Andreas
2015-06-01
Technological advances are dramatically accelerating translational research in epilepsy. Neurophysiology, imaging, and metadata are now recorded digitally in most centers, enabling quantitative analysis. Basic and translational research opportunities to use these data are exploding, but academic and funding cultures prevent this potential from being realized. Research on epileptogenic networks, antiepileptic devices, and biomarkers could progress rapidly if collaborative efforts to digest this "big neuro data" could be organized. Higher temporal and spatial resolution data are driving the need for novel multidimensional visualization and analysis tools. Crowd-sourced science, of the kind that drives innovation in computer science, could easily be mobilized for these tasks, were it not for competition for funding and attribution and the lack of standard data formats and platforms. As these efforts mature, there is a great opportunity to advance epilepsy research through data sharing and increased collaboration within the international research community.
NASA Technical Reports Server (NTRS)
McComas, David C.; Strege, Susanne L.; Carpenter, Paul B.; Hartman, Randy
2015-01-01
The core Flight System (cFS) is a flight software (FSW) product line developed by the Flight Software Systems Branch (FSSB) at NASA's Goddard Space Flight Center (GSFC). The cFS uses compile-time configuration parameters to implement variable requirements to enable portability across embedded computing platforms and to implement different end-user functional needs. The verification and validation of these requirements is proving to be a significant challenge. This paper describes the challenges facing the cFS and the results of a pilot effort to apply EXB Solution's testing approach to the cFS applications.
NASA Technical Reports Server (NTRS)
Hua, Chongyu; Volakis, John L.
1990-01-01
AUTOMESH-2D is a computer program specifically designed as a preprocessor for the scattering analysis of two-dimensional bodies by the finite element method. This program was developed due to the need to reduce the effort required to define and check the geometry data, element topology, and material properties. There are six modules in the program: (1) Parameter Specification; (2) Data Input; (3) Node Generation; (4) Element Generation; (5) Mesh Smoothing; and (6) Data File Generation.
Aerodynamic design trends for commercial aircraft
NASA Technical Reports Server (NTRS)
Hilbig, R.; Koerner, H.
1986-01-01
Recent research on advanced-configuration commercial aircraft at DFVLR is surveyed, with a focus on aerodynamic approaches to improved performance. Topics examined include transonic wings with variable camber or shock/boundary-layer control, wings with reduced friction drag or laminarized flow, prop-fan propulsion, and unusual configurations or wing profiles. Drawings, diagrams, and graphs of predicted performance are provided, and the need for extensive development efforts using powerful computer facilities, high-speed and low-speed wind tunnels, and flight tests of models (mounted on specially designed carrier aircraft) is indicated.
Kerfriden, P.; Goury, O.; Rabczuk, T.; Bordas, S.P.A.
2013-01-01
We propose in this paper a reduced order modelling technique based on domain partitioning for parametric problems of fracture. We show that coupling domain decomposition and projection-based model order reduction makes it possible to focus the numerical effort where it is most needed: around the zones where damage propagates. No a priori knowledge of the damage pattern is required, the extraction of the corresponding spatial regions being based solely on algebra. The efficiency of the proposed approach is demonstrated numerically with an example relevant to engineering fracture. PMID:23750055
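As a rough illustration of the projection-based model order reduction mentioned above (not the authors' domain-decomposition method for fracture), the sketch below builds a reduced basis from solution snapshots via the SVD and solves a Galerkin-projected system; the toy matrix and snapshot set are hypothetical.

```python
import numpy as np

# Minimal sketch of projection-based model order reduction (illustrative only;
# not the domain-decomposition fracture solver described in the abstract).

def build_reduced_basis(snapshots, tol=1e-6):
    """Extract a reduced basis from a snapshot matrix via truncated SVD (POD)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    # Keep modes until the retained singular-value energy reaches 1 - tol.
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :r]

def solve_reduced(K, f, V):
    """Galerkin projection: solve the small system V^T K V q = V^T f."""
    Kr = V.T @ K @ V
    fr = V.T @ f
    q = np.linalg.solve(Kr, fr)
    return V @ q            # lift the reduced solution back to full space

# Toy usage: a random SPD system and snapshots from nearby right-hand sides.
n = 200
A = np.random.rand(n, n)
K = A @ A.T + n * np.eye(n)
snapshots = np.column_stack([np.linalg.solve(K, np.random.rand(n)) for _ in range(10)])
V = build_reduced_basis(snapshots)
u_approx = solve_reduced(K, np.random.rand(n), V)
```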
In Silico Chemogenomics Drug Repositioning Strategies for Neglected Tropical Diseases.
Andrade, Carolina Horta; Neves, Bruno Junior; Melo-Filho, Cleber Camilo; Rodrigues, Juliana; Silva, Diego Cabral; Braga, Rodolpho Campos; Cravo, Pedro Vitor Lemos
2018-03-08
Only ~1% of all drug candidates against Neglected Tropical Diseases (NTDs) have reached clinical trials in the last decades, underscoring the need for new, safe and effective treatments. In this context, drug repositioning, which allows finding novel indications for approved drugs whose pharmacokinetic and safety profiles are already known, is emerging as a promising strategy for tackling NTDs. Chemogenomics is a direct descendant of the typical drug discovery process that involves the systematic screening of chemical compounds against drug targets in high-throughput screening (HTS) efforts for the identification of lead compounds. However, in contrast to the one-drug-one-target paradigm, chemogenomics attempts to identify all potential ligands for all possible targets and diseases. In this review, we summarize current methodological development efforts in drug repositioning that use state-of-the-art computational ligand- and structure-based chemogenomics approaches. Furthermore, we highlight recent progress in computational drug repositioning for some NTDs, based on curation and modeling of genomic, biological, and chemical data. Additionally, we present in-house and other successful examples and suggest possible solutions to existing pitfalls.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Roscoe A; Heroux, Dr. Michael A; Willenbring, James
2012-01-01
Software lifecycles are becoming an increasingly important issue for computational science & engineering (CSE) software. The process by which a piece of CSE software begins life as a set of research requirements and then matures into a trusted high-quality capability is both commonplace and extremely challenging. Although an implicit lifecycle is obviously being used in any effort, the challenges of this process--respecting the competing needs of research vs. production--cannot be overstated. Here we describe a proposal for a well-defined software lifecycle process based on modern Lean/Agile software engineering principles. What we propose is appropriate for many CSE software projects that are initially heavily focused on research but also are expected to eventually produce usable high-quality capabilities. The model is related to TriBITS, a build, integration and testing system, which serves as a strong foundation for this lifecycle model, and aspects of this lifecycle model are ingrained in the TriBITS system. Indeed this lifecycle process, if followed, will enable large-scale sustainable integration of many complex CSE software efforts across several institutions.
Elekes, Fruzsina; Varga, Máté; Király, Ildikó
2017-11-01
It has been widely assumed that computing how a scene looks from another perspective (level-2 perspective taking, PT) is an effortful process, as opposed to the automatic capacity of tracking visual access to objects (level-1 PT). Recently, adults have been found to compute both forms of visual perspective in a quick but context-sensitive way, indicating that the two functions share more features than previously assumed. However, the developmental literature still shows a dissociation between automatic level-1 and effortful level-2 PT. In the current paper, we report an experiment showing that in a minimally social situation, participating in a number verification task with an adult confederate, eight- to 9.5-year-old children demonstrate similar online level-2 PT capacities as adults. Future studies need to address whether online PT shows selectivity in children as well and develop paradigms that are adequate to test preschoolers' online level-2 PT abilities. Statement of contribution: What is already known on this subject? Adults can access how objects appear to others (level-2 perspective) spontaneously and online. Online level-1, but not level-2, perspective taking (PT) has been documented in school-aged children. What does the present study add? Eight- to 9.5-year-olds performed a number verification task with a confederate who had the same task. Children showed similar perspective interference as adults, indicating spontaneous level-2 PT. Not only agent-object relations but also object appearances are computed online by eight- to 9.5-year-olds.
Advanced Methodology for Simulation of Complex Flows Using Structured Grid Systems
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur; Modiano, David
1995-01-01
Detailed simulations of viscous flows in complicated geometries pose a significant challenge to current capabilities of Computational Fluid Dynamics (CFD). To enable routine application of CFD to this class of problems, advanced methodologies are required that employ (a) automated grid generation, (b) adaptivity, (c) accurate discretizations and efficient solvers, and (d) advanced software techniques. Each of these ingredients contributes to increased accuracy, efficiency (in terms of human effort and computer time), and/or reliability of CFD software. In the long run, methodologies employing structured grid systems will remain a viable choice for routine simulation of flows in complex geometries only if genuinely automatic grid generation techniques for structured grids can be developed and if adaptivity is employed more routinely. More research in both these areas is urgently needed.
Mining Software Usage with the Automatic Library Tracking Database (ALTD)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadri, Bilel; Fahey, Mark R
2013-01-01
Tracking software usage is important for HPC centers, computer vendors, code developers and funding agencies to provide more efficient and targeted software support, and to forecast needs and guide HPC software effort towards the Exascale era. However, accurately tracking software usage on HPC systems has been a challenging task. In this paper, we present a tool called Automatic Library Tracking Database (ALTD) that has been developed and put in production on several Cray systems. The ALTD infrastructure prototype automatically and transparently stores information about libraries linked into an application at compilation time and also the executables launched in a batch job. We will illustrate the usage of libraries, compilers and third party software applications on a system managed by the National Institute for Computational Sciences.
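The following sketch illustrates the general idea of link-time library tracking described above. It is not the actual ALTD implementation; the wrapper logic, the table layout, and the assumption that the wrapper simply forwards to ld are all illustrative.

```python
import sqlite3, sys, subprocess, datetime

# Illustrative sketch only -- NOT the actual ALTD implementation. It mimics the
# idea of a linker wrapper that records which libraries an executable is linked
# against before handing the command off to the real linker.

def record_link(argv, db_path="link_tracking_demo.db"):
    """Store the library arguments of a link command in a small SQLite table."""
    libs = [a for a in argv if a.startswith("-l") or a.endswith((".a", ".so"))]
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS link_log (ts TEXT, libs TEXT)")
    conn.execute("INSERT INTO link_log VALUES (?, ?)",
                 (datetime.datetime.now().isoformat(), " ".join(libs)))
    conn.commit()
    conn.close()

if __name__ == "__main__":
    link_args = sys.argv[1:]
    record_link(link_args)
    subprocess.call(["ld"] + link_args)   # forward to the real linker
```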
Topography Modeling in Atmospheric Flows Using the Immersed Boundary Method
NASA Technical Reports Server (NTRS)
Ackerman, A. S.; Senocak, I.; Mansour, N. N.; Stevens, D. E.
2004-01-01
Numerical simulation of flow over complex geometry needs accurate and efficient computational methods. Different techniques are available to handle complex geometry. The unstructured grid and multi-block body-fitted grid techniques have been widely adopted for complex geometry in engineering applications. In atmospheric applications, terrain-fitted single-grid techniques have found common use. Although these are very effective techniques, their implementation, coupling with the flow algorithm, and efficient parallelization of the complete method are more involved than for a Cartesian grid method. Grid generation can be tedious, and special attention must be paid in the numerics to handle skewed cells and maintain conservation. Researchers have long sought alternative methods to ease the effort involved in simulating flow over complex geometry.
New Mexico district work-effort analysis computer program
Hiss, W.L.; Trantolo, A.P.; Sparks, J.L.
1972-01-01
The computer program (CAN 2) described in this report is one of several related programs used in the New Mexico District cost-analysis system. The work-effort information used in these programs is accumulated and entered to the nearest hour on forms completed by each employee. Tabulating cards are punched directly from these forms after visual examinations for errors are made. Reports containing detailed work-effort data itemized by employee within each project and account and by account and project for each employee are prepared for both current-month and year-to-date periods by the CAN 2 computer program. An option allowing preparation of reports for a specified 3-month period is provided. The total number of hours worked on each account and project and a grand total of hours worked in the New Mexico District is computed and presented in a summary report for each period. Work effort not chargeable directly to individual projects or accounts is considered as overhead and can be apportioned to the individual accounts and projects on the basis of the ratio of the total hours of work effort for the individual accounts or projects to the total New Mexico District work effort at the option of the user. The hours of work performed by a particular section, such as General Investigations or Surface Water, are prorated and charged to the projects or accounts within the particular section. A number of surveillance or buffer accounts are employed to account for the hours worked on special events or on those parts of large projects or accounts that require a more detailed analysis. Any part of the New Mexico District operation can be separated and analyzed in detail by establishing an appropriate buffer account. With the exception of statements associated with word size, the computer program is written in FORTRAN IV in a relatively low and standard language level to facilitate its use on different digital computers. The program has been run only on a Control Data Corporation 6600 computer system. Central processing computer time has seldom exceeded 5 minutes on the longest year-to-date runs.
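The overhead-proration rule described above (charging overhead to projects and accounts in proportion to their share of direct hours) amounts to a one-line calculation; a minimal sketch with hypothetical project names and hours:

```python
# Sketch of the overhead-proration rule described above: overhead hours are
# apportioned to each project in proportion to its share of direct hours.
# Project names and figures are hypothetical.
direct_hours = {"Project A": 120.0, "Project B": 300.0, "Project C": 80.0}
overhead_hours = 50.0

total_direct = sum(direct_hours.values())
prorated = {p: overhead_hours * h / total_direct for p, h in direct_hours.items()}
charged = {p: h + prorated[p] for p, h in direct_hours.items()}

for p in direct_hours:
    print(f"{p}: {direct_hours[p]:.1f} direct + {prorated[p]:.1f} overhead = {charged[p]:.1f}")
```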
Hypersonic Experimental and Computational Capability, Improvement and Validation. Volume 2
NASA Technical Reports Server (NTRS)
Muylaert, Jean (Editor); Kumar, Ajay (Editor); Dujarric, Christian (Editor)
1998-01-01
The results of the phase 2 effort conducted under AGARD Working Group 18 on Hypersonic Experimental and Computational Capability, Improvement and Validation are presented in this report. The first volume, published in May 1996, mainly focused on the design methodology, plans and some initial results of experiments that had been conducted to serve as validation benchmarks. The current volume presents the detailed experimental and computational data base developed during this effort.
VarPy: A python library for volcanology and rock physics data analysis
NASA Astrophysics Data System (ADS)
Filgueira, Rosa; Atkinson, Malcom; Bell, Andrew; Snelling, Brawen; Main, Ian
2014-05-01
The increasing prevalence of digital instrumentation in volcanology and rock physics is leading to a wealth of data, which in turn is increasing the need for computational analyses and models. Today, these are largely developed by each individual researcher. The introduction of a shared library that can be used for this purpose has several benefits: 1. when an existing function in the library meets a need recognised by a researcher, it usually takes much less effort than developing one's own code; 2. once functions are established and used many times they become better tested, more reliable and eventually trusted by the community; 3. use of the same functions by different researchers makes it easier to compare results and to compare the skill of rival analysis and modelling methods; and 4. in the longer term the cost of maintaining these functions is shared over a wide community, so they have greater longevity. Python is a high-level interpreted programming language with capabilities for object-oriented programming. Scientists often choose this language because of the productivity it provides. Although many software tools are available for interactive data analysis and development, there are no libraries designed specifically for volcanology and rock physics data. Therefore, we propose a new open-source Python toolbox called "VarPy" to facilitate rapid application development for rock physicists and volcanologists, which allows users to define their own workflows to develop models, analyses and visualisations. This proposal is triggered by our work on data assimilation in the NERC EFFORT (Earthquake and Failure Forecasting in Real Time) project, using data provided by the NERC CREEP 2 experimental project and volcanic experiments from INVG observatory Etna and IGN observatory Hierro as test cases. In the EFFORT project we are developing a science gateway which offers services for collecting and sharing volcanology and rock physics data, with the intent of stimulating sharing, collaboration and comparison of methods among practitioners in the two fields. As such, it offers facilities for running analyses and models either under a researcher's control or periodically as part of an experiment, and for comparing the skills of predictive methods. The gateway therefore runs code on behalf of volcanology and rock physics researchers. The VarPy library is intended to make it much easier for those researchers to set up the code they need to run. The library also makes it easier to arrange that code in a form suitable for running in the EFFORT computational services. Care has been taken to ensure that the library can also be used outside of the EFFORT systems, e.g., on a researcher's own laptop, by providing two variants of the library: the gateway version and the developer's version, with many of the functions completely identical. The library must fulfill two purposes simultaneously: • by providing a full repertoire of commonly required actions it must make it easy for volcanologists and rock physicists to write the Python scripts they need to accomplish their work, and • by wrapping operations it must enable the EFFORT gateway to maintain the integrity of its data. Note that VarPy does not attempt to replace the functionality provided by other libraries, such as NumPy and SciPy; it is complementary to them.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodriguez, Salvador B.
Smart grids are a crucial component for enabling the nation’s future energy needs, as part of a modernization effort led by the Department of Energy. Smart grids and smart microgrids are being considered in niche applications, and as part of a comprehensive energy strategy to help manage the nation’s growing energy demands, for critical infrastructures, military installations, small rural communities, and large populations with limited water supplies. As part of a far-reaching strategic initiative, Sandia National Laboratories (SNL) presents herein a unique, three-pronged approach to integrate small modular reactors (SMRs) into microgrids, with the goal of providing economically-competitive, reliable, and secure energy to meet the nation’s needs. SNL’s triad methodology involves an innovative blend of smart microgrid technology, high performance computing (HPC), and advanced manufacturing (AM). In this report, Sandia’s current capabilities in those areas are summarized, as well as paths forward that will enable DOE to achieve its energy goals. In the area of smart grid/microgrid technology, Sandia’s current computational capabilities can model the entire grid, including temporal aspects and cyber security issues. Our tools include system development, integration, testing and evaluation, monitoring, and sustainment.
Cooperative high-performance storage in the accelerated strategic computing initiative
NASA Technical Reports Server (NTRS)
Gary, Mark; Howard, Barry; Louis, Steve; Minuzzo, Kim; Seager, Mark
1996-01-01
The use and acceptance of new high-performance, parallel computing platforms will be impeded by the absence of an infrastructure capable of supporting orders-of-magnitude improvement in hierarchical storage and high-speed I/O (Input/Output). The distribution of these high-performance platforms and supporting infrastructures across a wide-area network further compounds this problem. We describe an architectural design and phased implementation plan for a distributed, Cooperative Storage Environment (CSE) to achieve the necessary performance, user transparency, site autonomy, communication, and security features needed to support the Accelerated Strategic Computing Initiative (ASCI). ASCI is a Department of Energy (DOE) program attempting to apply terascale platforms and Problem-Solving Environments (PSEs) toward real-world computational modeling and simulation problems. The ASCI mission must be carried out through a unified, multilaboratory effort, and will require highly secure, efficient access to vast amounts of data. The CSE provides a logically simple, geographically distributed, storage infrastructure of semi-autonomous cooperating sites to meet the strategic ASCI PSE goal of high-performance data storage and access at the user desktop.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pieper, Andreas; Kreutzer, Moritz; Alvermann, Andreas, E-mail: alvermann@physik.uni-greifswald.de
2016-11-15
We study Chebyshev filter diagonalization as a tool for the computation of many interior eigenvalues of very large sparse symmetric matrices. In this technique the subspace projection onto the target space of wanted eigenvectors is approximated with filter polynomials obtained from Chebyshev expansions of window functions. After the discussion of the conceptual foundations of Chebyshev filter diagonalization we analyze the impact of the choice of the damping kernel, search space size, and filter polynomial degree on the computational accuracy and effort, before we describe the necessary steps towards a parallel high-performance implementation. Because Chebyshev filter diagonalization avoids the need for matrix inversion it can deal with matrices and problem sizes that are presently not accessible with rational function methods based on direct or iterative linear solvers. To demonstrate the potential of Chebyshev filter diagonalization for large-scale problems of this kind we include as an example the computation of the 10^2 innermost eigenpairs of a topological insulator matrix with dimension 10^9 derived from quantum physics applications.
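A minimal sketch of the filtering step described above: a damped Chebyshev expansion of a window function is applied to a block of vectors, followed by a Rayleigh-Ritz step. The Jackson damping kernel, the window, the polynomial degree, and the block size below are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np
import scipy.sparse as sp

# Sketch of Chebyshev filter diagonalization (illustrative parameters only).
# H must be scaled so its spectrum lies in [-1, 1]; [a, b] is the target window.

def jackson_damping(N):
    k = np.arange(N + 1)
    return ((N - k + 1) * np.cos(np.pi * k / (N + 1))
            + np.sin(np.pi * k / (N + 1)) / np.tan(np.pi / (N + 1))) / (N + 1)

def chebyshev_filter(H, X, a, b, degree=200):
    """Apply a damped Chebyshev approximation of the window [a, b] to the block X."""
    ta, tb = np.arccos(a), np.arccos(b)
    k = np.arange(1, degree + 1)
    c = np.empty(degree + 1)
    c[0] = (ta - tb) / np.pi                       # expansion coefficients of the
    c[1:] = 2.0 * (np.sin(k * ta) - np.sin(k * tb)) / (k * np.pi)  # window indicator
    g = jackson_damping(degree)

    T_prev, T_curr = X, H @ X                      # T_0(H) X and T_1(H) X
    Y = g[0] * c[0] * T_prev + g[1] * c[1] * T_curr
    for n in range(2, degree + 1):
        T_next = 2.0 * (H @ T_curr) - T_prev       # Chebyshev three-term recurrence
        Y += g[n] * c[n] * T_next
        T_prev, T_curr = T_curr, T_next
    return Y

# Toy usage: filter a random block, then Rayleigh-Ritz in the filtered subspace.
# In practice the block size should exceed the number of wanted eigenpairs.
n = 2000
H = sp.diags(np.linspace(-1, 1, n))                # trivial test matrix in [-1, 1]
X = np.random.randn(n, 20)
Y = np.linalg.qr(chebyshev_filter(H, X, -0.05, 0.05))[0]
theta, _ = np.linalg.eigh(Y.T @ (H @ Y))           # Ritz values near the window
```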
The road to business process improvement--can you get there from here?
Gilberto, P A
1995-11-01
Historically, "improvements" within the organization have been frequently attained through automation by building and installing computer systems. Material requirements planning (MRP), manufacturing resource planning II (MRP II), just-in-time (JIT), computer aided design (CAD), computer aided manufacturing (CAM), electronic data interchange (EDI), and various other TLAs (three-letter acronyms) have been used as the methods to attain business objectives. But most companies have found that installing computer software, cleaning up their data, and providing every employee with training on how to best use the systems have not resulted in the level of business improvements needed. The software systems have simply made management around the problems easier but did little to solve the basic problems. The missing element in the efforts to improve the performance of the organization has been a shift in focus from individual department improvements to cross-organizational business process improvements. This article describes how the Electric Boat Division of General Dynamics Corporation, in conjunction with the Data Systems Division, moved its focus from one of vertical organizational processes to horizontal business processes. In other words, how we got rid of the dinosaurs.
Comparison of different models for non-invasive FFR estimation
NASA Astrophysics Data System (ADS)
Mirramezani, Mehran; Shadden, Shawn
2017-11-01
Coronary artery disease is a leading cause of death worldwide. Fractional flow reserve (FFR), derived from invasively measuring the pressure drop across a stenosis, is considered the gold standard to diagnose disease severity and need for treatment. Non-invasive estimation of FFR has gained recent attention for its potential to reduce patient risk and procedural cost versus invasive FFR measurement. Non-invasive FFR can be obtained by using image-based computational fluid dynamics to simulate blood flow and pressure in a patient-specific coronary model. However, 3D simulations require extensive effort for model construction and numerical computation, which limits their routine use. In this study we compare (ordered by increasing computational cost/complexity): reduced-order algebraic models of pressure drop across a stenosis; 1D, 2D (multiring) and 3D CFD models; as well as 3D FSI for the computation of FFR in idealized and patient-specific stenosis geometries. We demonstrate the ability of an appropriate reduced order algebraic model to closely predict FFR when compared to FFR from a full 3D simulation. This work was supported by the NIH, Grant No. R01-HL103419.
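For reference, reduced-order algebraic stenosis models of the kind compared above typically combine a linear viscous loss with a quadratic expansion loss; one representative form (an illustration of the reduced-order approach, not necessarily the exact model evaluated in the study) is:

```latex
% Representative algebraic pressure-drop model (illustrative coefficients).
\Delta P \;\approx\;
\underbrace{\frac{128\,\mu L}{\pi D_s^{4}}\,Q}_{\text{viscous (Poiseuille)}}
\;+\;
\underbrace{\frac{\rho}{2}\left(\frac{1}{A_s}-\frac{1}{A_0}\right)^{2} Q\,\lvert Q\rvert}_{\text{expansion loss}},
\qquad
\mathrm{FFR} \;\approx\; \frac{P_a - \Delta P}{P_a}
```

Here Q is the flow rate, μ and ρ the blood viscosity and density, L and D_s the stenosis length and diameter, A_s and A_0 the stenotic and reference lumen areas, and P_a the proximal (aortic) pressure; published models add empirical correction coefficients to both terms.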
NASA Astrophysics Data System (ADS)
Mohsin, Muhammad; Mu, Yongtong; Memon, Aamir Mahmood; Kalhoro, Muhammad Talib; Shah, Syed Baber Hussain
2017-07-01
Pakistani marine waters are under an open access regime. Due to poor management and policy implications, blind fishing continues, which may result in ecological as well as economic losses. Thus, it is of utmost importance to estimate fishery resources before harvesting. In this study, catch and effort data (1996-2009) for the Kiddi shrimp Parapenaeopsis stylifera fishery in Pakistani marine waters were analyzed using specialized fishery software in order to assess the stock status of this commercially important shrimp. Maximum, minimum and average capture production of P. stylifera were 15 912 metric tons (mt) in 1997, 9 438 mt in 2009, and 11 667 mt/a, respectively. Two stock assessment tools, CEDA (catch and effort data analysis) and ASPIC (a stock production model incorporating covariates), were used to compute the MSY (maximum sustainable yield) of this organism. In CEDA, three surplus production models, Fox, Schaefer and Pella-Tomlinson, along with three error assumptions, log, log-normal and gamma, were used. For initial proportion (IP) 0.8, the Fox model computed MSY as 6 858 mt (CV = 0.204, R² = 0.709) and 7 384 mt (CV = 0.149, R² = 0.72) for the log and log-normal error assumptions, respectively. Here, the gamma error assumption produced a minimization failure. The Schaefer and Pella-Tomlinson models gave identical MSY estimates across the log, log-normal and gamma error assumptions, i.e. 7 083 mt, 8 209 mt and 7 242 mt, respectively. The Schaefer results showed the highest goodness-of-fit R² (0.712). ASPIC computed the MSY, CV, R², F_MSY and B_MSY parameters for the Fox model as 7 219 mt, 0.142, 0.872, 0.111 and 65 280, while for the Logistic model the corresponding values were 7 720 mt, 0.148, 0.868, 0.107 and 72 110. The results show that P. stylifera has been overexploited. Immediate steps are needed to conserve this fishery resource for the future, and research on other species of commercial importance is urgently needed.
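For reference, the surplus production models named above take the following standard forms (shown only to clarify where the MSY estimates come from; the fitted parameter values are those reported by CEDA/ASPIC, not reproduced here):

```latex
% Standard surplus-production forms underlying the MSY estimates.
\text{Schaefer:}\quad \frac{dB}{dt} = rB\left(1-\frac{B}{K}\right) - C(t),
\qquad \mathrm{MSY} = \frac{rK}{4},
\\[4pt]
\text{Fox:}\quad \frac{dB}{dt} = rB\,\ln\!\left(\frac{K}{B}\right) - C(t),
\qquad \mathrm{MSY} = \frac{rK}{e}
```

Here B is the stock biomass, C the catch, r the intrinsic growth rate, and K the carrying capacity; the Pella-Tomlinson model generalizes these forms with an additional shape parameter.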
Computational Methods for Stability and Control (COMSAC): The Time Has Come
NASA Technical Reports Server (NTRS)
Hall, Robert M.; Biedron, Robert T.; Ball, Douglas N.; Bogue, David R.; Chung, James; Green, Bradford E.; Grismer, Matthew J.; Brooks, Gregory P.; Chambers, Joseph R.
2005-01-01
Powerful computational fluid dynamics (CFD) tools have emerged that appear to offer significant benefits as an adjunct to the experimental methods used by the stability and control community to predict aerodynamic parameters. The decreasing cost and increasing availability of computing hours are making these applications increasingly viable. This paper summarizes the efforts of four organizations to utilize high-end CFD tools to address the challenges of the stability and control arena. The general motivation and backdrop for these efforts are summarized, along with examples of current applications.
National Energy Audit Tool for Multifamily Buildings Development Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malhotra, Mini; MacDonald, Michael; Accawi, Gina K
The U.S. Department of Energy's (DOE's) Weatherization Assistance Program (WAP) enables low-income families to reduce their energy costs by providing funds to make their homes more energy efficient. In addition, the program funds Weatherization Training and Technical Assistance (T and TA) activities to support a range of program operations. These activities include measuring and documenting performance, monitoring programs, promoting advanced techniques and collaborations to further improve program effectiveness, and training, including developing tools and information resources. The T and TA plan outlines the tasks, activities, and milestones to support the weatherization network with the program implementation ramp-up efforts. Weatherization of multifamily buildings has been recognized as an effective way to ramp up weatherization efforts. To support this effort, the 2009 National Weatherization T and TA plan includes the task of expanding the functionality of the Weatherization Assistant, a DOE-sponsored family of energy audit computer programs, to perform audits for large and small multifamily buildings. This report describes the planning effort for a new multifamily energy audit tool for DOE's WAP. The functionality of the Weatherization Assistant is being expanded to also perform energy audits of small multifamily and large multifamily buildings. The process covers an assessment of needs that includes input from national experts during two national Web conferences. The assessment of needs is then translated into capability and performance descriptions for the proposed new multifamily energy audit, with some description of what might or should be provided in the new tool. The assessment of needs is combined with our best judgment to lay out a strategy for development of the multifamily tool that proceeds in stages, with features of an initial tool (version 1) and a more capable version 2 handled with currently available resources. Additional development in the future is expected to be needed if more capabilities are to be added. A rough schedule for development of the version 1 tool is presented. The components and capabilities described in this plan will serve as the starting point for development of the proposed new multifamily energy audit tool for WAP.
Coordination and Collaboration in European Research towards Healthy and Safe Nanomaterials
NASA Astrophysics Data System (ADS)
Riediker, Michael
2011-07-01
Nanotechnology is becoming part of our daily life in a wide range of products such as computers, bicycles, sunscreens or nanomedicines. While these applications are already becoming reality, considerable work awaits scientists, engineers, and policy makers who want such nanotechnological products to yield a maximum of benefit at a minimum of social, environmental, economic and (occupational) health cost. Considerable efforts for coordination and collaboration in research are needed if these goals are to be reached within a reasonable time frame and at an affordable price. This is recognized in Europe by the European Commission, which not only funds research projects but also supports the coordination of research efforts. One of these coordination efforts is NanoImpactNet, a researcher-operated network, which started in 2008 to promote scientific cross-talk across all disciplines on the health and environmental impact of nanomaterials. Stakeholders contribute to these activities, notably the definition of research and knowledge needs. Initial discussions in this domain focused on finding agreement on common metrics and on which elements are needed for standardized approaches to hazard and exposure identification. There are many nanomaterial properties that may play a role. Hence, to gain the time needed to study this complex matter full of uncertainties, researchers and stakeholders unanimously called for simple, easy and fast risk assessment tools that can support decision making in this rapidly moving and growing domain. Today, several projects are starting or already running that will develop such assessment tools. At the same time, other projects investigate in depth which factors and material properties can lead to unwanted toxicity or exposure, what mechanisms are involved and how such responses can be predicted and modelled. A vision for the future is that once these factors, properties and mechanisms are understood, they can and will be accounted for in the development of new products and production processes following the idea of "Safety by Design". The promise of all these efforts is a future with nanomaterials where most of their risks are recognized and addressed before they even reach the market.
Genetic Toxicology in the 21st Century: Reflections and Future Directions
Mahadevan, Brinda; Snyder, Ronald D.; Waters, Michael D.; Benz, R. Daniel; Kemper, Raymond A.; Tice, Raymond R.; Richard, Ann M.
2011-01-01
A symposium at the 40th anniversary of the Environmental Mutagen Society, held October 24–28, 2009 in St. Louis, MO, surveyed the current status and future directions of genetic toxicology. This article summarizes the presentations and provides a perspective on the future. An abbreviated history is presented, highlighting the current standard battery of genotoxicity assays and persistent challenges. Application of computational toxicology to safety testing within a regulatory setting is discussed as a means for reducing the need for animal testing and human clinical trials, and current approaches and applications of in silico genotoxicity screening approaches across the pharmaceutical industry were surveyed and are reported here. The expanded use of toxicogenomics to illuminate mechanisms and bridge genotoxicity and carcinogenicity, and new public efforts to use high-throughput screening technologies to address the lack of toxicity evaluation for the backlog of thousands of industrial chemicals in the environment, are detailed. The Tox21 project involves coordinated efforts of four U.S. Government regulatory/research entities to use new and innovative assays to characterize key steps in toxicity pathways, including genotoxic and nongenotoxic mechanisms for carcinogenesis. Progress to date, highlighting preliminary test results from the National Toxicology Program, is summarized. Finally, an overview is presented of ToxCast™, a related research program of the U.S. Environmental Protection Agency, using a broad array of high-throughput and high-content technologies for toxicity profiling of environmental chemicals, and computational toxicology modeling. Progress and challenges, including the pressing need to incorporate metabolic activation capability, are summarized. PMID:21538556
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friese, Ryan; Khemka, Bhavesh; Maciejewski, Anthony A
Rising costs of energy consumption and an ongoing effort for increases in computing performance are leading to a significant need for energy-efficient computing. Before systems such as supercomputers, servers, and datacenters can begin operating in an energy-efficient manner, the energy consumption and performance characteristics of the system must be analyzed. In this paper, we provide an analysis framework that will allow a system administrator to investigate the tradeoffs between system energy consumption and utility earned by a system (as a measure of system performance). We model these trade-offs as a bi-objective resource allocation problem. We use a popular multi-objective genetic algorithm to construct Pareto fronts to illustrate how different resource allocations can cause a system to consume significantly different amounts of energy and earn different amounts of utility. We demonstrate our analysis framework using real data collected from online benchmarks, and further provide a method to create larger data sets that exhibit similar heterogeneity characteristics to real data sets. This analysis framework can provide system administrators with insight to make intelligent scheduling decisions based on the energy and utility needs of their systems.
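The bi-objective idea described above can be sketched in a few lines: evaluate candidate allocations on energy and utility, and retain the non-dominated set as a Pareto front. The sketch below uses random candidates and hypothetical objective functions in place of the authors' genetic algorithm and benchmark data.

```python
import numpy as np

# Sketch of the bi-objective trade-off: keep the allocations for which no other
# candidate is strictly better on both objectives (lower energy, higher utility).
# Candidates and objective values are hypothetical stand-ins for a GA population.

rng = np.random.default_rng(0)
candidates = rng.random((500, 2))
energy = 100.0 + 400.0 * candidates[:, 0]                     # hypothetical kWh
utility = 50.0 * candidates[:, 0] + 30.0 * candidates[:, 1]   # hypothetical utility

def pareto_front(energy, utility):
    """Indices of candidates not strictly dominated in both objectives."""
    keep = []
    for i in range(len(energy)):
        dominated = np.any((energy < energy[i]) & (utility > utility[i]))
        if not dominated:
            keep.append(i)
    return np.array(keep)

front = pareto_front(energy, utility)
print(f"{len(front)} non-dominated allocations out of {len(energy)}")
```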
Virtual terrain: a security-based representation of a computer network
NASA Astrophysics Data System (ADS)
Holsopple, Jared; Yang, Shanchieh; Argauer, Brian
2008-03-01
Much research has been put forth in recent years toward the detection, correlation, and prediction of cyber attacks. As this body of research progresses, there is an increasing need for contextual information about a computer network to provide an accurate situational assessment. Typical approaches adopt contextual information as needed; yet such ad hoc efforts may lead to unnecessary or even conflicting features. The concept of virtual terrain is, therefore, developed and investigated in this work. Virtual terrain is a common representation of crucial information about network vulnerabilities, accessibilities, and criticalities. A virtual terrain model encompasses operating systems, firewall rules, running services, missions, user accounts, and network connectivity. It is defined as connected graphs with arc attributes defining dynamic relationships among vertices modeling network entities, such as services, users, and machines. The virtual terrain representation is designed to allow feasible development and maintenance of the model, as well as efficacy in terms of the use of the model. This paper will describe the considerations in developing the virtual terrain schema, exemplary virtual terrain models, and algorithms utilizing the virtual terrain model for situation and threat assessment.
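A rough sketch of such an attributed graph is shown below. The node and edge attributes are hypothetical and do not reproduce the paper's virtual terrain schema; the point is only that network entities become vertices and their dynamic relationships become attributed arcs.

```python
import networkx as nx

# Illustrative virtual-terrain-style graph: machines, services, and users as
# vertices; relationships and access rules as attributed arcs. The schema and
# attribute names here are hypothetical.
vt = nx.DiGraph()

vt.add_node("web01", kind="machine", os="Linux", criticality="high")
vt.add_node("httpd", kind="service", port=80, cve=["CVE-XXXX-YYYY"])   # placeholder ID
vt.add_node("alice", kind="user", privileges="admin")

vt.add_edge("httpd", "web01", relation="runs_on")
vt.add_edge("alice", "web01", relation="can_login", firewall_allowed=True)

# Example query: which machines does a compromised service expose?
exposed = [m for s, m, d in vt.edges(data=True)
           if s == "httpd" and d.get("relation") == "runs_on"]
print(exposed)
```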
Building an International Collaboration for GeoInformatics
NASA Astrophysics Data System (ADS)
Snyder, W. S.; Lehnert, K.; Klump, J.
2005-12-01
Geoinformatics (cyberinfrastructure for the geosciences) is being developed as a linked system of sites that provide the Earth science community with a library of research data, research-grade tools to manipulate, mine, analyze and model interdisciplinary data, and mechanisms to provide the necessary computational resources for these activities. Our science is global in scope; hence, geoinformatics (GI) must be an international effort. How do we build this international GI? What are the main challenges presented by the political, cultural, organizational, and technical diversity of the global science community that we need to address to achieve a truly global cyberinfrastructure for the Geosciences? GI needs to be developed in an internet-like fashion, establishing connections among independent, globally distributed sites ('nodes') that will share, link, and integrate their data holdings and services. Independence of the GI pieces with respect to goals, scope, and approaches is critical to sustain commitment from people to build a GI node for which they feel ownership and get credit. This should not be fought by funding agencies - and certainly not by state and federal agencies. Communication, coordination, and collaboration are the core efforts to build the connections, but incentives and resources are required to advance and support them. Part of the coordination effort is development and maintenance of standards. Who should set these standards and govern their modification? Do we need an official international body to do so, and should this be a "governing body" or an "advisory body"? What role should international commissions and bodies such as CODATA/ICSU or IUGS-CGI, international societies and unions, the national geological surveys and other federal agencies play? Guidance from the science community is key to construct a system that geo-researchers will want to use and that meets their needs. Only when the community endorses GI as a fundamental platform to support science research will we be able to address the challenging question of how to ensure sustained funding of GI, which will undoubtedly be a costly effort, and to convince governments and funding agencies to invest in a global undertaking. Perhaps the most challenging problems are cultural ones such as the "my data" issue, the reluctance to share data even if they were generated with public funding. This is slowly being resolved by some funding agencies through moratorium periods for use of data before they are available to everyone, but will require a sustained "education" effort of the geoscience research community. Geoinformatics is the platform for a new paradigm in how we conduct our research. The challenges to building an international GI are quite serious, some might say daunting, but the conveners of this session feel that the effort is not only worth it, but required for the sake of our science research.
A compendium of computational fluid dynamics at the Langley Research Center
NASA Technical Reports Server (NTRS)
1980-01-01
Through numerous summary examples, the scope and general nature of the computational fluid dynamics (CFD) effort at Langley are identified. These summaries will help inform researchers in CFD and line management at Langley of the overall effort. In addition to the in-house efforts, out-of-house CFD work supported by Langley through industrial contracts and university grants is included. Researchers were encouraged to include summaries of work in preliminary and tentative stages of development as well as current research approaching definitive results.
Motivational Beliefs, Student Effort, and Feedback Behaviour in Computer-Based Formative Assessment
ERIC Educational Resources Information Center
Timmers, Caroline F.; Braber-van den Broek, Jannie; van den Berg, Stephanie M.
2013-01-01
Feedback can only be effective when students seek feedback and process it. This study examines the relations between students' motivational beliefs, effort invested in a computer-based formative assessment, and feedback behaviour. Feedback behaviour is represented by whether a student seeks feedback and the time a student spends studying the…
Establishing a K-12 Circuit Design Program
ERIC Educational Resources Information Center
Inceoglu, Mustafa M.
2010-01-01
Outreach, as defined by Wikipedia, is an effort by an organization or group to connect its ideas or practices to the efforts of other organizations, groups, specific audiences, or the general public. This paper describes a computer engineering outreach project of the Department of Computer Engineering at Ege University, Izmir, Turkey, to a local…
ERIC Educational Resources Information Center
Sexton, Randall; Hignite, Michael; Margavio, Thomas M.; Margavio, Geanie W.
2009-01-01
Information Literacy is a concept that evolved as a result of efforts to move technology-based instructional and research efforts beyond the concepts previously associated with "computer literacy." While computer literacy was largely a topic devoted to knowledge of hardware and software, information literacy is concerned with students' abilities…
Workshop report on large-scale matrix diagonalization methods in chemistry theory institute
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bischof, C.H.; Shepard, R.L.; Huss-Lederman, S.
The Large-Scale Matrix Diagonalization Methods in Chemistry theory institute brought together 41 computational chemists and numerical analysts. The goal was to understand the needs of the computational chemistry community in problems that utilize matrix diagonalization techniques. This was accomplished by reviewing the current state of the art and looking toward future directions in matrix diagonalization techniques. This institute occurred about 20 years after a related meeting of similar size. During those 20 years the Davidson method continued to dominate the problem of finding a few extremal eigenvalues for many computational chemistry problems. Work on non-diagonally dominant and non-Hermitian problems as well as parallel computing has also brought new methods to bear. The changes and similarities in problems and methods over the past two decades offered an interesting viewpoint for the success in this area. One important area covered by the talks was overviews of the source and nature of the chemistry problems. The numerical analysts were uniformly grateful for the efforts to convey a better understanding of the problems and issues faced in computational chemistry. An important outcome was an understanding of the wide range of eigenproblems encountered in computational chemistry. The workshop covered problems involving self-consistent-field (SCF), configuration interaction (CI), intramolecular vibrational relaxation (IVR), and scattering problems. In atomic structure calculations using the Hartree-Fock method (SCF), the symmetric matrices can range from order hundreds to thousands. These matrices often include large clusters of eigenvalues which can be as much as 25% of the spectrum. However, if CI methods are also used, the matrix size can be between 10^4 and 10^9, where only one or a few extremal eigenvalues and eigenvectors are needed. Working with very large matrices has led to the development of…
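For context, the Davidson method mentioned above expands a search subspace with diagonally preconditioned residuals; a minimal dense-matrix sketch for the lowest eigenpair of a symmetric, diagonally dominant matrix (illustrative only, not a production implementation) is:

```python
import numpy as np

# Sketch of the Davidson method for the lowest eigenvalue of a symmetric,
# diagonally dominant matrix (illustrative only).

def davidson(A, n_iter=50, tol=1e-8):
    n = A.shape[0]
    diag = np.diag(A)
    V = np.zeros((n, 1))
    V[np.argmin(diag), 0] = 1.0                      # start from the smallest diagonal
    for _ in range(n_iter):
        V, _ = np.linalg.qr(V)                       # orthonormalize the search space
        H = V.T @ A @ V                              # small projected matrix
        theta, s = np.linalg.eigh(H)
        theta0, u = theta[0], V @ s[:, 0]            # current Ritz pair
        r = A @ u - theta0 * u                       # residual
        if np.linalg.norm(r) < tol:
            break
        # Davidson correction: apply (diag(A) - theta)^(-1) to the residual.
        denom = diag - theta0
        denom[np.abs(denom) < 1e-12] = 1e-12
        V = np.column_stack([V, r / denom])          # expand the subspace
    return theta0, u

# Toy usage: a diagonally dominant random symmetric matrix.
n = 500
A = np.diag(np.arange(1.0, n + 1)) + 1e-3 * np.random.rand(n, n)
A = 0.5 * (A + A.T)
lam, vec = davidson(A)
print(lam, np.linalg.eigvalsh(A)[0])
```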
A structured interface to the object-oriented genomics unified schema for XML-formatted data.
Clark, Terry; Jurek, Josef; Kettler, Gregory; Preuss, Daphe
2005-01-01
Data management systems are fast becoming required components in many biology laboratories as the role of computer-based information grows. Although the need for data management systems is on the rise, their inherent complexities can deter the full and routine use of their computational capabilities. The significant undertaking to implement a capable production system can be reduced in part by adapting an established data management system. In such a way, we are leveraging the Genomics Unified Schema (GUS) developed at the Computational Biology and Informatics Laboratory at the University of Pennsylvania as a foundation for managing and analysing DNA sequence data in centromere research projects around Arabidopsis thaliana and related species. Because GUS provides a core schema that includes support for genome sequences, mRNA and its expression, and annotated chromosomes, it is ideal for synthesising a variety of parameters to analyse these repetitive and highly dynamic portions of the genome. Despite this, production-strength data management frameworks are complex, requiring dedicated efforts to adapt and maintain. The work reported in this article addresses one component of such an effort, namely the pivotal task of marshalling data from various sources into GUS. In order to harness GUS for our project, and motivated by efficiency needs, we developed a structured framework for transferring data into GUS from outside sources. This technology is embodied in a GUS object-layer processor, XMLGUS. XMLGUS facilitates incorporating data into GUS by (i) formulating an XML interface that includes relational database key constraint definitions, (ii) regularising traversal through that XML, (iii) realising automatic processing of the XML with database key constraints and (iv) allowing for special processing of input data within the framework for automated processing. The application of XMLGUS to production pipeline processing for a sequencing project and inputting the Arabidopsis genome into GUS is discussed. XMLGUS is available from the Flora website (http://flora.ittc.ku.edu/).
STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, Geoffrey; Jha, Shantenu; Ramakrishnan, Lavanya
The Department of Energy (DOE) Office of Science (SC) facilities, including accelerators, light sources and neutron sources, and sensors that study the environment and the atmosphere, are producing streaming data that need to be analyzed for next-generation scientific discoveries. There has been an explosion of new research and technologies for stream analytics arising from the academic and private sectors. However, there has been no corresponding effort in either documenting the critical research opportunities or building a community that can create and foster productive collaborations. The two-part workshop series, STREAM: Streaming Requirements, Experience, Applications and Middleware Workshop (STREAM2015 and STREAM2016), was conducted to bring the community together and identify gaps and future efforts needed by both NSF and DOE. This report describes the discussions, outcomes and conclusions from STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop, the second of these workshops, held on March 22-23, 2016 in Tysons, VA. STREAM2016 focused on Department of Energy (DOE) applications, computational and experimental facilities, as well as software systems. Thus, the role of "streaming and steering" as a critical mode of connecting the experimental and computing facilities was pervasive throughout the workshop. Given the overlap in interests and challenges with industry, the workshop had significant presence from several innovative companies and major contributors. The requirements that drive the proposed research directions, identified in this report, show an important opportunity for building a competitive research and development program around streaming data. These findings and recommendations are consistent with the vision outlined in NRC Frontiers of Data and the National Strategic Computing Initiative (NCSI) [1, 2]. The discussions from the workshop are captured as topic areas covered in this report's sections. The report discusses four research directions driven by current and future application requirements reflecting the areas identified as important by STREAM2016. These include (i) Algorithms, (ii) Programming Models, Languages and Runtime Systems, (iii) Human-in-the-loop and Steering in Scientific Workflow, and (iv) Facilities.
Instrumentino: An Open-Source Software for Scientific Instruments.
Koenka, Israel Joel; Sáiz, Jorge; Hauser, Peter C
2015-01-01
Scientists often need to build dedicated computer-controlled experimental systems. For this purpose, it is becoming common to employ open-source microcontroller platforms, such as the Arduino. These boards and associated integrated software development environments provide affordable yet powerful solutions for the implementation of hardware control of transducers and acquisition of signals from detectors and sensors. It is, however, a challenge to write programs that allow interactive use of such arrangements from a personal computer. This task is particularly complex if some of the included hardware components are connected directly to the computer and not via the microcontroller. A graphical user interface framework, Instrumentino, was therefore developed to allow the creation of control programs for complex systems with minimal programming effort. By writing a single code file, a powerful custom user interface is generated, which enables the automatic running of elaborate operation sequences and observation of acquired experimental data in real time. The framework, which is written in Python, allows extension by users, and is made available as an open source project.
Colling, D.; Britton, D.; Gordon, J.; Lloyd, S.; Doyle, A.; Gronbech, P.; Coles, J.; Sansum, A.; Patrick, G.; Jones, R.; Middleton, R.; Kelsey, D.; Cass, A.; Geddes, N.; Clark, P.; Barnby, L.
2013-01-01
The Large Hadron Collider (LHC) is one of the greatest scientific endeavours to date. The construction of the collider itself and the experiments that collect data from it represent a huge investment, both financially and in terms of human effort, in our hope to understand the way the Universe works at a deeper level. Yet the volumes of data produced are so large that they cannot be analysed at any single computing centre. Instead, the experiments have all adopted distributed computing models based on the LHC Computing Grid. Without the correct functioning of this grid infrastructure the experiments would not be able to understand the data that they have collected. Within the UK, the Grid infrastructure needed by the experiments is provided by the GridPP project. We report on the operations, performance and contributions made to the experiments by the GridPP project during the years of 2010 and 2011—the first two significant years of the running of the LHC. PMID:23230163
Minkara, Mona S; Weaver, Michael N; Gorske, Jim; Bowers, Clifford R; Merz, Kenneth M
2015-08-11
There exists a sparse representation of blind and low-vision students in science, technology, engineering and mathematics (STEM) fields. This is due in part to these individuals being discouraged from pursuing STEM degrees as well as a lack of appropriate adaptive resources in upper level STEM courses and research. Mona Minkara is a rising fifth year graduate student in computational chemistry at the University of Florida. She is also blind. This account presents efforts conducted by an expansive team of university and student personnel in conjunction with Mona to adapt different portions of the graduate student curriculum to meet Mona's needs. The most important consideration is prior preparation of materials to assist with coursework and cumulative exams. Herein we present an account of the first four years of Mona's graduate experience hoping this will assist in the development of protocols for future blind and low-vision graduate students in computational chemistry.
Hiremath, Shivayogi V; Chen, Weidong; Wang, Wei; Foldes, Stephen; Yang, Ying; Tyler-Kabara, Elizabeth C; Collinger, Jennifer L; Boninger, Michael L
2015-01-01
A brain-computer interface (BCI) system transforms neural activity into control signals for external devices in real time. A BCI user needs to learn to generate specific cortical activity patterns to control external devices effectively. We call this process BCI learning, and it often requires significant effort and time. Therefore, it is important to study this process and develop novel and efficient approaches to accelerate BCI learning. This article reviews major approaches that have been used for BCI learning, including computer-assisted learning, co-adaptive learning, operant conditioning, and sensory feedback. We focus on BCIs based on electrocorticography and intracortical microelectrode arrays for restoring motor function. This article also explores the possibility of brain modulation techniques in promoting BCI learning, such as electrical cortical stimulation, transcranial magnetic stimulation, and optogenetics. Furthermore, as proposed by recent BCI studies, we suggest that BCI learning is in many ways analogous to motor and cognitive skill learning, and therefore skill learning should be a useful metaphor to model BCI learning.
Scale-Resolving simulations (SRS): How much resolution do we really need?
NASA Astrophysics Data System (ADS)
Pereira, Filipe M. S.; Girimaji, Sharath
2017-11-01
Scale-resolving simulations (SRS) are emerging as the computational approach of choice for many engineering flows with coherent structures. The SRS methods seek to resolve only the most important features of the coherent structures and model the remainder of the flow field with canonical closures. With reference to a typical Large-Eddy Simulation (LES), practical SRS methods aim to resolve a considerably narrower range of scales (reduced physical resolution) to achieve an adequate degree of accuracy at reasonable computational effort. While the objective of SRS is well-founded, the criteria for establishing the optimal degree of resolution required to achieve an acceptable level of accuracy are not clear. This study considers the canonical case of the flow around a circular cylinder to address the issue of `optimal' resolution. Two important criteria are developed. The first condition addresses the issue of adequate resolution of the flow field. The second guideline provides an assessment of whether the modeled field is canonical (stochastic) turbulence amenable to closure-based computations.
Solution of nonlinear time-dependent PDEs through componentwise approximation of matrix functions
NASA Astrophysics Data System (ADS)
Cibotarica, Alexandru; Lambers, James V.; Palchak, Elisabeth M.
2016-09-01
Exponential propagation iterative (EPI) methods provide an efficient approach to the solution of large stiff systems of ODEs, compared to standard integrators. However, the bulk of the computational effort in these methods is due to products of matrix functions and vectors, which can become very costly at high resolution due to an increase in the number of Krylov projection steps needed to maintain accuracy. In this paper, it is proposed to modify EPI methods by using Krylov subspace spectral (KSS) methods, instead of standard Krylov projection methods, to compute products of matrix functions and vectors. Numerical experiments demonstrate that this modification causes the number of Krylov projection steps to become bounded independently of the grid size, thus dramatically improving efficiency and scalability. As a result, for each test problem featured, as the total number of grid points increases, the growth in computation time is just below linear, while other methods achieved this only on selected test problems or not at all.
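To make the matrix-function-times-vector kernel concrete, the sketch below approximates exp(dt*A)v with a standard Arnoldi (Krylov projection) procedure. This is a generic textbook illustration of the conventional approach whose iteration count the paper's KSS modification is meant to bound, not the authors' implementation; the subspace dimension m and the dense expm call are assumptions suitable only for this toy setting.

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_expv(A, v, dt=1.0, m=30):
    """Approximate exp(dt*A) @ v with an m-step Arnoldi (Krylov) projection.

    Builds an orthonormal basis V of span{v, Av, ..., A^(m-1) v} and evaluates
    the matrix exponential only on the small projected matrix H.
    """
    n = v.shape[0]
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = np.dot(V[:, i], w)
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:              # happy breakdown: subspace is invariant
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    # exp(dt*A) v  ~  beta * V_m * expm(dt*H_m) * e_1
    return beta * V[:, :m] @ expm(dt * H[:m, :m])[:, 0]

# Small self-check against a dense computation on a stiff, diagonally dominant matrix.
rng = np.random.default_rng(0)
A = -np.diag(np.arange(1.0, 201.0)) + 0.01 * rng.standard_normal((200, 200))
v = rng.standard_normal(200)
approx = arnoldi_expv(A, v, dt=0.1, m=40)
exact = expm(0.1 * A) @ v
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))   # relative error, should be tiny
```

The cost concern raised in the abstract corresponds to m growing with grid resolution in sketches like this one; the KSS-based modification is reported to keep that count bounded.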
Computational Modeling in Structural Materials Processing
NASA Technical Reports Server (NTRS)
Meyyappan, Meyya; Arnold, James O. (Technical Monitor)
1997-01-01
High temperature materials such as silicon carbide, a variety of nitrides, and ceramic matrix composites find use in aerospace, automotive, and machine tool industries and in high speed civil transport applications. Chemical vapor deposition (CVD) is widely used in processing such structural materials. Variations of CVD include deposition on substrates, coating of fibers, inside cavities and on complex objects, and infiltration within preforms called chemical vapor infiltration (CVI). Our current knowledge of the process mechanisms, ability to optimize processes, and scale-up for large scale manufacturing is limited. In this regard, computational modeling of the processes is valuable since a validated model can be used as a design tool. The effort is similar to traditional chemically reacting flow modeling, with emphasis on multicomponent diffusion, thermal diffusion, large sets of homogeneous reactions, and surface chemistry. In the case of CVI, models for pore infiltration are needed. In the present talk, examples of SiC, nitride, and boron deposition from the author's past work will be used to illustrate the utility of computational process modeling.
Multiscale modeling of three-dimensional genome
NASA Astrophysics Data System (ADS)
Zhang, Bin; Wolynes, Peter
The genome, the blueprint of life, contains nearly all the information needed to build and maintain an entire organism. A comprehensive understanding of the genome is of paramount interest to human health and will advance progress in many areas, including life sciences, medicine, and biotechnology. The overarching goal of my research is to understand the structure-dynamics-function relationships of the human genome. In this talk, I will present our efforts in moving towards that goal, with a particular emphasis on studying the three-dimensional organization of the genome with multi-scale approaches. Specifically, I will discuss the reconstruction of genome structures at both interphase and metaphase by making use of data from chromosome conformation capture experiments. Computational modeling of the chromatin fiber at the atomistic level from first principles will also be presented as part of our effort to study the genome structure from the bottom up.
HART-II: Prediction of Blade-Vortex Interaction Loading
NASA Technical Reports Server (NTRS)
Lim, Joon W.; Tung, Chee; Yu, Yung H.; Burley, Casey L.; Brooks, Thomas; Boyd, Doug; vanderWall, Berend; Schneider, Oliver; Richard, Hugues; Raffel, Markus
2003-01-01
During the HART-I data analysis, the need was identified for comprehensive wake data, including vortex creation and aging and the wake's re-development after blade-vortex interaction. In October 2001, the US Army AFDD, NASA Langley, the German DLR, the French ONERA and the Dutch DNW performed the HART-II test as an international joint effort. The main objective was to focus on rotor wake measurement using a PIV technique, along with comprehensive data on blade deflections, airloads, and acoustics. Three prediction teams made preliminary correlation efforts with the HART-II data: a joint US team of US Army AFDD and NASA Langley, the German DLR, and the French ONERA. The predicted results showed significant improvements over the HART-I predictions, computed several years ago, indicating a better understanding of complicated wake modeling in comprehensive rotorcraft analysis. All three teams demonstrated satisfactory prediction capabilities in general, though there were slight deviations in prediction accuracy across the various disciplines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, Martin L.; Choi, C. L.; Hattrick-Simpers, J. R.
The Materials Genome Initiative, a national effort to introduce new materials into the market faster and at lower cost, has made significant progress in computational simulation and modeling of materials. To build on this progress, a large amount of experimental data for validating these models, and informing more sophisticated ones, will be required. High-throughput experimentation generates large volumes of experimental data using combinatorial materials synthesis and rapid measurement techniques, making it an ideal experimental complement to bring the Materials Genome Initiative vision to fruition. This paper reviews the state-of-the-art results, opportunities, and challenges in high-throughput experimentation for materials design. As a result, a major conclusion is that an effort to deploy a federated network of high-throughput experimental (synthesis and characterization) tools, which are integrated with a modern materials data infrastructure, is needed.
DOE unveils climate model in advance of global test
NASA Astrophysics Data System (ADS)
Popkin, Gabriel
2018-05-01
The world's growing collection of climate models has a high-profile new entry. Last week, after nearly 4 years of work, the U.S. Department of Energy (DOE) released computer code and initial results from an ambitious effort to simulate the Earth system. The new model is tailored to run on future supercomputers and designed to forecast not just how climate will change, but also how those changes might stress energy infrastructure. Results from an upcoming comparison of global models may show how well the new entrant works. But so far it is getting a mixed reception, with some questioning the need for another model and others saying the $80 million effort has yet to improve predictions of the future climate. Even the project's chief scientist, Ruby Leung of the Pacific Northwest National Laboratory in Richland, Washington, acknowledges that the model is not yet a leader.
GIS and RDBMS Used with Offline FAA Airspace Databases
NASA Technical Reports Server (NTRS)
Clark, J.; Simmons, J.; Scofield, E.; Talbott, B.
1994-01-01
A geographic information system (GIS) and relational database management system (RDBMS) were used in a Macintosh environment to access, manipulate, and display off-line FAA databases of airport and navigational aid locations, airways, and airspace boundaries. This proof-of-concept effort used data available from the Adaptation Controlled Environment System (ACES) and Digital Aeronautical Chart Supplement (DACS) databases to allow FAA cartographers and others to create computer-assisted charts and overlays as reference material for air traffic controllers. These products were created on an engineering model of the future GRASP (GRaphics Adaptation Support Position) workstation that will be used to make graphics and text products for the Advanced Automation System (AAS), which will upgrade and replace the current air traffic control system. Techniques developed during the prototyping effort have shown the viability of using databases to create graphical products without the need for an intervening data entry step.
Experience Transitioning Models and Data at the NOAA Space Weather Prediction Center
NASA Astrophysics Data System (ADS)
Berger, Thomas
2016-07-01
The NOAA Space Weather Prediction Center has a long history of transitioning research data and models into operations and with the validation activities required. The first stage in this process involves demonstrating that the capability has sufficient value to customers to justify the cost needed to transition it and to run it continuously and reliably in operations. Once the overall value is demonstrated, a substantial effort is then required to develop the operational software from the research codes. The next stage is to implement and test the software and product generation on the operational computers. Finally, effort must be devoted to establishing long-term measures of performance, maintaining the software, and working with forecasters, customers, and researchers to improve over time the operational capabilities. This multi-stage process of identifying, transitioning, and improving operational space weather capabilities will be discussed using recent examples. Plans for future activities will also be described.
Electronic Nose and Electronic Tongue
NASA Astrophysics Data System (ADS)
Bhattacharyya, Nabarun; Bandhopadhyay, Rajib
Human beings have five senses, namely, vision, hearing, touch, smell and taste. Sensors for vision, hearing and touch have been under development for many years. The need for sensors capable of mimicking the senses of smell and taste has been felt only recently in the food industry, environmental monitoring and several industrial applications. In the ever-widening horizon of frontier research in the field of electronics and advanced computing, the emergence of the electronic nose (E-Nose) and electronic tongue (E-Tongue) has been drawing the attention of scientists and technologists for more than a decade. By intelligent integration of a multitude of technologies such as chemometrics, microelectronics and advanced soft computing, human olfaction has been successfully mimicked by new techniques called machine olfaction (Pearce et al. 2002). But the very essence of such research and development efforts has centered on the development of customized electronic nose and electronic tongue solutions specific to individual applications. In fact, current research trends clearly point to the fact that a machine olfaction system as versatile, universal and broadband as the human nose and tongue may not be feasible in the decades to come. But application-specific solutions may certainly be demonstrated and commercialized through modulation of sensor design and fine-tuning of the soft computing solutions. This chapter deals with the theory and development of E-Nose and E-Tongue technology and their applications. A succinct account of future trends of R&D efforts in this field, with the objective of establishing correlation between machine olfaction and human perception, is also included.
Impact of remote sensing upon the planning, management, and development of water resources
NASA Technical Reports Server (NTRS)
Loats, H. L.; Fowler, T. R.; Frech, S. L.
1974-01-01
A survey of the principal water resource users was conducted to determine the impact of new remote data streams on hydrologic computer models. The analysis of the responses and direct contact demonstrated that: (1) the majority of water resource effort of the type suitable to remote sensing inputs is conducted by major federal water resources agencies or through federally stimulated research, (2) the federal government develops most of the hydrologic models used in this effort; and (3) federal computer power is extensive. The computers, computer power, and hydrologic models in current use were determined.
Architecture independent environment for developing engineering software on MIMD computers
NASA Technical Reports Server (NTRS)
Valimohamed, Karim A.; Lopez, L. A.
1990-01-01
Engineers are constantly faced with solving problems of increasing complexity and detail. Multiple Instruction stream Multiple Data stream (MIMD) computers have been developed to overcome the performance limitations of serial computers. The hardware architectures of MIMD computers vary considerably and are much more sophisticated than serial computers. Developing large scale software for a variety of MIMD computers is difficult and expensive. There is a need to provide tools that facilitate programming these machines. First, the issues that must be considered to develop those tools are examined. The two main areas of concern were architecture independence and data management. Architecture independent software facilitates software portability and improves the longevity and utility of the software product. It provides some form of insurance for the investment of time and effort that goes into developing the software. The management of data is a crucial aspect of solving large engineering problems. It must be considered in light of the new hardware organizations that are available. Second, the functional design and implementation of a software environment that facilitates developing architecture independent software for large engineering applications are described. The topics of discussion include: a description of the model that supports the development of architecture independent software; identifying and exploiting concurrency within the application program; data coherence; engineering data base and memory management.
Kazakis, Georgios; Kanellopoulos, Ioannis; Sotiropoulos, Stefanos; Lagaros, Nikos D
2017-10-01
The construction industry has a major impact on the environment in which we spend most of our lives. It is therefore important that the outcome of architectural intuition performs well and complies with the design requirements. Architects usually describe as "optimal design" their choice among a rather limited set of design alternatives, dictated by their experience and intuition. However, modern design of structures requires accounting for a great number of criteria derived from multiple disciplines, often of conflicting nature. Such criteria derive from structural engineering, eco-design, bioclimatic and acoustic performance. The resulting vast number of alternatives enhances the need for computer-aided architecture in order to increase the possibility of arriving at a more preferable solution. Therefore, the incorporation of smart, automatic tools able to further guide the designer's intuition becomes even more indispensable in the design process. The principal aim of this study is to present possibilities for integrating automatic computational techniques related to topology optimization into the intuition phase of the design of civil structures, as part of computer-aided architectural design. In this direction, different aspects of a new computer-aided architectural era related to the interpretation of the optimized designs, the difficulties resulting from the increased computational effort, and 3D printing capabilities are covered herein.
An opportunity cost model of subjective effort and task performance
Kurzban, Robert; Duckworth, Angela; Kable, Joseph W.; Myers, Justus
2013-01-01
Why does performing certain tasks cause the aversive experience of mental effort and concomitant deterioration in task performance? One explanation posits a physical resource that is depleted over time. We propose an alternate explanation that centers on mental representations of the costs and benefits associated with task performance. Specifically, certain computational mechanisms, especially those associated with executive function, can be deployed for only a limited number of simultaneous tasks at any given moment. Consequently, the deployment of these computational mechanisms carries an opportunity cost – that is, the next-best use to which these systems might be put. We argue that the phenomenology of effort can be understood as the felt output of these cost/benefit computations. In turn, the subjective experience of effort motivates reduced deployment of these computational mechanisms in the service of the present task. These opportunity cost representations, then, together with other cost/benefit calculations, determine effort expended and, everything else equal, result in performance reductions. In making our case for this position, we review alternate explanations both for the phenomenology of effort associated with these tasks and for performance reductions over time. Likewise, we review the broad range of relevant empirical results from across subdisciplines, especially psychology and neuroscience. We hope that our proposal will help to build links among the diverse fields that have been addressing similar questions from different perspectives, and we emphasize ways in which alternate models might be empirically distinguished. PMID:24304775
A small Unix-based data acquisition system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engberg, D.; Glanzman, T.
1994-02-01
The proposed SLAC B Factory detector plans to use Unix-based machines for all aspects of computing, including real-time data acquisition and experimental control. An R and D program has been established to investigate the use of Unix in the various aspects of experimental computation. Earlier R and D work investigated the basic real-time aspects of the IBM RS/6000 workstation running AIX, which claims to be a real-time operating system. The next step in this R and D is the construction of a prototype data acquisition system which attempts to exercise many of the features needed in the final on-line system in a realistic situation. For this project, the authors have combined efforts with a team studying the use of novel cell designs and gas mixtures in a new prototype drift chamber.
Star Identification Without Attitude Knowledge: Testing with X-Ray Timing Experiment Data
NASA Technical Reports Server (NTRS)
Ketchum, Eleanor
1997-01-01
As the budget for the scientific exploration of space shrinks, the need for more autonomous spacecraft increases. For a spacecraft with a star tracker, the ability to determine attitude autonomously from a lost-in-space state requires the capability to identify the stars in the field of view of the tracker. Although there have been efforts to produce autonomous star trackers which perform this function internally, many programs cannot afford these sensors. The author previously presented a method for identifying stars without a priori attitude knowledge, specifically targeted for onboard computers as it minimizes the necessary computer storage. The method has previously been tested with simulated data. This paper provides results of star identification without a priori attitude knowledge using flight data from two 8 by 8 degree charge coupled device star trackers onboard the X-Ray Timing Experiment.
Experimental and computational investigation of lateral gauge response in polycarbonate
NASA Astrophysics Data System (ADS)
Eliot, Jim; Harris, Ernst; Hazell, Paul; Appleby-Thomas, Gareth; Winter, Ronald; Wood, David; Owen, Gareth
2011-06-01
Polycarbonate's use in personal armour systems means its high strain-rate response has been extensively studied. Interestingly, embedded lateral manganin stress gauges in polycarbonate have shown gradients behind incident shocks, suggestive of increasing shear strength. However, such gauges need to be embedded in a central (typically epoxy) interlayer - an inherently invasive approach. Recently, research on metal systems has suggested that the interlayer/target impedance mismatch may contribute to observed gradients in lateral stress. Here, experimental T-gauge (Vishay Micro-Measurements® type J2M-SS-580SF-025) traces from polycarbonate targets are compared to computational simulations. This work extends previous efforts to the case where the interlayer and the surrounding matrix (target) have similar impedances. Further, experiments and simulations are presented investigating the effects of a "dry joint" in polycarbonate, in which no encapsulating medium is employed.
NASA Technical Reports Server (NTRS)
Choo, Yung K.; Slater, John W.; Vickerman, Mary B.; VanZante, Judith F.; Wadel, Mary F. (Technical Monitor)
2002-01-01
Issues associated with the analysis of 'icing effects' on airfoil and wing performance are discussed, along with accomplishments and efforts to overcome difficulties with ice. Because of the infinite variation of ice shapes and their high degree of complexity, computational 'icing effects' studies using available software tools must address many difficulties in geometry acquisition and modeling, grid generation, and flow simulation. The value of each technology component needs to be weighed from the perspective of the entire analysis process, from geometry to flow simulation. Even though CFD codes are yet to be validated for flows over iced airfoils and wings, numerical simulation, when considered together with wind tunnel tests, can provide valuable insights into 'icing effects' and advance our understanding of the relationship between ice characteristics and their effects on performance degradation.
HGML: a hypertext guideline markup language.
Hagerty, C. G.; Pickens, D.; Kulikowski, C.; Sonnenberg, F.
2000-01-01
Existing text-based clinical practice guidelines can be difficult to put into practice. While a growing number of such documents have gained acceptance in the medical community and contain a wealth of valuable information, the time required to digest them is substantial. Yet the expressive power, subtlety and flexibility of natural language pose challenges when designing computer tools that will help in their application. At the same time, formal computer languages typically lack such expressiveness and the effort required to translate existing documents into these languages may be costly. We propose a method based on the mark-up concept for converting text-based clinical guidelines into a machine-operable form. This allows existing guidelines to be manipulated by machine, and viewed in different formats at various levels of detail according to the needs of the practitioner, while preserving their originally published form. PMID:11079898
Instability Mechanisms of Thermally-Driven Interfacial Flows in Liquid-Encapsulated Crystal Growth
NASA Technical Reports Server (NTRS)
Haj-Hariri, Hossein; Borhan, Ali
1997-01-01
During the past year, a great deal of effort was focused on the enhancement and refinement of the computational tools developed as part of our previous NASA grant. In particular, the interface mollification algorithm developed earlier was extended to incorporate the effects of surface-rheological properties in order to allow the study of thermocapillary flows in the presence of surface contamination. These tools will be used in the computational component of the proposed research in the remaining years of this grant. A detailed description of the progress made in this area is provided elsewhere. Briefly, the method developed allows for the convection and diffusion of bulk-insoluble surfactants on a moving and deforming interface. The novelty of the method is its grid independence: there is no need for front tracking, surface reconstruction, body-fitted grid generation, or metric evaluations; these are all very expensive computational tasks in three dimensions. For small local radii of curvature there is a need for local grid adaption so that the smearing thickness remains a small fraction of the radius of curvature. A special Neumann boundary condition was devised and applied so that the calculated surfactant concentration has no variations normal to the interface, and it is hence truly a surface-defined quantity. The discretized governing equations are solved subsequently using a time-split integration scheme which updates the concentration and the shape successively. Results demonstrate excellent agreement between the computed and exact solutions.
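For context, a standard form of the convection-diffusion balance for a bulk-insoluble surfactant concentration on a moving, deforming interface (a commonly cited form from the literature, offered here only as an illustration and not necessarily the exact equation set solved in this effort) is:

```latex
\frac{\partial \Gamma}{\partial t}
  + \nabla_s \cdot \left( \Gamma\, \mathbf{u}_s \right)
  + \Gamma \left( \nabla_s \cdot \mathbf{n} \right)\left( \mathbf{u} \cdot \mathbf{n} \right)
  = D_s\, \nabla_s^{2} \Gamma
```

Here Gamma is the surface surfactant concentration, u_s the tangential surface velocity, u the full interface velocity, n the unit normal, nabla_s the surface gradient operator, and D_s the surface diffusivity; the third term accounts for concentration changes due to local dilatation of the interface as it deforms.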
Development of Large-Scale Spacecraft Fire Safety Experiments
NASA Technical Reports Server (NTRS)
Ruff, Gary A.; Urban, David; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Cowlard, Adam J.;
2013-01-01
The status is presented of a spacecraft fire safety research project that is under development to reduce the uncertainty and risk in the design of spacecraft fire safety systems by testing at nearly full scale in low-gravity. Future crewed missions are expected to be more complex and longer in duration than previous exploration missions outside of low-earth orbit. This will increase the challenge of ensuring a fire-safe environment for the crew throughout the mission. Based on our fundamental uncertainty of the behavior of fires in low-gravity, the need for realistic scale testing at reduced gravity has been demonstrated. To address this gap in knowledge, a project has been established under the NASA Advanced Exploration Systems Program under the Human Exploration and Operations Mission directorate with the goal of substantially advancing our understanding of the spacecraft fire safety risk. Associated with the project is an international topical team of fire experts from other space agencies who conduct research that is integrated into the overall experiment design. The experiments are under development to be conducted in an Orbital Science Corporation Cygnus vehicle after it has undocked from the ISS. Although the experiment will need to meet rigorous safety requirements to ensure the carrier vehicle does not sustain damage, the absence of a crew removes the need for strict containment of combustion products. The tests will be fully automated with the data downlinked at the conclusion of the test before the Cygnus vehicle reenters the atmosphere. A computer modeling effort will complement the experimental effort. The international topical team is collaborating with the NASA team in the definition of the experiment requirements and performing supporting analysis, experimentation and technology development. The status of the overall experiment and the associated international technology development efforts are summarized.
DOE planning workshop advanced biomedical technology initiative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-06-01
The Department of Energy has made major contributions in the biomedical sciences with programs in medical applications and instrumentation development, molecular biology, the human genome, and computational sciences. In an effort to help determine DOE's role in applying these capabilities to the nation's health care needs, a planning workshop was held on January 11-12, 1994. The workshop was co-sponsored by the Department's Office of Energy Research and Defense Programs organizations. Participants represented industry, medical research institutions, national laboratories, and several government agencies. They attempted to define the needs of the health care industry, identify DOE laboratory capabilities that address these needs, and determine how DOE, in cooperation with other team members, could begin an initiative with the goals of reducing health care costs while improving the quality of health care delivery through the proper application of technology and computational systems. This document is a report of that workshop. Seven major technology development thrust areas were considered. Each involves development of various aspects of imaging, optical, sensor, and data processing and storage technologies. The thrust areas as prioritized for DOE are: (1) Minimally Invasive Procedures; (2) Technologies for Individual Self Care; (3) Outcomes Research; (4) Telemedicine; (5) Decision Support Systems; (6) Assistive Technology; (7) Prevention and Education.
Precision medicine needs pioneering clinical bioinformaticians.
Gómez-López, Gonzalo; Dopazo, Joaquín; Cigudosa, Juan C; Valencia, Alfonso; Al-Shahrour, Fátima
2017-10-25
Success in precision medicine depends on accessing high-quality genetic and molecular data from large, well-annotated patient cohorts that couple biological samples to comprehensive clinical data, which in conjunction can lead to effective therapies. From such a scenario emerges the need for a new professional profile, an expert bioinformatician with training in clinical areas who can make sense of multi-omics data to improve therapeutic interventions in patients, and the design of optimized basket trials. In this review, we first describe the main policies and international initiatives that focus on precision medicine. Secondly, we review the currently ongoing clinical trials in precision medicine, introducing the concept of 'precision bioinformatics', and we describe current pioneering bioinformatics efforts aimed at implementing tools and computational infrastructures for precision medicine in health institutions around the world. Thirdly, we discuss the challenges related to the clinical training of bioinformaticians, and the urgent need for computational specialists capable of assimilating medical terminologies and protocols to address real clinical questions. We also propose some skills required to carry out common tasks in clinical bioinformatics and some tips for emergent groups. Finally, we explore the future perspectives and the challenges faced by precision medicine bioinformatics.
Design Protocols and Analytical Strategies that Incorporate Structural Reliability Models
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.
1997-01-01
Ceramic matrix composites (CMC) and intermetallic materials (e.g., single crystal nickel aluminide) are high performance materials that exhibit attractive mechanical, thermal and chemical properties. These materials are critically important in advancing certain performance aspects of gas turbine engines. From an aerospace engineer's perspective the new generation of ceramic composites and intermetallics offers a significant potential for raising the thrust/weight ratio and reducing NO(x) emissions of gas turbine engines. These aspects have increased interest in utilizing these materials in the hot sections of turbine engines. However, as these materials evolve and their performance characteristics improve, a persistent need exists for state-of-the-art analytical methods that predict the response of components fabricated from CMC and intermetallic material systems. This need provided the motivation for the technology developed under this research effort. Continuous ceramic fiber composites exhibit an increase in work of fracture, which allows for "graceful" rather than catastrophic failure. When loaded in the fiber direction, these composites retain substantial strength capacity beyond the initiation of transverse matrix cracking despite the fact that neither of their constituents would exhibit such behavior if tested alone. As additional load is applied beyond first matrix cracking, the matrix tends to break in a series of cracks bridged by the ceramic fibers. Any additional load is borne increasingly by the fibers until the ultimate strength of the composite is reached. Thus modeling efforts supported under this research effort have focused on predicting this sort of behavior. For single crystal intermetallics the issues that motivated the technology development involved questions relating to material behavior and component design. Thus the research effort supported by this grant had to determine the statistical nature and source of fracture in a high strength, NiAl single crystal turbine blade material; map a simplistic failure strength envelope of the material; develop a statistically based reliability computer algorithm; verify the reliability model and computer algorithm; and model stator vanes for rig tests. Thus establishing design protocols that enable the engineer to analyze and predict the mechanical behavior of ceramic composites and intermetallics would mitigate the prototype (trial and error) approach currently used by the engineering community. The primary objective of the research effort supported by this short term grant is the continued creation of enabling technologies for the macroanalysis of components fabricated from ceramic composites and intermetallic material systems. The creation of enabling technologies aids in shortening the product development cycle of components fabricated from the new high technology materials.
Design Protocols and Analytical Strategies that Incorporate Structural Reliability Models
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.
1997-01-01
Ceramic matrix composites (CMC) and intermetallic materials (e.g., single crystal nickel aluminide) are high performance materials that exhibit attractive mechanical, thermal, and chemical properties. These materials are critically important in advancing certain performance aspects of gas turbine engines. From an aerospace engineer's perspective the new generation of ceramic composites and intermetallics offers a significant potential for raising the thrust/weight ratio and reducing NO(sub x) emissions of gas turbine engines. These aspects have increased interest in utilizing these materials in the hot sections of turbine engines. However, as these materials evolve and their performance characteristics improve, a persistent need exists for state-of-the-art analytical methods that predict the response of components fabricated from CMC and intermetallic material systems. This need provided the motivation for the technology developed under this research effort. Continuous ceramic fiber composites exhibit an increase in work of fracture, which allows for 'graceful' rather than catastrophic failure. When loaded in the fiber direction, these composites retain substantial strength capacity beyond the initiation of transverse matrix cracking despite the fact that neither of their constituents would exhibit such behavior if tested alone. As additional load is applied beyond first matrix cracking, the matrix tends to break in a series of cracks bridged by the ceramic fibers. Any additional load is borne increasingly by the fibers until the ultimate strength of the composite is reached. Thus modeling efforts supported under this research effort have focused on predicting this sort of behavior. For single crystal intermetallics the issues that motivated the technology development involved questions relating to material behavior and component design. Thus the research effort supported by this grant had to determine the statistical nature and source of fracture in a high strength, NiAl single crystal turbine blade material; map a simplistic failure strength envelope of the material; develop a statistically based reliability computer algorithm; verify the reliability model and computer algorithm; and model stator vanes for rig tests. Thus establishing design protocols that enable the engineer to analyze and predict the mechanical behavior of ceramic composites and intermetallics would mitigate the prototype (trial and error) approach currently used by the engineering community. The primary objective of the research effort supported by this short term grant is the continued creation of enabling technologies for the macro-analysis of components fabricated from ceramic composites and intermetallic material systems. The creation of enabling technologies aids in shortening the product development cycle of components fabricated from the new high technology materials.
An Investigation of the Flow Physics of Acoustic Liners by Direct Numerical Simulation
NASA Technical Reports Server (NTRS)
Watson, Willie R. (Technical Monitor); Tam, Christopher
2004-01-01
This report concentrates on the effort and status of work done on three-dimensional (3-D) simulation of a multi-hole resonator in an impedance tube. This work is coordinated with a parallel experimental effort to be carried out at the NASA Langley Research Center. The outline of this report is as follows: 1. Preliminary consideration. 2. Computational model. 3. Mesh design and parallel computing. 4. Visualization. 5. Status of computer code development.
Re-analysis of Alaskan benchmark glacier mass-balance data using the index method
Van Beusekom, Ashely E.; O'Nell, Shad R.; March, Rod S.; Sass, Louis C.; Cox, Leif H.
2010-01-01
At Gulkana and Wolverine Glaciers, designated the Alaskan benchmark glaciers, we re-analyzed and re-computed the mass balance time series from 1966 to 2009 to accomplish our goal of making more robust time series. Each glacier's data record was analyzed with the same methods. For surface processes, we estimated missing information with an improved degree-day model. Degree-day models predict ablation from the sum of daily mean temperatures and an empirical degree-day factor. We modernized the traditional degree-day model and derived new degree-day factors in an effort to match the balance time series more closely. We estimated missing yearly-site data with a new balance gradient method. These efforts showed that an additional step needed to be taken at Wolverine Glacier to adjust for non-representative index sites. As with the previously calculated mass balances, the re-analyzed balances showed a continuing trend of mass loss. We noted that the time series, and thus our estimate of the cumulative mass loss over the period of record, was very sensitive to the data input, and suggest the need to add data-collection sites and modernize our weather stations.
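As a point of reference, the classical positive degree-day relation that such models generalize can be written as follows; this is a generic textbook form, not the specific modernized model used in the re-analysis:

```latex
a = f \sum_{i=1}^{N} \max\!\left( T_i , 0 \right)
```

Here a is the ablation accumulated over the period, T_i is the daily mean temperature on day i, and f is an empirically calibrated degree-day factor, typically taking different values for snow and ice surfaces.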
Developing the Next Generation of Science Data System Engineers
NASA Technical Reports Server (NTRS)
Moses, John F.; Behnke, Jeanne; Durachka, Christopher D.
2016-01-01
At Goddard, engineers and scientists with a range of experience in science data systems are needed to employ new technologies and develop advances in capabilities for supporting new Earth and Space science research. Engineers with extensive experience in science data, software engineering and computer-information architectures are needed to lead and perform these activities. The increasing types and complexity of instrument data and emerging computer technologies, coupled with the current shortage of computer engineers with backgrounds in science, have led to the need to develop a career path for science data systems engineers and architects. The current career path for undergraduate students studying disciplines such as computer engineering or the physical sciences generally begins with serving on a development team in any of the disciplines, where they can work in depth on existing Goddard data systems or serve with a specific NASA science team. There they begin to understand the data, infuse technologies, and begin to know the architectures of science data systems. From here the typical career involves peer mentoring, on-the-job training or graduate-level studies in analytics, computational science and applied science and mathematics. At the most senior level, engineers become subject matter experts and system architect experts, leading discipline-specific data centers and large software development projects. They are recognized as subject matter experts in a science domain, they have project management expertise, they lead standards efforts and they lead international projects. A long career development remains necessary not only because of the breadth of knowledge required across physical sciences and engineering disciplines, but also because of the diversity of instrument data being developed today both by NASA and international partner agencies, and because multidiscipline science and practitioner communities expect to have access to all types of observational data. This paper describes an approach to defining career-path guidance for college-bound high school and undergraduate engineering students, and for junior and senior engineers from various disciplines.
Developing the Next Generation of Science Data System Engineers
NASA Astrophysics Data System (ADS)
Moses, J. F.; Durachka, C. D.; Behnke, J.
2015-12-01
At Goddard, engineers and scientists with a range of experience in science data systems are needed to employ new technologies and develop advances in capabilities for supporting new Earth and Space science research. Engineers with extensive experience in science data, software engineering and computer-information architectures are needed to lead and perform these activities. The increasing types and complexity of instrument data and emerging computer technologies, coupled with the current shortage of computer engineers with backgrounds in science, have led to the need to develop a career path for science data systems engineers and architects. The current career path for undergraduate students studying disciplines such as computer engineering or the physical sciences generally begins with serving on a development team in any of the disciplines, where they can work in depth on existing Goddard data systems or serve with a specific NASA science team. There they begin to understand the data, infuse technologies, and begin to know the architectures of science data systems. From here the typical career involves peer mentoring, on-the-job training or graduate-level studies in analytics, computational science and applied science and mathematics. At the most senior level, engineers become subject matter experts and system architect experts, leading discipline-specific data centers and large software development projects. They are recognized as subject matter experts in a science domain, they have project management expertise, they lead standards efforts and they lead international projects. A long career development remains necessary not only because of the breadth of knowledge required across physical sciences and engineering disciplines, but also because of the diversity of instrument data being developed today both by NASA and international partner agencies, and because multi-discipline science and practitioner communities expect to have access to all types of observational data. This paper describes an approach to defining career-path guidance for college-bound high school and undergraduate engineering students, and for junior and senior engineers from various disciplines.
Automated Estimation Of Software-Development Costs
NASA Technical Reports Server (NTRS)
Roush, George B.; Reini, William
1993-01-01
COSTMODL is an automated software-development estimation tool. Yields significant reduction in risk of cost overruns and failed projects. Accepts description of software product to be developed and computes estimates of effort required to produce it, calendar schedule required, and distribution of effort and staffing as a function of a defined set of development life-cycle phases. Written for IBM PC(R)-compatible computers.
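To give a feel for the kind of parametric estimate such a tool computes, the sketch below applies the Basic COCOMO organic-mode relations for effort and schedule as a function of product size. The coefficients are the published Basic COCOMO values and the 32 KSLOC input is an arbitrary example; this illustrates a generic parametric model rather than COSTMODL's own calibrated equations.

```python
def basic_cocomo_organic(ksloc):
    """Basic COCOMO, organic mode: effort (person-months) and schedule (months).

    Coefficients are the published Basic COCOMO organic-mode values, used here
    only to illustrate a parametric estimate, not COSTMODL's calibration.
    """
    effort = 2.4 * ksloc ** 1.05          # person-months
    schedule = 2.5 * effort ** 0.38       # calendar months
    avg_staff = effort / schedule         # average full-time staff
    return effort, schedule, avg_staff

effort, schedule, staff = basic_cocomo_organic(32.0)  # e.g. a 32 KSLOC product
print(f"effort ~ {effort:.0f} person-months, "
      f"schedule ~ {schedule:.1f} months, staff ~ {staff:.1f}")
```

Tools in this class typically refine such a baseline with cost drivers and a life-cycle phase distribution, which is the "distribution of effort and staffing" the abstract refers to.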
Small Unix data acquisition system
NASA Astrophysics Data System (ADS)
Engberg, D.; Glanzman, T.
1994-02-01
An R&D program has been established to investigate the use of Unix in the various aspects of experimental computation. Earlier R&D work investigated the basic real-time aspects of the IBM RS/6000 workstation running AIX, which claims to be a real-time operating system. The next step in this R&D is the construction of a prototype data acquisition system which attempts to exercise many of the features needed in the final on-line system in a realistic situation. For this project, we have combined efforts with a team studying the use of novel cell designs and gas mixtures in a new prototype drift chamber.
Progress toward the development of an aircraft icing analysis capability
NASA Technical Reports Server (NTRS)
Shaw, R. J.
1984-01-01
An overview of the NASA efforts to develop an aircraft icing analysis capability is presented. Discussions are included of the overall and long-term objectives of the program as well as the current capabilities and limitations of the various computer codes being developed. Descriptions are given of codes being developed to analyze two- and three-dimensional trajectories of water droplets, airfoil ice accretion, aerodynamic performance degradation of components and complete aircraft configurations, electrothermal deicers, and fluid freezing point depressant deicers. The need for benchmark and verification data to support the code development is also discussed.
Graphical user interface for wireless sensor networks simulator
NASA Astrophysics Data System (ADS)
Paczesny, Tomasz; Paczesny, Daniel; Weremczuk, Jerzy
2008-01-01
Wireless Sensor Networks (WSN) are currently a very popular area of development. They are suited to many applications, from military through environmental monitoring, healthcare, home automation and others. Those networks, when working in a dynamic, ad-hoc model, need effective protocols which must differ from common computer network algorithms. Research on those protocols would be difficult without a simulation tool, because real applications often use many nodes, and tests on such big networks take much effort and cost. The paper presents a Graphical User Interface (GUI) for a simulator which is dedicated to WSN studies, especially for the evaluation of routing and data link protocols.
Remote Sensing: A valuable tool in the Forest Service decision making process. [in Utah
NASA Technical Reports Server (NTRS)
Stanton, F. L.
1975-01-01
Forest Service studies for integrating remotely sensed data into existing information systems highlight a need to: (1) re-examine present methods of collecting and organizing data, (2) develop an integrated information system for rapidly processing and interpreting data, (3) apply existing technological tools in new ways, and (4) provide accurate and timely information for making the right management decisions. The Forest Service developed an integrated information system using remote sensors, microdensitometers, computer hardware and software, and interactive accessories. Their efforts substantially reduce the time required to collect and process data.
NSSDC activities with 12-inch optical disk drives
NASA Technical Reports Server (NTRS)
Lowrey, Barbara E.; Lopez-Swafford, Brian
1986-01-01
The development status of optical-disk data transfer and storage technology at the National Space Science Data Center (NSSDC) is surveyed. The aim of the R&D program is to facilitate the exchange of large volumes of data. Current efforts focus on a 12-inch 1-Gbyte write-once/read-many disk and a disk drive which interfaces with VAX/VMS computer systems. The history of disk development at NSSDC is traced; the results of integration and performance tests are summarized; the operating principles of the 12-inch system are explained and illustrated with diagrams; and the need for greater standardization is indicated.
Cardarelli, Roberto; Reese, David; Roper, Karen L.; Cardarelli, Kathryn; Feltner, Frances J.; Studts, Jamie L.; Knight, Jennifer R.; Armstrong, Debra; Weaver, Anthony; Shaffer, Dana
2017-01-01
For low dose CT lung cancer screening to be effective in curbing disease mortality, efforts are needed to overcome barriers to awareness and facilitate uptake of the current evidence-based screening guidelines. A sequential mixed-methods approach was employed to design a screening campaign utilizing messages developed from community focus groups, followed by implementation of the outreach campaign intervention in two high-risk Kentucky regions. This study reports on rates of awareness and screening in intervention regions, as compared to a control region. PMID:27866066
NASA's Climate Data Services Initiative
NASA Astrophysics Data System (ADS)
McInerney, M.; Duffy, D.; Schnase, J. L.; Webster, W. P.
2013-12-01
Our understanding of the Earth's processes is based on a combination of observational data records and mathematical models. The size of NASA's space-based observational data sets is growing dramatically as new missions come online. However, a potentially bigger data challenge is posed by the work of climate scientists, whose models are regularly producing data sets of hundreds of terabytes or more. It is important to understand that the 'Big Data' challenge of climate science cannot be solved with a single technological approach or an ad hoc assemblage of technologies. It will require a multi-faceted, well-integrated suite of capabilities that include cloud computing, large-scale compute-storage systems, high-performance analytics, scalable data management, and advanced deployment mechanisms in addition to the existing, well-established array of mature information technologies. It will also require a coherent organizational effort that is able to focus on the specific and sometimes unique requirements of climate science. Given that it is the knowledge that is gained from data that is of ultimate benefit to society, data publication and data analytics will play a particularly important role. In an effort to accelerate scientific discovery and innovation through broader use of climate data, NASA Goddard Space Flight Center's Office of Computational and Information Sciences and Technology has embarked on a determined effort to build a comprehensive, integrated data publication and analysis capability for climate science. The Climate Data Services (CDS) Initiative integrates people, expertise, and technology into a highly-focused, next-generation, one-stop climate science information service. The CDS Initiative is providing the organizational framework, processes, and protocols needed to deploy existing information technologies quickly using a combination of enterprise-level services and an expanding array of cloud services. Crucial to its effectiveness, the CDS Initiative is developing the technical expertise to move new information technologies from R&D into operational use. This combination enables full, end-to-end support for climate data publishing and data analytics, and affords the flexibility required to meet future and unanticipated needs. Current science efforts being supported by the CDS Initiative include IPCC, OBS4MIP, ANA4MIPS, MERRA II, the National Climate Assessment, the Ocean Data Assimilation project, NASA Earth Exchange (NEX), and the RECOVER Burned Area Emergency Response decision support system. Service offerings include an integrated suite of classic technologies (FTP, LAS, THREDDS, ESGF, GRaD-DODS, OPeNDAP, WMS, ArcGIS Server), emerging technologies (iRODS, UVCDAT), and advanced technologies (MERRA Analytic Services, MapReduce, Ontology Services, and the CDS API). This poster will describe the CDS Initiative, provide details about the Initiative's advanced offerings, and lay out the CDS Initiative's deployment roadmap.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pointer, William David
The objective of this effort is to establish a strategy and process for generating suitable computational meshes for computational fluid dynamics simulations of departure from nucleate boiling (DNB) in a 5 by 5 fuel rod assembly held in place by PWR mixing vane spacer grids. This mesh generation process will support ongoing efforts to develop, demonstrate and validate advanced multi-phase computational fluid dynamics methods that enable more robust identification of dryout conditions and DNB occurrence. Building upon prior efforts and experience, multiple computational meshes were developed using the native mesh generation capabilities of the commercial CFD code STAR-CCM+. These meshes were used to simulate two test cases from the Westinghouse 5 by 5 rod bundle facility. The sensitivity of predicted quantities of interest to the mesh resolution was then established using two evaluation methods, the Grid Convergence Index method and the Least Squares method. This evaluation suggests that the Least Squares method can reliably establish the uncertainty associated with local parameters such as vector velocity components at a point in the domain, or with surface-averaged quantities such as outlet velocity magnitude. However, neither method is suitable for characterizing the uncertainty in global extrema such as peak fuel surface temperature, primarily because such parameters are not necessarily associated with a fixed point in space. This shortcoming is significant because the current-generation algorithm for identification of DNB event conditions relies on identification of such global extrema. Ongoing efforts to identify DNB based on local surface conditions will address this challenge.
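As a reference for the first of the two evaluation methods, a minimal sketch of Roache's Grid Convergence Index for three systematically refined meshes is shown below; the safety factor of 1.25 and the example solution values are assumptions for illustration only, not data from the study.

```python
import math

def grid_convergence_index(f_fine, f_med, f_coarse, r, safety_factor=1.25):
    """Roache-style GCI for three solutions on meshes with constant refinement ratio r.

    f_fine, f_med, f_coarse: a scalar quantity of interest on the fine, medium
    and coarse meshes. Returns the observed order of accuracy and the GCI of
    the fine-mesh solution, expressed as a fraction of f_fine.
    """
    # Observed order of accuracy from the three solutions.
    p = math.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / math.log(r)
    # Relative difference between the two finest solutions.
    e_fine = abs((f_med - f_fine) / f_fine)
    gci_fine = safety_factor * e_fine / (r ** p - 1.0)
    return p, gci_fine

# Example with made-up outlet-velocity values on meshes refined by r = 2:
p, gci = grid_convergence_index(f_fine=4.02, f_med=4.10, f_coarse=4.35, r=2.0)
print(f"observed order p ~ {p:.2f}, fine-grid GCI ~ {100 * gci:.2f} %")
```

As the abstract notes, this style of estimate is well defined for quantities tied to a fixed location or surface average, but not for global extrema whose location can move between meshes.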
Bringing the CMS distributed computing system into scalable operations
NASA Astrophysics Data System (ADS)
Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.
2010-04-01
Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure and optimize the networking and storage through scale testing data transfers and data processing, and improving the efficiency of accessing data across the CMS computing system from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted to the commissioning of the distributed production, user analysis and monitoring systems.
A Survey of Techniques for Approximate Computing
Mittal, Sparsh
2016-03-18
Approximate computing trades off computation quality against the effort expended, and as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies, etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide researchers with insights into the working of AC techniques and to inspire more efforts in this area to make AC the mainstream computing approach in future systems.
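As a concrete illustration of the quality-versus-effort trade-off surveyed above, the sketch below applies loop perforation, one of the software-level approximation techniques commonly discussed in the AC literature, to a simple mean computation; the function name and perforation rate are illustrative and not taken from the survey.

    def perforated_mean(values, skip=2):
        """Approximate the mean by visiting only every `skip`-th element.

        Loop perforation trades accuracy for effort: the loop body executes
        roughly 1/skip as many times as the exact version.
        """
        sampled = values[::skip]
        return sum(sampled) / len(sampled)

    data = [float(i % 97) for i in range(100000)]
    exact = sum(data) / len(data)
    approx = perforated_mean(data, skip=4)   # roughly 4x less work
    print(f"exact={exact:.3f} approx={approx:.3f} "
          f"relative error={abs(approx - exact) / exact:.2%}")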
A novel paradigm for cell and molecule interaction ontology: from the CMM model to IMGT-ONTOLOGY
2010-01-01
Background Biology is moving fast toward the virtuous circle of other disciplines: from data to quantitative modeling and back to data. Models are usually developed by mathematicians, physicists, and computer scientists to translate qualitative or semi-quantitative biological knowledge into a quantitative approach. To eliminate semantic confusion between biology and other disciplines, it is necessary to have a list of the most important and frequently used concepts coherently defined. Results We propose a novel paradigm for generating new concepts for an ontology, starting from a model rather than from a database. We apply this approach to generate concepts for cell and molecule interaction starting from an agent-based model. This effort provides a solid infrastructure that is useful to overcome the semantic ambiguities that arise between biologists and mathematicians, physicists, and computer scientists when they interact in a multidisciplinary field. Conclusions This effort represents the first attempt at linking molecule ontology with cell ontology in IMGT-ONTOLOGY, the well-established ontology in immunogenetics and immunoinformatics, and a paradigm for life science biology. With the increasing use of models in biology and medicine, the need to link different levels, from molecules to cells to tissues and organs, is increasingly important. PMID:20167082
Robbins, Lisa L.; Hansen, Mark; Raabe, Ellen; Knorr, Paul O.; Browne, Joseph
2007-01-01
The Florida shelf represents a finite source of economic resources, including commercial and recreational fisheries, tourism, recreation, sand and gravel resources, phosphate, and freshwater reserves. Yet the basic information needed to locate resources, or to interpret and utilize existing data, comes from many sources, dates, and formats. A multi-agency effort is underway to coordinate and prioritize the compilation of suitable datasets for an integrated information system of Florida’s coastal and ocean resources. This report and the associated data files represent part of the effort to make data accessible and useable with computer-mapping systems, web-based technologies, and user-friendly visualization tools. Among the datasets compiled and developed are seafloor imagery, marine sediment data, and existing bathymetric data. A U.S. Geological Survey-sponsored workshop in January 2007 resulted in the establishment of mapping priorities for the state. Bathymetry was identified as a common priority among agencies and researchers. State-of-the-art computer-mapping techniques and data-processing tools were used to develop shelf-wide raster and vector data layers. The Florida Shelf Habitat (FLaSH) Mapping Project (http://coastal.er.usgs.gov/flash) endeavors to locate available data, identify data gaps, synthesize existing information, and expand our understanding of geologic processes in our dynamic coastal and marine systems.
Khalil, Mohammed K; Paas, Fred; Johnson, Tristan E; Su, Yung K; Payer, Andrew F
2008-01-01
This research is an effort to best utilize interactive anatomical images for instructional purposes based on cognitive load theory. Three studies explored the differential effects of three computer-based instructional strategies that use anatomical cross-sections to enhance the interpretation of radiological images. These strategies include: (1) cross-sectional images of the head that can be superimposed on radiological images, (2) transparent highlighting of anatomical structures in radiological images, and (3) cross-sectional images of the head with radiological images presented side-by-side. Data collected included: (1) time spent on instruction and on solving test questions, (2) mental effort during instruction and test, and (3) students' performance in identifying anatomical structures in radiological images. Participants were 28 freshmen medical students (15 males and 13 females) and 208 biology students (190 females and 18 males). All studies used a posttest-only control group design, and the collected data were analyzed by either t test or ANOVA. In self-directed computer-based environments, the strategies that used cross sections to improve students' ability to recognize anatomic structures in radiological images showed no significant positive effects. However, when the complexity of the instructional materials was increased, cross-sectional images imposed a higher cognitive load, as indicated by higher investment of mental effort. There is not enough evidence to claim that the simultaneous combination of cross sections and radiological images has no effect on the identification of anatomical structures in radiological images for novices. Further research that controls for students' learning and cognitive styles is needed to reach an informative conclusion.
Piquado, Tepring; Cousins, Katheryn A Q; Wingfield, Arthur; Miller, Paul
2010-12-13
Poor hearing acuity reduces memory for spoken words, even when the words are presented with enough clarity for correct recognition. An "effortful hypothesis" suggests that the perceptual effort needed for recognition draws from resources that would otherwise be available for encoding the word in memory. To assess this hypothesis, we conducted a behavioral task requiring immediate free recall of word-lists, some of which contained an acoustically masked word that was just above perceptual threshold. Results show that masking a word reduces the recall of that word and words prior to it, as well as weakening the linking associations between the masked and prior words. In contrast, recall probabilities of words following the masked word are not affected. To account for this effect we conducted computational simulations testing two classes of models: Associative Linking Models and Short-Term Memory Buffer Models. Only a model that integrated both contextual linking and buffer components matched all of the effects of masking observed in our behavioral data. In this Linking-Buffer Model, the masked word disrupts a short-term memory buffer, causing associative links of words in the buffer to be weakened, affecting memory for the masked word and the word prior to it, while allowing links of words following the masked word to be spared. We suggest that these data account for the so-called "effortful hypothesis", where distorted input has a detrimental impact on prior information stored in short-term memory. Copyright © 2010 Elsevier B.V. All rights reserved.
34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?
Code of Federal Regulations, 2012 CFR
2012-07-01
... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION... awarded for the year after the year of the waiver by comparing the amount spent for adult education from... 34 Education 3 2012-07-01 2012-07-01 false How does the Secretary compute maintenance of effort in...
34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?
Code of Federal Regulations, 2013 CFR
2013-07-01
... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION... awarded for the year after the year of the waiver by comparing the amount spent for adult education from... 34 Education 3 2013-07-01 2013-07-01 false How does the Secretary compute maintenance of effort in...
34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION... awarded for the year after the year of the waiver by comparing the amount spent for adult education from... 34 Education 3 2011-07-01 2011-07-01 false How does the Secretary compute maintenance of effort in...
34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?
Code of Federal Regulations, 2014 CFR
2014-07-01
... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION... awarded for the year after the year of the waiver by comparing the amount spent for adult education from... 34 Education 3 2014-07-01 2014-07-01 false How does the Secretary compute maintenance of effort in...
34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION... awarded for the year after the year of the waiver by comparing the amount spent for adult education from... 34 Education 3 2010-07-01 2010-07-01 false How does the Secretary compute maintenance of effort in...
Micro-video display with ocular tracking and interactive voice control
NASA Technical Reports Server (NTRS)
Miller, James E.
1993-01-01
In certain space-restricted environments, many of the benefits resulting from computer technology have been foregone because of the size, weight, inconvenience, and lack of mobility associated with existing computer interface devices. Accordingly, an effort to develop a highly miniaturized and 'wearable' computer display and control interface device, referred to as the Sensory Integrated Data Interface (SIDI), is underway. The system incorporates a micro-video display that provides data display and ocular tracking on a lightweight headset. Software commands are implemented by conjunctive eye movement and voice commands of the operator. In this initial prototyping effort, various 'off-the-shelf' components have been integrated with a desktop computer and a customized menu-tree software application to demonstrate feasibility and conceptual capabilities. When fully developed as a customized system, the interface device will allow mobile, 'hands-free' operation of portable computer equipment. It will thus allow integration of information technology applications into those restrictive environments, both military and industrial, that have not yet taken advantage of the computer revolution. This effort is Phase 1 of Small Business Innovative Research (SBIR) Topic number N90-331 sponsored by the Naval Undersea Warfare Center Division, Newport. The prime contractor is Foster-Miller, Inc. of Waltham, MA.
Measuring and Modeling Change in Examinee Effort on Low-Stakes Tests across Testing Occasions
ERIC Educational Resources Information Center
Sessoms, John; Finney, Sara J.
2015-01-01
Because schools worldwide use low-stakes tests to make important decisions, value-added indices computed from test scores must accurately reflect student learning, which requires equal test-taking effort across testing occasions. Evaluating change in effort assumes effort is measured equivalently across occasions. We evaluated the longitudinal…
NASA Technical Reports Server (NTRS)
Vickers, John
2015-01-01
The Materials Genome Initiative (MGI) project element is a cross-Center effort that is focused on the integration of computational tools to simulate manufacturing processes and materials behavior. These computational simulations will be utilized to gain understanding of processes and materials behavior to accelerate process development and certification, to more efficiently integrate new materials in existing NASA projects, and to lead to the design of new materials for improved performance. This NASA effort looks to collaborate with efforts at other government agencies and universities working under the national MGI. MGI plans to develop integrated computational/experimental/processing methodologies for accelerating discovery and insertion of materials to satisfy NASA's unique mission demands. The challenges include validated design tools that incorporate materials properties, processes, and design requirements; and materials process control to rapidly mature emerging manufacturing methods and develop certified manufacturing processes.
Multiphysics Thrust Chamber Modeling for Nuclear Thermal Propulsion
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Cheng, Gary; Chen, Yen-Sen
2006-01-01
The objective of this effort is to develop an efficient and accurate thermo-fluid computational methodology to predict environments for a solid-core, nuclear thermal engine thrust chamber. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation. A two-pronged approach is employed in this effort: A detailed thermo-fluid analysis on a multi-channel flow element for mid-section corrosion investigation; and a global modeling of the thrust chamber to understand the effect of heat transfer on thrust performance. Preliminary results on both aspects are presented.
Electric Vehicle Service Personnel Training Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernstein, Gerald
As the share of hybrid, plug-in hybrid (PHEV), electric (EV) and fuel-cell (FCV) vehicles grows in the national automotive fleet, an entirely new set of diagnostic and technical skills needs to be obtained by the maintenance workforce. Electrically-powered vehicles require new diagnostic tools, techniques and vocabulary when compared to existing internal combustion engine-powered models. While the manufacturers of these new vehicles train their own maintenance personnel, training for students, independent working technicians and fleet operators is less focused and organized. This DOE-funded effort provided training to these three target groups to help expand availability of skills and to provide more competition (and lower consumer cost) in the maintenance of these hybrid- and electric-powered vehicles. Our approach was to start locally in the San Francisco Bay Area, one of the densest markets in the United States for these types of automobiles. We then expanded training to the Los Angeles area and then out-of-state to identify what types of curriculum were appropriate and what types of problems were encountered as training was disseminated. The fact that this effort trained up to 800 individuals with sessions varying from 2-day workshops to full-semester courses is considered a successful outcome. Diverse programs were developed to match unique time availability and educational needs of each of the three target audiences. Several key findings and observations arising from this effort include: • Recognition that hybrid and PHEV training demand is immediate; demand for EV training is starting to emerge; while demand for FCV training is still over the horizon • Hybrid and PHEV training are an excellent starting point for all EV-related training as they introduce all the basic concepts (electric motors, battery management, controllers, vocabulary, testing techniques) that are needed for all EVs, and these skills are in-demand in today’s market. • Faculty training is widely available and can be relatively quickly achieved. Equipment availability (vehicles, specialized tools, diagnostic software and computers) is a bigger challenge for funding-constrained colleges. • A computer-based emulation system that would replicate vehicle and diagnostic software in one package is a training aid that would have widespread benefit, but does not appear to exist. This need is further described at the end of Section 6.5. The benefits of this project are unique to each of the three target audiences. Students have learned skills they will use for the remainder of their careers; independent technicians can now accept customers who they previously needed to turn away due to lack of familiarity with hybrid systems; and fleet maintenance personnel are able to lower costs by undertaking work in-house that they previously needed to outsource. The direct job impact is estimated at 0.75 FTE continuously over the 3½-year duration of the grant.
The Synthetic Aperture Radar Science Data Processing Foundry Concept for Earth Science
NASA Astrophysics Data System (ADS)
Rosen, P. A.; Hua, H.; Norton, C. D.; Little, M. M.
2015-12-01
Since 2008, NASA's Earth Science Technology Office and the Advanced Information Systems Technology Program have invested in two technology evolutions to meet the needs of the community of scientists exploiting the rapidly growing database of international synthetic aperture radar (SAR) data. JPL, working with the science community, has developed the InSAR Scientific Computing Environment (ISCE), a next-generation interferometric SAR processing system that is designed to be flexible and extensible. ISCE currently supports many international space borne data sets but has been primarily focused on geodetic science and applications. A second evolutionary path, the Advanced Rapid Imaging and Analysis (ARIA) science data system, uses ISCE as its core science data processing engine and produces automated science and response products, quality assessments and metadata. The success of this two-front effort has been demonstrated in NASA's ability to respond to recent events with useful disaster support. JPL has enabled high-volume and low latency data production by the re-use of the hybrid cloud computing science data system (HySDS) that runs ARIA, leveraging on-premise cloud computing assets that are able to burst onto the Amazon Web Services (AWS) services as needed. Beyond geodetic applications, needs have emerged to process large volumes of time-series SAR data collected for estimation of biomass and its change, in such campaigns as the upcoming AfriSAR field campaign. ESTO is funding JPL to extend the ISCE-ARIA model to a "SAR Science Data Processing Foundry" to on-ramp new data sources and to produce new science data products to meet the needs of science teams and, in general, science community members. An extension of the ISCE-ARIA model to support on-demand processing will permit PIs to leverage this Foundry to produce data products from accepted data sources when they need them. This paper will describe each of the elements of the SAR SDP Foundry and describe their integration into a new conceptual approach to enable more effective use of SAR instruments.
Methodology, status and plans for development and assessment of TUF and CATHENA codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luxat, J.C.; Liu, W.S.; Leung, R.K.
1997-07-01
An overview is presented of the Canadian two-fluid computer codes TUF and CATHENA with specific focus on the constraints imposed during development of these codes and the areas of application for which they are intended. Additionally, a process for systematic assessment of these codes is described which is part of a broader, industry-based initiative for validation of computer codes used in all major disciplines of safety analysis. This is intended to provide both the licensee and the regulator in Canada with an objective basis for assessing the adequacy of codes for use in specific applications. Although focused specifically on CANDU reactors, Canadian experience in developing advanced two-fluid codes to meet wide-ranging application needs while maintaining past investment in plant modelling provides a useful contribution to international efforts in this area.
SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX/80
NASA Astrophysics Data System (ADS)
Kamat, Manohar P.; Watson, Brian C.
1992-02-01
The results of a research activity aimed at providing a finite element capability for analyzing turbo-machinery bladed-disk assemblies in a vector/parallel processing environment are summarized. Analysis of aircraft turbofan engines is very computationally intensive. The performance limit of modern-day computers with a single processing unit was estimated at 3 billion floating point operations per second (3 gigaflops). In view of this limit of a sequential unit, performance rates higher than 3 gigaflops can be achieved only through vectorization and/or parallelization as on the Alliant FX/80. Accordingly, the efforts of this critically needed research were geared towards developing and evaluating parallel finite element methods for static and vibration analysis. A special purpose code, named with the acronym SAPNEW, performs static and eigen analysis of multi-degree-of-freedom blade models built up from flat thin shell elements.
Overview of aerothermodynamic loads definition study
NASA Technical Reports Server (NTRS)
Gaugler, Raymond E.
1989-01-01
Over the years, NASA has been conducting the Advanced Earth-to-Orbit (AETO) Propulsion Technology Program to provide the knowledge, understanding, and design methodology that will allow the development of advanced Earth-to-orbit propulsion systems with high performance, extended service life, automated operations, and diagnostics for in-flight health monitoring. The objective of the Aerothermodynamic Loads Definition Study is to develop methods to more accurately predict the operating environment in AETO propulsion systems, such as the Space Shuttle Main Engine (SSME) powerhead. The approach taken consists of 2 parts: to modify, apply, and disseminate existing computational fluid dynamics tools in response to current needs and to develop new technology that will enable more accurate computation of the time averaged and unsteady aerothermodynamic loads in the SSME powerhead. The software tools are detailed. Significant progress was made in the area of turbomachinery, where there is an overlap between the AETO efforts and research in the aeronautical gas turbine field.
NASA Astrophysics Data System (ADS)
Moan, T.
2017-12-01
An overview of integrity management of offshore structures, with emphasis on the oil and gas energy sector, is given. Based on relevant accident experiences and means to control the associated risks, accidents are categorized from a technical-physical as well as a human and organizational point of view. Structural risk relates to extreme actions as well as structural degradation. Risk mitigation measures, including adequate design criteria, inspection, repair and maintenance as well as quality assurance and control of engineering processes, are briefly outlined. The current status of risk and reliability methodology to aid decisions in integrity management is briefly reviewed. Finally, the need to balance the uncertainties in data, methods, and computational effort, and to apply caution and quality assurance and control when using high-fidelity methods so as to avoid human errors, is emphasized, together with a plea to develop both high-fidelity and efficient, simplified methods for design.
Modeling Methodologies for Design and Control of Solid Oxide Fuel Cell APUs
NASA Astrophysics Data System (ADS)
Pianese, C.; Sorrentino, M.
2009-08-01
Among the existing fuel cell technologies, Solid Oxide Fuel Cells (SOFC) are particularly suitable for both stationary and mobile applications, due to their high energy conversion efficiencies, modularity, high fuel flexibility, low emissions and low noise. Moreover, the high working temperatures enable their use in efficient cogeneration applications. SOFCs are entering a pre-industrial era, and interest in design tools has grown strongly in recent years. Optimal system configuration, component sizing, control and diagnostic system design require computational tools that meet the conflicting needs of accuracy, affordable computational time, limited experimental effort and flexibility. The paper gives an overview of control-oriented modeling of SOFCs at both the single cell and stack level. Such an approach provides useful simulation tools for designing and controlling SOFC-APUs intended for a wide application area, ranging from automotive to marine and airplane APUs.
Developing Information Power Grid Based Algorithms and Software
NASA Technical Reports Server (NTRS)
Dongarra, Jack
1998-01-01
This exploratory study initiated our effort to understand performance modeling on parallel systems. The basic goal of performance modeling is to understand and predict the performance of a computer program or set of programs on a computer system. Performance modeling has numerous applications, including evaluation of algorithms, optimization of code implementations, parallel library development, comparison of system architectures, parallel system design, and procurement of new systems. Our work lays the basis for the construction of parallel libraries that allow for the reconstruction of application codes on several distinct architectures so as to assure performance portability. Following our strategy, once the requirements of applications are well understood, one can then construct a library in a layered fashion. The top level of this library will consist of architecture-independent geometric, numerical, and symbolic algorithms that are needed by the sample of applications. These routines should be written in a language that is portable across the targeted architectures.
Simulation Enabled Safeguards Assessment Methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robert Bean; Trond Bjornard; Thomas Larson
2007-09-01
It is expected that nuclear energy will be a significant component of future energy supplies. New facilities, operating under a strengthened international nonproliferation regime, will be needed. There is good reason to believe that virtual engineering applied to the facility design, as well as to the safeguards system design, will reduce total project cost and improve efficiency in the design cycle. The Simulation Enabled Safeguards Assessment MEthodology (SESAME) has been developed as a software package to provide this capability for nuclear reprocessing facilities. The software architecture is specifically designed for distributed computing, collaborative design efforts, and modular construction to allow step improvements in functionality. Drag-and-drop wireframe construction allows the user to select the desired components from a component warehouse, render the system for 3D visualization, and, linked to a set of physics libraries and/or computational codes, conduct process evaluations of the system they have designed.
Garde, Sebastian; Hovenga, Evelyn; Buck, Jasmin; Knaup, Petra
2006-01-01
Ubiquitous computing requires ubiquitous access to information and knowledge. With the release of openEHR Version 1.0, there is a common model available to solve some of the problems related to accessing information and knowledge by improving semantic interoperability between clinical systems. Considerable work has been undertaken by various bodies to standardise Clinical Data Sets. Notwithstanding their value, several problems remain unsolved with Clinical Data Sets without the use of a common model underpinning them. This paper outlines these problems, such as incompatible basic data types and overlapping and incompatible definitions of clinical content. A solution based on openEHR archetypes is motivated and an approach to transform existing Clinical Data Sets into archetypes is presented. To avoid significant overlaps and unnecessary effort, archetype development needs to be coordinated nationwide and beyond, and also across the various health professions, in a formalized process.
Dunne, Paul; Tasker, Gary
1996-01-01
The reservoirs and pumping stations that comprise the Raritan River Basin water-supply system and its interconnections to the Delaware-Raritan Canal water-supply system, operated by the New Jersey Water Supply Authority (NJWSA), provide potable water to central New Jersey communities. The water reserve of this combined system can easily be depleted by an extended period of below-normal precipitation. Efficient operation of the combined system is vital to meeting the water-supply needs of central New Jersey. In an effort to improve the efficiency of the system operation, the U.S. Geological Survey (USGS), in cooperation with the NJWSA, has developed a computer model that provides a technical basis for evaluating the effects of alternative patterns of operation of the Raritan River Basin water-supply system. This fact sheet describes the model, its technical basis, and its operation.
Hucka, M; Finney, A; Bornstein, B J; Keating, S M; Shapiro, B E; Matthews, J; Kovitz, B L; Schilstra, M J; Funahashi, A; Doyle, J C; Kitano, H
2004-06-01
Biologists are increasingly recognising that computational modelling is crucial for making sense of the vast quantities of complex experimental data that are now being collected. The systems biology field needs agreed-upon information standards if models are to be shared, evaluated and developed cooperatively. Over the last four years, our team has been developing the Systems Biology Markup Language (SBML) in collaboration with an international community of modellers and software developers. SBML has become a de facto standard format for representing formal, quantitative and qualitative models at the level of biochemical reactions and regulatory networks. In this article, we summarise the current and upcoming versions of SBML and our efforts at developing software infrastructure for supporting and broadening its use. We also provide a brief overview of the many SBML-compatible software tools available today.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luszczek, Piotr R; Tomov, Stanimire Z; Dongarra, Jack J
We present an efficient and scalable programming model for the development of linear algebra in heterogeneous multi-coprocessor environments. The model incorporates some of the current best design and implementation practices for the heterogeneous acceleration of dense linear algebra (DLA). Examples are given for algorithms that form the basis for solving linear systems - the LU, QR, and Cholesky factorizations. To generate the extreme level of parallelism needed for the efficient use of coprocessors, the algorithms of interest are redesigned and then split into well-chosen computational tasks. The task execution is scheduled over the computational components of a hybrid system of multi-core CPUs and coprocessors using a light-weight runtime system. The use of lightweight runtime systems keeps scheduling overhead low, while enabling the expression of parallelism through otherwise sequential code. This simplifies the development effort and allows the exploration of the unique strengths of the various hardware components.
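A minimal sketch of the kind of task decomposition described above: a right-looking tiled Cholesky factorization whose block operations (POTRF, TRSM, SYRK/GEMM) are the tasks a lightweight runtime would schedule across CPUs and coprocessors. The code is an illustrative NumPy sketch, not the cited runtime system.

    import numpy as np

    def tiled_cholesky(A, nb):
        """Right-looking tiled Cholesky of a symmetric positive-definite matrix.

        Each block operation below is the kind of task a lightweight runtime
        would schedule over CPUs and coprocessors.
        """
        n = A.shape[0]
        L = np.tril(A.copy())
        for k in range(0, n, nb):
            kk = slice(k, k + nb)
            # POTRF: factor the diagonal tile
            L[kk, kk] = np.linalg.cholesky(L[kk, kk])
            # TRSM: update the tiles below the diagonal tile
            for i in range(k + nb, n, nb):
                ii = slice(i, i + nb)
                L[ii, kk] = np.linalg.solve(L[kk, kk], L[ii, kk].T).T
            # SYRK/GEMM: update the trailing submatrix (lower triangle only)
            for i in range(k + nb, n, nb):
                ii = slice(i, i + nb)
                for j in range(k + nb, i + nb, nb):
                    jj = slice(j, j + nb)
                    L[ii, jj] -= L[ii, kk] @ L[jj, kk].T
        return np.tril(L)

    M = np.random.rand(8, 8)
    A = M @ M.T + 8 * np.eye(8)       # make the matrix SPD
    L = tiled_cholesky(A, nb=2)
    print(np.allclose(L @ L.T, A))    # True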
34 CFR 403.185 - How does the Secretary compute maintenance of effort in the event of a waiver?
Code of Federal Regulations, 2010 CFR
2010-07-01
... VOCATIONAL AND APPLIED TECHNOLOGY EDUCATION PROGRAM What Financial Conditions Must Be Met by a State? § 403... 34 Education 3 2010-07-01 2010-07-01 false How does the Secretary compute maintenance of effort in the event of a waiver? 403.185 Section 403.185 Education Regulations of the Offices of the Department...
Computational Fluid Dynamics Technology for Hypersonic Applications
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
2003-01-01
Several current challenges in computational fluid dynamics and aerothermodynamics for hypersonic vehicle applications are discussed. Example simulations are presented from code validation and code benchmarking efforts to illustrate capabilities and limitations. Opportunities to advance the state of the art in algorithms, grid generation and adaptation, and code validation are identified. Highlights of diverse efforts to address these challenges are then discussed. One such effort to re-engineer and synthesize the existing analysis capability in LAURA, VULCAN, and FUN3D will provide context for these discussions. The critical (and evolving) role of agile software engineering practice in the capability enhancement process is also noted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxstrand, Johanna; Bly, Aaron
The U.S. Department of Energy Light Water Reactor Sustainability (LWRS) Program initiated research into what is needed in order to provide a roadmap or model for Nuclear Power Plants to reference when building an architecture that can support the growing data supply and demand flowing through their networks. The Digital Architecture project's report, Digital Architecture Planning Model (Oxstrand et al., 2016), discusses things to consider when building an architecture to support the increasing needs and demands of data throughout the plant. Once the plant is able to support the data demands, it still needs to be able to provide the data in an easy, quick, and reliable manner. A common method is to create a "one stop shop" application that a user can go to in order to get all the data they need. Creating this leads to the need for a Seamless Digital Environment (SDE) to integrate all the "siloed" data. An SDE is the desired perception that should be presented to users by gathering the data from any data source (e.g., legacy applications and work management systems) without effort by the user. The goal for FY16 was to complete a feasibility study for data mining and analytics for employing information from computer-based procedures enabled technologies for use in developing improved business analytics. The research team collaborated with multiple organizations to identify use cases or scenarios that could be beneficial to investigate in a feasibility study. Many interesting potential use cases were identified throughout the FY16 activity. Unfortunately, due to factors out of the research team’s control, none of the studies were initiated this year. However, the insights gained and the relationships built with both PVNGS and NextAxiom will be valuable when moving forward with future research. During the 2016 annual Nuclear Information Technology Strategic Leadership (NITSL) group meeting, it was identified that it would be very beneficial to the industry to support a research effort focused on data analytics. It was suggested that the effort would develop and evaluate use cases for data mining and analytics for employing information from plant sensors and databases for use in developing improved business analytics.
Munabi, Ian G; Buwembo, William; Bajunirwe, Francis; Kitara, David Lagoro; Joseph, Ruberwa; Peter, Kawungezi; Obua, Celestino; Quinn, John; Mwaka, Erisa S
2015-02-25
Effective utilization of computers and their applications in medical education and research is of paramount importance to students. The objective of this study was to determine the association between owning a computer and use of computers for research data analysis and the other factors influencing health professions students' computer use for data analysis. We conducted a cross-sectional study among undergraduate health professions students at three public universities in Uganda using a self-administered questionnaire. The questionnaire was composed of questions on participant demographics, students' participation in research, computer ownership, and use of computers for data analysis. Descriptive and inferential statistics (uni-variable and multi-level logistic regression analysis) were used to analyse data. The level of significance was set at 0.05. Six hundred (600) of 668 questionnaires were completed and returned (response rate 89.8%). A majority of respondents were male (68.8%) and 75.3% reported owning computers. Overall, 63.7% of respondents reported that they had ever done computer based data analysis. The following factors were significant predictors of having ever done computer based data analysis: ownership of a computer (Adj. OR 1.80, p = 0.02), recently completed course in statistics (Adj. OR 1.48, p = 0.04), and participation in research (Adj. OR 2.64, p < 0.01). Owning a computer, participation in research and undertaking courses in research methods influence undergraduate students' use of computers for research data analysis. Students are increasingly participating in research, and thus need to have competencies for the successful conduct of research. Medical training institutions should encourage both curricular and extra-curricular efforts to enhance research capacity in line with the modern theories of adult learning.
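A minimal sketch of the kind of analysis reported above: fit a logistic regression and exponentiate the coefficients to obtain adjusted odds ratios. The data are synthetic, the variable names are illustrative, and the use of statsmodels is an assumption rather than the study's actual software.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 600
    df = pd.DataFrame({
        "owns_computer": rng.binomial(1, 0.75, n),
        "stats_course": rng.binomial(1, 0.5, n),
        "in_research": rng.binomial(1, 0.4, n),
    })
    # Synthetic outcome generated so the predictors have a real effect
    logit_p = -0.6 + 0.6 * df.owns_computer + 0.4 * df.stats_course + 1.0 * df.in_research
    df["did_analysis"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    fit = smf.logit("did_analysis ~ owns_computer + stats_course + in_research",
                    data=df).fit(disp=0)
    print(np.exp(fit.params))   # exponentiated coefficients = adjusted odds ratios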
ERIC Educational Resources Information Center
Kolata, Gina
1984-01-01
Examines social influences which discourage women from pursuing studies in computer science, including monopoly of computer time by boys at the high school level, sexual harassment in college, movies, and computer games. Describes some initial efforts to encourage females of all ages to study computer science. (JM)
National Facilities Study. Volume 1: Facilities Inventory
NASA Technical Reports Server (NTRS)
1994-01-01
The inventory activity was initiated to address the critical need for a single source of site-specific descriptive and parametric data on major public and privately held aeronautics and aerospace-related facilities. This was a challenging undertaking due to the scope of the effort and the short lead time in which to assemble the inventory and have it available to support the task group study needs. The inventory remains dynamic as sites are being added and the data is accessed and refined as the study progresses. The inventory activity also included the design and implementation of a computer database and analytical tools to simplify access to the data. This volume describes the steps which were taken to define the data requirements, select sites, and solicit and acquire data from them. A discussion of the inventory structure and analytical tools is also provided.
NASA Technical Reports Server (NTRS)
Sebok, Angelia; Wickens, Christopher; Sargent, Robert
2015-01-01
One human factors challenge is predicting operator performance in novel situations. Approaches such as drawing on relevant previous experience, and developing computational models to predict operator performance in complex situations, offer potential methods to address this challenge. A few concerns with modeling operator performance are that models need to be realistic, and they need to be tested empirically and validated. In addition, many existing human performance modeling tools are complex and require that an analyst gain significant experience to be able to develop models for meaningful data collection. This paper describes an effort to address these challenges by developing an easy-to-use model-based tool, using models that were developed from a review of existing human performance literature and targeted experimental studies, and performing an empirical validation of key model predictions.
Moore, G. W.; Hutchins, G. M.; Miller, R. E.
1984-01-01
Computerized indexing and retrieval of medical records is increasingly important; but the use of natural language versus coded languages (SNOP, SNOMED) for this purpose remains controversial. In an effort to develop search strategies for natural language text, the authors examined the anatomic diagnosis reports by computer for 7000 consecutive autopsy subjects spanning a 13-year period at The Johns Hopkins Hospital. There were 923,657 words, 11,642 of them distinct. The authors observed an average of 1052 keystrokes, 28 lines, and 131 words per autopsy report, with an average 4.6 words per line and 7.0 letters per word. The entire text file represented 921 hours of secretarial effort. Words ranged in frequency from 33,959 occurrences of "and" to one occurrence for each of 3398 different words. Searches for rare diseases with unique names or for representative examples of common diseases were most readily performed with the use of computer-printed key word in context (KWIC) books. For uncommon diseases designated by commonly used terms (such as "cystic fibrosis"), needs were best served by a computerized search for logical combinations of key words. In an unbalanced word distribution, each conjunction (logical and) search should be performed in ascending order of word frequency; but each alternation (logical inclusive or) search should be performed in descending order of word frequency. Natural language text searches will assume a larger role in medical records analysis as the labor-intensive procedure of translation into a coded language becomes more costly, compared with the computer-intensive procedure of text searching. PMID:6546837
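The frequency-ordering heuristic described above can be sketched as follows; the inverted index, report texts, and identifiers are illustrative assumptions, not the Johns Hopkins data.

    from collections import defaultdict

    # Hypothetical inverted index: word -> set of report IDs containing the word
    index = defaultdict(set)
    reports = {
        1: "cystic fibrosis of pancreas and liver",
        2: "fibrosis of liver",
        3: "carcinoma of pancreas",
    }
    for rid, text in reports.items():
        for word in text.split():
            index[word].add(rid)

    def conjunction(words):
        """AND search: intersect postings starting from the rarest word,
        so the working set stays as small as possible."""
        ordered = sorted(words, key=lambda w: len(index[w]))             # ascending frequency
        result = set(index[ordered[0]])
        for w in ordered[1:]:
            result &= index[w]
            if not result:
                break
        return result

    def alternation(words):
        """OR search: union postings starting from the commonest word,
        so most hits are collected early."""
        ordered = sorted(words, key=lambda w: len(index[w]), reverse=True)  # descending frequency
        result = set()
        for w in ordered:
            result |= index[w]
        return result

    print(conjunction(["cystic", "fibrosis"]))   # {1}
    print(alternation(["pancreas", "liver"]))    # {1, 2, 3}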
A Pythonic Approach for Computational Geosciences and Geo-Data Processing
NASA Astrophysics Data System (ADS)
Morra, G.; Yuen, D. A.; Lee, S. M.
2016-12-01
Computational methods and data analysis play a constantly increasing role in the Earth Sciences; however, students and professionals need to climb a steep learning curve before reaching a level that allows them to run effective models. Furthermore, the recent arrival of powerful new machine learning tools such as Torch and TensorFlow has opened new possibilities but also created a new realm of complications related to the completely different technology employed. We present here a series of examples entirely written in Python, a language that combines the simplicity of Matlab with the power and speed of compiled languages such as C, and apply them to a wide range of geological processes such as porous media flow, multiphase fluid dynamics, creeping flow and many-fault interaction. We also explore ways in which machine learning can be employed in combination with numerical modelling, from immediately interpreting a large number of modeling results to optimizing a set of modeling parameters to obtain a desired optimal simulation. We show that by using Python, undergraduate and graduate students can learn advanced numerical technologies with a minimum of dedicated effort, which in turn encourages them to develop more numerical tools and quickly progress in their computational abilities. We also show how Python allows combining modeling with machine learning like LEGO pieces, thereby simplifying the transition towards a new kind of scientific geo-modelling. The conclusion is that Python is an ideal tool to create an infrastructure for geosciences that allows users to quickly develop tools, reuse techniques and encourage collaborative efforts to interpret and integrate geo-data in profound new ways.
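In the spirit of the examples described above, here is a minimal Python sketch of one of the listed processes: single-phase porous-media (Darcy) flow reduced to a 1D steady diffusion problem. The grid size and permeability values are arbitrary illustrative choices.

    import numpy as np

    # 1D steady Darcy flow: d/dx( k(x) dp/dx ) = 0 with fixed pressures at both ends.
    n = 51                                   # grid nodes
    x = np.linspace(0.0, 1.0, n)
    k = np.where(x < 0.5, 1.0, 0.1)          # permeability jump at the midpoint

    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = 1.0, 0.0                   # boundary pressures
    for i in range(1, n - 1):
        k_m = 0.5 * (k[i - 1] + k[i])        # interface permeability (left)
        k_p = 0.5 * (k[i] + k[i + 1])        # interface permeability (right)
        A[i, i - 1], A[i, i], A[i, i + 1] = k_m, -(k_m + k_p), k_p

    p = np.linalg.solve(A, b)
    print(p[::10])                           # pressure drops faster in the low-permeability half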
A Three-Dimensional Parallel Time-Accurate Turbopump Simulation Procedure Using Overset Grid System
NASA Technical Reports Server (NTRS)
Kiris, Cetin; Chan, William; Kwak, Dochan
2002-01-01
The objective of the current effort is to provide a computational framework for design and analysis of the entire fuel supply system of a liquid rocket engine, including high-fidelity unsteady turbopump flow analysis. This capability is needed to support the design of pump sub-systems for advanced space transportation vehicles that are likely to involve liquid propulsion systems. To date, computational tools for design/analysis of turbopump flows are based on relatively lower fidelity methods. An unsteady, three-dimensional viscous flow analysis tool involving stationary and rotational components for the entire turbopump assembly has not been available for real-world engineering applications. The present effort provides developers with information such as transient flow phenomena at start-up and the impact of non-uniform inflows, and will eventually inform assessments of system vibration and structures. In the present paper, the progress toward the capability of complete simulation of the turbo-pump for a liquid rocket engine is reported. The Space Shuttle Main Engine (SSME) turbo-pump is used as a test case for evaluation of the hybrid MPI/Open-MP and MLP versions of the INS3D code. CAD-to-solution auto-scripting capability is being developed for turbopump applications. The relative motion of the grid systems for the rotor-stator interaction was obtained using overset grid techniques. Unsteady computations for the SSME turbo-pump, which contains 114 zones with 34.5 million grid points, are carried out on Origin 3000 systems at NASA Ames Research Center. Results from these time-accurate simulations with moving boundary capability are presented along with the performance of parallel versions of the code.
Time-Dependent Simulations of Turbopump Flows
NASA Technical Reports Server (NTRS)
Kris, Cetin C.; Kwak, Dochan
2001-01-01
The objective of the current effort is to provide a computational framework for design and analysis of the entire fuel supply system of a liquid rocket engine, including high-fidelity unsteady turbopump flow analysis. This capability is needed to support the design of pump sub-systems for advanced space transportation vehicles that are likely to involve liquid propulsion systems. To date, computational tools for design/analysis of turbopump flows are based on relatively lower fidelity methods. An unsteady, three-dimensional viscous flow analysis tool involving stationary and rotational components for the entire turbopump assembly has not been available for real-world engineering applications. The present effort will provide developers with information such as transient flow phenomena at start up, impact of non-uniform inflows, system vibration and impact on the structure. In the proposed paper, the progress toward the capability of complete simulation of the turbo-pump for a liquid rocket engine is reported. The Space Shuttle Main Engine (SSME) turbo-pump is used as a test case for evaluation of the hybrid MPI/Open-MP and MLP versions of the INS3D code. The relative motion of the grid systems for the rotor-stator interaction was obtained using overset grid techniques. Time-accuracy of the scheme has been evaluated with simple test cases. Unsteady computations for the SSME turbo-pump, which contains 114 zones with 34.5 million grid points, are carried out on Origin 2000 systems at NASA Ames Research Center. Results from these time-accurate simulations with moving boundary capability will be presented along with the performance of parallel versions of the code.
Combining Computational and Social Effort for Collaborative Problem Solving
Wagy, Mark D.; Bongard, Josh C.
2015-01-01
Rather than replacing human labor, there is growing evidence that networked computers create opportunities for collaborations of people and algorithms to solve problems beyond either of them. In this study, we demonstrate the conditions under which such synergy can arise. We show that, for a design task, three elements are sufficient: humans apply intuitions to the problem, algorithms automatically determine and report back on the quality of designs, and humans observe and innovate on others’ designs to focus creative and computational effort on good designs. This study suggests how such collaborations should be composed for other domains, as well as how social and computational dynamics mutually influence one another during collaborative problem solving. PMID:26544199
NASA Astrophysics Data System (ADS)
Lele, Sanjiva K.
2002-08-01
Funds were received in April 2001 under the Department of Defense DURIP program for construction of a 48 processor high performance computing cluster. This report details the hardware which was purchased and how it has been used to enable and enhance research activities directly supported by, and of interest to, the Air Force Office of Scientific Research and the Department of Defense. The report is divided into two major sections. The first section after this summary describes the computer cluster, its setup, and some cluster performance benchmark results. The second section explains ongoing research efforts which have benefited from the cluster hardware, and presents highlights of those efforts since installation of the cluster.
Parallel Architectures for Planetary Exploration Requirements (PAPER)
NASA Technical Reports Server (NTRS)
Cezzar, Ruknet; Sen, Ranjan K.
1989-01-01
The Parallel Architectures for Planetary Exploration Requirements (PAPER) project is essentially research oriented towards technology insertion issues for NASA's unmanned planetary probes. It was initiated to complement and augment the long-term efforts for space exploration with particular reference to NASA/LaRC's (NASA Langley Research Center) research needs for planetary exploration missions of the mid and late 1990s. The requirements for space missions as given in the somewhat dated Advanced Information Processing Systems (AIPS) requirements document are contrasted with the new requirements from JPL/Caltech involving sensor data capture and scene analysis. It is shown that more stringent requirements have arisen as a result of technological advancements. Two possible architectures, the AIPS Proof of Concept (POC) configuration and the MAX fault-tolerant dataflow multiprocessor, were evaluated. The main observation was that the AIPS design is biased towards fault tolerance and may not be an ideal architecture for planetary and deep space probes due to high cost and complexity. The MAX concept appears to be a promising candidate, except that more detailed information is required. The feasibility of adding neural computation capability to this architecture needs to be studied. Key impact issues for architectural design of computing systems meant for planetary missions were also identified.
Verbeke, J. M.; Petit, O.
2016-06-01
From nuclear safeguards to homeland security applications, the need for the better modeling of nuclear interactions has grown over the past decades. Current Monte Carlo radiation transport codes compute average quantities with great accuracy and performance; however, performance and averaging come at the price of limited interaction-by-interaction modeling. These codes often lack the capability of modeling interactions exactly: for a given collision, energy is not conserved, energies of emitted particles are uncorrelated, and multiplicities of prompt fission neutrons and photons are uncorrelated. Many modern applications require more exclusive quantities than averages, such as the fluctuations in certain observables (e.g., the neutron multiplicity) and correlations between neutrons and photons. In an effort to meet this need, the radiation transport Monte Carlo code TRIPOLI-4® was modified to provide a specific mode that models nuclear interactions in a full analog way, replicating as much as possible the underlying physical process. Furthermore, the computational model FREYA (Fission Reaction Event Yield Algorithm) was coupled with TRIPOLI-4 to model complete fission events. As a result, FREYA automatically includes fluctuations as well as correlations resulting from conservation of energy and momentum.
Overview: early history of crop growth and photosynthesis modeling.
El-Sharkawy, Mabrouk A
2011-02-01
As in industrial and engineering systems, there is a need to quantitatively study and analyze the many constituents of complex natural biological systems as well as agro-ecosystems via research-based mechanistic modeling. This objective is normally addressed by developing mathematically built descriptions of multilevel biological processes to provide biologists a means to integrate quantitatively experimental research findings that might lead to a better understanding of the whole systems and their interactions with surrounding environments. Aided with the power of computational capacities associated with computer technology then available, pioneering cropping systems simulations took place in the second half of the 20th century by several research groups across continents. This overview summarizes that initial pioneering effort made to simulate plant growth and photosynthesis of crop canopies, focusing on the discovery of gaps that exist in the current scientific knowledge. Examples are given for those gaps where experimental research was needed to improve the validity and application of the constructed models, so that their benefit to mankind was enhanced. Such research necessitates close collaboration among experimentalists and model builders while adopting a multidisciplinary/inter-institutional approach. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
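A minimal sketch of the kind of early canopy photosynthesis calculation discussed above, combining Beer's-law light attenuation through leaf layers with a rectangular-hyperbola leaf light response; all parameter values are illustrative assumptions, not taken from any of the cited models.

    import math

    def canopy_photosynthesis(I0, lai, k=0.6, pmax=30.0, alpha=0.05, layers=50):
        """Integrate leaf photosynthesis over a canopy.

        I0:    light at the top of the canopy (umol photons m-2 s-1)
        lai:   leaf area index (m2 leaf per m2 ground)
        k:     Beer's-law extinction coefficient
        pmax:  light-saturated leaf photosynthesis (umol CO2 m-2 s-1)
        alpha: initial light-use efficiency
        """
        dL = lai / layers
        total = 0.0
        for i in range(layers):
            L = (i + 0.5) * dL                             # cumulative leaf area above the layer
            I = I0 * math.exp(-k * L)                      # Beer's-law attenuation
            leaf = alpha * I * pmax / (alpha * I + pmax)   # rectangular hyperbola response
            total += leaf * dL
        return total

    print(f"canopy assimilation ~ {canopy_photosynthesis(1500.0, 4.0):.1f} umol CO2 m-2 s-1")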
Use of imagery and GIS for humanitarian demining management
NASA Astrophysics Data System (ADS)
Gentile, Jack; Gustafson, Glen C.; Kimsey, Mary; Kraenzle, Helmut; Wilson, James; Wright, Stephen
1997-11-01
In the Fall of 1996, the Center for Geographic Information Science at James Madison University became involved in a project for the Department of Defense evaluating the data needs and data management systems for humanitarian demining in the Third World. In particular, the effort focused on the information needs of demining in Cambodia and in Bosnia. In the first phase of the project one team attempted to identify all sources of unclassified country data, image data and map data. Parallel with this, another group collected information and evaluations on most of the commercial off-the-shelf computer software packages for the management of such geographic information. The result was a design for the kinds of data and the kinds of systems necessary to establish and maintain such a database as a humanitarian demining management tool. The second phase of the work involved acquiring the recommended data and systems, integrating the two, and producing a demonstration of the system. In general, the configuration involves ruggedized portable computers for field use with a greatly simplified graphical user interface, supported by a more capable central facility based on Pentium workstations and appropriate technical expertise.
NASA Astrophysics Data System (ADS)
Tsui, Eddy K.; Thomas, Russell L.
2004-09-01
As part of the Commanding General of Army Material Command's Research, Development & Engineering Command (RDECOM), the U.S. Army Research Development and Engineering Center (ARDEC), Picatinny funded a joint development effort with McQ Associates, Inc. to develop an Advanced Minefield Sensor (AMS) as a technology evaluation prototype for the Anti-Personnel Landmine Alternatives (APLA) Track III program. This effort laid the fundamental groundwork of smart sensors for detection and classification of targets, identification of combatants or noncombatants, target location and tracking at and between sensors, fusion of information across targets and sensors, and automatic situation awareness to the first responder. The efforts have culminated in developing a performance-oriented architecture meeting the requirements of size, weight, and power (SWAP). The integrated digital signal processor (DSP) paradigm is capable of computing signals from sensor modalities to extract needed information within either a 360° or fixed field of view with an acceptable false alarm rate. This paper discusses the challenges in the development of such a sensor, focusing on achieving reasonable operating ranges, achieving low power, small size and low cost, and applications for extensions of this technology.
contributes to the research efforts for commercial buildings. This effort is dedicated to studying the commercial sector, whole-building energy simulation, scientific computing, and software configuration and...
Efficient integration method for fictitious domain approaches
NASA Astrophysics Data System (ADS)
Duczek, Sascha; Gabbert, Ulrich
2015-10-01
In the current article, we present an efficient and accurate numerical method for the integration of the system matrices in fictitious domain approaches such as the finite cell method (FCM). In the framework of the FCM, the physical domain is embedded in a geometrically larger domain of simple shape, which is discretized using a regular Cartesian grid of cells. Therefore, a spacetree-based adaptive quadrature technique is normally deployed to resolve the geometry of the structure. Depending on the complexity of the structure under investigation, this method accounts for most of the computational effort. To reduce the computational cost of computing the system matrices, an efficient quadrature scheme based on the divergence theorem (Gauß-Ostrogradsky theorem) is proposed. Using this theorem, the dimension of the integral is reduced by one, i.e., instead of evaluating the integral over the whole domain, only its contour needs to be considered. In the current paper, we present the general principles of the integration method and its implementation. The results for several two-dimensional benchmark problems highlight its properties. The efficiency of the proposed method is compared to conventional spacetree-based integration techniques.
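The dimension-reduction idea can be illustrated with the simplest possible case, the area of a polygonal domain: the divergence theorem turns the domain integral into the boundary integral A = ½∮(x dy − y dx), while a cell-based quadrature must visit every interior cell. The L-shaped domain, cell size, and use of matplotlib's point-in-polygon test below are illustrative choices, not part of the FCM implementation described in the paper.

```python
import numpy as np

# Illustrative polygonal domain (an L-shape); vertices in counter-clockwise order.
poly = np.array([(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)], dtype=float)

def area_contour(p):
    """Divergence theorem: A = 1/2 * sum(x_i*y_{i+1} - x_{i+1}*y_i) over the boundary."""
    x, y = p[:, 0], p[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)

def area_cells(p, h=0.01):
    """Naive volume quadrature: count Cartesian cells whose centre lies inside."""
    from matplotlib.path import Path
    xs = np.arange(0, 2, h) + h / 2
    ys = np.arange(0, 2, h) + h / 2
    X, Y = np.meshgrid(xs, ys)
    inside = Path(p).contains_points(np.column_stack([X.ravel(), Y.ravel()]))
    return inside.sum() * h * h

print("contour integral :", area_contour(poly))   # exact: 3.0
print("cell counting    :", area_cells(poly))     # approximate, many more evaluations
```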
Physics-based enzyme design: predicting binding affinity and catalytic activity.
Sirin, Sarah; Pearlman, David A; Sherman, Woody
2014-12-01
Computational enzyme design is an emerging field that has yielded promising success stories, but where numerous challenges remain. Accurate methods to rapidly evaluate possible enzyme design variants could provide significant value when combined with experimental efforts by reducing the number of variants needed to be synthesized and speeding the time to reach the desired endpoint of the design. To that end, extending our computational methods to model the fundamental physical-chemical principles that regulate activity in a protocol that is automated and accessible to a broad population of enzyme design researchers is essential. Here, we apply a physics-based implicit solvent MM-GBSA scoring approach to enzyme design and benchmark the computational predictions against experimentally determined activities. Specifically, we evaluate the ability of MM-GBSA to predict changes in affinity for a steroid binder protein, catalytic turnover for a Kemp eliminase, and catalytic activity for α-Gliadin peptidase variants. Using the enzyme design framework developed here, we accurately rank the most experimentally active enzyme variants, suggesting that this approach could provide enrichment of active variants in real-world enzyme design applications. © 2014 Wiley Periodicals, Inc.
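As a schematic of how such physics-based scores are used to rank designs, the sketch below computes ΔG_bind = G_complex − G_receptor − G_ligand for a handful of hypothetical variants and checks the ranking against made-up experimental activities with a Spearman correlation. All names and numbers are placeholders; the actual MM-GBSA protocol in the paper involves structure preparation, sampling, and implicit-solvent energy terms not shown here.

```python
from scipy.stats import spearmanr

# Hypothetical per-variant energies (kcal/mol) and activities; placeholders, not real data.
variants = {
    # name: (G_complex, G_receptor, G_ligand, experimental_activity)
    "WT":    (-5230.1, -4100.4, -1095.2, 1.0),
    "V48A":  (-5238.7, -4101.0, -1095.2, 2.4),
    "L107F": (-5226.3, -4099.8, -1095.2, 0.6),
    "D170E": (-5241.9, -4102.5, -1095.2, 3.1),
}

def dg_bind(g_complex, g_receptor, g_ligand):
    """MM-GBSA-style binding estimate: more negative = stronger predicted binding."""
    return g_complex - g_receptor - g_ligand

scores = {name: dg_bind(*vals[:3]) for name, vals in variants.items()}
ranked = sorted(scores, key=scores.get)  # best (most negative) first
print("predicted ranking:", ranked)

rho, _ = spearmanr([scores[n] for n in variants],
                   [-variants[n][3] for n in variants])  # negate: lower dG ~ higher activity
print("Spearman rho vs. experiment:", round(rho, 2))
```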
Van Metre, P.C.
1990-01-01
A computer-program interface between a geographic-information system and a groundwater flow model links two unrelated software systems for use in developing the flow models. The interface program allows the modeler to compile and manage geographic components of a groundwater model within the geographic information system. A significant savings of time and effort is realized in developing, calibrating, and displaying the groundwater flow model. Four major guidelines were followed in developing the interface program: (1) no changes to the groundwater flow model code were to be made; (2) a data structure was to be designed within the geographic information system that follows the same basic data structure as the groundwater flow model; (3) the interface program was to be flexible enough to support all basic data options available within the model; and (4) the interface program was to be as efficient as possible in terms of computer time used and online-storage space needed. Because some programs in the interface are written in control-program language, the interface will run only on a computer with the PRIMOS operating system. (USGS)
Inventory of environmental impact models related to energy technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owen, P.T.; Dailey, N.S.; Johnson, C.A.
The purpose of this inventory is to identify and collect data on computer simulations and computational models related to the environmental effects of energy source development, energy conversion, or energy utilization. Information for 33 data fields was sought for each model reported. All of the information which could be obtained within the time alloted for completion of the project is presented for each model listed. Efforts will be continued toward acquiring the needed information. Readers who are interested in these particular models are invited to contact ESIC for assistance in locating them. In addition to the standard bibliographic information, other data fields of interest to modelers, such as computer hardware and software requirements, algorithms, applications, and existing model validation information, are included. Indexes are provided for contact person, acronym, keyword, and title. The models are grouped into the following categories: atmospheric transport, air quality, aquatic transport, terrestrial food chains, soil transport, aquatic food chains, water quality, dosimetry and human effects, animal effects, plant effects, and generalized environmental transport. Within these categories, the models are arranged alphabetically by last name of the contact person.
Study of the Use of Time-Mean Vortices to Generate Lift for MAV Applications
2011-05-31
A suspended microplate was fabricated via MEMS technology and driven to in-plane resonance via a Lorentz force... Computational effort centers around optimization of a range of parameters (geometry, frequency, amplitude of oscillation, etc.)...
NASA Technical Reports Server (NTRS)
Diaz, Manuel F.; Takamoto, Neal; Woolford, Barbara
1994-01-01
In a joint effort with Brooks AFB, Texas, the Flight Crew Support Division at JSC has begun a computer simulation and performance modeling program directed at establishing the predictive validity of software tools for modeling human performance during spaceflight. This paper addresses the utility of task network modeling for predicting the workload that astronauts are likely to encounter in extravehicular activities (EVA) during the Hubble Space Telescope (HST) repair mission. The intent of the study was to determine whether two EVA crewmembers and one intravehicular activity (IVA) crewmember could reasonably be expected to complete HST Wide Field/Planetary Camera (WFPC) replacement in the allotted time. Ultimately, examination of the points during HST servicing that may result in excessive workload will lead to recommendations to the HST Flight Systems and Servicing Project concerning (1) expectation of degraded performance, (2) the need to change task allocation across crewmembers, (3) the need to expand the timeline, and (4) the need to increase the number of EVA's.
A General Approach to Measuring Test-Taking Effort on Computer-Based Tests
ERIC Educational Resources Information Center
Wise, Steven L.; Gao, Lingyun
2017-01-01
There has been an increased interest in the impact of unmotivated test taking on test performance and score validity. This has led to the development of new ways of measuring test-taking effort based on item response time. In particular, Response Time Effort (RTE) has been shown to provide an assessment of effort down to the level of individual…
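One common way to operationalize Response Time Effort is as the proportion of items whose response time exceeds an item-specific threshold separating rapid guessing from solution behavior; the sketch below computes such an index. The thresholds and response times are illustrative, and the exact thresholding rule in the cited work may differ.

```python
def response_time_effort(response_times, thresholds):
    """Proportion of items answered with solution behavior (RT >= threshold)."""
    assert len(response_times) == len(thresholds)
    solution = [rt >= th for rt, th in zip(response_times, thresholds)]
    return sum(solution) / len(solution)

# Illustrative data: seconds spent per item and per-item rapid-guessing thresholds.
rts = [42.0, 3.1, 55.4, 2.0, 38.9, 61.2]
thresholds = [10.0, 10.0, 12.0, 8.0, 10.0, 15.0]

print("RTE =", round(response_time_effort(rts, thresholds), 2))  # 4/6, about 0.67
```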
NASA Astrophysics Data System (ADS)
Qin, C.; Hassanizadeh, S.
2013-12-01
Multiphase flow and species transport through thin porous layers are encountered in a number of industrial applications, such as fuel cells, filters, and hygiene products. To date, based on macroscale models such as Darcy's law, the modeling of flow and transport through such thin layers has mostly been performed in 3D discretized domains with many computational cells. But there are a number of problems with this approach. First, a proper representative elementary volume (REV) is not defined. Second, one needs to discretize a thin porous medium into computational cells whose size may be comparable to the pore sizes. This suggests that the traditional models are not applicable to such thin domains. Third, the interfacial conditions between neighboring layers are usually not well defined. Last, 3D modeling of a number of interacting thin porous layers often requires heavy computational efforts. So, to eliminate the drawbacks mentioned above, we propose a new approach to modeling multilayers of thin porous media as 2D interacting continua (see Fig. 1). Macroscale 2D governing equations are formulated in terms of thickness-averaged material properties. Also, the exchange of thermodynamic properties between neighboring layers is described by thickness-averaged quantities. In comparison to previous macroscale models, our model has the distinctive advantages that: (1) it is a rigorous, thermodynamics-based model; (2) it is formulated in terms of thickness-averaged material properties, which are easily measurable; and (3) it reduces 3D modeling to 2D, leading to a very significant reduction of computational effort. As an application, we employ the new approach in the study of liquid water flooding in the cathode of a polymer electrolyte fuel cell (PEFC). To highlight the advantages of the present model, we compare the results of water distribution with those obtained from the traditional 3D Darcy-based modeling. Finally, it is worth noting that, for specific case studies, a number of material properties in the model need to be determined experimentally, such as mass and heat exchange coefficients between neighboring layers. Fig. 1: Schematic representation of three thin porous layers, which may exchange mass, momentum, and energy. Also, a typical averaging domain (REV) is shown. Note that the layer thickness and thus the REV height can be spatially variable. Also, in reality, the layers are tightly stacked and there is no gap between them.
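As a sketch of the thickness-averaging step described above (in notation chosen here for illustration, not necessarily the authors' own), a layer quantity and a 2D, thickness-averaged balance with interlayer exchange terms can be written as:

```latex
% Thickness average of a pore-scale quantity s over layer i of thickness d_i(x,y):
\bar{s}_i(x,y,t) = \frac{1}{d_i(x,y)} \int_{0}^{d_i(x,y)} s(x,y,z,t)\,\mathrm{d}z

% Illustrative 2D, thickness-averaged mass balance for layer i, with porosity \phi_i,
% in-plane flux \bar{\mathbf{q}}_i, and exchange terms Q_i^{\pm} with the layers above/below:
\frac{\partial}{\partial t}\left( d_i\,\phi_i\,\bar{s}_i \right)
  + \nabla_{xy}\cdot\left( d_i\,\bar{\mathbf{q}}_i \right)
  = Q_i^{+} + Q_i^{-}
```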
Langley Ground Facilities and Testing in the 21st Century
NASA Technical Reports Server (NTRS)
Ambur, Damodar R.; Kegelman, Jerome T.; Kilgore, William A.
2010-01-01
A strategic approach for retaining and more efficiently operating the essential Langley Ground Testing Facilities in the 21st Century is presented. This effort takes advantage of the previously completed and ongoing studies at the Agency and National levels. This integrated approach takes into consideration the overall decline in test business base within the nation and reduced utilization in each of the Langley facilities with capabilities to test in the subsonic, transonic, supersonic, and hypersonic speed regimes. The strategy accounts for capability needs to meet the Agency programmatic requirements and strategic goals and to execute test activities in the most efficient and flexible facility operating structure. The structure currently being implemented at Langley offers agility to right-size our capability and capacity from a national perspective and to accommodate the dynamic nature of the testing needs, and will address the influence of existing and emerging analytical tools for design. The paradigm for testing in the retained facilities is to efficiently and reliably provide more accurate and high-quality test results at an affordable cost to support design information needs for flight regimes where the computational capability is not adequate and to verify and validate the existing and emerging computational tools. Each of the above goals is planned to be achieved, keeping in mind the increasing small industry customer base engaged in developing unpiloted aerial vehicles and commercial space transportation systems.
NASA Astrophysics Data System (ADS)
Bucks, Gregory Warren
Computers have become an integral part of how engineers complete their work, allowing them to collect and analyze data, model potential solutions, and aid in production through automation and robotics. In addition, computers are essential elements of the products themselves, from tennis shoes to construction materials. An understanding of how computers function, both at the hardware and software level, is essential for the next generation of engineers. Despite the need for engineers to develop a strong background in computing, little opportunity is given for engineering students to develop these skills. Learning to program is widely seen as a difficult task, requiring students to develop not only an understanding of specific concepts, but also a way of thinking. In addition, students are forced to learn a new tool, in the form of the programming environment employed, along with these concepts and thought processes. Because of this, many students will not develop a sufficient proficiency in programming, even after progressing through the traditional introductory programming sequence. This is a significant problem, especially in the engineering disciplines, where very few students receive more than one or two semesters' worth of instruction in an already crowded engineering curriculum. To address these issues, new pedagogical techniques must be investigated in an effort to enhance the ability of engineering students to develop strong computing skills. However, these efforts are hindered by the lack of published assessment instruments available for probing an individual's understanding of programming concepts across programming languages. Traditionally, programming knowledge has been assessed by producing written code in a specific language. This can be an effective method, but does not lend itself well to comparing the pedagogical impact of different programming environments, languages or paradigms. This dissertation presents a phenomenographic research study exploring the different ways of understanding that individuals hold of two programming concepts: conditional structures and repetition structures. This work lays the foundation for the development of language-independent assessment instruments, which can ultimately be used to assess the pedagogical implications of various programming environments.
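For readers outside computing, the two concepts the study probes can be illustrated in a few lines of Python: a conditional structure selects between actions based on a test, and repetition structures repeat actions over a collection or while a condition holds. The fragment is generic and is not drawn from the assessment instruments discussed in the dissertation.

```python
def classify_and_count(grades):
    """Conditional structure: branch on a test; repetition structure: loop over items."""
    passed = 0
    for g in grades:          # repetition (definite iteration over a collection)
        if g >= 60:           # conditional (two-way selection)
            passed += 1
        else:
            pass              # explicit 'do nothing' branch, for clarity
    return passed

# A second repetition form: indefinite iteration with a while loop.
n, total = 0, 0
while total < 100:            # repeat until the condition is no longer true
    n += 1
    total += n

print(classify_and_count([55, 72, 90, 43]), n)
```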
Simulation software: engineer processes before reengineering.
Lepley, C J
2001-01-01
People make decisions all the time using intuition. But what happens when you are asked: "Are you sure your predictions are accurate? How much will a mistake cost? What are the risks associated with this change?" Once a new process is engineered, it is difficult to analyze what would have been different if other options had been chosen. Simulating a process can help senior clinical officers solve complex patient flow problems and avoid wasted efforts. Simulation software can give you the data you need to make decisions. The author introduces concepts, methodologies, and applications of computer aided simulation to illustrate their use in making decisions to improve workflow design.
Astronomical Data and Information Visualization
NASA Astrophysics Data System (ADS)
Goodman, Alyssa A.
2010-01-01
As the size and complexity of data sets increase, the need to "see" them more clearly increases as well. In the past, many scientists saw "fancy" data and information visualization as necessary for "outreach," but not for research. In this talk, I will demonstrate, using specific examples, why more and more scientists--not just astronomers--are coming to rely upon the development of new visualization strategies not just to present their data, but to understand it. Principal examples will be drawn from the "Astronomical Medicine" project at Harvard's Initiative in Innovative Computing, and from the "Seamless Astronomy" effort, which is co-sponsored by the VAO (NASA/NSF) and Microsoft Research.
The Fermi Unix Environment - Dealing with Adolescence
NASA Astrophysics Data System (ADS)
Pordes, Ruth; Nicholls, Judy; Wicks, Matt
Fermilab's Computing Division started early in the definition, implementation, and promulgation of a common environment for users across the Laboratory's UNIX platforms and installations. Based on our experience over nearly five years, we discuss the status of the effort, ongoing developments and needs, and some analysis of where we could have done better, and we identify future directions to allow us to provide better and more complete service to our customers. In particular, with the power of the new PCs making enthusiastic converts of physicists to the PC world, we are faced with the challenge of expanding the paradigm to non-UNIX platforms in a uniform and consistent way.
Unsupervised Learning Through Randomized Algorithms for High-Volume High-Velocity Data (ULTRA-HV).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pinar, Ali; Kolda, Tamara G.; Carlberg, Kevin Thomas
Through long-term investments in computing, algorithms, facilities, and instrumentation, DOE is an established leader in massive-scale, high-fidelity simulations, as well as science-leading experimentation. In both cases, DOE is generating more data than it can analyze, and the problem is intensifying quickly. The need for advanced algorithms that can automatically convert the abundance of data into a wealth of useful information by discovering hidden structures is well recognized. Such efforts, however, are hindered by the massive volume of the data and its high velocity. Here, the challenge is developing unsupervised learning methods to discover hidden structure in high-volume, high-velocity data.
Multimodality image display station
NASA Astrophysics Data System (ADS)
Myers, H. Joseph
1990-07-01
The Multi-modality Image Display Station (MIDS) is designed for the use of physicians outside of the radiology department. Connected to a local area network or a host computer, it provides speedy access to digitized radiology images and written diagnostics needed by attending and consulting physicians near the patient bedside. Emphasis has been placed on low cost, high performance and ease of use. The work is being done as a joint study with the University of Texas Southwestern Medical Center at Dallas, and as part of a joint development effort with the Mayo Clinic. MIDS is a prototype, and should not be assumed to be an IBM product.
CAG12 - A CSCM based procedure for flow of an equilibrium chemically reacting gas
NASA Technical Reports Server (NTRS)
Green, M. J.; Davy, W. C.; Lombard, C. K.
1985-01-01
The Conservative Supra Characteristic Method (CSCM), an implicit upwind Navier-Stokes algorithm, is extended to the numerical simulation of flows in chemical equilibrium. The resulting computer code, known as Chemistry and Gasdynamics Implicit - Version 2 (CAG12), is described. First-order accurate results are presented for inviscid and viscous Mach 20 flows of air past a hemisphere-cylinder. The solution procedure captures the bow shock in a chemically reacting gas, a technique that is needed for simulating high-altitude, rarefied flows. In an initial effort to validate the code, the inviscid results are compared with published gasdynamic and chemistry solutions, and satisfactory agreement is obtained.
Making the connection: the VA-Regenstrief project.
Martin, D K
1992-01-01
The Regenstrief Automated Medical Record System is a well-established clinical information system with powerful facilities for querying and decision support. My colleagues and I introduced this system into the Indianapolis Veterans Affairs (VA) Medical Center by interfacing it to the institution's automated data-processing system, the Decentralized Hospital Computer Program (DHCP), using a recently standardized method for clinical data interchange. This article discusses some of the challenges encountered in that process, including the translation of vocabulary terms and maintenance of the software interface. Efforts such as these demonstrate the importance of standardization in medical informatics and the need for data standards at all levels of information exchange.
A Computing Infrastructure for Supporting Climate Studies
NASA Astrophysics Data System (ADS)
Yang, C.; Bambacus, M.; Freeman, S. M.; Huang, Q.; Li, J.; Sun, M.; Xu, C.; Wojcik, G. S.; Cahalan, R. F.; NASA Climate @ Home Project Team
2011-12-01
Climate change is one of the major challenges facing us on planet Earth in the 21st century. Scientists build many models to simulate the past and predict climate change for the coming decades or century. Most of the models run at low resolution, with some targeting high resolution in support of practical climate change preparedness. To calibrate and validate the models, millions of model runs are needed to find the best simulation and configuration. This paper introduces the NASA effort on the Climate@Home project to build a supercomputer based on advanced computing technologies, such as cloud computing, grid computing, and others. The Climate@Home computing infrastructure includes several aspects: 1) a cloud computing platform is utilized to manage potential spikes in access to the centralized components, such as the grid computing server for dispatching model runs and collecting results; 2) a grid computing engine is developed based on MapReduce to dispatch models and model configurations and to collect simulation results and contributing statistics; 3) a portal serves as the entry point for the project to provide management, sharing, and data exploration for end users; 4) scientists can access customized tools to configure model runs and visualize model results; 5) the public can access Twitter and Facebook to get the latest news about the project. This paper will introduce the latest progress of the project and demonstrate the operational system during the AGU fall meeting. It will also discuss how this technology can become a trailblazer for other climate studies and relevant sciences. It will share how the challenges in computation and software integration were solved.
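The dispatch-and-collect pattern of the grid computing engine can be sketched, in a drastically simplified form, as a map step that runs one model configuration per worker and a reduce step that aggregates the returned statistics. The sketch below uses Python's multiprocessing pool as a stand-in for the project's MapReduce engine; run_model and the configurations are placeholders, not the project's actual code.

```python
from multiprocessing import Pool
from statistics import mean

def run_model(config):
    """Placeholder for one climate-model run; returns a summary statistic."""
    resolution, forcing = config
    return {"config": config, "global_mean_temp": 14.0 + 0.1 * forcing - 0.01 * resolution}

def reduce_results(results):
    """Collect per-run statistics into an ensemble summary."""
    return mean(r["global_mean_temp"] for r in results)

if __name__ == "__main__":
    configs = [(res, f) for res in (1, 2, 4) for f in range(5)]   # hypothetical runs
    with Pool(processes=4) as pool:
        results = pool.map(run_model, configs)                    # "map": dispatch runs
    print("ensemble mean:", round(reduce_results(results), 3))    # "reduce": aggregate
```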
Computational strategies for three-dimensional flow simulations on distributed computer systems
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Weed, Richard A.
1995-01-01
This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
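Of the communication strategies listed above, the manager-worker pattern with a task queue is the easiest to illustrate: idle workers pull the next block of work, so uneven zones balance themselves. The original solver used PVM on workstation clusters, so the Python multiprocessing sketch below is only an analogy, with solve_zone standing in for a flow-solver sweep over one grid zone.

```python
from multiprocessing import Pool

def solve_zone(zone_id):
    """Placeholder for one flow-solver sweep over a grid zone."""
    work = sum(i * i for i in range(10_000 * (1 + zone_id % 3)))  # uneven workloads
    return zone_id, work

if __name__ == "__main__":
    zones = list(range(16))
    # imap_unordered behaves like a task queue: idle workers pull the next zone,
    # so faster workers naturally take on more zones (dynamic load balancing).
    with Pool(processes=4) as manager:
        for zone_id, _ in manager.imap_unordered(solve_zone, zones):
            print("zone", zone_id, "done")
```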
Computational strategies for three-dimensional flow simulations on distributed computer systems
NASA Astrophysics Data System (ADS)
Sankar, Lakshmi N.; Weed, Richard A.
1995-08-01
This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
Towards ethical decision support and knowledge management in neonatal intensive care.
Yang, L; Frize, M; Eng, P; Walker, R; Catley, C
2004-01-01
Recent studies in neonatal medicine, clinical nursing, and cognitive psychology have indicated the need to augment current decision-making practice in neonatal intensive care units with computerized, intelligent decision support systems. Rapid progress in artificial intelligence and knowledge management facilitates the design of collaborative ethical decision-support tools that allow clinicians to provide better support for parents facing inherently difficult choices, such as when to withdraw aggressive treatment. The appropriateness of using computers to support ethical decision-making is critically analyzed through research and literature review. In ethical dilemmas, multiple diverse participants need to communicate and function as a team to select the best treatment plan. In order to do this, physicians require reliable estimations of prognosis, while parents need a highly useable tool to help them assimilate complex medical issues and address their own value system. Our goal is to improve and structuralize the ethical decision-making that has become an inevitable part of modern neonatal care units. The paper contributes to clinical decision support by outlining the needs and basis for ethical decision support and justifying the proposed development efforts.
Kapur, Tina; Pieper, Steve; Fedorov, Andriy; Fillion-Robin, J-C; Halle, Michael; O'Donnell, Lauren; Lasso, Andras; Ungi, Tamas; Pinter, Csaba; Finet, Julien; Pujol, Sonia; Jagadeesan, Jayender; Tokuda, Junichi; Norton, Isaiah; Estepar, Raul San Jose; Gering, David; Aerts, Hugo J W L; Jakab, Marianna; Hata, Nobuhiko; Ibanez, Luiz; Blezek, Daniel; Miller, Jim; Aylward, Stephen; Grimson, W Eric L; Fichtinger, Gabor; Wells, William M; Lorensen, William E; Schroeder, Will; Kikinis, Ron
2016-10-01
The National Alliance for Medical Image Computing (NA-MIC) was launched in 2004 with the goal of investigating and developing an open source software infrastructure for the extraction of information and knowledge from medical images using computational methods. Several leading research and engineering groups participated in this effort that was funded by the US National Institutes of Health through a variety of infrastructure grants. This effort transformed 3D Slicer from an internal, Boston-based, academic research software application into a professionally maintained, robust, open source platform with an international leadership and developer and user communities. Critical improvements to the widely used underlying open source libraries and tools-VTK, ITK, CMake, CDash, DCMTK-were an additional consequence of this effort. This project has contributed to close to a thousand peer-reviewed publications and a growing portfolio of US and international funded efforts expanding the use of these tools in new medical computing applications every year. In this editorial, we discuss what we believe are gaps in the way medical image computing is pursued today; how a well-executed research platform can enable discovery, innovation and reproducible science ("Open Science"); and how our quest to build such a software platform has evolved into a productive and rewarding social engineering exercise in building an open-access community with a shared vision. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Bennett, Jerome (Technical Monitor)
2002-01-01
The NASA Center for Computational Sciences (NCCS) is a high-performance scientific computing facility operated, maintained and managed by the Earth and Space Data Computing Division (ESDCD) of NASA Goddard Space Flight Center's (GSFC) Earth Sciences Directorate. The mission of the NCCS is to advance leading-edge science by providing the best people, computers, and data storage systems to NASA's Earth and space sciences programs and those of other U.S. Government agencies, universities, and private institutions. Among the many computationally demanding Earth science research efforts supported by the NCCS in Fiscal Year 1999 (FY99) are the NASA Seasonal-to-Interannual Prediction Project, the NASA Search and Rescue Mission, Earth gravitational model development efforts, the National Weather Service's North American Observing System program, Data Assimilation Office studies, a NASA-sponsored project at the Center for Ocean-Land-Atmosphere Studies, a NASA-sponsored microgravity project conducted by researchers at the City University of New York and the University of Pennsylvania, the completion of a satellite-derived global climate data set, simulations of a new geodynamo model, and studies of Earth's torque. This document presents highlights of these research efforts and an overview of the NCCS, its facilities, and its people.
NASA Astrophysics Data System (ADS)
Memon, Aamir Mahmood; Liu, Qun; Memon, Khadim Hussain; Baloch, Wazir Ali; Memon, Asfandyar; Baset, Abdul
2015-07-01
Catch and effort data were analyzed to estimate the maximum sustainable yield (MSY) of King Soldier Bream, Argyrops spinifer (Forsskål, 1775, Family: Sparidae), and to evaluate the present status of the fish stocks exploited in Pakistani waters. The catch and effort data for the 25-year period 1985-2009 were analyzed using two computer software packages, CEDA (catch and effort data analysis) and ASPIC (a surplus production model incorporating covariates). The maximum catch of 3 458 t was observed in 1988 and the minimum catch of 1 324 t in 2005, while the average annual catch of A. spinifer over the 25 years was 2 500 t. The surplus production models of Fox, Schaefer, and Pella Tomlinson under three error assumptions of normal, log-normal and gamma are in the CEDA package, and the two surplus models of Fox and logistic are in the ASPIC package. In CEDA, the MSY was estimated by applying the initial proportion (IP) of 0.8, because the starting catch was approximately 80% of the maximum catch. Except for gamma, which showed maximization failures, the estimated MSY values using CEDA with the Fox surplus production model and the two remaining error assumptions were 1 692.08 t (R²=0.572) and 1 694.09 t (R²=0.606), respectively, and from the Schaefer and Pella Tomlinson models with the two error assumptions were 2 390.95 t (R²=0.563) and 2 380.06 t (R²=0.605), respectively. The MSY estimated by the Fox model was conservative compared to those from the Schaefer and Pella Tomlinson models. The MSY values from the Schaefer and Pella Tomlinson models were nearly the same. The computed values of MSY using the ASPIC computer software program with the two surplus production models of Fox and logistic were 1 498 t (R²=0.917) and 2 488 t (R²=0.897), respectively. The estimated values of MSY using CEDA were about 1 700-2 400 t and the values from ASPIC were 1 500-2 500 t. The estimates output by the CEDA and ASPIC packages indicate that the stock is overfished and needs effective management to reduce the fishing effort on the species in Pakistani waters.
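For reference, once a surplus-production model has been fitted, MSY follows in closed form from the intrinsic growth rate r and carrying capacity K: MSY = rK/4 for the Schaefer (logistic) model and MSY = rK/e for the common Fox parameterization dB/dt = rB ln(K/B). The sketch below evaluates both with hypothetical parameter values; the CEDA and ASPIC fits above estimate r and K from the actual catch-and-effort series.

```python
import math

def msy_schaefer(r, K):
    """Schaefer (logistic) surplus production: dB/dt = r*B*(1 - B/K); peak at B = K/2."""
    return r * K / 4.0

def msy_fox(r, K):
    """Fox surplus production: dB/dt = r*B*ln(K/B); peak at B = K/e."""
    return r * K / math.e

# Hypothetical parameters, for illustration only (not the fitted values for A. spinifer).
r, K = 0.35, 25_000.0   # intrinsic growth rate (1/yr), carrying capacity (t)
print("Schaefer MSY (t):", round(msy_schaefer(r, K)))   # rK/4
print("Fox MSY (t)     :", round(msy_fox(r, K)))        # rK/e
```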
IITET and shadow TT: an innovative approach to training at the point of need
NASA Astrophysics Data System (ADS)
Gross, Andrew; Lopez, Favio; Dirkse, James; Anderson, Darran; Berglie, Stephen; May, Christopher; Harkrider, Susan
2014-06-01
The Image Intensification and Thermal Equipment Training (IITET) project is a joint effort between Night Vision and Electronics Sensors Directorate (NVESD) Modeling and Simulation Division (MSD) and the Army Research Institute (ARI) Fort Benning Research Unit. The IITET effort develops a reusable and extensible training architecture that supports the Army Learning Model and trains Manned-Unmanned Teaming (MUM-T) concepts to Shadow Unmanned Aerial Systems (UAS) payload operators. The training challenge of MUM-T during aviation operations is that UAS payload operators traditionally learn few of the scout-reconnaissance skills and coordination appropriate to MUM-T at the schoolhouse. The IITET effort leveraged the simulation experience and capabilities at NVESD and ARI's research to develop a novel payload operator training approach consistent with the Army Learning Model. Based on the training and system requirements, the team researched and identified candidate capabilities in several distinct technology areas. The training capability will support a variety of training missions as well as a full campaign. Data from these missions will be captured in a fully integrated AAR capability, which will provide objective feedback to the user in near-real-time. IITET will be delivered via a combination of browser and video streaming technologies, eliminating the requirement for a client download and reducing user computer system requirements. The result is a novel UAS Payload Operator training capability, nested within an architecture capable of supporting a wide variety of training needs for air and ground tactical platforms and sensors, and potentially several other areas requiring vignette-based serious games training.
Collaborative Information Technologies
NASA Astrophysics Data System (ADS)
Meyer, William; Casper, Thomas
1999-11-01
Significant effort has been expended to provide infrastructure and to facilitate remote collaborations within the fusion community and beyond. Through the Office of Fusion Energy Science Information Technology Initiative, communication technologies utilized by the fusion community are being improved. The initial thrust of the initiative has been collaborative seminars and meetings. Under the initiative, 23 sites, both laboratory and university, were provided with hardware required to remotely view, or project, documents being presented. The hardware is capable of delivering documents to a web browser, or to compatible hardware, over ESNET in an access-controlled manner. The ability also exists for documents to originate from virtually any of the collaborating sites. In addition, RealNetwork servers are being tested to provide audio and/or video, in a non-interactive environment with MBONE providing two-way interaction where needed. Additional effort is directed at remote distributed computing, file systems, security, and standard data storage and retrieval methods. This work was supported by DoE contract No. W-7405-ENG-48.
Operational Use of GPS Navigation for Space Shuttle Entry
NASA Technical Reports Server (NTRS)
Goodman, John L.; Propst, Carolyn A.
2008-01-01
The STS-118 flight of the Space Shuttle Endeavour was the first shuttle mission flown with three Global Positioning System (GPS) receivers in place of the three legacy Tactical Air Navigation (TACAN) units. This marked the conclusion of a 15 year effort involving procurement, missionization, integration, and flight testing of a GPS receiver and a parallel effort to formulate and implement shuttle computer software changes to support GPS. The use of GPS data from a single receiver in parallel with TACAN during entry was successfully demonstrated by the orbiters Discovery and Atlantis during four shuttle missions in 2006 and 2007. This provided the confidence needed before flying the first all GPS, no TACAN flight with Endeavour. A significant number of lessons were learned concerning the integration of a software intensive navigation unit into a legacy avionics system. These lessons have been taken into consideration during vehicle design by other flight programs, including the vehicle that will replace the Space Shuttle, Orion.
Ni, Ji-Qin
2015-05-01
There was an increasing interest in reducing production and emission of air pollutants to improve air quality for animal feeding operations (AFOs) in the U.S. in the 21st century. Research was focused on identification, quantification, characterization, and modeling of air pollution; effects of emissions; and methodologies and technologies for scientific research and pollution control. Mitigation efforts focused on the pre-excretion, pre-release, pre-emission, and post-emission stages. More emphasis was given to reducing pollutant emissions than to improving indoor air quality. Research and demonstrations were generally continuations and improvements of previous efforts. Most demonstrated technologies were still at a limited scale of application. Future efforts are needed in many fundamental and applied research areas. Advancement in instrumentation, computer technology, and biological sciences and genetic engineering is critical to bringing about major changes in this area. Development in research and demonstration will depend on the actual political, economic, and environmental situations. Copyright © 2015 Elsevier Ltd. All rights reserved.
Center of excellence for small robots
NASA Astrophysics Data System (ADS)
Nguyen, Hoa G.; Carroll, Daniel M.; Laird, Robin T.; Everett, H. R.
2005-05-01
The mission of the Unmanned Systems Branch of SPAWAR Systems Center, San Diego (SSC San Diego) is to provide network-integrated robotic solutions for Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) applications, serving and partnering with industry, academia, and other government agencies. We believe the most important criterion for a successful acquisition program is producing a value-added end product that the warfighter needs, uses and appreciates. Through our accomplishments in the laboratory and field, SSC San Diego has been designated the Center of Excellence for Small Robots by the Office of the Secretary of Defense Joint Robotics Program. This paper covers the background, experience, and collaboration efforts by SSC San Diego to serve as the "Impedance-Matching Transformer" between the robotic user and technical communities. Special attention is given to our Unmanned Systems Technology Imperatives for Research, Development, Testing and Evaluation (RDT&E) of Small Robots. Active projects, past efforts, and architectures are provided as success stories for the Unmanned Systems Development Approach.
SCA Waveform Development for Space Telemetry
NASA Technical Reports Server (NTRS)
Mortensen, Dale J.; Kifle, Multi; Hall, C. Steve; Quinn, Todd M.
2004-01-01
The NASA Glenn Research Center is investigating and developing suitable reconfigurable radio architectures for future NASA missions. This effort is examining software-based open-architectures for space based transceivers, as well as common hardware platform architectures. The Joint Tactical Radio System's (JTRS) Software Communications Architecture (SCA) is a candidate for the software approach, but may need modifications or adaptations for use in space. An in-house SCA compliant waveform development focuses on increasing understanding of software defined radio architectures and more specifically the JTRS SCA. Space requirements put a premium on size, mass, and power. This waveform development effort is key to evaluating tradeoffs with the SCA for space applications. Existing NASA telemetry links, as well as Space Exploration Initiative scenarios, are the basis for defining the waveform requirements. Modeling and simulations are being developed to determine signal processing requirements associated with a waveform and a mission-specific computational burden. Implementation of the waveform on a laboratory software defined radio platform is proceeding in an iterative fashion. Parallel top-down and bottom-up design approaches are employed.
Quantitative computational models of molecular self-assembly in systems biology
Thomas, Marcus; Schwartz, Russell
2017-01-01
Molecular self-assembly is the dominant form of chemical reaction in living systems, yet efforts at systems biology modeling are only beginning to appreciate the need for and challenges to accurate quantitative modeling of self-assembly. Self-assembly reactions are essential to nearly every important process in cell and molecular biology and handling them is thus a necessary step in building comprehensive models of complex cellular systems. They present exceptional challenges, however, to standard methods for simulating complex systems. While the general systems biology world is just beginning to deal with these challenges, there is an extensive literature dealing with them for more specialized self-assembly modeling. This review will examine the challenges of self-assembly modeling, nascent efforts to deal with these challenges in the systems modeling community, and some of the solutions offered in prior work on self-assembly specifically. The review concludes with some consideration of the likely role of self-assembly in the future of complex biological system models more generally. PMID:28535149
Quantitative computational models of molecular self-assembly in systems biology.
Thomas, Marcus; Schwartz, Russell
2017-05-23
Molecular self-assembly is the dominant form of chemical reaction in living systems, yet efforts at systems biology modeling are only beginning to appreciate the need for and challenges to accurate quantitative modeling of self-assembly. Self-assembly reactions are essential to nearly every important process in cell and molecular biology and handling them is thus a necessary step in building comprehensive models of complex cellular systems. They present exceptional challenges, however, to standard methods for simulating complex systems. While the general systems biology world is just beginning to deal with these challenges, there is an extensive literature dealing with them for more specialized self-assembly modeling. This review will examine the challenges of self-assembly modeling, nascent efforts to deal with these challenges in the systems modeling community, and some of the solutions offered in prior work on self-assembly specifically. The review concludes with some consideration of the likely role of self-assembly in the future of complex biological system models more generally.
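A minimal concrete instance of the kind of reaction such models must handle is reversible dimerization, 2A ⇌ D, integrated below with mass-action kinetics and arbitrary rate constants. Real self-assembly systems involve many more species, combinatorial intermediates, and the stochastic and spatial effects the review discusses, so this is only a toy illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_on, k_off = 1.0e-3, 1.0e-2   # arbitrary association/dissociation rates (toy values)

def rhs(t, y):
    """Mass-action kinetics for reversible dimerization 2A <-> D."""
    A, D = y
    assoc = k_on * A * A
    dissoc = k_off * D
    return [-2 * assoc + 2 * dissoc, assoc - dissoc]

sol = solve_ivp(rhs, t_span=(0.0, 500.0), y0=[100.0, 0.0], dense_output=True)
A_end, D_end = sol.y[:, -1]
print(f"monomer: {A_end:.2f}, dimer: {D_end:.2f}")   # mass conserved: A + 2D = 100
```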
Development of a Space Radiation Monte Carlo Computer Simulation
NASA Technical Reports Server (NTRS)
Pinsky, Lawrence S.
1997-01-01
The ultimate purpose of this effort is to undertake the development of a computer simulation of the radiation environment encountered in spacecraft which is based upon the Monte Carlo technique. The current plan is to adapt and modify a Monte Carlo calculation code known as FLUKA, which is presently used in high energy and heavy ion physics, to simulate the radiation environment present in spacecraft during missions. The initial effort would be directed towards modeling the MIR and Space Shuttle environments, but the long range goal is to develop a program for the accurate prediction of the radiation environment likely to be encountered on future planned endeavors such as the Space Station, a Lunar Return Mission, or a Mars Mission. The longer the mission, especially those which will not have the shielding protection of the earth's magnetic field, the more critical the radiation threat will be. The ultimate goal of this research is to produce a code that will be useful to mission planners and engineers who need to have detailed projections of radiation exposures at specified locations within the spacecraft and for either specific times during the mission or integrated over the entire mission. In concert with the development of the simulation, it is desired to integrate it with a state-of-the-art interactive 3-D graphics-capable analysis package known as ROOT, to allow easy investigation and visualization of the results. The efforts reported on here include the initial development of the program and the demonstration of the efficacy of the technique through a model simulation of the MIR environment. This information was used to write a proposal to obtain follow-on permanent funding for this project.
Computerizing the Accounting Curriculum.
ERIC Educational Resources Information Center
Nash, John F.; England, Thomas G.
1986-01-01
Discusses the use of computers in college accounting courses. Argues that the success of new efforts in using computers in teaching accounting is dependent upon increasing instructors' computer skills, and choosing appropriate hardware and software, including commercially available business software packages. (TW)
Computers in Schools: White Boys Only?
ERIC Educational Resources Information Center
Hammett, Roberta F.
1997-01-01
Discusses the role of computers in today's world and the construction of computer use attitudes, such as gender gaps. Suggests how schools might close the gaps. Includes a brief explanation about how facility with computers is important for women in their efforts to gain equitable treatment in all aspects of their lives. (PA)
Computers and Instruction: Implications of the Rising Tide of Criticism for Reading Education.
ERIC Educational Resources Information Center
Balajthy, Ernest
1988-01-01
Examines two major reasons that schools have adopted computers without careful prior examination and planning. Surveys a variety of criticisms targeted toward some aspects of computer-based instruction in reading in an effort to direct attention to the beneficial implications of computers in the classroom. (MS)
Preparing Future Secondary Computer Science Educators
ERIC Educational Resources Information Center
Ajwa, Iyad
2007-01-01
Although nearly every college offers a major in computer science, many computer science teachers at the secondary level have received little formal training. This paper presents details of a project that could make a significant contribution to national efforts to improve computer science education by combining teacher education and professional…
Computers for the Faculty: How on a Limited Budget.
ERIC Educational Resources Information Center
Arman, Hal; Kostoff, John
An informal investigation of the use of computers at Delta College (DC) in Michigan revealed reasonable use of computers by faculty in disciplines such as mathematics, business, and technology, but very limited use in the humanities and social sciences. In an effort to increase faculty computer usage, DC decided to make computers available to any…
Possible Computer Vision Systems and Automated or Computer-Aided Edging and Trimming
Philip A. Araman
1990-01-01
This paper discusses research which is underway to help our industry reduce costs, increase product volume and value recovery, and market more accurately graded and described products. The research is part of a team effort to help the hardwood sawmill industry automate with computer vision systems, and computer-aided or computer controlled processing. This paper...
Biomechanics of Head, Neck, and Chest Injury Prevention for Soldiers: Phase 2 and 3
2016-08-01
understanding of the biomechanics of the head and brain. Task 2.3 details the computational modeling efforts conducted to evaluate the response of the cervical spine and the effects of cervical arthrodesis and arthroplasty during... This section also details the progress made on the development of a testing apparatus to evaluate cervical spine implants in survivable loading scenarios...
Limits on fundamental limits to computation.
Markov, Igor L
2014-08-14
An indispensable part of our personal and working lives, computing has also become essential to industries and governments. Steady improvements in computer hardware have been supported by periodic doubling of transistor densities in integrated circuits over the past fifty years. Such Moore scaling now requires ever-increasing efforts, stimulating research in alternative hardware and stirring controversy. To help evaluate emerging technologies and increase our understanding of integrated-circuit scaling, here I review fundamental limits to computation in the areas of manufacturing, energy, physical space, design and verification effort, and algorithms. To outline what is achievable in principle and in practice, I recapitulate how some limits were circumvented, and compare loose and tight limits. Engineering difficulties encountered by emerging technologies may indicate yet unknown limits.
Computational electronics and electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, C C
The Computational Electronics and Electromagnetics thrust area serves as the focal point for Engineering R and D activities for developing computer-based design and analysis tools. Representative applications include design of particle accelerator cells and beamline components; design of transmission line components; engineering analysis and design of high-power (optical and microwave) components; photonics and optoelectronics circuit design; electromagnetic susceptibility analysis; and antenna synthesis. The FY-97 effort focuses on development and validation of (1) accelerator design codes; (2) 3-D massively parallel, time-dependent EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; and (5) development of beam control algorithms coupled to beam transport physics codes. These efforts are in association with technology development in the power conversion, nondestructive evaluation, and microtechnology areas. The efforts complement technology development in Lawrence Livermore National programs.
Office workers' computer use patterns are associated with workplace stressors.
Eijckelhof, Belinda H W; Huysmans, Maaike A; Blatter, Birgitte M; Leider, Priscilla C; Johnson, Peter W; van Dieën, Jaap H; Dennerlein, Jack T; van der Beek, Allard J
2014-11-01
This field study examined associations between workplace stressors and office workers' computer use patterns. We collected keyboard and mouse activities of 93 office workers (68F, 25M) for approximately two work weeks. Linear regression analyses examined the associations between self-reported effort, reward, overcommitment, and perceived stress and software-recorded computer use duration, number of short and long computer breaks, and pace of input device usage. Daily duration of computer use was, on average, 30 min longer for workers with high compared to low levels of overcommitment and perceived stress. The number of short computer breaks (30 s-5 min long) was approximately 20% lower for those with high compared to low effort and for those with low compared to high reward. These outcomes support the hypothesis that office workers' computer use patterns vary across individuals with different levels of workplace stressors. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
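The core analysis (regressing software-recorded computer-use outcomes on self-reported stressor scores) can be sketched with ordinary least squares, as below. The data are synthetic placeholders; the study's models additionally handled covariates and repeated daily measurements in ways a plain OLS fit does not.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic placeholders: stressor scores and daily computer-use duration (minutes).
n = 93
overcommitment = rng.normal(0, 1, n)
perceived_stress = rng.normal(0, 1, n)
duration = 300 + 15 * overcommitment + 12 * perceived_stress + rng.normal(0, 40, n)

# Ordinary least squares: duration ~ intercept + overcommitment + perceived stress.
X = np.column_stack([np.ones(n), overcommitment, perceived_stress])
beta, *_ = np.linalg.lstsq(X, duration, rcond=None)
print("intercept, b_overcommitment, b_stress:", np.round(beta, 1))
```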
Neurocomputational mechanisms underlying subjective valuation of effort costs
Giehl, Kathrin; Sillence, Annie
2017-01-01
In everyday life, we have to decide whether it is worth exerting effort to obtain rewards. Effort can be experienced in different domains, with some tasks requiring significant cognitive demand and others being more physically effortful. The motivation to exert effort for reward is highly subjective and varies considerably across the different domains of behaviour. However, very little is known about the computational or neural basis of how different effort costs are subjectively weighed against rewards. Is there a common, domain-general system of brain areas that evaluates all costs and benefits? Here, we used computational modelling and functional magnetic resonance imaging (fMRI) to examine the mechanisms underlying value processing in both the cognitive and physical domains. Participants were trained on two novel tasks that parametrically varied either cognitive or physical effort. During fMRI, participants indicated their preferences between a fixed low-effort/low-reward option and a variable higher-effort/higher-reward offer for each effort domain. Critically, reward devaluation by both cognitive and physical effort was subserved by a common network of areas, including the dorsomedial and dorsolateral prefrontal cortex, the intraparietal sulcus, and the anterior insula. Activity within these domain-general areas also covaried negatively with reward and positively with effort, suggesting an integration of these parameters within these areas. Additionally, the amygdala appeared to play a unique, domain-specific role in processing the value of rewards associated with cognitive effort. These results are the first to reveal the neurocomputational mechanisms underlying subjective cost–benefit valuation across different domains of effort and provide insight into the multidimensional nature of motivation. PMID:28234892
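The computational modelling referred to can be sketched as a subjective-value function that discounts reward by effort, feeding a softmax choice rule between the fixed low-effort/low-reward option and the variable offer. The parabolic discount and the parameter values below are illustrative assumptions, not necessarily the model form selected in the study.

```python
import numpy as np

def subjective_value(reward, effort, k):
    """Parabolic effort discounting: value falls with the square of effort level."""
    return reward - k * effort ** 2

def p_choose_offer(offer_reward, offer_effort, base_reward, base_effort, k, beta):
    """Softmax probability of accepting the higher-effort/higher-reward offer."""
    dv = subjective_value(offer_reward, offer_effort, k) - \
         subjective_value(base_reward, base_effort, k)
    return 1.0 / (1.0 + np.exp(-beta * dv))

# Illustrative parameters: discount rate k and choice stochasticity beta.
k, beta = 0.2, 1.5
for effort in range(1, 6):                      # increasing effort levels of the offer
    p = p_choose_offer(offer_reward=4.0, offer_effort=effort,
                       base_reward=1.0, base_effort=1.0, k=k, beta=beta)
    print(f"effort level {effort}: P(accept offer) = {p:.2f}")
```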
Machine learning in materials informatics: recent applications and prospects
NASA Astrophysics Data System (ADS)
Ramprasad, Rampi; Batra, Rohit; Pilania, Ghanshyam; Mannodi-Kanakkithodi, Arun; Kim, Chiho
2017-12-01
Propelled partly by the Materials Genome Initiative, and partly by the algorithmic developments and the resounding successes of data-driven efforts in other domains, informatics strategies are beginning to take shape within materials science. These approaches lead to surrogate machine learning models that enable rapid predictions based purely on past data rather than by direct experimentation or by computations/simulations in which fundamental equations are explicitly solved. Data-centric informatics methods are becoming useful to determine material properties that are hard to measure or compute using traditional methods—due to the cost, time or effort involved—but for which reliable data either already exists or can be generated for at least a subset of the critical cases. Predictions are typically interpolative, involving fingerprinting a material numerically first, and then following a mapping (established via a learning algorithm) between the fingerprint and the property of interest. Fingerprints, also referred to as "descriptors", may be of many types and scales, as dictated by the application domain and needs. Predictions may also be extrapolative—extending into new materials spaces—provided prediction uncertainties are properly taken into account. This article attempts to provide an overview of some of the recent successful data-driven "materials informatics" strategies undertaken in the last decade, with particular emphasis on the fingerprint or descriptor choices. The review also identifies some challenges the community is facing and those that should be overcome in the near future.
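The fingerprint-then-learn pipeline described here reduces to a few lines: represent each material as a numerical descriptor vector, fit a surrogate regression model on known property values, and predict for new candidates. The sketch below uses kernel ridge regression on random, synthetic fingerprints purely to show the shape of the workflow; it is not tied to any particular materials data set.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-ins: 200 "materials", each fingerprinted by a 10-component descriptor,
# with a property that is some unknown function of the fingerprint plus noise.
X = rng.uniform(size=(200, 10))
y = X[:, 0] ** 2 + np.sin(3 * X[:, 1]) + 0.05 * rng.normal(size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=1.0)   # surrogate property model
model.fit(X_train, y_train)

print("R^2 on held-out materials:", round(model.score(X_test, y_test), 3))
```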
Bennie, Jason A.; Kolbe-Alexander, Tracy; Meester, Femke De
2018-01-01
Prolonged sitting has been linked to adverse health outcomes; therefore, we developed and examined a web-based, computer-tailored workplace sitting intervention. As we had previously shown good effectiveness, the next stage was to conduct a dissemination study. This study reports on the dissemination efforts of a health promotion organisation, associated costs, reach achieved, and attributes of the website users. The organisation systematically registered all the time and resources invested to promote the intervention. Website usage statistics (reach) and descriptive statistics (website users’ attributes) were also assessed. Online strategies (promotion on their homepage; sending e-mails, newsletters, Twitter, Facebook and LinkedIn posts to professional partners) were the main dissemination methods. The total time investment was 25.6 h, which cost approximately 845 EUR in salaries. After sixteen months, 1599 adults had visited the website and 1500 (93.8%) completed the survey to receive personalized sitting advice. This sample was 38.3 ± 11.0 years, mainly female (76.9%), college/university educated (89.0%), highly sedentary (88.5% sat >8 h/day) and intending to change (93.0%) their sitting. Given the small time and money investment, these outcomes are positive and indicate the potential for wide-scale dissemination. However, more efforts are needed to reach men, non-college/university educated employees, and those not intending behavioural change. PMID:29789491
Cocker, Katrien De; Cardon, Greet; Bennie, Jason A; Kolbe-Alexander, Tracy; Meester, Femke De; Vandelanotte, Corneel
2018-05-22
Prolonged sitting has been linked to adverse health outcomes; therefore, we developed and examined a web-based, computer-tailored workplace sitting intervention. As we had previously shown good effectiveness, the next stage was to conduct a dissemination study. This study reports on the dissemination efforts of a health promotion organisation, associated costs, reach achieved, and attributes of the website users. The organisation systematically registered all the time and resources invested to promote the intervention. Website usage statistics (reach) and descriptive statistics (website users' attributes) were also assessed. Online strategies (promotion on their homepage; sending e-mails, newsletters, Twitter, Facebook and LinkedIn posts to professional partners) were the main dissemination methods. The total time investment was 25.6 h, which cost approximately 845 EUR in salaries. After sixteen months, 1599 adults had visited the website and 1500 (93.8%) completed the survey to receive personalized sitting advice. This sample was 38.3 ± 11.0 years, mainly female (76.9%), college/university educated (89.0%), highly sedentary (88.5% sat >8 h/day) and intending to change (93.0%) their sitting. Given the small time and money investment, these outcomes are positive and indicate the potential for wide-scale dissemination. However, more efforts are needed to reach men, non-college/university educated employees, and those not intending behavioural change.
Validation and verification of the laser range safety tool (LRST)
NASA Astrophysics Data System (ADS)
Kennedy, Paul K.; Keppler, Kenneth S.; Thomas, Robert J.; Polhamus, Garrett D.; Smith, Peter A.; Trevino, Javier O.; Seaman, Daniel V.; Gallaway, Robert A.; Crockett, Gregg A.
2003-06-01
The U.S. Dept. of Defense (DOD) is currently developing and testing a number of High Energy Laser (HEL) weapons systems. DOD range safety officers now face the challenge of designing safe methods of testing HELs on DOD ranges. In particular, safety officers need to ensure that diffuse and specular reflections from HEL system targets, as well as direct beam paths, are contained within DOD boundaries. If both the laser source and the target are moving, as they are for the Airborne Laser (ABL), a complex series of calculations is required and manual calculations are impractical. Over the past 5 years, the Optical Radiation Branch of the Air Force Research Laboratory (AFRL/HEDO), the ABL System Program Office, Logicon-RDA, and Northrup-Grumman have worked together to develop a computer model called the Laser Range Safety Tool (LRST), specifically designed for HEL reflection hazard analyses. The code, which is still under development, is currently tailored to support the ABL program. AFRL/HEDO has led an LRST Validation and Verification (V&V) effort since 1998, in order to determine if code predictions are accurate. This paper summarizes LRST V&V efforts to date, including: i) comparison of code results with laboratory measurements of reflected laser energy and with reflection measurements made during actual HEL field tests, and ii) validation of LRST's hazard zone computations.
Comparing New Zealand's 'Middle Out' health information technology strategy with other OECD nations.
Bowden, Tom; Coiera, Enrico
2013-05-01
Implementation of efficient, universally applied, computer to computer communications is a high priority for many national health systems. As a consequence, much effort has been channelled into finding ways in which a patient's previous medical history can be made accessible when needed. A number of countries have attempted to share patients' records, with varying degrees of success. While most efforts to create record-sharing architectures have relied upon government-provided strategy and funding, New Zealand has taken a different approach. Like most British Commonwealth nations, New Zealand has a 'hybrid' publicly/privately funded health system. However its information technology infrastructure and automation has largely been developed by the private sector, working closely with regional and central government agencies. Currently the sector is focused on finding ways in which patient records can be shared amongst providers across three different regions. New Zealand's healthcare IT model combines government contributed funding, core infrastructure, facilitation and leadership with private sector investment and skills and is being delivered via a set of controlled experiments. The net result is a 'Middle Out' approach to healthcare automation. 'Middle Out' relies upon having a clear, well-articulated health-reform strategy and a determination by both public and private sector organisations to implement useful healthcare IT solutions by working closely together. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
The TeraShake Computational Platform for Large-Scale Earthquake Simulations
NASA Astrophysics Data System (ADS)
Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas
Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for ever larger problems. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.
Stetson, Peter D.; McKnight, Lawrence K.; Bakken, Suzanne; Curran, Christine; Kubose, Tate T.; Cimino, James J.
2002-01-01
Medical errors are common, costly and often preventable. Work in understanding the proximal causes of medical errors demonstrates that systems failures predispose to adverse clinical events. Most of these systems failures are due to lack of appropriate information at the appropriate time during the course of clinical care. Problems with clinical communication are common proximal causes of medical errors. We have begun a project designed to measure the impact of wireless computing on medical errors. We report here on our efforts to develop an ontology representing the intersection of medical errors, information needs and the communication space. We will use this ontology to support the collection, storage and interpretation of project data. The ontology’s formal representation of the concepts in this novel domain will help guide the rational deployment of our informatics interventions. A real-life scenario is evaluated using the ontology in order to demonstrate its utility.
Computerization of Mental Health Integration Complexity Scores at Intermountain Healthcare
Oniki, Thomas A.; Rodrigues, Drayton; Rahman, Noman; Patur, Saritha; Briot, Pascal; Taylor, David P.; Wilcox, Adam B.; Reiss-Brennan, Brenda; Cannon, Wayne H.
2014-01-01
Intermountain Healthcare’s Mental Health Integration (MHI) Care Process Model (CPM) contains formal scoring criteria for assessing a patient’s mental health complexity as “mild,” “medium,” or “high” based on patient data. The complexity score attempts to assist Primary Care Physicians in assessing the mental health needs of their patients and what resources will need to be brought to bear. We describe an effort to computerize the scoring. Informatics and MHI personnel collaboratively and iteratively refined the criteria to make them adequately explicit and reflective of MHI objectives. When tested on retrospective data of 540 patients, the clinician agreed with the computer’s conclusion in 52.8% of the cases (285/540). We considered the analysis sufficiently successful to begin piloting the computerized score in prospective clinical care. So far in the pilot, clinicians have agreed with the computer in 70.6% of the cases (24/34). PMID:25954401
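The scoring criteria themselves are not reproduced in the abstract, so the following sketch only illustrates the general pattern of computerizing explicit, rule-based complexity scoring over coded patient data. The fields, thresholds, and cut points are hypothetical placeholders, not Intermountain's MHI criteria.

```python
def mhi_complexity(patient):
    """Toy rule-based complexity classifier ("mild"/"medium"/"high").
    All fields and thresholds are hypothetical placeholders."""
    score = 0
    phq9 = patient.get("phq9", 0)
    score += 2 if phq9 >= 20 else (1 if phq9 >= 10 else 0)
    score += 1 if patient.get("chronic_conditions", 0) >= 2 else 0
    score += 1 if patient.get("prior_psych_hospitalization", False) else 0
    if score >= 3:
        return "high"
    return "medium" if score == 2 else "mild"

print(mhi_complexity({"phq9": 14, "chronic_conditions": 3}))  # -> "medium"
```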
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aspuru-Guzik, Alan
2016-11-04
Clean, affordable, and renewable energy sources are urgently needed to satisfy the tens of terawatts (TW) of energy humanity requires. Solar cells are one promising choice to replace traditional energy sources. Our broad efforts have expanded the knowledge of possible donor materials for organic photovoltaics, while increasing worldwide access to our results through the Clean Energy Project database (www.molecularspace.org). Machine learning techniques, including Gaussian processes, have been used to calibrate frontier molecular orbital energies and OPV bulk properties (open-circuit voltage, percent conversion efficiencies, and short-circuit current). This grant allowed us to delve into the solid-state properties of OPVs (charge-carrier dynamics). One particular example allowed us to predict charge-carrier dynamics and make predictions about future hydrogen-bonded materials.
A DIY Ultrasonic Signal Generator for Sound Experiments
NASA Astrophysics Data System (ADS)
Riad, Ihab F.
2018-02-01
Many physics departments around the world have electronic and mechanical workshops attached to them that can help build experimental setups and instruments for research and the training of undergraduate students. The workshops are usually run by experienced technicians and equipped with expensive lathes, computer numerical control (CNC) machines, electric measuring instruments, and several other essential tools. However, in developing countries such as Sudan, the lack of qualified technicians and adequately equipped workshops hampers efforts by these departments to supplement their laboratories with the equipment they need. The only other option is to buy the needed equipment from specialized manufacturers. The latter option is not feasible for departments in developing countries, where funding for education and research is scarce and equipment from these manufacturers is typically too expensive. These departments struggle significantly in equipping undergraduate teaching laboratories, and here we propose one way to address this.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald L. Boring; David I. Gertman; Jeffrey C. Joe
2005-09-01
An ongoing issue within human-computer interaction (HCI) is the need for simplified or "discount" methods. The current economic slowdown has necessitated innovative methods that are results-driven and cost-effective. The myriad methods of design and usability are currently being cost-justified, and new techniques are actively being explored that meet current budgets and needs. Recent efforts in human reliability analysis (HRA) are highlighted by the ten-year development of the Standardized Plant Analysis Risk HRA (SPAR-H) method. The SPAR-H method has been used primarily for determining human-centered risk at nuclear power plants. The SPAR-H method, however, shares task analysis underpinnings with HCI. Despite this methodological overlap, there is currently no HRA approach deployed in heuristic usability evaluation. This paper presents an extension of the existing SPAR-H method to be used as part of heuristic usability evaluation in HCI.
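SPAR-H quantification rests on scaling a nominal human error probability (HEP) by performance shaping factor (PSF) multipliers, which is the machinery an HCI extension would reuse. The sketch below shows that arithmetic under simplified assumptions: the multiplier values are placeholders rather than the published SPAR-H tables, and the bounding adjustment is applied whenever the composite multiplier exceeds one, which simplifies the method's stated conditions.

```python
def spar_h_style_hep(nominal_hep, psf_multipliers):
    """Sketch of SPAR-H-style quantification: scale a nominal HEP by the product
    of PSF multipliers, then bound the result so it stays below 1.0.
    Values here are illustrative, not the published SPAR-H worksheets."""
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    if composite > 1.0:
        # Commonly cited SPAR-H bounding form for large composite multipliers
        return (nominal_hep * composite) / (nominal_hep * (composite - 1.0) + 1.0)
    return nominal_hep * composite

# Example: nominal action HEP of 1e-3 degraded by two PSFs (x10 and x5)
print(spar_h_style_hep(1e-3, [10, 5]))
```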
Enhancing the many-to-many relations across IHE document sharing communities.
Ribeiro, Luís S; Costa, Carlos; Oliveira, José Luís
2012-01-01
The Integrating Healthcare Enterprise (IHE) initiative is an ongoing project aiming to enable true inter-site interoperability in the health IT field. IHE is a work in progress, and many challenges need to be overcome before healthcare institutions can share patient clinical records transparently and effortlessly. Configuring, deploying, and testing an IHE document sharing community requires a significant effort to plan and maintain the supporting IT infrastructure. With the new paradigm of cloud computing, it is now possible to launch software devices on demand and pay according to usage. This paper presents a framework designed to expedite the creation of IHE document sharing communities. It provides semi-ready templates of sharing communities that can be customized according to each community's needs. The framework is a meeting point for healthcare institutions, creating a favourable environment that might converge in new inter-institutional professional relationships and eventually the creation of new Affinity Domains.
Adoption of computer-assisted learning in medical education: the educators' perspective.
Schifferdecker, Karen E; Berman, Norm B; Fall, Leslie H; Fischer, Martin R
2012-11-01
Computer-assisted learning (CAL) in medical education has been shown to be effective in the achievement of learning outcomes, but requires the input of significant resources and development time. This study examines the key elements and processes that led to the widespread adoption of a CAL program in undergraduate medical education, the Computer-assisted Learning in Paediatrics Program (CLIPP). It then considers the relative importance of elements drawn from existing theories and models for technology adoption and other studies on CAL in medical education to inform the future development, implementation and testing of CAL programs in medical education. The study used a mixed-methods explanatory design. All paediatric clerkship directors (CDs) using CLIPP were recruited to participate in a self-administered, online questionnaire. Semi-structured interviews were then conducted with a random sample of CDs to further explore the quantitative results. Factors that facilitated adoption included CLIPP's ability to fill gaps in exposure to core clinical problems, the use of a national curriculum, development by CDs, and the meeting of CDs' desires to improve teaching and student learning. An additional facilitating factor was that little time and effort were needed to implement CLIPP within a clerkship. The quantitative findings were mostly corroborated by the qualitative findings. This study indicates issues that are important in the consideration and future exploration of the development and implementation of CAL programs in medical education. The promise of CAL as a method of enhancing the process and outcomes of medical education, and its cost, increase the need for future CAL funders and developers to pay equal attention to the needs of potential adopters and the development process as they do to the content and tools in the CAL program. Important questions that remain on the optimal design, use and integration of CAL should be addressed in order to adequately inform future development. Support is needed for studies that address these critical areas. © Blackwell Publishing Ltd 2012.
NASA Technical Reports Server (NTRS)
Crabb, Michele D.; Woodrow, Thomas S. (Technical Monitor)
1995-01-01
With the fast-growing popularity of the Internet, many organizations are racing to get onto the on-ramp to the Information Superhighway. However, with frequent headlines such as 'Hackers' break in at General Electric raises questions about the Net's Security', 'Internet Security Imperiled - Hackers steal data that could threaten computers world-wide' and 'Stanford Computer system infiltrated; Security fears grow', organizations find themselves rethinking their approach to the on-ramp. Is the Internet safe? What do I need to do to protect my organization? Will hackers try to break into my systems? These are questions many organizations are asking themselves today. In order to safely travel along the Information Superhighway, organizations need a strong security framework. Developing such a framework for a computer site, whether it be just a few dozen hosts or several thousand, is not an easy task. The security infrastructure for a site is often developed piece-by-piece in response to security incidents which have affected that site over time. Or worse yet, no coordinated effort has been dedicated toward security. The end result is that many sites are still poorly prepared to handle the security dangers of the Internet. This paper presents guidelines for building a successful security infrastructure. The problem is addressed in a cookbook-style manner. First is a discussion on how to identify your assets and evaluate the threats to those assets; next are suggestions and tips for identifying the weak areas in your security armor. Armed with this information, you can begin to think about what you really need for your site and what you can afford. In this stage of the process, we examine the different categories of security tools and products that are available and then present some tips for deciding what is best for your site.
CCARES: A computer algorithm for the reliability analysis of laminated CMC components
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Gyekenyesi, John P.
1993-01-01
Structural components produced from laminated CMC (ceramic matrix composite) materials are being considered for a broad range of aerospace applications that include various structural components for the national aerospace plane, the space shuttle main engine, and advanced gas turbines. Specifically, these applications include segmented engine liners, small missile engine turbine rotors, and exhaust nozzles. Use of these materials allows for improvements in fuel efficiency due to increased engine temperatures and pressures, which in turn generate more power and thrust. Furthermore, this class of materials offers significant potential for raising the thrust-to-weight ratio of gas turbine engines by tailoring directions of high specific reliability. The emerging composite systems, particularly those with silicon nitride or silicon carbide matrix, can compete with metals in many demanding applications. Laminated CMC prototypes have already demonstrated functional capabilities at temperatures approaching 1400 C, which is well beyond the operational limits of most metallic materials. Laminated CMC material systems have several mechanical characteristics which must be carefully considered in the design process. Test bed software programs are needed that incorporate stochastic design concepts that are user friendly, computationally efficient, and have flexible architectures that readily incorporate changes in design philosophy. The CCARES (Composite Ceramics Analysis and Reliability Evaluation of Structures) program is representative of an effort to fill this need. CCARES is a public domain computer algorithm, coupled to a general purpose finite element program, which predicts the fast fracture reliability of a structural component under multiaxial loading conditions.
Future Computer Requirements for Computational Aerodynamics
NASA Technical Reports Server (NTRS)
1978-01-01
Recent advances in computational aerodynamics are discussed as well as motivations for and potential benefits of a National Aerodynamic Simulation Facility having the capability to solve fluid dynamic equations at speeds two to three orders of magnitude faster than presently possible with general computers. Two contracted efforts to define processor architectures for such a facility are summarized.
2010-06-01
Dates covered: April 2009 – January 2010. Title: Emerging Neuromorphic Computing Architectures and Enabling... The highly cross-disciplinary emerging field of neuromorphic computing architectures for cognitive information processing applications... belief systems, software, computer engineering, etc. In our effort to develop cognitive systems atop a neuromorphic computing architecture, we explored
Overview 1993: Computational applications
NASA Technical Reports Server (NTRS)
Benek, John A.
1993-01-01
Computational applications include projects that apply or develop computationally intensive computer programs. Such programs typically require supercomputers to obtain solutions in a timely fashion. This report describes two CSTAR projects involving Computational Fluid Dynamics (CFD) technology. The first, the Parallel Processing Initiative, is a joint development effort and the second, the Chimera Technology Development, is a transfer of government developed technology to American industry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heffelfinger, Grant S.; Martino, Anthony; Rintoul, Mark Daniel
This SAND report provides the technical progress through October 2004 of the Sandia-led project, "Carbon Sequestration in Synechococcus Sp.: From Molecular Machines to Hierarchical Modeling," funded by the DOE Office of Science Genomes to Life Program. Understanding, predicting, and perhaps manipulating carbon fixation in the oceans has long been a major focus of biological oceanography and has more recently been of interest to a broader audience of scientists and policy makers. It is clear that the oceanic sinks and sources of CO2 are important terms in the global environmental response to anthropogenic atmospheric inputs of CO2 and that oceanic microorganisms play a key role in this response. However, the relationship between this global phenomenon and the biochemical mechanisms of carbon fixation in these microorganisms is poorly understood. In this project, we will investigate the carbon sequestration behavior of Synechococcus Sp., an abundant marine cyanobacterium known to be important to environmental responses to carbon dioxide levels, through experimental and computational methods. This project is a combined experimental and computational effort with emphasis on developing and applying new computational tools and methods. Our experimental effort will provide the biology and data to drive the computational efforts and include significant investment in developing new experimental methods for uncovering protein partners, characterizing protein complexes, and identifying new binding domains. We will also develop and apply new data measurement and statistical methods for analyzing microarray experiments. Computational tools will be essential to our efforts to discover and characterize the function of the molecular machines of Synechococcus. To this end, molecular simulation methods will be coupled with knowledge discovery from diverse biological data sets for high-throughput discovery and characterization of protein-protein complexes. In addition, we will develop a set of novel capabilities for inference of regulatory pathways in microbial genomes across multiple sources of information through the integration of computational and experimental technologies. These capabilities will be applied to Synechococcus regulatory pathways to characterize their interaction map and identify component proteins in these pathways. We will also investigate methods for combining experimental and computational results with visualization and natural language tools to accelerate discovery of regulatory pathways. The ultimate goal of this effort is to develop and apply new experimental and computational methods needed to generate a new level of understanding of how the Synechococcus genome affects carbon fixation at the global scale. Anticipated experimental and computational methods will provide ever-increasing insight about the individual elements and steps in the carbon fixation process; however, relating an organism's genome to its cellular response in the presence of varying environments will require systems biology approaches. Thus a primary goal for this effort is to integrate the genomic data generated from experiments and lower-level simulations with data from the existing body of literature into a whole-cell model. We plan to accomplish this by developing and applying a set of tools for capturing the carbon fixation behavior of Synechococcus at different levels of resolution.
Finally, the explosion of data being produced by high-throughput experiments requires data analysis and models which are more computationally complex, more heterogeneous, and require coupling to ever-increasing amounts of experimentally obtained data in varying formats. These challenges are unprecedented in high performance scientific computing and necessitate the development of a companion computational infrastructure to support this effort. More information about this project, including a copy of the original proposal, can be found at www.genomes-to-life.org. Acknowledgment: We want to gratefully acknowledge the contributions of the GTL Project Team as follows: Grant S. Heffelfinger1*, Anthony Martino2, Andrey Gorin3, Ying Xu10,3, Mark D. Rintoul1, Al Geist3, Matthew Ennis1, Hashimi Al-Hashimi8, Nikita Arnold3, Andrei Borziak3, Bianca Brahamsha6, Andrea Belgrano12, Praveen Chandramohan3, Xin Chen9, Pan Chongle3, Paul Crozier1, PguongAn Dam10, George S. Davidson1, Robert Day3, Jean Loup Faulon2, Damian Gessler12, Arlene Gonzalez2, David Haaland1, William Hart1, Victor Havin3, Tao Jiang9, Howland Jones1, David Jung3, Ramya Krishnamurthy3, Yooli Light2, Shawn Martin1, Rajesh Munavalli3, Vijaya Natarajan3, Victor Olman10, Frank Olken4, Brian Palenik6, Byung Park3, Steven Plimpton1, Diana Roe2, Nagiza Samatova3, Arie Shoshani4, Michael Sinclair1, Alex Slepoy1, Shawn Stevens8, Chris Stork1, Charlie Strauss5, Zhengchang Su10, Edward Thomas1, Jerilyn A. Timlin1, Xiufeng Wan11, HongWei Wu10, Dong Xu11, Gong-Xin Yu3, Grover Yip8, Zhaoduo Zhang2, Erik Zuiderweg8 *Author to whom correspondence should be addressed (gsheffe@sandia.gov) 1. Sandia National Laboratories, Albuquerque, NM 2. Sandia National Laboratories, Livermore, CA 3. Oak Ridge National Laboratory, Oak Ridge, TN 4. Lawrence Berkeley National Laboratory, Berkeley, CA 5. Los Alamos National Laboratory, Los Alamos, NM 6. University of California, San Diego 7. University of Illinois, Urbana/Champaign 8. University of Michigan, Ann Arbor 9. University of California, Riverside 10. University of Georgia, Athens 11. University of Missouri, Columbia 12. National Center for Genome Resources, Santa Fe, NM Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Genomes to Life Project Quarterly Report April 2005.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heffelfinger, Grant S.; Martino, Anthony; Rintoul, Mark Daniel
2006-02-01
This SAND report provides the technical progress through April 2005 of the Sandia-led project, "Carbon Sequestration in Synechococcus Sp.: From Molecular Machines to Hierarchical Modeling," funded by the DOE Office of Science Genomics:GTL Program. Understanding, predicting, and perhaps manipulating carbon fixation in the oceans has long been a major focus of biological oceanography and has more recently been of interest to a broader audience of scientists and policy makers. It is clear that the oceanic sinks and sources of CO2 are important terms in the global environmental response to anthropogenic atmospheric inputs of CO2 and that oceanic microorganisms play a key role in this response. However, the relationship between this global phenomenon and the biochemical mechanisms of carbon fixation in these microorganisms is poorly understood. In this project, we will investigate the carbon sequestration behavior of Synechococcus Sp., an abundant marine cyanobacterium known to be important to environmental responses to carbon dioxide levels, through experimental and computational methods. This project is a combined experimental and computational effort with emphasis on developing and applying new computational tools and methods. Our experimental effort will provide the biology and data to drive the computational efforts and include significant investment in developing new experimental methods for uncovering protein partners, characterizing protein complexes, and identifying new binding domains. We will also develop and apply new data measurement and statistical methods for analyzing microarray experiments. Computational tools will be essential to our efforts to discover and characterize the function of the molecular machines of Synechococcus. To this end, molecular simulation methods will be coupled with knowledge discovery from diverse biological data sets for high-throughput discovery and characterization of protein-protein complexes. In addition, we will develop a set of novel capabilities for inference of regulatory pathways in microbial genomes across multiple sources of information through the integration of computational and experimental technologies. These capabilities will be applied to Synechococcus regulatory pathways to characterize their interaction map and identify component proteins in these pathways. We will also investigate methods for combining experimental and computational results with visualization and natural language tools to accelerate discovery of regulatory pathways. The ultimate goal of this effort is to develop and apply new experimental and computational methods needed to generate a new level of understanding of how the Synechococcus genome affects carbon fixation at the global scale. Anticipated experimental and computational methods will provide ever-increasing insight about the individual elements and steps in the carbon fixation process; however, relating an organism's genome to its cellular response in the presence of varying environments will require systems biology approaches. Thus a primary goal for this effort is to integrate the genomic data generated from experiments and lower-level simulations with data from the existing body of literature into a whole-cell model. We plan to accomplish this by developing and applying a set of tools for capturing the carbon fixation behavior of Synechococcus at different levels of resolution.
Finally, the explosion of data being produced by high-throughput experiments requires data analysis and models which are more computationally complex, more heterogeneous, and require coupling to ever-increasing amounts of experimentally obtained data in varying formats. These challenges are unprecedented in high performance scientific computing and necessitate the development of a companion computational infrastructure to support this effort. More information about this project can be found at www.genomes-to-life.org. Acknowledgment: We want to gratefully acknowledge the contributions of: Grant Heffelfinger1*, Anthony Martino2, Brian Palenik6, Andrey Gorin3, Ying Xu10,3, Mark Daniel Rintoul1, Al Geist3, Matthew Ennis1, with Pratul Agrawal3, Hashim Al-Hashimi8, Andrea Belgrano12, Mike Brown1, Xin Chen9, Paul Crozier1, PguongAn Dam10, Jean-Loup Faulon2, Damian Gessler12, David Haaland1, Victor Havin4, C.F. Huang5, Tao Jiang9, Howland Jones1, David Jung3, Katherine Kang14, Michael Langston15, Shawn Martin1, Shawn Means1, Vijaya Natarajan4, Roy Nielson5, Frank Olken4, Victor Olman10, Ian Paulsen14, Steve Plimpton1, Andreas Reichsteiner5, Nagiza Samatova3, Arie Shoshani4, Michael Sinclair1, Alex Slepoy1, Shawn Stevens8, Charlie Strauss5, Zhengchang Su10, Ed Thomas1, Jerilyn Timlin1, WimVermaas13, Xiufeng Wan11, HongWei Wu10, Dong Xu11, Grover Yip8, Erik Zuiderweg8 *Author to whom correspondence should be addressed (gsheffe@sandia.gov) 1. Sandia National Laboratories, Albuquerque, NM 2. Sandia National Laboratories, Livermore, CA 3. Oak Ridge National Laboratory, Oak Ridge, TN 4. Lawrence Berkeley National Laboratory, Berkeley, CA 5. Los Alamos National Laboratory, Los Alamos, NM 6. University of California, San Diego 7. University of Illinois, Urbana/Champaign 8. University of Michigan, Ann Arbor 9. University of California, Riverside 10. University of Georgia, Athens 11. University of Missouri, Columbia 12. National Center for Genome Resources, Santa Fe, NM 13. Arizona State University 14. The Institute for Genomic Research 15. University of Tennessee Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Computer Augmented Video Education.
ERIC Educational Resources Information Center
Sousa, M. B.
1979-01-01
Describes project CAVE (Computer Augmented Video Education), an ongoing effort at the U.S. Naval Academy to present lecture material on videocassette tape, reinforced by drill and practice through an interactive computer system supported by a 12 channel closed circuit television distribution and production facility. (RAO)
Computer Guided Instructional Design.
ERIC Educational Resources Information Center
Merrill, M. David; Wood, Larry E.
1984-01-01
Describes preliminary efforts to create the Lesson Design System, a computer-guided instructional design system written in Pascal for Apple microcomputers. Its content outline, strategy, display, and online lesson editors correspond roughly to the instructional design phases of content and strategy analysis, display creation, and computer programming…
CAROLINA CENTER FOR COMPUTATIONAL TOXICOLOGY
The Center will advance the field of computational toxicology through the development of new methods and tools, as well as through collaborative efforts. In each Project, new computer-based models will be developed and published that represent the state-of-the-art. The tools p...
Archiving Software Systems: Approaches to Preserve Computational Capabilities
NASA Astrophysics Data System (ADS)
King, T. A.
2014-12-01
A great deal of effort is made to preserve scientific data. Not only is data knowledge, but it is often costly to acquire and is sometimes collected under unique circumstances. Another part of the science enterprise is the development of software to process and analyze the data. Developed software is also a large investment and worthy of preservation. However, the long-term preservation of software presents some challenges. Software often requires a specific technology stack to operate. This can include software, operating system, and hardware dependencies. One past approach to preserving computational capabilities is to maintain ancient hardware long past its typical viability. On an archive horizon of 100 years, this is not feasible. Another approach to preserving computational capabilities is to archive source code. While this can preserve details of the implementation and algorithms, it may not be possible to reproduce the technology stack needed to compile and run the resulting applications. This dilemma has a solution. Technology used to create clouds and process big data can also be used to archive and preserve computational capabilities. We explore how basic hardware, virtual machines, containers, and appropriate metadata can be used to preserve computational capabilities and to archive functional software systems. In conjunction with data archives, this provides scientists with both the data and the capability to reproduce the processing and analysis used to generate past scientific results.
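One way to make the container-plus-metadata approach concrete is to store a small machine-readable record next to each archived image describing the technology stack needed to run it. The sketch below is a hypothetical illustration; the field names and layout are assumptions, not an established archive schema.

```python
import hashlib
import json
from pathlib import Path

def write_archive_record(image_path, source_url, stack, entrypoint):
    """Write a minimal preservation record alongside a container/VM image.
    All field names are illustrative, not a standard metadata schema."""
    image = Path(image_path)
    record = {
        "image_file": image.name,
        "sha256": hashlib.sha256(image.read_bytes()).hexdigest(),
        "source_code": source_url,
        "technology_stack": stack,   # OS, runtime, and library versions needed to run
        "entrypoint": entrypoint,    # command that reproduces the original processing
    }
    image.with_name(image.name + ".archive.json").write_text(json.dumps(record, indent=2))
    return record

# Example usage (hypothetical paths and versions):
# write_archive_record("pipeline.tar", "https://example.org/repo.git",
#                      {"os": "Ubuntu 20.04", "python": "3.8", "numpy": "1.21"},
#                      "python run_analysis.py input.dat")
```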
Afshar, Majid; Press, Valerie G; Robison, Rachel G; Kho, Abel N; Bandi, Sindhura; Biswas, Ashvini; Avila, Pedro C; Kumar, Harsha Vardhan Madan; Yu, Byung; Naureckas, Edward T; Nyenhuis, Sharmilee M; Codispoti, Christopher D
2017-10-13
Comprehensive, rapid, and accurate identification of patients with asthma for clinical care and engagement in research efforts is needed. The original development and validation of a computable phenotype for asthma case identification occurred at a single institution in Chicago and demonstrated excellent test characteristics. However, its application in a diverse payer mix, across different health systems and multiple electronic health record vendors, and in both children and adults was not examined. The objective of this study is to externally validate the computable phenotype across diverse Chicago institutions to accurately identify pediatric and adult patients with asthma. A cohort of 900 asthma and control patients was identified from the electronic health record between January 1, 2012 and November 30, 2014. Two physicians at each site independently reviewed the patient chart to annotate cases. The inter-observer reliability between the physician reviewers had a κ-coefficient of 0.95 (95% CI 0.93-0.97). The accuracy, sensitivity, specificity, negative predictive value, and positive predictive value of the computable phenotype were all above 94% in the full cohort. The excellent positive and negative predictive values in this multi-center external validation study establish a useful tool to identify asthma cases in the electronic health record for research and care. This computable phenotype could be used in large-scale comparative-effectiveness trials.
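The reported test characteristics follow from the standard 2x2 comparison of the computable phenotype against physician chart review. The short sketch below shows that arithmetic with hypothetical cell counts, since the study's actual confusion matrix is not given in this abstract.

```python
def test_characteristics(tp, fp, fn, tn):
    """Standard validation metrics for a computable phenotype versus chart review."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for a 900-patient cohort (not the study's actual cells)
print(test_characteristics(tp=430, fp=15, fn=20, tn=435))
```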
Langley's Computational Efforts in Sonic-Boom Softening of the Boeing HSCT
NASA Technical Reports Server (NTRS)
Fouladi, Kamran
1999-01-01
NASA Langley's computational efforts in the sonic-boom softening of the Boeing high-speed civil transport are discussed in this paper. In these efforts, an optimization process using a higher order Euler method for analysis was employed to reduce the sonic boom of a baseline configuration through fuselage camber and wing dihedral modifications. Fuselage modifications did not provide any improvements, but the dihedral modifications were shown to be an important tool for the softening process. The study also included aerodynamic and sonic-boom analyses of the baseline and some of the proposed "softened" configurations. Comparisons of two Euler methodologies and two propagation programs for sonic-boom predictions are also discussed in the present paper.
NASA Technical Reports Server (NTRS)
Gardner, Kevin D.; Liu, Jong-Shang; Murthy, Durbha V.; Kruse, Marlin J.; James, Darrell
1999-01-01
AlliedSignal Engines, in cooperation with NASA GRC (National Aeronautics and Space Administration Glenn Research Center), completed an evaluation of recently-developed aeroelastic computer codes using test cases from the AlliedSignal Engines fan blisk and turbine databases. Test data included strain gage, performance, and steady-state pressure information obtained for conditions where synchronous or flutter vibratory conditions were found to occur. Aeroelastic codes evaluated included quasi 3-D UNSFLO (MIT Developed/AE Modified, Quasi 3-D Aeroelastic Computer Code), 2-D FREPS (NASA-Developed Forced Response Prediction System Aeroelastic Computer Code), and 3-D TURBO-AE (NASA/Mississippi State University Developed 3-D Aeroelastic Computer Code). Unsteady pressure predictions for the turbine test case were used to evaluate the forced response prediction capabilities of each of the three aeroelastic codes. Additionally, one of the fan flutter cases was evaluated using TURBO-AE. The UNSFLO and FREPS evaluation predictions showed good agreement with the experimental test data trends, but quantitative improvements are needed. UNSFLO over-predicted turbine blade response reductions, while FREPS under-predicted them. The inviscid TURBO-AE turbine analysis predicted no discernible blade response reduction, indicating the necessity of including viscous effects for this test case. For the TURBO-AE fan blisk test case, significant effort was expended getting the viscous version of the code to give converged steady flow solutions for the transonic flow conditions. Once converged, the steady solutions provided an excellent match with test data and the calibrated DAWES (AlliedSignal 3-D Viscous Steady Flow CFD Solver). However, efforts expended establishing quality steady-state solutions prevented exercising the unsteady portion of the TURBO-AE code during the present program. AlliedSignal recommends that unsteady pressure measurement data be obtained for both test cases examined for use in aeroelastic code validation.
Standardized Cardiovascular Data for Clinical Research, Registries, and Patient Care
Anderson, H. Vernon; Weintraub, William S.; Radford, Martha J.; Kremers, Mark S.; Roe, Matthew T.; Shaw, Richard E.; Pinchotti, Dana M.; Tcheng, James E.
2013-01-01
Relatively little attention has been focused on standardization of data exchange in clinical research studies and patient care activities. Both are usually managed locally using separate and generally incompatible data systems at individual hospitals or clinics. In the past decade there have been nascent efforts to create data standards for clinical research and patient care data, and to some extent these are helpful in providing a degree of uniformity. Nevertheless these data standards generally have not been converted into accepted computer-based language structures that could permit reliable data exchange across computer networks. The National Cardiovascular Research Infrastructure (NCRI) project was initiated with a major objective of creating a model framework for standard data exchange in all clinical research, clinical registry, and patient care environments, including all electronic health records. The goal is complete syntactic and semantic interoperability. A Data Standards Workgroup was established to create or identify and then harmonize clinical definitions for a base set of standardized cardiovascular data elements that could be used in this network infrastructure. Recognizing the need for continuity with prior efforts, the Workgroup examined existing data standards sources. A basic set of 353 elements was selected. The NCRI staff then collaborated with the two major technical standards organizations in healthcare, the Clinical Data Interchange Standards Consortium and Health Level 7 International, as well as with staff from the National Cancer Institute Enterprise Vocabulary Services. Modeling and mapping were performed to represent (instantiate) the data elements in appropriate technical computer language structures for endorsement as an accepted data standard for public access and use. Fully implemented, these elements will facilitate clinical research, registry reporting, administrative reporting and regulatory compliance, and patient care. PMID:23500238
NASA Astrophysics Data System (ADS)
Gaultois, Michael W.; Oliynyk, Anton O.; Mar, Arthur; Sparks, Taylor D.; Mulholland, Gregory J.; Meredig, Bryce
2016-05-01
The experimental search for new thermoelectric materials remains largely confined to a limited set of successful chemical and structural families, such as chalcogenides, skutterudites, and Zintl phases. In principle, computational tools such as density functional theory (DFT) offer the possibility of rationally guiding experimental synthesis efforts toward very different chemistries. However, in practice, predicting thermoelectric properties from first principles remains a challenging endeavor [J. Carrete et al., Phys. Rev. X 4, 011019 (2014)], and experimental researchers generally do not directly use computation to drive their own synthesis efforts. To bridge this practical gap between experimental needs and computational tools, we report an open machine learning-based recommendation engine (
The DYNES Instrument: A Description and Overview
NASA Astrophysics Data System (ADS)
Zurawski, Jason; Ball, Robert; Barczyk, Artur; Binkley, Mathew; Boote, Jeff; Boyd, Eric; Brown, Aaron; Brown, Robert; Lehman, Tom; McKee, Shawn; Meekhof, Benjeman; Mughal, Azher; Newman, Harvey; Rozsa, Sandor; Sheldon, Paul; Tackett, Alan; Voicu, Ramiro; Wolff, Stephen; Yang, Xi
2012-12-01
Scientific innovation continues to increase requirements for the computing and networking infrastructures of the world. Collaborative partners, instrumentation, storage, and processing facilities are often geographically and topologically separated, as is the case with LHC virtual organizations. These separations challenge the technology used to interconnect available resources, often delivered by Research and Education (R&E) networking providers, and lead to complications in the overall process of end-to-end data management. Capacity and traffic management are key concerns of R&E network operators; a delicate balance is required to serve both long-lived, high-capacity network flows and more traditional end-user activities. The advent of dynamic circuit services, a technology that enables the creation of variable duration, guaranteed bandwidth networking channels, allows for the efficient use of common network infrastructures. These gains are seen particularly in locations where overall capacity is scarce compared to the (sustained peak) needs of user communities. Related efforts, including those of the LHCOPN [3] operations group and the emerging LHCONE [4] project, may take advantage of available resources by designating specific network activities as a "high priority", allowing reservation of dedicated bandwidth or optimizing for deadline scheduling and predictable delivery patterns. This paper presents the DYNES instrument, an NSF-funded cyberinfrastructure project designed to facilitate end-to-end dynamic circuit services [2]. This combination of hardware and software innovation is being deployed across R&E networks in the United States at selected end-sites located on University Campuses. DYNES is peering with international efforts in other countries using similar solutions, and is increasing the reach of this emerging technology. This global data movement solution could be integrated into computing paradigms such as cloud and grid computing platforms, and through the use of APIs can be integrated into existing data movement software.
A Three Dimensional Parallel Time Accurate Turbopump Simulation Procedure Using Overset Grid Systems
NASA Technical Reports Server (NTRS)
Kiris, Cetin; Chan, William; Kwak, Dochan
2001-01-01
The objective of the current effort is to provide a computational framework for design and analysis of the entire fuel supply system of a liquid rocket engine, including high-fidelity unsteady turbopump flow analysis. This capability is needed to support the design of pump sub-systems for advanced space transportation vehicles that are likely to involve liquid propulsion systems. To date, computational tools for design/analysis of turbopump flows are based on relatively lower fidelity methods. An unsteady, three-dimensional viscous flow analysis tool involving stationary and rotational components for the entire turbopump assembly has not been available for real-world engineering applications. The present effort provides developers with information such as transient flow phenomena at start-up and under non-uniform inflows, and will eventually have an impact on system vibration and structures. In the proposed paper, the progress toward the capability of complete simulation of the turbopump for a liquid rocket engine is reported. The Space Shuttle Main Engine (SSME) turbopump is used as a test case for evaluation of the hybrid MPI/OpenMP and MLP versions of the INS3D code. CAD-to-solution auto-scripting capability is being developed for turbopump applications. The relative motion of the grid systems for the rotor-stator interaction was obtained using overset grid techniques. Unsteady computations for the SSME turbopump, which contains 114 zones with 34.5 million grid points, are carried out on Origin 3000 systems at NASA Ames Research Center. Results from these time-accurate simulations with moving boundary capability will be presented along with the performance of parallel versions of the code.
Lessons Learned in Deploying the World s Largest Scale Lustre File System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dillow, David A; Fuller, Douglas; Wang, Feiyi
2010-01-01
The Spider system at the Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) is the world's largest scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF's diverse computational environment, the project had a number of ambitious goals. To support the workloads of the OLCF's diverse computational platforms, the aggregate performance and storage capacity of Spider exceed those of our previously deployed systems by factors of 6x (240 GB/sec) and 17x (10 petabytes), respectively. Furthermore, Spider supports over 26,000 clients concurrently accessing the file system, which exceeds our previously deployed systems by nearly 4x. In addition to these scalability challenges, moving to a center-wide shared file system required dramatically improved resiliency and fault-tolerance mechanisms. This paper details our efforts in designing, deploying, and operating Spider. Through a phased approach of research and development, prototyping, deployment, and transition to operations, this work has resulted in a number of insights into large-scale parallel file system architectures, from both the design and the operational perspectives. We present in this paper our solutions to issues such as network congestion, performance baselining and evaluation, file system journaling overheads, and high availability in a system with tens of thousands of components. We also discuss areas of continued challenges, such as stressed metadata performance and the need for file system quality of service, along with our efforts to address them. Finally, operational aspects of managing a system of this scale are discussed along with real-world data and observations.
Enhancing Geologic Education in Grades 5-12: Creating Virtual Field Trips
NASA Astrophysics Data System (ADS)
Vitek, J. D.; Gamache, K. R.; Giardino, J. R.; Schroeder, C. E.
2011-12-01
New tools of technology enhance and facilitate the ability to bring the "field experience" into the classroom as part of the effort necessary to turn students onto the geosciences. The real keys are high-speed computers and high-definition cameras with which to capture visual images. Still and video data are easily obtained, as are large and small-scale images from space, available through Google Earth. GPS information provides accurate location data to enhance mapping efforts. One no longer needs to rely on commercial ventures to show students any aspect of the "real" world. The virtual world is a viable replacement. The new cost-effective tools mean everyone can be a producer of information critical to understanding Earth. During the last four summers (2008-2011), Texas teachers have participated in G-Camp, an effort to instill geologic and geomorphic knowledge such that the information will make its way into classrooms. Teachers have acquired thousands of images and developed concepts that are being used to enhance their ability to promote geology in their classrooms. Texas will soon require four years of science at the high-school level, and we believe that geology or Earth science needs to be elevated to the required level of biology, chemistry, and physics. Teachers need to be trained and methodology developed that is exciting to students. After all, everyone on Earth needs to be aware of the hazardous nature of geologic events, not just to pass an exam, but for a lifetime. We use a video, which is a composite of our ventures, to show how data collected during these trips can be used in the classroom. Social media, Facebook, blogs, and email facilitate sharing information such that everyone can learn from each other about the best way to do things. New tools of technology are taking their place in every classroom to take advantage of the skills students bring to the learning environment. Besides, many of these approaches are common to video gaming, and, certainly, education cannot be too far behind.
Petascale Many Body Methods for Complex Correlated Systems
NASA Astrophysics Data System (ADS)
Pruschke, Thomas
2012-02-01
Correlated systems constitute an important class of materials in modern condensed matter physics. Correlations among electrons are at the heart of all ordering phenomena and many intriguing novel aspects, such as quantum phase transitions or topological insulators, observed in a variety of compounds. Yet, theoretically describing these phenomena is still a formidable task, even if one restricts the models used to the smallest possible set of degrees of freedom. Here, modern computer architectures play an essential role, and the joint effort to devise efficient algorithms and implement them on state-of-the-art hardware has become an extremely active field in condensed-matter research. To tackle this task single-handed is quite obviously not possible. The NSF-OISE funded PIRE collaboration "Graduate Education and Research in Petascale Many Body Methods for Complex Correlated Systems" is a successful initiative to bring together leading experts around the world to form a virtual international organization for addressing these emerging challenges and educating the next generation of computational condensed matter physicists. The collaboration includes research groups developing novel theoretical tools to reliably and systematically study correlated solids, experts in efficient computational algorithms needed to solve the emerging equations, and those able to use modern heterogeneous computer architectures to make them working tools for the growing community.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wittwehr, Clemens; Aladjov, Hristo; Ankley, Gerald
Efforts are underway to transform regulatory toxicology and chemical safety assessment from a largely empirical science based on direct observation of apical toxicity outcomes in whole organism toxicity tests to a predictive one in which outcomes and risk are inferred from accumulated mechanistic understanding. The adverse outcome pathway (AOP) framework has emerged as a systematic approach for organizing knowledge that supports such inference. We argue that this systematic organization of knowledge can inform and help direct the design and development of computational prediction models that can further enhance the utility of mechanistic and in silico data for chemical safety assessment. Examples of AOP-informed model development and its application to the assessment of chemicals for skin sensitization and multiple modes of endocrine disruption are provided. The role of problem formulation, not only as a critical phase of risk assessment but also as a guide for both AOP and complementary model development, is described. Finally, a proposal for actively engaging the modeling community in AOP-informed computational model development is made. The contents serve as a vision for how AOPs can be leveraged to facilitate development of computational prediction models needed to support the next generation of chemical safety assessment.
Frame, M.T.; Cotter, G.; Zolly, L.; Little, J.
2002-01-01
Whether your vantage point is that of an office window or a national park, your view undoubtedly encompasses a rich diversity of life forms, all carefully studied or managed by some scientist, resource manager, or planner. A few simple calculations - the number of species, their interrelationships, and the many researchers studying them - and you can easily see the tremendous challenges that the resulting biological data present to the information and computer science communities. Biological information varies in format and content: it may pertain to a particular species or an entire ecosystem, and it can contain land-use characteristics and geospatially referenced information. The complexity and uniqueness of each individual species or ecosystem do not easily lend themselves to today's computer science tools and applications. To address the challenges that the biological enterprise presents, the National Biological Information Infrastructure (NBII) (http://www.nbii.gov) was established in 1993 on the recommendation of the National Research Council (National Research Council 1993). The NBII is designed to address these issues on a national scale and through international partnerships. This paper discusses current information and computer science efforts within the National Biological Information Infrastructure Program, and future computer science research endeavors that are needed to address the ever-growing issues related to our nation's biological concerns. © 2003 by The Haworth Press, Inc. All rights reserved.
Links, Jonathan M; Schwartz, Brian S; Lin, Sen; Kanarek, Norma; Mitrani-Reiser, Judith; Sell, Tara Kirk; Watson, Crystal R; Ward, Doug; Slemp, Cathy; Burhans, Robert; Gill, Kimberly; Igusa, Tak; Zhao, Xilei; Aguirre, Benigno; Trainor, Joseph; Nigg, Joanne; Inglesby, Thomas; Carbone, Eric; Kendra, James M
2018-02-01
Policy-makers and practitioners have a need to assess community resilience in disasters. Prior efforts conflated resilience with community functioning, combined resistance and recovery (the components of resilience), and relied on a static model for what is inherently a dynamic process. We sought to develop linked conceptual and computational models of community functioning and resilience after a disaster. We developed a system dynamics computational model that predicts community functioning after a disaster. The computational model outputted the time course of community functioning before, during, and after a disaster, which was used to calculate resistance, recovery, and resilience for all US counties. The conceptual model explicitly separated resilience from community functioning and identified all key components for each, which were translated into a system dynamics computational model with connections and feedbacks. The components were represented by publicly available measures at the county level. Baseline community functioning, resistance, recovery, and resilience evidenced a range of values and geographic clustering, consistent with hypotheses based on the disaster literature. The work is transparent, motivates ongoing refinements, and identifies areas for improved measurements. After validation, such a model can be used to identify effective investments to enhance community resilience. (Disaster Med Public Health Preparedness. 2018;12:127-137).
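The abstract reports only the outputs of the system dynamics model, not its equations. The sketch below is a minimal stand-in, assuming a step loss of functioning at the disaster, exponential recovery toward baseline, and illustrative definitions of resistance, recovery, and resilience; none of these functional forms or formulas come from the study.

```python
import numpy as np

def simulate_functioning(t, shock_time=10.0, shock_loss=0.4, recovery_rate=0.15):
    """Toy time course of community functioning (0-1): a step loss at the
    disaster followed by exponential recovery toward baseline. This is an
    assumed stand-in for the study's system dynamics model."""
    f = np.ones_like(t)
    after = t >= shock_time
    f[after] = 1.0 - shock_loss * np.exp(-recovery_rate * (t[after] - shock_time))
    return f

def resilience_metrics(t, f, baseline=1.0):
    """Illustrative definitions: resistance = 1 minus the maximum relative
    loss; recovery = time to regain 95% of baseline after the drop;
    resilience = normalized area under the functioning curve."""
    loss = baseline - f.min()
    resistance = 1.0 - loss / baseline
    drop_idx = int(f.argmin())
    recovered = np.where(f >= 0.95 * baseline)[0]
    post = recovered[recovered > drop_idx]
    recovery_time = t[post[0]] - t[drop_idx] if post.size else np.inf
    resilience = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)) / (baseline * (t[-1] - t[0]))
    return resistance, recovery_time, resilience

t = np.linspace(0.0, 120.0, 1201)
f = simulate_functioning(t)
print(resilience_metrics(t, f))
```

In the study itself these quantities are computed per county from a calibrated model with feedbacks; the sketch only shows how resistance and recovery can be read off a simulated functioning curve.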
S3DB core: a framework for RDF generation and management in bioinformatics infrastructures
2010-01-01
Background Biomedical research is set to greatly benefit from the use of semantic web technologies in the design of computational infrastructure. However, beyond well-defined research initiatives, substantial issues of data heterogeneity, source distribution, and privacy currently stand in the way of the personalization of medicine. Results A computational framework for bioinformatic infrastructure was designed to deal with the heterogeneous data sources and the sensitive mixture of public and private data that characterizes the biomedical domain. This framework consists of a logical model built with semantic web tools, coupled with a Markov process that propagates user operator states. An accompanying open-source prototype was developed to meet a series of applications that range from collaborative multi-institution data acquisition efforts to data analysis applications that need to quickly traverse complex data structures. This report describes the two abstractions underlying the S3DB-based infrastructure, logical and numerical, and discusses its generality beyond the immediate confines of existing implementations. Conclusions The emergence of the "web as a computer" requires a formal model for the different functionalities involved in reading and writing to it. The proposed S3DB core model was found to address the design criteria of biomedical computational infrastructure, such as those supporting large-scale multi-investigator research, clinical trials, and molecular epidemiology. PMID:20646315
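The S3DB core model is described only abstractly here. As a minimal illustration of the kind of RDF generation such an infrastructure manages, the sketch below uses the widely available rdflib package with made-up namespaces, resources, and properties; it is not the S3DB API.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespace and identifiers, for illustration only; they are
# not part of the S3DB specification.
EX = Namespace("http://example.org/biomed/")

g = Graph()
g.bind("ex", EX)

sample = EX["sample/001"]
g.add((sample, RDF.type, EX.TumorSample))             # a typed resource
g.add((sample, EX.collectedBy, EX["lab/genomics"]))   # a provenance link
g.add((sample, EX.erbb2Expression, Literal(7.3)))     # a numeric measurement

# Serialize to Turtle, one common RDF exchange format.
print(g.serialize(format="turtle"))
```

A framework like the one described would layer access control and state propagation on top of such triple generation rather than exposing raw graphs directly.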
Overview of NASA/OAST efforts related to manufacturing technology
NASA Technical Reports Server (NTRS)
Saunders, N. T.
1976-01-01
An overview of some of NASA's current efforts related to manufacturing technology and some possible directions for the future are presented. The topics discussed are: computer-aided design, composite structures, and turbine engine components.
Data Network Weather Service Reporting - Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael Frey
2012-08-30
A final report is made of a three-year effort to develop a new forecasting paradigm for computer network performance. This effort was made in coordination with Fermilab's construction of its e-Weather Center.
Synthetic tsunami waveform catalogs with kinematic constraints
NASA Astrophysics Data System (ADS)
Baptista, Maria Ana; Miranda, Jorge Miguel; Matias, Luis; Omira, Rachid
2017-07-01
In this study we present a comprehensive methodology to produce a synthetic tsunami waveform catalogue in the northeast Atlantic, east of the Azores islands. The method uses a synthetic earthquake catalogue compatible with plate kinematic constraints of the area. We use it to assess the tsunami hazard from the transcurrent boundary located between Iberia and the Azores, whose western part is known as the Gloria Fault. This study focuses only on earthquake-generated tsunamis. Moreover, we assume that the time and space distribution of the seismic events is known. To do this, we compute a synthetic earthquake catalogue including all fault parameters needed to characterize the seafloor deformation, covering a time span of 20 000 years, which we consider long enough to ensure the representativeness of earthquake generation on this segment of the plate boundary. The computed time and space rupture distributions are made compatible with global kinematic plate models. We use tsunami empirical Green's functions to efficiently compute the synthetic tsunami waveforms for the dataset of coastal locations, thus providing the basis for tsunami impact characterization. We present the results in the form of offshore wave heights for all coastal points in the dataset. Our results focus on the northeast Atlantic basin, showing that earthquake-induced tsunamis in the transcurrent segment of the Azores-Gibraltar plate boundary pose a minor threat to coastal areas north of Portugal and beyond the Strait of Gibraltar. However, in Morocco, the Azores, and the Madeira islands, we can expect wave heights between 0.6 and 0.8 m, which could lead to precautionary evacuation of coastal areas. The advantages of the method are its easy application to other regions and the low computational effort needed.
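A rough sketch of the empirical Green's function idea follows: the synthetic waveform at a coastal point is a superposition of precomputed unit-source waveforms, scaled by each fault segment's slip and shifted by a rupture-time delay. The array shapes, parameter names, and toy sources below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def synthesize_waveform(unit_waveforms, slips, delays_s, dt):
    """Linear superposition of unit-source tsunami waveforms.

    unit_waveforms : (n_sources, n_samples) sea-surface elevation at one
                     coastal point for unit slip on each source segment
    slips          : (n_sources,) slip of each segment in the scenario
    delays_s       : (n_sources,) rupture-time delay of each segment [s]
    dt             : sampling interval of the waveforms [s]
    """
    n_src, n_samp = unit_waveforms.shape
    out = np.zeros(n_samp)
    for g, slip, delay in zip(unit_waveforms, slips, delays_s):
        shift = int(round(delay / dt))
        if shift < n_samp:
            out[shift:] += slip * g[: n_samp - shift]
    return out

# Toy usage with two fabricated unit-source waveforms.
dt = 10.0
t = np.arange(0, 3600, dt)
g1 = 0.05 * np.exp(-((t - 900) / 300) ** 2)
g2 = 0.03 * np.exp(-((t - 1500) / 300) ** 2)
eta = synthesize_waveform(np.vstack([g1, g2]), slips=np.array([2.0, 1.5]),
                          delays_s=np.array([0.0, 60.0]), dt=dt)
print(float(eta.max()))  # offshore wave-height proxy at this coastal point
```

The computational saving comes from reusing the same unit-source waveforms across every scenario in the catalogue instead of rerunning a propagation model per event.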
Computational Modeling as a Design Tool in Microelectronics Manufacturing
NASA Technical Reports Server (NTRS)
Meyyappan, Meyya; Arnold, James O. (Technical Monitor)
1997-01-01
Plans to introduce pilot lines or fabs for 300 mm processing are in progress. IC technology is simultaneously moving towards 0.25/0.18 micron. The convergence of these two trends places unprecedented, stringent demands on processes and equipment. More than ever, computational modeling is called upon to play a complementary role in equipment and process design. The pace in hardware/process development needs a matching pace in software development: an aggressive move towards developing "virtual reactors" is desirable and essential to reduce design cycles and costs. This goal has three elements: a reactor-scale model, a feature-level model, and a database of physical/chemical properties. With these elements coupled, the complete model should function as a design aid in a CAD environment. This talk describes the various elements. At the reactor level, continuum, DSMC (or particle), and hybrid models will be discussed and compared using examples of plasma and thermal process simulations. In microtopography evolution, approaches such as level set methods compete with conventional geometric models. Regardless of the approach, the reliance on empiricism is to be eliminated through coupling to the reactor model and computational surface science. This coupling poses challenging issues of orders-of-magnitude variation in length and time scales. Finally, database development has fallen behind; the current situation is rapidly aggravated by the ever newer chemistries emerging to meet process metrics. The virtual reactor would be a useless concept without an accompanying reliable database that consists of: thermal reaction pathways and rate constants, electron-molecule cross sections, thermochemical properties, transport properties, and finally, surface data on the interaction of radicals, atoms, and ions with various surfaces. Large-scale computational chemistry efforts are critical, as experiments alone cannot meet database needs due to the difficulties and costs associated with such controlled experiments.
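The abstract mentions level set methods for microtopography evolution without detail. The sketch below advances a 2D level set function under a spatially varying etch rate using a first-order upwind (Godunov-type) update for phi_t + F |grad phi| = 0; the mask geometry, rates, and discretization choices are illustrative assumptions, not tied to any particular reactor or feature model.

```python
import numpy as np

def evolve_level_set(phi, F, dx, dt, n_steps):
    """First-order upwind (Godunov) update for phi_t + F*|grad(phi)| = 0,
    with F >= 0. np.roll wraps at the borders, which is acceptable for this
    toy domain but not for a production feature-scale simulator."""
    for _ in range(n_steps):
        dmx = (phi - np.roll(phi, 1, axis=0)) / dx   # backward difference in x
        dpx = (np.roll(phi, -1, axis=0) - phi) / dx  # forward difference in x
        dmy = (phi - np.roll(phi, 1, axis=1)) / dx   # backward difference in y
        dpy = (np.roll(phi, -1, axis=1) - phi) / dx  # forward difference in y
        grad = np.sqrt(np.maximum(dmx, 0.0) ** 2 + np.minimum(dpx, 0.0) ** 2
                       + np.maximum(dmy, 0.0) ** 2 + np.minimum(dpy, 0.0) ** 2)
        phi = phi - dt * F * grad
    return phi

# Toy setup: material occupies phi > 0 (y > 0); etching advances into the
# material only through a mask opening centered at x = 0.5.
n = 128
dx = 1.0 / n
x, y = np.meshgrid(np.linspace(0.0, 1.0, n), np.linspace(-0.5, 0.5, n),
                   indexing="ij")
phi = y.copy()                                    # zero contour = flat surface
F = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)     # etch rate inside the opening
phi = evolve_level_set(phi, F, dx, dt=0.4 * dx, n_steps=100)
# The etched profile is the zero contour of phi (e.g. contour(phi.T, [0])).
```

In a coupled virtual reactor, the speed field F would come from the reactor-scale model and surface chemistry database rather than being prescribed by hand as it is here.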
Defining Computational Thinking for Mathematics and Science Classrooms
NASA Astrophysics Data System (ADS)
Weintrop, David; Beheshti, Elham; Horn, Michael; Orton, Kai; Jona, Kemi; Trouille, Laura; Wilensky, Uri
2016-02-01
Science and mathematics are becoming computational endeavors. This fact is reflected in the recently released Next Generation Science Standards and the decision to include "computational thinking" as a core scientific practice. With this addition, and the increased presence of computation in mathematics and scientific contexts, a new urgency has come to the challenge of defining computational thinking and providing a theoretical grounding for what form it should take in school science and mathematics classrooms. This paper presents a response to this challenge by proposing a definition of computational thinking for mathematics and science in the form of a taxonomy consisting of four main categories: data practices, modeling and simulation practices, computational problem solving practices, and systems thinking practices. In formulating this taxonomy, we draw on the existing computational thinking literature, interviews with mathematicians and scientists, and exemplary computational thinking instructional materials. This work was undertaken as part of a larger effort to infuse computational thinking into high school science and mathematics curricular materials. In this paper, we argue for the approach of embedding computational thinking in mathematics and science contexts, present the taxonomy, and discuss how we envision the taxonomy being used to bring current educational efforts in line with the increasingly computational nature of modern science and mathematics.
Future Air Transportation System Breakout Series Report
NASA Technical Reports Server (NTRS)
2001-01-01
This presentation discusses the AvSTAR Future System effort: it is critically important; it is an investment in the future; it needs to follow a systems engineering process; and the efforts need to be worked in a worldwide context.
The Effort Paradox: Effort Is Both Costly and Valued.
Inzlicht, Michael; Shenhav, Amitai; Olivola, Christopher Y
2018-04-01
According to prominent models in cognitive psychology, neuroscience, and economics, effort (be it physical or mental) is costly: when given a choice, humans and non-human animals alike tend to avoid effort. Here, we suggest that the opposite is also true and review extensive evidence that effort can also add value. Not only can the same outcomes be more rewarding if we apply more (not less) effort, sometimes we select options precisely because they require effort. Given the increasing recognition of effort's role in motivation, cognitive control, and value-based decision-making, considering this neglected side of effort will not only improve formal computational models, but also provide clues about how to promote sustained mental effort across time. Copyright © 2018 Elsevier Ltd. All rights reserved.
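The review argues that formal models should let effort both discount and add value. A toy version of such a value function, with hypothetical parameter names and functional forms not taken from the paper, might look like this:

```python
def subjective_value(reward, effort, k_cost=0.5, k_bonus=0.3):
    """Toy value function: a classical effort-discounting term (quadratic
    cost) plus an effort-contingent bonus capturing 'effort adds value'.
    Parameter names and forms are illustrative assumptions."""
    cost = k_cost * effort ** 2     # standard effort-discounting term
    bonus = k_bonus * effort        # value added by having exerted effort
    return reward - cost + bonus

# With the bonus term, a moderately effortful option can beat an effortless one.
print(subjective_value(reward=10.0, effort=0.0))   # 10.0
print(subjective_value(reward=10.0, effort=0.5))   # 10.025
```

Fitting whether and when the bonus term is needed would be an empirical question of the kind the review raises, not something the sketch resolves.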
An Assessment of CFD Effectiveness for Vortex Flow Simulation to Meet Preliminary Design Needs
NASA Technical Reports Server (NTRS)
Raj, P.; Ghaffari, F.; Finley, D. B.
2003-01-01
The low-speed flight and transonic maneuvering characteristics of combat air vehicles designed for efficient supersonic flight are significantly affected by the presence of free vortices. At moderate-to-high angles of attack, the flow invariably separates from the leading edges of the swept slender wings, as well as from the forebodies of the air vehicles, and rolls up to form free vortices. The design of military vehicles is heavily driven by the need to simultaneously improve performance and affordability. In order to meet this need, increasing emphasis is being placed on using Modeling & Simulation environments employing the Integrated Product & Process Development (IPPD) concept. The primary focus is on expeditiously providing design teams with high-fidelity data needed to make more informed decisions in the preliminary design stage. Extensive aerodynamic data are needed to support combat air vehicle design. Force and moment data are used to evaluate performance and handling qualities; surface pressures provide inputs for structural design; and flow-field data facilitate system integration. Continuing advances in computational fluid dynamics (CFD) provide an attractive means of generating the desired data in a manner that is responsive to the needs of the preliminary design efforts. The responsiveness is readily characterized as timely delivery of quality data at low cost.
76 FR 28443 - President's National Security Telecommunications Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-17
... Government's use of cloud computing; the Federal Emergency Management Agency's NS/EP communications... Commercial Satellite Mission Assurance; and the way forward for the committee's cloud computing effort. The...
System Award for developing a tool that has had a lasting influence on computing. Project Jupyter evolved from IPython, an effort pioneered by Fernando Pérez.
Predicting Pilot Error in NextGen: Pilot Performance Modeling and Validation Efforts
NASA Technical Reports Server (NTRS)
Wickens, Christopher; Sebok, Angelia; Gore, Brian; Hooey, Becky
2012-01-01
We review 25 articles presenting 5 general classes of computational models to predict pilot error. This more targeted review is placed within the context of the broader review of computational models of pilot cognition and performance, including such aspects as models of situation awareness or pilot-automation interaction. Particular emphasis is placed on the degree of validation of such models against empirical pilot data, and on the relevance of the modeling and validation efforts to NextGen technology and procedures.
2D bifurcations and Newtonian properties of memristive Chua's circuits
NASA Astrophysics Data System (ADS)
Marszalek, W.; Podhaisky, H.
2016-01-01
Two interesting properties of Chua's circuits are presented. First, two-parameter bifurcation diagrams of Chua's oscillatory circuits with memristors are presented. To obtain the various 2D bifurcation images, a substantial numerical effort, possibly with parallel computations, is needed. The numerical algorithm is described first, and its numerical code for 2D bifurcation image creation is available for free download. Several color 2D images and the corresponding 1D greyscale bifurcation diagrams are included. Second, Chua's circuits are linked to Newton's law φ″ = F(t, φ, φ′)/m with φ = flux, constant m > 0, and the force term F(t, φ, φ′) containing memory terms. Finally, the jounce scalar equations for Chua's circuits are also discussed.
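As a minimal stand-in for the kind of computation described, the sketch below sweeps two parameters of the classical (non-memristive) Chua circuit and records a crude complexity measure at each grid point, namely the number of distinct local maxima of x(t) after a transient. The memristive extension, the authors' algorithm, and their downloadable code are not reproduced; even this coarse grid hints at why fine 2D images call for parallel computation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def chua_rhs(t, s, alpha, beta, m0=-1.143, m1=-0.714):
    """Classical (non-memristive) Chua circuit in dimensionless form."""
    x, y, z = s
    fx = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1.0) - abs(x - 1.0))
    return [alpha * (y - x - fx), x - y + z, -beta * y]

def complexity(alpha, beta):
    """Count distinct local maxima of x(t) after a transient: a crude proxy
    for periodic vs. chaotic behaviour at one parameter point."""
    sol = solve_ivp(chua_rhs, (0.0, 300.0), [0.1, 0.0, 0.0],
                    args=(alpha, beta), max_step=0.05)
    x = sol.y[0][sol.t > 150.0]
    peaks = x[1:-1][(x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])]
    return len(np.unique(np.round(peaks, 2)))

# Coarse 2D parameter sweep; a real bifurcation image would use a much finer
# grid and parallel workers, one reason the numerical effort is substantial.
alphas = np.linspace(8.0, 11.0, 7)
betas = np.linspace(12.0, 16.0, 7)
image = np.array([[complexity(a, b) for a in alphas] for b in betas])
print(image)
```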
Study of fault-tolerant software technology
NASA Technical Reports Server (NTRS)
Slivinski, T.; Broglio, C.; Wild, C.; Goldberg, J.; Levitt, K.; Hitt, E.; Webb, J.
1984-01-01
Presented is an overview of the current state of the art of fault-tolerant software and an analysis of quantitative techniques and models developed to assess its impact. It examines research efforts as well as experience gained from commercial application of these techniques. The paper also addresses the computer architecture and design implications on hardware, operating systems and programming languages (including Ada) of using fault-tolerant software in real-time aerospace applications. It concludes that fault-tolerant software has progressed beyond the pure research state. The paper also finds that, although not perfectly matched, newer architectural and language capabilities provide many of the notations and functions needed to effectively and efficiently implement software fault-tolerance.
On the numerical treatment of selected oscillatory evolutionary problems
NASA Astrophysics Data System (ADS)
Cardone, Angelamaria; Conte, Dajana; D'Ambrosio, Raffaele; Paternoster, Beatrice
2017-07-01
We focus on evolutionary problems whose qualitative behaviour is known a priori and can be exploited in order to provide efficient and accurate numerical schemes. For classical numerical methods, which depend on constant coefficients, the required computational effort can be quite heavy, due to the very small stepsizes needed to accurately reproduce the qualitative behaviour of the solution. In these situations, it may be convenient to use special purpose formulae, i.e. non-polynomially fitted formulae based on basis functions adapted to the problem (see [16, 17] and references therein). We show examples of special purpose strategies to solve two families of evolutionary problems exhibiting periodic solutions, i.e. partial differential equations and Volterra integral equations.
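As a minimal illustration of the special-purpose idea (not the specific fitted formulae of [16, 17]), the sketch below compares classical forward Euler with a trivially "fitted" one-step method that propagates the oscillatory test problem y'' + ω²y = 0 exactly through the matrix exponential of one step, so that accuracy does not hinge on a very small stepsize.

```python
import numpy as np
from scipy.linalg import expm

omega = 50.0                                    # fast oscillation frequency
A = np.array([[0.0, 1.0], [-omega**2, 0.0]])    # y' = A y for y'' + omega^2 y = 0
y0 = np.array([1.0, 0.0])
T, h = 1.0, 0.01                                # stepsize too large for classical Euler

def euler(y, n):
    for _ in range(n):
        y = y + h * (A @ y)                     # classical forward Euler step
    return y

def fitted(y, n):
    E = expm(A * h)                             # "fitted" propagator, exact for this problem
    for _ in range(n):
        y = E @ y
    return y

n = int(T / h)
exact = np.array([np.cos(omega * T), -omega * np.sin(omega * T)])
print("Euler error :", np.linalg.norm(euler(y0.copy(), n) - exact))
print("Fitted error:", np.linalg.norm(fitted(y0.copy(), n) - exact))
```

The fitted formulae in the paper play an analogous role for problems whose solutions are only approximately trigonometric, where the exact propagator is not available.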
Construction of In-house Databases in a Corporation
NASA Astrophysics Data System (ADS)
Dezaki, Kyoko; Saeki, Makoto
Rapid progress in information technology has increased the need to strengthen documentation activities in industry. In response, Tokin Corporation has been constructing databases of patent information, technical reports, and other documents accumulated inside the company. Two systems have resulted: TOPICS, an in-house patent information management system, and TOMATIS, a management and technical information system built on personal computers and general-purpose relational database software. These systems aim to compile databases of patent and technological management information generated both internally and externally, with low labor effort and low cost, and to provide comprehensive information company-wide. This paper introduces the outline of these systems and how they are actually used.
Characterizing SOI Wafers By Use Of AOTF-PHI
NASA Technical Reports Server (NTRS)
Cheng, Li-Jen; Li, Guann-Pyng; Zang, Deyu
1995-01-01
Developmental nondestructive method of characterizing layers of silicon-on-insulator (SOI) wafer involves combination of polarimetric hyperspectral imaging by use of acousto-optical tunable filters (AOTF-PHI) and computational resources for extracting pertinent data on SOI wafers from polarimetric hyperspectral images. Offers high spectral resolution and both ease and rapidity of optical-wavelength tuning. Further efforts aim to implement all processing of polarimetric spectral image data in special-purpose hardware for the sake of processing speed. Enables characterization of SOI wafers in real time for online monitoring and adjustment of production. Also accelerates application of AOTF-PHI to other uses in which there is a need for high-resolution spectral imaging, both with and without polarimetry.
Herpetological Monitoring Using a Pitfall Trapping Design in Southern California
Fisher, Robert; Stokes, Drew; Rochester, Carlton; Brehme, Cheryl; Hathaway, Stacie; Case, Ted
2008-01-01
The steps necessary to conduct a pitfall trapping survey for small terrestrial vertebrates are presented. Descriptions of the materials needed and the methods to build trapping equipment from raw materials are discussed. Recommended data collection techniques are given along with suggested data fields. Animal specimen processing procedures, including toe- and scale-clipping, are described for lizards, snakes, frogs, and salamanders. Methods are presented for conducting vegetation surveys that can be used to classify the environment associated with each pitfall trap array. Techniques for data storage and presentation are given based on commonly used computer applications. As with any study, much consideration should be given to the study design and methods before beginning any data collection effort.
Identifying High Potential Well Targets with 3D Seismic and Mineralogy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mellors, R. J.
2015-10-30
Seismic reflection is the primary tool used in petroleum exploration and production, but its use in geothermal exploration is less standard, in part due to cost but also due to the challenges in identifying the highly permeable zones essential for economic hydrothermal systems [e.g. Louie et al., 2011; Majer, 2003]. Newer technology, such as wireless sensors and low-cost high-performance computing, has helped reduce the cost and effort needed to conduct 3D surveys. The second difficulty, identifying permeable zones, has been less tractable so far. Here we report on the use of seismic attributes from a 3D seismic survey to identify and map permeable zones in a hydrothermal area.
The Fermi Unix environment -- Dealing with adolescence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pordes, R.; Nicholls, J.; Wicks, M.
1995-10-01
Fermilab's Computing Division started early in the definition, implementation, and promulgation of a common environment for users across the Laboratory's UNIX platforms and installations. Based on the authors' experience over nearly five years, they discuss the status of the effort, ongoing developments and needs, some analysis of where they could have done better, and identify future directions to allow them to provide better and more complete service to their customers. In particular, with the power of the new PCs making enthusiastic converts of physicists to the PC world, they are faced with the challenge of expanding the paradigm to non-UNIX platforms in a uniform and consistent way.