Software Reuse Methods to Improve Technological Infrastructure for e-Science
NASA Technical Reports Server (NTRS)
Marshall, James J.; Downs, Robert R.; Mattmann, Chris A.
2011-01-01
Social computing has the potential to contribute to scientific research. Ongoing developments in information and communications technology improve capabilities for enabling scientific research, including research fostered by social computing capabilities. The recent emergence of e-Science practices has demonstrated the benefits from improvements in the technological infrastructure, or cyber-infrastructure, that has been developed to support science. Cloud computing is one example of this e-Science trend. Our own work in the area of software reuse offers methods that can be used to improve new technological development, including cloud computing capabilities, to support scientific research practices. In this paper, we focus on software reuse and its potential to contribute to the development and evaluation of information systems and related services designed to support new capabilities for conducting scientific research.
Airborne Cloud Computing Environment (ACCE)
NASA Technical Reports Server (NTRS)
Hardman, Sean; Freeborn, Dana; Crichton, Dan; Law, Emily; Kay-Im, Liz
2011-01-01
Airborne Cloud Computing Environment (ACCE) is JPL's internal investment to improve the return on airborne missions by improving the development performance of the data system and the return on the captured science data. The investment is to develop a common science data system capability for airborne instruments that encompasses the end-to-end lifecycle, covering planning, provisioning of data system capabilities, and support for scientific analysis, in order to improve the quality, cost effectiveness, and capabilities that enable new scientific discovery and research in earth observation.
Improving the Computational Thinking Pedagogical Capabilities of School Teachers
ERIC Educational Resources Information Center
Bower, Matt; Wood, Leigh N.; Lai, Jennifer W. M.; Howe, Cathie; Lister, Raymond; Mason, Raina; Highfield, Kate; Veal, Jennifer
2017-01-01
The idea of computational thinking as a skill and universal competence that every child should possess emerged in the last decade and has been gaining traction ever since. This raises a number of questions, including how to integrate computational thinking into the curriculum, whether teachers have the computational thinking pedagogical capabilities to teach…
SPAR improved structural-fluid dynamic analysis capability
NASA Technical Reports Server (NTRS)
Pearson, M. L.
1985-01-01
The results of a study whose objective was to improve the operation of the SPAR computer code by improving efficiency, user features, and documentation are presented. Additional capability was added to the SPAR arithmetic utility system, including trigonometric functions, numerical integration, interpolation, and matrix combinations. Improvements were made in the EIG processor. A processor was created to compute and store principal stresses in table-format data sets. An additional capability was developed and incorporated into the plot processor which permits plotting directly from table-format data sets. Documentation of all these features is provided in the form of updates to the SPAR users manual.
Hypersonic Experimental and Computational Capability, Improvement and Validation. Volume 2
NASA Technical Reports Server (NTRS)
Muylaert, Jean (Editor); Kumar, Ajay (Editor); Dujarric, Christian (Editor)
1998-01-01
The results of the phase 2 effort conducted under AGARD Working Group 18 on Hypersonic Experimental and Computational Capability, Improvement and Validation are presented in this report. The first volume, published in May 1996, mainly focused on the design methodology, plans and some initial results of experiments that had been conducted to serve as validation benchmarks. The current volume presents the detailed experimental and computational data base developed during this effort.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lingerfelt, Eric J; Endeve, Eirik; Hui, Yawei
Improvements in scientific instrumentation allow imaging at mesoscopic to atomic length scales, many spectroscopic modes, and now--with the rise of multimodal acquisition systems and the associated processing capability--the era of multidimensional, informationally dense data sets has arrived. Technical issues in these combinatorial scientific fields are exacerbated by computational challenges best summarized as a necessity for drastic improvement in the capability to transfer, store, and analyze large volumes of data. The Bellerophon Environment for Analysis of Materials (BEAM) platform provides material scientists the capability to directly leverage the integrated computational and analytical power of High Performance Computing (HPC) to perform scalable data analysis and simulation and manage uploaded data files via an intuitive, cross-platform client user interface. This framework delivers authenticated, "push-button" execution of complex user workflows that deploy data analysis algorithms and computational simulations utilizing compute-and-data cloud infrastructures and HPC environments like Titan at the Oak Ridge Leadership Computing Facility (OLCF).
Advances in computer-aided well-test interpretation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horne, R.N.
1994-07-01
Despite the feeling expressed several times over the past 40 years that well-test analysis had reached its peak development, an examination of recent advances shows continuous expansion in capability, with future improvement likely. The expansion in interpretation capability over the past decade arose mainly from the development of computer-aided techniques, which, although introduced 20 years ago, have come into use only recently. The broad application of computer-aided interpretation originated with the improvement of the methodologies and continued with the expansion in computer access and capability that accompanied the explosive development of the microcomputer industry. This paper focuses on the different pieces of the methodology that combine to constitute a computer-aided interpretation and attempts to compare some of the approaches currently used. Future directions of the approach are also discussed. The separate areas discussed are deconvolution, pressure derivatives, model recognition, nonlinear regression, and confidence intervals.
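The entry above names pressure derivatives among the building blocks of computer-aided well-test interpretation. As a hedged illustration only (not the paper's implementation), the sketch below computes the logarithmic pressure derivative, t*dp/dt = dp/d ln t, for a synthetic semilog drawdown; the function name and the synthetic slope and offset values are assumptions for the example.

```python
import numpy as np

def log_derivative(t, p):
    """Pressure derivative with respect to ln(t), i.e. t * dp/dt,
    estimated with finite differences on the log-spaced time axis."""
    lnt = np.log(t)
    return np.gradient(p, lnt)

# Synthetic infinite-acting radial-flow drawdown: dp ~ m * ln(t) + b (assumed values)
t = np.logspace(-2, 2, 200)          # hours
m, b = 10.0, 50.0                    # semilog slope (psi per ln-cycle) and offset
dp = m * np.log(t) + b               # pressure drop, psi

deriv = log_derivative(t, dp)
# For pure radial flow the log-derivative plateaus at the semilog slope m.
print(deriv[50:150].mean())          # ~10.0
```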
Computer-aided design of large-scale integrated circuits - A concept
NASA Technical Reports Server (NTRS)
Schansman, T. T.
1971-01-01
Circuit design and mask development sequence are improved by using general purpose computer with interactive graphics capability establishing efficient two way communications link between design engineer and system. Interactive graphics capability places design engineer in direct control of circuit development.
Improvements to information management systems simulator
NASA Technical Reports Server (NTRS)
Bilek, R. W.
1972-01-01
The work performed to augment and improve the interactive IMSIM information management simulation model is summarized. With this augmented model, NASA now has even greater capabilities for the simulation of computer system configurations, data processing loads imposed on these configurations, and executive software to control system operations. Through these simulations, NASA has an extremely cost effective capability for the design and analysis of computer-based data management systems.
NASA Technical Reports Server (NTRS)
Carden, H. D.; Mcgehee, J. R.
1978-01-01
Modifications to a multidegree of freedom flexible aircraft take-off and landing analysis (FATOLA) computer program, which improved its simulation capabilities, are discussed, and supplemental instructions for use of the program are included. Sample analytical results which illustrate the capabilities of an added nosewheel steering option indicate consistent behavior of the airplane tracking, attitude, motions, and loads for the landing cases and steering situations which were investigated.
Civil propulsion technology for the next twenty-five years
NASA Technical Reports Server (NTRS)
Rosen, Robert; Facey, John R.
1987-01-01
The next twenty-five years will see major advances in civil propulsion technology that will result in completely new aircraft systems for domestic, international, commuter and high-speed transports. These aircraft will include advanced aerodynamic, structural, and avionic technologies resulting in major new system capabilities and economic improvements. Propulsion technologies will include high-speed turboprops in the near term, very high bypass ratio turbofans, high efficiency small engines and advanced cycles utilizing high temperature materials for high-speed propulsion. Key fundamental enabling technologies include increased temperature capability and advanced design methods. Increased temperature capability will be based on improved composite materials such as metal matrix, intermetallics, ceramics, and carbon/carbon as well as advanced heat transfer techniques. Advanced design methods will make use of advances in internal computational fluid mechanics, reacting flow computation, computational structural mechanics and computational chemistry. The combination of advanced enabling technologies, new propulsion concepts and advanced control approaches will provide major improvements in civil aircraft.
Human and Robotic Space Mission Use Cases for High-Performance Spaceflight Computing
NASA Technical Reports Server (NTRS)
Doyle, Richard; Bergman, Larry; Some, Raphael; Whitaker, William; Powell, Wesley; Johnson, Michael; Goforth, Montgomery; Lowry, Michael
2013-01-01
Spaceflight computing is a key resource in NASA space missions and a core determining factor of spacecraft capability, with ripple effects throughout the spacecraft, end-to-end system, and the mission; it can be aptly viewed as a "technology multiplier" in that advances in onboard computing provide dramatic improvements in flight functions and capabilities across the NASA mission classes, and will enable new flight capabilities and mission scenarios, increasing science and exploration return per mission-dollar.
NASA Technical Reports Server (NTRS)
Blakely, R. L.
1973-01-01
A G189A simulation of the shuttle orbiter EC/LSS was prepared and used to study payload support capabilities. Two master program libraries of the G189A computer program were prepared for the NASA/JSC computer system. Several new component subroutines were added to the G189A program library and many existing subroutines were revised to improve their capabilities. A number of special analyses were performed in support of a NASA/JSC shuttle orbiter EC/LSS payload support capability study.
Applied Operations Research: Augmented Reality in an Industrial Environment
NASA Technical Reports Server (NTRS)
Cole, Stuart K.
2015-01-01
Augmented reality (AR) is the application of computer-generated data or graphics onto a real-world view. Its use provides the operator additional information or a heightened situational awareness. While advancements have been made in automation and diagnostics of high-value critical equipment (HVCE) to improve readiness, reliability and maintenance, the need to assist and support Operations and Maintenance staff persists. AR can improve the human-machine interface, where computer capabilities maximize the human experience and analysis capabilities. NASA operates multiple facilities with complex ground-based HVCE in support of national aeronautics and space exploration, and the need exists to improve operational support and close a gap related to capability sustainment, where key and experienced staff consistently rotate work assignments and reach the end of their terms of service. The initiation of an AR capability to augment and improve human abilities and training experience in the industrial environment requires planning and the establishment of a goal and objectives for the systems and specific applications. This paper explored the use of AR in support of Operations staff in real-time operation of HVCE and its maintenance. The results include the identification of specific goals and objectives and of challenges related to availability and computer system infrastructure.
Optimizing Engineering Tools Using Modern Ground Architectures
2017-12-01
...scientific community. Traditional computing architectures were not capable of processing the data efficiently, or in some cases, could not process the... thesis investigates how these modern computing architectures could be leveraged by industry and academia to improve the performance and capabilities of...
Extensions and improvements on XTRAN3S
NASA Technical Reports Server (NTRS)
Borland, C. J.
1989-01-01
Improvements to the XTRAN3S computer program are summarized. Work on this code, for steady and unsteady aerodynamic and aeroelastic analysis in the transonic flow regime has concentrated on the following areas: (1) Maintenance of the XTRAN3S code, including correction of errors, enhancement of operational capability, and installation on the Cray X-MP system; (2) Extension of the vectorization concepts in XTRAN3S to include additional areas of the code for improved execution speed; (3) Modification of the XTRAN3S algorithm for improved numerical stability for swept, tapered wing cases and improved computational efficiency; and (4) Extension of the wing-only version of XTRAN3S to include pylon and nacelle or external store capability.
Extreme-Scale Computing Project Aims to Advance Precision Oncology | Poster
Two government agencies and five national laboratories are collaborating to develop extremely high-performance computing capabilities that will analyze mountains of research and clinical data to improve scientific understanding of cancer, predict drug response, and improve treatments for patients.
Compute as Fast as the Engineers Can Think! ULTRAFAST COMPUTING TEAM FINAL REPORT
NASA Technical Reports Server (NTRS)
Biedron, R. T.; Mehrotra, P.; Nelson, M. L.; Preston, M. L.; Rehder, J. J.; Rogers, J. L.; Rudy, D. H.; Sobieski, J.; Storaasli, O. O.
1999-01-01
This report documents findings and recommendations by the Ultrafast Computing Team (UCT). In the period 10-12/98, UCT reviewed design case scenarios for a supersonic transport and a reusable launch vehicle to derive computing requirements necessary for support of a design process with efficiency so radically improved that human thought rather than the computer paces the process. Assessment of the present computing capability against the above requirements indicated a need for further improvement in computing speed by several orders of magnitude to reduce time to solution from tens of hours to seconds in major applications. Evaluation of the trends in computer technology revealed a potential to attain the postulated improvement by further increases of single processor performance combined with massively parallel processing in a heterogeneous environment. However, utilization of massively parallel processing to its full capability will require redevelopment of the engineering analysis and optimization methods, including invention of new paradigms. To that end UCT recommends initiation of a new activity at LaRC called Computational Engineering for development of new methods and tools geared to the new computer architectures in disciplines, their coordination, and validation and benefit demonstration through applications.
Evolution of Embedded Processing for Wide Area Surveillance
2014-01-01
Subject terms: embedded processing; high-performance computing; general-purpose graphical processing units (GPGPUs). ...reconnaissance (ISR) mission capabilities. The capabilities these advancements are achieving include the ability to provide persistent all... fighters to support and positively affect their mission. Significant improvements in high-performance computing (HPC) technology make it possible to...
Military clouds: utilization of cloud computing systems at the battlefield
NASA Astrophysics Data System (ADS)
Süleyman, Sarıkürk; Volkan, Karaca; İbrahim, Kocaman; Ahmet, Şirzai
2012-05-01
Cloud computing is known as a novel information technology (IT) concept, which involves facilitated and rapid access to networks, servers, data storage media, applications and services via the Internet with minimal hardware requirements. Use of information systems and technologies on the battlefield is not new. Information superiority is a force multiplier and is crucial to mission success. Recent advances in information systems and technologies provide new means to decision makers and users in order to gain information superiority. These developments in information technologies lead to a new term, known as network centric capability. Similar to network centric capable systems, cloud computing systems are operational today. In the near future, extensive use of military clouds on the battlefield is predicted. Integrating cloud computing logic into network centric applications will increase the flexibility, cost-effectiveness, efficiency and accessibility of network-centric capabilities. In this paper, cloud computing and network centric capability concepts are defined. Some commercial cloud computing products and applications are mentioned. Network centric capable applications are covered. Cloud computing supported battlefield applications are analyzed. The effects of cloud computing systems on network centric capability and on the information domain in future warfare are discussed. Battlefield opportunities and novelties which might be introduced to network centric capability by cloud computing systems are researched. The role of military clouds in future warfare is proposed in this paper. It is concluded that military clouds will be indispensable components of the future battlefield. Military clouds have the potential of improving network centric capabilities, increasing situational awareness on the battlefield and facilitating the settlement of information superiority.
NASA HPCC Technology for Aerospace Analysis and Design
NASA Technical Reports Server (NTRS)
Schulbach, Catherine H.
1999-01-01
The Computational Aerosciences (CAS) Project is part of NASA's High Performance Computing and Communications Program. Its primary goal is to accelerate the availability of high-performance computing technology to the US aerospace community-thus providing the US aerospace community with key tools necessary to reduce design cycle times and increase fidelity in order to improve safety, efficiency and capability of future aerospace vehicles. A complementary goal is to hasten the emergence of a viable commercial market within the aerospace community for the advantage of the domestic computer hardware and software industry. The CAS Project selects representative aerospace problems (especially design) and uses them to focus efforts on advancing aerospace algorithms and applications, systems software, and computing machinery to demonstrate vast improvements in system performance and capability over the life of the program. Recent demonstrations have served to assess the benefits of possible performance improvements while reducing the risk of adopting high-performance computing technology. This talk will discuss past accomplishments in providing technology to the aerospace community, present efforts, and future goals. For example, the times to do full combustor and compressor simulations (of aircraft engines) have been reduced by factors of 320:1 and 400:1 respectively. While this has enabled new capabilities in engine simulation, the goal of an overnight, dynamic, multi-disciplinary, 3-dimensional simulation of an aircraft engine is still years away and will require new generations of high-end technology.
A breakthrough for experiencing and understanding simulated physics
NASA Technical Reports Server (NTRS)
Watson, Val
1988-01-01
The use of computer simulation in physics research is discussed, focusing on improvements to graphic workstations. Simulation capabilities and applications of enhanced visualization tools are outlined. The elements of an ideal computer simulation are presented and the potential for improving various simulation elements is examined. The interface between the human and the computer and simulation models are considered. Recommendations are made for changes in computer simulation practices and applications of simulation technology in education.
Efficient Computation Of Manipulator Inertia Matrix
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1991-01-01
Improved method for computation of manipulator inertia matrix developed, based on concept of spatial inertia of composite rigid body. Required for implementation of advanced dynamic-control schemes as well as dynamic simulation of manipulator motion. Motivated by increasing demand for fast algorithms to provide real-time control and simulation capability and, particularly, need for faster-than-real-time simulation capability, required in many anticipated space teleoperation applications.
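The entry above describes an inertia-matrix computation built on the composite-rigid-body concept. As a hedged, much simpler stand-in (not the improved algorithm of the report), the sketch below assembles the joint-space inertia matrix of a planar two-link arm from link center-of-mass Jacobians; the link parameters and function name are assumptions for the example.

```python
import numpy as np

def inertia_matrix_2r(q, l1, r1, r2, m1, m2, I1, I2):
    """Joint-space inertia matrix M(q) for a planar 2-link arm, built from
    link-COM Jacobians: M = sum_i (m_i Jv_i^T Jv_i + I_i Jw_i^T Jw_i)."""
    q1, q2 = q
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)

    # Linear-velocity Jacobians of the two link centers of mass
    Jv1 = np.array([[-r1 * s1, 0.0],
                    [ r1 * c1, 0.0]])
    Jv2 = np.array([[-l1 * s1 - r2 * s12, -r2 * s12],
                    [ l1 * c1 + r2 * c12,  r2 * c12]])
    # Angular-velocity Jacobians (planar case: scalar rate about z)
    Jw1 = np.array([[1.0, 0.0]])
    Jw2 = np.array([[1.0, 1.0]])

    return (m1 * Jv1.T @ Jv1 + I1 * Jw1.T @ Jw1 +
            m2 * Jv2.T @ Jv2 + I2 * Jw2.T @ Jw2)

# Assumed link lengths, COM offsets, masses, and inertias for illustration only.
M = inertia_matrix_2r([0.3, 0.7], l1=1.0, r1=0.5, r2=0.4,
                      m1=2.0, m2=1.5, I1=0.1, I2=0.05)
print(M)   # symmetric, positive-definite 2x2 matrix
```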
Institute for Computational Mechanics in Propulsion (ICOMP)
NASA Technical Reports Server (NTRS)
Feiler, Charles E. (Compiler)
1993-01-01
The Institute for Computational Mechanics in Propulsion (ICOMP) was established at the NASA Lewis Research Center in Cleveland, Ohio to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. The activities at ICOMP during 1992 are described.
Mobile medical computing driven by the complexity of neurologic diagnosis.
Segal, Michael M
2006-07-01
Medical computing has been split between palm-sized computers optimized for mobility and desktop computers optimized for capability. This split was due to technology too immature to deliver both mobility and capability in the same computer and the lack of medical software that demanded both mobility and capability. Advances in hardware and software are ushering in an era in which fully capable computers will be available ubiquitously. As a result, medical practice, education and publishing will change. Medical practice will be improved by the use of software that not only assists with diagnosis but can do so at the bedside, where the doctor can act immediately upon suggestions such as useful findings to check. Medical education will shift away from a focus on details of unusual diseases and toward a focus on skills of physical examination and using computerized tools. Medical publishing, in contrast, will shift toward greater detail: it will be increasingly important to quantitate the frequency of findings in diseases and their time course since such information can have a major impact clinically when added to decision support software.
An Interactive Version of MULR04 With Enhanced Graphic Capability
ERIC Educational Resources Information Center
Burkholder, Joel H.
1978-01-01
An existing computer program for computing multiple regression analyses is made interactive in order to alleviate core storage requirements. Also, some improvements in the graphics aspects of the program are included. (JKS)
An assessment of future computer system needs for large-scale computation
NASA Technical Reports Server (NTRS)
Lykos, P.; White, J.
1980-01-01
Data ranging from specific computer capability requirements to opinions about the desirability of a national computer facility are summarized. It is concluded that considerable attention should be given to improving the user-machine interface. Otherwise, increased computer power may not improve the overall effectiveness of the machine user. Significant improvement in throughput requires highly concurrent systems plus the willingness of the user community to develop problem solutions for that kind of architecture. An unanticipated result was the expression of need for an on-going cross-disciplinary users group/forum in order to share experiences and to more effectively communicate needs to the manufacturers.
NASA Technical Reports Server (NTRS)
Watson, V. R.
1983-01-01
A personal computer has been used to illustrate physical phenomena and problem solution techniques in engineering classes. According to student evaluations, instruction of concepts was greatly improved through the use of these illustrations. This paper describes the class of phenomena that can be effectively illustrated, the techniques used to create these illustrations, and the techniques used to display the illustrations in regular classrooms and over an instructional TV network. The features of a personal computer required to apply these techniques are listed. The capabilities of some present personal computers are discussed and a forecast of the capabilities of future personal computers is presented.
Extreme-Scale Computing Project Aims to Advance Precision Oncology | FNLCR Staging
Two government agencies and five national laboratories are collaborating to develop extremely high-performance computing capabilities that will analyze mountains of research and clinical data to improve scientific understanding of cancer, predict drug response, and improve treatments for patients.
Human and Robotic Space Mission Use Cases for High-Performance Spaceflight Computing
NASA Technical Reports Server (NTRS)
Some, Raphael; Doyle, Richard; Bergman, Larry; Whitaker, William; Powell, Wesley; Johnson, Michael; Goforth, Montgomery; Lowry, Michael
2013-01-01
Spaceflight computing is a key resource in NASA space missions and a core determining factor of spacecraft capability, with ripple effects throughout the spacecraft, end-to-end system, and mission. Onboard computing can be aptly viewed as a "technology multiplier" in that advances provide direct dramatic improvements in flight functions and capabilities across the NASA mission classes, and enable new flight capabilities and mission scenarios, increasing science and exploration return. Space-qualified computing technology, however, has not advanced significantly in well over ten years and the current state of the practice fails to meet the near- to mid-term needs of NASA missions. Recognizing this gap, the NASA Game Changing Development Program (GCDP), under the auspices of the NASA Space Technology Mission Directorate, commissioned a study on space-based computing needs, looking out 15-20 years. The study resulted in a recommendation to pursue high-performance spaceflight computing (HPSC) for next-generation missions, and a decision to partner with the Air Force Research Lab (AFRL) in this development.
Heterogeneous concurrent computing with exportable services
NASA Technical Reports Server (NTRS)
Sunderam, Vaidy
1995-01-01
Heterogeneous concurrent computing, based on the traditional process-oriented model, is approaching its functionality and performance limits. An alternative paradigm, based on the concept of services, supporting data driven computation, and built on a lightweight process infrastructure, is proposed to enhance the functional capabilities and the operational efficiency of heterogeneous network-based concurrent computing. TPVM is an experimental prototype system supporting exportable services, thread-based computation, and remote memory operations that is built as an extension of and an enhancement to the PVM concurrent computing system. TPVM offers a significantly different computing paradigm for network-based computing, while maintaining a close resemblance to the conventional PVM model in the interest of compatibility and ease of transition. Preliminary experiences have demonstrated that the TPVM framework presents a natural yet powerful concurrent programming interface, while being capable of delivering performance improvements of up to thirty percent.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bryan, Frank; Dennis, John; MacCready, Parker
This project aimed to improve long term global climate simulations by resolving and enhancing the representation of the processes involved in the cycling of freshwater through estuaries and coastal regions. This was a collaborative multi-institution project consisting of physical oceanographers, climate model developers, and computational scientists. It specifically targeted the DOE objectives of advancing simulation and predictive capability of climate models through improvements in resolution and physical process representation. The main computational objectives were: 1. To develop computationally efficient, but physically based, parameterizations of estuary and continental shelf mixing processes for use in an Earth System Model (CESM). 2. To develop a two-way nested regional modeling framework in order to dynamically downscale the climate response of particular coastal ocean regions and to upscale the impact of the regional coastal processes to the global climate in an Earth System Model (CESM). 3. To develop computational infrastructure to enhance the efficiency of data transfer between specific sources and destinations, i.e., a point-to-point communication capability (used in objective 1), within POP, the ocean component of CESM.
Institute for Computational Mechanics in Propulsion (ICOMP)
NASA Technical Reports Server (NTRS)
Feiler, Charles E. (Editor)
1991-01-01
The Institute for Computational Mechanics in Propulsion (ICOMP) is operated jointly by Case Western Reserve University and the NASA Lewis Research Center in Cleveland, Ohio. The purpose of ICOMP is to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. The activities at ICOMP during 1990 are described.
Institute for Computational Mechanics in Propulsion (ICOMP)
NASA Technical Reports Server (NTRS)
1989-01-01
The Institute for Computational Mechanics in Propulsion (ICOMP) is operated jointly by Case Western Reserve University and the NASA Lewis Research Center in Cleveland, Ohio. The purpose of ICOMP is to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. This report describes the activities at ICOMP during 1988.
Institute for Computational Mechanics in Propulsion (ICOMP)
NASA Technical Reports Server (NTRS)
1988-01-01
The Institute for Computational Mechanics in Propulsion (ICOMP) is operated jointly by Case Western Reserve University and the NASA Lewis Research Center in Cleveland, Ohio. The purpose of ICOMP is to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. Described are the activities of ICOMP during 1987.
Institute for Computational Mechanics in Propulsion (ICOMP) fourth annual review, 1989
NASA Technical Reports Server (NTRS)
1990-01-01
The Institute for Computational Mechanics in Propulsion (ICOMP) is operated jointly by Case Western Reserve University and the NASA Lewis Research Center. The purpose of ICOMP is to develop techniques to improve problem solving capabilities in all aspects of computational mechanics related to propulsion. The activities at ICOMP during 1989 are described.
Institute for Computational Mechanics in Propulsion (ICOMP)
NASA Technical Reports Server (NTRS)
Feiler, Charles E. (Editor)
1992-01-01
The Institute for Computational Mechanics in Propulsion (ICOMP) is a combined activity of Case Western Reserve University, Ohio Aerospace Institute (OAI) and NASA Lewis. The purpose of ICOMP is to develop techniques to improve problem solving capabilities in all aspects of computational mechanics related to propulsion. The activities at ICOMP during 1991 are described.
Computer program optimizes design of nuclear radiation shields
NASA Technical Reports Server (NTRS)
Lahti, G. P.
1971-01-01
Computer program, OPEX 2, determines minimum weight, volume, or cost for shields. Program incorporates improved coding, simplified data input, spherical geometry, and an expanded output. Method is capable of altering dose-thickness relationship when a shield layer has been removed.
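OPEX 2 itself optimizes layered shields for weight, volume, or cost. As a hedged, single-material illustration of the underlying dose-thickness relationship only (not the OPEX 2 method), the sketch below sizes a slab for a dose limit using plain exponential attenuation with buildup neglected; all symbols and numbers are assumptions for the example.

```python
import math

def thickness_for_dose(d0, d_limit, mu):
    """Minimum slab thickness t (cm) so that d0 * exp(-mu * t) <= d_limit,
    assuming simple exponential attenuation (buildup factor neglected)."""
    return math.log(d0 / d_limit) / mu

# Assumed numbers, for illustration only.
d0 = 1.0e3       # unshielded dose rate, arbitrary units
d_limit = 1.0    # allowed dose rate at the detector point
mu = 0.5         # linear attenuation coefficient, 1/cm

t = thickness_for_dose(d0, d_limit, mu)
print(f"required thickness ~ {t:.1f} cm")   # ln(1000)/0.5 ~ 13.8 cm
```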
New Toxico-Cheminformatics & Computational Toxicology Initiatives At EPA
EPA’s National Center for Computational Toxicology is building capabilities to support a new paradigm for toxicity screening and prediction. The DSSTox project is improving public access to quality structure-annotated chemical toxicity information in less summarized forms than tr...
Computational Modeling in Concert with Laboratory Studies: Application to B Cell Differentiation
Remediation is expensive, so accurate prediction of dose-response is important to help control costs. Dose response is a function of biological mechanisms. Computational models of these mechanisms improve the efficiency of research and provide the capability for prediction.
CEASAW: A User-Friendly Computer Environment Analysis for the Sawmill Owner
Guillermo Mendoza; William Sprouse; Philip A. Araman; William G. Luppold
1991-01-01
Improved spreadsheet software capabilities have brought optimization to users with little or no background in mathematical programming. Better interface capabilities of spreadsheet models now make it possible to combine optimization models with a spreadsheet system. Sawmill production and inventory systems possess many features that make them suitable application...
Institute for Computational Mechanics in Propulsion (ICOMP)
NASA Technical Reports Server (NTRS)
Feiler, Charles E. (Editor)
1994-01-01
The Institute for Computational Mechanics in Propulsion (ICOMP) is operated by the Ohio Aerospace Institute (OAI) and the NASA Lewis Research Center in Cleveland, Ohio. The purpose of ICOMP is to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. This report describes the accomplishments and activities at ICOMP during 1993.
The Role of Crop Systems Simulation in Agriculture and Environment
USDA-ARS?s Scientific Manuscript database
Over the past 30 to 40 years, simulation of crop systems has advanced from a neophyte science with inadequate computing power into a robust and increasingly accepted science supported by improved software, languages, development tools, and computer capabilities. Crop system simulators contain mathe...
Computational Fluid Dynamics (CFD) simulations provide a number of unique opportunities for expanding and improving capabilities for modeling exposures to environmental pollutants. The US Environmental Protection Agency's National Exposure Research Laboratory (NERL) has been c...
A design of an interface board between a MRC thermistor probe and a personal computer.
DOT National Transportation Integrated Search
2013-09-01
The main purpose of this project was to design and build a prototype of an interface board between an MRC temperature probe (thermistor array) and a personal laptop computer. This interface board replaces and significantly improves the capabilities ...
NASA Technical Reports Server (NTRS)
Rubbert, P. E.
1978-01-01
The commercial airplane builder's viewpoint on the important issues involved in the development of improved computational aerodynamics tools, such as powerful computers optimized for fluid flow problems, is presented. The primary user of computational aerodynamics in a commercial aircraft company is the design engineer, who is concerned with solving practical engineering problems. From his viewpoint, the development of program interfaces and pre- and post-processing capability for new computational methods is just as important as the algorithms and machine architecture. As more and more details of the entire flow field are computed, the visibility of the output data becomes a major problem, which is then doubled when a design capability is added. The user must be able to see, understand, and interpret the results calculated. Enormous costs are expended because of the need to work with programs having only primitive user interfaces.
NASA Technical Reports Server (NTRS)
Gaston, S.; Wertheim, M.; Orourke, J. A.
1973-01-01
Summary, consolidation and analysis of specifications, manufacturing process and test controls, and performance results for OAO-2 and OAO-3 lot 20 Amp-Hr sealed nickel cadmium cells and batteries are reported. Correlation of improvements in control requirements with performance is a key feature. Updates for a cell/battery computer model to improve performance prediction capability are included. Applicability of regression analysis computer techniques to relate process controls to performance is checked.
Additions and improvements to the high energy density physics capabilities in the FLASH code
NASA Astrophysics Data System (ADS)
Lamb, D. Q.; Flocke, N.; Graziani, C.; Tzeferacos, P.; Weide, K.
2016-10-01
FLASH is an open source, finite-volume Eulerian, spatially adaptive radiation magnetohydrodynamics code that has the capabilities to treat a broad range of physical processes. FLASH performs well on a wide range of computer architectures, and has a broad user base. Extensive high energy density physics (HEDP) capabilities have been added to FLASH to make it an open toolset for the academic HEDP community. We summarize these capabilities, emphasizing recent additions and improvements. In particular, we showcase the ability of FLASH to simulate the Faraday Rotation Measure produced by the presence of magnetic fields; and proton radiography, proton self-emission, and Thomson scattering diagnostics with and without the presence of magnetic fields. We also describe several collaborations with the academic HEDP community in which FLASH simulations were used to design and interpret HEDP experiments. This work was supported in part at the University of Chicago by the DOE NNSA ASC through the Argonne Institute for Computing in Science under field work proposal 57789; and the NSF under Grant PHY-0903997.
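One diagnostic mentioned above is the Faraday Rotation Measure. As a standalone, hedged illustration (not FLASH code), the sketch below evaluates the standard rotation-measure line integral in the common astrophysical convention, RM = 0.812 * integral(n_e * B_parallel * dl) in rad/m^2 with n_e in cm^-3, B in microgauss, and dl in parsecs; the line-of-sight samples are assumed values.

```python
import numpy as np

def rotation_measure(n_e, b_par, dl_pc):
    """Faraday rotation measure, RM = 0.812 * sum(n_e * B_par * dl),
    with n_e in cm^-3, B_par in microgauss, and path elements in parsecs;
    result in rad/m^2 (standard astrophysical convention)."""
    return 0.812 * np.sum(n_e * b_par * dl_pc)

# Assumed line-of-sight samples, for illustration only.
n_cells = 100
n_e = np.full(n_cells, 0.03)        # electron density, cm^-3
b_par = np.full(n_cells, 2.0)       # line-of-sight field, microgauss
dl = np.full(n_cells, 10.0)         # path length per cell, pc

print(rotation_measure(n_e, b_par, dl))   # ~48.7 rad/m^2
```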
76 FR 62420 - Statement of Organization, Functions and Delegations of Authority
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-07
... leadership, consultation, training, and management services for HRSA's enterprise computing environment; (2... responsibility with improved security management capabilities and improved alignment of current security... responsible for the organization, management, and administrative functions necessary to carry out the...
Performance upgrades to the MCNP6 burnup capability for large scale depletion calculations
Fensin, M. L.; Galloway, J. D.; James, M. R.
2015-04-11
The first MCNP based inline Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. With the merger of MCNPX and MCNP5, MCNP6 combined the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. The new MCNP6 depletion capability was first showcased at the International Congress for Advancements in Nuclear Power Plants (ICAPP) meeting in 2012. At that conference the new capabilities addressed included the combined distributive and shared memory parallel architecture for the burnup capability, improved memory management, physics enhancements, and new predictability as compared to the H. B. Robinson Benchmark. At Los Alamos National Laboratory, a special purpose cluster named "tebow" was constructed to maximize available RAM per CPU, as well as leveraging swap space with solid state hard drives, to allow larger scale depletion calculations (allowing for significantly more burnable regions than previously examined). As the MCNP6 burnup capability was scaled to larger numbers of burnable regions, a noticeable slowdown was realized. This paper details two specific computational performance strategies for improving calculation speed: (1) retrieving cross sections during transport; and (2) tallying mechanisms specific to burnup in MCNP. To combat this slowdown, new performance upgrades were developed and integrated into MCNP6 1.2.
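The first strategy named above concerns retrieving cross sections during transport. As a generic, hedged illustration of why caching such lookups helps (not MCNP6's actual data structures or API), the sketch below memoizes a per-nuclide, per-temperature cross-section accessor so repeated queries during transport reuse already-loaded data; all names and values are assumptions.

```python
from functools import lru_cache

# Stand-in for an expensive lookup that would otherwise re-read library data
# on every call (names and numbers here are assumptions for illustration only).
def _load_cross_section_table(nuclide, temperature_k):
    print(f"loading {nuclide} at {temperature_k} K ...")   # the expensive step
    return {"capture": 2.7, "fission": 0.0}                # barns, made up

@lru_cache(maxsize=4096)
def cross_sections(nuclide, temperature_k):
    """Cached accessor: repeated queries during particle transport reuse the
    table already loaded for this (nuclide, temperature) pair."""
    return _load_cross_section_table(nuclide, temperature_k)

for _ in range(3):                     # only the first call pays the load cost
    xs = cross_sections("U-238", 900)
print(xs["capture"])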
Institute for Computational Mechanics in Propulsion (ICOMP)
NASA Technical Reports Server (NTRS)
Feiler, Charles E. (Editor)
1995-01-01
The Institute for Computational Mechanics in Propulsion (ICOMP) is operated by the Ohio Aerospace Institute (OAI) and funded under a cooperative agreement by the NASA Lewis Research Center in Cleveland, Ohio. The purpose of ICOMP is to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. This report describes the activities at ICOMP during 1994.
Institute for Computational Mechanics in Propulsion (ICOMP). 10
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr. (Editor); Balog, Karen (Editor); Povinelli, Louis A. (Editor)
1996-01-01
The Institute for Computational Mechanics in Propulsion (ICOMP) is operated by the Ohio Aerospace Institute (OAI) and funded under a cooperative agreement by the NASA Lewis Research Center in Cleveland, Ohio. The purpose of ICOMP is to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. This report describes the activities at ICOMP during 1995.
Institute for Computational Mechanics in Propulsion (ICOMP)
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr. (Editor); Balog, Karen (Editor); Povinelli, Louis A. (Editor)
1997-01-01
The Institute for Computational Mechanics in Propulsion (ICOMP) is operated by the Ohio Aerospace Institute (OAI) and funded under a cooperative agreement by the NASA Lewis Research Center in Cleveland, Ohio. The purpose of ICOMP is to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. This report describes the activities at ICOMP during 1996.
Improved Interactive Medical-Imaging System
NASA Technical Reports Server (NTRS)
Ross, Muriel D.; Twombly, Ian A.; Senger, Steven
2003-01-01
An improved computational-simulation system for interactive medical imaging has been invented. The system displays high-resolution, three-dimensional-appearing images of anatomical objects based on data acquired by such techniques as computed tomography (CT) and magnetic-resonance imaging (MRI). The system enables users to manipulate the data to obtain a variety of views, for example, to display cross sections in specified planes or to rotate images about specified axes. Relative to prior such systems, this system offers enhanced capabilities for synthesizing images of surgical cuts and for collaboration by users at multiple, remote computing sites.
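The entry above describes displaying cross sections in specified planes and rotating images about specified axes. As a hedged, minimal illustration of those two operations on a volumetric dataset (not the invented system itself), the sketch below extracts an axial slice from a synthetic volume and rotates the volume about the z axis with SciPy; the array shape and rotation angle are assumptions.

```python
import numpy as np
from scipy import ndimage

# Synthetic 3-D volume standing in for CT/MRI data (assumed shape and values).
volume = np.random.rand(64, 64, 64).astype(np.float32)

# Cross section in a specified axial plane (here, slice k along the z axis).
k = 32
axial_slice = volume[:, :, k]

# Rotate the volume 30 degrees about the z axis (i.e., within the x-y plane),
# then take the same slice to view the data from the new orientation.
rotated = ndimage.rotate(volume, angle=30.0, axes=(0, 1), reshape=False, order=1)
rotated_slice = rotated[:, :, k]

print(axial_slice.shape, rotated_slice.shape)
```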
Dual-Arm Generalized Compliant Motion With Shared Control
NASA Technical Reports Server (NTRS)
Backes, Paul G.
1994-01-01
Dual-Arm Generalized Compliant Motion (DAGCM) primitive computer program implementing improved unified control scheme for two manipulator arms cooperating in task in which both grasp same object. Provides capabilities for autonomous, teleoperation, and shared control of two robot arms. Unifies cooperative dual-arm control with multi-sensor-based task control and makes complete task-control capability available to higher-level task-planning computer system via large set of input parameters used to describe desired force and position trajectories followed by manipulator arms. Some concepts discussed in "A Generalized-Compliant-Motion Primitive" (NPO-18134).
Research in progress in applied mathematics, numerical analysis, and computer science
NASA Technical Reports Server (NTRS)
1990-01-01
Research conducted at the Institute in Science and Engineering in applied mathematics, numerical analysis, and computer science is summarized. The Institute conducts unclassified basic research in applied mathematics in order to extend and improve problem solving capabilities in science and engineering, particularly in aeronautics and space.
The Electronic Portfolio: Ethical Considerations
ERIC Educational Resources Information Center
McFadden, Joan R.; Saiki, Diana
2005-01-01
Computer technology has become important in Family and Consumer Science (FCS) because it offers improved communication with capabilities such as the Internet that allows for a "more productive workplace" (Jones, 2003, p. 38). Kittross and Gordon (2003) alerted users that as computer technology evolves, there is a need for colleges and universities…
Distributed GPU Computing in GIScience
NASA Astrophysics Data System (ADS)
Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.
2013-12-01
Geoscientists strive to discover potential principles and patterns hidden inside ever-growing Big Data for scientific discoveries. To better achieve this objective, more capable computing resources are required to process, analyze and visualize Big Data (Ferreira et al., 2003; Li et al., 2013). Current CPU-based computing techniques cannot promptly meet the computing challenges caused by the increasing amount of datasets from different domains, such as social media, earth observation, and environmental sensing (Li et al., 2013), while CPU-based computing resources structured as clusters or supercomputers are costly. In the past several years, as GPU-based technology has matured in both capability and performance, GPU-based computing has emerged as a new computing paradigm. Compared to the traditional microprocessor, the modern GPU, as a compelling alternative, offers outstanding parallel processing capability with cost-effectiveness and efficiency (Owens et al., 2008), although it was initially designed for graphical rendering in the visualization pipeline. This presentation reports a distributed GPU computing framework for integrating GPU-based computing within a distributed environment. Within this framework, 1) for each single computer, both GPU-based and CPU-based computing resources can be fully utilized to improve the performance of visualizing and processing Big Data; 2) within a network environment, a variety of computers can be used to build up a virtual supercomputer to support CPU-based and GPU-based computing in a distributed computing environment; 3) GPUs, as specific graphics-targeted devices, are used to greatly improve the rendering efficiency in distributed geo-visualization, especially for 3D/4D visualization. Key words: Geovisualization, GIScience, Spatiotemporal Studies. References: 1. Ferreira de Oliveira, M. C., & Levkowitz, H. (2003). From visual data exploration to visual data mining: A survey. Visualization and Computer Graphics, IEEE Transactions on, 9(3), 378-394. 2. Li, J., Jiang, Y., Yang, C., Huang, Q., & Rice, M. (2013). Visualizing 3D/4D Environmental Data Using Many-core Graphics Processing Units (GPUs) and Multi-core Central Processing Units (CPUs). Computers & Geosciences, 59(9), 78-89. 3. Owens, J. D., Houston, M., Luebke, D., Green, S., Stone, J. E., & Phillips, J. C. (2008). GPU computing. Proceedings of the IEEE, 96(5), 879-899.
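As a hedged, minimal illustration of the GPU/CPU trade-off discussed in the entry above (not the presented framework), the sketch below runs the same array-based raster operation on a GPU through CuPy's NumPy-compatible API when one is available, and falls back to the CPU otherwise; the synthetic elevation grid and the slope computation are assumptions for the example.

```python
import numpy as np

try:
    import cupy as xp      # GPU arrays; needs an NVIDIA GPU and a CuPy install
    on_gpu = True
except ImportError:        # fall back to the CPU so the sketch still runs
    xp = np
    on_gpu = False

# Synthetic raster standing in for a geospatial grid (assumed data).
elevation = xp.asarray(np.random.rand(4096, 4096).astype(np.float32))

# Simple finite-difference slope magnitude; the same code runs on CPU or GPU.
gx = elevation[1:, 1:] - elevation[:-1, 1:]
gy = elevation[1:, 1:] - elevation[1:, :-1]
slope = xp.sqrt(gx * gx + gy * gy)

mean_slope = float(slope.mean())   # works for both NumPy and CuPy results
print(on_gpu, mean_slope)
```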
Investigating the impact of the cielo cray XE6 architecture on scientific application codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajan, Mahesh; Barrett, Richard; Pedretti, Kevin Thomas Tauke
2010-12-01
Cielo, a Cray XE6, is the Department of Energy NNSA Advanced Simulation and Computing (ASC) campaign's newest capability machine. Rated at 1.37 PFLOPS, it consists of 8,944 dual-socket oct-core AMD Magny-Cours compute nodes, linked using Cray's Gemini interconnect. Its primary mission objective is to enable a suite of the ASC applications implemented using MPI to scale to tens of thousands of cores. Cielo is an evolutionary improvement to a successful architecture previously available to many of our codes, thus enabling a basis for understanding the capabilities of this new architecture. Using three codes strategically important to the ASC campaign, and supplemented with some micro-benchmarks that expose the fundamental capabilities of the XE6, we report on the performance characteristics and capabilities of Cielo.
NASA Astrophysics Data System (ADS)
Kim, Hyun-Sok; Hyun, Min-Sung; Ju, Jae-Wuk; Kim, Young-Sik; Lambregts, Cees; van Rhee, Peter; Kim, Johan; McNamara, Elliott; Tel, Wim; Böcker, Paul; Oh, Nang-Lyeom; Lee, Jun-Hyung
2018-03-01
Computational metrology has been proposed as the way forward to resolve the need for increased metrology density, resulting from extended correction capabilities, without adding actual metrology budget. By exploiting TWINSCAN based metrology information, dense overlay fingerprints for every wafer can be computed. This extended metrology dataset enables new use cases, such as monitoring and control based on fingerprints for every wafer of the lot. This paper gives a detailed description of the approach, discusses the accuracy of the computed fingerprints, and shows results obtained in a DRAM high-volume manufacturing (HVM) environment. An outlook for improvements and extensions is also shared.
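The entry above computes dense per-wafer overlay fingerprints from scanner metrology. As a simplified, hedged illustration (not the actual fingerprint model used in that work), the sketch below fits an assumed 6-parameter linear wafer model (translation, magnification, and rotation terms) to sparse overlay measurements by least squares and evaluates it on a dense grid; the model form, units, and synthetic data are assumptions.

```python
import numpy as np

def fit_linear_fingerprint(x, y, dx, dy):
    """Least-squares fit of a simple 6-parameter wafer model (assumed here):
    dx = tx + mx*x - rx*y,  dy = ty + my*y + ry*x."""
    ones = np.ones_like(x)
    Ax = np.column_stack([ones, x, -y])        # -> tx, mx, rx
    Ay = np.column_stack([ones, y,  x])        # -> ty, my, ry
    px, *_ = np.linalg.lstsq(Ax, dx, rcond=None)
    py, *_ = np.linalg.lstsq(Ay, dy, rcond=None)
    return px, py

def evaluate(px, py, xg, yg):
    """Dense fingerprint evaluated from the fitted parameters."""
    tx, mx, rx = px
    ty, my, ry = py
    return tx + mx * xg - rx * yg, ty + my * yg + ry * xg

# Sparse measured overlay (synthetic, assumed units: mm positions, nm offsets).
rng = np.random.default_rng(0)
x = rng.uniform(-140, 140, 40)
y = rng.uniform(-140, 140, 40)
dx = 1.0 + 0.02 * x - 0.01 * y + rng.normal(0, 0.2, 40)
dy = -0.5 + 0.015 * y + 0.01 * x + rng.normal(0, 0.2, 40)

px, py = fit_linear_fingerprint(x, y, dx, dy)
xg, yg = np.meshgrid(np.linspace(-140, 140, 50), np.linspace(-140, 140, 50))
dxg, dyg = evaluate(px, py, xg, yg)
print(px, py)
```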
Institute for Computational Mechanics in Propulsion (ICOMP)
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr. (Editor); Balog, Karen (Editor); Povinelli, Louis A. (Editor)
1999-01-01
The Institute for Computational Mechanics in Propulsion (ICOMP) was formed to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. ICOMP is operated by the Ohio Aerospace Institute (OAI) and funded via numerous cooperative agreements by the NASA Glenn Research Center in Cleveland, Ohio. This report describes the activities at ICOMP during 1998, the Institutes thirteenth year of operation.
ERIC Educational Resources Information Center
Wang, Chu-Fu; Lin, Chih-Lung; Deng, Jien-Han
2012-01-01
Testing is an important stage of teaching as it can assist teachers in auditing students' learning results. A good test is able to accurately reflect the capability of a learner. Nowadays, Computer-Assisted Testing (CAT) is greatly improving traditional testing, since computers can automatically and quickly compose a proper test sheet to meet user…
Institute for Computational Mechanics in Propulsion (ICOMP)
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr. (Editor); Balog, Karen (Editor); Povinelli, Louis A. (Editor)
2001-01-01
The Institute for Computational Mechanics in Propulsion (ICOMP) was formed to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. ICOMP is operated by the Ohio Aerospace Institute (OAI) and funded via numerous cooperative agreements by the NASA Glenn Research Center in Cleveland, Ohio. This report describes the activities at ICOMP during 1999, the Institute's fourteenth year of operation.
Institute for Computational Mechanics in Propulsion (ICOMP)
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr. (Editor); Balog, Karen (Editor); Povinelli, Louis A. (Editor)
1998-01-01
The Institute for Computational Mechanics in Propulsion (ICOMP) was formed to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. ICOMP is operated by the Ohio Aerospace Institute (OAI) and funded via numerous cooperative agreements by the NASA Lewis Research Center in Cleveland, Ohio. This report describes the activities at ICOMP during 1997, the Institute's twelfth year of operation.
"Software Tools" to Improve Student Writing.
ERIC Educational Resources Information Center
Oates, Rita Haugh
1987-01-01
Reviews several software packages that analyze text readability, check for spelling and style problems, offer desktop publishing capabilities, teach interviewing skills, and teach grammar using a computer game. (SRT)
NASA Technical Reports Server (NTRS)
Mall, G. H.
1983-01-01
Modifications to a multi-degree-of-freedom flexible aircraft take-off and landing analysis (FATOLA) computer program, including a provision for actively controlled landing gears to expand the program's simulation capabilities, are presented. Supplemental instructions for preparation of data and for use of the modified program are included.
Additions and improvements to the high energy density physics capabilities in the FLASH code
NASA Astrophysics Data System (ADS)
Lamb, D.; Bogale, A.; Feister, S.; Flocke, N.; Graziani, C.; Khiar, B.; Laune, J.; Tzeferacos, P.; Walker, C.; Weide, K.
2017-10-01
FLASH is an open-source, finite-volume Eulerian, spatially-adaptive radiation magnetohydrodynamics code that has the capabilities to treat a broad range of physical processes. FLASH performs well on a wide range of computer architectures, and has a broad user base. Extensive high energy density physics (HEDP) capabilities exist in FLASH, which make it a powerful open toolset for the academic HEDP community. We summarize these capabilities, emphasizing recent additions and improvements. We describe several non-ideal MHD capabilities that are being added to FLASH, including the Hall and Nernst effects, implicit resistivity, and a circuit model, which will allow modeling of Z-pinch experiments. We showcase the ability of FLASH to simulate Thomson scattering polarimetry, which measures Faraday rotation due to the presence of magnetic fields, as well as proton radiography, proton self-emission, and Thomson scattering diagnostics. Finally, we describe several collaborations with the academic HEDP community in which FLASH simulations were used to design and interpret HEDP experiments. This work was supported in part at U. Chicago by DOE NNSA ASC through the Argonne Institute for Computing in Science under FWP 57789; DOE NNSA under NLUF Grant DE-NA0002724; DOE SC OFES Grant DE-SC0016566; and NSF Grant PHY-1619573.
DOE Office of Scientific and Technical Information (OSTI.GOV)
James, Conrad D.; Schiess, Adrian B.; Howell, Jamie
2013-10-01
The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing > 10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.
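A quick back-of-the-envelope check of the roughly 10^12 advantage quoted above, using only the numbers given in the abstract:

```python
# Back-of-the-envelope check of the ~1e12 figure quoted in the abstract above.
brain_ops, brain_watts, brain_cm3 = 1e16, 20.0, 1200.0
super_ops, super_watts, super_cm3 = 1e15, 3e6, 1500.0 * 1e6   # 1500 m^3 -> cm^3

brain_density = brain_ops / (brain_watts * brain_cm3)   # ops/s per W per cm^3
super_density = super_ops / (super_watts * super_cm3)

print(brain_density / super_density)   # ~1.9e12, i.e. on the order of 1e12
```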
Status of Computational Aerodynamic Modeling Tools for Aircraft Loss-of-Control
NASA Technical Reports Server (NTRS)
Frink, Neal T.; Murphy, Patrick C.; Atkins, Harold L.; Viken, Sally A.; Petrilli, Justin L.; Gopalarathnam, Ashok; Paul, Ryan C.
2016-01-01
A concerted effort has been underway over the past several years to evolve computational capabilities for modeling aircraft loss-of-control under the NASA Aviation Safety Program. A principal goal has been to develop reliable computational tools for predicting and analyzing the non-linear stability & control characteristics of aircraft near stall boundaries affecting safe flight, and for utilizing those predictions for creating augmented flight simulation models that improve pilot training. Pursuing such an ambitious task with limited resources required the forging of close collaborative relationships with a diverse body of computational aerodynamicists and flight simulation experts to leverage their respective research efforts into the creation of NASA tools to meet this goal. Considerable progress has been made and work remains to be done. This paper summarizes the status of the NASA effort to establish computational capabilities for modeling aircraft loss-of-control and offers recommendations for future work.
Increasing productivity of the McAuto CAD/CAE system by user-specific applications programming
NASA Technical Reports Server (NTRS)
Plotrowski, S. M.; Vu, T. H.
1985-01-01
Significant improvements in the productivity of the McAuto Computer-Aided Design/Computer-Aided Engineering (CAD/CAE) system were achieved by applications programming using the system's own Graphics Interactive Programming language (GRIP) and the interface capabilities with the main computer on which the system resides. The GRIP programs for creating springs, bar charts, finite element model representations and aiding management planning are presented as examples.
NASA Technical Reports Server (NTRS)
1983-01-01
An assessment was made of the impact of developments in computational fluid dynamics (CFD) on the traditional role of aerospace ground test facilities over the next fifteen years. With the improvements in CFD and the more powerful scientific computers projected over this period, it is expected that the flow over a complete aircraft could be computed at a unit cost three orders of magnitude lower than presently possible. Over the same period, improvements in ground test facilities will progress through the application of computational techniques, including CFD, to data acquisition, facility operational efficiency, and simulation of the flight envelope; however, no dramatic change in unit cost is expected, as greater efficiency will be countered by higher energy and labor costs.
NASA Technical Reports Server (NTRS)
Freitas, R. A., Jr. (Editor); Carlson, P. A. (Editor)
1983-01-01
Adoption of an aggressive computer science research and technology program within NASA will: (1) enable new mission capabilities such as autonomous spacecraft, reliability and self-repair, and low-bandwidth intelligent Earth sensing; (2) lower manpower requirements, especially in the areas of Space Shuttle operations, by making fuller use of control center automation, technical support, and internal utilization of state-of-the-art computer techniques; (3) reduce project costs via improved software verification, software engineering, enhanced scientist/engineer productivity, and increased managerial effectiveness; and (4) significantly improve internal operations within NASA with electronic mail, managerial computer aids, an automated bureaucracy and uniform program operating plans.
Northwest Trajectory Analysis Capability: A Platform for Enhancing Computational Biophysics Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, Elena S.; Stephan, Eric G.; Corrigan, Abigail L.
2008-07-30
As computational resources continue to increase, the ability of computational simulations to effectively complement, and in some cases replace, experimentation in scientific exploration also increases. Today, large-scale simulations are recognized as an effective tool for scientific exploration in many disciplines, including chemistry and biology. A natural side effect of this trend has been the need for an increasingly complex analytical environment. In this paper, we describe the Northwest Trajectory Analysis Capability (NTRAC), an analytical software suite developed to enhance the efficiency of computational biophysics analyses. Our strategy is to layer higher-level services and introduce improved tools within the user's familiar environment without preventing researchers from using traditional tools and methods. Our desire is to share these experiences to serve as an example for effectively analyzing data-intensive, large-scale simulation data.
SPAR improved structure/fluid dynamic analysis capability
NASA Technical Reports Server (NTRS)
Oden, J. T.; Pearson, M. L.
1983-01-01
The capability of analyzing a coupled dynamic system of flowing fluid and elastic structure was added to the SPAR computer code. A method developed and adopted for use in SPAR utilizes the existing assumed-stress hybrid plane element in SPAR. An operational mode was incorporated in SPAR which provides the capability for analyzing the flow of a two-dimensional, incompressible, viscous fluid within rigid boundaries. Equations were developed to provide for the eventual analysis of the interaction of such fluids with an elastic solid.
Effects of shock on hypersonic boundary layer stability
NASA Astrophysics Data System (ADS)
Pinna, F.; Rambaud, P.
2013-06-01
The design of hypersonic vehicles requires the estimate of the laminar to turbulent transition location for an accurate sizing of the thermal protection system. Linear stability theory is a fast scientific way to study the problem. Recent improvements in computational capabilities allow computing the flow around a full vehicle instead of using only simplified boundary layer equations. In this paper, the effect of the shock is studied on a mean flow provided by steady Computational Fluid Dynamics (CFD) computations and simplified boundary layer calculations.
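Linear stability results are typically converted into a transition-location estimate with the e^N (N-factor) method; the abstract does not name the criterion used, so the relation below is shown only as the standard companion to linear stability theory, for context:

```latex
% Amplification factor of a disturbance of frequency \omega, integrated from the
% neutral point x_0 along the surface coordinate x, with \alpha_i the imaginary
% part of the streamwise wavenumber from the linear stability eigenvalue problem:
\[
  N(x;\omega) = \int_{x_0}^{x} -\alpha_i(x';\omega)\,\mathrm{d}x' ,
  \qquad
  \max_{\omega} N(x_{\mathrm{tr}};\omega) = N_{\mathrm{crit}}
\]
% Transition is assumed at x_tr, where the envelope over frequencies reaches an
% empirically calibrated N_crit (often quoted near 9-10 for low-disturbance flight).
```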
Impact design methods for ceramic components in gas turbine engines
NASA Technical Reports Server (NTRS)
Song, J.; Cuccio, J.; Kington, H.
1991-01-01
Methods currently under development to design ceramic turbine components with improved impact resistance are presented. Two different modes of impact damage are identified and characterized, i.e., structural damage and local damage. The entire computation is incorporated into the EPIC computer code. Model capability is demonstrated by simulating instrumented plate impact and particle impact tests.
NASA Technical Reports Server (NTRS)
Ledbetter, Kenneth W.
1992-01-01
Four trends in spacecraft flight operations are discussed which will reduce overall program costs. These trends are the use of high-speed, highly reliable data communications systems for distributing operations functions to more convenient and cost-effective sites; the improved capability for remote operation of sensors; a continued rapid increase in memory and processing speed of flight qualified computer chips; and increasingly capable ground-based hardware and software systems, notably those augmented by artificial intelligence functions. Changes reflected by these trends are reviewed starting from the NASA Viking missions of the early 70s, when mission control was conducted at one location using expensive and cumbersome mainframe computers and communications equipment. In the 1980s, powerful desktop computers and modems enabled the Magellan project team to operate the spacecraft remotely. In the 1990s, the Hubble Space Telescope project uses multiple color screens and automated sequencing software on small computers. Given a projection of current capabilities, future control centers will be even more cost-effective.
NASA Astrophysics Data System (ADS)
Ford, Eric B.; Dindar, Saleh; Peters, Jorg
2015-08-01
The realism of astrophysical simulations and statistical analyses of astronomical data are set by the available computational resources. Thus, astronomers and astrophysicists are constantly pushing the limits of computational capabilities. For decades, astronomers benefited from massive improvements in computational power that were driven primarily by increasing clock speeds and required relatively little attention to the details of the computational hardware. For nearly a decade, increases in computational capabilities have come primarily from increasing the degree of parallelism, rather than increasing clock speeds. Further increases in computational capabilities will likely be led by many-core architectures such as Graphical Processing Units (GPUs) and Intel Xeon Phi. Successfully harnessing these new architectures requires significantly more understanding of the hardware architecture, cache hierarchy, compiler capabilities and network characteristics. I will provide an astronomer's overview of the opportunities and challenges provided by modern many-core architectures and elastic cloud computing. The primary goal is to help an astronomical audience understand what types of problems are likely to yield more than order of magnitude speed-ups and which problems are unlikely to parallelize sufficiently efficiently to be worth the development time and/or costs. I will draw on my experience leading a team in developing the Swarm-NG library for parallel integration of large ensembles of small n-body systems on GPUs, as well as several smaller software projects. I will share lessons learned from collaborating with computer scientists, including both technical and soft skills. Finally, I will discuss the challenges of training the next generation of astronomers to be proficient in this new era of high-performance computing, drawing on experience teaching a graduate class on High-Performance Scientific Computing for Astrophysics and organizing a 2014 advanced summer school on Bayesian Computing for Astronomical Data Analysis with support of the Penn State Center for Astrostatistics and Institute for CyberScience.
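As a rough framing of "which problems are likely to yield more than order-of-magnitude speed-ups" (not presented in the abstract, but the usual first check), Amdahl's law bounds the speed-up by the fraction of runtime that remains serial:

```python
def amdahl_speedup(serial_fraction, n_workers):
    """Upper bound on speed-up when part of the runtime cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

for s in (0.5, 0.1, 0.01):           # fraction of runtime that stays serial
    for p in (16, 1024, 10**6):      # CPU cores, GPU threads, "unlimited" parallelism
        print(f"serial={s:<5} workers={p:<8} speedup <= {amdahl_speedup(s, p):8.1f}")
# A code that is 10% serial never exceeds 10x regardless of hardware; order-of-magnitude
# gains require both a small serial fraction and enough parallel work per worker.
```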
Advanced Simulation and Computing Fiscal Year 14 Implementation Plan, Rev. 0.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meisner, Robert; McCoy, Michel; Archer, Bill
2013-09-11
The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced set of resources, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Moreover, ASC’s business model is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools.
System enhancements of Mesoscale Analysis and Space Sensor (MASS) computer system
NASA Technical Reports Server (NTRS)
Hickey, J. S.; Karitani, S.
1985-01-01
The interactive information processing system for the Mesoscale Analysis and Space Sensor (MASS) program is reported. The development and implementation of new spaceborne remote sensing technology to observe and measure atmospheric processes is described. The space measurements and conventional observational data are processed together to gain an improved understanding of the mesoscale structure and dynamical evolution of the atmosphere relative to cloud development and precipitation processes. A Research Computer System consisting of three primary computers (HP-1000F, Perkin-Elmer 3250, and Harris/6) was developed, which provides a wide range of capabilities for interactively processing and displaying large volumes of remote sensing data. The development of a MASS database management and analysis system on the HP-1000F computer, and the extension of these capabilities through integration with the Perkin-Elmer and Harris/6 computers using MSFC's Apple III microcomputer workstations, is described. The objectives are to design hardware enhancements for computer integration and to provide data conversion and transfer between machines.
Computer-assisted photogrammetric mapping systems for geologic studies-A progress report
Pillmore, C.L.; Dueholm, K.S.; Jepsen, H.S.; Schuch, C.H.
1981-01-01
Photogrammetry has played an important role in geologic mapping for many years; however, only recently have attempts been made to automate mapping functions for geology. Computer-assisted photogrammetric mapping systems for geologic studies have been developed and are currently in use in offices of the Geological Survey of Greenland at Copenhagen, Denmark, and the U.S. Geological Survey at Denver, Colorado. Though differing somewhat, the systems are similar in that they integrate Kern PG-2 photogrammetric plotting instruments and small desk-top computers that are programmed to perform special geologic functions and operate flat-bed plotters by means of specially designed hardware and software. A z-drive capability, in which stepping motors control the z-motions of the PG-2 plotters, is an integral part of both systems. This feature enables the computer to automatically position the floating mark on computer-calculated, previously defined geologic planes, such as contacts or the base of coal beds, throughout the stereoscopic model in order to improve the mapping capabilities of the instrument and to aid in correlation and tracing of geologic units. The common goal is to enhance the capabilities of the PG-2 plotter and provide a means by which geologists can make conventional geologic maps more efficiently and explore ways to apply computer technology to geologic studies. © 1981.
Program of Basic Research in Distributed Tactical Decision Making.
1987-08-05
computer-simulated game representing a "space war" battle context were devised and two experiments were conducted to test some of the underlying...assume that advanced communication and computation of ever increasing capabilities will ensure successful group performance simply by improving the...There was a total of 12 subjects, three in each condition. Apparatus: A computer-controlled DTDM environment was developed using a VAX-11/750. The DTDM
GPU-based Parallel Application Design for Emerging Mobile Devices
NASA Astrophysics Data System (ADS)
Gupta, Kshitij
A revolution is underway in the computing world that is causing a fundamental paradigm shift in device capabilities and form-factor, with a move from well-established legacy desktop/laptop computers to mobile devices in varying sizes and shapes. Amongst all the tasks these devices must support, graphics has emerged as the 'killer app' for providing a fluid user interface and high-fidelity game rendering, effectively making the graphics processor (GPU) one of the key components in (present and future) mobile systems. By utilizing the GPU as a general-purpose parallel processor, this dissertation explores the GPU computing design space from an applications standpoint, in the mobile context, by focusing on key challenges presented by these devices---limited compute, memory bandwidth, and stringent power consumption requirements---while improving the overall application efficiency of the increasingly important speech recognition workload for mobile user interaction. We broadly partition trends in GPU computing into four major categories. We analyze hardware and programming model limitations in current-generation GPUs and detail an alternate programming style called Persistent Threads, identify four use case patterns, and propose minimal modifications that would be required for extending native support. We show how by manually extracting data locality and altering the speech recognition pipeline, we are able to achieve significant savings in memory bandwidth while simultaneously reducing the compute burden on GPU-like parallel processors. As we foresee GPU computing to evolve from its current 'co-processor' model into an independent 'applications processor' that is capable of executing complex work independently, we create an alternate application framework that enables the GPU to handle all control-flow dependencies autonomously at run-time while minimizing host involvement to just issuing commands, that facilitates an efficient application implementation. Finally, as compute and communication capabilities of mobile devices improve, we analyze energy implications of processing speech recognition locally (on-chip) and offloading it to servers (in-cloud).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langer, S; Rotman, D; Schwegler, E
The Institutional Computing Executive Group (ICEG) review of FY05-06 Multiprogrammatic and Institutional Computing (M and IC) activities is presented in the attached report. In summary, we find that the M and IC staff does an outstanding job of acquiring and supporting a wide range of institutional computing resources to meet the programmatic and scientific goals of LLNL. The responsiveness and high quality of support given to users and the programs investing in M and IC reflects the dedication and skill of the M and IC staff. M and IC has successfully managed serial capacity, parallel capacity, and capability computing resources. Serial capacity computing supports a wide range of scientific projects which require access to a few high performance processors within a shared memory computer. Parallel capacity computing supports scientific projects that require a moderate number of processors (up to roughly 1000) on a parallel computer. Capability computing supports parallel jobs that push the limits of simulation science. M and IC has worked closely with Stockpile Stewardship, and together they have made LLNL a premier institution for computational and simulation science. Such a standing is vital to the continued success of laboratory science programs and to the recruitment and retention of top scientists. This report provides recommendations to build on M and IC's accomplishments and improve simulation capabilities at LLNL. We recommend that the institution fully fund (1) operation of the atlas cluster purchased in FY06 to support a few large projects; (2) operation of the thunder and zeus clusters to enable 'mid-range' parallel capacity simulations during normal operation and a limited number of large simulations during dedicated application time; (3) operation of the new yana cluster to support a wide range of serial capacity simulations; (4) improvements to the reliability and performance of the Lustre parallel file system; (5) support for the new GDO petabyte-class storage facility on the green network for use in data intensive external collaborations; and (6) continued support for visualization and other methods for analyzing large simulations. We also recommend that M and IC begin planning in FY07 for the next upgrade of its parallel clusters. LLNL investments in M and IC have resulted in a world-class simulation capability leading to innovative science. We thank the LLNL management for its continued support and thank the M and IC staff for its vision and dedicated efforts to make it all happen.
Basic principles of cone beam computed tomography.
Abramovitch, Kenneth; Rice, Dwight D
2014-07-01
At the end of the millennium, cone-beam computed tomography (CBCT) heralded a new dental technology for the next century. Owing to the dramatic and positive impact of CBCT on implant dentistry and orthognathic/orthodontic patient care, additional applications for this technology soon evolved. New software programs were developed to improve the applicability of, and access to, CBCT for dental patients. Improved, rapid, and cost-effective computer technology, combined with the ability of software engineers to develop multiple dental imaging applications for CBCT with broad diagnostic capability, have played a large part in the rapid incorporation of CBCT technology into dentistry.
Wilson, Frederic H.
1989-01-01
Graphics programs on computers can facilitate the compilation and production of geologic maps, including full color maps of publication quality. This paper describes the application of two different programs, GSMAP and ARC/INFO, to the production of a geologic map of the Port Moller and adjacent 1:250,000-scale quadrangles on the Alaska Peninsula. GSMAP was used at first because of easy digitizing on inexpensive computer hardware. Limitations in its editing capability led to transfer of the digital data to ARC/INFO, a Geographic Information System, which has better editing and also added data analysis capability. Although these improved capabilities are accompanied by increased complexity, the availability of ARC/INFO's data analysis capability provides unanticipated advantages. It allows digital map data to be processed as one of multiple data layers for mineral resource assessment. As a result of development of both software packages, it is now easier to apply both software packages to geologic map production. Both systems accelerate the drafting and revision of maps and enhance the compilation process. Additionally, ARC/INFO's analysis capability enhances the geologist's ability to develop answers to questions of interest that were previously difficult or impossible to obtain.
Pre- and postprocessing techniques for determining goodness of computational meshes
NASA Technical Reports Server (NTRS)
Oden, J. Tinsley; Westermann, T.; Bass, J. M.
1993-01-01
Research in error estimation, mesh conditioning, and solution enhancement for finite element, finite difference, and finite volume methods has been incorporated into AUDITOR, a modern, user-friendly code, which operates on 2D and 3D unstructured neutral files to improve the accuracy and reliability of computational results. Residual error estimation capabilities provide local and global estimates of solution error in the energy norm. Higher order results for derived quantities may be extracted from initial solutions. Within the X-MOTIF graphical user interface, extensive visualization capabilities support critical evaluation of results in linear elasticity, steady state heat transfer, and both compressible and incompressible fluid dynamics.
Gpu Implementation of a Viscous Flow Solver on Unstructured Grids
NASA Astrophysics Data System (ADS)
Xu, Tianhao; Chen, Long
2016-06-01
Graphics processing units have gained popularity in scientific computing over the past several years due to their outstanding parallel computing capability. Computational fluid dynamics applications involve large amounts of calculation, so a recent GPU card, whose peak computing performance and memory bandwidth far exceed those of a contemporary high-end CPU, is preferable. We herein focus on the detailed implementation of our GPU-targeted solver for the Reynolds-averaged Navier-Stokes equations, based on the finite-volume method. The solver employs a vertex-centered scheme on unstructured grids so that complex topologies can be handled. Multiple optimizations are carried out to improve memory access performance and kernel utilization. Both steady and unsteady flow simulation cases are carried out using an explicit Runge-Kutta scheme. The GPU-accelerated solver presented in this paper is demonstrated to have competitive advantages over its CPU-targeted counterpart.
Method to predict external store carriage characteristics at transonic speeds
NASA Technical Reports Server (NTRS)
Rosen, Bruce S.
1988-01-01
Development of a computational method for prediction of external store carriage characteristics at transonic speeds is described. The geometric flexibility required for treatment of pylon-mounted stores is achieved by computing finite difference solutions on a five-level embedded grid arrangement. A completely automated grid generation procedure facilitates applications. Store modeling capability consists of bodies of revolution with multiple fore and aft fins. A body-conforming grid improves the accuracy of the computed store body flow field. A nonlinear relaxation scheme developed specifically for modified transonic small disturbance flow equations enhances the method's numerical stability and accuracy. As a result, treatment of lower aspect ratio, more highly swept and tapered wings is possible. A limited supersonic freestream capability is also provided. Pressure, load distribution, and force/moment correlations show good agreement with experimental data for several test cases. A detailed computer program description for the Transonic Store Carriage Loads Prediction (TSCLP) Code is included.
Ground Contact Modeling for the Morpheus Test Vehicle Simulation
NASA Technical Reports Server (NTRS)
Cordova, Luis
2014-01-01
The Morpheus vertical test vehicle is an autonomous robotic lander being developed at Johnson Space Center (JSC) to test hazard detection technology. Because the initial ground contact simulation model was not very realistic, it was decided to improve the model without making it too computationally expensive. The first development cycle added capability to define vehicle attachment points (AP) and to keep track of their states in the lander reference frame (LFRAME). These states are used with a spring damper model to compute an AP contact force. The lateral force is then overwritten, if necessary, by the Coulomb static or kinetic friction force. The second development cycle added capability to use the PolySurface class as the contact surface. The class can load CAD data in STL (Stereo Lithography) format, and use the data to compute line of sight (LOS) intercepts. A polygon frame (PFRAME) is computed from the facet intercept normal and used to convert the AP state to PFRAME. Three flat plane tests validate the transitions from kinetic to static, static to kinetic, and vertical impact. The hazardous terrain test will be used to test for visual reasonableness. The improved model is numerically inexpensive, robust, and produces results that are reasonable.
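The per-attachment-point force logic described above (spring-damper normal force, with the lateral component replaced by Coulomb static or kinetic friction) can be sketched as follows. This is a reconstruction from the abstract for illustration only, not the Morpheus simulation code; the stiffness, damping, friction coefficients, and stick-speed threshold are assumed values.

```python
import numpy as np

def attach_point_contact_force(penetration, penetration_rate, v_lateral, f_lateral_applied,
                               k=5.0e4, c=2.0e3, mu_s=0.8, mu_k=0.6, stick_speed=1.0e-3):
    """Contact force on one attachment point (AP) pressed into a flat surface.

    penetration       : depth of the AP below the surface [m], > 0 when in contact
    penetration_rate  : rate at which the penetration is increasing [m/s]
    v_lateral         : AP velocity tangent to the surface, 2-vector [m/s]
    f_lateral_applied : lateral force applied at the AP by the rest of the vehicle, 2-vector [N]
    Returns [Fx, Fy, Fz] in the surface frame (z normal to the surface).
    """
    if penetration <= 0.0:
        return np.zeros(3)                                        # AP not in contact
    f_normal = max(k * penetration + c * penetration_rate, 0.0)   # spring-damper, no adhesion
    slip_speed = np.linalg.norm(v_lateral)
    if slip_speed < stick_speed and np.linalg.norm(f_lateral_applied) <= mu_s * f_normal:
        f_lat = -np.asarray(f_lateral_applied, dtype=float)       # static friction holds the AP
    else:
        f_lat = -mu_k * f_normal * v_lateral / max(slip_speed, 1e-12)  # kinetic friction opposes sliding
    return np.array([f_lat[0], f_lat[1], f_normal])

# Example: AP 1 cm into the ground, sinking at 5 cm/s, sliding at 0.2 m/s in +x.
print(attach_point_contact_force(0.01, 0.05, np.array([0.2, 0.0]), np.array([30.0, 0.0])))
```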
Towards Scalable Graph Computation on Mobile Devices.
Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng
2014-10-01
Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach.
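The memory-mapping idea at the core of the approach can be illustrated in a few lines. The sketch below uses NumPy's memmap over a binary edge list as a desktop stand-in for the mmap facilities that iOS and Android expose; the file name and the toy graph are made-up examples, and this is not the authors' code.

```python
import numpy as np

# Build a toy binary edge list: one (src, dst) pair of 32-bit integers per edge.
edges = np.array([[0, 1], [0, 2], [1, 2], [2, 3], [3, 0]], dtype=np.int32)
edges.tofile("edges.bin")  # hypothetical file name

# Memory-map the file: the OS pages edges in on demand, so graphs much larger than
# main memory can be scanned without an explicit load step.
emap = np.memmap("edges.bin", dtype=np.int32, mode="r").reshape(-1, 2)

n_vertices = int(emap.max()) + 1
out_degree = np.zeros(n_vertices, dtype=np.int64)
np.add.at(out_degree, emap[:, 0], 1)  # one sequential pass over the mapped edges
print("out-degrees:", out_degree)
```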
Advances in computational design and analysis of airbreathing propulsion systems
NASA Technical Reports Server (NTRS)
Klineberg, John M.
1989-01-01
The development of commercial and military aircraft depends, to a large extent, on engine manufacturers being able to achieve significant increases in propulsion capability through improved component aerodynamics, materials, and structures. The recent history of propulsion has been marked by efforts to develop computational techniques that can speed up the propulsion design process and produce superior designs. The availability of powerful supercomputers, such as the NASA Numerical Aerodynamic Simulator, and the potential for even higher performance offered by parallel computer architectures, have opened the door to the use of multi-dimensional simulations to study complex physical phenomena in propulsion systems that have previously defied analysis or experimental observation. An overview of several NASA Lewis research efforts is provided that are contributing toward the long-range goal of a numerical test-cell for the integrated, multidisciplinary design, analysis, and optimization of propulsion systems. Specific examples in Internal Computational Fluid Mechanics, Computational Structural Mechanics, Computational Materials Science, and High Performance Computing are cited and described in terms of current capabilities, technical challenges, and future research directions.
Software Surface Modeling and Grid Generation Steering Committee
NASA Technical Reports Server (NTRS)
Smith, Robert E. (Editor)
1992-01-01
It is a NASA objective to promote improvements in the capability and efficiency of computational fluid dynamics. Grid generation, the creation of a discrete representation of the solution domain, is an essential part of computational fluid dynamics. However, grid generation about complex boundaries requires sophisticated surface-model descriptions of the boundaries. The surface modeling and the associated computation of surface grids consume an extremely large percentage of the total time required for volume grid generation. Efficient and user friendly software systems for surface modeling and grid generation are critical for computational fluid dynamics to reach its potential. The papers presented here represent the state-of-the-art in software systems for surface modeling and grid generation. Several papers describe improved techniques for grid generation.
Color graphics, interactive processing, and the supercomputer
NASA Technical Reports Server (NTRS)
Smith-Taylor, Rudeen
1987-01-01
The development of a common graphics environment for the NASA Langley Research Center user community and the integration of a supercomputer into this environment is examined. The initial computer hardware, the software graphics packages, and their configurations are described. The addition of improved computer graphics capability to the supercomputer, and the utilization of the graphic software and hardware are discussed. Consideration is given to the interactive processing system which supports the computer in an interactive debugging, processing, and graphics environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, Benjamin S.; Hamilton, Steven P.; Jarrett, Michael G.
This report describes the performance improvements made to the VERA Core Simulator (VERA-CS) during FY2016. The development of the VERA Core Simulator has focused on the capability needed to deplete physical reactors and help solve various problems; this capability required the accurate simulation of many operating cycles of a nuclear power plant. The first section of this report introduces two test problems used to assess the run-time performance of VERA-CS using a source dated February 2016. The next section provides a brief overview of the major modifications made to decrease the computational cost. Following the descriptions of the major improvements, the run-time for each improvement is shown. Conclusions on the work are presented, and further follow-on performance improvements are suggested.
Performance evaluation capabilities for the design of physical systems
NASA Technical Reports Server (NTRS)
Pilkey, W. D.; Wang, B. P.
1972-01-01
The results are presented of a study aimed at developing and formulating a capability for the limiting performance of large steady state systems. The accomplishments reported include: (1) development of a theory of limiting performance of large systems subject to steady state inputs; (2) application and modification of PERFORM, the computational capability for the limiting performance of systems with transient inputs; and (3) demonstration that use of an inherently smooth control force for a limiting performance calculation improves the system identification phase of the design process for physical systems subjected to transient loading.
NASA Technical Reports Server (NTRS)
Chow, Chuen-Yen; Ryan, James S.
1987-01-01
While the zonal grid system of Transonic Navier-Stokes (TNS) provides excellent modeling of complex geometries, improved shock capturing and a higher Mach number range will be required if flows about hypersonic aircraft are to be modeled accurately. A computational fluid dynamics (CFD) code, the Compressible Navier-Stokes (CNS), is under development to combine the required high Mach number capability with the existing TNS geometry capability. One of several candidate flow solvers for inclusion in the CNS is that of F3D. This upwind flow solver promises improved shock capturing, and more accurate hypersonic solutions overall, compared to the solver currently used in TNS.
A Situation Awareness Assistant for Human Deep Space Exploration
NASA Technical Reports Server (NTRS)
Boy, Guy A.; Platt, Donald
2013-01-01
This paper presents the development and testing of a Virtual Camera (VC) system to improve astronaut and mission operations situation awareness while exploring other planetary bodies. In this embodiment, the VC is implemented using a tablet-based computer system to navigate through an interactive database application. It is claimed that the advanced interaction media capability of the VC can improve situation awareness as the distribution of human space exploration roles changes in deep space exploration. The VC is being developed and tested for usability and capability to improve situation awareness. Work completed thus far, as well as what is needed to complete the project, will be described. Planned testing will also be described.
Automatic Data Traffic Control on DSM Architecture
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Jin, Hao-Qiang; Yan, Jerry; Kwak, Dochan (Technical Monitor)
2000-01-01
We study data traffic on distributed shared memory machines and conclude that data placement and grouping improve the performance of scientific codes. We present several methods that users can employ to improve data traffic in their code. We report on the implementation of a tool which detects the code fragments causing data congestion and advises the user on improvements to data routing in these fragments. The capabilities of the tool include deduction of data alignment and affinity from the source code; detection of code constructs having abnormally high cache or TLB misses; and generation of data placement constructs. We demonstrate the capabilities of the tool in experiments with the NAS parallel benchmarks and with a simple computational fluid dynamics application, ARC3D.
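As a small, self-contained illustration of why placement and grouping matter (a single-node cache-locality analogue rather than the tool's DSM-level analysis; the array size and timing harness are arbitrary):

```python
import time
import numpy as np

a = np.zeros((4000, 4000))  # C-ordered: each row is contiguous in memory

def traverse(row_major: bool) -> float:
    t0 = time.perf_counter()
    total = 0.0
    if row_major:
        for i in range(a.shape[0]):
            total += a[i, :].sum()   # unit-stride access, cache friendly
    else:
        for j in range(a.shape[1]):
            total += a[:, j].sum()   # large-stride access, poor spatial locality
    return time.perf_counter() - t0

print(f"row-major traversal   : {traverse(True):.3f} s")
print(f"column-major traversal: {traverse(False):.3f} s  (typically several times slower)")
```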
User's Guide for TOUGH2-MP - A Massively Parallel Version of the TOUGH2 Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Earth Sciences Division; Zhang, Keni; Zhang, Keni
TOUGH2-MP is a massively parallel (MP) version of the TOUGH2 code, designed for computationally efficient parallel simulation of isothermal and nonisothermal flows of multicomponent, multiphase fluids in one, two, and three-dimensional porous and fractured media. In recent years, computational requirements have become increasingly intensive in large or highly nonlinear problems for applications in areas such as radioactive waste disposal, CO2 geological sequestration, environmental assessment and remediation, reservoir engineering, and groundwater hydrology. The primary objective of developing the parallel-simulation capability is to significantly improve the computational performance of the TOUGH2 family of codes. The particular goal for the parallel simulator is to achieve orders-of-magnitude improvement in computational time for models with ever-increasing complexity. TOUGH2-MP is designed to perform parallel simulation on multi-CPU computational platforms. An earlier version of TOUGH2-MP (V1.0) was based on the TOUGH2 Version 1.4 with EOS3, EOS9, and T2R3D modules, a software previously qualified for applications in the Yucca Mountain project, and was designed for execution on CRAY T3E and IBM SP supercomputers. The current version of TOUGH2-MP (V2.0) includes all fluid property modules of the standard version TOUGH2 V2.0. It provides computationally efficient capabilities using supercomputers, Linux clusters, or multi-core PCs, and also offers many user-friendly features. The parallel simulator inherits all process capabilities from V2.0 together with additional capabilities for handling fractured media from V1.4. This report provides a quick starting guide on how to set up and run the TOUGH2-MP program for users with a basic knowledge of running the (standard) version TOUGH2 code. The report also gives a brief technical description of the code, including a discussion of parallel methodology, code structure, as well as mathematical and numerical methods used. To familiarize users with the parallel code, illustrative sample problems are presented.
Smart Payload Development for High Data Rate Instrument Systems
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Norton, Charles D.
2007-01-01
This slide presentation reviews the development of smart payload instrument systems with high data rates. On-board computation has become a bottleneck for advanced science instrument and engineering capabilities. In order to improve the on-board computation capability, smart payloads have been proposed. A smart payload is a localized instrument that can offload extensive computing cycles from the flight processor, simplify the interfaces, and minimize the dependency of the instrument on the flight system. This has been proposed for the Mars Atmospheric Trace Molecule Spectroscopy (MATMOS) mission. The design of this system is discussed, the features of the Virtex-4 are described, and the technical approach is reviewed. The proposed hybrid Field Programmable Gate Array (FPGA) technology has been shown to deliver breakthrough performance by tightly coupling hardware and software. Smart payload designs for instruments such as MATMOS can meet science data return requirements with more competitive use of available on-board resources and can provide algorithm acceleration in hardware, leading to the implementation of better (more advanced) algorithms in on-board systems for improved science data return.
Sorensen, E.G.; Gordon, C.M.
1959-02-10
Improvements in analog computing machines of the class capable of evaluating differential equations, commonly termed differential analyzers, are described. In general form, the analyzer embodies a plurality of basic computer mechanisms for performing integration, multiplication, and addition, and means for directing the result of any one operation to another computer mechanism performing a further operation. In the device, numerical quantities are represented by the rotation of shafts, or the electrical equivalent of shafts.
High Speed Internet Access Using Cellular Infrastructure
2004-09-01
[List-of-figures residue: Figure 50, Initial Connection from Laptop to Mobile Phone via Bluetooth; Figure 51, Mobile Phone Discovered by the Laptop.] ...tailor them to their needs. Other hardware improvements include Infrared and Bluetooth capabilities for external connectivity with a computer, as well...mobile phone can communicate with a computer through a phone's vendor-specific cable, Infrared or Bluetooth. The latter two depend on both devices
NASA Astrophysics Data System (ADS)
Lee, Eun Seok
2000-10-01
Improved aerodynamic performance of a turbine cascade shape can be achieved through an understanding of the flow-field associated with the stator-rotor interaction. In this research, an axial gas turbine airfoil cascade shape is optimized for improved aerodynamic performance by using an unsteady Navier-Stokes solver and a parallel genetic algorithm. The objective of the research is twofold: (1) to develop a computational fluid dynamics code having a faster convergence rate and unsteady flow simulation capabilities, and (2) to optimize a turbine airfoil cascade shape with unsteady passing wakes for improved aerodynamic performance. The computer code solves the Reynolds-averaged Navier-Stokes equations. It is based on the explicit, finite difference, Runge-Kutta time marching scheme and the Diagonalized Alternating Direction Implicit (DADI) scheme, with the Baldwin-Lomax algebraic and k-epsilon turbulence models. Improvements in the code focused on the cascade shape design capability, convergence acceleration, and unsteady formulation. First, the inverse shape design method was implemented in the code to provide the design capability, where a surface transpiration concept was employed as an inverse technique to modify the geometry to satisfy the user-specified pressure distribution on the airfoil surface. Second, an approximation storage multigrid method was implemented as an acceleration technique. Third, the preconditioning method was adopted to speed up the convergence rate in solving low Mach number flows. Finally, the implicit dual time stepping method was incorporated in order to simulate unsteady flow-fields. For validation of the unsteady code, Stokes's second problem and Poiseuille flow were chosen, and the computed results were compared with the analytic solutions. To test the code's ability to capture natural unsteady flow phenomena, vortex shedding past a cylinder and the shock oscillation over a bicircular airfoil were simulated and compared with experiments and other research results. The rotor cascade shape optimization with unsteady passing wakes was performed to obtain improved aerodynamic performance using the unsteady Navier-Stokes solver. Two objective functions were defined as minimization of total pressure loss and maximization of lift, while the mass flow rate was fixed. A parallel genetic algorithm was used as the optimizer, and the penalty method was introduced. The objective functions of the individuals were computed simultaneously on a 32-processor distributed-memory computer. One optimization took about four days.
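The optimization loop described in the last paragraph (a genetic algorithm with a penalty term enforcing the fixed mass flow rate, and individuals evaluated in parallel) can be sketched generically as below. The toy objective and quadratic penalty stand in for the CFD evaluation and are not the author's formulation; the population size, mutation scale, and selection scheme are likewise assumptions.

```python
import numpy as np
from multiprocessing import Pool

def fitness(x):
    """Stand-in for one CFD evaluation: minimize a 'loss' subject to g(x) = 0,
    with the constraint enforced through a quadratic penalty."""
    loss = np.sum((x - 0.3) ** 2)          # surrogate for total pressure loss
    constraint = np.sum(x) - 1.0           # surrogate for the fixed mass flow rate
    return loss + 100.0 * constraint ** 2  # penalty method

def evolve(pop_size=32, n_genes=8, n_gen=50, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0.0, 1.0, size=(pop_size, n_genes))
    with Pool() as pool:                                       # evaluate individuals concurrently
        for _ in range(n_gen):
            fit = np.array(pool.map(fitness, list(pop)))
            parents = pop[np.argsort(fit)[: pop_size // 2]]        # truncation selection
            mates = parents[rng.permutation(len(parents))]
            children = 0.5 * (parents + mates)                     # blend crossover
            children += rng.normal(0.0, 0.05, children.shape)      # mutation
            pop = np.vstack([parents, children])
    return pop[np.argmin([fitness(x) for x in pop])]

if __name__ == "__main__":
    print("best design variables:", np.round(evolve(), 3))
```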
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ritchie, L.T.; Johnson, J.D.; Blond, R.M.
The CRAC2 computer code is a revision of the Calculation of Reactor Accident Consequences computer code, CRAC, developed for the Reactor Safety Study. The CRAC2 computer code incorporates significant modeling improvements in the areas of weather sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is intended to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems.
Navier-Stokes computations for circulation control airfoils
NASA Technical Reports Server (NTRS)
Pulliam, Thomas H.; Jespersen, Dennis C.; Barth, Timothy J.
1987-01-01
Navier-Stokes computations of subsonic to transonic flow past airfoils with augmented lift due to rearward jet blowing over a curved trailing edge are presented. The approach uses a spiral grid topology. Solutions are obtained using a Navier-Stokes code which employs an implicit finite difference method, an algebraic turbulence model, and developments which improve stability, convergence, and accuracy. Results are compared against experiments for no jet blowing and moderate jet pressures and demonstrate the capability to compute these complicated flows.
Navier-Stokes computations for circulation controlled airfoils
NASA Technical Reports Server (NTRS)
Pulliam, T. H.; Jespersen, D. C.; Barth, T. J.
1986-01-01
Navier-Stokes computations of subsonic to transonic flow past airfoils with augmented lift due to rearward jet blowing over a curved trailing edge are presented. The approach uses a spiral grid topology. Solutions are obtained using a Navier-Stokes code which employs an implicit finite difference method, an algebraic turbulence model, and developments which improve stability, convergence, and accuracy. Results are compared against experiments for no jet blowing and moderate jet pressures and demonstrate the capability to compute these complicated flows.
Computational chemistry and cheminformatics: an essay on the future.
Glen, Robert Charles
2012-01-01
Computers have changed the way we do science. Surrounded by a sea of data and with phenomenal computing capacity, the methodology and approach to scientific problems is evolving into a partnership between experiment, theory and data analysis. Given the pace of change of the last twenty-five years, it seems folly to speculate on the future, but along with unpredictable leaps of progress there will be a continuous evolution of capability, which points to opportunities and improvements that will certainly appear as our discipline matures.
Design of on-board parallel computer on nano-satellite
NASA Astrophysics Data System (ADS)
You, Zheng; Tian, Hexiang; Yu, Shijie; Meng, Li
2007-11-01
This paper presents a design for the on-board parallel computer system of a nano-satellite. Driven by the requirements that the nano-satellite have small volume, low weight, low power consumption, and on-board intelligence, the scheme moves away from the traditional single-computer and dual-computer systems in an effort to improve dependability, capability, and intelligence simultaneously. Following an integrated design approach, it adopts a shared-memory parallel computer as the main structure; connects the telemetry, attitude control, and payload systems through an intelligent bus; provides management functions that, in light of the parallel algorithms, handle static tasks and dynamic task scheduling and protect and recover the on-site status; and establishes mechanisms for fault diagnosis, recovery, and system reconfiguration. The result is an on-board parallel computer system with high dependability, capability, and intelligence, flexible management of hardware resources, a capable software system, and good extensibility, consistent with the concept and trend of integrated electronics design.
1982-01-27
[Fragmentary report text: recoverable section headings — Visible; Earth Location, Colocation, and Normalization; Image Analysis (Interactive Capabilities, Examples); Automated Cloud ... — and snippets referring to the Interactive Data Access System (McIDAS), an automated Earth-location procedure, gain settings (Table 5), and interactive image analysis capabilities.]
Benchmarking of Neutron Production of Heavy-Ion Transport Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence
Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.
Benchmarking of Heavy Ion Transport Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence
Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in the design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.
NASA Astrophysics Data System (ADS)
Shinkai, Takeshi; Koshiduka, Tadashi; Mori, Tadashi; Uchii, Toshiyuki; Tanaka, Tsutomu; Ikeda, Hisatoshi
Current zero measurements are performed for 245 kV-50 kA-60 Hz short-line-fault (L90) interruption tests with a self-blast interrupting chamber (double volume system) which has an interrupting capability up to 245 kV-50 kA-50 Hz L90. Lower L90 interruption capability is observed for longer arcing times, although a very high pressure rise is obtained. This may be caused by a higher blowing temperature and lower blowing density at longer arcing times. Interruption criteria and an optimization method for the chamber design are discussed to improve its L90 interruption capability. New chambers are designed for 245 kV-50 kA-60 Hz duty to improve the gas density in the thermal volume at long arcing times, and 245 kV-50 kA-60 Hz L90 interruption tests are performed with the new chamber. The suggested optimization method is an efficient tool for self-blast interrupting chamber design, although further study of computational methods is required to calculate the arc conductance around current zero, as a direct criterion for L90 interruption capability, with higher accuracy.
Off-Gas Adsorption Model Capabilities and Recommendations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyon, Kevin L.; Welty, Amy K.; Law, Jack
2016-03-01
Off-gas treatment is required to reduce emissions from aqueous fuel reprocessing. Evaluating the products of innovative gas adsorption research requires increased computational simulation capability to more effectively transition from fundamental research to operational design. Early modeling efforts produced the Off-Gas SeParation and REcoverY (OSPREY) model that, while efficient in terms of computation time, was of limited value for complex systems. However, the computational and programming lessons learned in development of the initial model were used to develop Discontinuous Galerkin OSPREY (DGOSPREY), a more effective model. Initial comparisons between OSPREY and DGOSPREY show that, while OSPREY does reasonably well at capturing the initial breakthrough time, it displays far too much numerical dispersion to accurately capture the real shape of the breakthrough curves. DGOSPREY is a much better tool as it utilizes a more stable set of numerical methods. In addition, DGOSPREY has shown the capability to capture complex, multispecies adsorption behavior, while OSPREY currently only works for a single adsorbing species. This capability makes DGOSPREY ultimately a more practical tool for real world simulations involving many different gas species. While DGOSPREY has initially performed very well, there is still need for improvement. The current state of DGOSPREY does not include any micro-scale adsorption kinetics and therefore assumes instantaneous adsorption. This is a major source of error in predicting water vapor breakthrough because the kinetics of that adsorption mechanism is particularly slow. However, this deficiency can be remedied by building kinetic kernels into DGOSPREY. Another source of error in DGOSPREY stems from data gaps in single species, such as Kr and Xe, isotherms. Since isotherm data for each gas is currently available at a single temperature, the model is unable to predict adsorption at temperatures outside of the set of data currently available. Thus, in order to improve the predictive capabilities of the model, there is a need for more single-species adsorption isotherms at different temperatures, in addition to extending the model to include adsorption kinetics. This report provides background information about the modeling process and a path forward for further model improvement in terms of accuracy and user interface.
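The numerical dispersion discussed above is easy to reproduce with a toy model. The sketch below advects a step change in inlet concentration through a column with a first-order upwind scheme; it illustrates the artifact only and is not the OSPREY or DGOSPREY formulation, and the grid sizes and Courant number are arbitrary choices.

```python
import numpy as np

def breakthrough(n_cells, courant=0.5, column_length=1.0, velocity=1.0, t_end=1.5):
    """1-D advection of a step inlet concentration with first-order upwind differencing.
    Returns the time history of the outlet concentration (the breakthrough curve)."""
    dx = column_length / n_cells
    dt = courant * dx / velocity
    conc = np.zeros(n_cells)
    times, outlet = [], []
    t = 0.0
    while t < t_end:
        conc[1:] -= courant * (conc[1:] - conc[:-1])  # upwind update of interior cells
        conc[0] -= courant * (conc[0] - 1.0)          # inlet held at concentration 1
        t += dt
        times.append(t)
        outlet.append(conc[-1])
    return np.array(times), np.array(outlet)

# The exact breakthrough is a sharp jump at t = L/v = 1.0; upwind differencing smears it.
# The spurious early arrival at t = 0.9 (exact value: 0) is the numerical dispersion,
# and it shrinks as the grid is refined.
for n in (50, 400):
    t, curve = breakthrough(n)
    print(f"{n:4d} cells: outlet concentration at t = 0.9 -> {np.interp(0.9, t, curve):.3f}")
```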
Tool for Analysis and Reduction of Scientific Data
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
The Automated Scheduling and Planning Environment (ASPEN) computer program has been updated to version 3.0. ASPEN as a whole (up to version 2.0) has been summarized, and selected aspects of ASPEN have been discussed in several previous NASA Tech Briefs articles. Restated briefly, ASPEN is a modular, reconfigurable, application software framework for solving batch problems that involve reasoning about time, activities, states, and resources. Applications of ASPEN can include planning spacecraft missions, scheduling of personnel, and managing supply chains, inventories, and production lines. ASPEN 3.0 can be customized for a wide range of applications and for a variety of computing environments that include various central processing units and random-access memories. Domain-specific reasoning modules (e.g., modules for determining orbits for spacecraft) can easily be plugged into ASPEN 3.0. Improvements over other, similar software that have been incorporated into ASPEN 3.0 include a provision for more expressive time-line values, new parsing capabilities afforded by an ASPEN language based on Extensible Markup Language, improved search capabilities, and improved interfaces to other, utility-type software (notably including MATLAB).
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCoy, Michel; Archer, Bill; Hendrickson, Bruce
The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), and quantifying critical margins and uncertainties. Resolving each issue requires increasingly difficult analyses because the aging process has progressively moved the stockpile further away from the original test base. Where possible, the program also enables the use of high performance computing (HPC) and simulation tools to address broader national security needs, such as foreign nuclear weapon assessments and counter nuclear terrorism.
NASA Astrophysics Data System (ADS)
Sellars, S. L.; Nguyen, P.; Tatar, J.; Graham, J.; Kawsenuk, B.; DeFanti, T.; Smarr, L.; Sorooshian, S.; Ralph, M.
2017-12-01
A new era in computational earth sciences is within our grasp with the availability of ever-increasing earth observational data, enhanced computational capabilities, and innovative computational approaches that allow for the assimilation, analysis, and modeling of complex earth science phenomena. The Pacific Research Platform (PRP), CENIC, and associated technologies such as the Flash I/O Network Appliance (FIONA) provide scientists with a unique capability for advancing towards this new era. This presentation reports on the development of multi-institutional rapid data access capabilities and a data pipeline for applying a novel image characterization and segmentation approach, the CONNected objECT (CONNECT) algorithm, to study Atmospheric River (AR) events impacting the Western United States. ARs are often associated with torrential rains, swollen rivers, flash flooding, and mudslides. CONNECT is computationally intensive, reliant on very large data transfers, storage, and data mining techniques. The ability to apply the method to multiple variables and datasets located at different University of California campuses has previously been challenged by inadequate network bandwidth and computational constraints. The presentation will highlight how the inter-campus CONNECT data mining framework improved from our prior download speeds of 10 MB/s to 500 MB/s using the PRP and the FIONAs. We present a worked example using the NASA MERRA data to describe how the PRP and FIONA have provided researchers with the capability for advancing knowledge about ARs. Finally, we will discuss future efforts to expand the scope to additional variables in earth sciences.
Improving Search Properties in Genetic Programming
NASA Technical Reports Server (NTRS)
Janikow, Cezary Z.; DeWeese, Scott
1997-01-01
With advancing computer processing capabilities, practical computer applications are mostly limited by the amount of human programming required to accomplish a specific task. This necessary human participation creates many problems, such as dramatically increased cost. To alleviate the problem, computers must become more autonomous. In other words, computers must be capable of programming/reprogramming themselves to adapt to changing environments/tasks/demands/domains. Evolutionary computation offers potential means, but it must be advanced beyond its current practical limitations. Evolutionary algorithms model nature. They maintain a population of structures representing potential solutions to the problem at hand. These structures undergo a simulated evolution by means of mutation, crossover, and a Darwinian selective pressure. Genetic programming (GP) is the most promising example of an evolutionary algorithm. In GP, the structures that evolve are trees, which is a dramatic departure from previously used representations such as strings in genetic algorithms. The space of potential trees is defined by means of their elements: functions, which label internal nodes, and terminals, which label leaves. By attaching semantic interpretation to those elements, trees can be interpreted as computer programs (given an interpreter), evolved architectures, etc. JSC has begun exploring GP as a potential tool for its long-term project on evolving dextrous robotic capabilities. Last year we identified representation redundancies as the primary source of inefficiency in GP. Subsequently, we proposed a method to use problem constraints to reduce those redundancies, effectively reducing GP complexity. This method was implemented afterwards at the University of Missouri. This summer, we have evaluated the payoff from using problem constraints to reduce search complexity on two classes of problems: learning boolean functions and solving the forward kinematics problem. We have also developed and implemented methods to use additional problem heuristics to fine-tune the searchable space, and to use typing information to further reduce the search space. Additional improvements have been proposed, but they are yet to be explored and implemented.
Predictive Behavior of a Computational Foot/Ankle Model through Artificial Neural Networks.
Chande, Ruchi D; Hargraves, Rosalyn Hobson; Ortiz-Robinson, Norma; Wayne, Jennifer S
2017-01-01
Computational models are useful tools to study the biomechanics of human joints. Their predictive performance is heavily dependent on bony anatomy and soft tissue properties. Imaging data provides anatomical requirements while approximate tissue properties are implemented from literature data, when available. We sought to improve the predictive capability of a computational foot/ankle model by optimizing its ligament stiffness inputs using feedforward and radial basis function neural networks. While the former demonstrated better performance than the latter per mean square error, both networks provided reasonable stiffness predictions for implementation into the computational model.
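As a rough sketch of the inverse-mapping idea (observed kinematics in, ligament stiffnesses out), the example below trains a small feedforward network on synthetic data; the data, network size, and units are illustrative assumptions, not the study's dataset or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in data (NOT the study's dataset): each sample pairs a set of
# ligament stiffnesses with the joint kinematics a model produced under them.
n_samples, n_ligaments, n_outputs = 300, 4, 6
stiffness = rng.uniform(20.0, 200.0, size=(n_samples, n_ligaments))   # e.g., N/mm
mixing = rng.normal(size=(n_ligaments, n_outputs))
kinematics = stiffness @ mixing + rng.normal(scale=5.0, size=(n_samples, n_outputs))

# Feedforward (multi-layer perceptron) network trained to invert the map:
# given observed/desired kinematics, predict the stiffness set to feed the model.
net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0))
net.fit(kinematics[:240], stiffness[:240])

pred = net.predict(kinematics[240:])
rmse = np.sqrt(np.mean((pred - stiffness[240:]) ** 2))
print(f"held-out RMSE on predicted stiffnesses: {rmse:.1f}")
```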
NASA Astrophysics Data System (ADS)
Igel, Heiner
2004-07-01
The European Commission recently funded a Marie-Curie Research Training Network (MCRTN) in the field of computational seismology within the 6th Framework Program. SPICE (Seismic wave Propagation and Imaging in Complex media: a European network) is coordinated by the computational seismology group of the Ludwig-Maximilians-Universität in Munich linking 14 European research institutions in total. The 4-year project will provide funding for 14 Ph.D. students (3-year projects) and 14 postdoctoral positions (2-year projects) within the various fields of computational seismology. These positions have been advertised and are currently being filled.
NASA Technical Reports Server (NTRS)
Taylor, N. L.
1983-01-01
In response to a need for improved computer-generated plots that are acceptable to the Langley publication process, the LaRC Graphics Output System has been modified to encompass the publication requirements, and a guideline has been established. This guideline deals only with the publication requirements of computer-generated plots. This report explains the capability that authors of NASA technical reports can use to obtain publication-quality computer-generated plots for the Langley publication process. The rules applied in developing this guideline and examples illustrating the rules are included.
GYC: A program to compute the turbulent boundary layer on a rotating cone
NASA Technical Reports Server (NTRS)
Sullivan, R. D.
1976-01-01
A computer program, GYC, capable of computing the properties of a compressible turbulent boundary layer on a rotating axisymmetric cone-cylinder body according to the principles of invariant modeling, was studied. The program was extended to include the calculation of the turbulence scale by a differential equation. GYC is in operation on the CDC-7600 computer and has undergone several corrections and improvements as a result of the experience gained. The theoretical basis for the program and the method of implementation, as well as information on its operation, are given.
Xyce parallel electronic simulator : users' guide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.
2011-05-01
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers; (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques; (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is a unique electrical simulation capability, designed to meet the unique needs of the laboratory.
Deep learning for teaching university physics to computers
NASA Astrophysics Data System (ADS)
Davis, Jackson P.; Price, Watt A.
2017-04-01
Attempts to improve physics instruction suggest that there is a fundamental barrier to the human learning of physics. We argue that the new capabilities of artificial intelligence justify a reconsideration not of how we teach physics but to whom we teach physics.
2008-01-01
Goal 5: Organizational Excellence. ...fully realized in the next 5 years, it is clear that coordinated activity must occur now to improve the Coast Guard's operational capabilities.
Advancing botnet modeling techniques for military and security simulations
NASA Astrophysics Data System (ADS)
Banks, Sheila B.; Stytz, Martin R.
2011-06-01
Simulation environments serve many purposes, but they are only as good as their content. One of the most challenging and pressing areas that call for improved content is the simulation of bot armies (botnets) and their effects upon networks and computer systems. Botnets are a new type of malware, a type that is more powerful and potentially more dangerous than any other type of malware. A botnet's power derives from several capabilities, including the following: 1) the botnet's capability to be controlled and directed throughout all phases of its activity, 2) a command and control structure that grows increasingly sophisticated, and 3) the ability of a bot's software to be updated at any time by the owner of the bot (a person commonly called a bot master or bot herder). Not only is a bot army powerful and agile in its technical capabilities, a bot army can be extremely large, comprising tens of thousands, if not millions, of compromised computers, or it can be as small as a few thousand targeted systems. In all botnets, their members can surreptitiously communicate with each other and their command and control centers. In sum, these capabilities allow a bot army to execute attacks that are technically sophisticated, difficult to trace, tactically agile, massive, and coordinated. To improve our understanding of their operation and potential, we believe that it is necessary to develop computer security simulations that accurately portray bot army activities, with the goal of including bot army simulations within military simulation environments. In this paper, we investigate issues that arise when simulating bot armies and propose a combination of the biologically inspired MSEIR infection spread model coupled with the jump-diffusion infection spread model to portray botnet propagation.
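For intuition about compartmental infection-spread models of this kind, the sketch below integrates a plain SEIR model with explicit Euler steps; it is a simplified stand-in for the authors' MSEIR plus jump-diffusion formulation, and the rate parameters are invented for the example.

```python
def seir_step(S, E, I, R, beta, sigma, gamma, dt):
    """One explicit Euler step of a basic SEIR compartmental model, with hosts
    moving Susceptible -> Exposed -> Infected (active bot) -> Removed."""
    N = S + E + I + R
    new_exposed  = beta * S * I / N * dt
    new_infected = sigma * E * dt
    new_removed  = gamma * I * dt
    return (S - new_exposed,
            E + new_exposed - new_infected,
            I + new_infected - new_removed,
            R + new_removed)

# Illustrative parameters (per day), not calibrated to any real botnet
S, E, I, R = 99_000.0, 0.0, 1_000.0, 0.0
beta, sigma, gamma, dt = 0.6, 0.5, 0.05, 0.1
for _ in range(int(30 / dt)):          # 30 simulated days
    S, E, I, R = seir_step(S, E, I, R, beta, sigma, gamma, dt)
print(f"compromised hosts after 30 days: {I:,.0f}")
```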
Advancing Test Capabilities at NASA Wind Tunnels
NASA Technical Reports Server (NTRS)
Bell, James
2015-01-01
NASA maintains twelve major wind tunnels at three field centers capable of providing flows at Mach numbers from 0.1 to 10 and unit Reynolds numbers up to 45x10^6 per meter. The maintenance and enhancement of these facilities is handled through a unified management structure under NASA's Aeronautics Evaluation and Test Capability (AETC) project. The AETC facilities are: the 11x11 transonic and 9x7 supersonic wind tunnels at NASA Ames; the 10x10 and 8x6 supersonic wind tunnels, 9x15 low speed tunnel, Icing Research Tunnel, and Propulsion Simulator Laboratory, all at NASA Glenn; and the National Transonic Facility, Transonic Dynamics Tunnel, LAL aerothermodynamics laboratory, 8-Ft High Temperature Tunnel, and 14x22 low speed tunnel, all at NASA Langley. This presentation describes the primary AETC facilities and their current capabilities, as well as improvements which are planned over the next five years. These improvements fall into three categories. The first are operations and maintenance improvements designed to increase the efficiency and reliability of the wind tunnels. These include new (possibly composite) fan blades at several facilities, new temperature control systems, and new and much more capable facility data systems. The second category of improvements are facility capability advancements. These include significant improvements to optical access in wind tunnel test sections at Ames, improvements to test section acoustics at Glenn and Langley, the development of a Supercooled Large Droplet capability for icing research, and the development of an icing capability for large engine testing. The final category of improvements consists of test technology enhancements which provide value across multiple facilities. These include projects to increase balance accuracy, provide NIST-traceable calibration characterization for wind tunnels, and to advance optical instruments for Computational Fluid Dynamics (CFD) validation. Taken as a whole, these individual projects provide significant enhancements to NASA capabilities in ground-based testing. They ensure that these wind tunnels will provide accurate and relevant experimental data for years to come, supporting both NASA's mission and the missions of our government and industry customers.
The ACI-REF Program: Empowering Prospective Computational Researchers
NASA Astrophysics Data System (ADS)
Cuma, M.; Cardoen, W.; Collier, G.; Freeman, R. M., Jr.; Kitzmiller, A.; Michael, L.; Nomura, K. I.; Orendt, A.; Tanner, L.
2014-12-01
The ACI-REF program, Advanced Cyberinfrastructure - Research and Education Facilitation, represents a consortium of academic institutions seeking to further advance the capabilities of their respective campus research communities through an extension of the personal connections and educational activities that underlie the unique and often specialized cyberinfrastructure at each institution. This consortium currently includes Clemson University, Harvard University, University of Hawai'i, University of Southern California, University of Utah, and University of Wisconsin. Working together in a coordinated effort, the consortium is dedicated to the adoption of models and strategies which leverage the expertise and experience of its members with a goal of maximizing the impact of each institution's investment in research computing. The ACI-REFs (facilitators) are tasked with making connections and building bridges between the local campus researchers and the many different providers of campus, commercial, and national computing resources. Through these bridges, ACI-REFs assist researchers from all disciplines in understanding their computing and data needs and in mapping these needs to existing capabilities or providing assistance with development of these capabilities. From the Earth sciences perspective, we will give examples of how this assistance improved methods and workflows in geophysics, geography and atmospheric sciences. We anticipate that this effort will expand the number of researchers who become self-sufficient users of advanced computing resources, allowing them to focus on making research discoveries in a more timely and efficient manner.
Process improvement as an investment: Measuring its worth
NASA Technical Reports Server (NTRS)
Mcgarry, Frank; Jeletic, Kellyann
1993-01-01
This paper discusses return on investment (ROI) generated from software process improvement programs. It details the steps needed to compute ROI and compares these steps from the perspective of two process improvement approaches: the widely known Software Engineering Institute's capability maturity model and the approach employed by NASA's Software Engineering Laboratory (SEL). The paper then describes the specific investments made in the SEL over the past 18 years and discusses the improvements gained from this investment by the production organization in the SEL.
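A minimal worked example of the ROI arithmetic, with invented dollar figures rather than SEL or CMM data:

```python
def roi(benefit, investment):
    """Return on investment as a ratio: (benefit - investment) / investment."""
    return (benefit - investment) / investment

# Illustrative figures only: an organization spends $500K on process
# improvement and attributes $1.2M of savings to it.
print(f"ROI = {roi(1_200_000, 500_000):.0%}")   # -> 140%
```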
NASA Technical Reports Server (NTRS)
Ferguson, D. R.; Keith, J. S.
1975-01-01
The improvements which have been incorporated in the Streamtube Curvature Program to enhance both its computational and diagnostic capabilities are described. Detailed descriptions are given of the revisions incorporated to more reliably handle the jet stream-external flow interaction at trailing edges. Also presented are the augmented boundary layer procedures and a variety of other program changes relating to program diagnostics and extended solution capabilities. An updated User's Manual, that includes information on the computer program operation, usage, and logical structure, is presented. User documentation includes an outline of the general logical flow of the program and detailed instructions for program usage and operation. From the standpoint of the programmer, the overlay structure is described. The input data, output formats, and diagnostic printouts are covered in detail and illustrated with three typical test cases.
Interactive information processing for NASA's mesoscale analysis and space sensor program
NASA Technical Reports Server (NTRS)
Parker, K. G.; Maclean, L.; Reavis, N.; Wilson, G.; Hickey, J. S.; Dickerson, M.; Karitani, S.; Keller, D.
1985-01-01
The Atmospheric Sciences Division (ASD) of the Systems Dynamics Laboratory at NASA's Marshall Space Flight Center (MSFC) is currently involved in interactive information processing for the Mesoscale Analysis and Space Sensor (MASS) program. Specifically, the ASD is engaged in the development and implementation of new space-borne remote sensing technology to observe and measure mesoscale atmospheric processes. These space measurements and conventional observational data are being processed together to gain an improved understanding of the mesoscale structure and the dynamical evolution of the atmosphere relative to cloud development and precipitation processes. To satisfy its vast data processing requirements, the ASD has developed a Researcher Computer System consisting of three primary computer systems, which provides over 20 scientists with a wide range of capabilities for processing and displaying large volumes of remote sensing data. Each of the computers performs a specific function according to its unique capabilities.
Building a Data Science capability for USGS water research and communication
NASA Astrophysics Data System (ADS)
Appling, A.; Read, E. K.
2015-12-01
Interpreting and communicating water issues in an era of exponentially increasing information requires a blend of domain expertise, computational proficiency, and communication skills. The USGS Office of Water Information has established a Data Science team to meet these needs, providing challenging careers for diverse domain scientists and innovators in the fields of information technology and data visualization. Here, we detail the experience of building a Data Science capability as a bridging element between traditional water resources analyses and modern computing tools and data management techniques. This approach includes four major components: 1) building reusable research tools, 2) documenting data-intensive research approaches in peer reviewed journals, 3) communicating complex water resources issues with interactive web visualizations, and 4) offering training programs for our peers in scientific computing. These components collectively improve the efficiency, transparency, and reproducibility of USGS data analyses and scientific workflows.
Comprehensive Micromechanics-Analysis Code - Version 4.0
NASA Technical Reports Server (NTRS)
Arnold, S. M.; Bednarcyk, B. A.
2005-01-01
Version 4.0 of the Micromechanics Analysis Code With Generalized Method of Cells (MAC/GMC) has been developed as an improved means of computational simulation of advanced composite materials. The previous version of MAC/GMC was described in "Comprehensive Micromechanics-Analysis Code" (LEW-16870), NASA Tech Briefs, Vol. 24, No. 6 (June 2000), page 38. To recapitulate: MAC/GMC is a computer program that predicts the elastic and inelastic thermomechanical responses of continuous and discontinuous composite materials with arbitrary internal microstructures and reinforcement shapes. The predictive capability of MAC/GMC rests on a model known as the generalized method of cells (GMC) - a continuum-based model of micromechanics that provides closed-form expressions for the macroscopic response of a composite material in terms of the properties, sizes, shapes, and responses of the individual constituents or phases that make up the material. Enhancements in version 4.0 include a capability for modeling thermomechanically and electromagnetically coupled ("smart") materials; a more-accurate (high-fidelity) version of the GMC; a capability to simulate discontinuous plies within a laminate; additional constitutive models of materials; expanded yield-surface-analysis capabilities; and expanded failure-analysis and life-prediction capabilities on both the microscopic and macroscopic scales.
Multiprocessing on supercomputers for computational aerodynamics
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; Mehta, Unmeel B.
1991-01-01
Little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) to improve turnaround time in computational aerodynamics. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, such improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) is applied through multitasking via a strategy that requires relatively minor modifications to an existing code for a single processor. This approach maps the available memory to multiple processors, exploiting the C-Fortran-Unix interface. The existing code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor.
Towards a Multi-Mission, Airborne Science Data System Environment
NASA Astrophysics Data System (ADS)
Crichton, D. J.; Hardman, S.; Law, E.; Freeborn, D.; Kay-Im, E.; Lau, G.; Oswald, J.
2011-12-01
NASA earth science instruments are increasingly relying on airborne missions. However, traditionally, there has been limited common infrastructure support available to principal investigators in the area of science data systems. As a result, each investigator has been required to develop their own computing infrastructures for the science data system. Typically there is little software reuse and many projects lack sufficient resources to provide a robust infrastructure to capture, process, distribute and archive the observations acquired from airborne flights. At NASA's Jet Propulsion Laboratory (JPL), we have been developing a multi-mission data system infrastructure for airborne instruments called the Airborne Cloud Computing Environment (ACCE). ACCE encompasses the end-to-end lifecycle covering planning, provisioning of data system capabilities, and support for scientific analysis in order to improve the quality, cost effectiveness, and capabilities to enable new scientific discovery and research in earth observation. This includes improving data system interoperability across each instrument. A principal characteristic is being able to provide an agile infrastructure that is architected to allow for a variety of configurations of the infrastructure from locally installed compute and storage services to provisioning those services via the "cloud" from cloud computer vendors such as Amazon.com. Investigators often have different needs that require a flexible configuration. The data system infrastructure is built on the Apache's Object Oriented Data Technology (OODT) suite of components which has been used for a number of spaceborne missions and provides a rich set of open source software components and services for constructing science processing and data management systems. In 2010, a partnership was formed between the ACCE team and the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) mission to support the data processing and data management needs. A principal goal is to provide support for the Fourier Transform Spectrometer (FTS) instrument which will produce over 700,000 soundings over the life of their three-year mission. The cost to purchase and operate a cluster-based system in order to generate Level 2 Full Physics products from this data was prohibitive. Through an evaluation of cloud computing solutions, Amazon's Elastic Compute Cloud (EC2) was selected for the CARVE deployment. As the ACCE infrastructure is developed and extended to form an infrastructure for airborne missions, the experience of working with CARVE has provided a number of lessons learned and has proven to be important in reinforcing the unique aspects of airborne missions and the importance of the ACCE infrastructure in developing a cost effective, flexible multi-mission capability that leverages emerging capabilities in cloud computing, workflow management, and distributed computing.
Computer-aided communication satellite system analysis and optimization
NASA Technical Reports Server (NTRS)
Stagl, T. W.; Morgan, N. H.; Morley, R. E.; Singh, J. P.
1973-01-01
The capabilities and limitations of the various published computer programs for fixed/broadcast communication satellite system synthesis and optimization are discussed. A Satellite Telecommunication Analysis and Modeling Program (STAMP) for costing and sensitivity-analysis work in the application of communication satellites to educational development is given. The modifications made to STAMP include: extension of the six-beam capability to eight; addition of generation of multiple beams from a single reflector system with an array of feeds; improved system costing that reflects the time value of money and growth in the earth terminal population with time, and accounts for various measures of system reliability; inclusion of a model for scintillation at microwave frequencies in the communication link loss model; and an updated technological environment.
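The time-value-of-money element of the system costing can be illustrated with a simple present-value calculation; the cash-flow profile and discount rate below are assumptions for the example, not STAMP inputs.

```python
def present_value(cashflows, rate):
    """Discount a list of yearly cash flows (year 0 first) to present value."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# Illustrative figures only: space segment bought in year 0, ground terminal
# spending growing 15%/yr as the terminal population grows, 8% discount rate.
yearly_cost = [50.0] + [5.0 * 1.15 ** t for t in range(5)]   # $M per year
print(f"present value of system cost: ${present_value(yearly_cost, 0.08):.1f}M")
```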
Benchmarking of neutron production of heavy-ion transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Remec, I.; Ronningen, R. M.; Heilbronn, L.
Document available in abstract form only, full text of document follows: Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required. (authors)
Programmable DNA-Mediated Multitasking Processor.
Shu, Jian-Jun; Wang, Qi-Wen; Yong, Kian-Yan; Shao, Fangwei; Lee, Kee Jin
2015-04-30
Because of DNA's appealing features as a perfect material, including minuscule size, defined structural repeat, and rigidity, programmable DNA-mediated processing is a promising computing paradigm, which employs DNAs as information storing and processing substrates to tackle computational problems. The massive parallelism of DNA hybridization exhibits transcendent potential to improve multitasking capabilities and yield a tremendous speed-up over conventional electronic processors with stepwise signal cascades. As an example of multitasking capability, we present an in vitro programmable DNA-mediated optimal route planning processor as a functional unit embedded in contemporary navigation systems. The novel programmable DNA-mediated processor has several advantages over the existing silicon-mediated methods, such as conducting massive data storage and simultaneous processing via much fewer materials than conventional silicon devices.
Building a computer-aided design capability using a standard time share operating system
NASA Technical Reports Server (NTRS)
Sobieszczanski, J.
1975-01-01
The paper describes how an integrated system of engineering computer programs can be built using a standard commercially available operating system. The discussion opens with an outline of the auxiliary functions that an operating system can perform for a team of engineers involved in a large and complex task. An example of a specific integrated system is provided to explain how the standard operating system features can be used to organize the programs into a simple and inexpensive but effective system. Applications to an aircraft structural design study are discussed to illustrate the use of an integrated system as a flexible and efficient engineering tool. The discussion concludes with an engineer's assessment of an operating system's capabilities and desirable improvements.
ERIC Educational Resources Information Center
Matsuda, Hiroshi; Shindo, Yoshiaki
2006-01-01
The 3D computer graphics (3D-CG) animation using a virtual actor's speaking is very effective as an educational medium. But it takes a long time to produce a 3D-CG animation. To reduce the cost of producing 3D-CG educational contents and improve the capability of the education system, we have developed a new education system using Virtual Actor.…
Fault Tolerant Parallel Implementations of Iterative Algorithms for Optimal Control Problems
1988-01-21
... steps, but did not discuss any specific parallel implementation. Gajski [5] improved upon this result by performing the SIMD computation in... When N = p^2, our approach reduces to that of [5], except that Gajski presents the coefficient computation and partial solution phases as a single... The SIMD algorithm presented by Gajski [5] can be most efficiently mapped to a unidirectional ring network with broadcasting capability. Based...
Advanced Simulation & Computing FY15 Implementation Plan Volume 2, Rev. 0.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCoy, Michel; Archer, Bill; Matzen, M. Keith
2014-09-16
The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. As the program approaches the end of its second decade, ASC is intently focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Where possible, the program also enables the use of high-performance simulation and computing tools to address broader national security needs, such as foreign nuclear weapon assessments and counternuclear terrorism.
A Question of Interface Design: How Do Online Service GUIs Measure Up?
ERIC Educational Resources Information Center
Head, Alison J.
1997-01-01
Describes recent improvements in graphical user interfaces (GUIs) offered by online services. Highlights include design considerations, including computer engineering capabilities and users' abilities; fundamental GUI design principles; user empowerment; visual communication and interaction; and an evaluation of online search interfaces. (LRW)
This Rock 'n' Roll Video Teaches Math
ERIC Educational Resources Information Center
Niess, Margaret L.; Walker, Janet M.
2009-01-01
Mathematics is a discipline that has significantly advanced through the use of digital technologies with improved computational, graphical, and symbolic capabilities. Digital videos can be used to present challenging mathematical questions for students. Video clips offer instructional possibilities for moving students from a passive mode of…
Introduction of the ASP3D Computer Program for Unsteady Aerodynamic and Aeroelastic Analyses
NASA Technical Reports Server (NTRS)
Batina, John T.
2005-01-01
A new computer program has been developed called ASP3D (Advanced Small Perturbation 3D), which solves the small perturbation potential flow equation in an advanced form including mass-consistent surface and trailing wake boundary conditions, and entropy, vorticity, and viscous effects. The purpose of the program is for unsteady aerodynamic and aeroelastic analyses, especially in the nonlinear transonic flight regime. The program exploits the simplicity of stationary Cartesian meshes with the movement or deformation of the configuration under consideration incorporated into the solution algorithm through a planar surface boundary condition. The new ASP3D code is the result of a decade of developmental work on improvements to the small perturbation formulation, performed while the author was employed as a Senior Research Scientist in the Configuration Aerodynamics Branch at the NASA Langley Research Center. The ASP3D code is a significant improvement to the state-of-the-art for transonic aeroelastic analyses over the CAP-TSD code (Computational Aeroelasticity Program Transonic Small Disturbance), which was developed principally by the author in the mid-1980s. The author is in a unique position as the developer of both computer programs to compare, contrast, and ultimately make conclusions regarding the underlying formulations and utility of each code. The paper describes the salient features of the ASP3D code including the rationale for improvements in comparison with CAP-TSD. Numerous results are presented to demonstrate the ASP3D capability. The general conclusion is that the new ASP3D capability is superior to the older CAP-TSD code because of the myriad improvements developed and incorporated.
NASA Astrophysics Data System (ADS)
Jimenez, Edward S.; Thompson, Kyle R.; Stohn, Adriana; Goodner, Ryan N.
2017-09-01
Sandia National Laboratories has recently developed the capability to acquire multi-channel radiographs for multiple research and development applications in industry and security. This capability allows x-ray radiographs or sinogram data to be acquired at up to 300 keV with up to 128 channels per pixel. This work will investigate whether multiple quality metrics for computed tomography can actually benefit from binned projection data compared to traditionally acquired grayscale sinogram data. Features and metrics to be evaluated include the ability to distinguish between two different materials with similar absorption properties, artifact reduction, and signal-to-noise for both raw data and reconstructed volumetric data. The impact of this technology on non-destructive evaluation, national security, and industry is wide-ranging and has the potential to improve upon many inspection methods such as dual-energy methods, material identification, object segmentation, and computer vision on radiographs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Copps, Kevin D.
The Sandia Analysis Workbench (SAW) project has developed and deployed a production capability for SIERRA computational mechanics analysis workflows. However, the electrical analysis workflow capability requirements have only been demonstrated in early prototype states, with no real capability deployed for analysts' use. This milestone aims to improve the electrical analysis workflow capability (via SAW and related tools) and deploy it for ongoing use. We propose to focus on a QASPR electrical analysis calibration workflow use case. We will include a number of new capabilities (versus today's SAW), such as: 1) support for the XYCE code workflow component, 2) data management coupled to electrical workflow, 3) human-in-the-loop workflow capability, and 4) electrical analysis workflow capability deployed on the restricted (and possibly classified) network at Sandia. While far from the complete set of capabilities required for electrical analysis workflow over the long term, this is a substantial first step toward full production support for the electrical analysts.
9-Ft By 7-Ft Supersonic Wind Tunnel Nozzle Improvement Study
NASA Technical Reports Server (NTRS)
Paciano, Eric N.
2014-01-01
Engineers at the Unitary Plan Wind Tunnel at NASA Ames Research Center have recently embarked on a project focused on improving flow quality and tunnel capabilities in the 9-ft by 7-ft supersonic wind tunnel. Collaborating with Jacobs Tech Group, the project has explored potential improvements to the nozzle design using computational fluid dynamics. Preliminary predictions suggest changes to the nozzle design could significantly improve flow quality at the lower operating range (M = 1.5-1.8); however, potential improvements in the upper operating range have yet to be realized.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bryan, Frank; Dennis, John; MacCready, Parker
This project aimed to improve long term global climate simulations by resolving and enhancing the representation of the processes involved in the cycling of freshwater through estuaries and coastal regions. This was a collaborative multi-institution project consisting of physical oceanographers, climate model developers, and computational scientists. It specifically targeted the DOE objectives of advancing simulation and predictive capability of climate models through improvements in resolution and physical process representation.
Overview of ASC Capability Computing System Governance Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doebling, Scott W.
This document contains a description of the Advanced Simulation and Computing Program's Capability Computing System Governance Model. Objectives of the Governance Model are to ensure that the capability system resources are allocated on a priority-driven basis according to the Program requirements; and to utilize ASC Capability Systems for the large capability jobs for which they were designed and procured.
Driving imaging and overlay performance to the limits with advanced lithography optimization
NASA Astrophysics Data System (ADS)
Mulkens, Jan; Finders, Jo; van der Laan, Hans; Hinnen, Paul; Kubis, Michael; Beems, Marcel
2012-03-01
Immersion lithography is being extended to 22 nm and even below. In addition to generic scanner system improvements, application-specific solutions are needed to follow the requirements for CD control and overlay. Starting from the performance budgets, this paper discusses how to improve (in a volume manufacturing environment) CDU towards 1 nm and overlay towards 3 nm. The improvements are based on deploying the actuator capabilities of the immersion scanner. The latest generation immersion scanners have extended the correction capabilities for overlay and imaging, offering freeform adjustments of lens, illuminator, and wafer grid. In order to determine the needed adjustments, the recipe generation per user application is based on a combination of wafer metrology data and computational lithography methods. For overlay, focus, and CD metrology we use an angle-resolved optical scatterometer.
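A toy version of the recipe-generation idea is fitting a simple linear grid model (translation, magnification, rotation) to measured overlay errors by least squares; actual scanner correction sets are far richer and freeform, so the sketch below is only illustrative, with simulated measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated overlay measurements at wafer coordinates (x, y) in mm.
# Simple linear grid model: dx = Tx + M*x - R*y, dy = Ty + M*y + R*x
# (translation, magnification, rotation); values are invented for the example.
x = rng.uniform(-150, 150, 60)
y = rng.uniform(-150, 150, 60)
Tx, Ty, M, R = 3.0, -2.0, 0.02, 0.01            # nm, nm, nm/mm, nm/mm
dx = Tx + M * x - R * y + rng.normal(0, 0.5, x.size)
dy = Ty + M * y + R * x + rng.normal(0, 0.5, x.size)

# Stack both components into one least-squares problem for (Tx, Ty, M, R).
A = np.block([
    [np.ones_like(x)[:, None], np.zeros_like(x)[:, None], x[:, None], -y[:, None]],
    [np.zeros_like(x)[:, None], np.ones_like(x)[:, None], y[:, None],  x[:, None]],
])
b = np.concatenate([dx, dy])
params, *_ = np.linalg.lstsq(A, b, rcond=None)
print("fitted Tx, Ty, M, R:", np.round(params, 3))
```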
Aerosol Polarimetry Sensor (APS): Design Summary, Performance and Potential Modifications
NASA Technical Reports Server (NTRS)
Cairns, Brian
2014-01-01
APS is a mature design that has already been built and has a TRL of 7. Algorithmic and retrieval capabilities continue to improve and make better and more sophisticated use of the data. Adjoint solutions, both one-dimensional and three-dimensional, are computationally efficient and should be the preferred implementation for the calculation of Jacobians (one-dimensional) or cost-function gradients (three-dimensional). Adjoint solutions necessarily provide resolution of internal fields and simplify incorporation of active measurements in retrievals, which will be necessary for a future ACE mission. It is best to test these capabilities when you know the answer: OSSEs that are well constrained observationally provide the best place to test future multi-instrument platform capabilities and ensure capabilities will meet scientific needs.
Multiprocessing on supercomputers for computational aerodynamics
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; Mehta, Unmeel B.
1990-01-01
Very little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) in computational aerodynamics to significantly improve turnaround time. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, the improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) through multi-tasking is applied via a strategy which requires relatively minor modifications to an existing code for a single processor. Essentially, this approach maps the available memory to multiple processors, exploiting the C-FORTRAN-Unix interface. The existing single processor code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor. As a demonstration of this approach, a Multiple Processor Multiple Grid (MPMG) code is developed. It is capable of using nine processors, and can be easily extended to a larger number of processors. This code solves the three-dimensional, Reynolds averaged, thin-layer and slender-layer Navier-Stokes equations with an implicit, approximately factored and diagonalized method. The solver is applied to a generic oblique-wing aircraft problem on a four-processor Cray-2 computer. A tricubic interpolation scheme is developed to increase the accuracy of coupling of overlapped grids. For the oblique-wing aircraft problem, a speedup of two in elapsed (turnaround) time is observed in a saturated time-sharing environment.
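A present-day analogue of this multitasking strategy, sketched in Python rather than the original C-Fortran-Unix setup, is to hand each grid block to its own worker process and run the same relaxation kernel on all of them; the block sizes and Jacobi kernel are placeholders for the actual flow solver.

```python
import numpy as np
from multiprocessing import Pool

def relax_block(block):
    """Run a few Jacobi relaxation sweeps on one grid block (a stand-in for a
    per-processor flow-solver task; the original work multitasked a
    Navier-Stokes solver over overlapped grids)."""
    b = block.copy()
    for _ in range(100):
        b[1:-1, 1:-1] = 0.25 * (b[:-2, 1:-1] + b[2:, 1:-1] +
                                b[1:-1, :-2] + b[1:-1, 2:])
    return b

if __name__ == "__main__":
    # One block per worker process: MIMD-style multitasking of the same
    # algorithm over independent data, with no new numerical method required.
    blocks = [np.random.rand(128, 128) for _ in range(4)]
    with Pool(processes=4) as pool:
        results = pool.map(relax_block, blocks)
    print("relaxed", len(results), "blocks")
```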
An Ecological Framework for Cancer Communication: Implications for Research
Intille, Stephen S; Zabinski, Marion F
2005-01-01
The field of cancer communication has undergone a major revolution as a result of the Internet. As recently as the early 1990s, face-to-face, print, and the telephone were the dominant methods of communication between health professionals and individuals in support of the prevention and treatment of cancer. Computer-supported interactive media existed, but this usually required sophisticated computer and video platforms that limited availability. The introduction of point-and-click interfaces for the Internet dramatically improved the ability of non-expert computer users to obtain and publish information electronically on the Web. Demand for Web access has driven computer sales for the home setting and improved the availability, capability, and affordability of desktop computers. New advances in information and computing technologies will lead to similarly dramatic changes in the affordability and accessibility of computers. Computers will move from the desktop into the environment and onto the body. Computers are becoming smaller, faster, more sophisticated, more responsive, less expensive, and—essentially—ubiquitous. Computers are evolving into much more than desktop communication devices. New computers include sensing, monitoring, geospatial tracking, just-in-time knowledge presentation, and a host of other information processes. The challenge for cancer communication researchers is to acknowledge the expanded capability of the Web and to move beyond the approaches to health promotion, behavior change, and communication that emerged during an era when language- and image-based interpersonal and mass communication strategies predominated. Ecological theory has been advanced since the early 1900s to explain the highly complex relationships among individuals, society, organizations, the built and natural environments, and personal and population health and well-being. This paper provides background on ecological theory, advances an Ecological Model of Internet-Based Cancer Communication intended to broaden the vision of potential uses of the Internet for cancer communication, and provides some examples of how such a model might inform future research and development in cancer communication. PMID:15998614
Defense Attache Saigon: RVNAF Quarterly Assessment, 1st Quarter FY75
1974-11-01
... has been realized and a new computation of requirements methodology has been developed. Improved repair capability at ATLC and the Air ... Asia Contractor (Taiwan) have also reduced the dollar value of AIMI buy requirements from CONUS. Comparison of quarterly requirements follows.
Fast Particle Methods for Multiscale Phenomena Simulations
NASA Technical Reports Server (NTRS)
Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew
2000-01-01
We are developing particle methods oriented at improving computational modeling capabilities of multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle laden and interfacial flows, (iii) molecular dynamics studies of nanoscale droplets and studies of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented in parallel computer architectures. The inherent adaptivity, robustness, and efficiency of particle methods make them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are in: (i) the proper formulation of particle methods at the molecular and continuum levels for the discretization of the governing equations, (ii) the resolution of the wide range of time and length scales governing the phenomena under investigation, (iii) the minimization of numerical artifacts that may interfere with the physics of the systems under consideration, and (iv) the parallelization of processes such as tree traversal and grid-particle interpolations. We are conducting simulations using vortex methods, molecular dynamics, and smooth particle hydrodynamics, exploiting their unifying concepts such as: the solution of the N-body problem in parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs, and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to transcend among seemingly unrelated areas of research.
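One of the particle-grid interpolation steps mentioned above can be sketched as a cloud-in-cell (linear) transfer of particle weights onto a 1D periodic grid; this is a generic illustration, not the project's implementation.

```python
import numpy as np

def particles_to_grid(positions, weights, n_cells, length=1.0):
    """Cloud-in-cell (linear) particle-to-grid interpolation on a periodic
    1D grid: each particle's weight is split between its two nearest cells."""
    dx = length / n_cells
    grid = np.zeros(n_cells)
    for x, w in zip(positions, weights):
        cell = int(x / dx)
        frac = x / dx - cell
        grid[cell % n_cells] += w * (1.0 - frac)
        grid[(cell + 1) % n_cells] += w * frac
    return grid / dx            # convert accumulated weight to a density

rng = np.random.default_rng(0)
field = particles_to_grid(rng.uniform(0, 1, 10_000), np.full(10_000, 1e-4), 64)
print(f"interpolated field mean: {field.mean():.3f}")
```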
NASA Technical Reports Server (NTRS)
Shen, Bo-Wen; Tao, Wei-Kuo; Chern, Jiun-Dar
2007-01-01
Improving our understanding of hurricane inter-annual variability and the impact of climate change (e.g., doubling CO2 and/or global warming) on hurricanes brings both scientific and computational challenges to researchers. As hurricane dynamics involves multiscale interactions among synoptic-scale flows, mesoscale vortices, and small-scale cloud motions, an ideal numerical model suitable for hurricane studies should demonstrate its capabilities in simulating these interactions. The newly-developed multiscale modeling framework (MMF, Tao et al., 2007) and the substantial computing power provided by the NASA Columbia supercomputer show promise in pursuing the related studies, as the MMF inherits the advantages of two NASA state-of-the-art modeling components: the GEOS4/fvGCM and 2D GCEs. This article focuses on the computational issues and proposes a revised methodology to improve the MMF's performance and scalability. It is shown that this prototype implementation enables 12-fold performance improvements with 364 CPUs, thereby making it more feasible to study hurricane climate.
PARVMEC: An Efficient, Scalable Implementation of the Variational Moments Equilibrium Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seal, Sudip K; Hirshman, Steven Paul; Wingen, Andreas
The ability to sustain magnetically confined plasma in a state of stable equilibrium is crucial for optimal and cost-effective operations of fusion devices like tokamaks and stellarators. The Variational Moments Equilibrium Code (VMEC) is the de-facto serial application used by fusion scientists to compute magnetohydrodynamics (MHD) equilibria and study the physics of three dimensional plasmas in confined configurations. Modern fusion energy experiments have larger system scales with more interactive experimental workflows, both demanding faster analysis turnaround times on computational workloads that are stressing the capabilities of sequential VMEC. In this paper, we present PARVMEC, an efficient, parallel version of its sequential counterpart, capable of scaling to thousands of processors on distributed memory machines. PARVMEC is a non-linear code, with multiple numerical physics modules, each with its own computational complexity. A detailed speedup analysis supported by scaling results on 1,024 cores of a Cray XC30 supercomputer is presented. Depending on the mode of PARVMEC execution, speedup improvements of one to two orders of magnitude are reported. PARVMEC equips fusion scientists for the first time with a state-of-the-art capability for rapid, high fidelity analyses of magnetically confined plasmas at unprecedented scales.
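Speedup analyses of this kind are often framed with Amdahl's law; the serial fractions below are illustrative only and are not taken from the PARVMEC results.

```python
def amdahl_speedup(serial_fraction, processors):
    """Ideal speedup for a code whose serial fraction cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Illustrative serial fractions; one-to-two order of magnitude gains on
# 1,024 cores imply a small effective serial fraction.
for f in (0.05, 0.01, 0.001):
    print(f"serial fraction {f:>5}: speedup on 1024 cores = "
          f"{amdahl_speedup(f, 1024):7.1f}x")
```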
Integrated Modeling and Analysis of Physical Oceanographic and Acoustic Processes
2015-09-30
goal is to improve ocean physical state and acoustic state predictive capabilities. The goal fitting the scope of this project is the creation of... Project-scale objectives are to complete targeted studies of oceanographic processes in a few regimes, accompanied by studies of acoustic propagation... by the basic research efforts of this project. An additional objective is to develop improved computational tools for acoustics and for the
Experience Transitioning Models and Data at the NOAA Space Weather Prediction Center
NASA Astrophysics Data System (ADS)
Berger, Thomas
2016-07-01
The NOAA Space Weather Prediction Center has a long history of transitioning research data and models into operations and with the validation activities required. The first stage in this process involves demonstrating that the capability has sufficient value to customers to justify the cost needed to transition it and to run it continuously and reliably in operations. Once the overall value is demonstrated, a substantial effort is then required to develop the operational software from the research codes. The next stage is to implement and test the software and product generation on the operational computers. Finally, effort must be devoted to establishing long-term measures of performance, maintaining the software, and working with forecasters, customers, and researchers to improve over time the operational capabilities. This multi-stage process of identifying, transitioning, and improving operational space weather capabilities will be discussed using recent examples. Plans for future activities will also be described.
Kernel Temporal Differences for Neural Decoding
Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.
2015-01-01
We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatiotemporal information at reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
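A stripped-down sketch of the kernel temporal-difference idea (value function represented as a kernel expansion, updated from TD errors) is given below; it omits the eligibility traces and dictionary sparsification of KTD(λ) and uses synthetic "neural state" data, so it illustrates the concept rather than the published algorithm.

```python
import numpy as np

def gauss_kernel(a, b, width=0.5):
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * width ** 2))

class KernelTD:
    """Minimal kernel TD(0) value estimator: V(s) = sum_i w_i * k(s, c_i).
    A simplification of KTD(lambda) -- no traces or sparsification."""
    def __init__(self, eta=0.2, gamma=0.9):
        self.centers, self.weights = [], []
        self.eta, self.gamma = eta, gamma

    def value(self, s):
        return sum(w * gauss_kernel(s, c) for w, c in zip(self.weights, self.centers))

    def update(self, s, reward, s_next):
        td_error = reward + self.gamma * self.value(s_next) - self.value(s)
        self.centers.append(np.asarray(s, dtype=float))   # grow the representation
        self.weights.append(self.eta * td_error)
        return td_error

# Toy usage on a random-walk "neural state" stream (stand-in for decoder input)
rng = np.random.default_rng(0)
ktd, s = KernelTD(), rng.normal(size=3)
for _ in range(200):
    s_next = s + 0.1 * rng.normal(size=3)
    reward = float(np.linalg.norm(s_next) < 1.0)   # reward for states near the origin
    ktd.update(s, reward, s_next)
    s = s_next
print(f"learned value at origin: {ktd.value(np.zeros(3)):.3f}")
```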
EPA's National Center for Computational Toxicology is building capabilities to support a new paradigm for toxicity screening and prediction. The DSSTox project is improving public access to quality structure-annotated chemical toxicity information in less summarized forms than tr...
A History of Commitment in CALL.
ERIC Educational Resources Information Center
Jamieson, Joan
The evolution of computer-assisted language learning (CALL) is examined, focusing on what has changed and what has not changed much during that time. A variety of changes are noted: the development of multimedia capabilities, color, animation, and technical improvement of audio and video quality; availability of databases, better fit between…
A New Architecture for Improved Human Behavior in Military Simulations
2008-04-01
forces were using motorcycle couriers to avoid U.S. intelligence capabilities in the early days of Operation Iraqi Freedom (OIF), there were certainly... Games (MMORPG) engage millions of game players in near-real-time computing environments. Games such as World of Warcraft® attract players to
EPA’s National Center for Computational Toxicology is generating data and capabilities to support a new paradigm for toxicity screening and prediction. The DSSTox project is improving public access to quality structure-annotated chemical toxicity information in less summarized fo...
Precision Departure Release Capability (PDRC) Overview and Results: NASA to FAA Research Transition
NASA Technical Reports Server (NTRS)
Engelland, Shawn; Davis, Tom.
2013-01-01
NASA researchers developed the Precision Departure Release Capability (PDRC) concept to improve the tactical departure scheduling process. The PDRC system comprises: 1) a surface automation system that computes ready time predictions and departure runway assignments, 2) an en route scheduling automation tool that uses this information to estimate ascent trajectories to the merge point and computes release times, and 3) an interface that provides two-way communication between the two systems. To minimize technology transfer issues and facilitate its adoption by TMCs and Frontline Managers (FLM), NASA developed the PDRC prototype using the Surface Decision Support System (SDSS) as the Tower surface automation tool, a research version of the FAA TMA (RTMA) as the en route automation tool, and a digital interface between the two DSTs to facilitate coordination.
NASA Technical Reports Server (NTRS)
Perkins, Hugh Douglas
2010-01-01
In order to improve the understanding of particle vitiation effects in hypersonic propulsion test facilities, a quasi-one-dimensional numerical tool was developed to efficiently model reacting particle-gas flows over a wide range of conditions. Features of this code include gas-phase finite-rate kinetics, a global porous-particle combustion model, mass, momentum and energy interactions between phases, and subsonic and supersonic particle drag and heat transfer models. The basic capabilities of this tool were validated against available data or other validated codes. To demonstrate the capabilities of the code, a series of computations were performed for a model hypersonic propulsion test facility and scramjet. Parameters studied were simulated flight Mach number, particle size, particle mass fraction and particle material.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turinsky, Paul J., E-mail: turinsky@ncsu.edu; Kothe, Douglas B., E-mail: kothe@ornl.gov
The Consortium for the Advanced Simulation of Light Water Reactors (CASL), the first Energy Innovation Hub of the Department of Energy, was established in 2010 with the goal of providing modeling and simulation (M&S) capabilities that support and accelerate the improvement of nuclear energy's economic competitiveness and the reduction of spent nuclear fuel volume per unit energy, all while assuring nuclear safety. To accomplish this requires advances in M&S capabilities in radiation transport, thermal-hydraulics, fuel performance and corrosion chemistry. To focus CASL's R&D, industry challenge problems have been defined, which equate with long-standing issues of the nuclear power industry that M&S can assist in addressing. To date CASL has developed a multi-physics “core simulator” based upon pin-resolved radiation transport and subchannel (within fuel assembly) thermal-hydraulics, capitalizing on the capabilities of high performance computing. CASL's fuel performance M&S capability can also be optionally integrated into the core simulator, yielding a coupled multi-physics capability with untapped predictive potential. Material models have been developed to enhance predictive capabilities of fuel clad creep and growth, along with deeper understanding of zirconium alloy clad oxidation and hydrogen pickup. Understanding of corrosion chemistry (e.g., CRUD formation) has evolved at all scales: micro, meso and macro. CFD R&D has focused on improvement in closure models for subcooled boiling and bubbly flow, and the formulation of robust numerical solution algorithms. For multiphysics integration, several iterative acceleration methods have been assessed, illuminating areas where further research is needed. Finally, uncertainty quantification and data assimilation techniques, based upon sampling approaches, have been made more feasible for practicing nuclear engineers via R&D on dimensional reduction and biased sampling. Industry adoption of CASL's evolving M&S capabilities, which is in progress, will assist in addressing long-standing and future operational and safety challenges of the nuclear industry. - Highlights: • Complexity of physics-based modeling of light water reactor cores being addressed. • Capability developed to help address problems that have challenged the nuclear power industry. • Simulation capabilities that take advantage of high performance computing developed.
A view of Kanerva's sparse distributed memory
NASA Technical Reports Server (NTRS)
Denning, P. J.
1986-01-01
Pentti Kanerva is working on a new class of computers called pattern computers. Pattern computers may close the gap between the capabilities of biological organisms to recognize and act on patterns (visual, auditory, tactile, or olfactory) and the capabilities of modern computers. Combinations of numeric, symbolic, and pattern computers may one day be capable of sustaining robots. An overview of the requirements for a pattern computer, a summary of Kanerva's Sparse Distributed Memory (SDM), and examples of tasks this computer can be expected to perform well are given.
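As a rough illustration of the memory model summarized above, the following is a minimal sketch of a Kanerva-style sparse distributed memory with binary addresses, randomly placed hard locations, and counter arrays; the dimensions and Hamming radius are illustrative choices, not values taken from the article.

```python
# Minimal sketch of a Kanerva-style sparse distributed memory (SDM), assuming
# binary addresses and data words of equal length.
import numpy as np

class SparseDistributedMemory:
    def __init__(self, n_bits=256, n_locations=2000, radius=111, seed=0):
        rng = np.random.default_rng(seed)
        self.addresses = rng.integers(0, 2, size=(n_locations, n_bits), dtype=np.int8)
        self.counters = np.zeros((n_locations, n_bits), dtype=np.int32)
        self.radius = radius

    def _active(self, address):
        # Hard locations within the Hamming radius of the query address.
        distance = np.count_nonzero(self.addresses != address, axis=1)
        return distance <= self.radius

    def write(self, address, data):
        bipolar = 2 * data.astype(np.int32) - 1          # map {0,1} -> {-1,+1}
        self.counters[self._active(address)] += bipolar

    def read(self, address):
        summed = self.counters[self._active(address)].sum(axis=0)
        return (summed > 0).astype(np.int8)

# Usage: store a pattern auto-associatively and recall it from a noisy cue.
rng = np.random.default_rng(1)
sdm = SparseDistributedMemory()
pattern = rng.integers(0, 2, size=256, dtype=np.int8)
sdm.write(pattern, pattern)
cue = pattern.copy()
cue[:20] ^= 1                                            # flip 20 bits of noise
recalled = sdm.read(cue)
```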
NASA Technical Reports Server (NTRS)
Hahne, David E.; Glaab, Louis J.
1999-01-01
An investigation was performed to evaluate leading- and trailing-edge flap deflections for optimal aerodynamic performance of a High-Speed Civil Transport concept during takeoff and approach-to-landing conditions. The configuration used for this study was designed by the Douglas Aircraft Company during the 1970s. A 0.1-scale model of this configuration was tested in the Langley 30- by 60-Foot Tunnel with both the original leading-edge flap system and a new leading-edge flap system, which was designed with modern computational flow analysis and optimization tools. Leading- and trailing-edge flap deflections were generated for the original and modified leading-edge flap systems with the computational flow analysis and optimization tools. Although wind tunnel data indicated improvements in aerodynamic performance for the analytically derived flap deflections for both leading-edge flap systems, perturbations of the analytically derived leading-edge flap deflections yielded significant additional improvements in aerodynamic performance. In addition to the aerodynamic performance optimization testing, stability and control data were also obtained. An evaluation of the crosswind landing capability of the aircraft configuration revealed that insufficient lateral control existed as a result of high levels of lateral stability. Deflection of the leading- and trailing-edge flaps improved the crosswind landing capability of the vehicle considerably; however, additional improvements are required.
Flynn, Allen J; Boisvert, Peter; Gittlen, Nate; Gross, Colin; Iott, Brad; Lagoze, Carl; Meng, George; Friedman, Charles P
2018-01-01
The Knowledge Grid (KGrid) is a research and development program toward infrastructure capable of greatly decreasing latency between the publication of new biomedical knowledge and its widespread uptake into practice. KGrid comprises digital knowledge objects, an online Library to store them, and an Activator that uses them to provide Knowledge-as-a-Service (KaaS). KGrid's Activator enables computable biomedical knowledge, held in knowledge objects, to be rapidly deployed at Internet-scale in cloud computing environments for improved health. Here we present the Activator, its system architecture and primary functions.
The investigation of tethered satellite system dynamics
NASA Technical Reports Server (NTRS)
Lorenzini, E.
1984-01-01
Tethered satellite system (TSS) dynamics were studied. The dynamic response of the TSS during the entire stationkeeping phase for the first electrodynamic mission was investigated. An out-of-plane swing amplitude and the tether's bowing were observed. The dynamics of the slack tether were studied, and the computer code SLACK2 was improved in both capabilities and computational speed. The hazard related to tether breakage or plasma contactor failure was examined. Preliminary values of the potential difference after the failure and of the drop of the electric field along the tether axis have been computed. An update of the satellite rotational dynamics model was initiated.
A New Look at NASA: Strategic Research In Information Technology
NASA Technical Reports Server (NTRS)
Alfano, David; Tu, Eugene (Technical Monitor)
2002-01-01
This viewgraph presentation provides information on research undertaken by NASA to facilitate the development of information technologies. Specific ideas covered here include: 1) Bio/nano technologies: biomolecular and nanoscale systems and tools for assembly and computing; 2) Evolvable hardware: autonomous self-improving, self-repairing hardware and software for survivable space systems in extreme environments; 3) High Confidence Software Technologies: formal methods, high-assurance software design, and program synthesis; 4) Intelligent Controls and Diagnostics: Next generation machine learning, adaptive control, and health management technologies; 5) Revolutionary computing: New computational models to increase capability and robustness to enable future NASA space missions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clemens, Noel
This project was a combined computational and experimental effort to improve predictive capability for boundary layer flashback of premixed swirl flames relevant to gas-turbine power plants operating with high-hydrogen-content fuels. During the course of this project, significant progress in modeling was made on four major fronts: 1) use of direct numerical simulation of turbulent flames to understand the coupling between the flame and the turbulent boundary layer; 2) improved modeling capability for flame propagation in stratified pre-mixtures; 3) improved portability of computer codes using the OpenFOAM platform to facilitate transfer to industry and other researchers; and 4) application of LES to flashback in swirl combustors, and a detailed assessment of its capabilities and limitations for predictive purposes. A major component of the project was an experimental program that focused on developing a rich experimental database of boundary layer flashback in swirl flames. Both methane and high-hydrogen fuels, including effects of elevated pressure (1 to 5 atm), were explored. For this project, a new model swirl combustor was developed. Kilohertz-rate stereoscopic PIV and chemiluminescence imaging were used to investigate the flame propagation dynamics. In addition to the planar measurements, a technique capable of detecting the instantaneous, time-resolved 3D flame front topography was developed and applied successfully to investigate the flow-flame interaction. The UT measurements and legacy data were used in a hierarchical validation approach where flows with increasingly complex physics were used for validation. First, component models were validated with DNS and literature data in simplified configurations, and this was followed by validation with the UT 1-atm flashback cases, and then the UT high-pressure flashback cases. The new models and portable code represent a major improvement over what was available before this project was initiated.
Overview Electrotactile Feedback for Enhancing Human Computer Interface
NASA Astrophysics Data System (ADS)
Pamungkas, Daniel S.; Caesarendra, Wahyu
2018-04-01
To achieve effective interaction between a human and a computing device or machine, adequate feedback from the computing device or machine is required. Recently, haptic feedback is increasingly being utilised to improve the interactivity of the Human Computer Interface (HCI). Most existing haptic feedback enhancements aim at producing forces or vibrations to enrich the user’s interactive experience. However, these force- and/or vibration-actuated haptic feedback systems can be bulky and uncomfortable to wear, and are only capable of delivering a limited amount of information to the user, which can limit both their effectiveness and the applications they can be applied to. To address this deficiency, electrotactile feedback is used. This involves delivering haptic sensations to the user by electrically stimulating nerves in the skin via electrodes placed on the surface of the skin. This paper presents a review and explores the capability of electrotactile feedback for HCI applications. In addition, the sensory receptors within the skin that sense tactile stimuli and electric currents are described, along with several factors that influence the transmission of electrical signals to the brain via the skin.
Actions for productivity improvement in crew training
NASA Technical Reports Server (NTRS)
Miller, G. E.
1985-01-01
Improvement of the productivity of astronaut crew instructors in the Space Shuttle program and beyond is proposed. It is suggested that instructor certification plans should be established to shorten the time required for trainers to develop their skills and improve their ability to convey those skills. Members of the training cadre should be thoroughly cross-trained in their tasks. This provides better understanding of the overall task and greater flexibility in instructor utilization. Improved facility access will give instructors the benefit of practical application experience. Former crews should be integrated into the training of upcoming crews to bridge some of the gap between simulated conditions and the real world. The information contained in lengthy and complex training manuals can be presented more clearly and efficiently as computer lessons. The illustration, animation, and interactive capabilities of the computer combine to provide an effective means of explanation.
Transferring ecosystem simulation codes to supercomputers
NASA Technical Reports Server (NTRS)
Skiles, J. W.; Schulbach, C. H.
1995-01-01
Many ecosystem simulation computer codes have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Supercomputing platforms (both parallel and distributed systems) have been largely unused, however, because of the perceived difficulty in accessing and using the machines. Also, significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers must be considered. We have transferred a grassland simulation model (developed on a VAX) to a Cray Y-MP/C90. We describe porting the model to the Cray and the changes we made to exploit the parallelism in the application and improve code execution. The Cray executed the model 30 times faster than the VAX and 10 times faster than a Unix workstation. We achieved an additional speedup of 30 percent by using the compiler's vectorizing and in-lining capabilities. The code runs at only about 5 percent of the Cray's peak speed because it ineffectively uses the vector and parallel processing capabilities of the Cray. We expect that by restructuring the code, it could execute an additional six to ten times faster.
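The kind of restructuring that a vectorizing compiler exploits can be illustrated in miniature; the sketch below uses Python/NumPy rather than the Cray Fortran of the original study, so it demonstrates only the general loop-versus-whole-array principle, with illustrative array sizes.

```python
# Illustrative only: replacing an element-by-element loop with a whole-array
# (vector) operation -- the same principle a vectorizing compiler applies.
import numpy as np
import time

n = 1_000_000
biomass = np.random.rand(n)
growth_rate = np.random.rand(n)

# Scalar-style loop: one element per iteration, poor fit for vector hardware.
t0 = time.perf_counter()
out_loop = np.empty(n)
for i in range(n):
    out_loop[i] = biomass[i] * (1.0 + growth_rate[i])
loop_time = time.perf_counter() - t0

# Vectorized form: the whole array is processed by a single array operation.
t0 = time.perf_counter()
out_vec = biomass * (1.0 + growth_rate)
vec_time = time.perf_counter() - t0

assert np.allclose(out_loop, out_vec)
print(f"loop {loop_time:.3f}s, vectorized {vec_time:.3f}s")
```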
NASA's hypersonic fluid and thermal physics program (Aerothermodynamics)
NASA Technical Reports Server (NTRS)
Graves, R. A.; Hunt, J. L.
1985-01-01
This survey paper gives an overview of NASA's hypersonic fluid and thermal physics program (recently renamed aerothermodynamics). The purpose is to present the elements of, example results from, and rationale and projection for this program. The program is based on improving the fundamental understanding of aerodynamic and aerothermodynamic flow phenomena over hypersonic vehicles in the continuum, transitional, and rarefied flow regimes. Vehicle design capabilities, computational fluid dynamics, computational chemistry, turbulence modeling, aerothermal loads, orbiter flight data analysis, orbiter experiments, laser photodiagnostics, and facilities are discussed.
2007-06-01
management issues he encountered ruled out the Expanion as a viable option for thin-client computing in the Navy. An improvement in thin-client… 44 Requirements to capabilities (2004). Retrieved April 29, 2007, from Vision Presence Power: A Program Guide to the U.S. Navy – 2004 Edition, p. 128. Web site: http://www.chinfo.navy.mil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laurie, Carol
2017-02-01
This book takes readers inside the places where daily discoveries shape the next generation of wind power systems. Energy Department laboratory facilities span the United States and offer wind research capabilities to meet industry needs. The facilities described in this book make it possible for industry players to increase reliability, improve efficiency, and reduce the cost of wind energy -- one discovery at a time. Whether you require blade testing or resource characterization, grid integration or high-performance computing, Department of Energy laboratory facilities offer a variety of capabilities to meet your wind research needs.
Using Multi-Core Systems for Rover Autonomy
NASA Technical Reports Server (NTRS)
Clement, Brad; Estlin, Tara; Bornstein, Benjamin; Springer, Paul; Anderson, Robert C.
2010-01-01
Task Objectives are: (1) Develop and demonstrate key capabilities for rover long-range science operations using multi-core computing: (a) adapt three rover technologies to execute on a SOA multi-core processor, (b) illustrate performance improvements achieved, (c) demonstrate adapted capabilities with rover hardware; (2) Target three high-level autonomy technologies: (a) two for onboard data analysis, (b) one for onboard command sequencing/planning; (3) Technologies identified as enabling for future missions; (4) Benefits will be measured along several metrics: (a) execution time / power requirements, (b) number of data products processed per unit time, (c) solution quality.
2017-03-01
determine the optimum required operational capability of the unmanned aerial vehicles to support Korean rear area operations. We use Map Aware Non… area operations. Through further experimentation and analysis, we were able to find the optimum characteristics of an improved unmanned aerial… operations. We use Map Aware Non-Uniform Automata, an agent-based simulation software platform for computational experiments. The study models a scenario
Bioinformatics approaches to predict target genes from transcription factor binding data.
Essebier, Alexandra; Lamprecht, Marnie; Piper, Michael; Bodén, Mikael
2017-12-01
Transcription factors regulate gene expression and play an essential role in development by maintaining proliferative states, driving cellular differentiation and determining cell fate. Transcription factors are capable of regulating multiple genes over potentially long distances, making target gene identification challenging. Currently available experimental approaches to detect distal interactions have multiple weaknesses that have motivated the development of computational approaches. Although an improvement over experimental approaches, existing computational approaches are still limited in their application, with different weaknesses depending on the approach. Here, we review computational approaches with a focus on data dependency, cell type specificity and usability. With the aim of identifying transcription factor target genes, we apply available approaches to typical transcription factor experimental datasets. We show that approaches are not always capable of annotating all transcription factor binding sites; binding sites should be treated disparately; and a combination of approaches can increase the biological relevance of the set of genes identified as targets. Copyright © 2017 Elsevier Inc. All rights reserved.
Unity Power Factor Operated PFC Converter Based Power Supply for Computers
NASA Astrophysics Data System (ADS)
Singh, Shikha; Singh, Bhim; Bhuvaneswari, G.; Bist, Vashist
2017-11-01
Power Supplies (PSs) employed in personal computers pollute the single-phase ac mains by drawing distorted current at a substandard Power Factor (PF). The harmonic distortion of the supply current in these personal computers is observed to be 75% to 90%, with a very high Crest Factor (CF), which escalates losses in the distribution system. To find a tangible solution to these issues, a non-isolated PFC converter is employed at the input of the isolated converter; it is capable of improving the input power quality apart from regulating the dc voltage at its output. This output feeds the isolated stage, which yields completely isolated and stiffly regulated multiple output voltages, the prime requirement of a computer PS. The operation of the proposed PS is evaluated under various operating conditions and the results show improved performance, depicting nearly unity PF and low input current harmonics. The prototype of this PS is developed in a laboratory environment and test results are recorded which corroborate the power quality improvement observed in simulation results under various operating conditions.
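For readers unfamiliar with the terms, the relationship between harmonic distortion, crest factor, and true power factor can be sketched numerically; the waveform and harmonic amplitudes below are illustrative assumptions, not measurements from the proposed converter.

```python
# Small numeric illustration: true PF = displacement factor x distortion factor,
# where the distortion factor is 1 / sqrt(1 + THD^2); the crest factor is the
# peak-to-RMS ratio of the current. Harmonic amplitudes are made up.
import numpy as np

t = np.linspace(0.0, 0.02, 2000, endpoint=False)         # one 50 Hz cycle
fundamental = np.sin(2 * np.pi * 50 * t)                  # amplitude 1 by construction
# Peaked current typical of an uncorrected rectifier front end (illustrative mix).
current = fundamental + 0.8 * np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 250 * t)

rms = np.sqrt(np.mean(current ** 2))
rms_fundamental = 1.0 / np.sqrt(2.0)                      # RMS of the unit-amplitude fundamental
thd = np.sqrt(rms ** 2 - rms_fundamental ** 2) / rms_fundamental
crest_factor = np.max(np.abs(current)) / rms

displacement_factor = 1.0                                  # assume current in phase with voltage
true_power_factor = displacement_factor / np.sqrt(1.0 + thd ** 2)
print(f"THD = {thd:.2f}, crest factor = {crest_factor:.2f}, PF = {true_power_factor:.2f}")
```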
Brain-computer interfaces in neurological rehabilitation.
Daly, Janis J; Wolpaw, Jonathan R
2008-11-01
Recent advances in the analysis of brain signals, training patients to control these signals, and improved computing capabilities have enabled people with severe motor disabilities to use their brain signals for communication and control of objects in their environment, thereby bypassing their impaired neuromuscular system. Non-invasive, electroencephalogram (EEG)-based brain-computer interface (BCI) technologies can be used to control a computer cursor or a limb orthosis, for word processing and accessing the internet, and for other functions such as environmental control or entertainment. By re-establishing some independence, BCI technologies can substantially improve the lives of people with devastating neurological disorders such as advanced amyotrophic lateral sclerosis. BCI technology might also restore more effective motor control to people after stroke or other traumatic brain disorders by helping to guide activity-dependent brain plasticity: EEG brain signals indicate to the patient the current state of brain activity and enable the user to subsequently lower abnormal activity. Alternatively, by using brain signals to supplement impaired muscle control, BCIs might increase the efficacy of a rehabilitation protocol and thus improve muscle control for the patient.
Product definition data interface
NASA Technical Reports Server (NTRS)
Birchfield, B.; Downey, P.
1984-01-01
The development and application of advanced Computer Aided Design/Computer Aided Manufacturing (CAD/CAM) technology in the aerospace industry is discussed. New CAD/CAM capabilities provide the engineer and production worker with tools to produce better products and significantly improve productivity. This technology is expanding in all phases of engineering and manufacturing, with large potential for improvements in productivity. The systematic integration of CAD and CAM to ensure maximum utility throughout the U.S. Aerospace Industry, its large community of supporting suppliers, and the Department of Defense aircraft overhaul and repair facilities is outlined. The need for a framework for the exchange of digital product definition data, which serves the function of the conventional engineering drawing, is emphasized.
A Survey of Methods for Analyzing and Improving GPU Energy Efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mittal, Sparsh; Vetter, Jeffrey S
2014-01-01
Recent years have witnessed a phenomenal growth in the computational capabilities and applications of GPUs. However, this trend has also led to a dramatic increase in their power consumption. This paper surveys research works on analyzing and improving energy efficiency of GPUs. It also provides a classification of these techniques on the basis of their main research idea. Further, it attempts to synthesize research works which compare energy efficiency of GPUs with other computing systems, e.g. FPGAs and CPUs. The aim of this survey is to provide researchers with knowledge of state-of-the-art in GPU power management and motivate them to architect highly energy-efficient GPUs of tomorrow.
Advances in U.S. Land Imaging Capabilities
NASA Astrophysics Data System (ADS)
Stryker, T. S.
2017-12-01
Advancements in Earth observations, cloud computing, and data science are improving everyday life. Information from land-imaging satellites, such as the U.S. Landsat system, helps us to better understand the changing landscapes where we live, work, and play. This understanding builds capacity for improved decision-making about our lands, waters, and resources, driving economic growth, protecting lives and property, and safeguarding the environment. The USGS is fostering the use of land remote sensing technology to meet local, national, and global challenges. A key dimension to meeting these challenges is the full, free, and open provision of land remote sensing observations for both public and private sector applications. To achieve maximum impact, these data must also be easily discoverable, accessible, and usable. The presenter will describe the USGS Land Remote Sensing Program's current capabilities and future plans to collect and deliver land remote sensing information for societal benefit. He will discuss these capabilities in the context of national plans and policies, domestic partnerships, and international collaboration. The presenter will conclude with examples of how Landsat data is being used on a daily basis to improve lives and livelihoods.
A Conceptual Design Model for CBT Development: A NATO Case Study
ERIC Educational Resources Information Center
Kok, Ayse
2014-01-01
CBT (computer-based training) can benefit from modern multimedia tools combined with network capabilities to overcome the limitations of traditional education. The objective of this paper is focused on CBT development to improve strategic decision-making with regard to the air command and control system for NATO staff in a virtual environment. A conceptual design for…
Meeting the needs of an ever-demanding market.
Rigby, Richard
2002-04-01
Balancing cost and performance in packaging is critical. This article outlines techniques to assist in this whilst delivering added value and product differentiation. The techniques include a rigorous statistical process capable of delivering cost reduction and improved quality and a computer modelling process that can save time when validating new packaging options.
Capabilities and Advantages of Cloud Computing in the Implementation of Electronic Health Record.
Ahmadi, Maryam; Aslani, Nasim
2018-01-01
With regard to the high cost of the Electronic Health Record (EHR), the use of new technologies, in particular cloud computing, has increased in recent years. The purpose of this study was to systematically review the studies conducted in the field of cloud computing. The present study was a systematic review conducted in 2017. Searches were performed in the Scopus, Web of Science, IEEE, PubMed and Google Scholar databases using combinations of keywords. From the 431 articles initially retrieved, 27 articles were selected for review after applying the inclusion and exclusion criteria. Data gathering was done with a self-made checklist and analyzed by the content analysis method. The findings of this study showed that cloud computing is a very widespread technology. It covers domains such as cost, security and privacy, scalability, mutual performance and interoperability, implementation platform and independence of cloud computing, search and exploration ability, error reduction and quality improvement, and structure, flexibility and sharing ability, and it can be effective for the electronic health record. According to the findings of the present study, the higher capabilities of cloud computing are useful in implementing EHR in a variety of contexts. It also provides wide opportunities for managers, analysts and providers of health information systems. Considering the advantages and domains of cloud computing in the establishment of EHR, it is recommended to use this technology.
Capabilities and Advantages of Cloud Computing in the Implementation of Electronic Health Record
Ahmadi, Maryam; Aslani, Nasim
2018-01-01
Background: With regard to the high cost of the Electronic Health Record (EHR), the use of new technologies, in particular cloud computing, has increased in recent years. The purpose of this study was to systematically review the studies conducted in the field of cloud computing. Methods: The present study was a systematic review conducted in 2017. Searches were performed in the Scopus, Web of Science, IEEE, PubMed and Google Scholar databases using combinations of keywords. From the 431 articles initially retrieved, 27 articles were selected for review after applying the inclusion and exclusion criteria. Data gathering was done with a self-made checklist and analyzed by the content analysis method. Results: The findings of this study showed that cloud computing is a very widespread technology. It covers domains such as cost, security and privacy, scalability, mutual performance and interoperability, implementation platform and independence of cloud computing, search and exploration ability, error reduction and quality improvement, and structure, flexibility and sharing ability, and it can be effective for the electronic health record. Conclusion: According to the findings of the present study, the higher capabilities of cloud computing are useful in implementing EHR in a variety of contexts. It also provides wide opportunities for managers, analysts and providers of health information systems. Considering the advantages and domains of cloud computing in the establishment of EHR, it is recommended to use this technology. PMID:29719309
Development of the NASA/FLAGRO computer program for analysis of airframe structures
NASA Technical Reports Server (NTRS)
Forman, R. G.; Shivakumar, V.; Newman, J. C., Jr.
1994-01-01
The NASA/FLAGRO (NASGRO) computer program was developed for fracture control analysis of space hardware and is currently the standard computer code in NASA, the U.S. Air Force, and the European Space Agency (ESA) for this purpose. The significant attributes of the NASGRO program are the numerous crack case solutions, the large materials file, the improved growth rate equation based on crack closure theory, and the user-friendly promptive input features. In support of the National Aging Aircraft Research Program (NAARP), NASGRO is being further developed to provide advanced state-of-the-art capability for damage tolerance and crack growth analysis of aircraft structural problems, including mechanical systems and engines. The project currently involves a cooperative development effort by NASA, the FAA, and ESA. The primary tasks underway are the incorporation of advanced methodology for crack growth rate retardation resulting from spectrum loading and improved analysis for determining crack instability. Also, the current weight function solutions in NASGRO for nonlinear stress gradient problems are being extended to more crack cases, and the 2-d boundary integral routine for stress analysis and stress-intensity factor solutions is being extended to 3-d problems. Lastly, effort is underway to enhance the program to operate on personal computers and workstations in a Windows environment. Because of the increasing and already wide usage of NASGRO, the code offers an excellent mechanism for technology transfer for new fatigue and fracture mechanics capabilities developed within NAARP.
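As background to the growth-rate modeling mentioned above, a deliberately simplified cycle-integration sketch is shown below using a basic Paris-type law; NASGRO's actual equation adds crack-closure, threshold, and instability terms that are omitted here, and the constants and geometry factor are illustrative assumptions only.

```python
# Simplified crack-growth integration with a basic Paris law:
# da/dN = C * (dK)^m, with dK = Y * dS * sqrt(pi * a).
# Not the NASGRO equation; C, m and Y are illustrative values.
import math

def cycles_to_grow(a_initial, a_final, delta_stress, C=1.0e-11, m=3.0, Y=1.12):
    a, cycles, step = a_initial, 0, 1000          # integrate in blocks of 1000 cycles
    while a < a_final:
        delta_k = Y * delta_stress * math.sqrt(math.pi * a)   # MPa*sqrt(m)
        da_dn = C * delta_k ** m                               # m/cycle
        a += da_dn * step
        cycles += step
    return cycles

# Usage: grow a 1 mm crack to 10 mm under a 100 MPa stress range.
print(cycles_to_grow(a_initial=0.001, a_final=0.010, delta_stress=100.0))
```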
Opportunistic Computing with Lobster: Lessons Learned from Scaling up to 25k Non-Dedicated Cores
NASA Astrophysics Data System (ADS)
Wolf, Matthias; Woodard, Anna; Li, Wenzhao; Hurtado Anampa, Kenyi; Yannakopoulos, Anna; Tovar, Benjamin; Donnelly, Patrick; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas
2017-10-01
We previously described Lobster, a workflow management tool for exploiting volatile opportunistic computing resources for computation in HEP. We will discuss the various challenges that have been encountered while scaling up the simultaneous CPU core utilization and the software improvements required to overcome these challenges. Categories: Workflows can now be divided into categories based on their required system resources. This allows the batch queueing system to optimize assignment of tasks to nodes with the appropriate capabilities. Within each category, limits can be specified for the number of running jobs to regulate the utilization of communication bandwidth. System resource specifications for a task category can now be modified while a project is running, avoiding the need to restart the project if resource requirements differ from the initial estimates. Lobster now implements time limits on each task category to voluntarily terminate tasks. This allows partially completed work to be recovered. Workflow dependency specification: One workflow often requires data from other workflows as input. Rather than waiting for earlier workflows to be completed before beginning later ones, Lobster now allows dependent tasks to begin as soon as sufficient input data has accumulated. Resource monitoring: Lobster utilizes a new capability in Work Queue to monitor the system resources each task requires in order to identify bottlenecks and optimally assign tasks. The capability of the Lobster opportunistic workflow management system for HEP computation has been significantly increased. We have demonstrated efficient utilization of 25 000 non-dedicated cores and achieved a data input rate of 30 Gb/s and an output rate of 500 GB/h. This has required new capabilities in task categorization, workflow dependency specification, and resource monitoring.
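The task-categorization idea can be sketched generically; the snippet below is not Lobster's or Work Queue's actual API, just an illustration of matching category resource declarations against node capabilities, with made-up category names and limits.

```python
# Generic sketch: each workflow category declares resource needs and a cap on
# running jobs, and tasks are only dispatched to nodes that can satisfy them.
def can_run(task_category, node):
    needs = task_category["resources"]
    return all(node.get(key, 0) >= value for key, value in needs.items())

categories = {
    "simulation": {"resources": {"cores": 4, "memory_gb": 8, "disk_gb": 20}, "max_running": 5000},
    "merge":      {"resources": {"cores": 1, "memory_gb": 2, "disk_gb": 50}, "max_running": 200},
}
node = {"cores": 8, "memory_gb": 16, "disk_gb": 40}

for name, category in categories.items():
    print(name, "fits" if can_run(category, node) else "does not fit")
```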
Development Of A Data Assimilation Capability For RAPID
NASA Astrophysics Data System (ADS)
Emery, C. M.; David, C. H.; Turmon, M.; Hobbs, J.; Allen, G. H.; Famiglietti, J. S.
2017-12-01
The global decline of in situ observations, together with the increasing ability to monitor surface water from space, motivates the creation of data assimilation algorithms that merge computer models and space-based observations to produce consistent estimates of terrestrial hydrology that fill the spatiotemporal gaps in observations. RAPID is a routing model based on the Muskingum method that is capable of estimating river streamflow over large scales with a relatively short computing time. This model only requires limited inputs: a reach-based river network, and lateral surface and subsurface flow into the rivers. The relatively simple model physics imply that RAPID simulations could be significantly improved by including a data assimilation capability. Here we present the early development of such a data assimilation approach in RAPID. Given the linear and matrix-based structure of the model, we chose to apply a direct Kalman filter, hence allowing for the preservation of high computational speed. We correct the simulated streamflows by assimilating streamflow observations, and our early results demonstrate the feasibility of the approach. Additionally, the use of in situ gauges at continental scales motivates the application of our new data assimilation scheme to altimetry measurements from existing (e.g. EnviSat, Jason 2) and upcoming satellite missions (e.g. SWOT), and ultimately its global application.
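A minimal sketch of the direct (linear) Kalman update described above is given below, assuming a small reach network with Gaussian errors; the matrix sizes, error statistics, and observed reaches are illustrative assumptions, not RAPID's configuration.

```python
# Direct Kalman filter update: correct simulated streamflows with a few
# observed reaches. Dimensions and covariances are illustrative.
import numpy as np

def kalman_update(x_forecast, P, H, y_obs, R):
    """x_forecast: simulated streamflow (n,); P: forecast error covariance (n, n);
    H: observation operator (m, n); y_obs: observed flows (m,); R: obs error cov (m, m)."""
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_analysis = x_forecast + K @ (y_obs - H @ x_forecast)
    P_analysis = (np.eye(len(x_forecast)) - K @ H) @ P
    return x_analysis, P_analysis

# Usage: 5 river reaches, observations available on reaches 1 and 3.
n = 5
x_f = np.array([120.0, 80.0, 60.0, 45.0, 30.0])           # m^3/s
P = np.diag([25.0, 16.0, 16.0, 9.0, 9.0])
H = np.zeros((2, n)); H[0, 1] = 1.0; H[1, 3] = 1.0
y = np.array([70.0, 50.0])
R = np.diag([4.0, 4.0])
x_a, P_a = kalman_update(x_f, P, H, y, R)
print(x_a)
```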
A knowledge-based approach to improving optimization techniques in system planning
NASA Technical Reports Server (NTRS)
Momoh, J. A.; Zhang, Z. Z.
1990-01-01
A knowledge-based (KB) approach to improving the mathematical programming techniques used in the system planning environment is presented. The KB system assists in selecting appropriate optimization algorithms, objective functions, constraints and parameters. The scheme is implemented by integrating symbolic computation of rules derived from operators' and planners' experience and is used with generalized optimization packages. The KB optimization software package is capable of improving the overall planning process, which includes correction of given violations. The method was demonstrated on a large-scale power system discussed in the paper.
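The flavor of such a rule base can be illustrated with a toy example; the rules and settings below are invented for illustration and are not the operator- and planner-derived rules used in the paper.

```python
# Toy rule base: symbolic rules map problem characteristics to an optimization
# method and solver settings. Rules and tolerances are illustrative only.
def select_method(problem):
    if problem.get("nonlinear_objective") and problem.get("discrete_variables"):
        return {"method": "branch-and-bound over NLP relaxations", "tolerance": 1e-3}
    if problem.get("nonlinear_objective"):
        return {"method": "sequential quadratic programming", "tolerance": 1e-6}
    if problem.get("discrete_variables"):
        return {"method": "mixed-integer linear programming", "tolerance": 1e-6}
    return {"method": "linear programming (simplex)", "tolerance": 1e-9}

# Usage: a planning problem with nonlinear losses and continuous variables.
print(select_method({"nonlinear_objective": True, "discrete_variables": False}))
```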
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vigil,Benny Manuel; Ballance, Robert; Haskell, Karen
Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale (ACES), a partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL). The primary Cielo compute platform is physically located at Los Alamos National Laboratory. This Cielo Computational Environment Usage Model documents the capabilities and the environment to be provided for the Q1 FY12 Level 2 Cielo Capability Computing (CCC) Platform Production Readiness Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, or Sandia National Laboratories, but also addresses the needs of users working in the unclassified environment. The Cielo Computational Environment Usage Model maps the provided capabilities to the tri-Lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment capabilities of the ASC community. A description of ACE requirements met, and those requirements that are not met, are included in each section of this document. The Cielo Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the tri-Lab community.
NASA Technical Reports Server (NTRS)
Meegan, C. A.; Fountain, W. F.; Berry, F. A., Jr.
1987-01-01
A system to rapidly digitize data from showers in nuclear emulsions is described. A TV camera views the emulsions through a microscope. The TV output is superimposed on the monitor of a minicomputer. The operator uses the computer's graphics capability to mark the positions of particle tracks. The coordinates of each track are stored on a disk. The computer then predicts the coordinates of each track through successive layers of emulsion. The operator, guided by the predictions, thus tracks and stores the development of the shower. The system provides a significant improvement over purely manual methods of recording shower development in nuclear emulsion stacks.
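The prediction step can be illustrated with a simple linear extrapolation of a track's marked positions to the next emulsion layer; the coordinates and layer spacing below are illustrative, not the system's actual geometry.

```python
# Linear extrapolation of a particle track to the next emulsion layer, given
# its marked positions in two previous layers. Purely illustrative geometry.
def predict_next(p1, p2, z_next):
    """p1, p2: (x, y, z) of the track in two successive layers; returns (x, y) at z_next."""
    (x1, y1, z1), (x2, y2, z2) = p1, p2
    t = (z_next - z2) / (z2 - z1)
    return (x2 + t * (x2 - x1), y2 + t * (y2 - y1))

# Usage: track marked at depths 0 and 50 micrometres, predicted at 100.
print(predict_next((10.0, 20.0, 0.0), (12.0, 23.0, 50.0), 100.0))
```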
Improving the Aircraft Design Process Using Web-Based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)
2000-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
Improving the Aircraft Design Process Using Web-based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.
2003-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
Computer-assisted learning in critical care: from ENIAC to HAL.
Tegtmeyer, K; Ibsen, L; Goldstein, B
2001-08-01
Computers are commonly used to serve many functions in today's modern intensive care unit. One of the most intriguing and perhaps most challenging applications of computers has been to attempt to improve medical education. With the introduction of the first computer, medical educators began looking for ways to incorporate its use into the modern curriculum. The cost and complexity of computers, which once limited their use, have consistently decreased since their introduction, making it increasingly feasible to incorporate computers into medical education. Simultaneously, the capabilities and capacities of computers have increased. Combining the computer with other modern digital technology has allowed the development of more intricate and realistic educational tools. The purpose of this article is to briefly describe the history and use of computers in medical education with special reference to critical care medicine. In addition, we will examine the role of computers in teaching and learning and discuss the types of interaction between the computer user and the computer.
High-Performance Computing Systems and Operations
NREL operates high-performance computing (HPC) systems dedicated to advancing energy efficiency and renewable energy technologies.
Computational Fluid Dynamics of Whole-Body Aircraft
NASA Astrophysics Data System (ADS)
Agarwal, Ramesh
1999-01-01
The current state of the art in computational aerodynamics for whole-body aircraft flowfield simulations is described. Recent advances in geometry modeling, surface and volume grid generation, and flow simulation algorithms have led to accurate flowfield predictions for increasingly complex and realistic configurations. As a result, computational aerodynamics has emerged as a crucial enabling technology for the design and development of flight vehicles. Examples illustrating the current capability for the prediction of transport and fighter aircraft flowfields are presented. Unfortunately, accurate modeling of turbulence remains a major difficulty in the analysis of viscosity-dominated flows. In the future, inverse design methods, multidisciplinary design optimization methods, artificial intelligence technology, and massively parallel computer technology will be incorporated into computational aerodynamics, opening up greater opportunities for improved product design at substantially reduced costs.
ICASE semiannual report, April 1 - September 30, 1989
NASA Technical Reports Server (NTRS)
1990-01-01
The Institute conducts unclassified basic research in applied mathematics, numerical analysis, and computer science in order to extend and improve problem-solving capabilities in science and engineering, particularly in aeronautics and space. The major categories of the current Institute for Computer Applications in Science and Engineering (ICASE) research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification problems, with emphasis on effective numerical methods; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers. ICASE reports are considered to be primarily preprints of manuscripts that have been submitted to appropriate research journals or that are to appear in conference proceedings.
MSFC crack growth analysis computer program, version 2 (users manual)
NASA Technical Reports Server (NTRS)
Creager, M.
1976-01-01
An updated version of the George C. Marshall Space Flight Center Crack Growth Analysis Program is described. The updated computer program has significantly expanded capabilities over the original one. This increased capability includes an extensive expansion of the library of stress intensity factors, plotting capability, increased design iteration capability, and the capability of performing proof test logic analysis. The technical approaches used within the computer program are presented, and the input and output formats and options are described. Details of the stress intensity equations, example data, and example problems are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dahlburg, Jill; Corones, James; Batchelor, Donald
Fusion is potentially an inexhaustible energy source whose exploitation requires a basic understanding of high-temperature plasmas. The development of a science-based predictive capability for fusion-relevant plasmas is a challenge central to fusion energy science, in which numerical modeling has played a vital role for more than four decades. A combination of the very wide range in temporal and spatial scales, extreme anisotropy, the importance of geometric detail, and the requirement of causality which makes it impossible to parallelize over time, makes this problem one of the most challenging in computational physics. Sophisticated computational models are under development for many individual features of magnetically confined plasmas and increases in the scope and reliability of feasible simulations have been enabled by increased scientific understanding and improvements in computer technology. However, full predictive modeling of fusion plasmas will require qualitative improvements and innovations to enable cross coupling of a wider variety of physical processes and to allow solution over a larger range of space and time scales. The exponential growth of computer speed, coupled with the high cost of large-scale experimental facilities, makes an integrated fusion simulation initiative a timely and cost-effective opportunity. Worldwide progress in laboratory fusion experiments provides the basis for a recent FESAC recommendation to proceed with a burning plasma experiment (see FESAC Review of Burning Plasma Physics Report, September 2001). Such an experiment, at the frontier of the physics of complex systems, would be a huge step in establishing the potential of magnetic fusion energy to contribute to the world’s energy security. An integrated simulation capability would dramatically enhance the utilization of such a facility and lead to optimization of toroidal fusion plasmas in general. This science-based predictive capability, which was cited in the FESAC integrated planning document (IPPA, 2000), represents a significant opportunity for the DOE Office of Science to further the understanding of fusion plasmas to a level unparalleled worldwide.
Multidimensional Environmental Data Resource Brokering on Computational Grids and Scientific Clouds
NASA Astrophysics Data System (ADS)
Montella, Raffaele; Giunta, Giulio; Laccetti, Giuliano
Grid computing has evolved widely over the past years, and its capabilities have found their way even into business products and are no longer relegated to scientific applications. Today, grid computing technology is not restricted to a set of specific grid open source or industrial products, but rather it is comprised of a set of capabilities virtually within any kind of software to create shared and highly collaborative production environments. These environments are focused on computational (workload) capabilities and the integration of information (data) into those computational capabilities. An active grid computing application field is the full virtualization of scientific instruments in order to increase their availability and decrease operational and maintenance costs. Computational and information grids allow real-world objects to be managed in a service-oriented way using industrial, world-spread standards.
A shock wave capability for the improved Two-Dimensional Kinetics (TDK) computer program
NASA Technical Reports Server (NTRS)
Nickerson, G. R.; Dang, L. D.
1984-01-01
The Two Dimensional Kinetics (TDK) computer program is a primary tool in applying the JANNAF liquid rocket engine performance prediction procedures. The purpose of this contract has been to improve the TDK computer program so that it can be applied to advanced rocket engine designs. In particular, future orbit transfer vehicles (OTV) will require rocket engines that operate at high expansion ratio, i.e., in excess of 200:1. Because only a limited length is available in the space shuttle bay, it is possible that OTV nozzles will be designed with both relatively short length and high expansion ratio. In this case, a shock wave may be present in the flow. The TDK computer program was modified to include the simulation of shock waves in the supersonic nozzle flow field. The shocks induced by the wall contour can produce strong perturbations of the flow, affecting downstream conditions which need to be considered for thrust chamber performance calculations.
Computational Simulations of the NASA Langley HyMETS Arc-Jet Facility
NASA Technical Reports Server (NTRS)
Brune, A. J.; Bruce, W. E., III; Glass, D. E.; Splinter, S. C.
2017-01-01
The Hypersonic Materials Environmental Test System (HyMETS) arc-jet facility located at the NASA Langley Research Center in Hampton, Virginia, is primarily used for the research, development, and evaluation of high-temperature thermal protection systems for hypersonic vehicles and reentry systems. In order to improve testing capabilities and knowledge of the test article environment, an effort is underway to computationally simulate the flow-field using computational fluid dynamics (CFD). A detailed three-dimensional model of the arc-jet nozzle and free-jet portion of the flow-field has been developed and compared to calibration probe Pitot pressure and stagnation-point heat flux for three test conditions at low, medium, and high enthalpy. The CFD model takes into account uniform pressure and non-uniform enthalpy profiles at the nozzle inlet as well as catalytic recombination efficiency effects at the probe surface. Comparing the CFD results and test data indicates an effectively fully-catalytic copper surface on the heat flux probe of about 10% efficiency and a 2-3 kPa pressure drop from the arc heater bore, where the pressure is measured, to the plenum section, prior to the nozzle. With these assumptions, the CFD results are well within the uncertainty of the stagnation pressure and heat flux measurements. The conditions at the nozzle exit were also compared with radial and axial velocimetry. This simulation capability will be used to evaluate various three-dimensional models that are tested in the HyMETS facility. An end-to-end aerothermal and thermal simulation of HyMETS test articles will follow this work to provide a better understanding of the test environment, test results, and to aid in test planning. Additional flow-field diagnostic measurements will also be considered to improve the modeling capability.
Nuclear Engine System Simulation (NESS). Version 2.0: Program user's guide
NASA Technical Reports Server (NTRS)
Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman
1993-01-01
This Program User's Guide discusses the Nuclear Thermal Propulsion (NTP) engine system design features and capabilities modeled in the Nuclear Engine System Simulation (NESS): Version 2.0 program (referred to as NESS throughout the remainder of this document), as well as its operation. NESS was upgraded to include many new modeling capabilities not available in the original version delivered to NASA LeRC in Dec. 1991. NESS's new features include the following: (1) an improved input format; (2) an advanced solid-core NERVA-type reactor system model (ENABLER 2); (3) a bleed-cycle engine system option; (4) an axial-turbopump design option; (5) an automated pump-out turbopump assembly sizing option; (6) an off-design gas generator engine cycle design option; (7) updated hydrogen properties; (8) an improved output format; and (9) personal computer operation capability. Sample design cases are presented in the user's guide that demonstrate many of the new features associated with this upgraded version of NESS, as well as design modeling features associated with the original version of NESS.
Robot computer problem solving system
NASA Technical Reports Server (NTRS)
Merriam, E. W.; Becker, J. D.
1973-01-01
A robot computer problem solving system which represents a robot exploration vehicle in a simulated Mars environment is described. The model exhibits changes and improvements made on a previously designed robot in a city environment. The Martian environment is modeled in Cartesian coordinates; objects are scattered about a plane; arbitrary restrictions on the robot's vision have been removed; and the robot's path contains arbitrary curves. New environmental features, particularly the visual occlusion of objects by other objects, were added to the model. Two different algorithms were developed for computing occlusion. Movement and vision capabilities of the robot were established in the Mars environment, using a LISP/FORTRAN interface for computational efficiency. The graphical display program was redesigned to reflect the change to the Mars-like environment.
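One simple way to compute occlusion, shown below as an illustration rather than either of the report's two algorithms, is to test whether the line of sight from the robot to a target passes through a circular obstacle.

```python
# Illustrative occlusion test: a target is occluded if the sight line from the
# robot to the target passes through a circular obstacle on the plane.
import math

def occluded(robot, target, obstacle_center, obstacle_radius):
    rx, ry = robot; tx, ty = target; cx, cy = obstacle_center
    dx, dy = tx - rx, ty - ry
    length_sq = dx * dx + dy * dy
    if length_sq == 0.0:
        return False
    # Parameter of the closest point on the sight segment to the obstacle centre.
    t = max(0.0, min(1.0, ((cx - rx) * dx + (cy - ry) * dy) / length_sq))
    closest = (rx + t * dx, ry + t * dy)
    return math.hypot(closest[0] - cx, closest[1] - cy) < obstacle_radius

# Usage: a rock of radius 1.0 sits between the rover and a target point.
print(occluded((0.0, 0.0), (10.0, 0.0), (5.0, 0.5), 1.0))   # True
```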
ERIC Educational Resources Information Center
Liu, Chen-Chung; Kao, L.-C.
2007-01-01
One-to-one computing environments change and improve classroom dynamics as individual students can bring handheld devices fitted with wireless communication capabilities into the classrooms. However, the screens of handheld devices, being designed for individual-user mobile application, limit promotion of interaction among groups of learners. This…
NASA Technical Reports Server (NTRS)
Vos, R. G.; Straayer, J. W.
1975-01-01
BOPACE 3-D is a finite element computer program that provides a general family of three-dimensional isoparametric solid elements and includes a new algorithm for improving the efficiency of the elastic-plastic-creep solution procedure. Theoretical, user, and programmer oriented sections are presented to describe the program.
NASA Technical Reports Server (NTRS)
Berger, P. E.; Thornton, E. A.
1976-01-01
The APAS program, a multistation structural synthesis procedure developed to evaluate material, geometry, and configuration against the various design criteria usually considered for the primary structure of transport aircraft, is described and evaluated. Recommendations to improve accuracy and extend the capabilities of the APAS program are given. Flow diagrams are included.
Methods of treating complex space vehicle geometry for charged particle radiation transport
NASA Technical Reports Server (NTRS)
Hill, C. W.
1973-01-01
Current methods of treating complex geometry models for space radiation transport calculations are reviewed. The geometric techniques used in three computer codes are outlined. Evaluations of geometric capability and speed are provided for these codes. Although no code development work is included, several suggestions for significantly improving complex geometry codes are offered.
ERIC Educational Resources Information Center
Pusey, Portia; Sadera, William A.
2012-01-01
In teacher education programs, preservice teachers learn about strategies to appropriately integrate computer-related and Internet-capable technologies into instructional settings to improve student learning. Many presume that preservice teachers have the knowledge to competently model and teach issues of safety when working with these devices as…
NASA Technical Reports Server (NTRS)
Knezovich, F. M.
1976-01-01
A modular, structured system of computer programs is presented that utilizes earth and ocean dynamical data keyed to finitely defined parameters. The model is an assemblage of mathematical algorithms with an inherent capability of maturation with progressive improvements in observational data frequencies, accuracies, and scopes. The EOM in its present state is a first-order approach to a geophysical model of the earth's dynamics.
Advanced capabilities for materials modelling with Quantum ESPRESSO
NASA Astrophysics Data System (ADS)
Giannozzi, P.; Andreussi, O.; Brumme, T.; Bunau, O.; Buongiorno Nardelli, M.; Calandra, M.; Car, R.; Cavazzoni, C.; Ceresoli, D.; Cococcioni, M.; Colonna, N.; Carnimeo, I.; Dal Corso, A.; de Gironcoli, S.; Delugas, P.; DiStasio, R. A., Jr.; Ferretti, A.; Floris, A.; Fratesi, G.; Fugallo, G.; Gebauer, R.; Gerstmann, U.; Giustino, F.; Gorni, T.; Jia, J.; Kawamura, M.; Ko, H.-Y.; Kokalj, A.; Küçükbenli, E.; Lazzeri, M.; Marsili, M.; Marzari, N.; Mauri, F.; Nguyen, N. L.; Nguyen, H.-V.; Otero-de-la-Roza, A.; Paulatto, L.; Poncé, S.; Rocca, D.; Sabatini, R.; Santra, B.; Schlipf, M.; Seitsonen, A. P.; Smogunov, A.; Timrov, I.; Thonhauser, T.; Umari, P.; Vast, N.; Wu, X.; Baroni, S.
2017-11-01
Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows users to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.
Advanced capabilities for materials modelling with Quantum ESPRESSO.
Giannozzi, P; Andreussi, O; Brumme, T; Bunau, O; Buongiorno Nardelli, M; Calandra, M; Car, R; Cavazzoni, C; Ceresoli, D; Cococcioni, M; Colonna, N; Carnimeo, I; Dal Corso, A; de Gironcoli, S; Delugas, P; DiStasio, R A; Ferretti, A; Floris, A; Fratesi, G; Fugallo, G; Gebauer, R; Gerstmann, U; Giustino, F; Gorni, T; Jia, J; Kawamura, M; Ko, H-Y; Kokalj, A; Küçükbenli, E; Lazzeri, M; Marsili, M; Marzari, N; Mauri, F; Nguyen, N L; Nguyen, H-V; Otero-de-la-Roza, A; Paulatto, L; Poncé, S; Rocca, D; Sabatini, R; Santra, B; Schlipf, M; Seitsonen, A P; Smogunov, A; Timrov, I; Thonhauser, T; Umari, P; Vast, N; Wu, X; Baroni, S
2017-10-24
Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows users to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.
Advanced capabilities for materials modelling with Quantum ESPRESSO.
Andreussi, Oliviero; Brumme, Thomas; Bunau, Oana; Buongiorno Nardelli, Marco; Calandra, Matteo; Car, Roberto; Cavazzoni, Carlo; Ceresoli, Davide; Cococcioni, Matteo; Colonna, Nicola; Carnimeo, Ivan; Dal Corso, Andrea; de Gironcoli, Stefano; Delugas, Pietro; DiStasio, Robert; Ferretti, Andrea; Floris, Andrea; Fratesi, Guido; Fugallo, Giorgia; Gebauer, Ralph; Gerstmann, Uwe; Giustino, Feliciano; Gorni, Tommaso; Jia, Junteng; Kawamura, Mitsuaki; Ko, Hsin-Yu; Kokalj, Anton; Küçükbenli, Emine; Lazzeri, Michele; Marsili, Margherita; Marzari, Nicola; Mauri, Francesco; Nguyen, Ngoc Linh; Nguyen, Huy-Viet; Otero-de-la-Roza, Alberto; Paulatto, Lorenzo; Poncé, Samuel; Giannozzi, Paolo; Rocca, Dario; Sabatini, Riccardo; Santra, Biswajit; Schlipf, Martin; Seitsonen, Ari Paavo; Smogunov, Alexander; Timrov, Iurii; Thonhauser, Timo; Umari, Paolo; Vast, Nathalie; Wu, Xifan; Baroni, Stefano
2017-09-27
Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudo-potential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows users to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software. © 2017 IOP Publishing Ltd.
Design of convolutional tornado code
NASA Astrophysics Data System (ADS)
Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu
2017-09-01
As a linear block code, the traditional tornado (tTN) code is inefficient in a burst-erasure environment, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code that improves burst-erasure protection capability by applying the convolution property to the tTN code and reduces computational complexity by eliminating the multi-level structure. Simulation results show that the cTN code provides better packet-loss protection with lower computational complexity than the tTN code.
Harvey Mudd 2014-2015 Computer Science Conduit Clinic Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aspesi, G; Bai, J; Deese, R
2015-05-12
Conduit, a new open-source library developed at Lawrence Livermore National Laboratory, provides a C++ application programming interface (API) to describe and access scientific data. Conduit's primary use is for in-memory data exchange in high performance computing (HPC) applications. Our team tested and improved Conduit to make it more appealing to potential adopters in the HPC community. We extended Conduit's capabilities by prototyping four libraries: one for parallel communication using MPI, one for I/O functionality, one for aggregating performance data, and one for data visualization.
Kassab, Ghassan S.; An, Gary; Sander, Edward A.; Miga, Michael; Guccione, Julius M.; Ji, Songbai; Vodovotz, Yoram
2016-01-01
In this era of tremendous technological capabilities and increased focus on improving clinical outcomes, decreasing costs, and increasing precision, there is a need for a more quantitative approach to the field of surgery. Multiscale computational modeling has the potential to bridge the gap to the emerging paradigms of Precision Medicine and Translational Systems Biology, in which quantitative metrics and data guide patient care through improved stratification, diagnosis, and therapy. Achievements by multiple groups have demonstrated the potential for 1) multiscale computational modeling, at a biological level, of diseases treated with surgery and the surgical procedure process at the level of the individual and the population; along with 2) patient-specific, computationally-enabled surgical planning, delivery, and guidance and robotically-augmented manipulation. In this perspective article, we discuss these concepts, and cite emerging examples from the fields of trauma, wound healing, and cardiac surgery. PMID:27015816
Gai, Jiading; Obeid, Nady; Holtrop, Joseph L.; Wu, Xiao-Long; Lam, Fan; Fu, Maojing; Haldar, Justin P.; Hwu, Wen-mei W.; Liang, Zhi-Pei; Sutton, Bradley P.
2013-01-01
Several recent methods have been proposed to obtain significant speed-ups in MRI image reconstruction by leveraging the computational power of GPUs. Previously, we implemented a GPU-based image reconstruction technique called the Illinois Massively Parallel Acquisition Toolkit for Image reconstruction with ENhanced Throughput in MRI (IMPATIENT MRI) for reconstructing data collected along arbitrary 3D trajectories. In this paper, we improve IMPATIENT by removing computational bottlenecks, using a gridding approach to accelerate the computation of various data structures needed by the previous routine. Further, we enhance the routine with capabilities for off-resonance correction and multi-sensor parallel imaging reconstruction. Through implementation of optimized gridding into our iterative reconstruction scheme, speed-ups of more than a factor of 200 are achieved in the improved GPU implementation compared to the previous accelerated GPU code. PMID:23682203
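The gridding step mentioned above can be illustrated with a toy sketch, assuming a simple nearest-neighbor accumulation rather than the Kaiser-Bessel convolution and careful density compensation used in production reconstructions such as IMPATIENT; the trajectory, sample values, and grid size below are hypothetical.

```python
import numpy as np

def grid_kspace(kx, ky, data, grid_size=128):
    """Toy nearest-neighbor gridding of non-Cartesian k-space samples onto a
    Cartesian grid (production codes use Kaiser-Bessel kernels and proper
    density compensation; this only illustrates the accumulation step)."""
    grid = np.zeros((grid_size, grid_size), dtype=complex)
    weights = np.zeros((grid_size, grid_size))
    # Map k-space coordinates in [-0.5, 0.5) to integer grid indices.
    ix = np.clip(((kx + 0.5) * grid_size).astype(int), 0, grid_size - 1)
    iy = np.clip(((ky + 0.5) * grid_size).astype(int), 0, grid_size - 1)
    np.add.at(grid, (iy, ix), data)
    np.add.at(weights, (iy, ix), 1.0)
    grid[weights > 0] /= weights[weights > 0]   # crude density compensation
    return grid

# Hypothetical non-Cartesian trajectory and synthetic k-space signal.
rng = np.random.default_rng(0)
n = 20000
kx, ky = rng.uniform(-0.5, 0.5, n), rng.uniform(-0.5, 0.5, n)
data = np.exp(-((kx**2 + ky**2) / 0.02))
image = np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid_kspace(kx, ky, data)))))
print("reconstructed image shape:", image.shape)
```

Once the samples live on a Cartesian grid, a single inverse FFT replaces the much more expensive direct evaluation, which is the source of the speed-up the abstract refers to.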
Interactive multi-mode blade impact analysis
NASA Technical Reports Server (NTRS)
Alexander, A.; Cornell, R. W.
1978-01-01
The theoretical methodology used in developing an analysis for the response of turbine engine fan blades subjected to soft-body (bird) impacts is reported, and the computer program developed using this methodology as its basis is described. This computer program is an outgrowth of two programs that were previously developed for the purpose of studying problems of a similar nature (a 3-mode beam impact analysis and a multi-mode beam impact analysis). The present program utilizes an improved missile model that is interactively coupled with blade motion which is more consistent with actual observations. It takes into account local deformation at the impact area, blade camber effects, and the spreading of the impacted missile mass on the blade surface. In addition, it accommodates plate-type mode shapes. The analysis capability in this computer program represents a significant improvement in the development of the methodology for evaluating potential fan blade materials and designs with regard to foreign object impact resistance.
Parallelization of the Coupled Earthquake Model
NASA Technical Reports Server (NTRS)
Block, Gary; Li, P. Peggy; Song, Yuhe T.
2007-01-01
This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, tsunami prediction over the Internet had never been done before. This new code directly couples the earthquake model and the ocean model on parallel computers and improves simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.
Verification of a Finite Element Model for Pyrolyzing Ablative Materials
NASA Technical Reports Server (NTRS)
Risch, Timothy K.
2017-01-01
Ablating thermal protection system (TPS) materials have been used in many reentering spacecraft and in other applications such as rocket nozzle linings, fire protection materials, and as countermeasures for directed energy weapons. The introduction of the finite element model to the analysis of ablation has arguably resulted in improved computational capabilities due to the flexibility and extended applicability of the method, especially to complex geometries. Commercial finite element codes often provide enhanced capability compared with custom, specially written programs in terms of versatility, usability, pre- and post-processing, grid generation, total life-cycle cost, and speed.
DDP-516 Computer Graphics System Capabilities
DOT National Transportation Integrated Search
1972-06-01
This report describes the capabilities of the DDP-516 Computer Graphics System. One objective of this report is to acquaint DOT management and project planners with the system's current capabilities, applications hardware and software. The Appendix i...
On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods
NASA Technical Reports Server (NTRS)
Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.
2003-01-01
Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
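As a rough illustration of the idea, the sketch below uses a first-order Taylor expansion built from cheap sensitivity derivatives as a control variate for a plain Monte Carlo estimate; the response function, gradient, and input distribution are hypothetical stand-ins, not the wing analysis from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def analysis(x):
    """Stand-in for an expensive analysis code (hypothetical response function)."""
    return np.sin(x[0]) + 0.5 * x[1] ** 2

def analysis_gradient(x):
    """Stand-in for the inexpensive sensitivity derivatives the analysis can return."""
    return np.array([np.cos(x[0]), x[1]])

mu = np.array([0.3, 1.0])       # mean of the uncertain inputs
sigma = np.array([0.1, 0.2])    # standard deviations (inputs assumed independent normal)
n = 2000

samples = mu + sigma * rng.standard_normal((n, 2))
f = np.array([analysis(x) for x in samples])

# Plain Monte Carlo estimate of the mean response.
plain_mean = f.mean()

# Control variate from the first-order Taylor expansion about the input mean;
# its exact expectation is analysis(mu), so subtracting it removes the linear
# part of the variance while leaving the estimator unbiased.
g = analysis_gradient(mu)
linear = analysis(mu) + (samples - mu) @ g
cv_mean = (f - linear).mean() + analysis(mu)

print(f"plain MC mean     : {plain_mean:.4f}  (std err {f.std(ddof=1)/np.sqrt(n):.4f})")
print(f"derivative-CV mean: {cv_mean:.4f}  (std err {(f - linear).std(ddof=1)/np.sqrt(n):.4f})")
```

The same centering trick can be applied within each stratum of a stratified sampling scheme, which is the flavor of variance reduction the abstract describes.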
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barney, B; Shuler, J
2006-08-21
Purple is an Advanced Simulation and Computing (ASC) funded massively parallel supercomputer located at Lawrence Livermore National Laboratory (LLNL). The Purple Computational Environment documents the capabilities and the environment provided for the FY06 LLNL Level 1 General Availability Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Los Alamos National Laboratory, Lawrence Livermore National Laboratory, and Sandia National Laboratories, but also documents needs of the LLNL and Alliance users working in the unclassified environment. Additionally, the Purple Computational Environment maps the provided capabilities to the Trilab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the General Availability user environment capabilities of the ASC community. Appendix A lists these requirements and includes a description of ACE requirements met and those requirements that are not met for each section of this document. The Purple Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the Tri-lab community.
Designing and validating the joint battlespace infosphere
NASA Astrophysics Data System (ADS)
Peterson, Gregory D.; Alexander, W. Perry; Birdwell, J. Douglas
2001-08-01
Fielding and managing the dynamic, complex information systems infrastructure necessary for defense operations presents significant opportunities for revolutionary improvements in capabilities. An example of this technology trend is the creation and validation of the Joint Battlespace Infosphere (JBI) being developed by the Air Force Research Lab. The JBI is a system of systems that integrates, aggregates, and distributes information to users at all echelons, from the command center to the battlefield. The JBI is a key enabler of meeting the Air Force's Joint Vision 2010 core competencies such as Information Superiority, by providing increased situational awareness, planning capabilities, and dynamic execution. At the same time, creating this new operational environment introduces significant risk due to an increased dependency on computational and communications infrastructure combined with more sophisticated and frequent threats. Hence, the challenge facing the nation is the most effective means to exploit new computational and communications technologies while mitigating the impact of attacks, faults, and unanticipated usage patterns.
3D Space Radiation Transport in a Shielded ICRU Tissue Sphere
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2014-01-01
A computationally efficient 3DHZETRN code capable of simulating High Charge (Z) and Energy (HZE) and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation was recently developed for a simple homogeneous shield object. Monte Carlo benchmarks were used to verify the methodology in slab and spherical geometry, and the 3D corrections were shown to provide significant improvement over the straight-ahead approximation in some cases. In the present report, the new algorithms with well-defined convergence criteria are extended to inhomogeneous media within a shielded tissue slab and a shielded tissue sphere and tested against Monte Carlo simulation to verify the solution methods. The 3D corrections are again found to more accurately describe the neutron and light ion fluence spectra as compared to the straight-ahead approximation. These computationally efficient methods provide a basis for software capable of space shield analysis and optimization.
NASA Astrophysics Data System (ADS)
Russkova, Tatiana V.
2017-11-01
One tool for improving the performance of Monte Carlo methods for numerical simulation of light transport in the Earth's atmosphere is parallel computing. A new algorithm oriented to parallel execution on CUDA-enabled NVIDIA graphics processors is discussed. The efficiency of parallelization is analyzed on the basis of calculating the upward and downward fluxes of solar radiation in both vertically homogeneous and inhomogeneous models of the atmosphere. The results of testing the new code under various atmospheric conditions, including continuous single-layered and multilayered clouds and selective molecular absorption, are presented. The results of testing the code using video cards with different compute capability are analyzed. It is shown that the changeover of computing from conventional PCs to the architecture of graphics processors gives more than a hundredfold increase in performance and fully reveals the capabilities of the technology used.
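A minimal serial sketch of why such codes parallelize so well: each photon history below is independent, so mapping histories onto GPU threads (as in the paper) or CPU processes is straightforward. The slab optical depth, single-scattering albedo, and isotropic phase function are illustrative simplifications, not the authors' atmospheric model.

```python
import numpy as np

def slab_monte_carlo(n_photons, tau_total=2.0, ssa=0.9, seed=0):
    """Toy 1-D slab photon Monte Carlo with isotropic scattering: returns the
    fractions of photons transmitted downward and reflected upward."""
    rng = np.random.default_rng(seed)
    transmitted = reflected = 0
    for _ in range(n_photons):
        tau, mu = 0.0, 1.0                      # optical-depth position, direction cosine
        while True:
            tau += mu * -np.log(rng.random())   # free path to the next interaction
            if tau >= tau_total:
                transmitted += 1
                break
            if tau < 0.0:
                reflected += 1
                break
            if rng.random() > ssa:              # photon absorbed
                break
            mu = 2.0 * rng.random() - 1.0       # isotropic scattering direction
    return transmitted / n_photons, reflected / n_photons

t, r = slab_monte_carlo(20000)
print(f"transmitted fraction: {t:.3f}, reflected fraction: {r:.3f}")
```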
Computational fluid dynamics: Transition to design applications
NASA Technical Reports Server (NTRS)
Bradley, R. G.; Bhateley, I. C.; Howell, G. A.
1987-01-01
The development of aerospace vehicles, over the years, was an evolutionary process in which engineering progress in the aerospace community was based, generally, on prior experience and data bases obtained through wind tunnel and flight testing. Advances in the fundamental understanding of flow physics, wind tunnel and flight test capability, and mathematical insights into the governing flow equations were translated into improved air vehicle design. The modern day field of Computational Fluid Dynamics (CFD) is a continuation of the growth in analytical capability and the digital mathematics needed to solve the more rigorous form of the flow equations. Some of the technical and managerial challenges that result from rapidly developing CFD capabilities, some of the steps being taken by the Fort Worth Division of General Dynamics to meet these challenges, and some of the specific areas of application for high performance air vehicles are presented.
NASA Technical Reports Server (NTRS)
Gordy, R. S.
1972-01-01
An improved broadband impedance matching technique was developed. The technique is capable of resolving points in the waveguide which generate reflected energy. A version of the comparison reflectometer was developed and fabricated to determine the mean amplitude of the reflection coefficient excited at points in the guide as a function of distance, and the complex reflection coefficient of a specific discontinuity in the guide as a function of frequency. An impedance matching computer program was developed which is capable of impedance matching the characteristics of each disturbance independent of other reflections in the guide. The characteristics of four standard matching elements were compiled, and their associated curves of reflection coefficient and shunt susceptance as a function of frequency are presented. It is concluded that an economical, fast, and reliable impedance matching technique has been established which can provide broadband impedance matches.
Xyce Parallel Electronic Simulator : users' guide, version 2.0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoekstra, Robert John; Waters, Lon J.; Rankin, Eric Lamont
2004-06-01
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator capable of simulating electrical circuits at a variety of abstraction levels. Primarily, Xyce has been written to support the simulation needs of Sandia National Laboratories electrical designers. This development has focused on improving upon the current state of the art in the following areas: (1) the capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; (2) improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques; (3) device models specifically tailored to meet Sandia's needs, including many radiation-aware devices; (4) a client-server or multi-tiered operating model wherein the numerical kernel can operate independently of the graphical user interface (GUI); and (5) object-oriented code design and implementation using modern coding practices that ensure the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase: a message-passing parallel implementation that allows it to run efficiently on the widest possible range of computing platforms, including serial, shared-memory, and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. One feature required by designers is the ability to add device models, many specific to the needs of Sandia, to the code. To this end, the device package in the Xyce Parallel Electronic Simulator is designed to support a variety of device model inputs, including standard analytical models, behavioral models, look-up tables, and mesh-level PDE device models. Combined with this flexible interface is an architectural design that greatly simplifies the addition of circuit models. One of the most important features of Xyce is that it provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia now has an in-house capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods) research and development can be performed. Ultimately, these capabilities are migrated to end users.
Carney, Timothy Jay; Morgan, Geoffrey P.; Jones, Josette; McDaniel, Anna M.; Weaver, Michael; Weiner, Bryan; Haggstrom, David A.
2014-01-01
Our conceptual model demonstrates our goal to investigate the impact of clinical decision support (CDS) utilization on cancer screening improvement strategies in the community health care (CHC) setting. We employed a dual modeling technique using both statistical and computational modeling to evaluate impact. Our statistical model used the Spearman's Rho test to evaluate the strength of the relationship between our proximal outcome measures (CDS utilization) and our distal outcome measure (provider self-reported cancer screening improvement). Our computational model relied on network evolution theory and made use of a tool called Construct-TM to model the use of CDS as measured by the rate of organizational learning. We employed previously collected survey data from the community health centers Cancer Health Disparities Collaborative (HDCC). Our intent is to demonstrate the added value gained by using a computational modeling tool in conjunction with a statistical analysis when evaluating the impact of a health information technology, in the form of CDS, on health care quality process outcomes such as facility-level screening improvement. Significant simulated disparities in organizational learning over time were observed between community health centers beginning the simulation with high and low clinical decision support capability. PMID:24953241
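The statistical half of the dual-modeling approach reduces to a rank correlation between the proximal and distal measures; a minimal sketch, with hypothetical facility-level scores standing in for the HDCC survey data:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical facility-level survey measures (one entry per community health center):
# proximal outcome = self-reported CDS utilization score,
# distal outcome   = self-reported cancer screening improvement score.
cds_utilization       = np.array([2, 4, 3, 5, 1, 4, 2, 5, 3, 4])
screening_improvement = np.array([1, 3, 3, 5, 2, 4, 2, 4, 2, 5])

rho, p_value = spearmanr(cds_utilization, screening_improvement)
print(f"Spearman's rho = {rho:.3f}, p = {p_value:.3f}")
```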
Infrastructure for Training and Partnershipes: California Water and Coastal Ocean Resources
NASA Technical Reports Server (NTRS)
Siegel, David A.; Dozier, Jeffrey; Gautier, Catherine; Davis, Frank; Dickey, Tommy; Dunne, Thomas; Frew, James; Keller, Arturo; MacIntyre, Sally; Melack, John
2000-01-01
The purpose of this project was to advance the existing ICESS/Bren School computing infrastructure to allow scientists, students, and research trainees the opportunity to interact with environmental data and simulations in near-real time. Improvements made with the funding from this project have helped to strengthen the research efforts within both units, fostered graduate research training, and helped fortify partnerships with government and industry. With this funding, we were able to expand our computational environment in which computer resources, software, and data sets are shared by ICESS/Bren School faculty researchers in all areas of Earth system science. All of the graduate and undergraduate students associated with the Donald Bren School of Environmental Science and Management and the Institute for Computational Earth System Science have benefited from the infrastructure upgrades accomplished by this project. Additionally, the upgrades fostered a significant number of research projects (attached is a list of the projects that benefited from the upgrades). As originally proposed, funding for this project provided the following infrastructure upgrades: 1) a modern file management system capable of interoperating UNIX and NT file systems that can scale to 6.7 TB, 2) a Qualstar 40-slot tape library with two AIT tape drives and Legato Networker backup/archive software, 3) previously unavailable import/export capability for data sets on Zip, Jaz, DAT, 8mm, CD, and DLT media in addition to a 622Mb/s Internet 2 connection, 4) network switches capable of 100 Mbps to 128 desktop workstations, 5) a Portable Batch System (PBS) computational task scheduler, and 6) two Compaq/Digital Alpha XP1000 compute servers, each with 1.5 GB of RAM, along with an SGI Origin 2000 (purchased partially using funds from this project along with funding from various other sources) to be used for very large computations, as required for simulation of mesoscale meteorology or climate.
Williams, W E
1987-01-01
The maturing of technologies in computer capabilities, particularly direct digital signals, has provided an exciting variety of new communication and facility control opportunities. These include telecommunications, energy management systems, security systems, office automation systems, local area networks, and video conferencing. New applications are developing continuously. The so-called "intelligent" or "smart" building concept evolves from the development of this advanced technology in building environments. Automation has had a dramatic effect on facility planning. For decades, communications were limited to the telephone, the typewritten message, and copy machines. The office itself and its functions had been essentially unchanged for decades. Office automation systems began to surface during the energy crisis and, although their newer technology was timely, they were, for the most part, designed separately from other new building systems. For example, most mainframe computer systems were originally stand-alone, as were word processing installations. In the last five years, the advances in distributive systems, networking, and personal computer capabilities have provided opportunities to make such dramatic improvements in productivity that the Selectric typewriter has gone from being the most advanced piece of office equipment to nearly total obsolescence.
The future challenge for aeropropulsion
NASA Technical Reports Server (NTRS)
Rosen, Robert; Bowditch, David N.
1992-01-01
NASA's research in aeropropulsion is focused on improving the efficiency, capability, and environmental compatibility for all classes of future aircraft. The development of innovative concepts, and theoretical, experimental, and computational tools provide the knowledge base for continued propulsion system advances. Key enabling technologies include advances in internal fluid mechanics, structures, light-weight high-strength composite materials, and advanced sensors and controls. Recent emphasis has been on the development of advanced computational tools in internal fluid mechanics, structural mechanics, reacting flows, and computational chemistry. For subsonic transport applications, very high bypass ratio turbofans with increased engine pressure ratio are being investigated to increase fuel efficiency and reduce airport noise levels. In a joint supersonic cruise propulsion program with industry, the critical environmental concerns of emissions and community noise are being addressed. NASA is also providing key technologies for the National Aerospaceplane, and is studying propulsion systems that provide the capability for aircraft to accelerate to and cruise in the Mach 4-6 speed range. The combination of fundamental, component, and focused technology development underway at NASA will make possible dramatic advances in aeropropulsion efficiency and environmental compatibility for future aeronautical vehicles.
Infrared Astrophysics in the SOFIA Era - An Overview
NASA Astrophysics Data System (ADS)
Yorke, Harold W.
2018-06-01
The Stratospheric Observatory for Infrared Astronomy (SOFIA) provides the international astronomical community access to a broad range of instrumentation that covers wavelengths spanning the near to far infrared. The high spectral resolution of many of these instruments in several wavelength bands is unmatched by any existing or near future planned facility. The far infrared polarization capabilities of one of its instruments, HAWC+, are also unique. Moreover, SOFIA allows for additional instrument augmentations, as new state-of-the-art photometric, spectrometric, and polarimetric capabilities have been added and are being further improved. The fact that SOFIA provides ample mass, power, computing capabilities as well as 4K cooling eases the constraints on future instrument design, technical readiness, and the instrument build to an extent not possible for space-borne missions. We will review SOFIA's current and future planned capabilities and highlight specific science areas for which the stratospheric observatory will be able to significantly advance Origins science topics.
Self-Aware Vehicles: Mission and Performance Adaptation to System Health
NASA Technical Reports Server (NTRS)
Gregory, Irene M.; Leonard, Charles; Scotti, Stephen J.
2016-01-01
Advances in sensing (miniaturization, distributed sensor networks) combined with improvements in computational power leading to significant gains in perception, real-time decision making/reasoning and dynamic planning under uncertainty as well as big data predictive analysis have set the stage for realization of autonomous system capability. These advances open the design and operating space for self-aware vehicles that are able to assess their own capabilities and adjust their behavior to either complete the assigned mission or to modify the mission to reflect their current capabilities. This paper discusses the self-aware vehicle concept and associated technologies necessary for full exploitation of the concept. A self-aware aircraft, spacecraft or system is one that is aware of its internal state, has situational awareness of its environment, can assess its capabilities currently and project them into the future, understands its mission objectives, and can make decisions under uncertainty regarding its ability to achieve its mission objectives.
NASA Astrophysics Data System (ADS)
Gage, Douglas W.; Pletta, J. Bryan
1987-01-01
Initial investigations into two different approaches for applying autonomous ground vehicle technology to the vehicle convoying application are described. A minimal capability system that would maintain desired speed and vehicle spacing while a human driver provided steering control could improve convoy performance and provide positive control at night and in inclement weather, but would not reduce driver manpower requirements. Such a system could be implemented in a modular and relatively low cost manner. A more capable system would eliminate the human driver in following vehicles and reduce manpower requirements for the transportation of supplies. This technology could also be used to aid in the deployment of teleoperated vehicles in a battlefield environment. The needs, requirements, and several proposed solutions for such an Attachable Robotic Convoy Capability (ARCC) system will be discussed. Included are discussions of sensors, communications, computers, control systems and safety issues. This advanced robotic convoy system will provide a much greater capability, but will be more difficult and expensive to implement.
Graphics supercomputer for computational fluid dynamics research
NASA Astrophysics Data System (ADS)
Liaw, Goang S.
1994-11-01
The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment, including a desktop personal computer (a PC-486 DX2 with a built-in 10-BaseT Ethernet card), a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith, was also purchased. A reading room has been converted to a research computer lab by adding furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All of the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.
The history of MR imaging as seen through the pages of radiology.
Edelman, Robert R
2014-11-01
The first reports in Radiology pertaining to magnetic resonance (MR) imaging were published in 1980, 7 years after Paul Lauterbur pioneered the first MR images and 9 years after the first human computed tomographic images were obtained. Historical advances in the research and clinical applications of MR imaging very much parallel the remarkable advances in MR imaging technology. These advances can be roughly classified into hardware (eg, magnets, gradients, radiofrequency [RF] coils, RF transmitter and receiver, MR imaging-compatible biopsy devices) and imaging techniques (eg, pulse sequences, parallel imaging, and so forth). Image quality has been dramatically improved with the introduction of high-field-strength superconducting magnets, digital RF systems, and phased-array coils. Hybrid systems, such as MR/positron emission tomography (PET), combine the superb anatomic and functional imaging capabilities of MR imaging with the unsurpassed capability of PET to demonstrate tissue metabolism. Supported by the improvements in hardware, advances in pulse sequence design and image reconstruction techniques have spurred dramatic improvements in imaging speed and the capability for studying tissue function. In this historical review, the history of MR imaging technology and developing research and clinical applications, as seen through the pages of Radiology, will be considered.
2009-10-09
Capability of the People's Republic of China to Conduct Cyber Warfare and Computer Network Exploitation. Prepared for the US-China Economic and Security Review Commission.
Medical informatics--an Australian perspective.
Hannan, T
1991-06-01
Computers, like the X-ray and stethoscope, can be seen as clinical tools that provide physicians with improved expertise in solving patient management problems. As tools they enable us to extend our clinical information base, and they also provide facilities that improve the delivery of the health care we provide. Automation (computerisation) in the health domain will cause the computer to become a more integral part of health care management and delivery before the start of the next century. To understand how the computer assists those who deliver and manage health care, it is important to be aware of its functional capabilities and how we can use them in medical practice. The rapid technological advances in computers over the last two decades have had both beneficial and counterproductive effects on the implementation of effective computer applications in the delivery of health care. For example, in the 1990s the computer hobbyist is able to make an investment of less than $10,000 on computer hardware that will match or exceed the technological capacities of machines of the 1960s. These rapid technological advances, which have produced a quantum leap in our ability to store and process information, have tended to make us overlook the need for effective computer programmes which will meet the needs of patient care. As the 1990s begin, those delivering health care (eg, physicians, nurses, pharmacists, administrators ...) need to become more involved in directing the effective implementation of computer applications that will provide the tools for improved information management, knowledge processing, and ultimately better patient care.
Reduced and Validated Kinetic Mechanisms for Hydrogen-CO-Air Combustion in Gas Turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yiguang Ju; Frederick Dryer
2009-02-07
Rigorous experimental, theoretical, and numerical investigation of various issues relevant to the development of reduced, validated kinetic mechanisms for synthetic gas combustion in gas turbines was carried out - including the construction of new radiation models for combusting flows, improvement of flame speed measurement techniques, measurements and chemical kinetic analysis of H{sub 2}/CO/CO{sub 2}/O{sub 2}/diluent mixtures, revision of the H{sub 2}/O{sub 2} kinetic model to improve flame speed prediction capabilities, and development of a multi-time scale algorithm to improve computational efficiency in reacting flow simulations.
Protecting Your Computer from Viruses
ERIC Educational Resources Information Center
Descy, Don E.
2006-01-01
A computer virus is defined as a software program capable of reproducing itself and usually capable of causing great harm to files or other programs on the same computer. The existence of computer viruses--or the necessity of avoiding viruses--is part of using a computer. With the advent of the Internet, the door was opened wide for these…
Surveying Requirements Meeting Management Sessions, 1-5 February 1982,
1983-02-01
The sessions were organized and conducted by the Engineering Division, Directorate of Civil Works, Office of the Chief of Engineers, to improve management. Only agenda fragments survive in this record, including the sessions "Technical User Groups Overview" (M. K. Miles, OCE) and "Organizing a Successful Computer Aided Applications Program" (Dr. N. Radhakrishnan), along with topics on organization structure, in-house capabilities, expertise requirements, and professionalism.
Improving Mastery of Fractions by Blending Video Games into the Math Classroom
ERIC Educational Resources Information Center
Masek, M.; Boston, J.; Lam, C. P.; Corcoran, S.
2017-01-01
Concepts from the Australian mathematics curriculum on fractions were used as core elements to design three computer games. In each game, the concepts were presented in the form of tangible puzzles, customized to a difficulty level based on student capability. The games were integrated into a single virtual game world, and a fantasy story was used…
Unmanned Surface Vehicle Human-Computer Interface for Amphibious Operations
2013-08-01
This record survives only in fragments: the work was supported by Amy Bolton from 2007 through 2011, with a follow-on effort during 2012 sponsored by the LCS Mission Modules Program Office (PMS 420). Based on observed performance, the researchers conclude that improvements in on-board sensor capabilities and obstacle avoidance systems may still be necessary for safe operation. A table-of-contents fragment references Phase I, comparing one USV with two USVs using the baseline HCI.
The role of the research simulator in the systems development of rotorcraft
NASA Technical Reports Server (NTRS)
Statler, I. C.; Deel, A.
1981-01-01
The potential application of the research simulator to future rotorcraft systems design, development, product improvement evaluations, and safety analysis is examined. Current simulation capabilities for fixed-wing aircraft are reviewed and the requirements of a rotorcraft simulator are defined. The visual system components, vertical motion simulator, cab, and computation system for a research simulator under development are described.
ERIC Educational Resources Information Center
Robadue, Donald D., Jr.
2012-01-01
Those advocating for effective management of the use of coastal areas and ecosystems have long aspired for an approach to governance that includes information systems with the capability to predict the end results of various courses of action, monitor the impacts of decisions and compare results with those predicted by computer models in order to…
The human adrenocortical carcinoma cell line H295R is being used as an in vitro steroidogenesis screening assay to assess the impact of endocrine active chemicals (EACs) capable of altering steroid biosynthesis. To enhance the interpretation and quantitative application of measur...
Severe storms and local weather research
NASA Technical Reports Server (NTRS)
1981-01-01
Developments in the use of space related techniques to understand storms and local weather are summarized. The observation of lightning, storm development, cloud development, mesoscale phenomena, and ageostrophic circulation are discussed. Data acquisition, analysis, and the development of improved sensor and computer systems capability are described. Signal processing and analysis and application of Doppler lidar data are discussed. Progress in numerous experiments is summarized.
An improved wrapper-based feature selection method for machinery fault diagnosis
2017-01-01
A major issue in machinery fault diagnosis using vibration signals is that it is over-reliant on personnel knowledge and experience in interpreting the signal. Thus, machine learning has been adapted for machinery fault diagnosis. The quantity and quality of the input features, however, influence the fault classification performance. Feature selection plays a vital role in selecting the most representative feature subset for the machine learning algorithm. However, a trade-off between the capability to select the best feature subset and the computational effort required is inevitable in the wrapper-based feature selection (WFS) method. This paper proposes an improved WFS technique that is then integrated with a support vector machine (SVM) classifier as a complete fault diagnosis system for a rolling element bearing case study. The bearing vibration dataset made available by the Case Western Reserve University Bearing Data Centre was processed using the proposed WFS, and its performance has been analysed and discussed. The results reveal that the proposed WFS secures the best feature subset with lower computational effort by eliminating the redundancy of re-evaluation. The proposed WFS has therefore been found to be capable and efficient in carrying out feature selection tasks. PMID:29261689
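The general wrapper-plus-SVM pattern the paper improves upon can be sketched as a greedy forward-selection loop in which every candidate subset is scored by cross-validated SVM accuracy; the synthetic features below stand in for the Case Western Reserve bearing features, and the simple stopping rule is an illustrative choice rather than the authors' redundancy-elimination scheme.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for vibration-derived bearing features (hypothetical values).
X, y = make_classification(n_samples=300, n_features=12, n_informative=5, random_state=0)

def wrapper_forward_selection(X, y, max_features=5):
    """Greedy forward wrapper selection: add the feature that most improves CV accuracy."""
    selected, remaining = [], list(range(X.shape[1]))
    best_score = 0.0
    while remaining and len(selected) < max_features:
        scores = []
        for f in remaining:
            trial = selected + [f]
            acc = cross_val_score(SVC(kernel="rbf"), X[:, trial], y, cv=5).mean()
            scores.append((acc, f))
        acc, f = max(scores)
        if acc <= best_score:          # stop when no candidate improves the subset
            break
        best_score, selected = acc, selected + [f]
        remaining.remove(f)
    return selected, best_score

subset, score = wrapper_forward_selection(X, y)
print(f"selected features: {subset}, CV accuracy: {score:.3f}")
```

Because every candidate subset triggers a full classifier evaluation, the computational cost grows quickly, which is exactly the trade-off the proposed improvement targets.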
New Toxico-Cheminformatics & Computational Toxicology ...
EPA’s National Center for Computational Toxicology is building capabilities to support a new paradigm for toxicity screening and prediction. The DSSTox project is improving public access to quality structure-annotated chemical toxicity information in less summarized forms than traditionally employed in SAR modeling, and in ways that facilitate data-mining, and data read-across. The DSSTox Structure-Browser provides structure searchability across all published DSSTox toxicity-related inventory, and is enabling linkages between previously isolated toxicity data resources. As of early March 2008, the public DSSTox inventory has been integrated into PubChem, allowing a user to take full advantage of PubChem structure-activity and bioassay clustering features. The most recent DSSTox version of the Carcinogenic Potency Database file (CPDBAS) illustrates ways in which various summary definitions of carcinogenic activity can be employed in modeling and data mining. Phase I of the ToxCastTM project is generating high-throughput screening data from several hundred biochemical and cell-based assays for a set of 320 chemicals, mostly pesticide actives, with rich toxicology profiles. Incorporating and expanding traditional SAR concepts into this new high-throughput and data-rich world pose conceptual and practical challenges, but also holds great promise for improving predictive capabilities.
NASA Technical Reports Server (NTRS)
Butler, C.
1986-01-01
The improvement of computer hardware and software of the NASA Multipurpose Differential Absorption Lidar (DIAL) system is documented. The NASA DIAL system is undergoing development and experimental deployment at NASA Langley Research Center for the remote measurement of atmospheric trace gas concentrations from ground and aircraft platforms. A viable DIAL system was developed capable of remotely measuring O3 and H2O concentrations from an aircraft platform. Test flights of the DIAL system were successfully performed onboard the NASA Goddard Flight Center Electra aircraft from 1980 to 1985. The DIAL Data Acquisition System has undergone a number of improvements over the past few years. These improvements have now been field tested. The theory behind a real time computer system as it applies to the needs of the DIAL system is discussed. This report is designed to be used as an operational manual for the DIAL DAS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henline, P.A.
1995-12-31
The increased use of UNIX based computer systems for machine control, data handling and analysis has greatly enhanced the operating scenarios and operating efficiency of the DIII-D tokamak. This paper will describe some of these UNIX systems and their specific uses. These include the plasma control system, the electron cyclotron heating control system, the analysis of electron temperature and density measurements and the general data acquisition system (which is collecting over 130 Mbytes of data). The speed and total capability of these systems has dramatically affected the ability to operate DIII-D. The improved operating scenarios include better plasma shape control due to the more thorough MHD calculations done between shots and the new ability to see the time dependence of profile data as it relates across different spatial locations in the tokamak. Other analysis which engenders improved operating abilities will be described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henline, P.A.
1995-10-01
The increased use of UNIX based computer systems for machine control, data handling and analysis has greatly enhanced the operating scenarios and operating efficiency of the DIII-D tokamak. This paper will describe some of these UNIX systems and their specific uses. These include the plasma control system, the electron cyclotron heating control system, the analysis of electron temperature and density measurements and the general data acquisition system (which is collecting over 130 Mbytes of data). The speed and total capability of these systems has dramatically affected the ability to operate DIII-D. The improved operating scenarios include better plasma shape control due to the more thorough MHD calculations done between shots and the new ability to see the time dependence of profile data as it relates across different spatial locations in the tokamak. Other analysis which engenders improved operating abilities will be described.
NASA and USGS invest in invasive species modeling to evaluate habitat for Africanized Honey Bees
2009-01-01
Invasive non-native species, such as plants, animals, and pathogens, have long been an interest to the U.S. Geological Survey (USGS) and NASA. Invasive species cause harm to our economy (around $120 B/year), the environment (e.g., replacing native biodiversity, forest pathogens negatively affecting carbon storage), and human health (e.g., plague, West Nile virus). Five years ago, the USGS and NASA formed a partnership to improve ecological forecasting capabilities for the early detection and containment of the highest priority invasive species. Scientists from NASA Goddard Space Flight Center (GSFC) and the Fort Collins Science Center developed a longterm strategy to integrate remote sensing capabilities, high-performance computing capabilities and new spatial modeling techniques to advance the science of ecological invasions [Schnase et al., 2002].
Workshop on Engineering Turbulence Modeling
NASA Technical Reports Server (NTRS)
Povinelli, Louis A. (Editor); Liou, W. W. (Editor); Shabbir, A. (Editor); Shih, T.-H. (Editor)
1992-01-01
Discussed here is the future direction of various levels of engineering turbulence modeling related to computational fluid dynamics (CFD) computations for propulsion. For each level of computation, there are a few turbulence models which represent the state-of-the-art for that level. However, it is important to know their capabilities as well as their deficiencies in order to help engineers select and implement the appropriate models in their real world engineering calculations. This will also help turbulence modelers perceive the future directions for improving turbulence models. The focus is on one-point closure models (i.e., from algebraic models to higher order moment closure schemes and partial differential equation methods) which can be applied to CFD computations. However, other schemes helpful in developing one-point closure models, are also discussed.
Improved capabilities of the Multispectral Atmospheric Mapping Sensor (MAMS)
NASA Technical Reports Server (NTRS)
Jedlovec, Gary J.; Batson, K. Bryan; Atkinson, Robert J.; Moeller, Chris C.; Menzel, W. Paul; James, Mark W.
1989-01-01
The Multispectral Atmospheric Mapping Sensor (MAMS) is an airborne instrument being investigated as part of NASA's high altitude research program. Findings from work on this and other instruments have been important as the scientific justification of new instrumentation for the Earth Observing System (EOS). This report discusses changes to the instrument which have led to new capabilities, improved data quality, and more accurate calibration methods. In order to provide a summary of the data collected with MAMS, a complete list of flight dates and locations is provided. For many applications, registration of MAMS imagery with landmarks is required. The navigation of this data on the Man-computer Interactive Data Access System (McIDAS) is discussed. Finally, research applications of the data are discussed and specific examples are presented to show the applicability of these measurements to NASA's Earth System Science (ESS) objectives.
Pointer, William David; Baglietto, Emilio
2016-05-01
Here, in the effort to reinvigorate innovation in the way we design, build, and operate the nuclear power generating stations of today and tomorrow, nothing can be taken for granted. Not even the seemingly familiar physics of boiling water. The Consortium for the Advanced Simulation of Light Water Reactors, or CASL, is focused on the deployment of advanced modeling and simulation capabilities to enable the nuclear industry to reduce uncertainties in the prediction of multi-physics phenomena and continue to improve the performance of today's Light Water Reactors and their fuel. An important part of the CASL mission is the development of a next generation thermal hydraulics simulation capability, integrating the history of engineering models based on experimental experience with the computing technology of the future.
High-contrast imaging in the cloud with klipReduce and Findr
NASA Astrophysics Data System (ADS)
Haug-Baltzell, Asher; Males, Jared R.; Morzinski, Katie M.; Wu, Ya-Lin; Merchant, Nirav; Lyons, Eric; Close, Laird M.
2016-08-01
Astronomical data sets are growing ever larger, and the area of high contrast imaging of exoplanets is no exception. With the advent of fast, low-noise detectors operating at 10 to 1000 Hz, huge numbers of images can be taken during a single hours-long observation. High frame rates offer several advantages, such as improved registration, frame selection, and improved speckle calibration. However, advanced image processing algorithms are computationally challenging to apply. Here we describe a parallelized, cloud-based data reduction system developed for the Magellan Adaptive Optics VisAO camera, which is capable of rapidly exploring tens of thousands of parameter sets affecting the Karhunen-Loève image processing (KLIP) algorithm to produce high-quality direct images of exoplanets. We demonstrate these capabilities with a visible wavelength high contrast data set of a hydrogen-accreting brown dwarf companion.
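A minimal sketch of the parameter-sweep pattern described above, assuming a hypothetical run_klip_reduction function in place of the actual klipReduce/Findr interface; the parameter names, ranges, and quality metric are illustrative only.

```python
import itertools
from concurrent.futures import ProcessPoolExecutor

def run_klip_reduction(params):
    """Hypothetical stand-in for one reduction run; in practice this would invoke
    the KLIP pipeline and return a contrast/SNR metric for the parameter set."""
    n_modes, annuli, movement = params
    # Placeholder "quality" metric so the sketch runs end to end.
    return params, 1.0 / (1 + abs(n_modes - 20) + abs(annuli - 8) + movement)

# Cartesian product of KLIP parameters to explore (values are illustrative only).
grid = itertools.product(range(5, 55, 5),      # number of KL modes
                         range(2, 16, 2),      # number of annuli
                         [0.5, 1.0, 2.0])      # minimum pixel movement

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_klip_reduction, grid))
    best_params, best_metric = max(results, key=lambda r: r[1])
    print("best parameter set:", best_params, "metric:", round(best_metric, 3))
```

Because each parameter set is reduced independently, the sweep scales out naturally across cloud nodes, which is the design choice the abstract exploits.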
The super-Turing computational power of plastic recurrent neural networks.
Cabessa, Jérémie; Siegelmann, Hava T
2014-12-01
We study the computational capabilities of a biologically inspired neural model in which the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity of the model, so the nature of the updates is assumed to be unconstrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of precisely the same super-Turing computational power as static analog neural networks, irrespective of whether their synaptic weights are modeled by rational or real numbers, and moreover, irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.
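A toy sketch of the kind of evolving network the result concerns: an analog recurrent update whose bi-valued weight matrix is allowed to change at every step. This only illustrates the plasticity ingredient; it is not the formal model or proof construction from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                                        # number of neurons
W = rng.choice([-1.0, 1.0], size=(n, n))     # bi-valued synaptic weights
x = rng.random(n)                            # neuron activations in [0, 1]

def sigma(z):
    """Saturated-linear activation commonly used in analog RNN models."""
    return np.clip(z, 0.0, 1.0)

# Evolve the network: at every step the state updates AND the weight matrix is
# allowed to change (here, one randomly chosen bi-valued synapse flips sign),
# which is the plasticity ingredient the paper argues suffices for super-Turing power.
for t in range(10):
    x = sigma(W @ x)
    i, j = rng.integers(0, n, size=2)
    W[i, j] = -W[i, j]

print("final activations:", np.round(x, 3))
```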
The importance of structural anisotropy in computational models of traumatic brain injury.
Carlsen, Rika W; Daphalapurkar, Nitin P
2015-01-01
Understanding the mechanisms of injury might prove useful in assisting the development of methods for the management and mitigation of traumatic brain injury (TBI). Computational head models can provide valuable insight into the multi-length-scale complexity associated with the primary nature of diffuse axonal injury. It involves understanding how the trauma to the head (at the centimeter length scale) translates to the white-matter tissue (at the millimeter length scale), and even further down to the axonal-length scale, where physical injury to axons (e.g., axon separation) may occur. However, to accurately represent the development of TBI, the biofidelity of these computational models is of utmost importance. There has been a focused effort to improve the biofidelity of computational models by including more sophisticated material definitions and implementing physiologically relevant measures of injury. This paper summarizes recent computational studies that have incorporated structural anisotropy in both the material definition of the white matter and the injury criterion as a means to improve the predictive capabilities of computational models for TBI. We discuss the role of structural anisotropy on both the mechanical response of the brain tissue and on the development of injury. We also outline future directions in the computational modeling of TBI.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eslinger, Paul W.; Aaberg, Rosanne L.; Lopresti, Charles A.
2004-09-14
This document contains detailed user instructions for the suite of utility codes developed for Rev. 1 of the Systems Assessment Capability. The suite performs many functions.
NASA Technical Reports Server (NTRS)
Key, Samuel W.
1993-01-01
The explicit transient dynamics technology in use today for simulating the impact and subsequent transient dynamic response of a structure has its origins in the 'hydrocodes' dating back to the late 1940's. The growth in capability in explicit transient dynamics technology parallels the growth in speed and size of digital computers. Computer software for simulating the explicit transient dynamic response of a structure is characterized by algorithms that use a large number of small steps. In explicit transient dynamics software there is a significant emphasis on speed and simplicity. The finite element technology used to generate the spatial discretization of a structure is based on a compromise between completeness of the representation for the physical processes modelled and speed in execution. That is, since it is expected in every calculation that the deformation will be finite and the material will be strained beyond the elastic range, the geometry and the associated gradient operators must be reconstructed, as well as complex stress-strain models evaluated at every time step. As a result, finite elements derived for explicit transient dynamics software use the simplest and barest constructions possible for computational efficiency while retaining an essential representation of the physical behavior. The best example of this technology is the four-node bending quadrilateral derived by Belytschko, Lin and Tsay. Today, the speed, memory capacity and availability of computer hardware allows a number of the previously used algorithms to be 'improved.' That is, it is possible with today's computing hardware to modify many of the standard algorithms to improve their representation of the physical process at the expense of added complexity and computational effort. The purpose is to review a number of these algorithms and identify the improvements possible. In many instances, both the older, faster version of the algorithm and the improved and somewhat slower version of the algorithm are found implemented together in software. Specifically, the following seven algorithmic items are examined: the invariant time derivatives of stress used in material models expressed in rate form; incremental objectivity and strain used in the numerical integration of the material models; the use of one-point element integration versus mean quadrature; shell elements used to represent the behavior of thin structural components; beam elements based on stress-resultant plasticity versus cross-section integration; the fidelity of elastic-plastic material models in their representation of ductile metals; and the use of Courant subcycling to reduce computational effort.
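For readers unfamiliar with the algorithmic core of such codes, the following is a minimal sketch of an explicit central-difference (half-step velocity) update for a single-degree-of-freedom oscillator; production codes apply the same small-step loop to full finite element meshes with nonlinear internal forces. All values are illustrative only.

```python
# Minimal sketch of the explicit central-difference update that underlies
# transient-dynamics codes of this kind, shown for a single-DOF linear
# oscillator. Real codes evaluate nonlinear internal forces over a mesh at
# every step.
import math

m, k = 1.0, 100.0              # mass and stiffness
omega = math.sqrt(k / m)
dt = 0.9 * (2.0 / omega)       # below the critical step 2/omega_max

u, v = 0.01, 0.0               # initial displacement and velocity
a = -(k / m) * u               # initial acceleration from internal force
for step in range(200):
    v_half = v + 0.5 * dt * a  # half-step velocity
    u = u + dt * v_half        # displacement update
    a = -(k / m) * u           # re-evaluate internal force / acceleration
    v = v_half + 0.5 * dt * a  # complete the velocity update
print(f"u = {u:.5f}, v = {v:.5f} after {200 * dt:.3f} s")
```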
Managing Computer Systems Development: Understanding the Human and Technological Imperatives.
1985-06-01
for their organization’s use? How can they predict the impact of future systems on their management control capabilities? Of equal importance is the...commercial organizations discovered that there was only a limited capability of interaction between various types of computers. These organizations were...Viewed together, these three interrelated subsystems, EDP, MIS, and DSS, establish the framework of an overall systems capability known as a Computer
Wang, Hua; Liu, Feng; Xia, Ling; Crozier, Stuart
2008-11-21
This paper presents a stabilized Bi-conjugate gradient algorithm (BiCGstab) that can significantly improve the performance of the impedance method, which has been widely applied to model low-frequency field induction phenomena in voxel phantoms. The improved impedance method offers remarkable computational advantages in terms of convergence performance and memory consumption over the conventional, successive over-relaxation (SOR)-based algorithm. The scheme has been validated against other numerical/analytical solutions on a lossy, multilayered sphere phantom excited by an ideal coil loop. To demonstrate the computational performance and application capability of the developed algorithm, the induced fields inside a human phantom due to a low-frequency hyperthermia device are evaluated. The simulation results show the numerical accuracy and superior performance of the method.
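As a hedged illustration of the algorithmic point (a Krylov solver replacing an SOR sweep), the sketch below solves a small sparse test system with SciPy's bicgstab routine; the 1-D Laplacian merely stands in for the much larger voxel-phantom impedance matrix.

```python
# Hedged illustration: solving a sparse linear system with BiCGstab
# (scipy.sparse.linalg.bicgstab) instead of a successive over-relaxation
# sweep. The 1-D Laplacian is only a stand-in test matrix.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

n = 1000
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = bicgstab(A, b)
print("converged" if info == 0 else f"info={info}",
      "residual:", np.linalg.norm(A @ x - b))
```

Unlike SOR, the Krylov iteration needs no relaxation-factor tuning, which is one of the practical advantages the abstract alludes to.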
Tenth NASTRAN User's Colloquium
NASA Technical Reports Server (NTRS)
1982-01-01
The development of the NASTRAN computer program, a general purpose finite element computer code for structural analysis, was discussed. The application and development of NASTRAN is presented in the following topics: improvements and enhancements; developments of pre and postprocessors; interactive review system; the use of harmonic expansions in magnetic field problems; improving a dynamic model with test data using Linwood; solution of axisymmetric fluid structure interaction problems; large displacements and stability analysis of nonlinear propeller structures; prediction of bead area contact load at the tire wheel interface; elastic plastic analysis of an overloaded breech ring; finite element solution of torsion and other 2-D Poisson equations; new capability for elastic aircraft airloads; usage of substructuring analysis in the get away special program; solving symmetric structures with nonsymmetric loads; evaluation and reduction of errors induced by Guyan transformation.
Improving Distributed Diagnosis Through Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Daigle, Matthew John; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino
2011-01-01
Complex engineering systems require efficient fault diagnosis methodologies, but centralized approaches do not scale well, and this motivates the development of distributed solutions. This work presents an event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, by using the structural model decomposition capabilities provided by Possible Conflicts. We develop a distributed diagnosis algorithm that uses residuals computed by extending Possible Conflicts to build local event-based diagnosers based on global diagnosability analysis. The proposed approach is applied to a multitank system, and results demonstrate an improvement in the design of local diagnosers. Since local diagnosers use only a subset of the residuals, and use subsystem models to compute residuals (instead of the global system model), the local diagnosers are more efficient than previously developed distributed approaches.
Development of Software to Model AXAF-I Image Quality
NASA Technical Reports Server (NTRS)
Geary, Joseph; Hawkins, Lamar; Ahmad, Anees; Gong, Qian
1997-01-01
This report describes work conducted on Delivery Order 181 from October 1996 through June 1997. During this period software was written to: compute axial PSD's from RDOS AXAF-I mirror surface maps; plot axial surface errors and compute PSD's from HDOS "Big 8" axial scans; plot PSD's from FITS format PSD files; plot band-limited RMS vs axial and azimuthal position for multiple PSD files; combine and organize PSD's from multiple mirror surface measurements formatted as input to GRAZTRACE; modify GRAZTRACE to read FITS formatted PSD files; evaluate AXAF-I test results; improve and expand the capabilities of the GT x-ray mirror analysis package. During this period work began on a more user-friendly manual for the GT program, and improvements were made to the on-line help manual.
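As a rough illustration of the kind of PSD and band-limited RMS computation described (not the delivery order's actual code, and with an assumed sample spacing), one might estimate a power spectral density from a one-dimensional surface-height profile with Welch's method:

```python
# Hedged sketch: PSD of a 1-D axial surface-height profile and a
# band-limited RMS, using Welch's method. Spacing and amplitudes assumed.
import numpy as np
from scipy.signal import welch

dx = 0.5e-3                                   # m, assumed sample spacing
x = np.arange(0, 0.6, dx)
rng = np.random.default_rng(2)
profile = 2e-9 * np.sin(2 * np.pi * x / 0.01) + 1e-9 * rng.normal(size=x.size)

freq, psd = welch(profile, fs=1.0 / dx, nperseg=256)   # freq in cycles/m
band = (freq > 50) & (freq < 500)
band_rms = np.sqrt(np.sum(psd[band]) * (freq[1] - freq[0]))
print(f"band-limited RMS: {band_rms:.2e} m")
```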
ERIC Educational Resources Information Center
Nee, John G.; Kare, Audhut P.
1987-01-01
Explores several concepts in computer-assisted design/computer-assisted manufacturing (CAD/CAM). Defines, evaluates, reviews and compares advanced computer-aided geometric modeling and analysis techniques. Presents the results of a survey to establish the capabilities of minicomputer-based systems with the CAD/CAM packages evaluated. (CW)
Human factors aspects of control room design
NASA Technical Reports Server (NTRS)
Jenkins, J. P.
1983-01-01
A plan for the design and analysis of a multistation control room is reviewed. It is found that acceptance of the computer-based information system by the users in the control room is mandatory for mission and system success. Criteria to improve the computer/user interface include: match of system input/output with user; reliability, compatibility and maintainability; easy to learn and little training needed; self-descriptive system; system under user control; transparent language, format and organization; corresponds to user expectations; adaptable to user experience level; fault tolerant; dialog capability; user communications needs reflected in flexibility, complexity, power, and information load; integrated system; and documentation.
Remote sensing of environmental impact of land use activities
NASA Technical Reports Server (NTRS)
Paul, C. K.
1977-01-01
The capability to monitor land cover, associated in the past with aerial film cameras and radar systems, was discussed in regard to aircraft and spacecraft multispectral scanning sensors. A proposed thematic mapper with greater spectral and spatial resolutions for the fourth LANDSAT is expected to usher in new environmental monitoring capability. In addition, continuing improvements in image classification by supervised and unsupervised computer techniques are being operationally verified for discriminating environmental impacts of human activities on the land. The benefits of employing remote sensing for this discrimination were shown to far outweigh the incremental costs of converting to an aircraft-satellite multistage system.
NASA Technical Reports Server (NTRS)
Crisp, David; Komar, George (Technical Monitor)
2001-01-01
Advancement of our predictive capabilities will require new scientific knowledge, improvement of our modeling capabilities, and new observation strategies to generate the complex data sets needed by coupled modeling networks. New observation strategies must support remote sensing from a variety of vantage points and will include "sensorwebs" of small satellites in low Earth orbit, large aperture sensors in Geostationary orbits, and sentinel satellites at L1 and L2 to provide day/night views of the entire globe. Onboard data processing and high speed computing and communications will enable near real-time tailoring and delivery of information products (i.e., predictions) directly to users.
Calculating Trajectories And Orbits
NASA Technical Reports Server (NTRS)
Alderson, Daniel J.; Brady, Franklyn H.; Breckheimer, Peter J.; Campbell, James K.; Christensen, Carl S.; Collier, James B.; Ekelund, John E.; Ellis, Jordan; Goltz, Gene L.; Hintz, Gerarld R.;
1989-01-01
Double-Precision Trajectory Analysis Program, DPTRAJ, and Orbit Determination Program, ODP, developed and improved over years to provide highly reliable and accurate navigation capability for deep-space missions like Voyager. Each collection of programs working together to provide desired computational results. DPTRAJ, ODP, and supporting utility programs capable of handling massive amounts of data and performing various numerical calculations required for solving navigation problems associated with planetary fly-by and lander missions. Used extensively in support of NASA's Voyager project. DPTRAJ-ODP available in two machine versions. UNIVAC version, NPO-15586, written in FORTRAN V, SFTRAN, and ASSEMBLER. VAX/VMS version, NPO-17201, written in FORTRAN V, SFTRAN, PL/1 and ASSEMBLER.
Computer aided system engineering for space construction
NASA Technical Reports Server (NTRS)
Racheli, Ugo
1989-01-01
This viewgraph presentation covers the following topics. Construction activities envisioned for the assembly of large platforms in space (as well as interplanetary spacecraft and bases on extraterrestrial surfaces) require computational tools that exceed the capability of conventional construction management programs. The Center for Space Construction is investigating the requirements for new computational tools and, at the same time, suggesting the expansion of graduate and undergraduate curricula to include proficiency in Computer Aided Engineering (CAE) through design courses and individual or team projects in advanced space systems design. In the center's research, special emphasis is placed on problems of constructability and of the interruptability of planned activity sequences to be carried out by crews operating under hostile environmental conditions. The departure point for the planned work is the acquisition of the MCAE I-DEAS software, developed by the Structural Dynamics Research Corporation (SDRC), and its expansion to the level of capability denoted by the acronym IDEAS**2 currently used for configuration maintenance on Space Station Freedom. In addition to improving proficiency in the use of I-DEAS and IDEAS**2, it is contemplated that new software modules will be developed to expand the architecture of IDEAS**2. Such modules will deal with those analyses that require the integration of a space platform's configuration with a breakdown of planned construction activities and with a failure modes analysis to support computer aided system engineering (CASE) applied to space construction.
Devi, D Chitra; Uthariaraj, V Rhymend
2016-01-01
Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint, and hence tasks have to be assigned to the most appropriate VMs at the initial placement itself. Practically, the arrived jobs consist of multiple interdependent tasks, and they may execute the independent tasks on multiple VMs or on the same VM's multiple cores. Also, the jobs arrive during the run time of the server at varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make the cloud computing more efficient and thus improve user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. The performance of the proposed algorithm is studied by comparing it with the existing methods.
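A minimal sketch of the general placement idea (not the authors' specific algorithm) is to put each task on the virtual machine that yields the earliest estimated finish time, given VM capacity in MIPS and task length in instructions; all values below are illustrative.

```python
# Hedged sketch of capability-aware task placement: each task goes to the
# VM with the earliest estimated finish time. Capacities, task lengths, and
# names are illustrative only.
vm_capacity = {"vm1": 1000.0, "vm2": 2500.0, "vm3": 500.0}   # MIPS
tasks = [("t1", 5.0e5), ("t2", 1.2e6), ("t3", 3.0e5), ("t4", 9.0e5)]  # instructions

vm_free = {name: 0.0 for name in vm_capacity}   # time each VM becomes free
schedule = []
for task, length in sorted(tasks, key=lambda t: -t[1]):   # longest task first
    vm = min(vm_free, key=lambda v: vm_free[v] + length / vm_capacity[v])
    finish = vm_free[vm] + length / vm_capacity[vm]
    vm_free[vm] = finish
    schedule.append((task, vm, finish))

for task, vm, finish in schedule:
    print(f"{task} -> {vm}, estimated finish {finish:.1f} s")
```

Handling task interdependencies would additionally require placing a task only after its predecessors' finish times, which the sketch omits.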
The emergence of spatial cyberinfrastructure.
Wright, Dawn J; Wang, Shaowen
2011-04-05
Cyberinfrastructure integrates advanced computer, information, and communication technologies to empower computation-based and data-driven scientific practice and improve the synthesis and analysis of scientific data in a collaborative and shared fashion. As such, it now represents a paradigm shift in scientific research that has facilitated easy access to computational utilities and streamlined collaboration across distance and disciplines, thereby enabling scientific breakthroughs to be reached more quickly and efficiently. Spatial cyberinfrastructure seeks to resolve longstanding complex problems of handling and analyzing massive and heterogeneous spatial datasets as well as the necessity and benefits of sharing spatial data flexibly and securely. This article provides an overview and potential future directions of spatial cyberinfrastructure. The remaining four articles of the special feature are introduced and situated in the context of providing empirical examples of how spatial cyberinfrastructure is extending and enhancing scientific practice for improved synthesis and analysis of both physical and social science data. The primary focus of the articles is spatial analyses using distributed and high-performance computing, sensor networks, and other advanced information technology capabilities to transform massive spatial datasets into insights and knowledge.
Information processing in echo state networks at the edge of chaos.
Boedecker, Joschka; Obst, Oliver; Lizier, Joseph T; Mayer, N Michael; Asada, Minoru
2012-09-01
We investigate information processing in randomly connected recurrent neural networks. It has been shown previously that the computational capabilities of these networks are maximized when the recurrent layer is close to the border between a stable and an unstable dynamics regime, the so-called edge of chaos. The reasons, however, for this maximized performance are not completely understood. We adopt an information-theoretical framework and are for the first time able to quantify the computational capabilities between elements of these networks directly as they undergo the phase transition to chaos. Specifically, we present evidence that both information transfer and storage in the recurrent layer are maximized close to this phase transition, providing an explanation for why guiding the recurrent layer toward the edge of chaos is computationally useful. As a consequence, our study suggests self-organized ways of improving performance in recurrent neural networks, driven by input data. Moreover, the networks we study share important features with biological systems such as feedback connections and online computation on input streams. A key example is the cerebral cortex, which was shown to also operate close to the edge of chaos. Consequently, the behavior of model systems as studied here is likely to shed light on reasons why biological systems are tuned into this specific regime.
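A minimal echo state network sketch makes the "edge of chaos" knob concrete: the recurrent weight matrix is rescaled to a chosen spectral radius, with values near 1.0 placing the reservoir close to the stability boundary. Sizes and the leak parameter are illustrative, not taken from the paper.

```python
# Minimal sketch of an echo state network reservoir whose recurrent weight
# matrix is rescaled to a chosen spectral radius, the knob that moves the
# dynamics toward or away from the edge of chaos.
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 200, 1
spectral_radius = 0.95                       # values near 1.0 ~ near the edge

W = rng.normal(size=(n_res, n_res))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))

def run_reservoir(u_seq, leak=1.0):
    """Drive the reservoir with an input sequence and return all states."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

states = run_reservoir(np.sin(np.linspace(0, 8 * np.pi, 400)))
print(states.shape)   # (400, 200)
```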
Computer search for binary cyclic UEP codes of odd length up to 65
NASA Technical Reports Server (NTRS)
Lin, Mao-Chao; Lin, Chi-Chang; Lin, Shu
1990-01-01
Using an exhaustive computation, the unequal error protection capabilities of all binary cyclic codes of odd length up to 65 that have minimum distances at least 3 are found. For those codes that can only have upper bounds on their unequal error protection capabilities computed, an analytic method developed by Dynkin and Togonidze (1976) is used to show that the upper bounds meet the exact unequal error protection capabilities.
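The exhaustive computation can be illustrated in miniature: enumerate every codeword of a small binary cyclic code and record, for each message position, the minimum weight over codewords whose message bit at that position is nonzero (one common definition of the separation vector). The (7,4) cyclic Hamming code with g(x) = 1 + x + x^3 is used below purely as a toy example; the real searches over all odd lengths up to 65 are larger versions of the same loop.

```python
# Hedged sketch: exhaustive separation-vector computation for a small
# binary cyclic code, here the (7,4) cyclic Hamming code.
from itertools import product

n, k = 7, 4
g = [1, 1, 0, 1]                       # coefficients of g(x), low order first
G = [[0] * i + g + [0] * (n - len(g) - i) for i in range(k)]

separation = [n] * k
for m in product((0, 1), repeat=k):
    if not any(m):
        continue
    c = [sum(m[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
    w = sum(c)
    for i in range(k):
        if m[i]:
            separation[i] = min(separation[i], w)

print("separation vector:", separation)   # all 3's: equal error protection
```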
High performance flight computer developed for deep space applications
NASA Technical Reports Server (NTRS)
Bunker, Robert L.
1993-01-01
The development of an advanced space flight computer for real time embedded deep space applications which embodies the lessons learned on Galileo and modern computer technology is described. The requirements are listed and the design implementation that meets those requirements is described. The development of SPACE-16 (Spaceborne Advanced Computing Engine) (where 16 designates the databus width) was initiated to support the MM2 (Mariner Mark II) project. The computer is based on a radiation hardened emulation of a modern 32 bit microprocessor and its family of support devices including a high performance floating point accelerator. Additional custom devices which include a coprocessor to improve input/output capabilities, a memory interface chip, and an additional support chip that provide management of all fault tolerant features, are described. Detailed supporting analyses and rationale which justifies specific design and architectural decisions are provided. The six chip types were designed and fabricated. Testing and evaluation of a brassboard was initiated.
Enabling Disabled Persons to Gain Access to Digital Media
NASA Technical Reports Server (NTRS)
Beach, Glenn; O'Grady, Ryan
2011-01-01
A report describes the first phase in an effort to enhance the NaviGaze software to enable profoundly disabled persons to operate computers. (Running on a Windows-based computer equipped with a video camera aimed at the user's head, the original NaviGaze software processes the user's head movements and eye blinks into cursor movements and mouse clicks to enable hands-free control of the computer.) To accommodate large variations in movement capabilities among disabled individuals, one of the enhancements was the addition of a graphical user interface for selection of parameters that affect the way the software interacts with the computer and tracks the user's movements. Tracking algorithms were improved to reduce sensitivity to rotations and reduce the likelihood of tracking the wrong features. Visual feedback to the user was improved to provide an indication of the state of the computer system. It was found that users can quickly learn to use the enhanced software, performing single clicks, double clicks, and drags within minutes of first use. Available programs that could increase the usability of NaviGaze were identified. One of these enables entry of text by using NaviGaze as a mouse to select keys on a virtual keyboard.
Shulaker, Max M; Hills, Gage; Patil, Nishant; Wei, Hai; Chen, Hong-Yu; Wong, H-S Philip; Mitra, Subhasish
2013-09-26
The miniaturization of electronic devices has been the principal driving force behind the semiconductor industry, and has brought about major improvements in computational power and energy efficiency. Although advances with silicon-based electronics continue to be made, alternative technologies are being explored. Digital circuits based on transistors fabricated from carbon nanotubes (CNTs) have the potential to outperform silicon by improving the energy-delay product, a metric of energy efficiency, by more than an order of magnitude. Hence, CNTs are an exciting complement to existing semiconductor technologies. Owing to substantial fundamental imperfections inherent in CNTs, however, only very basic circuit blocks have been demonstrated. Here we show how these imperfections can be overcome, and demonstrate the first computer built entirely using CNT-based transistors. The CNT computer runs an operating system that is capable of multitasking: as a demonstration, we perform counting and integer-sorting simultaneously. In addition, we implement 20 different instructions from the commercial MIPS instruction set to demonstrate the generality of our CNT computer. This experimental demonstration is the most complex carbon-based electronic system yet realized. It is a considerable advance because CNTs are prominent among a variety of emerging technologies that are being considered for the next generation of highly energy-efficient electronic systems.
An overview of the F-117A avionics flight test program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silz, R.
1992-02-01
This paper is an overview of the history of the F-117A avionics flight test program. System design concepts and equipment selections are explored followed by a review of full scale development and full capability development testing. Flight testing the Weapon System Computational Subsystem upgrade and the Offensive Combat Improvement Program are reviewed. Current flight test programs and future system updates are highlighted.
CFD Code Survey for Thrust Chamber Application
NASA Technical Reports Server (NTRS)
Gross, Klaus W.
1990-01-01
In the quest to find analytical reference codes, responses from a questionnaire are presented which portray the current computational fluid dynamics (CFD) program status and capability at various organizations, characterizing liquid rocket thrust chamber flow fields. Sample cases are identified to examine the ability, operational condition, and accuracy of the codes. To select the best-suited programs for accelerated improvements, evaluation criteria are being proposed.
Advanced Computer Image Generation Techniques Exploiting Perceptual Characteristics
1981-08-01
the capabilities/limitations of the human visual perceptual processing system and improve the training effectiveness of visual simulation systems...Myron Braunstein of the University of California at Irvine performed all the work in the perceptual area. Mr. Timothy A. Zimmerlin contributed the... work. Thus, while some areas are related, each is resolved independently in order to focus on the basic perceptual limitation. In addition, the
Approaches and possible improvements in the area of multibody dynamics modeling
NASA Technical Reports Server (NTRS)
Lips, K. W.; Singh, R.
1987-01-01
A wide ranging look is taken at issues involved in the dynamic modeling of complex, multibodied orbiting space systems. Capabilities and limitations of two major codes (DISCOS, TREETOPS) are assessed and possible extensions to the CONTOPS software are outlined. In addition, recommendations are made concerning the direction future development should take in order to achieve higher fidelity, more computationally efficient multibody software solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ritchie, L.T.; Alpert, D.J.; Burke, R.P.
1984-03-01
The CRAC2 computer code is a revised version of CRAC (Calculation of Reactor Accident Consequences) which was developed for the Reactor Safety Study. This document provides an overview of the CRAC2 code and a description of each of the models used. Significant improvements incorporated into CRAC2 include an improved weather sequence sampling technique, a new evacuation model, and new output capabilities. In addition, refinements have been made to the atmospheric transport and deposition model. Details of the modeling differences between CRAC2 and CRAC are emphasized in the model descriptions.
On central-difference and upwind schemes
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, Eli
1990-01-01
A class of numerical dissipation models for central-difference schemes constructed with second- and fourth-difference terms is considered. The notion of matrix dissipation associated with upwind schemes is used to establish improved shock capturing capability for these models. In addition, conditions are given that guarantee that such dissipation models produce a Total Variation Diminishing (TVD) scheme. Appropriate switches for this type of model to ensure satisfaction of the TVD property are presented. Significant improvements in the accuracy of a central-difference scheme are demonstrated by computing both inviscid and viscous transonic airfoil flows.
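To make the dissipation model concrete, here is a scalar 1-D sketch of blended second- and fourth-difference dissipation with a pressure-based switch in the spirit of such schemes; the spectral-radius scaling is omitted and the coefficients are typical illustrative values, not those of the paper.

```python
# Minimal 1-D sketch of blended second/fourth-difference dissipation with a
# pressure-based switch. Scalar form only; the usual spectral-radius scaling
# is omitted, and k2, k4 are illustrative values.
import numpy as np

def scalar_dissipation(u, p, k2=0.5, k4=1.0 / 32.0):
    """Return d[i+1/2], the dissipative flux added to a central scheme."""
    nu = np.zeros_like(p)
    nu[1:-1] = np.abs(p[2:] - 2 * p[1:-1] + p[:-2]) / (p[2:] + 2 * p[1:-1] + p[:-2])
    d = np.zeros(len(u) - 1)
    for i in range(1, len(u) - 2):
        eps2 = k2 * max(nu[i], nu[i + 1])      # shock sensor switch
        eps4 = max(0.0, k4 - eps2)             # 4th-difference term off near shocks
        d[i] = (eps2 * (u[i + 1] - u[i])
                - eps4 * (u[i + 2] - 3 * u[i + 1] + 3 * u[i] - u[i - 1]))
    return d

u = np.linspace(0.0, 1.0, 50); u[25:] += 1.0   # a jump in the solution
p = 1.0 + u                                    # surrogate "pressure"
print(scalar_dissipation(u, p)[20:30])
```

The second-difference term dominates near the jump while the fourth-difference term provides background smoothing in smooth regions, which is the behavior the switch conditions in the paper are designed to guarantee.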
NASA Astrophysics Data System (ADS)
Xu, Xiaoyang; Deng, Xiao-Long
2016-04-01
In this paper, an improved weakly compressible smoothed particle hydrodynamics (SPH) method is proposed to simulate transient free surface flows of viscous and viscoelastic fluids. The improved SPH algorithm includes the implementation of (i) the mixed symmetric correction of kernel gradient to improve the accuracy and stability of the traditional SPH method and (ii) the Rusanov flux in the continuity equation for improving the computation of pressure distributions in the dynamics of liquids. To assess the effectiveness of the improved SPH algorithm, a number of numerical examples including the stretching of an initially circular water drop, dam breaking flow against a vertical wall, the impact of viscous and viscoelastic fluid drops with a rigid wall, and the extrudate swell of viscoelastic fluid have been presented and compared with available numerical and experimental data in the literature. The convergent behavior of the improved SPH algorithm has also been studied by using different numbers of particles. All numerical results demonstrate that the improved SPH algorithm proposed here is capable of modeling free surface flows of viscous and viscoelastic fluids accurately and stably and, even more importantly, of computing an accurate and nearly oscillation-free pressure field.
USM3D Analysis of Low Boom Configuration
NASA Technical Reports Server (NTRS)
Carter, Melissa B.; Campbell, Richard L.; Nayani, Sudheer N.
2011-01-01
In the past few years considerable improvement has been made in NASA's in-house boom prediction capability. As part of this improved capability, the USM3D Navier-Stokes flow solver, when combined with a suitable unstructured grid, went from accurately predicting boom signatures at 1 body length to 10 body lengths. Since that time, the research emphasis has shifted from analysis to the design of supersonic configurations with boom signature mitigation. In order to design an aircraft, the techniques for accurately predicting boom and drag need to be determined. This paper compares CFD results with wind tunnel experimental results from tests conducted on a Gulfstream reduced boom and drag configuration. Two different wind-tunnel models were designed and tested for drag and boom data. The goal of this study was to assess USM3D capability for predicting both boom and drag characteristics. Overall, USM3D coupled with a grid that was sheared and stretched was able to reasonably predict the boom signature. The computational drag polar matched the experimental results for lift coefficients above 0.1 despite some mismatch in the predicted lift-curve slope.
Development of Semi-Span Model Test Techniques
NASA Technical Reports Server (NTRS)
Pulnam, L. Elwood (Technical Monitor); Milholen, William E., II; Chokani, Ndaona; McGhee, Robert J.
1996-01-01
A computational investigation was performed to support the development of a semi-span model test capability in the NASA Langley Research Center's National Transonic Facility. This capability is desirable for the testing of advanced subsonic transport aircraft at full-scale Reynolds numbers. A state-of-the-art three-dimensional Navier-Stokes solver was used to examine methods to improve the flow over a semi-span configuration. First, a parametric study is conducted to examine the influence of the stand-off height on the flow over the semi-span model. It is found that decreasing the stand-off height, below the maximum fuselage radius, improves the aerodynamic characteristics of the semi-span model. Next, active sidewall boundary layer control techniques are examined. Juncture region blowing jets, upstream tangential blowing, and sidewall suction are found to improve the flow over the aft portion of the semi-span model. Both upstream blowing and suction are found to reduce the sidewall boundary layer separation. The resulting near surface streamline patterns are improved, and found to be quite similar to the full-span results. Both techniques however adversely affect the pitching moment coefficient.
LaRC local area networks to support distributed computing
NASA Technical Reports Server (NTRS)
Riddle, E. P.
1984-01-01
The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, work stations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there was a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the workload on the central resources increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, work stations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.
Computer Support Systems for Estimating Chemical Toxicity: Present Capabilities and Future Trends
A wide variety of computer-based artificial intelligence (AI) and decision support systems exist currently to aid in the assessment of toxicity for environmental chemicals. T...
Modal analysis and dynamic stresses for acoustically excited shuttle insulation tiles
NASA Technical Reports Server (NTRS)
Ojalvo, I. U.; Ogilvie, P. L.
1975-01-01
Improvements and extensions to the RESIST computer program developed for determining the normalized modal stress response of shuttle insulation tiles are described. The new version of RESIST can accommodate primary structure panels with closed-cell stringers, in addition to the capability for treating open-cell stringers. In addition, the present version of RESIST numerically solves vibration problems several times faster than its predecessor. A new digital computer program, titled ARREST (Acoustic Response of Reusable Shuttle Tiles) is also described. Starting with modal information contained on output tapes from RESIST computer runs, ARREST determines RMS stresses, deflections and accelerations of shuttle panels with reusable surface insulation tiles. Both programs are applicable to stringer stiffened structural panels with or without reusable surface insulation tiles.
Computer graphics for management: An abstract of capabilities and applications of the EIS system
NASA Technical Reports Server (NTRS)
Solem, B. J.
1975-01-01
The Executive Information Services (EIS) system, developed as a computer-based, time-sharing tool for making and implementing management decisions, and including computer graphics capabilities, was described. The following resources are available through the EIS languages: centralized corporate/gov't data base, customized and working data bases, report writing, general computational capability, specialized routines, modeling/programming capability, and graphics. Nearly all EIS graphs can be created by a single, on-line instruction. A large number of options are available, such as selection of graphic form, line control, shading, placement on the page, multiple images on a page, control of scaling and labeling, plotting of cum data sets, optical grid lines, and stack charts. The following are examples of areas in which the EIS system may be used: research, estimating services, planning, budgeting, and performance measurement, national computer hook-up negotiations.
Banta, Edward R.; Poeter, Eileen P.; Doherty, John E.; Hill, Mary C.
2006-01-01
The Joint Universal Parameter IdenTification and Evaluation of Reliability Application Programming Interface (JUPITER API) improves the computer programming resources available to those developing applications (computer programs) for model analysis. The JUPITER API consists of eleven Fortran-90 modules that provide for encapsulation of data and operations on that data. Each module contains one or more entities: data, data types, subroutines, functions, and generic interfaces. The modules do not constitute computer programs themselves; instead, they are used to construct computer programs. Such computer programs are called applications of the API. The API provides common modeling operations for use by a variety of computer applications. The models being analyzed are referred to here as process models, and may, for example, represent the physics, chemistry, and(or) biology of a field or laboratory system. Process models commonly are constructed using published models such as MODFLOW (Harbaugh et al., 2000; Harbaugh, 2005), MT3DMS (Zheng and Wang, 1996), HSPF (Bicknell et al., 1997), PRMS (Leavesley and Stannard, 1995), and many others. The process model may be accessed by a JUPITER API application as an external program, or it may be implemented as a subroutine within a JUPITER API application. In either case, execution of the model takes place in a framework designed by the application programmer. This framework can be designed to take advantage of any parallel processing capabilities possessed by the process model, as well as the parallel-processing capabilities of the JUPITER API. Model analyses for which the JUPITER API could be useful include, for example: comparing model results to observed values to determine how well the model reproduces system processes and characteristics; using sensitivity analysis to determine the information provided by observations to parameters and predictions of interest; determining the additional data needed to improve selected model predictions; using calibration methods to modify parameter values and other aspects of the model; comparing predictions to regulatory limits; quantifying the uncertainty of predictions based on the results of one or many simulations using inferential or Monte Carlo methods; and determining how to manage the system to achieve stated objectives. The capabilities provided by the JUPITER API include, for example, communication with process models, parallel computations, compressed storage of matrices, and flexible input capabilities. The input capabilities use input blocks suitable for lists or arrays of data. The input blocks needed for one application can be included within one data file or distributed among many files. Data exchange between different JUPITER API applications or between applications and other programs is supported by data-exchange files. The JUPITER API has already been used to construct a number of applications. Three simple example applications are presented in this report. More complicated applications include the universal inverse code UCODE_2005 (Poeter et al., 2005), the multi-model analysis MMA (Eileen P. Poeter, Mary C. Hill, E.R. Banta, S.W. Mehl, and Steen Christensen, written commun., 2006), and a code named OPR_PPR (Matthew J. Tonkin, Claire R. Tiedeman, Mary C. Hill, and D. Matthew Ely, written communication, 2006). This report describes a set of underlying organizational concepts and complete specifics about the JUPITER API.
While understanding the organizational concept presented is useful to understanding the modules, other organizational concepts can be used in applications constructed using the JUPITER API.
The Transfer Function Model as a Tool to Study and Describe Space Weather Phenomena
NASA Technical Reports Server (NTRS)
Porter, Hayden S.; Mayr, Hans G.; Bhartia, P. K. (Technical Monitor)
2001-01-01
The Transfer Function Model (TFM) is a semi-analytical, linear model that is designed especially to describe thermospheric perturbations associated with magnetic storms and substorm activity. It is a multi-constituent model (N2, O, He, H, Ar) that accounts for wind-induced diffusion, which significantly affects not only the composition and mass density but also the temperature and wind fields. Because the TFM adopts a semianalytic approach in which the geometry and temporal dependencies of the driving sources are removed through the use of height-integrated Green's functions, it provides physical insight into the essential properties of the processes being considered, uncluttered by the accidental complexities that arise from particular source geometries and time dependences. Extending from the ground to 700 km, the TFM eliminates spurious effects due to arbitrarily chosen boundary conditions. A database of transfer functions, computed only once, can be used to synthesize a wide range of spatial and temporal source dependencies. The response synthesis can be performed quickly in real-time using only limited computing capabilities. These features make the TFM unique among global dynamical models. Given these desirable properties, a version of the TFM has been developed for personal computers (PC) using advanced platform-independent 3D visualization capabilities. We demonstrate the model capabilities with simulations for different auroral sources, including the response of ducted gravity wave modes that propagate around the globe. The thermospheric response is found to depend strongly on the spatial and temporal frequency spectra of the storm. Such varied behavior is difficult to describe in statistical empirical models. To improve the capability of space weather prediction, the TFM thus could be grafted naturally onto existing statistical models using data assimilation.
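The synthesis step lends itself to a compact illustration: once a transfer (Green's) function has been precomputed, the response to an arbitrary source time history is a discrete convolution, which is cheap enough to run in real time on a personal computer. The kernel and forcing below are placeholders, not TFM output.

```python
# Hedged sketch of response synthesis from a precomputed transfer function:
# the thermospheric response to an arbitrary source history is obtained by
# convolution. Kernel and forcing are illustrative placeholders only.
import numpy as np

dt = 60.0                                   # s, illustrative time step
t = np.arange(0, 6 * 3600, dt)
green = np.exp(-t / 1800.0) / 1800.0        # placeholder transfer function
source = np.zeros_like(t)
source[(t > 3600) & (t < 2 * 3600)] = 1.0   # one-hour substorm-like forcing

response = np.convolve(source, green)[: len(t)] * dt
print("peak response:", response.max())
```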
Spielberg, Freya; Kurth, Ann; Reidy, William; McKnight, Teka; Dikobe, Wame; Wilson, Charles
2011-06-01
This article highlights findings from an evaluation that explored the impact of mobile versus clinic-based testing, rapid versus central-lab based testing, incentives for testing, and the use of a computer counseling program to guide counseling and automate evaluation in a mobile program reaching people of color at risk for HIV. The program's results show that an increased focus on mobile outreach using rapid testing, incentives and health information technology tools may improve program acceptability, quality, productivity and timeliness of reports. This article describes program design decisions based on continuous quality assessment efforts. It also examines the impact of the Computer Assessment and Risk Reduction Education computer tool on HIV testing rates, staff perception of counseling quality, program productivity, and on the timeliness of evaluation reports. The article concludes with a discussion of implications for programmatic responses to the Centers for Disease Control and Prevention's HIV testing recommendations.
Plant, Richard R; Turner, Garry
2009-08-01
Since the publication of Plant, Hammond, and Turner (2004), which highlighted a pressing need for researchers to pay more attention to sources of error in computer-based experiments, the landscape has undoubtedly changed, but not necessarily for the better. Readily available hardware has improved in terms of raw speed; multi-core processors abound; graphics cards now have hundreds of megabytes of RAM; main memory is measured in gigabytes; drive space is measured in terabytes; ever larger thin film transistor displays capable of single-digit response times, together with newer Digital Light Processing multimedia projectors, enable much greater graphic complexity; and new 64-bit operating systems, such as Microsoft Vista, are now commonplace. However, have millisecond-accurate presentation and response timing improved, and will they ever be available in commodity computers and peripherals? In the present article, we used a Black Box ToolKit to measure the variability in timing characteristics of hardware used commonly in psychological research.
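The software side of such timing measurements can be sketched in a few lines: repeatedly time a nominal one-frame interval and report the jitter. A hardware device such as the Black Box ToolKit measures the true external timing; this sketch only shows what commodity software timers report.

```python
# Hedged sketch of software-side timing-jitter measurement. A stand-in
# sleep() replaces the real "present one frame" operation; external hardware
# is still needed to verify what actually reached the display.
import time
import statistics

target_ms = 16.667                     # one frame at 60 Hz
samples = []
for _ in range(200):
    t0 = time.perf_counter()
    time.sleep(target_ms / 1000.0)     # stand-in for presenting one frame
    samples.append((time.perf_counter() - t0) * 1000.0)

print(f"mean  : {statistics.mean(samples):.3f} ms")
print(f"stdev : {statistics.stdev(samples):.3f} ms")
print(f"worst : {max(samples):.3f} ms")
```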
Results of solar electric thrust vector control system design, development and tests
NASA Technical Reports Server (NTRS)
Fleischer, G. E.
1973-01-01
Efforts to develop and test a thrust vector control system (TVCS) for a solar-energy-powered ion engine array are described. The results of solar electric propulsion system technology (SEPST) III real-time tests of present versions of TVCS hardware in combination with computer-simulated attitude dynamics of a solar electric multi-mission spacecraft (SEMMS) Phase A-type spacecraft configuration are summarized. Work on an improved solar electric TVCS, based on the use of a state estimator, is described. SEPST III tests of TVCS hardware have generally proved successful, and the dynamic response of the system is close to predictions. It appears that, if TVCS electronic hardware can be effectively replaced by control computer software, a significant advantage in control capability and flexibility can be gained in future developmental testing, with practical implications for flight systems as well. Finally, it is concluded from computer simulations that TVCS stabilization using rate estimation promises a substantial performance improvement over the present design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, D.G.; Watkins, J.C.
This report documents an evaluation of the TRAC-PF1/MOD1 reactor safety analysis computer code during computer simulations of feedwater line break transients. The experimental data base for the evaluation included the results of three bottom feedwater line break tests performed in the Semiscale Mod-2C test facility. The tests modeled 14.3% (S-FS-7), 50% (S-FS-11), and 100% (S-FS-6B) breaks. The test facility and the TRAC-PF1/MOD1 model used in the calculations are described. Evaluations of the accuracy of the calculations are presented in the form of comparisons of measured and calculated histories of selected parameters associated with the primary and secondary systems. In addition to evaluating the accuracy of the code calculations, the computational performance of the code during the simulations was assessed. A conclusion was reached that the code is capable of making feedwater line break transient calculations efficiently, but there is room for significant improvements in the simulations that were performed. Recommendations are made for follow-on investigations to determine how to improve future feedwater line break calculations and for code improvements to make the code easier to use.
Simulation study of a new inverse-pinch high Coulomb transfer switch
NASA Technical Reports Server (NTRS)
Choi, S. H.
1984-01-01
A simulation study of a simplified model of a high coulomb transfer switch is performed. The switch operates in an inverse pinch geometry formed by an all metal chamber, which greatly reduces hot spot formations on the electrode surfaces. Advantages of the switch over conventional switches are longer useful life, higher current capability, and lower inductance, which improve the characteristics required for a high repetition rate switch. The simulation determines the design parameters by analytical computations and comparison with the experimentally measured risetime, current handling capability, electrode damage, and hold-off voltages. The parameters of the initial switch design can be determined for the anticipated switch performance. The results are in agreement with the experimental results. Although the model is simplified, the switch characteristics such as risetime, current handling capability, electrode damage, and hold-off voltages are accurately determined.
A research study for the preliminary definition of an aerophysics free-flight laboratory facility
NASA Technical Reports Server (NTRS)
Canning, Thomas N.
1988-01-01
A renewed interest in hypervelocity vehicles requires an increase in the knowledge of aerodynamic phenomena. Tests conducted with ground-based facilities can be used both to better understand the physics of hypervelocity flight, and to calibrate and validate computer codes designed to predict vehicle performance in the hypervelocity environment. This research reviews the requirements for aerothermodynamic testing and discusses the ballistic range and its capabilities. Examples of the kinds of testing performed in typical high performance ballistic ranges are described. We draw heavily on experience obtained in the ballistics facilities at NASA Ames Research Center, Moffett Field, California. Prospects for improving the capabilities of the ballistic range by using advanced instrumentation are discussed. Finally, recent developments in gun technology and their application to extend the capability of the ballistic range are summarized.
Reacting Multi-Species Gas Capability for USM3D Flow Solver
NASA Technical Reports Server (NTRS)
Frink, Neal T.; Schuster, David M.
2012-01-01
The USM3D Navier-Stokes flow solver contributed heavily to the NASA Constellation Project (CxP) as a highly productive computational tool for generating the aerodynamic databases for the Ares I and V launch vehicles and Orion launch abort vehicle (LAV). USM3D is currently limited to ideal-gas flows, which are not adequate for modeling the chemistry or temperature effects of hot-gas jet flows. This task was initiated to create an efficient implementation of multi-species gas and equilibrium chemistry into the USM3D code to improve its predictive capabilities for hot jet impingement effects. The goal of this NASA Engineering and Safety Center (NESC) assessment was to implement and validate a simulation capability to handle real-gas effects in the USM3D code. This document contains the outcome of the NESC assessment.
Telecommunication Networks. Tech Use Guide: Using Computer Technology.
ERIC Educational Resources Information Center
Council for Exceptional Children, Reston, VA. Center for Special Education Technology.
One of nine brief guides for special educators on using computer technology, this guide focuses on utilizing the telecommunications capabilities of computers. Network capabilities including electronic mail, bulletin boards, and access to distant databases are briefly explained. Networks useful to the educator, general commercial systems, and local…
Conceptual design for aerospace vehicles
NASA Technical Reports Server (NTRS)
Gratzer, Louis B.
1989-01-01
The designers of aircraft and, more recently, aerospace vehicles have always struggled with the problem of evolving their designs to produce a machine which would perform its assigned task(s) in some optimum fashion. Almost invariably this involved dealing with more variables and constraints than could be handled in any computationally feasible way. With the advent of the electronic digital computer, the possibilities for introducing more variables and constraints into the initial design process led to greater expectations for improvement in vehicle (system) efficiency. The creation of the large-scale systems necessary to achieve optimum designs has, for many reasons, proved to be difficult. From a technical standpoint, significant problems arise in the development of satisfactory algorithms for processing of data from the various technical disciplines in a way that would be compatible with the complex optimization function. Also, the creation of effective optimization routines for multi-variable and constraint situations which could lead to consistent results has lagged. The current capability for carrying out the conceptual design of an aircraft on an interdisciplinary basis was evaluated to determine the need for extending this capability and, if necessary, to recommend means by which this could be carried out. Based on a review of available documentation and individual consultations, it appears that there is extensive interest at Langley Research Center as well as in the aerospace community in providing a higher level of capability that meets the technical challenges. By implication, the current design capability is inadequate and does not operate in a way that allows the various technical disciplines to participate and cooperatively interact in the design process. Based on this assessment, it was concluded that substantial effort should be devoted to developing a computer-based conceptual design system that would provide the capability needed for the near term as well as a framework for development of more advanced methods to serve future needs.
NASA Technical Reports Server (NTRS)
Esgar, J. B.; Sokolowski, Daniel E.
1989-01-01
The Hot Section Technology (HOST) Project, which was initiated by NASA Lewis Research Center in 1980 and concluded in 1987, was aimed at improving advanced aircraft engine hot section durability through better technical understanding and more accurate design analysis capability. The project was a multidisciplinary, multiorganizational, focused research effort that involved 21 organizations and 70 research and technology activities and generated approximately 250 research reports. No major hardware was developed. To evaluate whether HOST had a significant impact on the overall aircraft engine industry in the development of new engines, interviews were conducted with 41 participants in the project to obtain their views. The summarized results of these interviews are presented. Emphasis is placed on results relative to three-dimensional inelastic structural analysis, thermomechanical fatigue testing, constitutive modeling, combustor aerothermal modeling, turbine heat transfer, protective coatings, computer codes, improved engine design capability, reduced engine development costs, and the impacts on technology transfer and the industry-government partnership.
Processing of Cryo-EM Movie Data.
Ripstein, Z A; Rubinstein, J L
2016-01-01
Direct detector device (DDD) cameras dramatically enhance the capabilities of electron cryomicroscopy (cryo-EM) due to their improved detective quantum efficiency (DQE) relative to other detectors. DDDs use semiconductor technology that allows micrographs to be recorded as movies rather than integrated individual exposures. Movies from DDDs improve cryo-EM in another, more surprising, way. DDD movies revealed beam-induced specimen movement as a major source of image degradation and provide a way to partially correct the problem by aligning frames or regions of frames to account for this specimen movement. In this chapter, we use a self-consistent mathematical notation to explain, compare, and contrast several of the most popular existing algorithms for computationally correcting specimen movement in DDD movies. We conclude by discussing future developments in algorithms for processing DDD movies that would extend the capabilities of cryo-EM even further. © 2016 Elsevier Inc. All rights reserved.
Perspective: Advanced particle imaging
Chandler, David W.; Houston, Paul L.; Parker, David H.
2017-05-26
Since the first ion imaging experiment demonstrated the capability of collecting an image of the photofragments from a unimolecular dissociation event and analyzing that image to obtain the three-dimensional velocity distribution of the fragments, the efficacy and breadth of application of the ion imaging technique have continued to improve and grow. With the addition of velocity mapping, ion/electron centroiding, and slice imaging techniques, the versatility and velocity resolution have been unmatched. Recent improvements in molecular beam, laser, sensor, and computer technology are allowing even more advanced particle imaging experiments, and eventually we can expect multi-mass imaging with co-variance and full coincidence capability on a single-shot basis with repetition rates in the kilohertz range. This progress should further enable "complete" experiments, the holy grail of molecular dynamics, where all quantum numbers of reactants and products of a bimolecular scattering event are fully determined and even under our control.
FY17 Status Report on NEAMS Neutronics Activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C. H.; Jung, Y. S.; Smith, M. A.
2017-09-30
Under the U.S. DOE NEAMS program, the high-fidelity neutronics code system has been developed to support the multiphysics modeling and simulation capability named SHARP. The neutronics code system includes the high-fidelity neutronics code PROTEUS, the cross section library and preprocessing tools, the multigroup cross section generation code MC2-3, the in-house mesh generation tool, the perturbation and sensitivity analysis code PERSENT, and post-processing tools. The main objectives of the NEAMS neutronics activities in FY17 are to continue development of an advanced nodal solver in PROTEUS for use in nuclear reactor design and analysis projects, implement a simplified sub-channel based thermal-hydraulic (T/H) capability into PROTEUS to efficiently compute the thermal feedback, improve the performance of PROTEUS-MOCEX using numerical acceleration and code optimization, improve the cross section generation tools including MC2-3, and continue to perform verification and validation tests for PROTEUS.
Modeling the Cloud to Enhance Capabilities for Crises and Catastrophe Management
2016-11-16
...order for cloud computing infrastructures to be successfully deployed in real world scenarios as tools for crisis and catastrophe management, where... Statement of the Problem Studied: As cloud computing becomes the dominant computational infrastructure [1] and cloud technologies make a transition to hosting... 1. Formulate rigorous mathematical models representing technological capabilities and resources in cloud computing for performance modeling and...
NASA Technical Reports Server (NTRS)
Park, Michael A.; Krakos, Joshua A.; Michal, Todd; Loseille, Adrien; Alonso, Juan J.
2016-01-01
Unstructured grid adaptation is a powerful tool to control discretization error for Computational Fluid Dynamics (CFD). It has enabled key increases in the accuracy, automation, and capacity of some fluid simulation applications. Slotnick et al. provide a number of case studies in the CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences to illustrate the current state of CFD capability and capacity. The authors forecast the potential impact of emerging High Performance Computing (HPC) environments in the year 2030 and identify that mesh generation and adaptivity continue to be significant bottlenecks in the CFD workflow. These bottlenecks may persist because very little government investment has been targeted in these areas. To motivate investment, the impacts of improved grid adaptation technologies are identified. The CFD Vision 2030 Study roadmap and anticipated capabilities in complementary disciplines are quoted to provide context for the progress made in grid adaptation in the past fifteen years, the current status, and a forecast for the next fifteen years with recommended investments. These investments are specific to mesh adaptation and impact other aspects of the CFD process. Finally, a strategy is identified to diffuse grid adaptation technology into production CFD workflows.
NASA Technical Reports Server (NTRS)
Liever, Peter A.; West, Jeffrey S.
2016-01-01
A hybrid Computational Fluid Dynamics and Computational Aero-Acoustics (CFD/CAA) modeling framework has been developed for launch vehicle liftoff acoustic environment predictions. The framework couples the existing highly-scalable NASA production CFD code, Loci/CHEM, with a high-order accurate discontinuous Galerkin solver developed in the same production framework, Loci/THRUST, to accurately resolve and propagate acoustic physics across the entire launch environment. Time-accurate, Hybrid RANS/LES CFD modeling is applied for predicting the acoustic generation physics at the plume source, and a high-order accurate unstructured discontinuous Galerkin (DG) method is employed to propagate acoustic waves away from the source across large distances using high-order accurate schemes. The DG solver is capable of solving 2nd, 3rd, and 4th order Euler solutions for non-linear, conservative acoustic field propagation. Initial application testing and validation has been carried out against high resolution acoustic data from the Ares Scale Model Acoustic Test (ASMAT) series to evaluate the capabilities and production readiness of the CFD/CAA system to resolve the observed spectrum of acoustic frequency content. This paper presents results from this validation and outlines efforts to mature and improve the computational simulation framework.
Report of the Panel on Computer and Information Technology
NASA Technical Reports Server (NTRS)
Lundstrom, Stephen F.; Larsen, Ronald L.
1984-01-01
Aircraft have become more and more dependent on computers (information processing) for improved performance and safety. It is clear that this activity will grow, since information processing technology has advanced by a factor of 10 every 5 years for the past 35 years and will continue to do so. Breakthroughs in device technology, from vacuum tubes through transistors to integrated circuits, contribute to this rapid pace. This progress is nearly matched by similar, though not as dramatic, advances in numerical software and algorithms. Progress has not been easy; many technical and nontechnical challenges were surmounted. The outlook is for continued growth in capability, but realizing it will require surmounting new challenges. The technology forecast presented in this report has been developed by extrapolating current trends and assessing the possibilities of several high-risk research topics. In the process, critical problem areas that require research and development emphasis have been identified. The outlook assumes a positive perspective: the projected capabilities are possible by the year 2000, provided adequate resources are made available to achieve them. Computer and information technology forecasts and the potential impacts of this technology on aeronautics are identified. Critical issues and technical challenges underlying the achievement of forecasted performance and benefits are addressed.
The Coastal Ocean Prediction Systems program: Understanding and managing our coastal ocean
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eden, H.F.; Mooers, C.N.K.
1990-06-01
The goal of COPS is to couple a program of regular observations to numerical models, through techniques of data assimilation, in order to provide a predictive capability for the US coastal ocean, including the Great Lakes, estuaries, and the entire Exclusive Economic Zone (EEZ). The objectives of the program include: determining the predictability of the coastal ocean and the processes that govern the predictability; developing efficient prediction systems for the coastal ocean based on the assimilation of real-time observations into numerical models; and coupling the predictive systems for the physical behavior of the coastal ocean to predictive systems for biological, chemical, and geological processes to achieve an interdisciplinary capability. COPS will provide the basis for effective monitoring and prediction of coastal ocean conditions by optimizing the use of increased scientific understanding, improved observations, advanced computer models, and computer graphics to make the best possible estimates of sea level, currents, temperatures, salinities, and other properties of entire coastal regions.
EMTP; A powerful tool for analyzing power system transients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, W.; Cotcher, D.; Ruiu, D.
1990-07-01
This paper reports on the electromagnetic transients program (EMTP), a general purpose computer program for simulating high-speed transient effects in electric power systems. The program features an extremely wide variety of modeling capabilities encompassing electromagnetic and electromechanical oscillations ranging in duration from microseconds to seconds. Examples of its use include switching and lightning surge analysis, insulation coordination, shaft torsional oscillations, ferroresonance, and HVDC converter control and operation. In the late 1960s Hermann Dommel developed the EMTP at the Bonneville Power Administration (BPA), which considered the program to be the digital computer replacement for the transient network analyzer. The program initially comprised about 5000 lines of code and was useful primarily for transmission line switching studies. As more uses for the program became apparent, BPA coordinated many improvements to the program. As the program grew in versatility and in size, it likewise became more unwieldy and difficult to use. One had to be an EMTP aficionado to take advantage of its capabilities.
A Bridge for Accelerating Materials by Design
Sumpter, Bobby G.; Vasudevan, Rama K.; Potok, Thomas E.; ...
2015-11-25
Recent technical advances in the areas of nanoscale imaging, spectroscopy, and scattering/diffraction have led to unprecedented capabilities for investigating materials' structural, dynamical, and functional characteristics. In addition, recent advances in computational algorithms and computer capacities that are orders of magnitude larger/faster have enabled large-scale simulations of materials properties starting with nothing but the identity of the atomic species and the basic principles of quantum and statistical mechanics and thermodynamics. Along with these advances, an explosion of high-resolution data has emerged. This confluence of capabilities and rise of big data offer grand opportunities for advancing the materials sciences but also introduce several challenges. In this editorial we identify challenges impeding progress towards advancing materials by design (e.g., the design/discovery of materials with improved properties/performance), possible solutions, and provide examples of scientific issues that can be addressed by using a tightly integrated approach where theory and experiments are linked through big-deep data.
Fan Noise Prediction with Applications to Aircraft System Noise Assessment
NASA Technical Reports Server (NTRS)
Nark, Douglas M.; Envia, Edmane; Burley, Casey L.
2009-01-01
This paper describes an assessment of current fan noise prediction tools by comparing measured and predicted sideline acoustic levels from a benchmark fan noise wind tunnel test. Specifically, an empirical method and a newly developed coupled computational approach are utilized to predict aft fan noise for a benchmark test configuration. Comparisons with sideline noise measurements are performed to assess the relative merits of the two approaches. The study identifies issues entailed in coupling the source and propagation codes, and provides insight into the capabilities of the tools in predicting the fan noise source and its subsequent propagation and radiation. In contrast to the empirical method, the new coupled computational approach provides the ability to investigate acoustic near-field effects. The potential benefits and costs of these new methods are also compared with the existing capabilities of a current aircraft system noise prediction tool. The knowledge gained in this work provides a basis for improved fan source specification in overall aircraft system noise studies.
Computer algebra and operators
NASA Technical Reports Server (NTRS)
Fateman, Richard; Grossman, Robert
1989-01-01
The symbolic computation of operator expansions is discussed. Some of the capabilities that prove useful when performing computer algebra computations involving operators are considered. These capabilities may be broadly divided into three areas: the algebraic manipulation of expressions from the algebra generated by operators; the algebraic manipulation of the actions of the operators upon other mathematical objects; and the development of appropriate normal forms and simplification algorithms for operators and their actions. Brief descriptions are given of the computer algebra computations that arise when working with various operators and their actions.
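As a small illustration of this kind of manipulation (not the system the authors describe), a modern computer algebra package such as SymPy can treat operator-like symbols as non-commutative, so that products are not reordered during expansion:

```python
# Minimal sketch of symbolic manipulation with non-commuting operators,
# using SymPy; this is illustrative, not the system described above.
from sympy import symbols, expand

A, B = symbols("A B", commutative=False)

# Expansion preserves operator order: (A + B)**2 -> A**2 + A*B + B*A + B**2
print(expand((A + B) ** 2))

# The commutator A*B - B*A is not simplified to zero.
print(A * B - B * A)
```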
Personal-Computer Video-Terminal Emulator
NASA Technical Reports Server (NTRS)
Buckley, R. H.; Koromilas, A.; Smith, R. M.; Lee, G. E.; Giering, E. W.
1985-01-01
An OWL-1200 video terminal emulator has been written for the IBM Personal Computer. The OWL-1200 is a simple user terminal with some intelligent capabilities. These capabilities include screen formatting and block transmission of data. The emulator is written in Pascal and assembler for the IBM Personal Computer operating under DOS 1.1.
Space Spurred Computer Graphics
NASA Technical Reports Server (NTRS)
1983-01-01
Dicomed Corporation was asked by NASA in the early 1970s to develop processing capabilities for recording images sent from Mars by Viking spacecraft. The company produced a film recorder which increased the intensity levels and the capability for color recording. This development led to a strong technology base resulting in sophisticated computer graphics equipment. Dicomed systems are used to record CAD (computer aided design) and CAM (computer aided manufacturing) equipment, to update maps and produce computer generated animation.
NASA Astrophysics Data System (ADS)
Tanner, Steve; Stein, Cara; Graves, Sara J.
Networks of remote sensors are becoming more common as technology improves and costs decline. In the past, a remote sensor was usually a device that collected data to be retrieved at a later time by some other mechanism. These collected data were usually processed well after the fact at a computer far removed from the in situ sensing location. This has begun to change as sensor technology, on-board processing, and network communication capabilities have increased and their prices have dropped. There has been an explosion in the number of sensors and sensing devices, not just around the world, but literally throughout the solar system. These sensors are not only becoming vastly more sophisticated, accurate, and detailed in the data they gather, but they are also becoming cheaper, lighter, and smaller. At the same time, engineers have developed improved methods to embed computing systems, memory, storage, and communication capabilities into the platforms that host these sensors. Now it is not unusual to see large networks of sensors working in cooperation with one another. Nor does it seem strange to see the autonomous operation of sensor-based systems, from space-based satellites to smart vacuum cleaners that keep our homes clean and robotic toys that help to entertain and educate our children. But access to sensor data and computing power is only part of the story. For all the power of these systems, there are still substantial limits to what they can accomplish. These include the well-known limits to current Artificial Intelligence capabilities and our limited ability to program the abstract concepts, goals, and improvisation needed for fully autonomous systems. But they also include much more basic engineering problems, such as a lack of adequate power, communications bandwidth, and memory, as well as problems with the geolocation and real-time georeferencing required to integrate data from multiple sensors so they can be used together.
NASA Astrophysics Data System (ADS)
Zhang, Ju; Jackson, Thomas; Balachandar, Sivaramakrishnan
2015-06-01
We will develop a computational model built upon our verified and validated in-house SDT code to provide an improved description of the multiphase blast wave dynamics in which solid particles are considered deformable and can even undergo phase transitions. Our SDT computational framework includes a reactive compressible flow solver with sophisticated material interface tracking capability and realistic equations of state (EOS), such as the Mie-Gruneisen EOS, for multiphase flow modeling. The behavior of the diffuse interface models by Shukla et al. (2010) and Tiwari et al. (2013) at different shock impedance ratios will first be examined and characterized. The recent constrained interface reinitialization by Shukla (2014) will then be developed to examine whether the conservation property can be improved. This work was supported in part by the U.S. Department of Energy and by the Defense Threat Reduction Agency.
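For reference, a Mie-Gruneisen equation of state in its general form expresses pressure in terms of density and specific internal energy relative to a reference curve. The sketch below shows only that general relation; the reference functions and the Gruneisen coefficient are illustrative placeholders, not the forms used in the SDT code.

```python
# Minimal sketch of a Mie-Gruneisen-type equation of state:
#     p(rho, e) = p_ref(rho) + Gamma(rho) * rho * (e - e_ref(rho))
# The reference curves and Gamma below are illustrative placeholders.
def mie_gruneisen_pressure(rho, e, p_ref, e_ref, gamma):
    """Pressure from density rho and specific internal energy e."""
    return p_ref(rho) + gamma(rho) * rho * (e - e_ref(rho))

# Example with trivially simple reference curves (assumed values).
p = mie_gruneisen_pressure(
    rho=2700.0, e=1.0e5,
    p_ref=lambda r: 0.0,   # reference (cold-curve) pressure, placeholder
    e_ref=lambda r: 0.0,   # reference (cold-curve) energy, placeholder
    gamma=lambda r: 2.0,   # Gruneisen coefficient, assumed constant
)
print(p)
```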
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tim Roney; Robert Seifert; Bob Pink
2011-09-01
The field-portable Digital Radiography and Computed Tomography (DRCT) x-ray inspection systems developed for the Project Manager for Non-Stockpile Chemical Materiel (PMNSCM) over the past 13 years have used linear diode detector arrays from two manufacturers, Thomson and Thales. These two manufacturers no longer produce this type of detector. In the interest of ensuring the long-term viability of the portable DRCT single-munitions inspection systems and to improve the imaging capabilities, this project has been investigating improved, commercially available detectors. During FY-10, detectors were evaluated and one in particular, manufactured by Detection Technologies (DT), Inc., was acquired for possible integration into the DRCT systems. The remainder of this report describes the work performed in FY-11 to complete evaluations and fully integrate the detector onto a representative DRCT platform.
Improving student retention in computer engineering technology
NASA Astrophysics Data System (ADS)
Pierozinski, Russell Ivan
The purpose of this research project was to improve student retention in the Computer Engineering Technology program at the Northern Alberta Institute of Technology by reducing the number of dropouts and increasing the graduation rate. This action research project utilized a mixed methods approach of a survey and face-to-face interviews. The participants were male and female, with a large majority ranging from 18 to 21 years of age. The research found that participants recognized their skills and capability, but their capacity to remain in the program was dependent on understanding and meeting the demanding pace and rigour of the program. The participants recognized that curriculum delivery along with instructor-student interaction had an impact on student retention. To be successful in the program, students required support in four domains: academic, learning management, career, and social.
Fault tolerance in computational grids: perspectives, challenges, and issues.
Haider, Sajjad; Nazir, Babar
2016-01-01
Computational grids are established with the intention of providing shared access to hardware- and software-based resources, with special reference to increased computational capabilities. Fault tolerance is one of the most important issues faced by computational grids. The main contribution of this survey is the creation of an extended classification of problems that occur in computational grid environments. The proposed classification will help researchers, developers, and maintainers of grids to understand the types of issues to be anticipated. Moreover, different types of problems, such as omission, interaction, and timing-related faults, have been identified that need to be handled on various layers of the computational grid. In this survey, an analysis and examination is also performed pertaining to fault tolerance and fault detection mechanisms. Our conclusion is that a dependable and reliable grid can only be established when more emphasis is placed on fault identification. Moreover, our survey reveals that adaptive and intelligent fault identification and tolerance techniques can improve the dependability of grid working environments.
Using a hybrid neuron in physiologically inspired models of the basal ganglia.
Thibeault, Corey M; Srinivasa, Narayan
2013-01-01
Our current understanding of the basal ganglia (BG) has facilitated the creation of computational models that have contributed novel theories, explored new functional anatomy, and demonstrated results complementing physiological experiments. However, the utility of these models extends beyond these applications, particularly in neuromorphic engineering, where the basal ganglia's role in computation is important for applications such as power-efficient autonomous agents and model-based control strategies. The neurons used in existing computational models of the BG, however, are not amenable to many low-power hardware implementations. Motivated by a need for more hardware-accessible networks, we replicate four published models of the BG, spanning single neurons and small networks, replacing the more computationally expensive neuron models with an Izhikevich hybrid neuron. This begins with a network modeling action-selection, where the basal activity levels and the ability to appropriately select the most salient input are reproduced. A Parkinson's disease model is then explored under normal conditions, Parkinsonian conditions, and during subthalamic nucleus deep brain stimulation (DBS). The resulting network is capable of replicating the loss of thalamic relay capabilities in the Parkinsonian state and its return under DBS. This is also demonstrated using a network capable of action-selection. Finally, a study of correlation transfer under different patterns of Parkinsonian activity is presented. These networks successfully captured the significant results of the original studies. This not only creates a foundation for neuromorphic hardware implementations but may also support the development of large-scale biophysical models, the former potentially providing a way of improving the efficacy of DBS and the latter allowing for the efficient simulation of larger, more comprehensive networks.
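The Izhikevich hybrid neuron mentioned above reduces each cell to a two-variable update with a discontinuous spike reset, which is what makes it attractive for low-power hardware. A minimal sketch of the standard formulation follows; the parameter values are the common regular-spiking defaults, not those used in the replicated basal ganglia networks.

```python
import numpy as np

def izhikevich_step(v, u, I, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """One forward-Euler step of the Izhikevich hybrid neuron.

    dv/dt = 0.04*v**2 + 5*v + 140 - u + I
    du/dt = a*(b*v - u)
    with reset v -> c, u -> u + d when v >= 30 mV.
    Parameters are the common regular-spiking defaults, not the values
    used in the basal ganglia networks described above.
    """
    v = v + dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
    u = u + dt * a * (b * v - u)
    spiked = v >= 30.0
    v = np.where(spiked, c, v)
    u = np.where(spiked, u + d, u)
    return v, u, spiked

# Drive a single neuron with constant input and count spikes.
v, u, spikes = -65.0, -13.0, 0
for _ in range(1000):
    v, u, s = izhikevich_step(v, u, I=10.0)
    spikes += int(s)
print(spikes)
```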
Improved Load Alleviation Capability for the KC-135
1997-09-01
...software, such as Matlab, Mathematica, Simulink, and Robotica Front End for Mathematica, available in the simulation laboratory... Overview: This thesis report is... outlined in Spong's text in order to utilize the Robotica system development software, which automates the process of calculating the kinematic and... kinematic and dynamic equations can be accomplished using a computer tool called Robotica Front End (RFE) [15], developed by Doctor Spong.
PNNL's Building Operations Control Center
Belew, Shan
2018-01-16
PNNL's Building Operations Control Center (BOCC) video provides an overview of the center, its capabilities, and its objectives. The BOCC was relocated to PNNL's new 3820 Systems Engineering Building in 2015. Although a key focus of the BOCC is on monitoring and improving the operations of PNNL buildings, the center's state-of-the-art computational, software and visualization resources also have provided a platform for PNNL buildings-related research projects.
Academic Research Vessels 1985-1990.
1982-01-01
...converted to a single-point lifting system for launch and recovery (UNOLS submersible report). The experience of the Navy in operating the SEA CLIFF... on improvements in our technical capabilities. The Ocean Sciences Board has recently carried out studies on computer needs and on satellite systems... tions, not on system-wide savings incurred by redistribution of the fleet). Though there is a reduction of approximately 13 percent in operating costs
High-End Climate Science: Development of Modeling and Related Computing Capabilities
2000-12-01
...toward strengthening research on key scientific issues. The Program has supported research that has led to substantial increases in knowledge, improved... provides overall direction and executive oversight of the USGCRP. Within this framework, agencies manage and coordinate Federally supported scientific... critical for the U.S. Global Change Research Program. Such models can be used to look backward to test the consistency of our knowledge of Earth system
2013-01-01
Gold nanoparticles (AuNPs) have generated interest as both imaging and therapeutic agents. AuNPs are attractive for imaging applications since they are nontoxic and provide nearly three times greater X-ray attenuation per unit weight than iodine. As therapeutic agents, AuNPs can sensitize tumor cells to ionizing radiation. To create a nanoplatform that could simultaneously exhibit long circulation times, achieve appreciable tumor accumulation, generate computed tomography (CT) image contrast, and serve as a radiosensitizer, gold-loaded polymeric micelles (GPMs) were prepared. Specifically, 1.9 nm AuNPs were encapsulated within the hydrophobic core of micelles formed with the amphiphilic diblock copolymer poly(ethylene glycol)-b-poly(ε-caprolactone). GPMs were produced with low polydispersity and mean hydrodynamic diameters ranging from 25 to 150 nm. Following intravenous injection, GPMs provided blood pool contrast for up to 24 h and improved the delineation of tumor margins via CT. Thus, GPM-enhanced CT imaging was used to guide radiation therapy delivered via a small animal radiation research platform. In combination with the radiosensitizing capabilities of gold, tumor-bearing mice exhibited a 1.7-fold improvement in the median survival time, compared with mice receiving radiation alone. It is envisioned that translation of these capabilities to human cancer patients could guide and enhance the efficacy of radiation therapy. PMID:24377302
EPA Project Updates: DSSTox and ToxCast Generating New ...
EPA's National Center for Computational Toxicology is building capabilities to support a new paradigm for toxicity screening and prediction. The DSSTox project is improving public access to quality structure-annotated chemical toxicity information in less summarized forms than traditionally employed in SAR modeling, and in ways that facilitate data mining and data read-across. The DSSTox Structure-Browser, launched in September 2007, provides structure searchability across all published DSSTox toxicity-related inventory and is enabling linkages between previously isolated toxicity data resources. As of early March 2008, the public DSSTox inventory has been integrated into PubChem, allowing a user to take full advantage of PubChem structure-activity and bioassay clustering features. The most recent DSSTox version of the Carcinogenic Potency Database file (CPDBAS) illustrates ways in which various summary definitions of carcinogenic activity can be employed in modeling and data mining. Phase I of the ToxCast project is generating high-throughput screening data from several hundred biochemical and cell-based assays for a set of 320 chemicals, mostly pesticide actives, with rich toxicology profiles. Incorporating and expanding traditional SAR concepts into this new high-throughput, data-rich environment poses conceptual and practical challenges, but also holds great promise for improving predictive capabilities.
NASA Astrophysics Data System (ADS)
Rimland, Jeffrey; McNeese, Michael; Hall, David
2013-05-01
Although the capability of computer-based artificial intelligence techniques for decision-making and situational awareness has seen notable improvement over the last several decades, the current state of the art still falls short of creating computer systems capable of autonomously making complex decisions and judgments in many domains where data is nuanced and accountability is high. However, there is a great deal of potential for hybrid systems in which software applications augment human capabilities by focusing the analyst's attention on relevant information elements based on both a priori knowledge of the analyst's goals and the processing/correlation of a series of data streams too numerous and heterogeneous for the analyst to digest without assistance. Researchers at Penn State University are exploring ways in which an information framework influenced by Klein's Recognition-Primed Decision (RPD) model, Endsley's model of situational awareness, and the Joint Directors of Laboratories (JDL) data fusion process model can be implemented through a novel combination of Complex Event Processing (CEP) and Multi-Agent Software (MAS). Though originally designed for stock market and financial applications, the high-performance, data-driven nature of CEP techniques provides a natural complement to the proven capabilities of MAS systems for modeling naturalistic decision-making, performing process adjudication, and optimizing networked processing and cognition via the use of "mobile agents." This paper addresses the challenges and opportunities of such a framework for augmenting human observational capability as well as enabling collaborative context-aware reasoning in both human teams and hybrid human/software agent teams.
An open-loop system design for deep space signal processing applications
NASA Astrophysics Data System (ADS)
Tang, Jifei; Xia, Lanhua; Mahapatra, Rabi
2018-06-01
A novel open-loop system design with high performance is proposed for space positioning and navigation signal processing. Divided by function, the system has four modules: a bandwidth-selectable data recorder, a narrowband signal analyzer, a time-difference-of-arrival (TDOA) estimator, and an ANFIS supplement processor. A hardware-software co-design approach is taken to accelerate computing capability and improve system efficiency. Embedded with the proposed signal processing algorithms, the designed system is capable of handling tasks with high accuracy over long periods of continuous measurement. The experimental results show that the Doppler frequency tracking root mean square error during a 3 h observation is 0.0128 Hz, while the TDOA residue analysis in the correlation power spectrum is 0.1166 rad.
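As background on the TDOA module, the classical estimator cross-correlates two received signals and reads the delay from the location of the correlation peak. The sketch below is that generic textbook estimator, not the authors' hardware-software pipeline; the sampling rate and the 25-sample delay in the example are arbitrary.

```python
import numpy as np

def tdoa_estimate(x, y, fs):
    """Estimate the time difference of arrival (seconds) between x and y
    from the peak of their full cross-correlation. A positive result
    means x lags y. Generic estimator, not the pipeline described above."""
    corr = np.correlate(x, y, mode="full")
    lag = np.argmax(corr) - (len(y) - 1)   # lag in samples
    return lag / fs

# Example: y is x delayed by 25 samples at fs = 1 kHz -> TDOA ~ +0.025 s.
rng = np.random.default_rng(0)
fs = 1000.0
x = rng.standard_normal(4096)
y = np.roll(x, 25)
print(tdoa_estimate(y, x, fs))
```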
Weather Research and Forecasting Model with Vertical Nesting Capability
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-08-01
The Weather Research and Forecasting (WRF) model with vertical nesting capability is an extension of the WRF model, which is available in the public domain from www.wrf-model.org. The new code modifies the nesting procedure, which passes lateral boundary conditions between computational domains in the WRF model. Previously, the same vertical grid was required on all domains, while the new code allows different vertical grids to be used on concurrently run domains. This new functionality improves WRF's ability to produce high-resolution simulations of the atmosphere by allowing a wider range of scales to be efficiently resolved and more accurate lateral boundary conditions to be provided through the nesting procedure.
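Conceptually, letting a nest run on its own vertical levels requires interpolating parent-domain columns onto the nest's levels when boundary conditions are passed. The 1-D column sketch below illustrates only that interpolation idea; the level sets and temperature profile are assumed values, and this is not the actual WRF code path.

```python
import numpy as np

# Illustrative only: interpolate a parent-domain column (coarse vertical
# levels) onto a nest's finer vertical levels, as is conceptually required
# when a vertically refined nest receives lateral boundary conditions.
parent_levels = np.linspace(0.0, 12000.0, 20)        # parent heights (m), assumed
nest_levels = np.linspace(0.0, 12000.0, 60)          # finer nest heights (m), assumed
parent_temperature = 288.0 - 0.0065 * parent_levels  # simple lapse-rate profile

nest_temperature = np.interp(nest_levels, parent_levels, parent_temperature)
print(nest_temperature[:5])
```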
Functional and performance requirements of the next NOAA-Kansas City computer system
NASA Technical Reports Server (NTRS)
Mosher, F. R.
1985-01-01
The development of the Advanced Weather Interactive Processing System for the 1990's (AWIPS-90) will result in more timely and accurate forecasts with improved cost effectiveness. As part of the AWIPS-90 initiative, the National Meteorological Center (NMC), the National Severe Storms Forecast Center (NSSFC), and the National Hurricane Center (NHC) are to receive upgrades of interactive processing systems. This National Center Upgrade program will support the specialized inter-center communications, data acquisition, and processing needs of these centers. The missions, current capabilities and general functional requirements for the upgrade to the NSSFC are addressed. System capabilities are discussed along with the requirements for the upgraded system.
NASA Astrophysics Data System (ADS)
Vassiliou, Marius S.; Sundareswaran, Venkataraman; Chen, S.; Behringer, Reinhold; Tam, Clement K.; Chan, M.; Bangayan, Phil T.; McGee, Joshua H.
2000-08-01
We describe new systems for improved integrated multimodal human-computer interaction and augmented reality for a diverse array of applications, including future advanced cockpits, tactical operations centers, and others. We have developed an integrated display system featuring: speech recognition of multiple concurrent users equipped with both standard air-coupled microphones and novel throat-coupled sensors (developed at Army Research Labs for increased noise immunity); lip reading for improving speech recognition accuracy in noisy environments; three-dimensional spatialized audio for improved display of warnings, alerts, and other information; wireless, coordinated handheld-PC control of a large display; real-time display of data and inferences from wireless integrated networked sensors with on-board signal processing and discrimination; gesture control with disambiguated point-and-speak capability; head- and eye-tracking coupled with speech recognition for 'look-and-speak' interaction; and integrated tetherless augmented reality on a wearable computer. The various interaction modalities (speech recognition, 3D audio, eye tracking, etc.) are implemented as 'modality servers' in an Internet-based client-server architecture. Each modality server encapsulates and exposes commercial and research software packages, presenting a socket network interface that is abstracted to a high-level interface, minimizing both vendor dependencies and required changes on the client side as the server's technology improves.
NASA Technical Reports Server (NTRS)
1994-01-01
In the mid-1980s, Kinetic Systems and Langley Research Center determined that high speed CAMAC (Computer Automated Measurement and Control) data acquisition systems could significantly improve Langley's ARTS (Advanced Real Time Simulation) system. The ARTS system supports flight simulation R&D, and the CAMAC equipment allowed 32 high performance simulators to be controlled by centrally located host computers. This technology broadened Kinetic Systems' capabilities and led to several commercial applications. One of them is General Atomics' fusion research program. Kinetic Systems equipment allows tokamak data to be acquired four to 15 times more rapidly. Ford Motor Company uses the same technology to control and monitor transmission testing facilities.
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.
2004-01-01
Differential Evolution (DE) is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. The DE algorithm has recently been extended to multiobjective optimization problems by using a Pareto-based approach. In this paper, a Pareto DE algorithm is applied to multiobjective aerodynamic shape optimization problems that are characterized by computationally expensive objective function evaluations. To reduce this computational expense, the algorithm is coupled with generalized response surface meta-models based on artificial neural networks. Results are presented for some test optimization problems from the literature to demonstrate the capabilities of the method.
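The core Differential Evolution step that the Pareto extension builds on is a simple vector mutation followed by crossover and greedy selection. A minimal single-objective sketch is given below; the Pareto ranking and the neural-network response-surface surrogates described in the abstract are omitted.

```python
import numpy as np

def de_step(population, objective, F=0.8, CR=0.9, rng=None):
    """One generation of classic DE/rand/1/bin (single-objective sketch).
    The Pareto ranking and neural-network surrogates described above
    are omitted; this shows only the underlying DE operators."""
    rng = rng or np.random.default_rng()
    n, dim = population.shape
    new_pop = population.copy()
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = population[r1] + F * (population[r2] - population[r3])
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True              # guarantee at least one gene
        trial = np.where(cross, mutant, population[i])
        if objective(trial) <= objective(population[i]):   # greedy selection
            new_pop[i] = trial
    return new_pop

# Example: minimize the sphere function in 5 dimensions.
rng = np.random.default_rng(1)
pop = rng.uniform(-5, 5, size=(20, 5))
for _ in range(100):
    pop = de_step(pop, lambda x: float(np.sum(x**2)), rng=rng)
print(min(float(np.sum(x**2)) for x in pop))
```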
Modeling flow at the nozzle of a solid rocket motor
NASA Technical Reports Server (NTRS)
Chow, Alan S.; Jin, Kang-Ren
1991-01-01
The mechanical behavior of a rocket motor internal flow field results in a system of nonlinear partial differential equations which can be solved numerically. The accuracy and the convergence of the solution of the system of equations depend largely on how precisely the sharp gradients can be resolved. An adaptive grid generation scheme is incorporated into the computer algorithm to enhance the capability of numerical modeling. With this scheme, the grid is refined as the solution evolves. This scheme significantly improves the methodology of solving flow problems in rocket nozzles by putting the refinement part of grid generation into the computer algorithm.
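The idea behind such adaptive schemes is to concentrate grid points where the solution gradient is steep as the solution evolves. The 1-D sketch below redistributes points by equidistributing a gradient-based weight; it is illustrative only, not the scheme implemented in the nozzle-flow code.

```python
import numpy as np

def redistribute_grid(x, u, n_new=None):
    """Redistribute 1-D grid points so they cluster where |du/dx| is large.
    Equidistributes a simple gradient-based weight; illustrative only,
    not the adaptive scheme used in the nozzle-flow code above."""
    n_new = n_new or len(x)
    w = 1.0 + np.abs(np.gradient(u, x))          # weight grows near sharp gradients
    s = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    s_new = np.linspace(0.0, s[-1], n_new)       # equal increments of weighted length
    return np.interp(s_new, s, x)

# Example: points cluster around the steep front at x = 0.5.
x = np.linspace(0.0, 1.0, 41)
u = np.tanh((x - 0.5) / 0.02)
x_adapted = redistribute_grid(x, u)
print(np.round(x_adapted[18:23], 3))
```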
NASA Astrophysics Data System (ADS)
Yamanishi, Manabu
A combined experimental and computational investigation was performed in order to evaluate the effects of various design parameters of an in-line injection pump on the nozzle exit characteristics for DI diesel engines. Measurements of the pump chamber pressure and the delivery valve lift were included for validation, using specially designed transducers installed inside the pump. The results confirm that the simulation model is capable of predicting the pump operation for all the different designs and pump operating conditions investigated. Following the successful validation of this model, parametric studies were performed which allow for improved fuel injection system design.
Computer-oriented synthesis of wide-band non-uniform negative resistance amplifiers
NASA Technical Reports Server (NTRS)
Branner, G. R.; Chan, S.-P.
1975-01-01
This paper presents a synthesis procedure which provides design values for broad-band amplifiers using non-uniform negative resistance devices. Employing a weighted least squares optimization scheme, the technique, based on an extension of procedures for uniform negative resistance devices, is capable of providing designs for a variety of matching network topologies. It also provides, for the first time, quantitative results for predicting the effects of parameter element variations on overall amplifier performance. The technique is also unique in that it employs exact partial derivatives for optimization and sensitivity computation. In comparison with conventional procedures, significantly improved broad-band designs are shown to result.
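The weighted least-squares step referred to here minimizes a weighted squared error between realized and desired responses; for a model that is linear in its parameters this reduces to the weighted normal equations, sketched below in generic form rather than the amplifier-specific formulation of the paper.

```python
import numpy as np

def weighted_least_squares(A, y, w):
    """Solve min_x sum_i w[i] * (A[i] @ x - y[i])**2 via the weighted
    normal equations. Generic sketch, not the amplifier-specific
    formulation used in the synthesis procedure described above."""
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

# Example: fit a line, weighting the band-edge samples more heavily.
freqs = np.linspace(0.0, 1.0, 11)
target = 2.0 * freqs + 1.0
A = np.column_stack([freqs, np.ones_like(freqs)])
w = np.where((freqs < 0.1) | (freqs > 0.9), 10.0, 1.0)
print(weighted_least_squares(A, target, w))      # approximately [2.0, 1.0]
```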
NASA Technical Reports Server (NTRS)
Reuter, William H.; Buning, Pieter G.; Hobson, Garth V.
1993-01-01
An effective control canard design to provide enhanced controllability throughout the flight regime is described, based on a 3D Navier-Stokes computational solution. The use of a canard by the Space Shuttle Orbiter in both hypersonic and subsonic flight regimes can enhance its usefulness by expanding its payload-carrying capability and improving its static stability. The canard produces an additional nose-up pitching moment to relax the center-of-gravity constraint and alleviates the need for large, lift-destroying elevon deflections required to maintain the high angles of attack for effective hypersonic flight.
Digital processing of mesoscale analysis and space sensor data
NASA Technical Reports Server (NTRS)
Hickey, J. S.; Karitani, S.
1985-01-01
The mesoscale analysis and space sensor (MASS) data management and analysis system on the research computer system is presented. The MASS data base management and analysis system was implemented on the research computer system, which provides a wide range of capabilities for processing and displaying large volumes of conventional and satellite-derived meteorological data. The research computer system consists of three primary computers (HP-1000F, Harris/6, and Perkin-Elmer 3250), each of which performs a specific function according to its unique capabilities. The overall tasks performed, concerning the software, data base management, and display capabilities of the research computer system, are described in terms of providing a very effective interactive research tool for the digital processing of mesoscale analysis and space sensor data.
NASA Technical Reports Server (NTRS)
Treinish, Lloyd A.; Gough, Michael L.; Wildenhain, W. David
1987-01-01
The capability of rapidly producing visual representations of large, complex, multi-dimensional space and earth sciences data sets was developed via the implementation of computer graphics modeling techniques on the Massively Parallel Processor (MPP), employing techniques recently developed for typically non-scientific applications. Such capabilities can provide a new and valuable tool for the understanding of complex scientific data, and a new application of parallel computing via the MPP. A prototype system with such capabilities was developed and integrated into the National Space Science Data Center's (NSSDC) Pilot Climate Data System (PCDS) data-independent environment for computer graphics data display to provide easy access to users. In developing these capabilities, several problems had to be solved independently of the actual use of the MPP, all of which are outlined.
NASA high performance computing and communications program
NASA Technical Reports Server (NTRS)
Holcomb, Lee; Smith, Paul; Hunter, Paul
1993-01-01
The National Aeronautics and Space Administration's HPCC program is part of a new Presidential initiative aimed at producing a 1000-fold increase in supercomputing speed and a 100-fold improvement in available communications capability by 1997. As more advanced technologies are developed under the HPCC program, they will be used to solve NASA's 'Grand Challenge' problems, which include improving the design and simulation of advanced aerospace vehicles, allowing people at remote locations to communicate more effectively and share information, increasing scientists' abilities to model the Earth's climate and forecast global environmental trends, and improving the development of advanced spacecraft. NASA's HPCC program is organized into three projects which are unique to the agency's mission: the Computational Aerosciences (CAS) project, the Earth and Space Sciences (ESS) project, and the Remote Exploration and Experimentation (REE) project. An additional project, the Basic Research and Human Resources (BRHR) project, exists to promote long-term research in computer science and engineering and to increase the pool of trained personnel in a variety of scientific disciplines. This document presents an overview of the objectives and organization of these projects as well as summaries of individual research and development programs within each project.
Experimental and Computational Sonic Boom Assessment of Lockheed-Martin N+2 Low Boom Models
NASA Technical Reports Server (NTRS)
Cliff, Susan E.; Durston, Donald A.; Elmiligui, Alaa A.; Walker, Eric L.; Carter, Melissa B.
2015-01-01
Flight at speeds greater than the speed of sound is not permitted over land, primarily because of the noise and structural damage caused by sonic boom pressure waves of supersonic aircraft. Mitigation of sonic boom is a key focus area of the High Speed Project under NASA's Fundamental Aeronautics Program. The project is focusing on technologies to enable future civilian aircraft to fly efficiently with reduced sonic boom, engine and aircraft noise, and emissions. A major objective of the project is to improve both computational and experimental capabilities for design of low-boom, high-efficiency aircraft. NASA and industry partners are developing improved wind tunnel testing techniques and new pressure instrumentation to measure the weak sonic boom pressure signatures of modern vehicle concepts. In parallel, computational methods are being developed to provide rapid design and analysis of supersonic aircraft with improved meshing techniques that provide efficient, robust, and accurate on- and off-body pressures at several body lengths from vehicles with very low sonic boom overpressures. The maturity of these critical parallel efforts is necessary before low-boom flight can be demonstrated and commercial supersonic flight can be realized.
Waggle: A Framework for Intelligent Attentive Sensing and Actuation
NASA Astrophysics Data System (ADS)
Sankaran, R.; Jacob, R. L.; Beckman, P. H.; Catlett, C. E.; Keahey, K.
2014-12-01
Advances in sensor-driven computation and computationally steered sensing will greatly enable future research in fields including environmental and atmospheric sciences. We will present "Waggle," an open-source hardware and software infrastructure developed with two goals: (1) reducing the separation and latency between sensing and computing and (2) improving the reliability and longevity of sensing-actuation platforms in challenging and costly deployments. Inspired by "deep-space probe" systems, the Waggle platform design includes features that can support longitudinal studies, deployments with varying communication links, and remote management capabilities. Waggle lowers the barrier for scientists to incorporate real-time data from their sensors into their computations and to manipulate the sensors or provide feedback through actuators. A standardized software and hardware design allows quick addition of new sensors/actuators and associated software in the nodes and enables them to be coupled with computational codes both in situ and on external compute infrastructure. The Waggle framework currently drives the deployment of two observational systems: a portable and self-sufficient weather platform for study of small-scale effects in Chicago's urban core, and an open-ended distributed instrument in Chicago that aims to support several research pursuits across a broad range of disciplines including urban planning, microbiology, and computer science. Built around open-source software, hardware, and the Linux OS, the Waggle system comprises two components: the Waggle field node and the Waggle cloud-computing infrastructure. The Waggle field node affords a modular, scalable, fault-tolerant, secure, and extensible platform for hosting sensors and actuators in the field. It supports in situ computation and data storage, and integration with cloud-computing infrastructure. The Waggle cloud infrastructure is designed with the goal of scaling to several hundreds of thousands of Waggle nodes. It supports aggregating data from sensors hosted by the nodes, staging computation, relaying feedback to the nodes, and serving data to end-users. We will discuss the Waggle design principles and their applicability to various observational research pursuits, and demonstrate its capabilities.
NASA Astrophysics Data System (ADS)
Wu, Yueqian; Yang, Minglin; Sheng, Xinqing; Ren, Kuan Fang
2015-05-01
Light scattering properties of absorbing particles, such as mineral dusts, attract wide attention due to their importance in geophysical and environmental research. Due to the absorbing effect, light scattering properties of particles with absorption differ from those without absorption. Simple-shaped absorbing particles such as spheres and spheroids have been well studied with different methods, but little work on large, complex-shaped particles has been reported. In this paper, the Surface Integral Equation (SIE) method with the Multilevel Fast Multipole Algorithm (MLFMA) is applied to study scattering properties of large non-spherical absorbing particles. The SIEs are carefully discretized with piecewise linear basis functions on triangle patches to model the whole surface of the particle, hence computational resource needs increase much more slowly with the particle size parameter than for volume-discretized methods. To further improve its capability, the MLFMA is parallelized with the Message Passing Interface (MPI) on a distributed-memory computer platform. Without loss of generality, we choose the computation of scattering matrix elements of absorbing dust particles as an example. The comparison of the scattering matrix elements computed by our method and by the discrete dipole approximation (DDA) method for an ellipsoidal dust particle shows that the precision of our method is very good. The scattering matrix elements of large ellipsoidal dusts with different aspect ratios and size parameters are computed. To show the capability of the presented algorithm for complex-shaped particles, scattering by an asymmetric Chebyshev particle with size parameter larger than 600, complex refractive index m = 1.555 + 0.004i, and different orientations is studied.
Wilson, J Adam; Williams, Justin C
2009-01-01
The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
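The first stage offloaded to the GPU in this pipeline is a spatial filter applied as a matrix-matrix product (filter components by channels, times channels by samples). The sketch below shows that stage on the CPU with NumPy for clarity; the study performed the equivalent product with NVIDIA CUDA on the GPU, and the filter weights and block size here are placeholders.

```python
import numpy as np

# Minimal CPU sketch of the spatial-filtering stage described above: the
# recorded block (channels x samples) is multiplied by a spatial filter
# matrix (components x channels). The study offloaded this matrix-matrix
# product (and the AR spectral step) to the GPU via CUDA; the filter
# weights and block size here are random placeholders.
n_channels, n_samples, n_components = 1000, 250, 16

rng = np.random.default_rng(0)
recorded = rng.standard_normal((n_channels, n_samples))      # one data block
spatial_filter = rng.standard_normal((n_components, n_channels))

filtered = spatial_filter @ recorded                         # components x samples
print(filtered.shape)
```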
Secure Chaotic Map Based Block Cryptosystem with Application to Camera Sensor Networks
Guo, Xianfeng; Zhang, Jiashu; Khan, Muhammad Khurram; Alghathbar, Khaled
2011-01-01
Recently, Wang et al. presented an efficient logistic map based block encryption system. The encryption system employs feedback ciphertext to achieve plaintext dependence of sub-keys. Unfortunately, we discovered that their scheme is unable to withstand key stream attack. To improve its security, this paper proposes a novel chaotic map based block cryptosystem. At the same time, a secure architecture for camera sensor network is constructed. The network comprises a set of inexpensive camera sensors to capture the images, a sink node equipped with sufficient computation and storage capabilities and a data processing server. The transmission security between the sink node and the server is gained by utilizing the improved cipher. Both theoretical analysis and simulation results indicate that the improved algorithm can overcome the flaws and maintain all the merits of the original cryptosystem. In addition, computational costs and efficiency of the proposed scheme are encouraging for the practical implementation in the real environment as well as camera sensor network. PMID:22319371
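For orientation, the logistic map at the heart of such ciphers is the iteration x_{n+1} = r x_n (1 - x_n), and ciphertext feedback means each output byte perturbs the generation of the next key byte. The toy sketch below illustrates only that construction; it is neither Wang et al.'s scheme nor the improved cryptosystem, and it is not secure.

```python
# Toy illustration of a logistic-map keystream with ciphertext feedback.
# This is NOT Wang et al.'s cipher nor the improved scheme described
# above, and it is not secure; it only shows the general construction.
def logistic_keystream_encrypt(plaintext: bytes, x0: float, r: float = 3.99) -> bytes:
    x = x0
    feedback = 0
    ciphertext = bytearray()
    for byte in plaintext:
        x = r * x * (1.0 - x)                    # logistic map iteration
        key_byte = int(x * 256) & 0xFF           # crude keystream extraction
        c = byte ^ key_byte ^ feedback           # XOR with key and prior ciphertext
        ciphertext.append(c)
        feedback = c                             # ciphertext feedback into next step
        x = (x + c / 256.0) % 1.0 or 0.5         # perturb chaotic state with ciphertext
    return bytes(ciphertext)

print(logistic_keystream_encrypt(b"hello", x0=0.3141592653).hex())
```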
A Web Centric Architecture for Deploying Multi-Disciplinary Engineering Design Processes
NASA Technical Reports Server (NTRS)
Woyak, Scott; Kim, Hongman; Mullins, James; Sobieszczanski-Sobieski, Jaroslaw
2004-01-01
There is a continuing need for engineering organizations to improve their design processes. Current state-of-the-art techniques use computational simulations to predict design performance and optimize it through advanced design methods. These tools have been used mostly by individual engineers. This paper presents an architecture for achieving results at an organizational level, beyond the individual level. The next set of gains in process improvement will come from improving the effective use of computers and software within a whole organization, not just by an individual. The architecture takes advantage of state-of-the-art capabilities to produce a Web-based system to carry engineering design into the future. To illustrate deployment of the architecture, a case study for implementing advanced multidisciplinary design optimization processes such as Bi-Level Integrated System Synthesis is discussed. Another example, rolling out a design process for Design for Six Sigma, is also described. Each example explains how an organization can effectively infuse engineering practice with new design methods and retain the knowledge over time.
Requirements for fault-tolerant factoring on an atom-optics quantum computer.
Devitt, Simon J; Stephens, Ashley M; Munro, William J; Nemoto, Kae
2013-01-01
Quantum information processing and its associated technologies have reached a pivotal stage in their development, with many experiments having established the basic building blocks. Moving forward, the challenge is to scale up to larger machines capable of performing computational tasks not possible today. This raises questions that need to be urgently addressed, such as what resources these machines will consume and how large will they be. Here we estimate the resources required to execute Shor's factoring algorithm on an atom-optics quantum computer architecture. We determine the runtime and size of the computer as a function of the problem size and physical error rate. Our results suggest that once the physical error rate is low enough to allow quantum error correction, optimization to reduce resources and increase performance will come mostly from integrating algorithms and circuits within the error correction environment, rather than from improving the physical hardware.
A Gateway for Phylogenetic Analysis Powered by Grid Computing Featuring GARLI 2.0
Bazinet, Adam L.; Zwickl, Derrick J.; Cummings, Michael P.
2014-01-01
We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. [garli, gateway, grid computing, maximum likelihood, molecular evolution portal, phylogenetics, web service.] PMID:24789072
Accelerating Full Configuration Interaction Calculations for Nuclear Structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Chao; Sternberg, Philip; Maris, Pieter
2008-04-14
One of the emerging computational approaches in nuclear physics is the full configuration interaction (FCI) method for solving the many-body nuclear Hamiltonian in a sufficiently large single-particle basis space to obtain exact answers - either directly or by extrapolation. The lowest eigenvalues and corresponding eigenvectors for very large, sparse and unstructured nuclear Hamiltonian matrices are obtained and used to evaluate additional experimental quantities. These matrices pose a significant challenge to the design and implementation of efficient and scalable algorithms for obtaining solutions on massively parallel computer systems. In this paper, we describe the computational strategies employed in a state-of-the-art FCI code, MFDn (Many Fermion Dynamics - nuclear), as well as techniques we recently developed to enhance the computational efficiency of MFDn. We will demonstrate the current capability of MFDn and report the latest performance improvement we have achieved. We will also outline our future research directions.
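The core numerical kernel described here, computing the lowest eigenpairs of a very large sparse symmetric Hamiltonian, is typically handled with Lanczos-type iterative methods; a minimal serial illustration using SciPy (not MFDn itself, and with a random stand-in matrix) is:

    # Lowest eigenpairs of a large sparse symmetric matrix via an implicitly
    # restarted Lanczos method (scipy.sparse.linalg.eigsh). MFDn uses its own
    # distributed-memory implementation; this is only a serial illustration.
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    n = 10_000
    # Stand-in sparse symmetric "Hamiltonian": a random sparse matrix, symmetrized.
    a = sp.random(n, n, density=1e-4, format="csr", random_state=0)
    h = (a + a.T) * 0.5

    vals, vecs = eigsh(h, k=5, which="SA")  # five smallest (algebraic) eigenvalues
    print("lowest eigenvalues:", vals)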
A Pilot Computer-Aided Design and Manufacturing Curriculum that Promotes Engineering
NASA Technical Reports Server (NTRS)
2002-01-01
Elizabeth City State University (ECSU) is located in a community that is mostly rural in nature. The area is economically deprived when compared to the rest of the state. Many businesses lack the computerized equipment and skills needed to propel upward in today's technologically advanced society. This project will close the ever-widening gap between advantaged and disadvantaged workers as well as increase their participation with industry, NASA and/or other governmental agencies. Everyone recognizes computer technology as the catalyst for advances in design, prototyping, and manufacturing or the art of machining. Unprecedented quality control and cost-efficiency improvements are recognized through the use of computer technology. This technology has changed the manufacturing industry with advanced high-tech capabilities needed by NASA. With the ever-widening digital divide, we must continue to provide computer technology to those who are socio-economically disadvantaged.
Fault-tolerant building-block computer study
NASA Technical Reports Server (NTRS)
Rennels, D. A.
1978-01-01
Ultra-reliable core computers are required for improving the reliability of complex military systems. Such computers can provide reliable fault diagnosis, failure circumvention, and, in some cases, serve as an automated repairman for their host systems. A small set of building-block circuits which can be implemented as single very large scale integration devices, and which can be used with off-the-shelf microprocessors and memories to build self-checking computer modules (SCCM), is described. Each SCCM is a microcomputer which is capable of detecting its own faults during normal operation and is designed to communicate with other identical modules over one or more Mil Standard 1553A buses. Several SCCMs can be connected into a network with backup spares to provide fault-tolerant operation, i.e., automated recovery from faults. Alternative fault-tolerant SCCM configurations are discussed along with the cost and reliability associated with their implementation.
NASA Technical Reports Server (NTRS)
Mannino, Antonio
2008-01-01
Understanding how the different components of seawater alter the path of incident sunlight through scattering and absorption is essential to using remotely sensed ocean color observations effectively. This is particularly apropos in coastal waters where the different optically significant components (phytoplankton, detrital material, inorganic minerals, etc.) vary widely in concentration, often independently from one another. Inherent Optical Properties (IOPs) form the link between these biogeochemical constituents and the Apparent Optical Properties (AOPs); understanding this interrelationship is at the heart of successfully carrying out inversions of satellite-measured radiance to biogeochemical properties. While sufficient covariation of seawater constituents in case I waters typically allows empirical algorithms connecting AOPs and biogeochemical parameters to behave well, these empirical algorithms normally do not hold for case II regimes (Carder et al. 2003). Validation in the context of ocean color remote sensing refers to in-situ measurements used to verify or characterize algorithm products or any assumption used as input to an algorithm. In this project, validation capabilities are considered those measurement capabilities, techniques, methods, models, etc. that allow effective validation. Enhancing current validation capabilities by incorporating state-of-the-art IOP measurements and optical models is the purpose of this work. Involved in this pursuit is improving core IOP measurement capabilities (spectral, angular, and spatio-temporal resolutions), improving our understanding of the behavior of analytical AOP-IOP approximations in complex coastal waters, and improving the spatial and temporal resolution of biogeochemical data for validation by applying biogeochemical-IOP inversion models so that these parameters can be computed from real-time IOP sensors with high sampling rates. Research cruises supported by this project provide for the collection and processing of seawater samples for biogeochemical (pigments, DOC and POC) and optical (CDOM and POM absorption coefficients) analyses to enhance our understanding of the linkages between in-water optical measurements (IOPs and AOPs) and biogeochemical constituents and to provide a more comprehensive suite of validation products.
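As background on the analytical AOP-IOP approximations mentioned above, one widely used quasi-single-scattering form relates the subsurface remote-sensing reflectance to the total absorption coefficient a and backscattering coefficient b_b (the coefficients g_i are empirical and vary between studies; this relation is given only for context and is not taken from the project description):

    r_{rs}(\lambda) \;\approx\; \sum_{i=1}^{2} g_i \left( \frac{b_b(\lambda)}{a(\lambda) + b_b(\lambda)} \right)^{i}

Inverting relations of this kind, together with models linking a and b_b to constituent concentrations, is what allows biogeochemical parameters to be computed from real-time IOP sensor data.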
An improved method of continuous LOD based on fractal theory in terrain rendering
NASA Astrophysics Data System (ADS)
Lin, Lan; Li, Lijun
2007-11-01
With the improvement of computer graphics hardware capability, 3D terrain rendering algorithms have become a hot topic in real-time visualization. In order to resolve the conflict between rendering speed and rendering realism, this paper gives an improved method of terrain rendering which improves the traditional continuous level of detail technique based on fractal theory. With this method, the program need not operate on memory repeatedly to obtain terrain models at different resolutions; instead, it obtains the fractal characteristic parameters of different regions according to the movement of the viewpoint. Experimental results show that the method guarantees the authenticity of the landscape and increases the real-time 3D terrain rendering speed.
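A minimal sketch of viewpoint-dependent level-of-detail selection driven by a per-region fractal roughness parameter is given below; the thresholds and the roughness measure are assumptions, not the paper's exact formulation:

    # Choose a terrain patch's mesh resolution from viewer distance and the patch's
    # fractal roughness (e.g., an estimated fractal dimension). Values are illustrative.

    def lod_level(distance: float, fractal_dim: float,
                  base: float = 200.0, max_level: int = 6) -> int:
        """Higher level = finer mesh. Rough (high fractal dimension) regions
        keep detail out to a larger distance than smooth regions."""
        roughness = max(fractal_dim - 2.0, 0.0)     # terrain surfaces: 2 <= D < 3
        effective = base * (1.0 + 2.0 * roughness)  # rough patches refine farther out
        level = max_level
        d = effective
        while distance > d and level > 0:
            level -= 1                               # drop one refinement level
            d *= 2.0                                 # per distance octave
        return level

    print(lod_level(distance=150.0, fractal_dim=2.4))    # near, rough  -> fine mesh
    print(lod_level(distance=3000.0, fractal_dim=2.05))  # far, smooth -> coarse mesh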
NASA Technical Reports Server (NTRS)
Butler, C.
1985-01-01
Computer hardware and software of the NASA multipurpose differential absorption lidar (DIAL) system were improved. The NASA DIAL system is undergoing development and experimental deployment for remote measurement of atmospheric trace gas concentrations from ground and aircraft platforms. A viable DIAL system was developed with the capability of remotely measuring O3 and H2O concentrations from an aircraft platform. Test flights were successfully performed on board the NASA/Goddard Flight Center Electra aircraft from 1980 to 1984. Improvements to the DIAL data acquisition system (DAS) are described.
RELAP-7 Software Verification and Validation Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Curtis L.; Choi, Yong-Joon; Zou, Ling
This INL plan comprehensively describes the software for RELAP-7 and documents the software, interface, and software design requirements for the application. The plan also describes the testing-based software verification and validation (SV&V) process—a set of specially designed software models used to test RELAP-7. The RELAP-7 (Reactor Excursion and Leak Analysis Program) code is a nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on the INL’s modern scientific software development framework – MOOSE (Multi-Physics Object-Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5’s capability and extends the analysis capability for all reactor system simulation scenarios.
Integrated long-range UAV/UGV collaborative target tracking
NASA Astrophysics Data System (ADS)
Moseley, Mark B.; Grocholsky, Benjamin P.; Cheung, Carol; Singh, Sanjiv
2009-05-01
Coordinated operations between unmanned air and ground assets allow leveraging of multi-domain sensing and increase opportunities for improving line of sight communications. While numerous military missions would benefit from coordinated UAV-UGV operations, foundational capabilities that integrate stove-piped tactical systems and share available sensor data are required and not yet available. iRobot, AeroVironment, and Carnegie Mellon University are working together, partially SBIR-funded through ARDEC's small unit network lethality initiative, to develop collaborative capabilities for surveillance, targeting, and improved communications based on PackBot UGV and Raven UAV platforms. We integrate newly available technologies into computational, vision, and communications payloads and develop sensing algorithms to support vision-based target tracking. We first simulated and then applied onto real tactical platforms an implementation of Decentralized Data Fusion, a novel technique for fusing track estimates from PackBot and Raven platforms for a moving target in an open environment. In addition, system integration with AeroVironment's Digital Data Link onto both air and ground platforms has extended our capabilities in communications range to operate the PackBot as well as in increased video and data throughput. The system is brought together through a unified Operator Control Unit (OCU) for the PackBot and Raven that provides simultaneous waypoint navigation and traditional teleoperation. We also present several recent capability accomplishments toward PackBot-Raven coordinated operations, including single OCU display design and operation, early target track results, and Digital Data Link integration efforts, as well as our near-term capability goals.
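As a rough illustration of the fusion step underlying Decentralized Data Fusion (not the project's implementation), two independent Gaussian track estimates of the same target can be combined in information form:

    # Fuse two independent Gaussian position estimates of the same target
    # (e.g., one from the UGV, one from the UAV) in information form.
    import numpy as np

    def fuse_tracks(x1, P1, x2, P2):
        """x: mean (2,), P: covariance (2,2). Returns fused mean and covariance."""
        Y1, Y2 = np.linalg.inv(P1), np.linalg.inv(P2)  # information matrices
        Y = Y1 + Y2                                    # information adds for independent estimates
        P = np.linalg.inv(Y)
        x = P @ (Y1 @ x1 + Y2 @ x2)
        return x, P

    x_ugv, P_ugv = np.array([10.0, 4.0]), np.diag([4.0, 4.0])  # coarse ground track
    x_uav, P_uav = np.array([11.0, 3.5]), np.diag([1.0, 1.0])  # tighter aerial track
    x_f, P_f = fuse_tracks(x_ugv, P_ugv, x_uav, P_uav)
    print(x_f, np.diag(P_f))

When the estimates share common information, as they typically do after repeated exchanges between platforms, covariance intersection or a full decentralized information filter is used instead of this naive additive fusion.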
Towards Large-area Field-scale Operational Evapotranspiration for Water Use Mapping
NASA Astrophysics Data System (ADS)
Senay, G. B.; Friedrichs, M.; Morton, C.; Huntington, J. L.; Verdin, J.
2017-12-01
Field-scale evapotranspiration (ET) estimates are needed for improving surface and groundwater use and water budget studies. Ideally, field-scale ET estimates would be available at regional to national levels and cover long time periods. As a result of the large data storage and computational requirements associated with processing field-scale satellite imagery such as Landsat, numerous challenges remain to develop operational ET estimates over large areas for detailed water use and availability studies. However, the combination of new science, data availability, and cloud computing technology is enabling unprecedented capabilities for ET mapping. To demonstrate this capability, we used Google's Earth Engine cloud computing platform to create nationwide annual ET estimates with 30-meter resolution Landsat data (approximately 16,000 images) and gridded weather data using the Operational Simplified Surface Energy Balance (SSEBop) model in support of the National Water Census, a USGS research program designed to build decision support capacity for water management agencies and other natural resource managers. By leveraging Google's Earth Engine Application Programming Interface (API) and developing software in a collaborative, open-platform environment, we rapidly advance from research towards applications for large-area field-scale ET mapping. Cloud computing of the Landsat image archive, combined with other satellite, climate, and weather data, is creating previously unimagined opportunities for assessing ET model behavior and uncertainty, and ultimately providing the ability for more robust operational monitoring and assessment of water use at field scales.
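A sketch of how such a computation is commonly expressed with the Earth Engine Python API is shown below; the per-image function is a placeholder rather than the SSEBop implementation, and authentication is assumed to be configured:

    # Illustrative Earth Engine pattern: map a per-image function over a year of
    # Landsat 8 imagery and aggregate to an annual 30 m product, all server-side.
    import ee

    ee.Initialize()  # assumes Earth Engine authentication is already set up

    def compute_et(image):
        # Placeholder per-image step: the real SSEBop model derives an ET fraction
        # from land surface temperature and multiplies by gridded reference ET.
        # Here we simply rename the thermal band to stand in for an "et" band.
        return image.select(["ST_B10"], ["et"])

    landsat = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
               .filterDate("2015-01-01", "2016-01-01")
               .map(compute_et))

    annual_et = landsat.sum()          # annual aggregate of the placeholder band
    print(annual_et.bandNames().getInfo())

The value of the cloud platform in this pattern is that the thousands-of-images mosaic and the per-pixel aggregation are computed server-side, without downloading the Landsat archive.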
NASA Astrophysics Data System (ADS)
Jaber, Khalid Mohammad; Alia, Osama Moh'd.; Shuaib, Mohammed Mahmod
2018-03-01
Finding the optimal parameters that can reproduce experimental data (such as the velocity-density relation and the specific flow rate) is a very important component of the validation and calibration of microscopic crowd dynamic models. Heavy computational demand during parameter search is a known limitation that exists in a previously developed model known as the Harmony Search-Based Social Force Model (HS-SFM). In this paper, a parallel-based mechanism is proposed to reduce the computational time and memory resource utilisation required to find these parameters. More specifically, two MATLAB-based multicore techniques (parfor and create independent jobs) using shared memory are developed by taking advantage of the multithreading capabilities of parallel computing, resulting in a new framework called the Parallel Harmony Search-Based Social Force Model (P-HS-SFM). The experimental results show that the parfor-based P-HS-SFM achieved a better computational time of about 26 h, an efficiency improvement of approximately 54% and a speedup factor of 2.196 times in comparison with the HS-SFM sequential processor. The performance of the P-HS-SFM using the create independent jobs approach is also comparable to parfor, with a computational time of 26.8 h, an efficiency improvement of about 30% and a speedup of 2.137 times.
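The same parallel parameter-search pattern can be expressed outside MATLAB; a minimal Python analog of the parfor idea, evaluating candidate parameter sets concurrently with a stand-in objective function, is:

    # Evaluate candidate parameter sets in parallel, analogous to MATLAB's parfor.
    from multiprocessing import Pool

    def simulation_error(params):
        # Stand-in objective: in HS-SFM this would run the social force model and
        # compare simulated flow/density curves against the experimental data.
        a, b = params
        return (a - 1.2) ** 2 + (b - 0.8) ** 2

    if __name__ == "__main__":
        candidates = [(a / 10, b / 10) for a in range(5, 20) for b in range(5, 20)]
        with Pool() as pool:                      # one worker per available core
            errors = pool.map(simulation_error, candidates)
        best = min(zip(errors, candidates))
        print("best parameters:", best[1], "error:", best[0])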
Applying a cloud computing approach to storage architectures for spacecraft
NASA Astrophysics Data System (ADS)
Baldor, Sue A.; Quiroz, Carlos; Wood, Paul
As sensor technologies, processor speeds, and memory densities increase, spacecraft command, control, processing, and data storage systems have grown in complexity to take advantage of these improvements and expand the possible missions of spacecraft. Spacecraft systems engineers are increasingly looking for novel ways to address this growth in complexity and mitigate associated risks. Looking to conventional computing, many solutions have been executed to solve both the problem of complexity and heterogeneity in systems. In particular, the cloud-based paradigm provides a solution for distributing applications and storage capabilities across multiple platforms. In this paper, we propose utilizing a cloud-like architecture to provide a scalable mechanism for providing mass storage in spacecraft networks that can be reused on multiple spacecraft systems. By presenting a consistent interface to applications and devices that request data to be stored, complex systems designed by multiple organizations may be more readily integrated. Behind the abstraction, the cloud storage capability would manage wear-leveling, power consumption, and other attributes related to the physical memory devices, critical components in any mass storage solution for spacecraft. Our approach employs SpaceWire networks and SpaceWire-capable devices, although the concept could easily be extended to non-heterogeneous networks consisting of multiple spacecraft and potentially the ground segment.
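A minimal sketch of the kind of storage abstraction described above is given below; the class and method names are invented for illustration, and the wear-leveling policy is deliberately naive:

    # Abstract storage service presenting one interface to applications while hiding
    # device-level concerns (e.g., wear-leveling) behind the abstraction.
    from abc import ABC, abstractmethod

    class MassStorageService(ABC):
        @abstractmethod
        def store(self, key: str, data: bytes) -> None:
            """Persist a data product; the service chooses the physical device."""

        @abstractmethod
        def retrieve(self, key: str) -> bytes:
            """Return a previously stored data product."""

    class SimpleSpacecraftStore(MassStorageService):
        def __init__(self, devices):
            self.devices = devices   # e.g., memory banks reachable over the network
            self.index = {}          # key -> device holding that product

        def store(self, key, data):
            device = min(self.devices, key=lambda d: d["writes"])  # naive wear-leveling
            device["blocks"][key] = data
            device["writes"] += 1
            self.index[key] = device

        def retrieve(self, key):
            return self.index[key]["blocks"][key]

    banks = [{"blocks": {}, "writes": 0} for _ in range(3)]
    store = SimpleSpacecraftStore(banks)
    store.store("pkt_001", b"\x00\x01")
    print(store.retrieve("pkt_001"))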
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crabtree, George; Glotzer, Sharon; McCurdy, Bill
This report is based on an SC Workshop on Computational Materials Science and Chemistry for Innovation on July 26-27, 2010, to assess the potential of state-of-the-art computer simulations to accelerate understanding and discovery in materials science and chemistry, with a focus on potential impacts in energy technologies and innovation. The urgent demand for new energy technologies has greatly exceeded the capabilities of today's materials and chemical processes. To convert sunlight to fuel, efficiently store energy, or enable a new generation of energy production and utilization technologies requires the development of new materials and processes of unprecedented functionality and performance. New materials and processes are critical pacing elements for progress in advanced energy systems and virtually all industrial technologies. Over the past two decades, the United States has developed and deployed the world's most powerful collection of tools for the synthesis, processing, characterization, and simulation and modeling of materials and chemical systems at the nanoscale, dimensions of a few atoms to a few hundred atoms across. These tools, which include world-leading x-ray and neutron sources, nanoscale science facilities, and high-performance computers, provide an unprecedented view of the atomic-scale structure and dynamics of materials and the molecular-scale basis of chemical processes. For the first time in history, we are able to synthesize, characterize, and model materials and chemical behavior at the length scale where this behavior is controlled. This ability is transformational for the discovery process and, as a result, confers a significant competitive advantage. Perhaps the most spectacular increase in capability has been demonstrated in high performance computing. Over the past decade, computational power has increased by a factor of a million due to advances in hardware and software. This rate of improvement, which shows no sign of abating, has enabled the development of computer simulations and models of unprecedented fidelity. We are at the threshold of a new era where the integrated synthesis, characterization, and modeling of complex materials and chemical processes will transform our ability to understand and design new materials and chemistries with predictive power. In turn, this predictive capability will transform technological innovation by accelerating the development and deployment of new materials and processes in products and manufacturing. Harnessing the potential of computational science and engineering for the discovery and development of materials and chemical processes is essential to maintaining leadership in these foundational fields that underpin energy technologies and industrial competitiveness. Capitalizing on the opportunities presented by simulation-based engineering and science in materials and chemistry will require an integration of experimental capabilities with theoretical and computational modeling; the development of a robust and sustainable infrastructure to support the development and deployment of advanced computational models; and the assembly of a community of scientists and engineers to implement this integration and infrastructure. This community must extend to industry, where incorporating predictive materials science and chemistry into design tools can accelerate the product development cycle and drive economic competitiveness.
The confluence of new theories, new materials synthesis capabilities, and new computer platforms has created an unprecedented opportunity to implement a "materials-by-design" paradigm with wide-ranging benefits in technological innovation and scientific discovery. The Workshop on Computational Materials Science and Chemistry for Innovation was convened in Bethesda, Maryland, on July 26-27, 2010. Sponsored by the Department of Energy (DOE) Offices of Advanced Scientific Computing Research and Basic Energy Sciences, the workshop brought together 160 experts in materials science, chemistry, and computational science representing more than 65 universities, laboratories, and industries, and four agencies. The workshop examined seven foundational challenge areas in materials science and chemistry: materials for extreme conditions, self-assembly, light harvesting, chemical reactions, designer fluids, thin films and interfaces, and electronic structure. Each of these challenge areas is critical to the development of advanced energy systems, and each can be accelerated by the integrated application of predictive capability with theory and experiment. The workshop concluded that emerging capabilities in predictive modeling and simulation have the potential to revolutionize the development of new materials and chemical processes. Coupled with world-leading materials characterization and nanoscale science facilities, this predictive capability provides the foundation for an innovation ecosystem that can accelerate the discovery, development, and deployment of new technologies, including advanced energy systems. Delivering on the promise of this innovation ecosystem requires the following:
- Integration of synthesis, processing, characterization, theory, and simulation and modeling. Many of the newly established Energy Frontier Research Centers and Energy Hubs are exploiting this integration.
- Achieving/strengthening predictive capability in foundational challenge areas. Predictive capability in the seven foundational challenge areas described in this report is critical to the development of advanced energy technologies.
- Developing validated computational approaches that span vast differences in time and length scales. This fundamental computational challenge crosscuts all of the foundational challenge areas. Similarly challenging is the coupling of analytical data from the multiple instruments and techniques required to link these length and time scales.
- Experimental validation and quantification of uncertainty in simulation and modeling. Uncertainty quantification becomes increasingly challenging as simulations become more complex.
- Robust and sustainable computational infrastructure, including software and applications. For modeling and simulation, software equals infrastructure. To validate the computational tools, software is critical infrastructure that effectively translates huge arrays of experimental data into useful scientific understanding. An integrated approach for managing this infrastructure is essential.
- Efficient transfer and incorporation of simulation-based engineering and science in industry. Strategies for bridging the gap between research and industrial applications and for widespread industry adoption of integrated computational materials engineering are needed.
Archiving Software Systems: Approaches to Preserve Computational Capabilities
NASA Astrophysics Data System (ADS)
King, T. A.
2014-12-01
A great deal of effort is made to preserve scientific data, not only because data is knowledge, but because it is often costly to acquire and is sometimes collected under unique circumstances. Another part of the science enterprise is the development of software to process and analyze the data. Developed software is also a large investment and worthy of preservation. However, the long term preservation of software presents some challenges. Software often requires a specific technology stack to operate; this can include software, operating system and hardware dependencies. One past approach to preserving computational capabilities is to maintain ancient hardware long past its typical viability. On an archive horizon of 100 years, this is not feasible. Another approach is to archive source code. While this can preserve details of the implementation and algorithms, it may not be possible to reproduce the technology stack needed to compile and run the resulting applications. This forward-looking dilemma has a solution. Technology used to create clouds and process big data can also be used to archive and preserve computational capabilities. We explore how basic hardware, virtual machines, containers and appropriate metadata can be used to preserve computational capabilities and to archive functional software systems. In conjunction with data archives, this provides scientists with both the data and the capability to reproduce the processing and analysis used to generate past scientific results.
Using adaptive grid in modeling rocket nozzle flow
NASA Technical Reports Server (NTRS)
Chow, Alan S.; Jin, Kang-Ren
1992-01-01
The mechanical behavior of a rocket motor internal flow field results in a system of nonlinear partial differential equations which cannot be solved analytically. However, this system of equations, called the Navier-Stokes equations, can be solved numerically. The accuracy and convergence of the solution depend largely on how precisely the sharp gradients in the domain of interest can be resolved. With advances in computer technology, more sophisticated algorithms are available to improve the accuracy and convergence of the solutions. Adaptive grid generation is one scheme which can be incorporated into the algorithm to enhance the capability of numerical modeling. It is equivalent to putting intelligence into the algorithm to optimize the use of computer memory. With this scheme, the finite difference domain of the flow field, called the grid, need be neither very fine nor strategically placed at the locations of sharp gradients. The grid is self-adapting as the solution evolves. This scheme significantly improves the methodology of solving flow problems in rocket nozzles by taking the refinement part of grid generation out of the hands of computational fluid dynamics (CFD) specialists and placing it in the computer algorithm itself.
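A one-dimensional sketch of the adaptive-grid idea, equidistributing a gradient-based monitor function so that points cluster where the solution changes rapidly, is shown below (this is a generic illustration, not the authors' nozzle code):

    # Redistribute 1D grid points so that a monitor function based on the solution
    # gradient is equidistributed: points concentrate near sharp gradients.
    import numpy as np

    def adapt_grid(x, u, n_new=None, alpha=1.0):
        n_new = n_new or len(x)
        w = np.sqrt(1.0 + alpha * np.gradient(u, x) ** 2)   # arc-length monitor function
        s = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
        s /= s[-1]                                           # normalized "monitor mass"
        targets = np.linspace(0.0, 1.0, n_new)
        return np.interp(targets, s, x)                      # new grid locations

    x = np.linspace(0.0, 1.0, 41)
    u = np.tanh(40.0 * (x - 0.6))       # stand-in flow variable with a sharp front
    x_new = adapt_grid(x, u)
    print(np.round(x_new[18:24], 3))    # points cluster near x = 0.6

In a flow solver, this redistribution is repeated as the solution evolves, which is what lets the grid track moving shocks or shear layers automatically.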
Predictive Capability Maturity Model for computational modeling and simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.
2007-10-01
The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.
Analysis of computer capabilities of Pacific Northwest paratransit providers
DOT National Transportation Integrated Search
1996-07-01
The major project objectives are to quantify the computer capabilities and to determine the computerization needs of paratransit operators in the Northwest, and to create a training program to assist paratransit operators in developing realistic spec...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mattsson, Ann E.
Density Functional Theory (DFT) based Equation of State (EOS) construction is a prominent part of Sandia’s capabilities to support engineering sciences. This capability is based on augmenting experimental data with information gained from computational investigations, especially in those parts of the phase space where experimental data is hard, dangerous, or expensive to obtain. A key part of the success of the Sandia approach is the fundamental science work supporting the computational capability. Not only does this work enhance the capability to perform highly accurate calculations but it also provides crucial insight into the limitations of the computational tools, providing high confidence in the results even where results cannot be, or have not yet been, validated by experimental data. This report concerns the key ingredient of projector augmented-wave (PAW) potentials for use in pseudo-potential computational codes. Using the tools discussed in SAND2012-7389 we assess the standard Vienna Ab-initio Simulation Package (VASP) PAWs for Molybdenum.
IPAD products and implications for the future
NASA Technical Reports Server (NTRS)
Miller, R. E., Jr.
1980-01-01
The betterment of productivity through the improvement of product quality and the reduction of cost is addressed. Productivity improvement is sought through (1) reduction of required resources, (2) improved task results through the management of such saved resources, (3) reduced downstream costs through manufacturing-oriented engineering, and (4) lowered risks in the making of product design decisions. The IPAD products are both a hardware architecture and software distributed over a number of heterogeneous computers in this architecture. These IPAD products are described in terms of capability and engineering usefulness. The future implications of state-of-the-art IPAD hardware and software architectures are discussed in terms of their impact on the functions and structures of organizations concerned with creating products.
2013-09-30
founded Quantum Intelligence, Inc. She was principal investigator (PI) for six contracts awarded by the DoD Small Business Innovation Research (SBIR... Quantum Intelligence, Inc. CLA is a computer-based learning agent, or agent collaboration, capable of ingesting and processing data sources. We have...opportunities all need to be addressed consciously and consistently. Following a series of deliberate experiments, long-term procedural improvements to the
NASA Technical Reports Server (NTRS)
1989-01-01
Technology developed during a joint research program with Langley and Kinetic Systems Corporation led to Kinetic Systems' production of a high speed Computer Automated Measurement and Control (CAMAC) data acquisition system. The study, which involved the use of CAMAC equipment applied to flight simulation, significantly improved the company's technical capability and produced new applications. With Digital Equipment Corporation, Kinetic Systems is marketing the system to government and private companies for flight simulation, fusion research, turbine testing, steelmaking, etc.
ERIC Educational Resources Information Center
Spielberg, Freya; Kurth, Ann; Reidy, William; McKnight, Teka; Dikobe, Wame; Wilson, Charles
2011-01-01
This article highlights findings from an evaluation that explored the impact of mobile versus clinic-based testing, rapid versus central-lab based testing, incentives for testing, and the use of a computer counseling program to guide counseling and automate evaluation in a mobile program reaching people of color at risk for HIV. The program's…
ASTEC: Controls analysis for personal computers
NASA Technical Reports Server (NTRS)
Downing, John P.; Bauer, Frank H.; Thorpe, Christopher J.
1989-01-01
The ASTEC (Analysis and Simulation Tools for Engineering Controls) software is under development at Goddard Space Flight Center (GSFC). The design goal is to provide a wide selection of controls analysis tools at the personal computer level, as well as the capability to upload compute-intensive jobs to a mainframe or supercomputer. The project is a follow-on to the INCA (INteractive Controls Analysis) program that has been developed at GSFC over the past five years. While ASTEC makes use of the algorithms and expertise developed for the INCA program, the user interface was redesigned to take advantage of the capabilities of the personal computer. The design philosophy and the current capabilities of the ASTEC software are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jurrus, Elizabeth; Engel, Dave; Star, Keith
The Adaptive Poisson-Boltzmann Solver (APBS) software was developed to solve the equations of continuum electrostatics for large biomolecular assemblages and has had impact in the study of a broad range of chemical, biological, and biomedical applications. APBS addresses three key technology challenges for understanding solvation and electrostatics in biomedical applications: accurate and efficient models for biomolecular solvation and electrostatics, robust and scalable software for applying those theories to biomolecular systems, and mechanisms for sharing and analyzing biomolecular electrostatics data in the scientific community. To address new research applications and advancing computational capabilities, we have continually updated APBS and its suite of accompanying software since its release in 2001. In this manuscript, we discuss the models and capabilities that have recently been implemented within the APBS software package including: a Poisson-Boltzmann analytical and a semi-analytical solver, an optimized boundary element solver, a geometry-based geometric flow solvation model, a graph theory based algorithm for determining pKa values, and an improved web-based visualization tool for viewing electrostatics.
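For reference, a common statement of the nonlinear Poisson-Boltzmann equation that continuum-electrostatics solvers of this kind address is (conventions for units and signs vary; this form is given only for orientation):

    \nabla \cdot \big[ \varepsilon(\mathbf{r}) \, \nabla \phi(\mathbf{r}) \big]
    \;-\; \bar{\kappa}^{2}(\mathbf{r}) \, \sinh\!\big(\phi(\mathbf{r})\big)
    \;=\; -\,4\pi \, \rho^{f}(\mathbf{r}),

where \phi is the dimensionless electrostatic potential, \varepsilon the position-dependent dielectric coefficient, \bar{\kappa} the modified Debye-Hückel screening coefficient, and \rho^{f} the fixed solute charge density.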
Wind-US Code Physical Modeling Improvements to Complement Hypersonic Testing and Evaluation
NASA Technical Reports Server (NTRS)
Georgiadis, Nicholas J.; Yoder, Dennis A.; Towne, Charles S.; Engblom, William A.; Bhagwandin, Vishal A.; Power, Greg D.; Lankford, Dennis W.; Nelson, Christopher C.
2009-01-01
This report gives an overview of physical modeling enhancements to the Wind-US flow solver which were made to improve the capabilities for simulation of hypersonic flows and the reliability of computations to complement hypersonic testing. The improvements include advanced turbulence models, a bypass transition model, a conjugate (or closely coupled to vehicle structure) conduction-convection heat transfer capability, and an upgraded high-speed combustion solver. A Mach 5 shock-wave boundary layer interaction problem is used to investigate the benefits of k-epsilon and k-omega based explicit algebraic stress turbulence models relative to linear two-equation models. The bypass transition model is validated using data from experiments for incompressible boundary layers and a Mach 7.9 cone flow. The conjugate heat transfer method is validated for a test case involving reacting H2-O2 rocket exhaust over cooled calorimeter panels. A dual-mode scramjet configuration is investigated using both a simplified 1-step kinetics mechanism and an 8-step mechanism. Additionally, variations in the turbulent Prandtl and Schmidt numbers are considered for this scramjet configuration.
Meteor Shower Forecast Improvements from a Survey of All-Sky Network Observations
NASA Technical Reports Server (NTRS)
Moorhead, Althea V.; Sugar, Glenn; Brown, Peter G.; Cooke, William J.
2015-01-01
Meteoroid impacts are capable of damaging spacecraft and potentially ending missions. In order to help spacecraft programs mitigate these risks, NASA's Meteoroid Environment Office (MEO) monitors and predicts meteoroid activity. Temporal variations in near-Earth space are described by the MEO's annual meteor shower forecast, which is based on both past shower activity and model predictions. The MEO and the University of Western Ontario operate sister networks of all-sky meteor cameras. These networks have been in operation for more than 7 years and have computed more than 20,000 meteor orbits. Using these data, we conduct a survey of meteor shower activity in the "fireball" size regime using DBSCAN. For each shower detected in our survey, we compute the date of peak activity and characterize the growth and decay of the shower's activity before and after the peak. These parameters are then incorporated into the annual forecast for an improved treatment of annual activity.
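As an illustration of the clustering step, a minimal DBSCAN sketch on stand-in meteor data is given below (the feature space, scaling, and eps/min_samples values are assumptions, not the survey's actual settings):

    # Cluster meteor radiants/orbits into showers with DBSCAN; points labeled -1
    # are treated as sporadic background. All data and settings are placeholders.
    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Stand-in features per meteor: [solar longitude, radiant RA, radiant Dec, velocity]
    shower = rng.normal([208.0, 152.0, 22.0, 66.0], [1.0, 1.5, 1.0, 1.0], size=(200, 4))
    sporadic = rng.uniform([0, 0, -60, 12], [360, 360, 80, 72], size=(400, 4))
    features = StandardScaler().fit_transform(np.vstack([shower, sporadic]))

    labels = DBSCAN(eps=0.3, min_samples=20).fit_predict(features)
    print("clusters found:", sorted(set(labels) - {-1}),
          "noise points:", int((labels == -1).sum()))

Each recovered cluster can then be characterized by its date of peak activity and activity profile, which is the information folded into the annual forecast.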
NASA Astrophysics Data System (ADS)
Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo
2018-05-01
The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it shows inaccuracy in calculating the failure probability with highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluations and Hessian calculations of the probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
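As a pointer to the Hessian approximation mentioned above, a generic symmetric rank-one (SR1) update, not the article's full RBDO algorithm, can be sketched as:

    # Symmetric rank-one (SR1) update of a Hessian approximation B using the
    # step s = x_new - x_old and the gradient change y = g_new - g_old.
    import numpy as np

    def sr1_update(B, s, y, tol=1e-8):
        r = y - B @ s
        denom = r @ s
        # Standard safeguard: skip the update when the denominator is tiny,
        # which would make the update numerically unstable.
        if abs(denom) < tol * np.linalg.norm(r) * np.linalg.norm(s):
            return B
        return B + np.outer(r, r) / denom

    B = np.eye(2)
    s = np.array([0.1, -0.05])
    y = np.array([0.32, -0.18])   # stand-in gradient difference
    print(sr1_update(B, s, y))

Because the update needs only gradient differences, it avoids evaluating second derivatives of the probabilistic constraints, which is where the computational savings come from.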
Simulation Enabled Safeguards Assessment Methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robert Bean; Trond Bjornard; Thomas Larson
2007-09-01
It is expected that nuclear energy will be a significant component of future energy supplies. New facilities, operating under a strengthened international nonproliferation regime, will be needed. There is good reason to believe that virtual engineering applied to facility design, as well as to safeguards system design, will reduce total project cost and improve efficiency in the design cycle. The Simulation Enabled Safeguards Assessment Methodology (SESAME) has been developed as a software package to provide this capability for nuclear reprocessing facilities. The software architecture is specifically designed for distributed computing, collaborative design efforts, and modular construction to allow step improvements in functionality. Drag-and-drop wireframe construction allows the user to select the desired components from a component warehouse, render the system for 3D visualization, and, linked to a set of physics libraries and/or computational codes, conduct process evaluations of the system they have designed.
The role of thermal and lubricant boundary layers in the transient thermal analysis of spur gears
NASA Technical Reports Server (NTRS)
El-Bayoumy, L. E.; Akin, L. S.; Townsend, D. P.; Choy, F. C.
1989-01-01
An improved convection heat-transfer model has been developed for the prediction of the transient tooth surface temperature of spur gears. The dissipative quality of the lubricating fluid is shown to be limited to the capacity extent of the thermal boundary layer. This phenomenon can be of significance in the determination of the thermal limit of gears accelerating to the point where gear scoring occurs. Steady-state temperature prediction is improved considerably through the use of a variable integration time step that substantially reduces computer time. Computer-generated plots of temperature contours enable the user to animate the propagation of the thermal wave as the gears come into and out of contact, thus contributing to better understanding of this complex problem. This model has a much better capability at predicting gear-tooth temperatures than previous models.
Improved Force Fields for Peptide Nucleic Acids with Optimized Backbone Torsion Parameters.
Jasiński, Maciej; Feig, Michael; Trylska, Joanna
2018-06-06
Peptide nucleic acids are promising nucleic acid analogs for antisense therapies as they can form stable duplex and triplex structures with DNA and RNA. Computational studies of PNA-containing duplexes and triplexes are an important component for guiding their design, yet existing force fields have not been well validated and parametrized with modern computational capabilities. We present updated CHARMM and Amber force fields for PNA that greatly improve the stability of simulated PNA-containing duplexes and triplexes in comparison with experimental structures and allow such systems to be studied on microsecond time scales. The force field modifications focus on reparametrized PNA backbone torsion angles to match high-level quantum mechanics reference energies for a model compound. The microsecond simulations of PNA-PNA, PNA-DNA, PNA-RNA, and PNA-DNA-PNA complexes also allowed a comprehensive analysis of hydration and ion interactions with such systems.
Afify, Ahmed; Haney, Stephan
2016-08-01
Since it was first introduced into the dental world, computer-aided design/computer-aided manufacturing (CAD/CAM) technology has improved dramatically in regards to both data acquisition and fabrication abilities. CAD/CAM is capable of providing well-fitting intra- and extraoral prostheses when sound guidelines are followed. As CAD/CAM technology encompasses both surgical and prosthetic dental applications as well as fixed and removable aspects, it could improve the average quality of dental prostheses compared with the results obtained by conventional manufacturing methods. The purpose of this article is to provide an introduction into the methods in which this technology may be used to enhance the wear and fracture resistance of dentures and overdentures. This article will also showcase two clinical reports in which CAD/CAM technology has been implemented. © 2016 by the American College of Prosthodontists.
CFD research, parallel computation and aerodynamic optimization
NASA Technical Reports Server (NTRS)
Ryan, James S.
1995-01-01
Over five years of research in Computational Fluid Dynamics and its applications are covered in this report. Using CFD as an established tool, aerodynamic optimization on parallel architectures is explored. The objective of this work is to provide better tools to vehicle designers. Submarine design requires accurate force and moment calculations in flow with thick boundary layers and large separated vortices. Low noise production is critical, so flow into the propulsor region must be predicted accurately. The High Speed Civil Transport (HSCT) has been the subject of recent work. This vehicle is to be a passenger vehicle with the capability of cutting overseas flight times by more than half. A successful design must surpass the performance of comparable planes. Fuel economy, other operational costs, environmental impact, and range must all be improved substantially. For all these reasons, improved design tools are required, and these tools must eventually integrate optimization, external aerodynamics, propulsion, structures, heat transfer and other disciplines.
Mechanics of airflow in the human nasal airways.
Doorly, D J; Taylor, D J; Schroter, R C
2008-11-30
The mechanics of airflow in the human nasal airways is reviewed, drawing on the findings of experimental and computational model studies. Modelling inevitably requires simplifications and assumptions, particularly given the complexity of the nasal airways. The processes entailed in modelling the nasal airways (from defining the model, to its production and, finally, validating the results) are critically examined, both for physical models and for computational simulations. Uncertainty still surrounds the appropriateness of the various assumptions made in modelling, particularly with regard to the nature of flow. New results are presented in which high-speed particle image velocimetry (PIV) and direct numerical simulation are applied to investigate the development of flow instability in the nasal cavity. These illustrate some of the improved capabilities afforded by technological developments for future model studies. The need for further improvements in characterising airway geometry and flow, together with promising new methods, is briefly discussed.
Beyond the computer-based patient record: re-engineering with a vision.
Genn, B; Geukers, L
1995-01-01
In order to achieve real benefit from the potential offered by a Computer-Based Patient Record, the capabilities of the technology must be applied along with true re-engineering of healthcare delivery processes. University Hospital recognizes this and is using systems implementation projects as the catalyst for transforming the way we care for our patients. Integration is fundamental to the success of these initiatives, and this must be explicitly planned against an organized systems architecture whose standards are market-driven. University Hospital also recognizes that Community Health Information Networks will offer improved quality of patient care at a reduced overall cost to the system. All of these implementation factors are considered up front as the hospital makes its initial decisions on how to computerize its patient records. This improves our chances for success and will provide a consistent vision to guide the hospital's development of new and better patient care.
NASA Astrophysics Data System (ADS)
Verma, Sudeep; Dewan, Anupam
2018-01-01
The Partially-Averaged Navier-Stokes (PANS) approach has been applied for the first time to model turbulent flow and heat transfer in an ideal Czochralski setup with realistic boundary conditions. This method provides a variable level of resolution ranging from Reynolds-Averaged Navier-Stokes (RANS) modelling to Direct Numerical Simulation (DNS), based on the filter control parameter. For the present case, a low-Re PANS model has been developed for Czochralski melt flow, which includes the effects of Coriolis, centrifugal, buoyancy and surface tension induced forces. The aim of the present study is to assess the improvement in results on switching from the unsteady RANS (URANS) approach to PANS modelling on the same computational mesh. The PANS computed results were found to be in good agreement with the reported experimental, DNS and Large Eddy Simulation (LES) data. A clear improvement in computational accuracy is observed in switching from the URANS approach to the PANS methodology. The computed results further improved with a reduction in the PANS filter width. Further, the capability of the PANS model to capture key characteristics of Czochralski crystal growth is also highlighted. It was observed that the PANS model was able to resolve the three-dimensional turbulent nature of the melt, characteristic flow structures arising due to flow instabilities, and the generation of thermal plumes and vortices in the Czochralski melt.
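For orientation, the PANS filter control parameters referred to above are conventionally defined as the ratios of unresolved to total turbulent kinetic energy and dissipation (this is the standard PANS definition rather than anything specific to this study):

    f_k = \frac{k_u}{k}, \qquad f_\varepsilon = \frac{\varepsilon_u}{\varepsilon},

with f_k = 1 recovering the RANS limit and smaller values of f_k resolving progressively more of the turbulence, approaching DNS-like resolution as f_k \to 0.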
Development of a Vision-Based Situational Awareness Capability for Unmanned Surface Vessels
2017-09-01
This thesis addresses whether a computer vision-based technique can be used to provide a situational awareness (SA) capability for unmanned surface vessels (USVs). The research demonstrated the feasibility of using a computer vision-based approach for this purpose. (Thesis by Ying Jie Benjemin Toh, September 2017.)
Symbolic Computation Using Cellular Automata-Based Hyperdimensional Computing.
Yilmaz, Ozgur
2015-12-01
This letter introduces a novel framework of reservoir computing that is capable of both connectionist machine intelligence and symbolic computation. A cellular automaton is used as the reservoir of dynamical systems. Input is randomly projected onto the initial conditions of automaton cells, and nonlinear computation is performed on the input via application of a rule in the automaton for a period of time. The evolution of the automaton creates a space-time volume of the automaton state space, and it is used as the reservoir. The proposed framework is shown to be capable of long-term memory, and it requires orders of magnitude less computation compared to echo state networks. As the focus of the letter, we suggest that binary reservoir feature vectors can be combined using Boolean operations as in hyperdimensional computing, paving a direct way for concept building and symbolic processing. To demonstrate the capability of the proposed system, we make analogies directly on image data by asking, What is the automobile of air?
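A toy sketch of the two ingredients described above, an elementary cellular automaton evolved from a randomly projected binary input and XOR-based binding of the resulting binary feature vectors as in hyperdimensional computing, is shown below (the rule number, sizes, and projection scheme are illustrative assumptions, not the letter's exact setup):

    # Evolve an elementary cellular automaton from a randomly projected binary input,
    # flatten the space-time volume into a feature vector, and combine two such
    # vectors with XOR (binding) as in hyperdimensional computing.
    import numpy as np

    RULE = 110
    RULE_TABLE = np.array([(RULE >> i) & 1 for i in range(8)], dtype=np.uint8)

    def ca_reservoir(bits, steps=32, width=256, seed=0):
        rng = np.random.default_rng(seed)
        proj = rng.integers(0, width, size=len(bits))        # random projection of the input
        state = np.zeros(width, dtype=np.uint8)
        state[proj[np.asarray(bits, dtype=bool)]] = 1
        volume = [state.copy()]
        for _ in range(steps):
            left, right = np.roll(state, 1), np.roll(state, -1)
            state = RULE_TABLE[4 * left + 2 * state + right]  # apply the rule to each neighborhood
            volume.append(state.copy())
        return np.concatenate(volume)                         # space-time feature vector

    a = ca_reservoir([1, 0, 1, 1, 0, 0, 1, 0])
    b = ca_reservoir([0, 1, 1, 0, 1, 0, 0, 1])
    bound = np.bitwise_xor(a, b)                              # symbolic binding of two concepts
    print(bound.shape, int(bound.sum()))

Bundling (for example, a majority vote over several bound vectors) would complete the usual hyperdimensional algebra, but is omitted here for brevity.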
Chopp-Hurley, Jaclyn N; Brookham, Rebecca L; Dickerson, Clark R
2016-12-01
Biomechanical models are often used to estimate the muscular demands of various activities. However, specific muscle dysfunctions typical of unique clinical populations are rarely considered. Due to iatrogenic tissue damage, pectoralis major capability is markedly reduced in breast cancer population survivors, which could influence arm internal and external rotation muscular strategies. Accordingly, an optimization-based muscle force prediction model was systematically modified to emulate breast cancer population survivors through adjusting pectoralis capability and enforcing an empirical muscular co-activation relationship. Model permutations were evaluated through comparisons between predicted muscle forces and empirically measured muscle activations in survivors. Similarities between empirical data and model outputs were influenced by muscle type, hand force, pectoralis major capability and co-activation constraints. Differences in magnitude were lower when the co-activation constraint was enforced (-18.4% [31.9]) than unenforced (-23.5% [27.6]) (p<0.0001). This research demonstrates that muscle dysfunction in breast cancer population survivors can be reflected through including a capability constraint for pectoralis major. Further refinement of the co-activation constraint for survivors could improve its generalizability across this population and activities. Improving biomechanical models to more accurately represent clinical populations can provide novel information that can help in the development of optimal treatment programs for breast cancer population survivors. Copyright © 2016 Elsevier Ltd. All rights reserved.
Using wide area differential GPS to improve total system error for precision flight operations
NASA Astrophysics Data System (ADS)
Alter, Keith Warren
Total System Error (TSE) refers to an aircraft's total deviation from the desired flight path. TSE can be divided into Navigational System Error (NSE), the error attributable to the aircraft's navigation system, and Flight Technical Error (FTE), the error attributable to pilot or autopilot control. Improvement in either NSE or FTE reduces TSE and leads to the capability to fly more precise flight trajectories. The Federal Aviation Administration's Wide Area Augmentation System (WAAS) became operational for non-safety critical applications in 2000 and will become operational for safety critical applications in 2002. This navigation service will provide precise 3-D positioning (demonstrated to better than 5 meters horizontal and vertical accuracy) for civil aircraft in the United States. Perhaps more importantly, this navigation system, which provides continuous operation across large regions, enables new flight instrumentation concepts which allow pilots to fly aircraft significantly more precisely, both for straight and curved flight paths. This research investigates the capabilities of some of these new concepts, including the Highway-In-The Sky (HITS) display, which not only improves FTE but also reduces pilot workload when compared to conventional flight instrumentation. Augmentation to the HITS display, including perspective terrain and terrain alerting, improves pilot situational awareness. Flight test results from demonstrations in Juneau, AK, and Lake Tahoe, CA, provide evidence of the overall feasibility of integrated, low-cost flight navigation systems based on these concepts. These systems, requiring no more computational power than current-generation low-end desktop computers, have immediate applicability to general aviation flight from Cessnas to business jets and can support safer and ultimately more economical flight operations. Commercial airlines may also, over time, benefit from these new technologies.
Development of AN Innovative Three-Dimensional Complete Body Screening Device - 3D-CBS
NASA Astrophysics Data System (ADS)
Crosetto, D. B.
2004-07-01
This article describes an innovative technological approach that increases the efficiency with which a large number of particles (photons) can be detected and analyzed. The three-dimensional complete body screening (3D-CBS) combines the functional imaging capability of Positron Emission Tomography (PET) with the anatomical imaging capability of Computed Tomography (CT). The novel techniques provide better images in a shorter time with less radiation to the patient. A primary means of accomplishing this is the use of a larger solid angle, but this requires a new electronic technique capable of handling the increased data rate. This technique, combined with an improved and simplified detector assembly, enables the execution of complex real-time algorithms and allows more efficient use of economical crystals. These are the principal features of this invention. A good synergy of advanced techniques in particle detection, together with technological progress in industry (latest FPGA technology) and simple but cost-effective ideas, yields a revolutionary invention. This technology enables a PET efficiency improvement of over 400 times at once, compared to the two- to three-fold improvements achieved every five years during past decades. Details of the electronics are provided, including an IBM PC board with a parallel-processing architecture implemented in FPGA, enabling the execution of a programmable complex real-time algorithm for best detection of photons.
Is extreme learning machine feasible? A theoretical assessment (part II).
Lin, Shaobo; Liu, Xia; Fang, Jian; Xu, Zongben
2015-01-01
An extreme learning machine (ELM) can be regarded as a two-stage feed-forward neural network (FNN) learning system that randomly assigns the connections with and within hidden neurons in the first stage and tunes the connections with output neurons in the second stage. Therefore, ELM training is essentially a linear learning problem, which significantly reduces the computational burden. Numerous applications show that such a computation burden reduction does not degrade the generalization capability. It has, however, remained an open question whether this is true in theory. The aim of this paper is to study the theoretical feasibility of ELM by analyzing the pros and cons of ELM. In the previous part of this topic, we pointed out that via appropriately selected activation functions, ELM does not degrade the generalization capability in the sense of expectation. In this paper, we launch the study in a different direction and show that the randomness of ELM also leads to certain negative consequences. On one hand, we find that the randomness causes an additional uncertainty problem of ELM, both in approximation and learning. On the other hand, we theoretically justify that there also exist activation functions such that the corresponding ELM degrades the generalization capability. In particular, we prove that the generalization capability of ELM with Gaussian kernel is essentially worse than that of FNN with Gaussian kernel. To facilitate the use of ELM, we also provide a remedy to such a degradation. We find that the well-developed coefficient regularization technique can essentially improve the generalization capability. The obtained results reveal the essential characteristic of ELM in a certain sense and give theoretical guidance concerning how to use ELM.
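A minimal numpy sketch of the two-stage ELM scheme discussed above, including the coefficient (ridge) regularization proposed as a remedy, is given below (network size, data, and the regularization constant are illustrative):

    # Extreme learning machine: random hidden layer, output weights obtained by
    # ridge-regularized least squares (the "coefficient regularization" remedy).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 1))
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)

    n_hidden, lam = 100, 1e-3
    W = rng.standard_normal((X.shape[1], n_hidden))  # stage 1: random input weights
    b = rng.standard_normal(n_hidden)                # and biases, never trained
    H = np.tanh(X @ W + b)                           # hidden-layer activations

    # Stage 2: linear problem for the output weights, with ridge regularization.
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

    y_hat = H @ beta
    print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))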
NASA Astrophysics Data System (ADS)
Kucera, P. A.; Burek, T.; Halley-Gotway, J.
2015-12-01
NCAR's Joint Numerical Testbed Program (JNTP) focuses on the evaluation of experimental forecasts of tropical cyclones (TCs), with the goal of developing new research tools and diagnostic evaluation methods that can be transitioned to operations. Recent activities include the development of new TC forecast verification methods and of an adaptable TC display and diagnostic system. The next-generation display and diagnostic system is being developed to support the evaluation needs of the U.S. National Hurricane Center (NHC) and the broader TC research community. The new hurricane display and diagnostic capabilities allow forecasters and research scientists to examine the performance of operational and experimental models in greater depth. The system is built upon modern and flexible technology, including OpenLayers mapping tools that are platform independent. The forecast track and intensity, along with the associated observed track information, are stored in an efficient MySQL database. The system provides an easy-to-use interactive display and diagnostic tools to examine forecast tracks stratified by intensity. Consensus forecasts can be computed and displayed interactively. The system is designed to display information for both real-time and historical TCs. The display configurations are easily adaptable to meet end-user preferences. Ongoing enhancements include improving capabilities for stratification and evaluation of historical best tracks, development and implementation of additional methods to stratify and compute consensus hurricane track and intensity forecasts, and improved graphical display tools. The display is also being enhanced to incorporate gridded forecast, satellite, and sea surface temperature fields. The presentation will provide an overview of the display and diagnostic system development and a demonstration of the current capabilities.
Advances in Computational Capabilities for Hypersonic Flows
NASA Technical Reports Server (NTRS)
Kumar, Ajay; Gnoffo, Peter A.; Moss, James N.; Drummond, J. Philip
1997-01-01
The paper reviews the growth and advances in computational capabilities for hypersonic applications over the period from the mid-1980's to the present day. The current status of code development issues, such as surface and field grid generation, algorithms, physical and chemical modeling, and validation, is provided. A brief description of some of the major codes being used at NASA Langley Research Center for hypersonic continuum and rarefied flows is provided, along with their capabilities and deficiencies. A number of application examples are presented, and future areas of research to enhance the accuracy, reliability, efficiency, and robustness of computational codes are discussed.
Simulation of cryogenic turbopump annular seals
NASA Astrophysics Data System (ADS)
Palazzolo, Alan B.
1992-12-01
The goal of the current work is to develop software that can accurately predict the dynamic coefficients, forces, leakage, and horsepower loss for annular seals, which have the potential to affect the rotordynamic behavior of the pumps. The fruit of last year's research was the computer code SEALPAL, which included capabilities for linearly tapered geometry, the Moody friction factor, and inlet pre-swirl. This code produced results that in most cases compared very well with check cases presented in the literature. The TAMUSEAL I code, which was written to improve SEALPAL by correcting a bug and by adding more accurate integration algorithms and additional capabilities, was then used to predict dynamic coefficients and leakage for the seal of the NASA/Pratt and Whitney Alternate Turbopump Development (ATD) LOX pump.
Methods and Research for Multi-Component Cutting Force Sensing Devices and Approaches in Machining
Liang, Qiaokang; Zhang, Dan; Wu, Wanneng; Zou, Kunlin
2016-01-01
Multi-component cutting force sensing systems applied to cutting tools are gradually becoming the most significant monitoring indicator in manufacturing processes. Their signals have been extensively applied to evaluate the machinability of workpiece materials, predict cutter breakage, estimate cutting tool wear, control machine tool chatter, determine stable machining parameters, and improve surface finish. Robust and effective sensing systems with the capability of monitoring the cutting force in machine operations in real time are crucial for realizing the full potential of the cutting capabilities of computer numerically controlled (CNC) tools. The main objective of this paper is to present a brief review of the existing achievements in the field of multi-component cutting force sensing systems in modern manufacturing. PMID:27854322
Improvement in HPC performance through HIPPI RAID storage
NASA Technical Reports Server (NTRS)
Homan, Blake
1993-01-01
In 1986, RAID (redundant array of inexpensive (or independent) disks) technology was introduced as a viable solution to the I/O bottleneck. A number of different RAID levels were defined in 1987 by the Computer Science Division (EECS) University of California, Berkeley, each with specific advantages and disadvantages. With multiple RAID options available, taking advantage of RAID technology required matching particular RAID levels with specific applications. It was not possible to use one RAID device to address all applications. Maximum Strategy's Gen 4 Storage Server addresses this issue with a new capability called programmable RAID level partitioning. This capability enables users to have multiple RAID levels coexist on the same disks, thereby providing the versatility necessary for multiple concurrent applications.
Updated Panel-Method Computer Program
NASA Technical Reports Server (NTRS)
Ashby, Dale L.
1995-01-01
Panel code PMARC_12 (Panel Method Ames Research Center, version 12) computes potential-flow fields around complex three-dimensional bodies such as complete aircraft models. It contains several advanced features, including internal mathematical modeling of flow, a time-stepping wake model for simulating either steady or unsteady motions, capability for Trefftz-plane computation of induced drag, capability for computation of off-body and on-body streamlines, and capability for computation of boundary-layer parameters by use of a two-dimensional integral boundary-layer method along surface streamlines. Investigators interested in visual representations of phenomena may want to consider obtaining program GVS (ARC-13361), the General Visualization System. GVS is a Silicon Graphics IRIS program created to support the scientific-visualization needs of PMARC_12. GVS is available separately from COSMIC. PMARC_12 is written in standard FORTRAN 77, with the exception of the NAMELIST extension used for input.
Aprà, E; Kowalski, K
2016-03-08
In this paper we discuss the implementation of multireference coupled-cluster formalism with singles, doubles, and noniterative triples (MRCCSD(T)), which is capable of taking advantage of the processing power of the Intel Xeon Phi coprocessor. We discuss the integration of two levels of parallelism underlying the MRCCSD(T) implementation with computational kernels designed to offload the computationally intensive parts of the MRCCSD(T) formalism to Intel Xeon Phi coprocessors. Special attention is given to the enhancement of the parallel performance by task reordering that has improved load balancing in the noniterative part of the MRCCSD(T) calculations. We also discuss aspects regarding efficient optimization and vectorization strategies.
Man-machine analysis of translation and work tasks of Skylab films
NASA Technical Reports Server (NTRS)
Hosler, W. W.; Boelter, J. G.; Morrow, J. R., Jr.; Jackson, J. T.
1979-01-01
An objective approach to determine the concurrent validity of computer-graphic models is real time film analysis. This technique was illustrated through the procedures and results obtained in an evaluation of translation of Skylab mission astronauts. The quantitative analysis was facilitated by the use of an electronic film analyzer, minicomputer, and specifically supportive software. The uses of this technique for human factors research are: (1) validation of theoretical operator models; (2) biokinetic analysis; (3) objective data evaluation; (4) dynamic anthropometry; (5) empirical time-line analysis; and (6) consideration of human variability. Computer assisted techniques for interface design and evaluation have the potential for improving the capability for human factors engineering.
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schreckenghost, Debra L.; Woods, David D.; Potter, Scott S.; Johannesen, Leila; Holloway, Matthew; Forbus, Kenneth D.
1991-01-01
Initial results are reported from a multi-year, interdisciplinary effort to provide guidance and assistance for designers of intelligent systems and their user interfaces. The objective is to achieve more effective human-computer interaction (HCI) for systems with real time fault management capabilities. Intelligent fault management systems within the NASA were evaluated for insight into the design of systems with complex HCI. Preliminary results include: (1) a description of real time fault management in aerospace domains; (2) recommendations and examples for improving intelligent systems design and user interface design; (3) identification of issues requiring further research; and (4) recommendations for a development methodology integrating HCI design into intelligent system design.
Geometric modeling for computer aided design
NASA Technical Reports Server (NTRS)
Schwing, James L.
1992-01-01
The goal was the design and implementation of software to be used in the conceptual design of aerospace vehicles. Several packages and design studies were completed, including two software tools currently used in the conceptual-level design of aerospace vehicles. These tools are the Solid Modeling Aerospace Research Tool (SMART) and the Environment for Software Integration and Execution (EASIE). SMART provides conceptual designers with a rapid prototyping capability and additionally provides initial mass property analysis. EASIE provides a set of interactive utilities that simplify the task of building and executing computer-aided design systems consisting of diverse, stand-alone analysis codes, thereby streamlining the exchange of data between programs, reducing errors, and improving efficiency.
Evaluation of the three-dimensional parabolic flow computer program SHIP
NASA Technical Reports Server (NTRS)
Pan, Y. S.
1978-01-01
The three-dimensional parabolic flow program SHIP, designed for predicting supersonic combustor flow fields, is evaluated to determine its capabilities. The mathematical foundation and numerical procedure are reviewed; simplifications are pointed out and commented upon. The program is then evaluated numerically by applying it to several subsonic and supersonic, turbulent, reacting and nonreacting flow problems. Computational results are compared with available experimental or other analytical data. Good agreement is obtained when the simplifications on which the program is based are justified. Limitations of the program and the needs for improvement and extension are pointed out. The present three-dimensional parabolic flow program appears to be potentially useful for the development of supersonic combustors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Pengchen; Settgast, Randolph R.; Johnson, Scott M.
2014-12-17
GEOS is a massively parallel, multi-physics simulation application utilizing high performance computing (HPC) to address subsurface reservoir stimulation activities with the goal of optimizing current operations and evaluating innovative stimulation methods. GEOS enables coupling of different solvers associated with the various physical processes occurring during reservoir stimulation in unique and sophisticated ways, adapted to various geologic settings, materials, and stimulation methods. Developed at the Lawrence Livermore National Laboratory (LLNL) as a part of a Laboratory-Directed Research and Development (LDRD) Strategic Initiative (SI) project, GEOS represents the culmination of a multi-year, ongoing code development and improvement effort that has leveraged existing code capabilities and staff expertise to design new computational geosciences software.
IPAD 2: Advances in Distributed Data Base Management for CAD/CAM
NASA Technical Reports Server (NTRS)
Bostic, S. W. (Compiler)
1984-01-01
The Integrated Programs for Aerospace-Vehicle Design (IPAD) Project objective is to improve engineering productivity through better use of computer-aided design and manufacturing (CAD/CAM) technology. The focus is on development of technology and associated software for integrated company-wide management of engineering information. The objectives of this conference are as follows: to provide a greater awareness of the critical need by U.S. industry for advancements in distributed CAD/CAM data management capability; to present industry experiences and current and planned research in distributed data base management; and to summarize IPAD data management contributions and their impact on U.S. industry and computer hardware and software vendors.
Global Optimization of N-Maneuver, High-Thrust Trajectories Using Direct Multiple Shooting
NASA Technical Reports Server (NTRS)
Vavrina, Matthew A.; Englander, Jacob A.; Ellison, Donald H.
2016-01-01
The performance of impulsive, gravity-assist trajectories often improves with the inclusion of one or more maneuvers between flybys. However, grid-based scans over the entire design space can become computationally intractable for even one deep-space maneuver, and few global search routines are capable of handling an arbitrary number of maneuvers. To address this difficulty, a trajectory transcription allowing for any number of maneuvers is developed within a multi-objective, global optimization framework for constrained, multiple gravity-assist trajectories. The formulation exploits a robust shooting scheme and analytic derivatives for computational efficiency. The approach is applied to several complex interplanetary problems, achieving notable performance without a user-supplied initial guess.
NASA Technical Reports Server (NTRS)
Liever, Peter A.; West, Jeffrey S.; Harris, Robert E.
2016-01-01
A hybrid Computational Fluid Dynamics and Computational Aero-Acoustics (CFD/CAA) modeling framework has been developed for launch vehicle liftoff acoustic environment predictions. The framework couples the existing highly scalable NASA production CFD code, Loci/CHEM, with a high-order accurate Discontinuous Galerkin solver developed in the same production framework, Loci/THRUST, to accurately resolve and propagate acoustic physics across the entire launch environment. Time-accurate, hybrid RANS/LES CFD modeling is applied to predict the acoustic generation physics at the plume source, and a high-order accurate unstructured-mesh Discontinuous Galerkin (DG) method is employed to propagate acoustic waves away from the source across large distances using high-order accurate schemes. The DG solver is capable of computing 2nd-, 3rd-, and 4th-order Euler solutions for non-linear, conservative acoustic field propagation. Initial application testing and validation have been carried out against high-resolution acoustic data from the Ares Scale Model Acoustic Test (ASMAT) series to evaluate the capabilities and production readiness of the CFD/CAA system to resolve the observed spectrum of acoustic frequency content. This paper presents results from this validation and outlines efforts to mature and improve the computational simulation framework.
Synthesized interstitial lung texture for use in anthropomorphic computational phantoms
NASA Astrophysics Data System (ADS)
Becchetti, Marc F.; Solomon, Justin B.; Segars, W. Paul; Samei, Ehsan
2016-04-01
A realistic model of the anatomical texture of the pulmonary interstitium was developed with the goal of extending the capability of anthropomorphic computational phantoms (e.g., XCAT, Duke University), allowing for more accurate image quality assessment. Contrast-enhanced, high-dose thorax images of a healthy patient from a clinical CT system (Discovery CT750HD, GE Healthcare) with thin (0.625 mm) slices and filtered back-projection (FBP) were used to inform the model. The interstitium, which gives rise to the texture, was defined using 24 volumes of interest (VOIs). These VOIs were selected manually to avoid vasculature, bronchi, and bronchioles. A small-scale Hessian-based line filter was applied to minimize the amount of partial-volumed supernumerary vessels and bronchioles within the VOIs. The texture in the VOIs was characterized using 8 Haralick and 13 gray-level run-length features. A clustered lumpy background (CLB) model, with added noise and blurring to match the CT system, was optimized to resemble the texture in the VOIs using a genetic algorithm with the Mahalanobis distance as a similarity metric between the texture features. The most similar CLB model was then used to generate the interstitial texture to fill the lung. The optimization improved the similarity by 45%. This will substantially enhance the capabilities of anthropomorphic computational phantoms, allowing for more realistic CT simulations.
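The Mahalanobis distance used as the similarity metric in the genetic optimization can be illustrated with a short sketch. This is our own illustrative example, not code from the study: the feature dimensions, the random data standing in for measured VOI features, and the function names are all assumptions.

import numpy as np

def mahalanobis_distance(features_a, features_b, reference_features):
    # Distance between two texture-feature vectors, scaled by the covariance
    # of a reference set of feature vectors (one row per measured sample).
    cov = np.cov(reference_features, rowvar=False)
    cov_inv = np.linalg.pinv(cov)   # pseudo-inverse guards against a singular covariance
    d = np.asarray(features_a, dtype=float) - np.asarray(features_b, dtype=float)
    return float(np.sqrt(d @ cov_inv @ d))

# Hypothetical 21-element feature vectors (8 Haralick + 13 run-length features).
rng = np.random.default_rng(0)
voi_features = rng.normal(size=(24, 21))   # stand-in for features measured in the 24 VOIs
candidate = rng.normal(size=21)            # stand-in for features of a candidate CLB realization
target = voi_features.mean(axis=0)
print(mahalanobis_distance(candidate, target, voi_features))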
NASA Technical Reports Server (NTRS)
Korkegi, R. H.
1983-01-01
The results of a National Research Council study on the effect that advances in computational fluid dynamics (CFD) will have on conventional aeronautical ground testing are reported. Current CFD capabilities include the depiction of linearized inviscid flows and a boundary layer, initial use of Euler codes on supercomputers with automatic grid generation, research and development on the Reynolds-averaged Navier-Stokes (N-S) equations, and preliminary research on solutions to the full N-S equations. Improvement in the range of CFD usage is dependent on the development of more powerful supercomputers, exceeding even the projected abilities of the NASA Numerical Aerodynamic Simulator (1 billion floating-point operations per second). Full representation of the Reynolds-averaged N-S equations will require over one million grid points, a computing level predicted to be available in 15 yr. Present capabilities allow identification of data anomalies, confirmation of data accuracy, and assessment of the adequacy of model design in wind tunnel trials. Wall effects and the Reynolds number in any flight regime can be accounted for during simulation. CFD can actually be more accurate than instrumented tests, since all points in a flow can be modeled with CFD, while they cannot all be monitored with instrumentation in a wind tunnel.
Modeling and Simulation Reliable Spacecraft On-Board Computing
NASA Technical Reports Server (NTRS)
Park, Nohpill
1999-01-01
The proposed project will investigate modeling- and simulation-driven testing and fault tolerance schemes for Spacecraft On-Board Computing, thereby achieving reliable spacecraft telecommunication. A spacecraft communication system has inherent capabilities of providing multipoint and broadcast transmission, connectivity between any two distant nodes within a wide-area coverage, quick network configuration/reconfiguration, rapid allocation of space segment capacity, and distance-insensitive cost. To realize the capabilities mentioned above, both the size and cost of the ground-station terminals have to be reduced by using a reliable, high-throughput, fast, and cost-effective on-board computing system, which is known to be a critical contributor to the overall performance of space mission deployment. Controlled vulnerability of mission data (measured in sensitivity), improved performance (measured in throughput and delay), and fault tolerance (measured in reliability) are some of the most important features of these systems. The system should be thoroughly tested and diagnosed before fault tolerance is employed in the system. Testing and fault tolerance strategies should be driven by accurate performance models (i.e., throughput, delay, reliability, and sensitivity) to find an optimal solution in terms of reliability and cost. The modeling and simulation tools will be integrated with a system architecture module, a testing module, and a fault tolerance module, all of which interact through a central graphical user interface.
Xyce parallel electronic simulator users guide, version 6.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R; Mei, Ting; Russo, Thomas V.
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase, a message-passing parallel implementation, which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.
Xyce parallel electronic simulator users' guide, Version 6.0.1.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R; Mei, Ting; Russo, Thomas V.
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase, a message-passing parallel implementation, which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.
Emerging Security Mechanisms for Medical Cyber Physical Systems.
Kocabas, Ovunc; Soyata, Tolga; Aktas, Mehmet K
2016-01-01
The following decade will witness a surge in remote health-monitoring systems that are based on body-worn monitoring devices. These Medical Cyber Physical Systems (MCPS) will be capable of transmitting the acquired data to a private or public cloud for storage and processing. Machine learning algorithms running in the cloud and processing this data can provide decision support to healthcare professionals. There is no doubt that the security and privacy of the medical data is one of the most important concerns in designing an MCPS. In this paper, we depict the general architecture of an MCPS consisting of four layers: data acquisition, data aggregation, cloud processing, and action. Due to the differences in hardware and communication capabilities of each layer, different encryption schemes must be used to guarantee data privacy within that layer. We survey conventional and emerging encryption schemes based on their ability to provide secure storage, data sharing, and secure computation. Our detailed experimental evaluation of each scheme shows that while the emerging encryption schemes enable exciting new features such as secure sharing and secure computation, they introduce several orders-of-magnitude computational and storage overhead. We conclude our paper by outlining future research directions to improve the usability of the emerging encryption schemes in an MCPS.
Xyce parallel electronic simulator users guide, version 6.0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R; Mei, Ting; Russo, Thomas V.
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase, a message-passing parallel implementation, which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.
Improvements to the FASTEX flutter analysis computer code
NASA Technical Reports Server (NTRS)
Taylor, Ronald F.
1987-01-01
Modifications to the FASTEX flutter analysis computer code (UDFASTEX) are described. The objectives were to increase the problem size capacity of FASTEX, reduce run times by modification of the modal interpolation procedure, and to add new user features. All modifications to the program are operable on the VAX 11/700 series computers under the VAX operating system. Interfaces were provided to aid in the inclusion of alternate aerodynamic and flutter eigenvalue calculations. Plots can be made of the flutter velocity, display and frequency data. A preliminary capability was also developed to plot contours of unsteady pressure amplitude and phase. The relevant equations of motion, modal interpolation procedures, and control system considerations are described and software developments are summarized. Additional information documenting input instructions, procedures, and details of the plate spline algorithm is found in the appendices.
A Computational Approach for Probabilistic Analysis of Water Impact Simulations
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Mason, Brian H.; Lyle, Karen H.
2009-01-01
NASA's development of new concepts for the Crew Exploration Vehicle Orion presents many challenges similar to those worked during the Apollo program in the sixties. However, with improved modeling capabilities, new challenges arise. For example, the use of the commercial code LS-DYNA, although widely used and accepted in the technical community, often involves high-dimensional, time-consuming, and computationally intensive simulations. The challenge is to capture what is learned from a limited number of LS-DYNA simulations to develop models that allow users to conduct interpolation of solutions at a fraction of the computational time. This paper presents a description of the LS-DYNA model, a brief summary of the response surface techniques, the analysis of variance approach used in the sensitivity studies, equations used to estimate impact parameters, results showing conditions that might cause injuries, and concluding remarks.
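A response surface of the kind summarized above can be sketched with a simple quadratic least-squares fit over a handful of simulation results. The sketch below is our own illustration under assumed inputs (impact velocity and pitch angle) and made-up outputs; it is not the model or data from the paper.

import numpy as np

def fit_quadratic_response_surface(x, y):
    # Fit y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2 by least squares
    # for two input parameters, e.g. impact velocity and pitch angle.
    x1, x2 = x[:, 0], x[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def predict(coeffs, x1, x2):
    return coeffs @ np.array([1.0, x1, x2, x1**2, x2**2, x1 * x2])

# Made-up results from a handful of water-impact simulations: inputs are
# impact velocity (m/s) and pitch angle (deg); output is a peak acceleration (g).
x = np.array([[7.0, 0.0], [7.0, 10.0], [9.0, 0.0], [9.0, 10.0],
              [8.0, 5.0], [7.5, 2.5], [8.5, 7.5]])
y = np.array([12.1, 14.0, 16.3, 19.5, 15.2, 13.3, 17.0])
coeffs = fit_quadratic_response_surface(x, y)
print(predict(coeffs, 8.2, 6.0))   # interpolated estimate at an untried condition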
Algorithmic Enhancements for Unsteady Aerodynamics and Combustion Applications
NASA Technical Reports Server (NTRS)
Venkateswaran, Sankaran; Olsen, Michael (Technical Monitor)
2001-01-01
Research in FY01 focused on the analysis and development of enhanced algorithms for unsteady aerodynamics and chemically reacting flowfields. The research was performed in support of NASA Ames' efforts to improve the capabilities of the in-house computational fluid dynamics code, OVERFLOW. Specifically, the research was focused on four areas: (1) investigation of stagnation region effects; (2) unsteady preconditioning dual-time procedures; (3) dissipation formulation for combustion; and (4) time-stepping methods for combustion.
Free-Field Spatialized Aural Cues for Synthetic Environments
1994-09-01
Other than among electronic musicians and a few hobbyists, the Musical Instrument Digital Interface (MIDI) is not widely used. MIDI was developed in 1983 and still has a long way to go in improving its capabilities, but its advantages are numerous: an entire musical score can be stored compactly, whereas the same musical file stored on a computer in one of the various digital sound formats could easily occupy 90 megabytes of disk space.
Report on results of current and future metal casting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unal, Cetin; Carlson, Neil N.
2015-09-28
New modeling capabilities needed to simulate the casting of metallic fuels are added to the Truchas code. In this report we summarize improvements we made in FY2015 in three areas: (1) analysis of new casting experiments conducted with the BCS and EFL designs; (2) simulation of INL's U-Zr casting experiments with the Flow3D computer program; and (3) implementation of a surface tension model in Truchas for the unstructured meshes required to run U-Zr casting simulations.
NASA Astrophysics Data System (ADS)
White, Christopher Joseph
We describe the implementation of sophisticated numerical techniques for general-relativistic magnetohydrodynamics simulations in the Athena++ code framework. Improvements over many existing codes include the use of advanced Riemann solvers and of staggered-mesh constrained transport. Combined with considerations for computational performance and parallel scalability, these allow us to investigate black hole accretion flows with unprecedented accuracy. The capability of the code is demonstrated by exploring magnetically arrested disks.
Programmable computing with a single magnetoresistive element
NASA Astrophysics Data System (ADS)
Ney, A.; Pampuch, C.; Koch, R.; Ploog, K. H.
2003-10-01
The development of transistor-based integrated circuits for modern computing is a story of great success. However, the proved concept for enhancing computational power by continuous miniaturization is approaching its fundamental limits. Alternative approaches consider logic elements that are reconfigurable at run-time to overcome the rigid architecture of the present hardware systems. Implementation of parallel algorithms on such `chameleon' processors has the potential to yield a dramatic increase of computational speed, competitive with that of supercomputers. Owing to their functional flexibility, `chameleon' processors can be readily optimized with respect to any computer application. In conventional microprocessors, information must be transferred to a memory to prevent it from getting lost, because electrically processed information is volatile. Therefore the computational performance can be improved if the logic gate is additionally capable of storing the output. Here we describe a simple hardware concept for a programmable logic element that is based on a single magnetic random access memory (MRAM) cell. It combines the inherent advantage of a non-volatile output with flexible functionality which can be selected at run-time to operate as an AND, OR, NAND or NOR gate.
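As a software analogue of the run-time reconfigurable logic element described above, the short sketch below shows one element computing AND, OR, NAND, or NOR depending on a mode selected at run time. This is purely an illustration of the programmable-gate idea; it does not model the MRAM physics, and the function and mode names are assumptions.

def programmable_gate(a: int, b: int, mode: str) -> int:
    # One logic element whose function is selected at run time; in the
    # hardware concept the result would be stored non-volatilely in the
    # MRAM cell, here it is simply returned.
    if mode == "AND":
        return a & b
    if mode == "OR":
        return a | b
    if mode == "NAND":
        return 1 - (a & b)
    if mode == "NOR":
        return 1 - (a | b)
    raise ValueError(f"unknown mode: {mode}")

# The same element reprogrammed between operations.
print(programmable_gate(1, 0, "OR"))    # 1
print(programmable_gate(1, 0, "NAND"))  # 1
print(programmable_gate(1, 1, "NOR"))   # 0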
Design and implementation of a Windows NT network to support CNC activities
NASA Technical Reports Server (NTRS)
Shearrow, C. A.
1996-01-01
The Manufacturing, Materials, & Processes Technology Division is undergoing dramatic changes to bring its manufacturing practices in line with today's technological revolution. The Division is developing Computer Automated Design and Computer Automated Manufacturing (CAD/CAM) abilities. The development of resource tracking is underway in the form of an accounting software package called Infisy. These two efforts will bring the division into the 1980's with respect to manufacturing processes. Computer Integrated Manufacturing (CIM) is the final phase of change to be implemented. This document is a qualitative study of a CIM application capable of completing the changes necessary to bring the manufacturing practices into the 1990's. The documentation provided in this qualitative research effort includes a survey of the current status of manufacturing in the Manufacturing, Materials, & Processes Technology Division, including the software, hardware, network, and mode of operation. The proposed direction of research includes a network design, the computers to be used, the software to be used, machine-to-computer connections, an estimated timeline for implementation, and a cost estimate. Recommendations for the division's improvement include actions to be taken, software to utilize, and computer configurations.
DURIP: High Performance Computing in Biomathematics Applications
2017-05-10
The goal of this award was to enhance the capabilities of the Department of Applied Mathematics and Statistics (AMS) at the University of California, Santa Cruz (UCSC) to conduct research and research-related education in areas of high performance computing in biomathematics applications.
Web-based applications for building, managing and analysing kinetic models of biological systems.
Lee, Dong-Yup; Saha, Rajib; Yusufi, Faraaz Noor Khan; Park, Wonjun; Karimi, Iftekhar A
2009-01-01
Mathematical modelling and computational analysis play an essential role in improving our capability to elucidate the functions and characteristics of complex biological systems such as metabolic, regulatory and cell signalling pathways. The modelling and concomitant simulation render it possible to predict the cellular behaviour of systems under various genetically and/or environmentally perturbed conditions. This motivates systems biologists/bioengineers/bioinformaticians to develop new tools and applications, allowing non-experts to easily conduct such modelling and analysis. However, among a multitude of systems biology tools developed to date, only a handful of projects have adopted a web-based approach to kinetic modelling. In this report, we evaluate the capabilities and characteristics of current web-based tools in systems biology and identify desirable features, limitations and bottlenecks for further improvements in terms of usability and functionality. A short discussion on software architecture issues involved in web-based applications and the approaches taken by existing tools is included for those interested in developing their own simulation applications.
High capacity reversible watermarking for audio by histogram shifting and predicted error expansion.
Wang, Fei; Xie, Zhaoxin; Chen, Zuo
2014-01-01
Because the watermarking is reversible, the information embedded in audio signals can be extracted while the original audio data are recovered losslessly. Currently, the few reversible audio watermarking algorithms are confronted with the following problems: relatively low SNR (signal-to-noise ratio) of the embedded audio; a large amount of auxiliary embedded location information; and the absence of accurate capacity control capability. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize the prediction coefficients and then apply prediction error expansion to output the stego data. Second, in order to reduce the length of the location map bits, we introduce a histogram shifting scheme. Meanwhile, the prediction error modification threshold for a given embedding capacity can be computed by our proposed scheme. Experiments show that this algorithm improves the SNR of embedded audio signals and the embedding capacity, drastically reduces the location map bit length, and enhances the capacity control capability.
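The core prediction error expansion step can be illustrated with a minimal sketch for a single sample. This is our own illustration of the generic technique, not the paper's optimized scheme (which adds differential-evolution-tuned predictors and histogram shifting); the sample values are made up.

def pee_embed(sample, prediction, bit):
    # Prediction error expansion: the error e = sample - prediction is
    # expanded to 2*e + bit, so the hidden bit rides in the least
    # significant bit of the new error.
    e = sample - prediction
    return prediction + 2 * e + bit

def pee_extract(marked_sample, prediction):
    # Recover the hidden bit and restore the original sample losslessly.
    e_marked = marked_sample - prediction
    bit = e_marked & 1            # least significant bit of the expanded error
    e = (e_marked - bit) // 2
    return prediction + e, bit

original, pred = 1004, 1000       # made-up audio sample and its predicted value
marked = pee_embed(original, pred, 1)
restored, bit = pee_extract(marked, pred)
print(marked, restored, bit)      # 1009 1004 1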
Content-Aware Video Adaptation under Low-Bitrate Constraint
NASA Astrophysics Data System (ADS)
Hsiao, Ming-Ho; Chen, Yi-Wen; Chen, Hua-Tsung; Chou, Kuan-Hung; Lee, Suh-Yin
2007-12-01
With the development of wireless networks and the improvement of mobile device capabilities, video streaming is more and more widespread in such environments. Under conditions of limited resources and inherent constraints, appropriate video adaptation has become one of the most important and challenging issues in wireless multimedia applications. In this paper, we propose a novel content-aware video adaptation in order to effectively utilize resources and improve visual perceptual quality. First, the attention model is derived by analyzing the characteristics of brightness, location, motion vector, and energy features in the compressed domain to reduce computational complexity. Then, through the integration of the attention model, the capability of the client device, and a correlational statistical model, attractive regions of video scenes are derived. The information object (IOB)-weighted rate distortion model is used for adjusting the bit allocation. Finally, the video adaptation scheme dynamically adjusts the video bitstream at the frame level and the object level. Experimental results validate that the proposed scheme achieves better visual quality effectively and efficiently.
Perspective: Advanced particle imaging
Chandler, David W.
2017-01-01
Since the first ion imaging experiment [D. W. Chandler and P. L. Houston, J. Chem. Phys. 87, 1445–1447 (1987)], demonstrating the capability of collecting an image of the photofragments from a unimolecular dissociation event and analyzing that image to obtain the three-dimensional velocity distribution of the fragments, the efficacy and breadth of application of the ion imaging technique have continued to improve and grow. With the addition of velocity mapping, ion/electron centroiding, and slice imaging techniques, the versatility and velocity resolution have been unmatched. Recent improvements in molecular beam, laser, sensor, and computer technology are allowing even more advanced particle imaging experiments, and eventually we can expect multi-mass imaging with co-variance and full coincidence capability on a single shot basis with repetition rates in the kilohertz range. This progress should further enable “complete” experiments—the holy grail of molecular dynamics—where all quantum numbers of reactants and products of a bimolecular scattering event are fully determined and even under our control. PMID:28688442
Improved CPAS Photogrammetric Capabilities for Engineering Development Unit (EDU) Testing
NASA Technical Reports Server (NTRS)
Ray, Eric S.; Bretz, David R.
2013-01-01
This paper focuses on two key improvements to the photogrammetric analysis capabilities of the Capsule Parachute Assembly System (CPAS) for the Orion vehicle. The Engineering Development Unit (EDU) system deploys Drogue and Pilot parachutes via mortar, where an important metric is the muzzle velocity. This can be estimated using a high speed camera pointed along the mortar trajectory. The distance to the camera is computed from the apparent size of features of known dimension. This method was validated with a ground test and compares favorably with simulations. The second major photogrammetric product is measuring the geometry of the Main parachute cluster during steady-state descent using onboard cameras. This is challenging as the current test vehicles are suspended by a single-point attachment unlike earlier stable platforms suspended under a confluence fitting. The mathematical modeling of fly-out angles and projected areas has undergone significant revision. As the test program continues, several lessons were learned about optimizing the camera usage, installation, and settings to obtain the highest quality imagery possible.
Progress in and prospects for fluvial flood modelling.
Wheater, H S
2002-07-15
Recent floods in the UK have raised public and political awareness of flood risk. There is an increasing recognition that flood management and land-use planning are linked, and that decision-support modelling tools are required to address issues of climate and land-use change for integrated catchment management. In this paper, the scientific context for fluvial flood modelling is discussed, current modelling capability is considered and research challenges are identified. Priorities include (i) appropriate representation of spatial precipitation, including scenarios of climate change; (ii) development of a national capability for continuous hydrological simulation of ungauged catchments; (iii) improved scientific understanding of impacts of agricultural land-use and land-management change, and the development of new modelling approaches to represent those impacts; (iv) improved representation of urban flooding, at both local and catchment scale; (v) appropriate parametrizations for hydraulic simulation of in-channel and flood-plain flows, assimilating available ground observations and remotely sensed data; and (vi) a flexible decision-support modelling framework, incorporating developments in computing, data availability, data assimilation and uncertainty analysis.
xSDK Foundations: Toward an Extreme-scale Scientific Software Development Kit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heroux, Michael A.; Bartlett, Roscoe; Demeshko, Irina
Here, extreme-scale computational science increasingly demands multiscale and multiphysics formulations. Combining software developed by independent groups is imperative: no single team has resources for all predictive science and decision support capabilities. Scientific libraries provide high-quality, reusable software components for constructing applications with improved robustness and portability. However, without coordination, many libraries cannot be easily composed. Namespace collisions, inconsistent arguments, lack of third-party software versioning, and additional difficulties make composition costly. The Extreme-scale Scientific Software Development Kit (xSDK) defines community policies to improve code quality and compatibility across independently developed packages (hypre, PETSc, SuperLU, Trilinos, and Alquimia) and provides a foundation for addressing broader issues in software interoperability, performance portability, and sustainability. The xSDK provides turnkey installation of member software and seamless combination of aggregate capabilities, and it marks first steps toward extreme-scale scientific software ecosystems from which future applications can be composed rapidly with assured quality and scalability.
Pathfinder radar development at Sandia National Laboratories
NASA Astrophysics Data System (ADS)
Castillo, Steven
2016-05-01
Since the invention of Synthetic Aperture Radar (SAR) imaging in the 1950's, users or potential users have sought to exploit SAR imagery for a variety of applications, including the earth sciences and defense. At Sandia Laboratories, SAR research and development and associated defense applications grew out of the nuclear weapons program in the 1980's, and over the years SAR has become a highly viable ISR sensor for a variety of tactical applications. Sandia SAR systems excel where real-time, high-resolution, all-weather, day-or-night surveillance is required for developing situational awareness. This presentation will discuss the various aspects of Sandia's airborne ISR capability with respect to issues related to current operational success as well as the future direction of the capability as Sandia seeks to improve the SAR capability it delivers into multiple mission scenarios. Issues discussed include fundamental radar capabilities, advanced exploitation techniques, and human-computer interface (HMI) challenges that are part of the advances required to maintain Sandia's ability to continue to support ever-changing and demanding mission challenges.
A Study on Coexistence Capability Evaluations of the Enhanced Channel Hopping Mechanism in WBANs
Wei, Zhongcheng; Sun, Yongmei; Ji, Yuefeng
2017-01-01
As an important coexistence technology, channel hopping can reduce the interference among Wireless Body Area Networks (WBANs). However, it simultaneously brings some issues, such as energy waste, long latency, and communication interruptions. In this paper, we propose an enhanced channel hopping mechanism that allows multiple WBANs to coexist in the same channel. In order to evaluate the coexistence performance, some critical metrics are designed to reflect the possibility of channel conflict. Furthermore, by taking queuing and non-queuing behaviors into consideration, we present a set of analysis approaches to evaluate the coexistence capability. On the one hand, we present both service-dependent and service-independent analysis models to estimate the number of coexisting WBANs. On the other hand, based on the uniform distribution assumption and the additive property of Poisson streams, we put forward two approximate methods to compute the number of occupied channels. Extensive simulation results demonstrate that our estimation approaches can provide an effective solution for coexistence capability estimation. Moreover, the enhanced channel hopping mechanism can significantly improve the coexistence capability and support a larger arrival rate of WBANs. PMID:28098818
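The uniform-distribution assumption mentioned above can be illustrated with a short sketch that estimates the expected number of occupied channels and a simple channel-conflict probability when each WBAN independently picks one of the available channels at random. This is our own illustrative calculation, not the paper's analysis models; the channel and network counts are made up.

def expected_occupied_channels(n_wbans: int, n_channels: int) -> float:
    # Under a uniform channel-selection assumption, the expected number of
    # occupied channels when n_wbans networks each hop independently and
    # uniformly to one of n_channels channels.
    p_channel_empty = (1.0 - 1.0 / n_channels) ** n_wbans
    return n_channels * (1.0 - p_channel_empty)

def conflict_probability(n_wbans: int, n_channels: int) -> float:
    # Probability that at least two WBANs land on the same channel
    # (birthday-problem style), a simple proxy for channel conflict.
    if n_wbans > n_channels:
        return 1.0
    p_no_conflict = 1.0
    for k in range(n_wbans):
        p_no_conflict *= (n_channels - k) / n_channels
    return 1.0 - p_no_conflict

print(expected_occupied_channels(10, 16))  # e.g. 10 WBANs over 16 channels
print(conflict_probability(10, 16))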
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlson, Joseph; Savage, Martin J.; Gerber, Richard
Imagine being able to predict — with unprecedented accuracy and precision — the structure of the proton and neutron, and the forces between them, directly from the dynamics of quarks and gluons, and then using this information in calculations of the structure and reactions of atomic nuclei and of the properties of dense neutron stars (NSs). Also imagine discovering new and exotic states of matter, and new laws of nature, by being able to collect more experimental data than we dream possible today, analyzing it in real time to feed back into an experiment, and curating the data with full tracking capabilities and with fully distributed data mining capabilities. Making this vision a reality would improve basic scientific understanding, enabling us to precisely calculate, for example, the spectrum of gravity waves emitted during NS coalescence, and would have important societal applications in nuclear energy research, stockpile stewardship, and other areas. This review presents the components and characteristics of the exascale computing ecosystems necessary to realize this vision.
Eddy Current Influences on the Dynamic Behaviour of Magnetic Suspension Systems
NASA Technical Reports Server (NTRS)
Britcher, Colin P.; Bloodgood, Dale V.
1998-01-01
This report will summarize some results from a multi-year research effort at NASA Langley Research Center aimed at the development of an improved capability for practical modelling of eddy current effects in magnetic suspension systems. Particular attention is paid to large-gap systems, although generic results applicable to both large-gap and small-gap systems are presented. It is shown that eddy currents can significantly affect the dynamic behavior of magnetic suspension systems, but that these effects can be amenable to modelling and measurement. Theoretical frameworks are presented, together with comparisons of computed and experimental data particularly related to the Large Angle Magnetic Suspension Test Fixture at NASA Langley Research Center, and the Annular Suspension and Pointing System at Old Dominion University. In both cases, practical computations are capable of providing reasonable estimates of important performance-related parameters. The most difficult case is seen to be that of eddy currents in highly permeable material, due to the low skin depths. Problems associated with specification of material properties and areas for future research are discussed.
NASA Astrophysics Data System (ADS)
Cao, Duc; Moses, Gregory; Delettrez, Jacques
2015-08-01
An implicit, non-local thermal conduction algorithm based on the algorithm developed by Schurtz, Nicolai, and Busquet (SNB) [Schurtz et al., Phys. Plasmas 7, 4238 (2000)] for non-local electron transport is presented and has been implemented in the radiation-hydrodynamics code DRACO. To study the model's effect on DRACO's predictive capability, simulations of shot 60303 from OMEGA are completed using the iSNB model, and the computed shock speed vs. time is compared to experiment. Temperature outputs from the iSNB model are compared with the non-local transport model of Goncharov et al. [Phys. Plasmas 13, 012702 (2006)]. Effects on the adiabat are also examined in a polar drive surrogate simulation. Results show that the iSNB model is not only capable of flux limitation but also of preheat prediction, while remaining numerically robust and sacrificing little computational speed. Additionally, the results provide strong incentive to further modify key parameters within the SNB theory, namely the newly introduced non-local mean free path. This research was supported by the Laboratory for Laser Energetics of the University of Rochester.
Gerritsen, M G; Willemink, M J; Pompe, E; van der Bruggen, T; van Rhenen, A; Lammers, J W J; Wessels, F; Sprengers, R W; de Jong, P A; Minnema, M C
2017-01-01
We performed a prospective study in patients with chemotherapy-induced febrile neutropenia to investigate the diagnostic value of low-dose computed tomography compared to standard chest radiography. The aim was to compare both modalities for the detection of pulmonary infections and to explore the performance of low-dose computed tomography for early detection of invasive fungal disease. The low-dose computed tomography remained blinded during the study. A consensus diagnosis of the fever episode made by an expert panel was used as the reference standard. We included 67 consecutive patients on the first day of febrile neutropenia. According to the consensus diagnosis, 11 patients (16.4%) had pulmonary infections. Sensitivity, specificity, positive predictive value, and negative predictive value were 36%, 93%, 50%, and 88% for radiography, and 73%, 91%, 62%, and 94% for low-dose computed tomography, respectively. An uncorrected McNemar test showed no statistically significant difference (p = 0.197). The mean radiation dose for low-dose computed tomography was 0.24 mSv. Four out of 5 included patients diagnosed with invasive fungal disease had radiographic abnormalities suspicious for invasive fungal disease on the low-dose computed tomography scan made on day 1 of fever, compared to none of the chest radiographs. We conclude that chest radiography has little value in the initial assessment of febrile neutropenia on day 1 for the detection of pulmonary abnormalities. Low-dose computed tomography improves detection of pulmonary infiltrates and seems capable of detecting invasive fungal disease at a very early stage with a low radiation dose.
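The reported test characteristics follow from a standard 2x2 confusion matrix. The sketch below is our own illustration: the exact true/false positive and negative counts are reconstructed from the reported percentages and the 11-of-67 prevalence, so they are an assumption rather than counts taken directly from the study.

def diagnostic_metrics(tp, fp, fn, tn):
    # Standard 2x2 diagnostic test metrics.
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)   # positive predictive value
    npv = tn / (tn + fn)   # negative predictive value
    return sensitivity, specificity, ppv, npv

# Counts reconstructed from the reported percentages (assumption; 11 of 67 infected).
print(diagnostic_metrics(tp=4, fp=4, fn=7, tn=52))   # chest radiography: ~36%, 93%, 50%, 88%
print(diagnostic_metrics(tp=8, fp=5, fn=3, tn=51))   # low-dose CT: ~73%, 91%, 62%, 94%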
Computational Toxicology Advances: Emerging capabilities for data exploration and SAR model development
Ann M. Richard and ClarLynda R. Williams, National Health & Environmental Effects Research Laboratory, US EPA, Research Triangle Park, NC, USA; email: richard.ann@epa.gov
A synthetic design environment for ship design
NASA Technical Reports Server (NTRS)
Chipman, Richard R.
1995-01-01
Rapid advances in computer science and information system technology have made possible the creation of synthetic design environments (SDE) which use virtual prototypes to increase the efficiency and agility of the design process. This next generation of computer-based design tools will rely heavily on simulation and advanced visualization techniques to enable integrated product and process teams to concurrently conceptualize, design, and test a product and its fabrication processes. This paper summarizes a successful demonstration of the feasibility of using a simulation-based design environment in the shipbuilding industry. As computer science and information science technologies have evolved, there have been many attempts to apply and integrate the new capabilities into systems that improve the design process. We see the benefits of those efforts in the abundance of highly reliable, technologically complex products and services in the modern marketplace. Furthermore, the computer-based technologies have been so cost-effective that the improvements embodied in modern products have been accompanied by lowered costs. Today the state-of-the-art in computerized design has advanced so dramatically that the focus is no longer on merely improving design methodology; rather the goal is to revolutionize the entire process by which complex products are conceived, designed, fabricated, tested, deployed, operated, maintained, refurbished and eventually decommissioned. By concurrently addressing all life-cycle issues, the basic decision-making process within an enterprise will be improved dramatically, leading to new levels of quality, innovation, efficiency, and customer responsiveness. By integrating functions and people within an enterprise, such systems will change the fundamental way American industries are organized, creating companies that are more competitive, creative, and productive.
NASA High Performance Computing and Communications program
NASA Technical Reports Server (NTRS)
Holcomb, Lee; Smith, Paul; Hunter, Paul
1994-01-01
The National Aeronautics and Space Administration's HPCC program is part of a new Presidential initiative aimed at producing a 1000-fold increase in supercomputing speed and a 100-fold improvement in available communications capability by 1997. As more advanced technologies are developed under the HPCC program, they will be used to solve NASA's 'Grand Challenge' problems, which include improving the design and simulation of advanced aerospace vehicles, allowing people at remote locations to communicate more effectively and share information, increasing scientists' abilities to model the Earth's climate and forecast global environmental trends, and improving the development of advanced spacecraft. NASA's HPCC program is organized into three projects which are unique to the agency's mission: the Computational Aerosciences (CAS) project, the Earth and Space Sciences (ESS) project, and the Remote Exploration and Experimentation (REE) project. An additional project, the Basic Research and Human Resources (BRHR) project, exists to promote long term research in computer science and engineering and to increase the pool of trained personnel in a variety of scientific disciplines. This document presents an overview of the objectives and organization of these projects, as well as summaries of early accomplishments and the significance, status, and plans for individual research and development programs within each project. Areas of emphasis include benchmarking, testbeds, software and simulation methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCoy, M; Kissel, L
2002-01-29
We are experimenting with a new computing model to be applied to a new computer dedicated to that model. Several LLNL science teams now have computational requirements, evidenced by the mature scientific applications that have been developed over the past five plus years, that far exceed the capability of the institution's computing resources. Thus, there is increased demand for dedicated, powerful parallel computational systems. Computation can, in the coming year, potentially field a capability system that is low cost because it will be based on a model that employs open source software and because it will use PC (IA32-P4) hardware. This incurs significant computer science risk regarding stability and system features but also presents great opportunity. We believe the risks can be managed, but the existence of risk cannot be ignored. In order to justify the budget for this system, we need to make the case that it serves science and, through serving science, serves the institution. That is the point of the meeting and the White Paper that we are proposing to prepare. The questions are listed and the responses received are in this report.
Computer graphics application in the engineering design integration system
NASA Technical Reports Server (NTRS)
Glatt, C. R.; Abel, R. W.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Stewart, W. A.
1975-01-01
The computer graphics aspect of the Engineering Design Integration (EDIN) system and its application to design problems were discussed. Three basic types of computer graphics may be used with the EDIN system for the evaluation of aerospace vehicle preliminary designs: offline graphics systems using vellum-inking or photographic processes, online graphics systems characterized by direct coupled low cost storage tube terminals with limited interactive capabilities, and a minicomputer based refresh terminal offering highly interactive capabilities. The offline systems are characterized by high quality (resolution better than 0.254 mm) and slow turnaround (one to four days). The online systems are characterized by low cost, instant visualization of the computer results, slow line speed (300 BAUD), poor hard copy, and the early limitations on vector graphic input capabilities. The recent acquisition of the Adage 330 Graphic Display system has greatly enhanced the potential for interactive computer aided design.
NASA Applications of Molecular Nanotechnology
NASA Technical Reports Server (NTRS)
Globus, Al; Bailey, David; Han, Jie; Jaffe, Richard; Levit, Creon; Merkle, Ralph; Srivastava, Deepak
1998-01-01
Laboratories throughout the world are rapidly gaining atomically precise control over matter. As this control extends to an ever wider variety of materials, processes and devices, opportunities for applications relevant to NASA's missions will be created. This document surveys a number of future molecular nanotechnology capabilities of aerospace interest. Computer applications, launch vehicle improvements, and active materials appear to be of particular interest. We also list a number of applications for each of NASA's enterprises. If advanced molecular nanotechnology can be developed, almost all of NASA's endeavors will be radically improved. In particular, a sufficiently advanced molecular nanotechnology can arguably bring large scale space colonization within our grasp.
NASA Technical Reports Server (NTRS)
Chase, W. D.
1975-01-01
The calligraphic chromatic projector described was developed to improve the perceived realism of visual scene simulation ('out-the-window visuals'). The optical arrangement of the projector is illustrated and discussed. The device permits drawing 2000 vectors in as many as 500 colors, all above critical flicker frequencies, and use of high scene resolution and brightness at an acceptable level to the pilot, with the maximum system capabilities of 1000 lines and 1000 fL. The device for generating the colors is discussed, along with an experiment conducted to demonstrate potential improvements in performance and pilot opinion. Current research work and future research plans are noted.
The architecture of the High Performance Storage System (HPSS)
NASA Technical Reports Server (NTRS)
Teaff, Danny; Watson, Dick; Coyne, Bob
1994-01-01
The rapid growth in the size of datasets has caused a serious imbalance in I/O and storage system performance and functionality relative to application requirements and the capabilities of other system components. The High Performance Storage System (HPSS) is a scalable, next-generation storage system that will meet the functionality and performance requirements of large-scale scientific and commercial computing environments. Our goal is to improve the performance and capacity of storage by two orders of magnitude or more over what is available in the general or mass marketplace today. We are also providing corresponding improvements in architecture and functionality. This paper describes the architecture and functionality of HPSS.
NASA Astrophysics Data System (ADS)
Ramli, Razamin; Tein, Lim Huai
2016-08-01
A good work schedule can improve hospital operations by providing better coverage with appropriate staffing levels in managing nurse personnel. Hence, constructing the best nurse work schedule is the appropriate effort. In doing so, an improved selection operator in the Evolutionary Algorithm (EA) strategy for a nurse scheduling problem (NSP) is proposed. The smart and efficient scheduling procedures were considered. Computation of the performance of each potential solution or schedule was done through fitness evaluation. The best so far solution was obtained via special Maximax&Maximin (MM) parent selection operator embedded in the EA, which fulfilled all constraints considered in the NSP.
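The abstract does not spell out how the Maximax&Maximin operator chooses parents, so the following is only a plausible sketch under stated assumptions: each candidate schedule is scored by a vector of per-constraint fitness values, one parent is taken for the best peak score (maximax) and the other for the best worst-case score (maximin).

    # Hedged sketch of a Maximax & Maximin parent-selection step for an EA.
    # The vector-valued fitness and this pairing rule are assumptions,
    # not the paper's exact operator.
    def maximax_maximin_parents(population, fitness_vectors):
        # maximax parent: best single-constraint peak anywhere in its score vector
        maximax = max(population, key=lambda s: max(fitness_vectors[s]))
        # maximin parent: best worst-case score, favouring balanced schedules
        maximin = max(population, key=lambda s: min(fitness_vectors[s]))
        return maximax, maximin

    # toy usage with three candidate schedules scored on three constraints
    pop = ["s1", "s2", "s3"]
    scores = {"s1": [0.9, 0.2, 0.4], "s2": [0.6, 0.6, 0.5], "s3": [0.7, 0.3, 0.8]}
    print(maximax_maximin_parents(pop, scores))  # ('s1', 's2')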
Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing
NASA Technical Reports Server (NTRS)
Kory, Carol L.
2001-01-01
The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications requiring the design and production of many novel devices in record time. To achieve this, the TWT industry heavily relies on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI) becomes a major consideration, as well as suspected causes such as reflections within the TWT. To experimentally investigate effects of the physical TWT on ISI would be prohibitively expensive, as it would require manufacturing numerous amplifiers, in addition to acquiring the required digital hardware. As an alternative, the time-domain TWT interaction model developed here provides the capability to establish a computational test bench where ISI or bit error rate can be simulated as a function of TWT operating parameters and component geometries. Intermodulation products, harmonic generation, and backward waves can also be monitored with the model for similar correlations. The advancements in computational capabilities and corresponding potential improvements in TWT performance may prove to be the enabling technologies for realizing unprecedented data rates for near real time transmission of the increasingly larger volumes of data demanded by planned commercial and Government satellite communications applications. This work is in support of the Cross Enterprise Technology Development Program in Headquarters' Advanced Technology & Mission Studies Division and the Air Force Office of Scientific Research Small Business Technology Transfer programs.
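As a rough illustration of how a time-domain amplifier model can feed a computational bit-error-rate test bench, the sketch below pushes a random bit stream through a hypothetical intersymbol-interference filter plus noise and counts decision errors; the filter taps and noise level are placeholders for quantities a TWT interaction model would supply.

    # Minimal Monte Carlo BER sketch; the channel taps and noise level are
    # hypothetical placeholders for distortion an amplifier model would supply.
    import numpy as np

    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, 100_000)
    symbols = 2.0 * bits - 1.0                      # BPSK mapping

    isi_taps = np.array([1.0, 0.25, 0.1])           # assumed intersymbol interference
    received = np.convolve(symbols, isi_taps)[:symbols.size]
    received += rng.normal(scale=0.5, size=received.size)   # additive noise

    decided = (received > 0).astype(int)
    ber = np.mean(decided != bits)
    print(f"estimated BER: {ber:.4f}")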
Development of an Aeroelastic Modeling Capability for Transient Nozzle Side Load Analysis
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Zhao, Xiang; Zhang, Sijun; Chen, Yen-Sen
2013-01-01
Lateral nozzle forces are known to cause severe structural damage to any new rocket engine in development during test. While three-dimensional, transient, turbulent, chemically reacting computational fluid dynamics methodology has been demonstrated to capture major side load physics with rigid nozzles, hot-fire tests often show nozzle structure deformation during major side load events, leading to structural damages if structural strengthening measures were not taken. The modeling picture is incomplete without the capability to address the two-way responses between the structure and fluid. The objective of this study is to develop a coupled aeroelastic modeling capability by implementing the necessary structural dynamics component into an anchored computational fluid dynamics methodology. The computational fluid dynamics component is based on an unstructured-grid, pressure-based computational fluid dynamics formulation, while the computational structural dynamics component is developed in the framework of modal analysis. Transient aeroelastic nozzle startup analyses of the Block I Space Shuttle Main Engine at sea level were performed. The computed results from the aeroelastic nozzle modeling are presented.
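In a modal-analysis framework like the one described, the structural response is advanced mode by mode with uncoupled second-order equations driven by the projected fluid loads. A minimal single-mode sketch, with an assumed frequency, damping ratio, and load pulse, is:

    # Single generalized mode advanced in time: q'' + 2*zeta*omega*q' + omega^2*q = f(t),
    # where f(t) stands in for the CFD pressure load projected onto the mode shape.
    # The frequency, damping, and load history are assumed illustrative values.
    import numpy as np

    omega, zeta = 2.0 * np.pi * 50.0, 0.02          # 50 Hz mode, 2% damping (assumed)
    dt, steps = 1.0e-4, 2000
    q, qdot = 0.0, 0.0
    history = []

    for n in range(steps):
        f = 1.0 if n * dt < 0.05 else 0.0           # crude stand-in for a side-load pulse
        qddot = f - 2.0 * zeta * omega * qdot - omega**2 * q
        qdot += dt * qddot                          # semi-implicit Euler update
        q += dt * qdot
        history.append(q)

    print(max(history), min(history))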
Rapid solution of large-scale systems of equations
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.
1994-01-01
The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration and flutter modes, structural optimization and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared memory computers and distributed memory computers. This presentation covers general-purpose, highly efficient algorithms for generation/assembly of element matrices, solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis and optimization. All algorithms are coded in FORTRAN for shared memory computers and many are adapted to distributed memory computers. The capability and numerical performance of these algorithms will be addressed.
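For the classes of problems listed, the two core kernels, a linear solve and extraction of the lowest vibration modes, can be illustrated compactly with standard sparse solvers; the tridiagonal stiffness-like matrix below is a generic placeholder, not one of the paper's structural models.

    # Generic sketch of the two kernels: K x = f and the smallest eigenpairs of K.
    # The tridiagonal "stiffness" matrix is only a placeholder problem.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve, eigsh

    n = 1000
    main = 2.0 * np.ones(n)
    off = -1.0 * np.ones(n - 1)
    K = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")

    f = np.ones(n)
    x = spsolve(K, f)                       # linear static solution
    vals, vecs = eigsh(K, k=4, which="SM")  # four lowest "vibration" eigenvalues
    print(x[:3], vals)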
Neural network controller development for a magnetically suspended flywheel energy storage system
NASA Technical Reports Server (NTRS)
Fittro, Roger L.; Pang, Da-Chen; Anand, Davinder K.
1994-01-01
A neural network controller has been developed to accommodate disturbances and nonlinearities and improve the robustness of a magnetically suspended flywheel energy storage system. The controller is trained using the back propagation-through-time technique incorporated with a time-averaging scheme. The resulting nonlinear neural network controller improves system performance by adapting flywheel stiffness and damping based on operating speed. In addition, a hybrid multi-layered neural network controller is developed off-line which is capable of improving system performance even further. All of the research presented in this paper was implemented via a magnetic bearing computer simulation. However, careful attention was paid to developing a practical methodology which will make future application to the actual bearing system fairly straightforward.
NASA Astrophysics Data System (ADS)
Jedlovec, G.; Molthan, A.; Zavodsky, B.; Case, J.; Lafontaine, F.
2010-12-01
The NASA Short-term Prediction Research and Transition (SPoRT) Center focuses on the transition of unique observations and research capabilities to the operational weather community, with a goal of improving short-term forecasts on a regional scale. Advances in research computing have led to “Climate in a Box” systems, with hardware configurations capable of producing high resolution, near real-time weather forecasts, but with footprints, power, and cooling requirements that are comparable to desktop systems. The SPoRT Center has developed several capabilities for incorporating unique NASA research capabilities and observations with real-time weather forecasts. Planned utilization includes the development of a fully-cycled data assimilation system used to drive 36-48 hour forecasts produced by the NASA Unified version of the Weather Research and Forecasting (WRF) model (NU-WRF). The horsepower provided by the “Climate in a Box” system is expected to facilitate the assimilation of vertical profiles of temperature and moisture provided by the Atmospheric Infrared Sounder (AIRS) aboard the NASA Aqua satellite. In addition, the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments aboard NASA’s Aqua and Terra satellites provide high-resolution sea surface temperatures and vegetation characteristics. The development of MODIS normalized difference vegetation index (NDVI) composites for use within the NASA Land Information System (LIS) will assist in the characterization of vegetation, and subsequently the surface albedo and processes related to soil moisture. Through application of satellite simulators, NASA satellite instruments can be used to examine forecast model errors in cloud cover and other characteristics. Through the aforementioned application of the “Climate in a Box” system and NU-WRF capabilities, an end goal is the establishment of a real-time forecast system that fully integrates modeling and analysis capabilities developed within the NASA SPoRT Center, with benefits provided to the operational forecasting community.
NASA Technical Reports Server (NTRS)
Jedlovec, Gary J.; Molthan, Andrew L.; Zavodsky, Bradley; Case, Jonathan L.; LaFontaine, Frank J.
2010-01-01
The NASA Short-term Prediction Research and Transition (SPoRT) Center focuses on the transition of unique observations and research capabilities to the operational weather community, with a goal of improving short-term forecasts on a regional scale. Advances in research computing have led to "Climate in a Box" systems, with hardware configurations capable of producing high resolution, near real-time weather forecasts, but with footprints, power, and cooling requirements that are comparable to desktop systems. The SPoRT Center has developed several capabilities for incorporating unique NASA research capabilities and observations with real-time weather forecasts. Planned utilization includes the development of a fully-cycled data assimilation system used to drive 36-48 hour forecasts produced by the NASA Unified version of the Weather Research and Forecasting (WRF) model (NU-WRF). The horsepower provided by the "Climate in a Box" system is expected to facilitate the assimilation of vertical profiles of temperature and moisture provided by the Atmospheric Infrared Sounder (AIRS) aboard the NASA Aqua satellite. In addition, the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments aboard NASA's Aqua and Terra satellites provide high-resolution sea surface temperatures and vegetation characteristics. The development of MODIS normalized difference vegetation index (NDVI) composites for use within the NASA Land Information System (LIS) will assist in the characterization of vegetation, and subsequently the surface albedo and processes related to soil moisture. Through application of satellite simulators, NASA satellite instruments can be used to examine forecast model errors in cloud cover and other characteristics. Through the aforementioned application of the "Climate in a Box" system and NU-WRF capabilities, an end goal is the establishment of a real-time forecast system that fully integrates modeling and analysis capabilities developed within the NASA SPoRT Center, with benefits provided to the operational forecasting community.
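The NDVI composites mentioned are built from the standard normalized-difference formula, NDVI = (NIR - Red) / (NIR + Red). A minimal sketch of compositing by the per-pixel maximum over several days, with the reflectance band arrays assumed as inputs, is:

    # NDVI = (NIR - Red) / (NIR + Red), composited by taking the per-pixel maximum
    # over a stack of daily scenes; the input reflectance arrays are assumed.
    import numpy as np

    def ndvi(nir, red):
        return (nir - red) / np.clip(nir + red, 1e-6, None)

    def max_value_composite(daily_nir, daily_red):
        stack = np.stack([ndvi(n, r) for n, r in zip(daily_nir, daily_red)])
        return stack.max(axis=0)

    # toy 2x2 example with two "days" of reflectances
    nir_days = [np.array([[0.5, 0.6], [0.4, 0.7]]), np.array([[0.55, 0.5], [0.45, 0.65]])]
    red_days = [np.array([[0.2, 0.1], [0.3, 0.2]]), np.array([[0.15, 0.2], [0.25, 0.2]])]
    print(max_value_composite(nir_days, red_days))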
Middleware Architecture for Ambient Intelligence in the Networked Home
NASA Astrophysics Data System (ADS)
Georgantas, Nikolaos; Issarny, Valerie; Mokhtar, Sonia Ben; Bromberg, Yerom-David; Bianco, Sebastien; Thomson, Graham; Raverdy, Pierre-Guillaume; Urbieta, Aitor; Cardoso, Roberto Speicys
With computing and communication capabilities now embedded in most physical objects of the surrounding environment and most users carrying wireless computing devices, the Ambient Intelligence (AmI) / pervasive computing vision [28] pioneered by Mark Weiser [32] is becoming a reality. Devices carried by nomadic users can seamlessly network with a variety of devices, both stationary and mobile, both nearby and remote, providing a wide range of functional capabilities, from base sensing and actuating to rich applications (e.g., smart spaces). This then allows the dynamic deployment of pervasive applications, which dynamically compose functional capabilities accessible in the pervasive network at the given time and place of an application request.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clouse, C. J.; Edwards, M. J.; McCoy, M. G.
2015-07-07
Through its Advanced Scientific Computing (ASC) and Inertial Confinement Fusion (ICF) code development efforts, Lawrence Livermore National Laboratory (LLNL) provides a world-leading numerical simulation capability for the National HED/ICF program in support of the Stockpile Stewardship Program (SSP). In addition, the ASC effort provides high performance computing platform capabilities upon which these codes are run. LLNL remains committed to, and will work with, the national HED/ICF program community to help ensure numerical simulation needs are met and to make those capabilities available, consistent with programmatic priorities and available resources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
GEL, Aytekin; Jiao, Yang; Emady, Heather
Two major challenges hinder the effective use and adoption of multiphase computational fluid dynamics tools by the industry. The first is the need for significant computational resources, which is inversely proportional to the accuracy of solutions due to the computational intensity of the algorithms. The second barrier is assessing the prediction credibility and confidence in the simulation results. In this project, a multi-tiered approach has been proposed under four broad activities to overcome these challenges while addressing all of the objectives outlined in FOA-0001238 through Phases 1 and 2 of the project. The present report consists of the results for only Phase 1, which was the funded performance period. From the start of the project, all of the objectives outlined in the FOA were addressed through four major activity tasks in an integrated and balanced fashion to improve adoption of the MFIX suite of solvers for industrial use. The first task aimed to improve the performance of MFIX-DEM, specifically targeting peak performance on Intel Xeon and Xeon Phi based systems, which are expected to be one of the primary high-performance computing platforms both affordable and available for the industrial users in the next two to five years. However, due to a number of changes in the course of the project, the scope of the performance-improvement task was significantly reduced to avoid duplicate work. Hence, more emphasis was placed on the other three tasks as discussed below. The second task aimed at physical modeling enhancements through implementation of polydispersity capability and validation of heat transfer models in MFIX. An extended verification and validation (V&V) study was performed for the new polydispersity feature implemented in MFIX-DEM both for granular and coupled gas-solid flows. The features of the polydispersity capability and results for an industrially relevant problem were disseminated through journal papers (one published and one under review at the time of writing of the final technical report). As part of the validation efforts, another industrially relevant problem of interest based on rotary drums was studied for several modes of heat transfer and results were presented in conferences. The third task was aimed at an important and unique contribution of the project, which was to develop a unified uncertainty quantification framework by integrating MFIX-DEM with a graphical user interface (GUI) driven uncertainty quantification (UQ) engine, i.e., MFIX-GUI and PSUADE. The goal was to enable a user with only modest knowledge of statistics to effectively utilize the UQ framework offered with MFIX-DEM Phi to perform UQ analysis routinely. For Phase 1, a proof-of-concept demonstration of the proposed framework was completed and shared. Direct industry involvement was one of the key virtues of this project, which was performed through the fourth task. For this purpose, even at the proposal stage, the project team received strong interest in the proposed capabilities from two major corporations, which were further expanded throughout Phase 1, and a new collaboration with another major corporation from the chemical industry was also initiated. The level of interest received and continued collaboration for the project during Phase 1 clearly shows the relevance and potential impact of the project for the industrial users.
Parallel computers - Estimate errors caused by imprecise data
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik; Bernat, Andrew; Villa, Elsa; Mariscal, Yvonne
1991-01-01
A new approach to the problem of estimating errors caused by imprecise data is proposed in the context of software engineering. A software device is used to produce an ideal solution to the problem, when the computer is capable of computing errors of arbitrary programs. The software engineering aspect of this problem is to describe a device for computing the error estimates in software terms and then to provide precise numbers with error estimates to the user. The feasibility of the program capable of computing both some quantity and its error estimate in the range of possible measurement errors is demonstrated.
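One simple way to realize a program that returns both a quantity and an error estimate over the range of possible measurement errors is interval arithmetic; the sketch below is a generic illustration of that idea, not the authors' specific device.

    # Toy interval arithmetic: each value carries [lo, hi] bounds, so the result
    # of a computation carries the range induced by imprecise inputs.
    class Interval:
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)
        def __mul__(self, other):
            products = [self.lo * other.lo, self.lo * other.hi,
                        self.hi * other.lo, self.hi * other.hi]
            return Interval(min(products), max(products))
        def __repr__(self):
            return f"[{self.lo}, {self.hi}]"

    # measurement 3.0 +/- 0.1 multiplied by measurement 2.0 +/- 0.2, plus a constant
    x = Interval(2.9, 3.1)
    y = Interval(1.8, 2.2)
    print(x * y + Interval(1.0, 1.0))   # value together with guaranteed error bounds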
NASA Technical Reports Server (NTRS)
Gillian, Ronnie E.; Lotts, Christine G.
1988-01-01
The Computational Structural Mechanics (CSM) Activity at Langley Research Center is developing methods for structural analysis on modern computers. To facilitate that research effort, an applications development environment has been constructed to insulate the researcher from the many computer operating systems of a widely distributed computer network. The CSM Testbed development system was ported to the Numerical Aerodynamic Simulator (NAS) Cray-2, at the Ames Research Center, to provide a high end computational capability. This paper describes the implementation experiences, the resulting capability, and the future directions for the Testbed on supercomputers.
From computer-assisted intervention research to clinical impact: The need for a holistic approach.
Ourselin, Sébastien; Emberton, Mark; Vercauteren, Tom
2016-10-01
The early days of the field of medical image computing (MIC) and computer-assisted intervention (CAI), when publishing a strong self-contained methodological algorithm was enough to produce impact, are over. As a community, we now have substantial responsibility to translate our scientific progress into improved patient care. In the field of computer-assisted interventions, the emphasis is also shifting from the mere use of well-known established imaging modalities and position trackers to the design and combination of innovative sensing, elaborate computational models and fine-grained clinical workflow analysis to create devices with unprecedented capabilities. The barriers to translating such devices in the complex and understandably heavily regulated surgical and interventional environment can seem daunting. Whether we leave the translation task mostly to our industrial partners or welcome, as researchers, an important share of it is up to us. We argue that embracing the complexity of surgical and interventional sciences is mandatory to the evolution of the field. Being able to do so requires large-scale infrastructure and a critical mass of expertise that very few research centres have. In this paper, we emphasise the need for a holistic approach to computer-assisted interventions where clinical, scientific, engineering and regulatory expertise are combined as a means of moving towards clinical impact. To ensure that the breadth of infrastructure and expertise required for translational computer-assisted intervention research does not lead to a situation where the field advances only thanks to a handful of exceptionally large research centres, we also advocate that solutions need to be designed to lower the barriers to entry. Inspired by fields such as particle physics and astronomy, we claim that centralised very large innovation centres with state-of-the-art technology and health technology assessment capabilities backed by core support staff and open interoperability standards need to be accessible to the wider computer-assisted intervention research community. Copyright © 2016. Published by Elsevier B.V.
Altman, Michael D.; Bardhan, Jaydeep P.; White, Jacob K.; Tidor, Bruce
2009-01-01
We present a boundary-element method (BEM) implementation for accurately solving problems in biomolecular electrostatics using the linearized Poisson–Boltzmann equation. Motivating this implementation is the desire to create a solver capable of precisely describing the geometries and topologies prevalent in continuum models of biological molecules. This implementation is enabled by the synthesis of four technologies developed or implemented specifically for this work. First, molecular and accessible surfaces used to describe dielectric and ion-exclusion boundaries were discretized with curved boundary elements that faithfully reproduce molecular geometries. Second, we avoided explicitly forming the dense BEM matrices and instead solved the linear systems with a preconditioned iterative method (GMRES), using a matrix compression algorithm (FFTSVD) to accelerate matrix-vector multiplication. Third, robust numerical integration methods were employed to accurately evaluate singular and near-singular integrals over the curved boundary elements. Finally, we present a general boundary-integral approach capable of modeling an arbitrary number of embedded homogeneous dielectric regions with differing dielectric constants, possible salt treatment, and point charges. A comparison of the presented BEM implementation and standard finite-difference techniques demonstrates that for certain classes of electrostatic calculations, such as determining absolute electrostatic solvation and rigid-binding free energies, the improved convergence properties of the BEM approach can have a significant impact on computed energetics. We also demonstrate that the improved accuracy offered by the curved-element BEM is important when more sophisticated techniques, such as non-rigid-binding models, are used to compute the relative electrostatic effects of molecular modifications. In addition, we show that electrostatic calculations requiring multiple solves using the same molecular geometry, such as charge optimization or component analysis, can be computed to high accuracy using the presented BEM approach, in compute times comparable to traditional finite-difference methods. PMID:18567005
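The solver strategy described, avoiding explicit dense BEM matrices by pairing GMRES with a fast matrix-vector product, has the following generic shape; the fast multiply here is only a stand-in, and FFTSVD itself is not reproduced.

    # Matrix-free GMRES sketch: the dense operator is never formed; only a
    # matvec callback is supplied. The "fast_matvec" below is a placeholder
    # for a compression scheme such as FFTSVD.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    n = 500
    rng = np.random.default_rng(1)
    A_hidden = np.eye(n) + 0.01 * rng.standard_normal((n, n))  # stands in for the BEM operator

    def fast_matvec(v):
        return A_hidden @ v     # real codes replace this with a compressed product

    op = LinearOperator((n, n), matvec=fast_matvec)
    b = rng.standard_normal(n)
    x, info = gmres(op, b)
    print(info, np.linalg.norm(fast_matvec(x) - b))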
Validation of High-Fidelity CFD Simulations for Rocket Injector Design
NASA Technical Reports Server (NTRS)
Tucker, P. Kevin; Menon, Suresh; Merkle, Charles L.; Oefelein, Joseph C.; Yang, Vigor
2008-01-01
Computational fluid dynamics (CFD) has the potential to improve the historical rocket injector design process by evaluating the sensitivity of performance and injector-driven thermal environments to the details of the injector geometry and key operational parameters. Methodical verification and validation efforts on a range of coaxial injector elements have shown the current production CFD capability must be improved in order to quantitatively impact the injector design process. This paper documents the status of a focused effort to compare and understand the predictive capabilities and computational requirements of a range of CFD methodologies on a set of single element injector model problems. The steady Reynolds-Averaged Navier-Stokes (RANS), unsteady Reynolds-Averaged Navier-Stokes (URANS) and three different approaches using the Large Eddy Simulation (LES) technique were used to simulate the initial model problem, a single element coaxial injector using gaseous oxygen and gaseous hydrogen propellants. While one high-fidelity LES result matches the experimental combustion chamber wall heat flux very well, there is no monotonic convergence to the data with increasing computational tool fidelity. Systematic evaluation of key flow field regions such as the flame zone, the head end recirculation zone and the downstream near wall zone has shed significant, though as yet incomplete, light on the complex, underlying causes for the performance level of each technique.
The JPL Library information retrieval system
NASA Technical Reports Server (NTRS)
Walsh, J.
1975-01-01
The development, capabilities, and products of the computer-based retrieval system of the Jet Propulsion Laboratory Library are described. The system handles books and documents, produces a book catalog, and provides a machine search capability. Programs and documentation are available to the public through NASA's computer software dissemination program.
CHEETAH: A fast thermochemical code for detonation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fried, L.E.
1993-11-01
For more than 20 years, TIGER has been the benchmark thermochemical code in the energetic materials community. TIGER has been widely used because it gives good detonation parameters in a very short period of time. Despite its success, TIGER is beginning to show its age. The program's chemical equilibrium solver frequently crashes, especially when dealing with many chemical species. It often fails to find the C-J point. Finally, there are many inconveniences for the user stemming from the program's roots in pre-modern FORTRAN. These inconveniences often lead to mistakes in preparing input files and thus erroneous results. We are producing a modern version of TIGER, which combines the best features of the old program with new capabilities, better computational algorithms, and improved packaging. The new code, which will evolve out of TIGER in the next few years, will be called "CHEETAH." Many of the capabilities that will be put into CHEETAH are inspired by the thermochemical code CHEQ. The new capabilities of CHEETAH are: calculate trace levels of chemical compounds for environmental analysis; kinetics capability: CHEETAH will predict chemical compositions as a function of time given individual chemical reaction rates. Initial application: carbon condensation; CHEETAH will incorporate partial reactions; CHEETAH will be based on computer-optimized JCZ3 and BKW parameters. These parameters will be fit to over 20 years of data collected at LLNL. We will run CHEETAH thousands of times to determine the best possible parameter sets; CHEETAH will fit C-J data to JWLs, and also predict full-wall and half-wall cylinder velocities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Buhl, Fred; Haves, Philip
2008-09-20
EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most amount of run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.
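Code profiling of the kind used to find the dominant subroutines can be illustrated with Python's built-in profiler; EnergyPlus itself is compiled code profiled with other tools, so this only shows the general pattern of ranking hot spots by cumulative time.

    # Generic profiling pattern: run a workload under cProfile and rank the
    # functions by cumulative time to find optimization targets.
    import cProfile
    import pstats

    def inner_kernel(n):
        return sum(i * i for i in range(n))

    def simulation_step():
        return [inner_kernel(20_000) for _ in range(50)]

    profiler = cProfile.Profile()
    profiler.enable()
    simulation_step()
    profiler.disable()

    stats = pstats.Stats(profiler)
    stats.sort_stats("cumulative").print_stats(5)   # top 5 hot spots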
Darmanis, Spyridon; Toms, Andrew; Durman, Robert; Moore, Donna; Eyres, Keith
2007-07-01
To reduce the operating time in computer-assisted navigated total knee replacement (TKR), by improving communication between the infrared camera and the trackers placed on the patient. The innovation involves placing a routinely used laser pointer on top of the camera, so that the infrared cameras focus precisely on the trackers located on the knee to be operated on. A prospective randomized study was performed involving 40 patients divided into two groups, A and B. Both groups underwent navigated TKR, but for group B patients a laser pointer was used to improve the targeting capabilities of the cameras. Without the laser pointer, the camera had to move a mean 9.2 times in order to identify the trackers. With the introduction of the laser pointer, this was reduced to 0.9 times. Accordingly, the additional mean time required without the laser pointer was 11.6 minutes. Time delays are a major problem in computer-assisted surgery, and our technical suggestion can contribute towards reducing the delays associated with this particular application.
NASA Astrophysics Data System (ADS)
McCray, Wilmon Wil L., Jr.
The research was prompted by a need to assess the process improvement, quality management and analytical techniques taught to students in undergraduate and graduate systems engineering and computing science (e.g., software engineering, computer science, and information technology) degree programs at U.S. colleges and universities, techniques that can be applied to quantitatively manage processes for performance. Everyone involved in executing repeatable processes in the software and systems development lifecycle needs to become familiar with the concepts of quantitative management, statistical thinking, process improvement methods and how they relate to process-performance. Organizations are starting to embrace the de facto Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI) models as process improvement frameworks to improve business process performance. High maturity process areas in the CMMI model imply the use of analytical, statistical, quantitative management techniques, and process performance modeling to identify and eliminate sources of variation, continually improve process-performance, reduce cost and predict future outcomes. The research study identifies and provides a detailed discussion of the gap analysis findings of process improvement and quantitative analysis techniques taught in U.S. university systems engineering and computing science degree programs, gaps that exist in the literature, and a comparison analysis which identifies the gaps that exist between the SEI's "healthy ingredients" of a process performance model and courses taught in U.S. university degree programs. The research also heightens awareness that academicians have conducted little research on applicable statistics and quantitative techniques that can be used to demonstrate high maturity as implied in the CMMI models. The research also includes a Monte Carlo simulation optimization model and dashboard that demonstrates the use of statistical methods, statistical process control, sensitivity analysis, quantitative and optimization techniques to establish a baseline and predict future customer satisfaction index scores (outcomes). The American Customer Satisfaction Index (ACSI) model and industry benchmarks were used as a framework for the simulation model.
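A stripped-down version of the kind of Monte Carlo baseline-and-prediction model described, with control limits in the spirit of statistical process control, might look like the following; the driver weights and distributions are invented for illustration and are not taken from the ACSI model.

    # Monte Carlo sketch: simulate a customer-satisfaction index from uncertain
    # driver scores, then report a baseline mean and 3-sigma control limits.
    # Weights and distributions are illustrative assumptions only.
    import numpy as np

    rng = np.random.default_rng(42)
    trials = 10_000
    quality = rng.normal(80, 5, trials)       # perceived quality driver
    value = rng.normal(75, 7, trials)         # perceived value driver
    expect = rng.normal(78, 4, trials)        # customer expectations driver

    index = 0.5 * quality + 0.3 * value + 0.2 * expect
    mean, sigma = index.mean(), index.std()
    print(f"baseline index: {mean:.1f}")
    print(f"control limits: {mean - 3 * sigma:.1f} .. {mean + 3 * sigma:.1f}")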
ERIC Educational Resources Information Center
Beale, Ivan L.
2005-01-01
Computer assisted learning (CAL) can involve a computerised intelligent learning environment, defined as an environment capable of automatically, dynamically and continuously adapting to the learning context. One aspect of this adaptive capability involves automatic adjustment of instructional procedures in response to each learner's performance,…
A Man-Machine System for Contemporary Counseling Practice: Diagnosis and Prediction.
ERIC Educational Resources Information Center
Roach, Arthur J.
This paper looks at present and future capabilities for diagnosis and prediction in computer-based guidance efforts and reviews the problems and potentials which will accompany the implementation of such capabilities. In addition to necessary procedural refinement in prediction, future developments in computer-based educational and career…
7 CFR 4290.504 - Equipment and office requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... office requirements. (a) Computer capability. You must have a personal computer with access to the Internet and be able to use this equipment to prepare reports, for which you will receive the necessary software, and transmit such reports to the Secretary. In addition, you must have the capability to send and...
WPS mediation: An approach to process geospatial data on different computing backends
NASA Astrophysics Data System (ADS)
Giuliani, Gregory; Nativi, Stefano; Lehmann, Anthony; Ray, Nicolas
2012-10-01
The OGC Web Processing Service (WPS) specification allows generating information by processing distributed geospatial data made available through Spatial Data Infrastructures (SDIs). However, current SDIs have limited analytical capacities and various problems emerge when trying to use them in data and computing-intensive domains such as environmental sciences. These problems are usually not solvable, or only partially solvable, using a single computing resource. Therefore, the Geographic Information (GI) community is trying to benefit from the superior storage and computing capabilities offered by distributed computing (e.g., Grids, Clouds) related methods and technologies. Currently, there is no commonly agreed approach to grid-enable WPS. No implementation allows one to seamlessly execute a geoprocessing calculation following user requirements on different computing backends, ranging from a stand-alone GIS server up to computer clusters and large Grid infrastructures. Considering this issue, this paper presents a proof of concept by mediating different geospatial and Grid software packages, and by proposing an extension of the WPS specification through two optional parameters. The applicability of this approach will be demonstrated using a Normalized Difference Vegetation Index (NDVI) mediated WPS process, highlighting benefits, and issues that need to be further investigated to improve performances.
Microscope self-calibration based on micro laser line imaging and soft computing algorithms
NASA Astrophysics Data System (ADS)
Apolinar Muñoz Rodríguez, J.
2018-06-01
A technique to perform microscope self-calibration via a micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are accomplished through the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute the three-dimensional vision by means of the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves accuracy of the traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.
Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems
NASA Technical Reports Server (NTRS)
Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.
2000-01-01
The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many sub-domains called blocks, and solve the governing equations over these blocks. The dynamic load balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computations. In environments with computers of different architecture, operating systems, CPU speed, memory size, load, and network speed, balancing the loads and managing the communication between processors becomes crucial. Load balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computation environment and algorithms. These tools are dynamic in nature because of the changes in the computer environment during execution time. More recently, these tools were extended to a second operating system: NT. In this paper, the problems associated with this application will be discussed. Also, the developed algorithms were combined with the load sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results will be presented on running a NASA based code ADPAC to demonstrate the developed tools for dynamic load balancing.
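The core assignment problem, distributing blocks of differing cost across processors of differing speed, can be sketched with a greedy rule that always places the next-heaviest block on the currently least-loaded processor; this is a static simplification of the dynamic, communication-aware balancing the paper describes.

    # Greedy static sketch of block-to-processor load balancing: heaviest blocks
    # first, each placed on the processor with the smallest current weighted load.
    # Real dynamic balancing would also track communication and changing machine load.
    import heapq

    def assign_blocks(block_costs, processor_speeds):
        # heap entries are (current_load, processor_index); load = work / speed
        heap = [(0.0, p) for p in range(len(processor_speeds))]
        heapq.heapify(heap)
        assignment = {p: [] for p in range(len(processor_speeds))}
        for b, cost in sorted(enumerate(block_costs), key=lambda kv: -kv[1]):
            load, p = heapq.heappop(heap)
            assignment[p].append(b)
            heapq.heappush(heap, (load + cost / processor_speeds[p], p))
        return assignment

    print(assign_blocks([5, 3, 8, 2, 7, 4], processor_speeds=[1.0, 2.0]))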
Exploiting Parallel R in the Cloud with SPRINT
Piotrowski, M.; McGilvary, G.A.; Sloan, T. M.; Mewissen, M.; Lloyd, A.D.; Forster, T.; Mitchell, L.; Ghazal, P.; Hill, J.
2012-01-01
Background: Advances in DNA Microarray devices and next-generation massively parallel DNA sequencing platforms have led to an exponential growth in data availability but the arising opportunities require adequate computing resources. High Performance Computing (HPC) in the Cloud offers an affordable way of meeting this need. Objectives: Bioconductor, a popular tool for high-throughput genomic data analysis, is distributed as add-on modules for the R statistical programming language but R has no native capabilities for exploiting multi-processor architectures. SPRINT is an R package that enables easy access to HPC for genomics researchers. This paper investigates: setting up and running SPRINT-enabled genomic analyses on Amazon’s Elastic Compute Cloud (EC2), the advantages of submitting applications to EC2 from different parts of the world and, if resource underutilization can improve application performance. Methods: The SPRINT parallel implementations of correlation, permutation testing, partitioning around medoids and the multi-purpose papply have been benchmarked on data sets of various size on Amazon EC2. Jobs have been submitted from both the UK and Thailand to investigate monetary differences. Results: It is possible to obtain good, scalable performance but the level of improvement is dependent upon the nature of algorithm. Resource underutilization can further improve the time to result. End-user’s location impacts on costs due to factors such as local taxation. Conclusions: Although not designed to satisfy HPC requirements, Amazon EC2 and cloud computing in general provides an interesting alternative and provides new possibilities for smaller organisations with limited funds. PMID:23223611
Exploiting parallel R in the cloud with SPRINT.
Piotrowski, M; McGilvary, G A; Sloan, T M; Mewissen, M; Lloyd, A D; Forster, T; Mitchell, L; Ghazal, P; Hill, J
2013-01-01
Advances in DNA Microarray devices and next-generation massively parallel DNA sequencing platforms have led to an exponential growth in data availability but the arising opportunities require adequate computing resources. High Performance Computing (HPC) in the Cloud offers an affordable way of meeting this need. Bioconductor, a popular tool for high-throughput genomic data analysis, is distributed as add-on modules for the R statistical programming language but R has no native capabilities for exploiting multi-processor architectures. SPRINT is an R package that enables easy access to HPC for genomics researchers. This paper investigates: setting up and running SPRINT-enabled genomic analyses on Amazon's Elastic Compute Cloud (EC2), the advantages of submitting applications to EC2 from different parts of the world and, if resource underutilization can improve application performance. The SPRINT parallel implementations of correlation, permutation testing, partitioning around medoids and the multi-purpose papply have been benchmarked on data sets of various size on Amazon EC2. Jobs have been submitted from both the UK and Thailand to investigate monetary differences. It is possible to obtain good, scalable performance but the level of improvement is dependent upon the nature of the algorithm. Resource underutilization can further improve the time to result. End-user's location impacts on costs due to factors such as local taxation. Although not designed to satisfy HPC requirements, Amazon EC2 and cloud computing in general provides an interesting alternative and provides new possibilities for smaller organisations with limited funds.
Rapid Automated Aircraft Simulation Model Updating from Flight Data
NASA Technical Reports Server (NTRS)
Brian, Geoff; Morelli, Eugene A.
2011-01-01
Techniques to identify aircraft aerodynamic characteristics from flight measurements and compute corrections to an existing simulation model of a research aircraft were investigated. The purpose of the research was to develop a process enabling rapid automated updating of aircraft simulation models using flight data and apply this capability to all flight regimes, including flight envelope extremes. The process presented has the potential to improve the efficiency of envelope expansion flight testing, revision of control system properties, and the development of high-fidelity simulators for pilot training.
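A common ingredient of such automated model updating is equation-error parameter estimation: regress a measured force or moment coefficient onto the model states and take the least-squares coefficients as corrections to the existing simulation. The sketch below invents a small pitching-moment example; the signal names, data, and baseline values are placeholders, not the paper's results.

    # Equation-error sketch: fit Cm = Cm0 + Cm_alpha*alpha + Cm_q*qhat + Cm_de*de
    # to synthetic "flight" data with ordinary least squares, then compare the
    # identified coefficients against an assumed baseline simulation model.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 500
    alpha = rng.uniform(-0.1, 0.2, n)           # angle of attack, rad
    qhat = rng.uniform(-0.05, 0.05, n)          # normalized pitch rate
    de = rng.uniform(-0.15, 0.15, n)            # elevator deflection, rad

    true = np.array([0.02, -0.6, -4.0, -1.1])   # made-up "flight" coefficients
    X = np.column_stack([np.ones(n), alpha, qhat, de])
    Cm = X @ true + rng.normal(scale=0.002, size=n)

    identified, *_ = np.linalg.lstsq(X, Cm, rcond=None)
    baseline = np.array([0.00, -0.5, -3.5, -1.0])   # assumed existing simulation values
    print("corrections to apply:", identified - baseline)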
Assessment of the National Combustion Code
NASA Technical Reports Server (NTRS)
Liu, Nan-Suey; Iannetti, Anthony; Shih, Tsan-Hsing
2007-01-01
The advancements made during the last decade in the areas of combustion modeling, numerical simulation, and computing platforms have greatly facilitated the use of CFD based tools in the development of combustion technology. Further development of verification, validation and uncertainty quantification will have profound impact on the reliability and utility of these CFD based tools. The objectives of the present effort are to establish a baseline for the National Combustion Code (NCC) and experimental data, as well as to document current capabilities and identify gaps for further improvements.
Parallel-Vector Algorithm For Rapid Structural Analysis
NASA Technical Reports Server (NTRS)
Agarwal, Tarun R.; Nguyen, Duc T.; Storaasli, Olaf O.
1993-01-01
New algorithm developed to overcome deficiency of skyline storage scheme by use of variable-band storage scheme. Exploits both parallel and vector capabilities of modern high-performance computers. Gives engineers and designers opportunity to include more design variables and constraints during optimization of structures. Enables use of more refined finite-element meshes to obtain improved understanding of complex behaviors of aerospace structures leading to better, safer designs. Not only attractive for current supercomputers but also for next generation of shared-memory supercomputers.
NASA Technical Reports Server (NTRS)
Molthan, Andrew; Case, Jonathan; Venner, Jason; Moreno-Madrinan, Max J.; Delgado, Francisco
2012-01-01
Two projects at NASA Marshall Space Flight Center have collaborated to develop a high resolution weather forecast model for Mesoamerica: The NASA Short-term Prediction Research and Transition (SPoRT) Center, which integrates unique NASA satellite and weather forecast modeling capabilities into the operational weather forecasting community. NASA's SERVIR Program, which integrates satellite observations, ground-based data, and forecast models to improve disaster response in Central America, the Caribbean, Africa, and the Himalayas.
The Shock and Vibration Bulletin. Part 1. Welcome, Keynote Address, Invited Papers.
1980-09-01
modes. Turning and pointing such a structure is a bit like aiming a wet noodle floating in a bowl of water. If you do it very slowly, it can be done...effective plastic strain 7P can be computed at each finite difference mesh point for each instant of time. Furthermore, the plastic work effected...attempted at any instant. In somewhat similar vein, digital control systems have the inherent capability to improve the performance of response