High Performance Computer Cluster for Theoretical Studies of Roaming in Chemical Reactions
2016-08-30
Final Report: High-performance Computer Cluster for Theoretical Studies of Roaming in Chemical Reactions (sponsoring/monitoring agency: U.S. Army Research Office, P.O. Box 12211, Research Triangle Park, NC 27709-2211). A dedicated high-performance computer cluster was...
Advanced Certification Program for Computer Graphic Specialists. Final Performance Report.
ERIC Educational Resources Information Center
Parkland Coll., Champaign, IL.
A pioneer program in computer graphics was implemented at Parkland College (Illinois) to meet the demand for specialized technicians to visualize data generated on high performance computers. In summer 1989, 23 students were accepted into the pilot program. Courses included C programming, calculus and analytic geometry, computer graphics, and…
Chemical calculations on Cray computers
NASA Technical Reports Server (NTRS)
Taylor, Peter R.; Bauschlicher, Charles W., Jr.; Schwenke, David W.
1989-01-01
The influence of recent developments in supercomputing on computational chemistry is discussed, with particular reference to Cray computers and their pipelined vector/limited parallel architectures. After reviewing Cray hardware and software, the performance of different elementary program structures is examined, and effective methods for improving program performance are outlined. The computational strategies appropriate for obtaining optimum performance in applications to quantum chemistry and dynamics are discussed. Finally, some discussion is given of new developments and future hardware and software improvements.
Performance Evaluation in Network-Based Parallel Computing
NASA Technical Reports Server (NTRS)
Dezhgosha, Kamyar
1996-01-01
Network-based parallel computing is emerging as a cost-effective alternative for solving many problems which would otherwise require supercomputers or massively parallel computers. The primary objective of this project has been to conduct experimental research on performance evaluation for clustered parallel computing. First, a testbed was established by augmenting our existing network of SUN SPARCs with PVM (Parallel Virtual Machine), a software system for linking clusters of machines. Second, a set of three basic applications was selected: a parallel search, a parallel sort, and a parallel matrix multiplication. These application programs were implemented in the C programming language under PVM. Third, we conducted performance evaluation under various configurations and problem sizes. Alternative parallel computing models and workload allocations for application programs were explored. The performance metric was limited to elapsed time or response time, which in the context of parallel computing can be expressed in terms of speedup. The results reveal that the overhead of communication latency between processes is in many cases the factor that restricts performance; that is, coarse-grain parallelism, which requires less frequent communication between processes, results in higher performance in network-based computing. Finally, we are in the final stages of installing an Asynchronous Transfer Mode (ATM) switch and four ATM interfaces (each 155 Mbps), which will allow us to extend our study to newer applications, performance metrics, and configurations.
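The speedup figure of merit used in this study is easy to reproduce from elapsed times. A minimal sketch in Python, using hypothetical timings rather than the project's PVM measurements:

```python
# Sketch: speedup and parallel efficiency from elapsed (wall-clock) times.
# All timing values below are hypothetical placeholders, not the report's data.

def speedup(t_serial: float, t_parallel: float) -> float:
    """Speedup = elapsed time on one machine / elapsed time on p machines."""
    return t_serial / t_parallel

def efficiency(t_serial: float, t_parallel: float, p: int) -> float:
    """Efficiency = speedup / p; it drops as communication overhead grows."""
    return speedup(t_serial, t_parallel) / p

t1 = 120.0                               # seconds on a single workstation (hypothetical)
timings = {2: 70.0, 4: 42.0, 8: 30.0}    # seconds on p workstations (hypothetical)
for p, tp in timings.items():
    print(f"p={p}: speedup={speedup(t1, tp):.2f}, efficiency={efficiency(t1, tp, p):.2f}")
```

Coarse-grain workloads keep the efficiency figure closer to 1.0, which is the behaviour the abstract reports for network-based computing.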
Update of aircraft profile data for the Integrated Noise Model computer program, vol 1: final report
DOT National Transportation Integrated Search
1992-03-01
This report provides aircraft takeoff and landing profiles, aircraft aerodynamic performance coefficients and engine performance coefficients for the aircraft data base (Database 9) in the Integrated Noise Model (INM) computer program. Flight profile...
ERIC Educational Resources Information Center
Mobray, Deborah, Ed.
Papers on local area networks (LANs), modelling techniques, software improvement, capacity planning, software engineering, microcomputers and end user computing, cost accounting and chargeback, configuration and performance management, and benchmarking presented at this conference include: (1) "Theoretical Performance Analysis of Virtual…
Performance Comparison of Mainframe, Workstations, Clusters, and Desktop Computers
NASA Technical Reports Server (NTRS)
Farley, Douglas L.
2005-01-01
A performance evaluation of a variety of computers frequently found in a scientific or engineering research environment was conducted using synthetic and application program benchmarks. From a performance perspective, emerging commodity processors have superior performance relative to legacy mainframe computers. In many cases, the PC clusters exhibited performance comparable with traditional mainframe hardware when 8-12 processors were used. The main advantage of the PC clusters was their cost: regardless of whether the clusters were built from new computers or created from retired computers, their performance-to-cost ratio was superior to that of the legacy mainframe computers. Finally, the typical annual maintenance cost of legacy mainframe computers is several times the cost of new equipment such as multiprocessor PC workstations. The savings from eliminating the annual maintenance fee on legacy hardware can result in a yearly increase in total computational capability for an organization.
JPL IGS Analysis Center Report, 2001-2003
NASA Technical Reports Server (NTRS)
Heflin, M. B.; Bar-Sever, Y. E.; Jefferson, D. C.; Meyer, R. F.; Newport, B. J.; Vigue-Rodi, Y.; Webb, F. H.; Zumberge, J. F.
2004-01-01
Three GPS orbit and clock products are currently provided by JPL for consideration by the IGS. Each differs in its latency and quality, with later results being more accurate. Results are typically available in both IGS and GIPSY formats via anonymous ftp. Current performance based on comparisons with the IGS final products is summarized. Orbit performance was determined by computing the 3D RMS difference between each JPL product and the IGS final orbits based on 15 minute estimates from the sp3 files. Clock performance was computed as the RMS difference after subtracting a linear trend based on 15 minute estimates from the sp3 files.
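The two comparison metrics described in this summary are straightforward to compute once the per-epoch differences are in hand. A minimal sketch in Python, with synthetic arrays standing in for values read from sp3 files (file parsing omitted; all numbers are hypothetical):

```python
import numpy as np

def orbit_3d_rms(dxyz: np.ndarray) -> float:
    """3D RMS of per-epoch position differences (rows are [dx, dy, dz] at 15-min epochs)."""
    return float(np.sqrt(np.mean(np.sum(dxyz**2, axis=1))))

def clock_rms_detrended(t: np.ndarray, dclk: np.ndarray) -> float:
    """RMS of clock differences after removing a best-fit linear trend."""
    slope, intercept = np.polyfit(t, dclk, 1)
    residual = dclk - (slope * t + intercept)
    return float(np.sqrt(np.mean(residual**2)))

# Hypothetical example: 96 epochs at 15-minute spacing over one day.
t = np.arange(96) * 900.0                                    # seconds
dxyz = np.random.normal(scale=0.03, size=(96, 3))            # metres, synthetic
dclk = 1e-9 * t + np.random.normal(scale=0.1e-9, size=96)    # seconds, synthetic
print(orbit_3d_rms(dxyz), clock_rms_detrended(t, dclk))
```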
Final Report Extreme Computing and U.S. Competitiveness DOE Award. DE-FG02-11ER26087/DE-SC0008764
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mustain, Christopher J.
The Council has acted on each of the grant deliverables during the funding period. The deliverables are: (1) convening the Council’s High Performance Computing Advisory Committee (HPCAC) on a bi-annual basis; (2) broadening public awareness of high performance computing (HPC) and exascale developments; (3) assessing the industrial applications of extreme computing; and (4) establishing a policy and business case for an exascale economy.
NASA Astrophysics Data System (ADS)
Ahn, Sul-Ah; Jung, Youngim
2016-10-01
The research activities of computational physicists utilizing high-performance computing are analyzed by bibliometric approaches. This study aims at providing computational physicists utilizing high-performance computing, and policy planners, with useful bibliometric results for an assessment of research activities. To achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles from Elsevier's Scopus database covering the period 2004-2013. We ranked authors in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.
Data Network Weather Service Reporting - Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael Frey
2012-08-30
A final report is made of a three-year effort to develop a new forecasting paradigm for computer network performance. This effort was made in coordination with Fermi Lab's construction of the e-Weather Center.
ERIC Educational Resources Information Center
Zheng, Lanqin
2016-01-01
This meta-analysis examined research on the effects of self-regulated learning scaffolds on academic performance in computer-based learning environments from 2004 to 2015. A total of 29 articles met inclusion criteria and were included in the final analysis with a total sample size of 2,648 students. Moderator analyses were performed using a…
ERIC Educational Resources Information Center
Lee, Connie W.; Hinson, Tony M.
This publication is the final report of a 21-month project designed to (1) expand and refine the computer capabilities of the Vocational-Technical Education Consortium of States (V-TECS) to ensure rapid data access for generating routine and special occupational data-based reports; (2) develop and implement a computer storage and retrieval system…
NASA Astrophysics Data System (ADS)
Imamura, Seigo; Ono, Kenji; Yokokawa, Mitsuo
2016-07-01
Ensemble computing, which is an instance of capacity computing, is an effective computing scenario for exascale parallel supercomputers. In ensemble computing, there are multiple linear systems associated with a common coefficient matrix. We improve the performance of iterative solvers for multiple vectors by solving them at the same time, that is, by solving for the product of the matrices. We implemented several iterative methods and compared their performance. The maximum performance on Sparc VIIIfx was 7.6 times higher than that of a naïve implementation. Finally, to deal with the different convergence processes of linear systems, we introduced a control method to eliminate the calculation of already converged vectors.
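The core idea, amortizing one coefficient matrix over many right-hand-side vectors and skipping vectors that have already converged, can be illustrated with a simple blocked Jacobi iteration. This is a generic sketch, not the solvers or the SPARC implementation benchmarked in the paper; the matrix and right-hand sides are synthetic.

```python
import numpy as np

def jacobi_multi_rhs(A, B, tol=1e-8, max_iter=1000):
    """Solve A X = B for many right-hand sides at once (columns of B).

    Each sweep updates all still-active columns with one matrix-matrix
    product, and columns whose residual has converged are excluded from
    further work -- the 'control method' idea described in the abstract.
    """
    D = np.diag(A)
    R = A - np.diag(D)
    X = np.zeros_like(B)
    active = np.ones(B.shape[1], dtype=bool)
    for _ in range(max_iter):
        idx = np.where(active)[0]
        if idx.size == 0:
            break
        # One update for all active columns: matrix-matrix instead of matrix-vector.
        X[:, idx] = (B[:, idx] - R @ X[:, idx]) / D[:, None]
        res = np.linalg.norm(A @ X[:, idx] - B[:, idx], axis=0)
        active[idx[res < tol]] = False
    return X

# Synthetic diagonally dominant system with 8 right-hand sides.
n, m = 200, 8
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) * 0.1 + np.eye(n) * n
B = rng.standard_normal((n, m))
X = jacobi_multi_rhs(A, B)
print(np.max(np.abs(A @ X - B)))   # residual check
```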
Enhancement of the Probabilistic CEramic Matrix Composite ANalyzer (PCEMCAN) Computer Code
NASA Technical Reports Server (NTRS)
Shah, Ashwin
2000-01-01
This report is the final technical report for Order No. C-78019-J, entitled "Enhancement of the Probabilistic Ceramic Matrix Composite Analyzer (PCEMCAN) Computer Code." The scope of the enhancement is the inclusion of probabilistic evaluation of the D-matrix terms in the MAT2 and MAT9 material property cards (available in the CEMCAN code) for MSC/NASTRAN. Technical activities performed during the period June 1, 1999 through September 3, 1999 are summarized, and the final version of the enhanced PCEMCAN code and revisions to the User's Manual are delivered along with this report. The performed activities were discussed with the NASA Project Manager during the performance period. The enhanced capabilities have been demonstrated using sample problems.
34 CFR 5.72 - Records available.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Department or not. (c) Contracts. (1) Contract instruments. (2) Portions of offers reflecting final prices submitted in negotiated procurements. (d) Reports on grantee, contractor, or provider performance. Final... projects, such as films, computer software, other copyrightable materials and reports of inventions, will...
Bias and Stability of Single Variable Classifiers for Feature Ranking and Selection
Fakhraei, Shobeir; Soltanian-Zadeh, Hamid; Fotouhi, Farshad
2014-01-01
Feature rankings are often used for supervised dimension reduction, especially when the discriminating power of each feature is of interest, the dimensionality of the dataset is extremely high, or computational power is too limited to perform more complicated methods. In practice, it is recommended to start dimension reduction via simple methods such as feature rankings before applying more complex approaches. Single Variable Classifier (SVC) ranking is a feature ranking based on the predictive performance of a classifier built using only a single feature. While benefiting from the capabilities of classifiers, this ranking method is not as computationally intensive as wrappers. In this paper, we report the results of an extensive study on the bias and stability of such feature ranking methods. We study whether the classifiers influence the SVC rankings or whether the discriminative power of the features themselves has a dominant impact on the final rankings. We show that the common intuition of using the same classifier for feature ranking and final classification does not always result in the best prediction performance. We then study whether heterogeneous classifier ensemble approaches provide more unbiased rankings and whether they improve final classification performance. Furthermore, we calculate an empirical prediction performance loss, relative to the optimal choices, for using the same classifier in SVC feature ranking and final classification. PMID:25177107
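Single Variable Classifier ranking as described here scores each feature by the cross-validated performance of a classifier trained on that feature alone. A minimal sketch with scikit-learn on a synthetic dataset (the classifier and data are illustrative choices, not those evaluated in the study; "SVC" here means single-variable classifier, not scikit-learn's support vector class):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def svc_ranking(X, y, clf_factory, cv=5):
    """Rank features by cross-validated accuracy of a single-feature classifier."""
    scores = []
    for j in range(X.shape[1]):
        clf = clf_factory()
        scores.append(cross_val_score(clf, X[:, [j]], y, cv=cv).mean())
    scores = np.array(scores)
    return np.argsort(scores)[::-1], scores   # best feature first

# Synthetic data: 10 features, 3 of them informative.
X, y = make_classification(n_samples=300, n_features=10, n_informative=3,
                           n_redundant=0, random_state=0)
order, scores = svc_ranking(X, y, lambda: DecisionTreeClassifier(max_depth=3, random_state=0))
print("feature ranking (best first):", order)
```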
Ku-Band rendezvous radar performance computer simulation model
NASA Technical Reports Server (NTRS)
Magnusson, H. G.; Goff, M. F.
1984-01-01
All work performed on the Ku-band rendezvous radar performance computer simulation model program since the release of the preliminary final report is summarized. Developments on the program fall into three distinct categories: (1) modifications to the existing Ku-band radar tracking performance computer model; (2) the addition of a highly accurate, nonrealtime search and acquisition performance computer model to the total software package developed on this program; and (3) development of radar cross section (RCS) computation models for three additional satellites. All changes in the tracking model involved improvements in the automatic gain control (AGC) and the radar signal strength (RSS) computer models. Although the search and acquisition computer models were developed under the auspices of the Hughes Aircraft Company Ku-Band Integrated Radar and Communications Subsystem program office, they have been supplied to NASA as part of the Ku-band radar performance computer model package. Their purpose is to predict Ku-band acquisition performance for specific satellite targets on specific missions. The RCS models were developed for three satellites: the Long Duration Exposure Facility (LDEF) spacecraft, the Solar Maximum Mission (SMM) spacecraft, and the Space Telescopes.
ERIC Educational Resources Information Center
Blinn Coll., Brenham, TX.
Blinn College final course grade distributions are summarized for spring 1990 to 1994 in this four-part report. Section I presents tables of final grade distributions by campus and course in accounting; agriculture; anthropology; biology; business; chemistry; child development; communications; computer science; criminal justice; drama; emergency…
Exascale computing and big data
Reed, Daniel A.; Dongarra, Jack
2015-06-25
Scientific discovery and engineering innovation require unifying the traditionally separated fields of high-performance computing and big data analytics. The tools and cultures of high-performance computing and big data analytics have diverged, to the detriment of both; unification is essential to address a spectrum of major research domains. The challenges of scale tax our ability to transmit data, compute complicated functions on that data, or store a substantial part of it; new approaches are required to meet these challenges. Finally, the international nature of science demands further development of advanced computer architectures and global standards for processing data, even as international competition complicates the openness of the scientific process.
NASA Astrophysics Data System (ADS)
Iwasaki, Y.
1997-02-01
The CP-PACS computer, with a peak speed of 300 Gflops, was completed in March 1996 and has started to operate. We describe the final specification and the hardware implementation of the CP-PACS computer, and its performance for QCD codes. A plan for the upgrade of the computer scheduled for the fall of 1996 is also given.
Classical multiparty computation using quantum resources
NASA Astrophysics Data System (ADS)
Clementi, Marco; Pappa, Anna; Eckstein, Andreas; Walmsley, Ian A.; Kashefi, Elham; Barz, Stefanie
2017-12-01
In this work, we demonstrate a way to perform classical multiparty computing among parties with limited computational resources. Our method harnesses quantum resources to increase the computational power of the individual parties. We show how a set of clients restricted to linear classical processing are able to jointly compute a nonlinear multivariable function that lies beyond their individual capabilities. The clients are only allowed to perform classical XOR gates and single-qubit gates on quantum states. We also examine the type of security that can be achieved in this limited setting. Finally, we provide a proof-of-concept implementation using photonic qubits that allows four clients to compute a specific example of a multiparty function, the pairwise AND.
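The appeal of the pairwise AND as a target function is that it is nonlinear over GF(2), so clients restricted to XOR (linear) gates cannot compute it unaided. A small Python sketch makes this concrete, assuming the function is the XOR of all pairwise ANDs of the input bits (a common form for this kind of demonstration; the exact function used in the experiment may be defined differently):

```python
from itertools import combinations, product

def pairwise_and(bits):
    """XOR of AND(x_i, x_j) over all pairs i < j (assumed form of the 'pairwise AND')."""
    return sum(a & b for a, b in combinations(bits, 2)) % 2

def is_affine_over_gf2(f, n):
    """True if f(x) = c XOR (a . x) for some constant c and coefficient vector a."""
    inputs = list(product((0, 1), repeat=n))
    for c in (0, 1):
        for a in product((0, 1), repeat=n):
            if all(f(x) == c ^ (sum(ai & xi for ai, xi in zip(a, x)) % 2) for x in inputs):
                return True
    return False

n = 4  # four clients, as in the proof-of-concept implementation
print("pairwise AND of 1,0,1,1:", pairwise_and((1, 0, 1, 1)))
print("computable with XOR gates alone?", is_affine_over_gf2(pairwise_and, n))  # -> False
```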
NASA Technical Reports Server (NTRS)
Rediess, Herman A.; Hewett, M. D.
1991-01-01
The requirements are assessed for the use of remote computation to support HRV flight testing. First, remote computational requirements were developed to support functions that will eventually be performed onboard operational vehicles of this type. These are functions that either cannot be performed onboard in the time frame of initial HRV flight test programs, because airborne computer technology will not be sufficiently advanced to support the required computational loads, or that are not desirable to perform onboard in the flight test program for other reasons. Second, remote computational support either required or highly desirable for conducting the flight testing itself was addressed; the use of an Automated Flight Management System, described in conceptual detail, is proposed. Third, autonomous operations are discussed and, finally, unmanned operations.
Great Computational Intelligence in the Formal Sciences via Analogical Reasoning
2017-05-08
Final performance report AFRL-AFOSR-VA-TR-2017-0099, Selmer Bringsjord, Rensselaer Polytechnic Institute, dated 08-05-2017, covering 15 Oct 2011 to 31 Dec 2016. The computational harnessing of traditional mathematical statistics (as e.g. covered in Hogg, Craig & McKean 2005) is used to power statistical learning techniques...
NASA Technical Reports Server (NTRS)
Kowalski, E. J.
1979-01-01
A computerized method which utilizes engine performance data to estimate the installed performance of aircraft gas turbine engines is described. The installation effects accounted for include engine weight and dimensions, inlet and nozzle internal performance and drag, inlet and nacelle weight, and nacelle drag.
2014-01-01
Background We aimed to observe the preparedness level of final year medical students in approaching emergencies by computer-based simulation training and to evaluate the efficacy of the program. Methods A computer-based prototype simulation program (Lsim), designed by researchers from the medical education and computer science departments, was used to present virtual cases for medical learning. Fifty-four final year medical students from Ondokuz Mayis University School of Medicine attended an education program on June 20, 2012 and were trained with Lsim. Volunteer attendants completed a pre-test and a post-test exam at the beginning and end of the course, respectively, on the same day. Results Twenty-nine of the 54 students who attended the course agreed to take the pre-test and post-test exams; 58.6% (n = 17) were female. In 10 emergency medical cases, an average of 3.9 correct medical approaches was performed in the pre-test and an average of 9.6 correct medical approaches was performed in the post-test (t = 17.18, P = 0.006). Conclusions This study's results showed that the readiness level of students for an adequate medical approach to emergency cases was very low. Computer-based training could help students approach various emergency cases adequately. PMID:24386919
ERIC Educational Resources Information Center
Mercer County Schools, Princeton, WV.
A project was undertaken to identify machine shop occupations requiring workers to use computers, identify the computer skills needed to perform machine shop tasks, and determine which software products are currently being used in machine shop programs. A search of the Dictionary of Occupational Titles revealed that computer skills will become…
Techniques and Tools for Performance Tuning of Parallel and Distributed Scientific Applications
NASA Technical Reports Server (NTRS)
Sarukkai, Sekhar R.; VanderWijngaart, Rob F.; Castagnera, Karen (Technical Monitor)
1994-01-01
Performance degradation in scientific computing on parallel and distributed computer systems can be caused by numerous factors. In this half-day tutorial we explain the important methodological issues involved in obtaining codes that have good performance potential. We then discuss the possible obstacles to realizing that potential on contemporary hardware platforms, and give an overview of the software tools currently available for identifying performance bottlenecks. Finally, some realistic examples are used to illustrate the actual use and utility of such tools.
1981-12-01
Advanced Computer Typography, by A. V. Hershey. Naval Postgraduate School, Monterey, California. Final report NPS012-81-005, December 1981, covering Dec 1979 - Dec 1981. Approved for public release...
Automated Instructional Monitors for Complex Operational Tasks. Final Report.
ERIC Educational Resources Information Center
Feurzeig, Wallace
A computer-based instructional system is described which incorporates diagnosis of students' difficulties in acquiring complex concepts and skills. A computer automatically generated a simulated display. It then monitored and analyzed a student's work in the performance of assigned training tasks. Two major tasks were studied. The first,…
Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmalz, Mark S
2011-07-24
Statement of Problem - Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G̲ for a high-performance architecture. Key computational and data movement kernels of the application were analyzed/optimized for parallel execution using the mapping G → G̲, which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military and commercial sectors: the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphic Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - Department of Energy has many simulation codes that must compute faster, to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion, for high-performance computing systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, Z.T.
2001-11-15
The objective of this project was to conduct high-performance computing research and teaching at AAMU, and to train African-American and other minority students and scientists in the computational science field for eventual employment with DOE. During the project period, eight tasks were accomplished. Student research assistantships, work-study positions, summer internships, and scholarships proved to be among the best ways to attract top-quality minority students. With DOE support, through research, summer internship, collaboration, and scholarship programs, AAMU successfully provided research and educational opportunities to minority students in fields related to computational science.
Quantum Accelerators for High-performance Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S.; Britt, Keith A.; Mohiyaddin, Fahd A.
We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system to manage these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.
NASA Technical Reports Server (NTRS)
Charlton, Eric F.
1998-01-01
Aerodynamic analyses are performed using the Lockheed-Martin Tactical Aircraft Systems (LMTAS) Splitflow computational fluid dynamics code to investigate the computational prediction capabilities for vortex-dominated flow fields of two different tailless aircraft models at large angles of attack and sideslip. These computations are performed with the goal of providing useful stability and control data to designers of high performance aircraft. Appropriate metrics for accuracy, time, and ease of use are determined in consultation with both the LMTAS Advanced Design and the Stability and Control groups. Results are obtained and compared to wind-tunnel data for all six components of forces and moments. Moment data are combined to form a "falling leaf" stability analysis. Finally, a handful of viscous simulations were also performed to further investigate nonlinearities and possible viscous effects in the differences between the accumulated inviscid computational and experimental data.
A unified method for evaluating real-time computer controllers: A case study. [aircraft control
NASA Technical Reports Server (NTRS)
Shin, K. G.; Krishna, C. M.; Lee, Y. H.
1982-01-01
A real time control system consists of a synergistic pair, that is, a controlled process and a controller computer. Performance measures for real time controller computers are defined on the basis of the nature of this synergistic pair. A case study of a typical critical controlled process is presented in the context of new performance measures that express the performance of both controlled processes and real time controllers (taken as a unit) on the basis of a single variable: controller response time. Controller response time is a function of current system state, system failure rate, electrical and/or magnetic interference, etc., and is therefore a random variable. Control overhead is expressed as a monotonically nondecreasing function of the response time and the system suffers catastrophic failure, or dynamic failure, if the response time for a control task exceeds the corresponding system hard deadline, if any. A rigorous probabilistic approach is used to estimate the performance measures. The controlled process chosen for study is an aircraft in the final stages of descent, just prior to landing. First, the performance measures for the controller are presented. Secondly, control algorithms for solving the landing problem are discussed and finally the impact of the performance measures on the problem is analyzed.
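The probabilistic measures sketched above can be illustrated numerically: treat controller response time as a random variable, accumulate a nondecreasing control-overhead cost while it stays below the hard deadline, and count dynamic failure when the deadline is exceeded. A minimal Monte Carlo sketch with a hypothetical response-time distribution, cost function, and deadline (none of these are taken from the paper):

```python
import random

def simulate(n_trials=100_000, deadline=0.05, mean_rt=0.02, seed=1):
    """Monte Carlo estimate of dynamic-failure probability and mean control overhead.

    Response time is drawn from an exponential distribution (hypothetical choice);
    g(t) below stands for any monotonically nondecreasing control-overhead function.
    """
    rng = random.Random(seed)
    g = lambda t: t ** 2          # example nondecreasing overhead function
    failures = 0
    overhead_sum = 0.0
    for _ in range(n_trials):
        t = rng.expovariate(1.0 / mean_rt)   # controller response time (seconds)
        if t > deadline:
            failures += 1                     # dynamic (catastrophic) failure
        else:
            overhead_sum += g(t)
    p_fail = failures / n_trials
    mean_overhead = overhead_sum / max(n_trials - failures, 1)
    return p_fail, mean_overhead

print(simulate())
```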
ERIC Educational Resources Information Center
Connelly, E. M.; And Others
A new approach to deriving human performance measures and criteria for use in automatically evaluating trainee performance is described. Ultimately, this approach will allow automatic measurement of pilot performance in a flight simulator or from recorded in-flight data. An efficient method of representing performance data within a computer is…
FY 1978 Budget, FY 1979 Authorization Request and FY 1978-1982 Defense Programs,
1977-01-17
technological opportunities with defense applications -- such as long-range cruise missiles and guidance, improved sensors, miniaturization, and computer ...Various methods exist for computing the number of theater nuclear weapons needed to perform these missions with an acceptable level of confidence...foreign military forces. Mini-micro computers are especially interesting. -- Finally, since geography remains important, we must recognize that the
NASA Astrophysics Data System (ADS)
Iwasaki, Y.; CP-PACS Collaboration
1998-01-01
The CP-PACS project is a five-year plan, which formally started in April 1992 and was completed in March 1997, to develop a massively parallel computer for carrying out research in computational physics with primary emphasis on lattice QCD. The initial version of the CP-PACS computer, with a theoretical peak speed of 307 GFLOPS with 1024 processors, was completed in March 1996. The final version, with a peak speed of 614 GFLOPS with 2048 processors, was completed in September 1996 and has been in full operation since October 1996. We describe the architecture, the final specification, the hardware implementation, and the software of the CP-PACS computer. The CP-PACS has been used for hadron spectroscopy production runs since July 1996. The performance for lattice QCD applications and the LINPACK benchmark are given.
Evaluation of Cache-based Superscalar and Cacheless Vector Architectures for Scientific Computations
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Carter, Jonathan; Shalf, John; Skinner, David; Ethier, Stephane; Biswas, Rupak; Djomehri, Jahed; VanderWijngaart, Rob
2003-01-01
The growing gap between sustained and peak performance for scientific applications has become a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to bridge this gap for a significant number of computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX6 vector processor and the cache-based IBM Power3/4 superscalar architectures across a number of key scientific computing areas. First, we present the performance of a microbenchmark suite that examines a full spectrum of low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks using some simple optimizations. Finally, we evaluate the performance of several numerical codes from key scientific computing domains. Overall results demonstrate that the SX6 achieves high performance on a large fraction of our application suite and in many cases significantly outperforms the RISC-based architectures. However, certain classes of applications are not easily amenable to vectorization and would likely require extensive reengineering of both algorithm and implementation to utilize the SX6 effectively.
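Microbenchmarks of the sort referenced above often time simple memory-bound kernels whose behaviour separates vector and cache-based designs. A triad-style measurement sketched in Python/NumPy, purely illustrative; it is not the paper's microbenchmark suite and says nothing about the SX6 or Power3/4 themselves:

```python
import time
import numpy as np

def triad_bandwidth(n=10_000_000, repeats=5):
    """Time a = b + s*c and report effective memory bandwidth in GB/s."""
    b = np.random.rand(n)
    c = np.random.rand(n)
    s = 3.0
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        a = b + s * c                 # the triad kernel
        best = min(best, time.perf_counter() - t0)
    # Nominal traffic: read b, read c, write a (8-byte doubles);
    # NumPy temporaries add extra traffic in practice.
    bytes_moved = 3 * n * 8
    return bytes_moved / best / 1e9

print(f"triad bandwidth: {triad_bandwidth():.1f} GB/s")
```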
Application of Game Theory to Improve the Defense of the Smart Grid
2012-03-01
...systems. In this environment, developers assumed deterministic communications mediums rather than the "best effort" models provided in most modern... models or computational models to validate the SPSs design. Finally, the study reveals concerns about the performance of load rejection schemes
Avoiding Defect Nucleation during Equilibration in Molecular Dynamics Simulations with ReaxFF
2015-04-01
respectively. All simulations are performed using the LAMMPS computer code (Plimpton S. Fast parallel algorithms for short-range molecular dynamics. J Comput Phys. 1995;117:1–19; software available at http://lammps.sandia.gov). [Fig. 1: (a) initial and (b) final configurations of the molecular centers.]
Performance Measures in Courses Using Computer-Aided Personalized System of Instruction
ERIC Educational Resources Information Center
Springer, C. R.; Pear, J. J.
2008-01-01
Archived data from four courses taught with computer-aided personalized system of instruction (CAPSI)--an online, self-paced, instructional program--were used to explore the relationship between objectively rescored final exam grades, peer reviewing, and progress rate--i.e., the rate at which students completed unit tests. There was a strong…
Multi-threading: A new dimension to massively parallel scientific computation
NASA Astrophysics Data System (ADS)
Nielsen, Ida M. B.; Janssen, Curtis L.
2000-06-01
Multi-threading is becoming widely available for Unix-like operating systems, and the application of multi-threading opens new ways for performing parallel computations with greater efficiency. We here briefly discuss the principles of multi-threading and illustrate the application of multi-threading for a massively parallel direct four-index transformation of electron repulsion integrals. Finally, other potential applications of multi-threading in scientific computing are outlined.
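As a toy illustration of the thread-level parallelism being discussed, the sketch below distributes independent work units (for example, blocks of an integral transformation) across a thread pool. It is not the authors' implementation; note that in CPython compute-bound threads only overlap when the work releases the GIL, as NumPy's matrix multiply does.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def transform_block(block: np.ndarray, C: np.ndarray) -> np.ndarray:
    """One independent work unit: apply a transformation matrix to a data block.
    NumPy's matrix multiply releases the GIL, so threads can genuinely overlap."""
    return C.T @ block @ C

rng = np.random.default_rng(0)
C = rng.standard_normal((64, 64))                              # transformation matrix (synthetic)
blocks = [rng.standard_normal((64, 64)) for _ in range(32)]    # independent work blocks

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda b: transform_block(b, C), blocks))
print(len(results), results[0].shape)
```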
Petascale supercomputing to accelerate the design of high-temperature alloys
Shin, Dongwon; Lee, Sangkeun; Shyam, Amit; ...
2017-10-25
Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ'-Al2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviour of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. As a result, the approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.
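The last step described, replacing expensive physics-based calculations with a learned surrogate for segregation energies, follows a standard regression workflow. The sketch below uses synthetic descriptors and targets as placeholders; it is not the paper's dataset, descriptor set, or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Hypothetical descriptor table: one row per solute element, columns standing in for
# quantities such as atomic radius, electronegativity, cohesive energy, valence count.
X = rng.standard_normal((34, 4))
# Hypothetical segregation energies (eV) with a made-up dependence on the descriptors.
y = 0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * rng.standard_normal(34)

model = RandomForestRegressor(n_estimators=200, random_state=0)
print("cross-validated R^2:", cross_val_score(model, X, y, cv=5, scoring="r2").mean())

model.fit(X, y)
print("predicted segregation energy for one solute:", model.predict(X[:1])[0])
```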
Computer aided design of Langasite resonant cantilevers: analytical models and simulations
NASA Astrophysics Data System (ADS)
Tellier, C. R.; Leblois, T. G.; Durand, S.
2010-05-01
Analytical models for the piezoelectric excitation and for the wet micromachining of resonant cantilevers are proposed. Firstly, computations of the metrological performances of micro-resonators allow us to select special cuts and special alignments of the cantilevers. Secondly, the self-developed simulator TENSOSIM, based on the kinematic and tensorial model, furnishes the etching shapes of cantilevers. As a result, the number of selected cuts is reduced. Finally, the simulator COMSOL® is used to evaluate the influence of the final etching shape on metrological performances and especially on the resonance frequency. Changes in frequency are evaluated, and deviating behaviours of structures with less favourable built-ins are tested, showing that the X cut is the best cut for LGS resonant cantilevers vibrating in flexural modes (type 1 and type 2) or in torsion mode.
ERIC Educational Resources Information Center
Johnson, William B.; And Others
This annotated bibliography developed in connection with an ongoing investigation of the use of computer simulations for fault diagnosis training cites 61 published works taken predominantly from the disciplines of engineering, psychology, and education. A review of the existing literature included computer searches of the past ten years of…
A Management System for Computer Performance Evaluation.
1981-12-01
Air Force Institute of Technology, Wright-Patterson AFB, OH. December 1981. Approved for public release; distribution unlimited. From the preface: "As an installation manager of a Burroughs 3500 I encountered many problems concerning its... techniques to select, and finally, how do I organize the effort. As a manager I felt that I needed a reference or tool that would broaden my..."
NASA Technical Reports Server (NTRS)
Makivic, Miloje S.
1996-01-01
This is the final technical report for the project entitled: "High-Performance Computing and Four-Dimensional Data Assimilation: The Impact on Future and Current Problems", funded at NPAC by the DAO at NASA/GSFC. First, the motivation for the project is given in the introductory section, followed by the executive summary of major accomplishments and the list of project-related publications. Detailed analysis and description of research results is given in subsequent chapters and in the Appendix.
Experimental and Computational Investigation of Triple-rotating Blades in a Mower Deck
NASA Astrophysics Data System (ADS)
Chon, Woochong; Amano, Ryoichi S.
Experimental and computational studies were performed on a 1.27-m-wide, three-spindle lawn mower deck with a side discharge arrangement. Laser Doppler Velocimetry was used to measure the air velocity at 12 different sections under the mower deck. High-speed video camera tests provided valuable visual evidence of airflow and grass discharge patterns. Strain gages were attached at several predetermined locations on the mower blades to measure strain. In the computational fluid dynamics work, computer-based analytical studies were performed. During this phase of work, two different approaches were attempted. First, two-dimensional blade shapes at several arbitrary radial sections were selected for flow computations around the blade model. Finally, a three-dimensional full deck model was developed and compared with the experimental results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Paul T.; Heroux, Michael A.; Barrett, Richard F.
The performance of a large-scale, production-quality science and engineering application (‘app’) is often dominated by a small subset of the code. Even within that subset, computational and data access patterns are often repeated, so that an even smaller portion can represent the performance-impacting features. If application developers, parallel computing experts, and computer architects can together identify this representative subset and then develop a small mini-application (‘miniapp’) that can capture these primary performance characteristics, then this miniapp can be used both to improve the performance of the app and to provide a tool for co-design for the high-performance computing community. However, a critical question is whether a miniapp can effectively capture key performance behavior of an app. This study provides a comparison of an implicit finite element semiconductor device modeling app on unstructured meshes with an implicit finite element miniapp on unstructured meshes. The goal is to assess whether the miniapp is predictive of the performance of the app. Finally, single compute node performance is compared, as well as scaling up to 16,000 cores. Results indicate that the miniapp can be reasonably predictive of the performance characteristics of the app for a single iteration of the solver on a single compute node.
ERIC Educational Resources Information Center
Capizzo, Maria Concetta; Nuzzo, Silvana; Zarcone, Michelangelo
2006-01-01
The case study described in this paper investigates the relationship among some pre-instructional knowledge, the learning gain and the final physics performance of computing engineering students in the introductory physics course. The results of the entrance engineering test (EET) have been used as a measurement of reading comprehension, logic and…
NASA Technical Reports Server (NTRS)
Morgan, Philip E.
2004-01-01
This final report contains reports of research related to the tasks "Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers" and "Develop High-Performance Time-Domain Computational Electromagnetics Capability for RCS Prediction, Wave Propagation in Dispersive Media, and Dual-Use Applications." The discussion of Scalable High Performance Computing reports on three objectives: validate, assess the scalability of, and apply two parallel flow solvers for three-dimensional Navier-Stokes flows; develop and validate a high-order parallel solver for Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) problems; and investigate and develop a high-order Reynolds-averaged Navier-Stokes turbulence model. The discussion of High-Performance Time-Domain Computational Electromagnetics reports on five objectives: enhancement of an electromagnetics code (CHARGE) to effectively model antenna problems; application of lessons learned in the high-order/spectral solution of swirling 3D jets to the electromagnetics project; transition of a high-order fluids code, FDL3DI, to solve Maxwell's Equations using compact differencing; development and demonstration of improved radiation-absorbing boundary conditions for high-order CEM; and extension of the high-order CEM solver to address variable material properties. The report also contains a review of work done by the systems engineer.
Partnership in Computational Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huray, Paul G.
1999-02-24
This is the final report for the "Partnership in Computational Science" (PICS) award, in the amount of $500,000, for the period January 1, 1993 through December 31, 1993. A copy of the proposal with its budget is attached as Appendix A. This report first describes the significance of the DOE award in building high-performance computing infrastructure in the Southeast, then describes the work accomplished under the grant and lists the publications resulting from it.
ERIC Educational Resources Information Center
Bayne, Pauline S; Rader, Joe C.
The purpose of this project was to demonstrate that computer-based training (CBT) sessions, produced as HyperCard stacks (files), are an efficient and effective component for staff training in libraries. The purpose was successfully met in the 15-month period of development, evaluation, and implementation, and the University of Tennessee (UT)…
Computer-Aided Air-Traffic Control In The Terminal Area
NASA Technical Reports Server (NTRS)
Erzberger, Heinz
1995-01-01
A developmental computer-aided system for automated management and control of arrival traffic at a large airport includes three integrated subsystems: one called the Traffic Management Advisor, another called the Descent Advisor, and a third called the Final Approach Spacing Tool. A data base that includes current wind measurements and mathematical models of the performance of different types of aircraft contributes to the effective operation of the system.
Fast discrete cosine transform structure suitable for implementation with integer computation
NASA Astrophysics Data System (ADS)
Jeong, Yeonsik; Lee, Imgeun
2000-10-01
The discrete cosine transform (DCT) has wide applications in speech and image coding. We propose a fast DCT scheme with the property of reduced multiplication stages and fewer additions and multiplications. The proposed algorithm is structured so that most multiplications are performed at the final stage, which reduces the propagation error that could occur in the integer computation.
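For orientation, the quantity such a fast structure computes is the standard (unnormalized) DCT-II. The direct O(N²) reference below is a correctness baseline against which a reduced-multiplication, integer-friendly implementation could be checked; it is not the proposed algorithm itself.

```python
import math

def dct_ii(x):
    """Direct DCT-II of a real sequence x (reference definition, O(N^2))."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N)) for n in range(N))
            for k in range(N)]

# A fast, integer-friendly structure would factor this into butterfly stages and
# defer the remaining multiplications to the final stage; this baseline can be
# used to check such an implementation against the definition.
print([round(v, 4) for v in dct_ii([1.0, 2.0, 3.0, 4.0])])
```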
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1978-03-22
A grid-connected Integrated Community Energy System (ICES) with a coal-burning power plant located on the University of Minnesota campus is planned. The cost benefit analysis performed for this ICES, the cost accounting methods used, and a computer simulation of the operation of the power plant are described. (LCL)
ERIC Educational Resources Information Center
Dogan, Mustafa
2010-01-01
This study explores Turkish primary mathematics trainee teachers' attitudes to computer and technology. A survey was conducted with a self constructed questionnaire. Piloting, factor and reliability ([alpha] = 0.94) analyses were performed. The final version of the questionnaire has three parts with a total of 48 questions including a Likert type…
Challenges and opportunities of cloud computing for atmospheric sciences
NASA Astrophysics Data System (ADS)
Pérez Montes, Diego A.; Añel, Juan A.; Pena, Tomás F.; Wallom, David C. H.
2016-04-01
Cloud computing is an emerging technological solution widely used in many fields. Initially developed as a flexible way of managing peak demand, it has begun to make its way into scientific research. One of the greatest advantages of cloud computing for scientific research is independence from access to a large local cyberinfrastructure to fund or perform a research project. Cloud computing can avoid maintenance expenses for large supercomputers and has the potential to 'democratize' access to high-performance computing, giving funding bodies flexibility in allocating budgets for the computational costs associated with a project. Two of the most challenging problems in atmospheric sciences are computational cost and uncertainty in meteorological forecasting and climate projections. The two problems are closely related: uncertainty can usually be reduced when computational resources are available to better reproduce a phenomenon or to perform a larger number of experiments. Here we present results of the application of cloud computing resources to climate modeling, using cloud computing infrastructures of three major vendors and two climate models. We show how the cloud infrastructure compares in performance to traditional supercomputers and how it provides the capability to complete experiments in shorter periods of time. The associated monetary cost is also analyzed. Finally, we discuss the future potential of this technology for meteorological and climatological applications, both from the point of view of operational use and of research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goda, Joetta Marie; Miller, Thomas; Grogan, Brandon
2016-10-26
This document contains figures that will be included in an ORNL final report that details computational efforts to model an irradiation experiment performed on the Godiva IV critical assembly. This experiment was a collaboration between LANL and ORNL.
High Performance Computing and Storage Requirements for Nuclear Physics: Target 2017
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerber, Richard; Wasserman, Harvey
2014-04-30
In April 2014, NERSC, ASCR, and the DOE Office of Nuclear Physics (NP) held a review to characterize high performance computing (HPC) and storage requirements for NP research through 2017. This review is the 12th in a series of reviews held by NERSC and Office of Science program offices that began in 2009. It is the second for NP, and the final review in the second round that covered the six Office of Science program offices. This report is the result of that review.
Steady and unsteady three-dimensional transonic flow computations by integral equation method
NASA Technical Reports Server (NTRS)
Hu, Hong
1994-01-01
This is the final technical report of the research performed under the grant: NAG1-1170, from the National Aeronautics and Space Administration. The report consists of three parts. The first part presents the work on unsteady flows around a zero-thickness wing. The second part presents the work on steady flows around non-zero thickness wings. The third part presents the massively parallel processing implementation and performance analysis of integral equation computations. At the end of the report, publications resulting from this grant are listed and attached.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grout, Ray W. S.
Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6] which require double precision accuracy but are performance limited by the cost of data motion.
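A toy sketch of the mixed-precision sweep idea follows (a simplified Picard-style correction iteration stands in for real SDC sweeps; the node placement, quadrature rule, and the right-hand side rhs are placeholders, not S3D's):

import numpy as np

def rhs(t, y):
    return -y                                   # stand-in right-hand side

def mixed_precision_sweeps(y0, t0, dt, nodes=5, sweeps=6, low_prec_sweeps=3):
    # Early correction sweeps evaluate the right-hand side in float32; later
    # sweeps switch back to float64, so the converged solution keeps double
    # precision while much of the arithmetic and data motion happens in
    # reduced precision.
    tau = t0 + dt * np.linspace(0.0, 1.0, nodes)
    y = np.full(nodes, y0, dtype=np.float64)
    for k in range(sweeps):
        prec = np.float32 if k < low_prec_sweeps else np.float64
        f = rhs(tau.astype(prec), y.astype(prec)).astype(np.float64)
        h = tau[1] - tau[0]
        for m in range(1, nodes):               # cumulative trapezoid quadrature
            y[m] = y[m - 1] + 0.5 * h * (f[m - 1] + f[m])
    return y[-1]                                # solution at t0 + dt

print(mixed_precision_sweeps(1.0, 0.0, 0.1))    # approaches exp(-0.1)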
Shock compression response of cold-rolled Ni/Al multilayer composites
Specht, Paul E.; Weihs, Timothy P.; Thadhani, Naresh N.
2017-01-06
Uniaxial strain, plate-on-plate impact experiments were performed on cold-rolled Ni/Al multilayer composites and the resulting Hugoniot was determined through time-resolved measurements combined with impedance matching. The experimental Hugoniot agreed with that previously predicted by two dimensional (2D) meso-scale calculations. Additional 2D meso-scale simulations were performed using the same computational method as the prior study to reproduce the experimentally measured free surface velocities and stress profiles. Finally, these simulations accurately replicated the experimental profiles, providing additional validation for the previous computational work.
NASA Astrophysics Data System (ADS)
Caramia, Maurizio; Montagna, Mario; Furano, Gianluca; Winton, Alistair
2010-08-01
This paper will describe the activities performed by Thales Alenia Space Italia supported by the European Space Agency in the definition of a CAN bus interface to be used on Exomars. The final goal of this activity is the development of an IP core, to be used in a slave node, able to manage both the CAN bus Data Link and Application Layer totally in hardware. The activity has been focused on the needs of the EXOMARS mission where devices with different computational performances are all managed by the onboard computer through the CAN bus.
Harvey Mudd 2014-2015 Computer Science Conduit Clinic Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aspesi, G; Bai, J; Deese, R
2015-05-12
Conduit, a new open-source library developed at Lawrence Livermore National Laboratory, provides a C++ application programming interface (API) to describe and access scientific data. Conduit's primary use is for in-memory data exchange in high performance computing (HPC) applications. Our team tested and improved Conduit to make it more appealing to potential adopters in the HPC community. We extended Conduit's capabilities by prototyping four libraries: one for parallel communication using MPI, one for I/O functionality, one for aggregating performance data, and one for data visualization.
NASA Astrophysics Data System (ADS)
Somogyi, Gábor
2013-04-01
We finish the definition of a subtraction scheme for computing NNLO corrections to QCD jet cross sections. In particular, we perform the integration of the soft-type contributions to the doubly unresolved counterterms via the method of Mellin-Barnes representations. With these final ingredients in place, the definition of the scheme is complete and the computation of fully differential rates for electron-positron annihilation into two and three jets at NNLO accuracy becomes feasible.
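For orientation, the workhorse identity behind such computations is the standard Mellin-Barnes representation (quoted here in its textbook form; the paper's application to the soft-type counterterms is considerably more involved):

\[
\frac{1}{(A+B)^{\lambda}}
  = \frac{1}{\Gamma(\lambda)}\,\frac{1}{2\pi i}
    \int_{c-i\infty}^{c+i\infty}\mathrm{d}z\;
    \Gamma(-z)\,\Gamma(\lambda+z)\,\frac{B^{z}}{A^{\lambda+z}},
  \qquad -\operatorname{Re}\lambda < c < 0 ,
\]

after which the contour integral is evaluated by closing the contour and summing residues, producing the Laurent expansion in the dimensional-regularization parameter.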
Performance of a supercharged direct-injection stratified-charge rotary combustion engine
NASA Technical Reports Server (NTRS)
Bartrand, Timothy A.; Willis, Edward A.
1990-01-01
A zero-dimensional thermodynamic performance computer model for direct-injection stratified-charge rotary combustion engines was modified and run for a single rotor supercharged engine. Operating conditions for the computer runs were a single boost pressure and a matrix of speeds, loads and engine materials. A representative engine map is presented showing the predicted range of efficient operation. After discussion of the engine map, a number of engine features are analyzed individually. These features are: heat transfer and the influence insulating materials have on engine performance and exhaust energy; intake manifold pressure oscillations and interactions with the combustion chamber; and performance losses and seal friction. Finally, code running times and convergence data are presented.
Final Engineering Report for Computer, Weapon Aiming CP-1444/A.
1982-06-01
Computes the required lead angle based upon the stored ballistic constants for the ADEN 30 MM gun and transmits the azimuth and elevation position of the... [remainder of record: unrecoverable OCR residue from performance-level figures and the report's conclusions section]
Radio Interference Modeling and Prediction for Satellite Operation Applications
2015-08-25
Department of Electrical Engineering and Computer Science, The Catholic University of America, Washington, DC 20064. 25 Aug 2015. Final report.
Analog optical computing primitives in silicon photonics
Jiang, Yunshan; DeVore, Peter T. S.; Jalali, Bahram
2016-03-15
Optical computing accelerators help alleviate bandwidth and power consumption bottlenecks in electronics. In this paper, we show an approach to implementing logarithmic-type analog co-processors in silicon photonics and use it to perform the exponentiation operation and the recovery of a signal in the presence of multiplicative distortion. Finally, the function is realized by exploiting nonlinear-absorption-enhanced Raman amplification saturation in a silicon waveguide.
Geometry definition and grid generation for a complete fighter aircraft
NASA Technical Reports Server (NTRS)
Edwards, T. A.
1986-01-01
Recent advances in computing power and numerical solution procedures have enabled computational fluid dynamicists to attempt increasingly difficult problems. In particular, efforts are focusing on computations of complex three-dimensional flow fields about realistic aerodynamic bodies. To perform such computations, a very accurate and detailed description of the surface geometry must be provided, and a three-dimensional grid must be generated in the space around the body. The geometry must be supplied in a format compatible with the grid generation requirements, and must be verified to be free of inconsistencies. This paper presents a procedure for performing the geometry definition of a fighter aircraft that makes use of a commercial computer-aided design/computer-aided manufacturing system. Furthermore, visual representations of the geometry are generated using a computer graphics system for verification of the body definition. Finally, the three-dimensional grids for fighter-like aircraft are generated by means of an efficient new parabolic grid generation method. This method exhibits good control of grid quality.
Geometry definition and grid generation for a complete fighter aircraft
NASA Technical Reports Server (NTRS)
Edwards, Thomas A.
1986-01-01
Recent advances in computing power and numerical solution procedures have enabled computational fluid dynamicists to attempt increasingly difficult problems. In particular, efforts are focusing on computations of complex three-dimensional flow fields about realistic aerodynamic bodies. To perform such computations, a very accurate and detailed description of the surface geometry must be provided, and a three-dimensional grid must be generated in the space around the body. The geometry must be supplied in a format compatible with the grid generation requirements, and must be verified to be free of inconsistencies. A procedure for performing the geometry definition of a fighter aircraft that makes use of a commercial computer-aided design/computer-aided manufacturing system is presented. Furthermore, visual representations of the geometry are generated using a computer graphics system for verification of the body definition. Finally, the three-dimensional grids for fighter-like aircraft are generated by means of an efficient new parabolic grid generation method. This method exhibits good control of grid quality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenough, Jeffrey A.; de Supinski, Bronis R.; Yates, Robert K.
2005-04-25
We describe the performance of the block-structured Adaptive Mesh Refinement (AMR) code Raptor on the 32k node IBM BlueGene/L computer. This machine represents a significant step forward towards petascale computing. As such, it presents Raptor with many challenges for utilizing the hardware efficiently. In terms of performance, Raptor shows excellent weak and strong scaling when running in single level mode (no adaptivity). Hardware performance monitors show Raptor achieves an aggregate performance of 3.0 Tflops in the main integration kernel on the 32k system. Results from preliminary AMR runs on a prototype astrophysical problem demonstrate the efficiency of the current software when running at large scale. The BG/L system is enabling a physics problem to be considered that represents a factor of 64 increase in overall size compared to the largest ones of this type computed to date. Finally, we provide a description of the development work currently underway to address our inefficiencies.
Chen, Qingkui; Zhao, Deyu; Wang, Jingjuan
2017-01-01
This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on wireless sensor networks (WSNs). Then, using the CUDA (Compute Unified Device Architecture) programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best block and thread configuration under the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms increased significantly, by 34.1%, 33.96% and 24.07% on average for Fermi, Kepler and Maxwell with the TLPOM, and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services. PMID:28777325
Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan
2017-08-04
This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on wireless sensor networks (WSNs). Then, using the CUDA (Compute Unified Device Architecture) programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best block and thread configuration under the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms increased significantly, by 34.1%, 33.96% and 24.07% on average for Fermi, Kepler and Maxwell with the TLPOM, and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.
Synthetic Analog and Digital Circuits for Cellular Computation and Memory
Purcell, Oliver; Lu, Timothy K.
2014-01-01
Biological computation is a major area of focus in synthetic biology because it has the potential to enable a wide range of applications. Synthetic biologists have applied engineering concepts to biological systems in order to construct progressively more complex gene circuits capable of processing information in living cells. Here, we review the current state of computational genetic circuits and describe artificial gene circuits that perform digital and analog computation. We then discuss recent progress in designing gene circuits that exhibit memory, and how memory and computation have been integrated to yield more complex systems that can both process and record information. Finally, we suggest new directions for engineering biological circuits capable of computation. PMID:24794536
FORESEE: Fully Outsourced secuRe gEnome Study basEd on homomorphic Encryption
2015-01-01
Background The increasing availability of genome data motivates massive research studies in personalized treatment and precision medicine. Public cloud services provide a flexible way to mitigate the storage and computation burden in conducting genome-wide association studies (GWAS). However, data privacy has been a widespread concern when sharing sensitive information in a cloud environment. Methods We presented a novel framework (FORESEE: Fully Outsourced secuRe gEnome Study basEd on homomorphic Encryption) to fully outsource GWAS (i.e., chi-square statistic computation) using homomorphic encryption. The proposed framework enables secure divisions over encrypted data. We introduced two division protocols (i.e., secure errorless division and secure approximation division) with a trade-off between complexity and accuracy in computing chi-square statistics. Results The proposed framework was evaluated for the task of chi-square statistic computation with two case-control datasets from the 2015 iDASH genome privacy protection challenge. Experimental results show that the performance of FORESEE can be significantly improved through algorithmic optimization and parallel computation. Remarkably, the secure approximation division provides a significant performance gain without missing any significant SNPs in the chi-square association test using the aforementioned datasets. Conclusions Unlike many existing HME based studies, in which final results need to be computed by the data owner due to the lack of a secure division operation, the proposed FORESEE framework supports complete outsourcing to the cloud and outputs the final encrypted chi-square statistics. PMID:26733391
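For orientation, the plaintext statistic that FORESEE outsources is the standard chi-square on a 2x2 case-control contingency table; a minimal sketch is below (the encrypted version replaces this arithmetic, in particular the final division, with homomorphic protocols):

def chi_square_2x2(a, b, c, d):
    # a, b: case counts with / without the variant allele
    # c, d: control counts with / without the variant allele
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    # This division is the step handled by the secure errorless /
    # approximation division protocols when the counts are encrypted.
    return numerator / denominator

print(chi_square_2x2(30, 70, 10, 90))   # larger values indicate stronger association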
FORESEE: Fully Outsourced secuRe gEnome Study basEd on homomorphic Encryption.
Zhang, Yuchen; Dai, Wenrui; Jiang, Xiaoqian; Xiong, Hongkai; Wang, Shuang
2015-01-01
The increasing availability of genome data motivates massive research studies in personalized treatment and precision medicine. Public cloud services provide a flexible way to mitigate the storage and computation burden in conducting genome-wide association studies (GWAS). However, data privacy has been a widespread concern when sharing sensitive information in a cloud environment. We presented a novel framework (FORESEE: Fully Outsourced secuRe gEnome Study basEd on homomorphic Encryption) to fully outsource GWAS (i.e., chi-square statistic computation) using homomorphic encryption. The proposed framework enables secure divisions over encrypted data. We introduced two division protocols (i.e., secure errorless division and secure approximation division) with a trade-off between complexity and accuracy in computing chi-square statistics. The proposed framework was evaluated for the task of chi-square statistic computation with two case-control datasets from the 2015 iDASH genome privacy protection challenge. Experimental results show that the performance of FORESEE can be significantly improved through algorithmic optimization and parallel computation. Remarkably, the secure approximation division provides a significant performance gain without missing any significant SNPs in the chi-square association test using the aforementioned datasets. Unlike many existing HME based studies, in which final results need to be computed by the data owner due to the lack of a secure division operation, the proposed FORESEE framework supports complete outsourcing to the cloud and outputs the final encrypted chi-square statistics.
ERIC Educational Resources Information Center
Quirk, Constance A.
This final report describes the activities and outcomes of a federally funded project designed to produce and field-test two computer-based interactive CD-ROMs: "PEGS! for Preschool" and "PEGS! for Secondary School". These programs, in a game format, provide beginning general and special educators with independent practice in…
ERIC Educational Resources Information Center
Scholtz, R. G.; And Others
This final report of a feasibility study describes the research performed in assessing the requirements for a chemical signature file and search scheme for organic compound identification and information retrieval. The research performed to determine the feasibility of identifying an unknown compound involved screening the compound against a file of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grebennikov, A.N.; Zhitnik, A.K.; Zvenigorodskaya, O.A.
1995-12-31
In conformity with the protocol of the Workshop under Contract "Assessment of RBMK reactor safety using modern Western Codes", VNIIEF performed a neutronics computation series to compare western and VNIIEF codes and assess whether VNIIEF codes are suitable for RBMK type reactor safety assessment computation. The work was carried out in close collaboration with M.I. Rozhdestvensky and L.M. Podlazov, NIKIET employees. The effort involved: (1) cell computations with the WIMS, EKRAN codes (improved modification of the LOMA code) and the S-90 code (VNIIEF Monte Carlo), including cell, polycell, and burnup computation; (2) 3D computation of static states with the KORAT-3D and NEU codes and comparison with results of computation with the NESTLE code (USA), performed in the geometry and using the neutron constants presented by the American party; (3) 3D computation of neutron kinetics with the KORAT-3D and NEU codes. These computations were performed in two formulations, both developed in collaboration with NIKIET. The formulation of the first problem agrees as closely as possible with one of the NESTLE problems and imitates gas bubble travel through a core. The second problem is a model of the RBMK as a whole with imitation of control and protection system (CPS) control movement in a core.
CFD comparison with centrifugal compressor measurements on a wide operating range
NASA Astrophysics Data System (ADS)
Le Sausse, P.; Fabrie, P.; Arnou, D.; Clunet, F.
2013-04-01
Centrifugal compressors are widely used in industrial applications thanks to their high efficiency. They are able to provide a wide operating range before reaching the flow barrier or surge limits. Performance and range are described by compressor maps obtained experimentally. After a description of the performance test rig, this article compares measured centrifugal compressor performance with computational fluid dynamics results. These computations are performed at steady conditions with R134a refrigerant as the working fluid. The Navier-Stokes equations, coupled with the k-ɛ turbulence model, are solved by the commercial software ANSYS-CFX by means of the finite volume method. Input conditions are varied in order to calculate several speed lines. Theoretical isentropic efficiency and the theoretical surge line are finally compared to experimental data.
Computational Analyses of Offset Stream Nozzles for Noise Reduction
NASA Technical Reports Server (NTRS)
Dippold, Vance, III; Foster, Lancert; Wiese, Michael
2007-01-01
The Wind computational fluid dynamics code was used to perform a series of simulations on two offset stream nozzle concepts for jet noise reduction. The first concept used an S-duct to direct the secondary stream to the lower side of the nozzle. The second concept used vanes to turn the secondary flow downward. The analyses were completed in preparation of tests conducted in the NASA Glenn Research Center Aeroacoustic Propulsion Laboratory. The offset stream nozzles demonstrated good performance and reduced the amount of turbulence on the lower side of the jet plume. The computer analyses proved instrumental in guiding the development of the final test configurations and giving insight into the flow mechanics of offset stream nozzles. The computational predictions were compared with flowfield results from the jet rig testing and showed excellent agreement.
Robust efficient video fingerprinting
NASA Astrophysics Data System (ADS)
Puri, Manika; Lubin, Jeffrey
2009-02-01
We have developed a video fingerprinting system with robustness and efficiency as the primary and secondary design criteria. In extensive testing, the system has shown robustness to cropping, letter-boxing, sub-titling, blur, drastic compression, frame rate changes, size changes and color changes, as well as to the geometric distortions often associated with camcorder capture in cinema settings. Efficiency is afforded by a novel two-stage detection process in which a fast matching process first computes a number of likely candidates, which are then passed to a second slower process that computes the overall best match with minimal false alarm probability. One key component of the algorithm is a maximally stable volume computation - a three-dimensional generalization of maximally stable extremal regions - that provides a content-centric coordinate system for subsequent hash function computation, independent of any affine transformation or extensive cropping. Other key features include an efficient bin-based polling strategy for initial candidate selection, and a final SIFT feature-based computation for final verification. We describe the algorithm and its performance, and then discuss additional modifications that can provide further improvement to efficiency and accuracy.
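A minimal sketch of the two-stage detection pattern described above, coarse candidate polling followed by slower exact verification (the hashes, the bin-polling index, and the descriptors are all placeholders for the system's actual features):

import numpy as np

def coarse_candidates(query_hashes, index, top_k=10):
    # Stage 1: bin-based polling. Each query hash votes for every reference
    # video stored in its bin; the most-voted references become candidates.
    votes = {}
    for h in query_hashes:
        for ref_id in index.get(h, ()):
            votes[ref_id] = votes.get(ref_id, 0) + 1
    return sorted(votes, key=votes.get, reverse=True)[:top_k]

def verify(query_desc, ref_descs, candidates):
    # Stage 2: slower verification. Compare full descriptors and keep the
    # closest match; a threshold on best_dist would control false alarms.
    best_id, best_dist = None, np.inf
    for ref_id in candidates:
        dist = np.linalg.norm(query_desc - ref_descs[ref_id])
        if dist < best_dist:
            best_id, best_dist = ref_id, dist
    return best_id, best_dist

Here index maps quantized fingerprint hashes to lists of reference identifiers, so stage 1 touches only a small fraction of the database and stage 2 pays the full comparison cost only for the shortlisted candidates.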
SCEAPI: A unified Restful Web API for High-Performance Computing
NASA Astrophysics Data System (ADS)
Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi
2017-10-01
The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need high-quality programming interface for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment that integrates computing resources from 16 HPC centers across China. Then we present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources with HTTP or HTTPs protocols. We discuss SCEAPI from several aspects including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI including authentication, file transfer and job management for creating, submitting and monitoring, and how to use SCEAPI in an easy-to-use way. Finally, we discuss how to exploit more HPC resources quickly for the ATLAS experiment by implementing the custom ARC compute element based on SCEAPI, and our work shows that SCEAPI is an easy-to-use and effective solution to extend opportunistic HPC resources.
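To give a flavor of how such a RESTful interface is typically driven from a client (the base URL, endpoint paths, field names, and token handling below are hypothetical illustrations, not the documented SCEAPI resources):

import requests

BASE = "https://sceapi.example.org/api"        # hypothetical base URL
TOKEN = "..."                                  # obtained from the authentication step

def submit_job(script_path, cores):
    # Hypothetical job submission: authenticate with a bearer token and POST
    # a job description over HTTPS; the real resource names may differ.
    headers = {"Authorization": "Bearer " + TOKEN}
    job = {"script": open(script_path).read(), "cores": cores}
    r = requests.post(BASE + "/jobs", json=job, headers=headers, timeout=30)
    r.raise_for_status()
    return r.json()["id"]                      # hypothetical response field

def job_status(job_id):
    # Hypothetical monitoring call for a previously submitted job.
    headers = {"Authorization": "Bearer " + TOKEN}
    r = requests.get(BASE + "/jobs/" + str(job_id), headers=headers, timeout=30)
    r.raise_for_status()
    return r.json().get("status")              # hypothetical response field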
Computer modeling of heat pipe performance
NASA Technical Reports Server (NTRS)
Peterson, G. P.
1983-01-01
A parametric study of the defining equations which govern the steady state operational characteristics of the Grumman monogroove dual passage heat pipe is presented. These defining equations are combined to develop a mathematical model which describes and predicts the operational and performance capabilities of a specific heat pipe given the necessary physical characteristics and working fluid. Included is a brief review of the current literature, a discussion of the governing equations, and a description of both the mathematical and computer model. Final results of preliminary test runs of the model are presented and compared with experimental tests on actual prototypes.
Research in Computational Astrobiology
NASA Technical Reports Server (NTRS)
Chaban, Galina; Jaffe, Richard; Liang, Shoudan; New, Michael H.; Pohorille, Andrew; Wilson, Michael A.
2002-01-01
We present results from several projects in the new field of computational astrobiology, which is devoted to advancing our understanding of the origin, evolution and distribution of life in the Universe using theoretical and computational tools. We have developed a procedure for calculating long-range effects in molecular dynamics using a plane wave expansion of the electrostatic potential. This method is expected to be highly efficient for simulating biological systems on massively parallel supercomputers. We have performed genomics analysis on a family of actin binding proteins. We have performed quantum mechanical calculations on carbon nanotubes and nucleic acids; these simulations will allow us to investigate possible sources of organic material on the early Earth. Finally, we have developed a model of protobiological chemistry using neural networks.
Operating manual for coaxial injection combustion model. [for the space shuttle main engine
NASA Technical Reports Server (NTRS)
Sutton, R. D.; Schuman, M. D.; Chadwick, W. D.
1974-01-01
An operating manual for the coaxial injection combustion model (CICM) is presented as the final report for an eleven month effort designed to improve, verify, and document the comprehensive computer program for analyzing the performance of thrust chamber operation with gas/liquid coaxial jet injection. The effort culminated in delivery of an operational FORTRAN IV computer program and associated documentation pertaining to the combustion conditions in the space shuttle main engine. The computer program is structured for compatibility with the standardized Joint Army-Navy-NASA-Air Force (JANNAF) performance evaluation procedure. Use of the CICM in conjunction with the JANNAF procedure allows the analysis of engine systems using coaxial gas/liquid injection.
Analytical determination of propeller performance degradation due to ice accretion
NASA Technical Reports Server (NTRS)
Miller, T. L.
1986-01-01
A computer code has been developed which is capable of computing propeller performance for clean, glaze-iced, or rime-iced propeller configurations, thereby providing a mechanism for determining the degree of performance degradation which results from a given icing encounter. The inviscid, incompressible flow field at each specified propeller radial location is first computed using the Theodorsen transformation method of conformal mapping. A droplet trajectory computation then calculates droplet impingement points and airfoil collection efficiency for each radial location, at which point several user-selectable empirical correlations are available for determining the aerodynamic penalties which arise due to the ice accretion. Propeller performance is finally computed using strip analysis for either the clean or iced propeller. In the iced mode, the differential thrust and torque coefficient equations are modified by the drag and lift coefficient increments due to ice to obtain the appropriate iced values. Comparison with available experimental propeller icing data shows good agreement in several cases. The code's capability to properly predict iced thrust coefficient, power coefficient, and propeller efficiency is shown to be dependent on the choice of empirical correlation employed as well as proper specification of radial icing extent.
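A schematic of the strip-analysis step, in which sectional lift and drag, with optional icing increments, are integrated into thrust and torque (the geometry handling, the omission of induced-flow effects, and the increments d_cl and d_cd are illustrative simplifications, not the code's actual correlations):

import numpy as np

def strip_analysis(radii, chord, twist, cl_fn, cd_fn, V, omega,
                   rho=1.225, blades=2, d_cl=0.0, d_cd=0.0):
    # Blade-element (strip) integration of thrust and torque. d_cl and d_cd are
    # sectional lift/drag increments due to ice; both are zero for the clean case.
    thrust, torque = 0.0, 0.0
    for i in range(len(radii) - 1):
        r = 0.5 * (radii[i] + radii[i + 1])            # strip mid-point radius
        dr = radii[i + 1] - radii[i]
        c = np.interp(r, radii, chord)
        phi = np.arctan2(V, omega * r)                 # local inflow angle
        alpha = np.interp(r, radii, twist) - phi       # local angle of attack
        W2 = V ** 2 + (omega * r) ** 2                 # resultant velocity squared
        cl = cl_fn(alpha) + d_cl                       # icing increments enter here
        cd = cd_fn(alpha) + d_cd
        q = 0.5 * rho * W2 * c * blades * dr
        thrust += q * (cl * np.cos(phi) - cd * np.sin(phi))
        torque += q * (cl * np.sin(phi) + cd * np.cos(phi)) * r
    return thrust, torque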
Measurement of fault latency in a digital avionic mini processor, part 2
NASA Technical Reports Server (NTRS)
Mcgough, J.; Swern, F.
1983-01-01
The results of fault injection experiments utilizing a gate-level emulation of the central processor unit of the Bendix BDX-930 digital computer are described. Several earlier programs were reprogrammed, expanding the instruction set to capitalize on the full power of the BDX-930 computer. As a final demonstration of fault coverage, an extensive, 3-axis, high performance flight control computation was added. The stages in the development of a CPU self-test program, emphasizing the relationship between fault coverage, speed, and quantity of instructions, were demonstrated.
Connecting to HPC VPN | High-Performance Computing | NREL
Your username and password will match your NREL network account login/password. From OS X or Linux, open a terminal. Open a Remote Desktop connection using server name WINHPC02 (this is the login node).
NASA Technical Reports Server (NTRS)
Goodwin, Sabine A.; Raj, P.
1999-01-01
Progress to date towards the development and validation of a fast, accurate and cost-effective aeroelastic method for advanced parallel computing platforms such as the IBM SP2 and the SGI Origin 2000 is presented in this paper. The ENSAERO code, developed at the NASA-Ames Research Center has been selected for this effort. The code allows for the computation of aeroelastic responses by simultaneously integrating the Euler or Navier-Stokes equations and the modal structural equations of motion. To assess the computational performance and accuracy of the ENSAERO code, this paper reports the results of the Navier-Stokes simulations of the transonic flow over a flexible aeroelastic wing body configuration. In addition, a forced harmonic oscillation analysis in the frequency domain and an analysis in the time domain are done on a wing undergoing a rigid pitch and plunge motion. Finally, to demonstrate the ENSAERO flutter-analysis capability, aeroelastic Euler and Navier-Stokes computations on an L-1011 wind tunnel model including pylon, nacelle and empennage are underway. All computational solutions are compared with experimental data to assess the level of accuracy of ENSAERO. As the computations described above are performed, a meticulous log of computational performance in terms of wall clock time, execution speed, memory and disk storage is kept. Code scalability is also demonstrated by studying the impact of varying the number of processors on computational performance on the IBM SP2 and the Origin 2000 systems.
Computational models of neuromodulation.
Fellous, J M; Linster, C
1998-05-15
Computational modeling of neural substrates provides an excellent theoretical framework for the understanding of the computational roles of neuromodulation. In this review, we illustrate, with a large number of modeling studies, the specific computations performed by neuromodulation in the context of various neural models of invertebrate and vertebrate preparations. We base our characterization of neuromodulations on their computational and functional roles rather than on anatomical or chemical criteria. We review the main framework in which neuromodulation has been studied theoretically (central pattern generation and oscillations, sensory processing, memory and information integration). Finally, we present a detailed mathematical overview of how neuromodulation has been implemented at the single cell and network levels in modeling studies. Overall, neuromodulation is found to increase and control computational complexity.
Synthetic analog and digital circuits for cellular computation and memory.
Purcell, Oliver; Lu, Timothy K
2014-10-01
Biological computation is a major area of focus in synthetic biology because it has the potential to enable a wide range of applications. Synthetic biologists have applied engineering concepts to biological systems in order to construct progressively more complex gene circuits capable of processing information in living cells. Here, we review the current state of computational genetic circuits and describe artificial gene circuits that perform digital and analog computation. We then discuss recent progress in designing gene networks that exhibit memory, and how memory and computation have been integrated to yield more complex systems that can both process and record information. Finally, we suggest new directions for engineering biological circuits capable of computation. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qian, Xiaoqing; Deng, Z. T.
2009-11-10
This is the final report for the Department of Energy (DOE) project DE-FG02-06ER25746, entitled "Continuing High Performance Computing Research and Education at AAMU". This three-year project started on August 15, 2006, and ended on August 14, 2009. The objective of this project was to enhance high performance computing research and education capabilities at Alabama A&M University (AAMU), and to train African-American and other minority students and scientists in the computational science field for eventual employment with DOE. AAMU has successfully completed all the proposed research and educational tasks. Through the support of DOE, AAMU was able to provide opportunities to minority students through summer internships and the DOE computational science scholarship program. In the past three years, AAMU (1) supported three graduate research assistants in image processing for the hypersonic shockwave control experiment and in computational science related areas; (2) recruited and provided full financial support for six AAMU undergraduate summer research interns to participate in the Research Alliance in Math and Science (RAMS) program at Oak Ridge National Lab (ORNL); (3) awarded 30 highly competitive DOE High Performance Computing Scholarships ($1500 each) to qualified top AAMU undergraduate students in science and engineering majors; (4) improved the high performance computing laboratory at AAMU with the addition of three high performance Linux workstations; and (5) conducted image analysis for the electromagnetic shockwave control experiment and computation of shockwave interactions to verify the design and operation of the AAMU supersonic wind tunnel. The high performance computing research and education activities at AAMU created a great impact on minority students. As noted by the Accreditation Board for Engineering and Technology (ABET) in 2009, "The work on high performance computing that is funded by the Department of Energy provides scholarships to undergraduate students as computational science scholars. This is a wonderful opportunity to recruit under-represented students." Three ASEE papers were published in the 2007, 2008 and 2009 proceedings of the ASEE Annual Conferences, respectively, and presentations of these papers were made at those conferences. It is very critical to continue these research and education activities.
Research on Self-Directed Learning to Meet Job Performance Requirements. Final Report.
ERIC Educational Resources Information Center
Munro, Allen; Towne, Douglas M.
Over a two-year period, research was conducted primarily in two areas of cognitive strategies for on-the-job training (OJT). The first area was the development and testing of a computer-based training system to improve selectivity in text processing in order to improve performance during OJT. The second area was the exploration of text-type…
ERIC Educational Resources Information Center
Pennsylvania Blue Shield, Camp Hill.
A project developed a model curriculum to be delivered by computer-based instruction to teach the required literacy skills for entry workers in the health insurance industry. Literacy task analyses were performed for the targeted jobs and then validated with focus groups. The job tasks and related basic skills were divided into modules. The job…
Tablet computers in assessing performance in a high stakes exam: opinion matters.
Currie, G P; Sinha, S; Thomson, F; Cleland, J; Denison, A R
2017-06-01
Background Tablet computers have emerged as a tool to capture, process and store data in examinations, yet evidence relating to their acceptability and usefulness in assessment is limited. Methods We performed an observational study to explore opinions and attitudes relating to tablet computer use in recording performance in a final year objective structured clinical examination at a single UK medical school. Examiners completed a short questionnaire encompassing background, forced-choice and open questions. Forced-choice questions were analysed using descriptive statistics and open questions by framework analysis. Results Ninety-two examiners (97% response rate) completed the questionnaire, of whom 85% had previously used tablet computers. Ninety per cent felt checklist mark allocation was 'very/quite easy', while approximately half considered recording 'free-type' comments to be 'easy/very easy'. Greater overall efficiency of marking and resource savings were considered the main advantages of tablet computers, while concerns were raised relating to technological failure and the ability to record free-type comments. Discussion In a context where examiners were familiar with tablet computers, they were preferred to paper checklists, although concerns were raised. This study adds to the limited literature underpinning the use of electronic devices as acceptable tools in objective structured clinical examinations.
Computational Cognitive Neuroscience Modeling of Sequential Skill Learning
2016-09-21
Final report AFRL-AFOSR-VA-TR-2016-0320, David Schnyer, University of Texas at Austin, Austin, TX, 09/21/2016. DISTRIBUTION A: Distribution approved for public release.
NASA Astrophysics Data System (ADS)
Burnett, W.
2016-12-01
The Department of Defense's (DoD) High Performance Computing Modernization Program (HPCMP) provides high performance computing to address the most significant challenges in computational resources, software application support and nationwide research and engineering networks. Today, the HPCMP has a critical role in ensuring the National Earth System Prediction Capability (N-ESPC) achieves initial operational status in 2019. A 2015 study commissioned by the HPCMP found that N-ESPC computational requirements will exceed interconnect bandwidth capacity due to the additional load from data assimilation and from passing connecting data between ensemble codes. Memory bandwidth and I/O bandwidth will continue to be significant bottlenecks for the scalability of the Navy's Hybrid Coordinate Ocean Model (HYCOM) - by far the major driver of computing resource requirements in the N-ESPC. The study also found that few of the N-ESPC model developers have detailed plans to ensure their respective codes scale through 2024. Three HPCMP initiatives are designed to directly address and support these issues: Productivity Enhancement, Technology Transfer and Training (PETTT); the HPCMP Applications Software Initiative (HASI); and Frontier Projects. PETTT supports code conversion by providing assistance, expertise and training in scalable and high-end computing architectures. HASI addresses the continuing need for modern application software that executes effectively and efficiently on next-generation high-performance computers. Frontier Projects enable research and development that could not be achieved using typical HPCMP resources by providing multi-disciplinary teams access to exceptional amounts of high performance computing resources. Finally, the Navy's DoD Supercomputing Resource Center (DSRC) currently operates a 6 Petabyte system, of which Naval Oceanography receives 15% of operational computational system use, or approximately 1 Petabyte of the processing capability. The DSRC will provide the DoD with future computing assets to initially operate the N-ESPC in 2019. This talk will further describe how DoD's HPCMP will ensure N-ESPC becomes operational, efficiently and effectively, using next-generation high performance computing.
DOT National Transportation Integrated Search
2009-10-01
In this study, the concept of hybrid FRP-concrete structural systems was applied to both bridge superstructure and deck systems. Results from both the experimental and computational analysis for the hybrid bridge superstructure and deck ...
Implementation of an ADI method on parallel computers
NASA Technical Reports Server (NTRS)
Fatoohi, Raad A.; Grosch, Chester E.
1987-01-01
The implementation of an ADI method for solving the diffusion equation on three parallel/vector computers is discussed. The computers were chosen so as to encompass a variety of architectures. They are: the MPP, an SIMD machine with 16K bit serial processors; FLEX/32, an MIMD machine with 20 processors; and CRAY/2, an MIMD machine with four vector processors. The Gaussian elimination algorithm is used to solve a set of tridiagonal systems on the FLEX/32 and CRAY/2 while the cyclic elimination algorithm is used to solve these systems on the MPP. The implementation of the method is discussed in relation to these architectures and measures of the performance on each machine are given. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally, conclusions are presented.
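To make the structure of the method concrete, here is a minimal sketch of one Peaceman-Rachford ADI step for the 2-D diffusion equation, with the tridiagonal solves done by the serial Thomas algorithm (an illustrative sketch under simple assumptions, a square grid with zero Dirichlet boundaries; not the code run on the MPP, Flex/32, or Cray/2):

import numpy as np

def thomas(a, b, c, d):
    # Solve a tridiagonal system: a = sub-diagonal, b = diagonal, c = super-diagonal.
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, r):
    # One ADI step for u_t = D*(u_xx + u_yy) with r = D*dt/(2*h*h);
    # boundary values of u are held at zero.
    m = u.shape[0] - 2                       # interior points per direction
    a = np.full(m, -r); b = np.full(m, 1 + 2 * r); c = np.full(m, -r)
    a[0] = 0.0; c[-1] = 0.0
    half = u.copy()
    for j in range(1, m + 1):                # sweep 1: implicit in x, explicit in y
        rhs = u[1:-1, j] + r * (u[1:-1, j + 1] - 2 * u[1:-1, j] + u[1:-1, j - 1])
        half[1:-1, j] = thomas(a, b, c, rhs)
    unew = half.copy()
    for i in range(1, m + 1):                # sweep 2: implicit in y, explicit in x
        rhs = half[i, 1:-1] + r * (half[i + 1, 1:-1] - 2 * half[i, 1:-1] + half[i - 1, 1:-1])
        unew[i, 1:-1] = thomas(a, b, c, rhs)
    return unew

Each sweep consists of independent tridiagonal solves along rows or columns, which is why Gaussian elimination suits the MIMD machines while cyclic elimination suits the bit-serial SIMD processors of the MPP.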
Implementation of an ADI method on parallel computers
NASA Technical Reports Server (NTRS)
Fatoohi, Raad A.; Grosch, Chester E.
1987-01-01
In this paper the implementation of an ADI method for solving the diffusion equation on three parallel/vector computers is discussed. The computers were chosen so as to encompass a variety of architectures. They are the MPP, an SIMD machine with 16-Kbit serial processors; Flex/32, an MIMD machine with 20 processors; and Cray/2, an MIMD machine with four vector processors. The Gaussian elimination algorithm is used to solve a set of tridiagonal systems on the Flex/32 and Cray/2 while the cyclic elimination algorithm is used to solve these systems on the MPP. The implementation of the method is discussed in relation to these architectures and measures of the performance on each machine are given. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally conclusions are presented.
Computation Directorate 2008 Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, D L
2009-03-25
Whether a computer is simulating the aging and performance of a nuclear weapon, the folding of a protein, or the probability of rainfall over a particular mountain range, the necessary calculations can be enormous. Our computers help researchers answer these and other complex problems, and each new generation of system hardware and software widens the realm of possibilities. Building on Livermore's historical excellence and leadership in high-performance computing, Computation added more than 331 trillion floating-point operations per second (teraFLOPS) of power to LLNL's computer room floors in 2008. In addition, Livermore's next big supercomputer, Sequoia, advanced ever closer to its 2011-2012 delivery date, as architecture plans and the procurement contract were finalized. Hyperion, an advanced technology cluster test bed that teams Livermore with 10 industry leaders, made a big splash when it was announced during Michael Dell's keynote speech at the 2008 Supercomputing Conference. The Wall Street Journal touted Hyperion as a 'bright spot amid turmoil' in the computer industry. Computation continues to measure and improve the costs of operating LLNL's high-performance computing systems by moving hardware support in-house, by measuring causes of outages to apply resources asymmetrically, and by automating most of the account and access authorization and management processes. These improvements enable more dollars to go toward fielding the best supercomputers for science, while operating them at less cost and greater responsiveness to the customers.
The effect of feature selection methods on computer-aided detection of masses in mammograms
NASA Astrophysics Data System (ADS)
Hupse, Rianne; Karssemeijer, Nico
2010-05-01
In computer-aided diagnosis (CAD) research, feature selection methods are often used to improve generalization performance of classifiers and shorten computation times. In an application that detects malignant masses in mammograms, we investigated the effect of using a selection criterion that is similar to the final performance measure we are optimizing, namely the mean sensitivity of the system in a predefined range of the free-response receiver operating characteristics (FROC). To obtain the generalization performance of the selected feature subsets, a cross validation procedure was performed on a dataset containing 351 abnormal and 7879 normal regions, each region providing a set of 71 mass features. The same number of noise features, not containing any information, were added to investigate the ability of the feature selection algorithms to distinguish between useful and non-useful features. It was found that significantly higher performances were obtained using feature sets selected by the general test statistic Wilks' lambda than using feature sets selected by the more specific FROC measure. Feature selection leads to better performance when compared to a system in which all features were used.
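As an illustration of the kind of general selection criterion referred to above, a univariate Wilks' lambda can rank individual mass features by class separation; the sketch below is illustrative only, since the study used the test statistic inside a cross-validation procedure over the full feature subsets:

import numpy as np

def wilks_lambda(feature, labels):
    # Univariate Wilks' lambda: within-class sum of squares divided by the
    # total sum of squares; smaller values indicate better class separation.
    classes = np.unique(labels)
    ss_total = np.sum((feature - feature.mean()) ** 2)
    ss_within = sum(np.sum((feature[labels == c] - feature[labels == c].mean()) ** 2)
                    for c in classes)
    return ss_within / ss_total

def rank_features(X, y):
    # Rank the columns of X (regions x features) by ascending Wilks' lambda,
    # so the most discriminative candidate features come first.
    scores = np.array([wilks_lambda(X[:, k], y) for k in range(X.shape[1])])
    return np.argsort(scores)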
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Watson, Willie R. (Technical Monitor)
2005-01-01
The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software for solving large-scale acoustic problems arising from the unified frameworks of the finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple-processor capabilities offered by most modern high performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective processor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetrical and unsymmetrical) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.
Computer-Based Tools for Evaluating Graphical User Interfaces
NASA Technical Reports Server (NTRS)
Moore, Loretta A.
1997-01-01
The user interface is the component of a software system that connects two very complex systems: humans and computers. Each of these two systems imposes certain requirements on the final product. The user is the judge of the usability and utility of the system; the computer software and hardware are the tools with which the interface is constructed. Mistakes are sometimes made in designing and developing user interfaces because the designers and developers have limited knowledge about human performance (e.g., problem solving, decision making, planning, and reasoning). Even those trained in user interface design make mistakes because they are unable to address all of the known requirements and constraints on design. Evaluation of the user interface is therefore a critical phase of the user interface development process. Evaluation should not be considered the final phase of design; it should be part of an iterative design cycle in which the output of evaluation is fed back into design. The goal of this research was to develop a set of computer-based tools for objectively evaluating graphical user interfaces. The research was organized into three phases. The first phase resulted in the development of an embedded evaluation tool which evaluates the usability of a graphical user interface based on a user's performance. An expert system to assist in the design and evaluation of user interfaces based upon rules and guidelines was developed during the second phase. During the final phase of the research, an automatic layout tool to be used in the initial design of graphical interfaces was developed. The research was coordinated with NASA Marshall Space Flight Center's Mission Operations Laboratory's efforts in developing onboard payload display specifications for the Space Station.
NASA Astrophysics Data System (ADS)
Bolzoni, Paolo; Somogyi, Gábor; Trócsányi, Zoltán
2011-01-01
We perform the integration of all iterated singly-unresolved subtraction terms, as defined in ref. [1], over the two-particle factorized phase space. We also sum over the unresolved parton flavours. The final result can be written as a convolution (in colour space) of the Born cross section and an insertion operator. We spell out the insertion operator in terms of 24 basic integrals that are defined explicitly. We compute the coefficients of the Laurent expansion of these integrals in two different ways, with the method of Mellin-Barnes representations and sector decomposition. Finally, we present the Laurent-expansion of the full insertion operator for the specific examples of electron-positron annihilation into two and three jets.
Final Report for DOE Award ER25756
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kesselman, Carl
2014-11-17
The SciDAC-funded Center for Enabling Distributed Petascale Science (CEDPS) was established to address technical challenges that arise due to the frequent geographic distribution of data producers (in particular, supercomputers and scientific instruments) and data consumers (people and computers) within the DOE laboratory system. Its goal is to produce technical innovations that meet DOE end-user needs for (a) rapid and dependable placement of large quantities of data within a distributed high-performance environment, and (b) the convenient construction of scalable science services that provide for the reliable and high-performance processing of computation and data analysis requests from many remote clients. The Center is also addressing (c) the important problem of troubleshooting these and other related ultra-high-performance distributed activities from the perspective of both performance and functionality.
A high performance parallel algorithm for 1-D FFT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, R.C.; Gustavson, F.G.; Zubair, M.
1994-12-31
In this paper the authors propose a parallel high performance FFT algorithm based on a multi-dimensional formulation. They use this to solve a commonly encountered FFT based kernel on a distributed memory parallel machine, the IBM scalable parallel system, SP1. The kernel requires a forward FFT computation of an input sequence, multiplication of the transformed data by a coefficient array, and finally an inverse FFT computation of the resultant data. They show that the multi-dimensional formulation helps in reducing the communication costs and also improves the single node performance by effectively utilizing the memory system of the node. They implemented this kernel on the IBM SP1 and observed a performance of 1.25 GFLOPS on a 64-node machine.
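The kernel itself is a frequency-domain multiply; in serial form it is a one-liner (the paper's contribution, the multi-dimensional decomposition that reduces communication, roughly by recasting the 1-D transform as local FFTs plus transposes, is not captured by this sketch):

import numpy as np

def fft_kernel(x, coeff):
    # Forward FFT of the input sequence, pointwise multiplication by the
    # coefficient array, then an inverse FFT of the resultant data.
    return np.fft.ifft(coeff * np.fft.fft(x))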
OpenCluster: A Flexible Distributed Computing Framework for Astronomical Data Processing
NASA Astrophysics Data System (ADS)
Wei, Shoulin; Wang, Feng; Deng, Hui; Liu, Cuiyin; Dai, Wei; Liang, Bo; Mei, Ying; Shi, Congming; Liu, Yingbo; Wu, Jingping
2017-02-01
The volume of data generated by modern astronomical telescopes is extremely large and rapidly growing. However, current high-performance data processing architectures/frameworks are not well suited for astronomers because of their limitations and programming difficulties. In this paper, we therefore present OpenCluster, an open-source distributed computing framework to support rapidly developing high-performance processing pipelines of astronomical big data. We first detail the OpenCluster design principles and implementations and present the APIs facilitated by the framework. We then demonstrate a case in which OpenCluster is used to resolve complex data processing problems for developing a pipeline for the Mingantu Ultrawide Spectral Radioheliograph. Finally, we present our OpenCluster performance evaluation. Overall, OpenCluster provides not only high fault tolerance and simple programming interfaces, but also a flexible means of scaling up the number of interacting entities. OpenCluster thereby provides an easily integrated distributed computing framework for quickly developing a high-performance data processing system of astronomical telescopes and for significantly reducing software development expenses.
Manned versus unmanned rendezvous and capture
NASA Technical Reports Server (NTRS)
Brody, Adam R.
1991-01-01
Rendezvous and capture (docking) operations may be performed either automatically or under manual control. In cases where humans are far from the mission site, or high-bandwidth communications lines are not in place, automation is the only option. Such might be the case with unmanned missions to the Moon or Mars that involve orbital docking or cargo transfer. In crewed situations where sensors, computation capabilities, and other necessary instrumentation are unavailable, manual control is the only alternative. Power, mass, cost, or other restrictions may limit the availability of the machinery required for an automated rendezvous and capture. The only occasions for which there is a choice about whether to use automated or manual control are those where the vehicle(s) have both the crew and the instrumentation necessary to perform the mission either way. The following discussion will focus on the final approach or capture (docking) maneuver. The maneuvers required for long-range rendezvous operations are calculated by computers. It is almost irrelevant whether an astronaut watching a countdown timer pushes the button that fires the thruster, or whether the computer keeps track of the time and fires while the astronaut monitors. The actual manual workload associated with a mission that may take hours or days to perform is small. The workload per unit time increases tremendously during the final approach (docking) phase, and this is where the issue of manual versus automatic control is more important.
A service based adaptive U-learning system using UX.
Jeong, Hwa-Young; Yi, Gangman
2014-01-01
In recent years, traditional development techniques for e-learning systems have been changing to become more convenient and efficient. One new technology in the development of application systems includes both cloud and ubiquitous computing. Cloud computing can support learning system processes by using services, while ubiquitous computing can provide system operation and management via a high-performance technical process and network. In the cloud computing environment, a learning service application can provide a business module or process to the user via the internet. This research focuses on providing the learning material and processes of courses by learning unit, using services in a ubiquitous computing environment. We also investigate functions that supply users with tailored materials according to their learning style. That is, we analyzed users' data and characteristics in accordance with their user experience, and subsequently adapted the learning process to fit their learning performance and preferences. Finally, we demonstrate that the proposed system delivers better learning effects to learners than existing techniques.
A Service Based Adaptive U-Learning System Using UX
Jeong, Hwa-Young
2014-01-01
In recent years, traditional development techniques for e-learning systems have been changing to become more convenient and efficient. One new technology in the development of application systems includes both cloud and ubiquitous computing. Cloud computing can support learning system processes by using services, while ubiquitous computing can provide system operation and management via a high-performance technical process and network. In the cloud computing environment, a learning service application can provide a business module or process to the user via the internet. This research focuses on providing the learning material and processes of courses by learning unit, using services in a ubiquitous computing environment. We also investigate functions that supply users with tailored materials according to their learning style. That is, we analyzed users' data and characteristics in accordance with their user experience, and subsequently adapted the learning process to fit their learning performance and preferences. Finally, we demonstrate that the proposed system delivers better learning effects to learners than existing techniques. PMID:25147832
Side impact test and analyses of a DOT-111 tank car : final report.
DOT National Transportation Integrated Search
2015-10-01
Transportation Technology Center, Inc. conducted a side impact test on a DOT-111 tank car to evaluate the performance of the : tank car under dynamic impact conditions and to provide data for the verification and refinement of a computational model. ...
A computational framework to characterize and compare the geometry of coronary networks.
Bulant, C A; Blanco, P J; Lima, T P; Assunção, A N; Liberato, G; Parga, J R; Ávila, L F R; Pereira, A C; Feijóo, R A; Lemos, P A
2017-03-01
This work presents a computational framework to perform a systematic and comprehensive assessment of the morphometry of coronary arteries from in vivo medical images. The methodology embraces image segmentation, arterial vessel representation, characterization and comparison, data storage, and finally analysis. Validation is performed using a sample of 48 patients. Data mining of morphometric information of several coronary arteries is presented. Results agree to medical reports in terms of basic geometric and anatomical variables. Concerning geometric descriptors, inter-artery and intra-artery correlations are studied. Data reported here can be useful for the construction and setup of blood flow models of the coronary circulation. Finally, as an application example, similarity criterion to assess vasculature likelihood based on geometric features is presented and used to test geometric similarity among sibling patients. Results indicate that likelihood, measured through geometric descriptors, is stronger between siblings compared with non-relative patients. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Computer Simulation of Protein-Protein and Protein-Peptide Interactions
1983-12-08
[Fragment recovered from the report text and cover letter:] A full molecular dynamics simulation is performed, with resulting dipolar relaxation; however, this is prohibitive when a large number of... Cover letter addressed to Dr. Mike Marron, Program Manager, Molecular Biology, Office of Naval Research, 800 N. Quincy Street, Arlington, VA 22217. Distribution is unlimited. Final Report, 01/0/92-03/31/93: Computer Simulation of Protein-Protein and Protein-Peptide Interactions.
NASA Astrophysics Data System (ADS)
Hossa, Robert; Górski, Maksymilian
2010-09-01
In the paper we analyze the influence of RF channels mismatch and mutual coupling effect on the performance of the multistatic passive radar with Uniform Circular Array (UCA) configuration. The problem was tested intensively in numerous different scenarios with a reference virtual multistatic passive radar. Finally, exemplary results of the computer software simulations are provided and discussed.
Berthing mechanism final test report and program assessment
NASA Technical Reports Server (NTRS)
1988-01-01
The purpose is to document the testing performed on both hardware and software developed under the Space Station Berthing Mechanisms Program. Testing of the mechanism occurred at three locations. Several system components, e.g., actuators and computer systems, were functionally tested before assembly. A series of post assembly tests were performed. The post assembly tests, as well as the dynamic testing of the mechanism, are presented.
Bistatic passive radar simulator with spatial filtering subsystem
NASA Astrophysics Data System (ADS)
Hossa, Robert; Szlachetko, Boguslaw; Lewandowski, Andrzej; Górski, Maksymilian
2009-06-01
The purpose of this paper is to briefly introduce the structure and features of the developed virtual passive FM radar implemented in the Matlab numerical computing system and to present several alternative modes of its operation. The idea of the proposed solution is based on analytic representation of transmitted direct signals and reflected echo signals. As a spatial filtering subsystem, a beamforming network of ULA and UCA dipole configuration dedicated to the bistatic radar concept is considered, and computationally efficient procedures are presented in detail. Finally, exemplary results of the computer simulations of the elaborated virtual simulator are provided and discussed.
A Survey of Techniques for Approximate Computing
Mittal, Sparsh
2016-03-18
Approximate computing trades off computation quality against the effort expended, and as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but even imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies, etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide researchers with insights into the working of AC techniques and inspire more efforts in this area to make AC the mainstream computing approach in future systems.
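As a concrete, hedged illustration of one technique family covered by such surveys, the sketch below applies loop perforation to a reduction: only a sampled subset of iterations is executed and the result is rescaled, trading output quality for effort. The sampling rate and workload are illustrative assumptions, not values taken from the survey.

import numpy as np

def perforated_mean(data, keep_every=4):
    # Loop perforation: execute only every k-th iteration of the reduction
    # and accept the sampled mean as an approximation of the exact mean.
    sample = data[::keep_every]
    return float(np.mean(sample))

data = np.random.rand(1_000_000)
exact = float(np.mean(data))
approx = perforated_mean(data, keep_every=4)
print(f"exact={exact:.6f} approx={approx:.6f} error={abs(exact - approx):.2e}")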
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hameka, H.F.; Jensen, J.O.
1993-05-01
This report presents the computed optimized geometry and vibrational IR and Raman frequencies of the V-agent VX. The computations are performed with the Gaussian 90 Program Package using 6-31G* basis sets. We assign the vibrational frequencies and correct each frequency by multiplying it by a previously derived 6-31G* correction factor. The result is a computer-generated prediction of the IR and Raman spectra of VX. This study was intended as a blind test of the utility of IR spectral prediction. Therefore, we intentionally did not look at experimental data on the IR and Raman spectra of VX. Keywords: IR spectra, VX, Raman spectra, computer predictions.
Computer-Aided Drug Design in Epigenetics
NASA Astrophysics Data System (ADS)
Lu, Wenchao; Zhang, Rukang; Jiang, Hao; Zhang, Huimin; Luo, Cheng
2018-03-01
Epigenetic dysfunction has been widely implicated in several diseases, especially cancers, which highlights the therapeutic potential of chemical interventions in this field. With the rapid development of computational methodologies and high-performance computational resources, computer-aided drug design has emerged as a promising strategy to speed up epigenetic drug discovery. Herein, we give a brief overview of the major computational methods reported in the literature, including druggability prediction, virtual screening, homology modeling, scaffold hopping, pharmacophore modeling, molecular dynamics simulations, quantum chemistry calculation and 3D quantitative structure-activity relationship, that have been successfully applied in the design and discovery of epi-drugs and epi-probes. Finally, we discuss the major limitations of current virtual drug design strategies in epigenetics drug discovery and future directions in this field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geveci, Berk; Maynard, Robert
The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. The XVis project brought together collaborators from predominant DOE projects for visualization on accelerators and combined their respective features into a new visualization toolkit called VTK-m.
Computer-Aided Drug Design in Epigenetics
Lu, Wenchao; Zhang, Rukang; Jiang, Hao; Zhang, Huimin; Luo, Cheng
2018-01-01
Epigenetic dysfunction has been widely implicated in several diseases, especially cancers, which highlights the therapeutic potential of chemical interventions in this field. With the rapid development of computational methodologies and high-performance computational resources, computer-aided drug design has emerged as a promising strategy to speed up epigenetic drug discovery. Herein, we give a brief overview of the major computational methods reported in the literature, including druggability prediction, virtual screening, homology modeling, scaffold hopping, pharmacophore modeling, molecular dynamics simulations, quantum chemistry calculation, and 3D quantitative structure-activity relationship, that have been successfully applied in the design and discovery of epi-drugs and epi-probes. Finally, we discuss the major limitations of current virtual drug design strategies in epigenetics drug discovery and future directions in this field. PMID:29594101
The impact of supercomputers on experimentation: A view from a national laboratory
NASA Technical Reports Server (NTRS)
Peterson, V. L.; Arnold, J. O.
1985-01-01
The relative roles of large scale scientific computers and physical experiments in several science and engineering disciplines are discussed. Increasing dependence on computers is shown to be motivated both by the rapid growth in computer speed and memory, which permits accurate numerical simulation of complex physical phenomena, and by the rapid reduction in the cost of performing a calculation, which makes computation an increasingly attractive complement to experimentation. Computer speed and memory requirements are presented for selected areas of such disciplines as fluid dynamics, aerodynamics, aerothermodynamics, chemistry, atmospheric sciences, astronomy, and astrophysics, together with some examples of the complementary nature of computation and experiment. Finally, the impact of the emerging role of computers in the technical disciplines is discussed in terms of both the requirements for experimentation and the attainment of previously inaccessible information on physical processes.
F-16 Task Analysis Criterion-Referenced Objective and Objectives Hierarchy Report. Volume 4
1981-03-01
[Fragments of criterion-referenced task entries recovered from the report:] Initiation cues: engine flameout; systems presenting cues: aircraft fuel, engine. STANDARD - Authority: TACR 60-2; performance precision: TD in first 1/3 of... Initiation cues: on short final; systems presenting cues: N/A. STANDARD - Authority: 60-2; performance precision: +/- .5 AOA, TD zone 150-1000... Performance precision: +/- .05 AOA, TD zone 150-1000; computational accuracy: N/A. TASK NO.: 1.9.4, BEHAVIOR: Perform short field landing.
NASA Astrophysics Data System (ADS)
Mulla, Ameer K.; Patil, Deepak U.; Chakraborty, Debraj
2018-02-01
N identical agents with bounded inputs aim to reach a common target state (consensus) in the minimum possible time. Algorithms for computing this time-optimal consensus point, the control law to be used by each agent and the time taken for the consensus to occur, are proposed. Two types of multi-agent systems are considered, namely (1) coupled single-integrator agents on a plane and (2) double-integrator agents on a line. At the initial time instant, each agent is assumed to have access to the state information of all the other agents. An algorithm, using convexity of attainable sets and Helly's theorem, is proposed, to compute the final consensus target state and the minimum time to achieve this consensus. Further, parts of the computation are parallelised amongst the agents such that each agent has to perform computations of O(N^2) run time complexity. Finally, local feedback time-optimal control laws are synthesised to drive each agent to the target point in minimum time. During this part of the operation, the controller for each agent uses measurements of only its own states and does not need to communicate with any neighbouring agents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sunderam, Vaidy S.
2007-01-09
The Harness project has developed novel software frameworks for the execution of high-end simulations in a fault-tolerant manner on distributed resources. The H2O subsystem comprises the kernel of the Harness framework, and controls the key functions of resource management across multiple administrative domains, especially issues of access and allocation. It is based on a “pluggable” architecture that enables the aggregated use of distributed heterogeneous resources for high performance computing. The major contributions of the Harness II project result in significantly enhancing the overall computational productivity of high-end scientific applications by enabling robust, failure-resilient computations on cooperatively pooled resource collections.
Shao, Meiyue; Aktulga, H. Metin; Yang, Chao; ...
2017-09-14
In this paper, we describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses of the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of special structure of the nuclear configuration interaction problem which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation, and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. Finally, we also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos based algorithm for problems of moderate sizes on a Cray XC30 system.
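A minimal sketch, under the assumption that SciPy's generic solvers stand in for the production code: it contrasts a Lanczos-type solver (eigsh) with a preconditioned block iterative solver (LOBPCG) on a sparse symmetric test matrix. The 1-D Laplacian and the Jacobi preconditioner are illustrative assumptions, not the nuclear configuration interaction Hamiltonian or the preconditioner developed in the paper.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, eigsh, lobpcg

# Sparse symmetric test matrix standing in for the CI Hamiltonian (assumption).
n = 2000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

# Lanczos-type solver for the four lowest eigenvalues.
vals_lanczos, _ = eigsh(A, k=4, which="SA")

# Block iterative solver with a random starting block and a simple diagonal
# (Jacobi) preconditioner; good guesses and preconditioning drive convergence.
rng = np.random.default_rng(0)
X = rng.standard_normal((n, 4))
diag = A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: v / diag)
vals_block, _ = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=500)

print(np.sort(vals_lanczos), np.sort(vals_block))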
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, Meiyue; Aktulga, H. Metin; Yang, Chao
In this paper, we describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses of the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of special structure of the nuclear configuration interaction problem which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation, and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. Finally, we also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos based algorithm for problems of moderate sizes on a Cray XC30 system.
Rapid Calculation of Max-Min Fair Rates for Multi-Commodity Flows in Fat-Tree Networks
Mollah, Md Atiqul; Yuan, Xin; Pakin, Scott; ...
2017-08-29
Max-min fairness is often used in the performance modeling of interconnection networks. Existing methods to compute max-min fair rates for multi-commodity flows have high complexity and are computationally infeasible for large networks. In this paper, we show that by considering topological features, this problem can be solved efficiently for the fat-tree topology that is widely used in data centers and high performance compute clusters. Several efficient new algorithms are developed for this problem, including a parallel algorithm that can take advantage of multi-core and shared-memory architectures. Using these algorithms, we demonstrate that it is possible to find the max-min fair rate allocation for multi-commodity flows in fat-tree networks that support tens of thousands of nodes. We evaluate the run-time performance of the proposed algorithms and show improvement in orders of magnitude over the previously best known method. Finally, we further demonstrate a new application of max-min fair rate allocation that is only computationally feasible using our new algorithms.
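For context, the sketch below implements the generic progressive-filling computation of max-min fair rates on an arbitrary set of links; it is a small illustration of the quantity being computed, not the topology-aware fat-tree algorithms of the paper, and the flows, links, and capacities are made-up assumptions.

def max_min_fair_rates(flows, capacity):
    # flows: dict flow_id -> list of link ids the flow traverses
    # capacity: dict link_id -> link capacity
    # Returns dict flow_id -> max-min fair rate (progressive filling).
    rate = {f: 0.0 for f in flows}
    frozen = set()
    cap_left = dict(capacity)
    while len(frozen) < len(flows):
        active_on = {l: [f for f in flows if f not in frozen and l in flows[f]]
                     for l in cap_left}
        shares = [cap_left[l] / len(fs) for l, fs in active_on.items() if fs]
        if not shares:
            break
        inc = min(shares)                 # limited by the tightest link
        for f in flows:
            if f not in frozen:
                rate[f] += inc
                for l in flows[f]:
                    cap_left[l] -= inc
        for l, fs in active_on.items():   # freeze flows on saturated links
            if fs and cap_left[l] <= 1e-12:
                frozen.update(fs)
    return rate

# Two flows share link "a"; a third flow uses only link "b".
print(max_min_fair_rates({"f1": ["a"], "f2": ["a", "b"], "f3": ["b"]},
                         {"a": 10.0, "b": 10.0}))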
Rapid Calculation of Max-Min Fair Rates for Multi-Commodity Flows in Fat-Tree Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mollah, Md Atiqul; Yuan, Xin; Pakin, Scott
Max-min fairness is often used in the performance modeling of interconnection networks. Existing methods to compute max-min fair rates for multi-commodity flows have high complexity and are computationally infeasible for large networks. In this paper, we show that by considering topological features, this problem can be solved efficiently for the fat-tree topology that is widely used in data centers and high performance compute clusters. Several efficient new algorithms are developed for this problem, including a parallel algorithm that can take advantage of multi-core and shared-memory architectures. Using these algorithms, we demonstrate that it is possible to find the max-min fair rate allocation for multi-commodity flows in fat-tree networks that support tens of thousands of nodes. We evaluate the run-time performance of the proposed algorithms and show improvement in orders of magnitude over the previously best known method. Finally, we further demonstrate a new application of max-min fair rate allocation that is only computationally feasible using our new algorithms.
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Youngblood, John N.; Saha, Aindam
1987-01-01
Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost parallel systems to increase system performance. Research conducted in the development of a specialized computer architecture for the algorithmic execution of an avionics guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on the critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carroll, C.C.; Youngblood, J.N.; Saha, A.
1987-12-01
Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost parallel systems to increase system performance. Research conducted in the development of a specialized computer architecture for the algorithmic execution of an avionics guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on the critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.
NASA Technical Reports Server (NTRS)
Schmid, Beat; Bergstrom, Robert W.; Redemann, Jens
2002-01-01
This report is the final report for "Analysis of Atmospheric Aerosol Data Sets and Application of Radiative Transfer Models to Compute Aerosol Effects". It is a bibliographic compilation of 29 peer-reviewed publications (published, in press or submitted) produced under this Cooperative Agreement and 30 first-authored conference presentations. The tasks outlined in the various proposals are listed below with a brief comment as to the research performed. Copies of title/abstract pages of peer-reviewed publications are attached.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, Amjad Majid; Albert, Don; Andersson, Par
SLURM is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.
Now and next-generation sequencing techniques: future of sequence analysis using cloud computing.
Thakur, Radhe Shyam; Bandopadhyay, Rajib; Chaudhary, Bratati; Chatterjee, Sourav
2012-01-01
Advances in the field of sequencing techniques have resulted in the greatly accelerated production of huge sequence datasets. This presents immediate challenges in database maintenance at datacenters. It provides additional computational challenges in data mining and sequence analysis. Together these represent a significant overburden on traditional stand-alone computer resources; to reach effective conclusions quickly and efficiently, virtualization of resources and computation on a pay-as-you-go basis (together termed "cloud computing") has recently emerged. The collective resources of the datacenter, including both hardware and software, can be made available publicly, being then termed a public cloud, with the resources provided in a virtual mode to clients who pay according to the resources they employ. Examples of public companies providing these resources include Amazon, Google, and Joyent. The computational workload is shifted to the provider, which also implements required hardware and software upgrades over time. A virtual environment is created in the cloud corresponding to the computational and data storage needs of the user via the internet. The task is then performed, the results transmitted to the user, and the environment finally deleted after all tasks are completed. In this discussion, we focus on the basics of cloud computing, and go on to analyze the prerequisites and overall working of clouds. Finally, the applications of cloud computing in biological systems, particularly in comparative genomics, genome informatics, and SNP detection, are discussed with reference to traditional workflows.
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Chen, C. L.
1989-01-01
Two efficient mapping algorithms for scheduling the robot inverse dynamics computation, consisting of m computational modules with precedence relationship to be executed on a multiprocessor system consisting of p identical homogeneous processors with processor and communication costs, to achieve minimum computation time are presented. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. The minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and the scheduling problems; both have been known to be NP-complete. Thus, to speed up the search for a solution, two heuristic algorithms were proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules, and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by the heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and the validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
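As a hedged aside, the sketch below shows a greedy list-scheduling heuristic of the general kind described in this abstract: tasks are taken in precedence order and assigned to the processor that minimizes their finish time, charging a communication cost when a predecessor's result must cross processors. It is a generic illustration, not the paper's priority-list/bipartite-matching or simulated-annealing algorithms, and the task graph and costs are invented for the example.

def list_schedule(tasks, deps, cost, comm, p):
    # tasks: module ids in topological order; deps: task -> predecessor list
    # cost: task -> computation time; comm: interprocessor communication cost
    # p: number of identical processors. Returns task -> (processor, finish).
    free = [0.0] * p
    done = {}
    for t in tasks:
        best = None
        for proc in range(p):
            ready = free[proc]
            for d in deps.get(t, []):
                dp, df = done[d]
                ready = max(ready, df + (comm if dp != proc else 0.0))
            finish = ready + cost[t]
            if best is None or finish < best[1]:
                best = (proc, finish)
        proc, finish = best
        free[proc] = finish
        done[t] = (proc, finish)
    return done

# Toy four-module graph with unit costs on two processors.
tasks = ["a", "b", "c", "d"]
deps = {"c": ["a", "b"], "d": ["c"]}
cost = {t: 1.0 for t in tasks}
print(list_schedule(tasks, deps, cost, comm=0.5, p=2))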
DOT National Transportation Integrated Search
2016-01-01
Human attention is a finite resource. When interrupted while performing a task, this : resource is split between two interactive tasks. People have to decide whether the benefits : from the interruptive interaction will be enough to offset the loss o...
ERIC Educational Resources Information Center
MacCabe, Bruce
The Literacy Learning Center Project, a project of the Meriden Public Library (Connecticut), targeted the educationally underserved and functionally illiterate, and involved recruitment, retention, space renovation, coalition building, public awareness, training, basic literacy, collection development, tutoring, computer assisted services, and…
MULTIPLE PROJECTIONS SYSTEM (MPS) - USER'S MANUAL VERSION 1.0
The report is a user's manual for version 1.0 of the Multiple Projections Systems (MPS), a computer system that can perform "what if" scenario analysis and report the final results (i.e., Rate of Further Progress - ROP - inventories) to EPA (i.e., the Aerometric Information Retri...
Sensory Discrimination, Generalization and Language Training of Autistic Children. Final Report.
ERIC Educational Resources Information Center
Blanton, Richard L.; And Others
The report presents summaries of 11 studies performed on 25-45 autistic students in a residential center to investigate processes of discrimination and response acquisition using automated reinforcement technology and exact timing procedures. The computer operated display and recording system for language and discrimination training is described…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tournier, J.; El-Genk, M.S.; Huang, L.
1999-01-01
The Institute of Space and Nuclear Power Studies at the University of New Mexico has developed a computer simulation of cylindrical geometry alkali metal thermal-to-electric converter cells using a standard Fortran 77 computer code. The objective and use of this code was to compare the experimental measurements with computer simulations, upgrade the model as appropriate, and conduct investigations of various methods to improve the design and performance of the devices for improved efficiency, durability, and longer operational lifetime. The Institute of Space and Nuclear Power Studies participated in vacuum testing of PX series alkali metal thermal-to-electric converter cells and developed the alkali metal thermal-to-electric converter Performance Evaluation and Analysis Model. This computer model consisted of a sodium pressure loss model, a cell electrochemical and electric model, and a radiation/conduction heat transfer model. The code closely predicted the operation and performance of a wide variety of PX series cells which led to suggestions for improvements to both lifetime and performance. The code provides valuable insight into the operation of the cell, predicts parameters of components within the cell, and is a useful tool for predicting both the transient and steady state performance of systems of cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tournier, J.; El-Genk, M.S.; Huang, L.
1999-01-01
The Institute of Space and Nuclear Power Studies at the University of New Mexico has developed a computer simulation of cylindrical geometry alkali metal thermal-to-electric converter cells using a standard Fortran 77 computer code. The objective and use of this code was to compare the experimental measurements with computer simulations, upgrade the model as appropriate, and conduct investigations of various methods to improve the design and performance of the devices for improved efficiency, durability, and longer operational lifetime. The Institute of Space and Nuclear Power Studies participated in vacuum testing of PX series alkali metal thermal-to-electric converter cells and developed the alkali metal thermal-to-electric converter Performance Evaluation and Analysis Model. This computer model consisted of a sodium pressure loss model, a cell electrochemical and electric model, and a radiation/conduction heat transfer model. The code closely predicted the operation and performance of a wide variety of PX series cells which led to suggestions for improvements to both lifetime and performance. The code provides valuable insight into the operation of the cell, predicts parameters of components within the cell, and is a useful tool for predicting both the transient and steady state performance of systems of cells.
Feasibility of Computer-Based Videogame Therapy for Children with Cerebral Palsy
Radtka, Sandra; Hone, Robert; Brown, Charles; Mastick, Judy; Melnick, Marsha E.
2013-01-01
Objectives: Standing and gait balance problems are common in children with cerebral palsy (CP), resulting in falls and injuries. Task-oriented exercises to strengthen and stretch muscles that shift the center of mass and change the base of support are effective in improving balance. Gaming environments can be challenging and fun, encouraging children to engage in exercises at home. The aims of this project were to demonstrate the technical feasibility, ease of use, appeal, and safety of a computer-based videogame program designed to improve balance in children with CP. Materials and Methods: This study represents a close collaboration between computer design and clinical team members. The first two phases were performed in the laboratory, and the final phase was done in subjects' homes. The prototype balance game was developed using computer-based real-time three-dimensional programming that enabled the team to capture engineering data necessary to tune the system. Videogame modifications, including identifying compensatory movements, were made in an iterative fashion based on feedback from subjects and observations of clinical and software team members. Results: Subjects (n=14) scored the game 21.5 out of 30 for ease of use and appeal, 4.0 out of 5 for enjoyment, and 3.5 on comprehension. There were no safety issues, and the games performed without technical flaws in final testing. Conclusions: A computer-based videogame incorporating therapeutic movements to improve gait and balance in children with CP was appealing and feasible for home use. A follow-up study examining its effectiveness in improving balance in children with CP is recommended. PMID:24761324
Feasibility of Computer-Based Videogame Therapy for Children with Cerebral Palsy.
Radtka, Sandra; Hone, Robert; Brown, Charles; Mastick, Judy; Melnick, Marsha E; Dowling, Glenna A
2013-08-01
Standing and gait balance problems are common in children with cerebral palsy (CP), resulting in falls and injuries. Task-oriented exercises to strengthen and stretch muscles that shift the center of mass and change the base of support are effective in improving balance. Gaming environments can be challenging and fun, encouraging children to engage in exercises at home. The aims of this project were to demonstrate the technical feasibility, ease of use, appeal, and safety of a computer-based videogame program designed to improve balance in children with CP. This study represents a close collaboration between computer design and clinical team members. The first two phases were performed in the laboratory, and the final phase was done in subjects' homes. The prototype balance game was developed using computer-based real-time three-dimensional programming that enabled the team to capture engineering data necessary to tune the system. Videogame modifications, including identifying compensatory movements, were made in an iterative fashion based on feedback from subjects and observations of clinical and software team members. Subjects (n=14) scored the game 21.5 out of 30 for ease of use and appeal, 4.0 out of 5 for enjoyment, and 3.5 on comprehension. There were no safety issues, and the games performed without technical flaws in final testing. A computer-based videogame incorporating therapeutic movements to improve gait and balance in children with CP was appealing and feasible for home use. A follow-up study examining its effectiveness in improving balance in children with CP is recommended.
NASA Astrophysics Data System (ADS)
Lin, Chern-Sheng; Chen, Chia-Tse; Shei, Hung-Jung; Lay, Yun-Long; Chiu, Chuang-Chien
2012-09-01
This study develops a body motion interactive system with computer vision technology. The application combines interactive games, art performance, and an exercise training system. Multiple image processing and computer vision technologies are used in this study. The system can calculate the color characteristics of an object and then perform color segmentation. When an action judgment is wrong, the system avoids the error with a weight voting mechanism, which sets a condition score and weight value for each action judgment and chooses the best judgment from the weighted vote. Finally, this study estimated the reliability of the system in order to make improvements. The results showed that this method achieves good accuracy and stability during operation of the human-machine interface of the sports training system.
DNS of Flow in a Low-Pressure Turbine Cascade Using a Discontinuous-Galerkin Spectral-Element Method
NASA Technical Reports Server (NTRS)
Garai, Anirban; Diosady, Laslo Tibor; Murman, Scott; Madavan, Nateri
2015-01-01
A new computational capability under development for accurate and efficient high-fidelity direct numerical simulation (DNS) and large eddy simulation (LES) of turbomachinery is described. This capability is based on an entropy-stable Discontinuous-Galerkin spectral-element approach that extends to arbitrarily high orders of spatial and temporal accuracy and is implemented in a computationally efficient manner on a modern high performance computer architecture. A validation study using this method to perform DNS of flow in a low-pressure turbine airfoil cascade is presented. Preliminary results indicate that the method captures the main features of the flow. Discrepancies between the predicted results and the experiments are likely due to the effects of freestream turbulence not being included in the simulation and will be addressed in the final paper.
Computational modeling of neural plasticity for self-organization of neural networks.
Chrol-Cannon, Joseph; Jin, Yaochu
2014-11-01
Self-organization in biological nervous systems during the lifetime is known to largely occur through a process of plasticity that is dependent upon the spike-timing activity in connected neurons. In the field of computational neuroscience, much effort has been dedicated to building up computational models of neural plasticity to replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models for neural plasticity and discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those along the line that combines findings in computational neuroscience and systems biology, and their synergetic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
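Since the review centers on spike-timing-dependent plasticity models, a minimal sketch of the standard pair-based STDP weight-update rule is shown below; the time constants and amplitudes are generic textbook-style assumptions, not parameters from any specific model discussed in the article.

import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    # Pair-based STDP: dt = t_post - t_pre in milliseconds. Potentiation when
    # the presynaptic spike precedes the postsynaptic spike, depression otherwise.
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

dts = np.array([-40.0, -10.0, 5.0, 30.0])
print(stdp_dw(dts))   # negative weight changes for post-before-pre pairs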
Space Radiation Transport Methods Development
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.
2002-01-01
Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we will discuss the progress towards a full three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes using standard Finite Element Method (FEM) geometry common to engineering design practice, enabling development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 milliseconds and severely limits application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of reconfigurable computing and could be utilized in the final design as verification of the design optimized with the deterministic method.
Experimental Realization of High-Efficiency Counterfactual Computation.
Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng
2015-08-21
Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency is limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by final detection. The counterfactual efficiency up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.
Experimental Realization of High-Efficiency Counterfactual Computation
NASA Astrophysics Data System (ADS)
Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng
2015-08-01
Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency is limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by final detection. The counterfactual efficiency up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.
Final Project Report: Data Locality Enhancement of Dynamic Simulations for Exascale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Xipeng
The goal of this project is to develop a set of techniques and software tools to enhance the matching between memory accesses in dynamic simulations and the prominent features of modern and future manycore systems, alleviating the memory performance issues for exascale computing. In the first three years, the PI and his group have achieved significant progress towards the goal, producing a set of novel techniques for improving the memory performance and data locality in manycore systems, yielding 18 conference and workshop papers and 4 journal papers, and graduating 6 Ph.D. students. This report summarizes the research results of this project through that period.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paris, Mark W.
The current one-year project allocation (w17 burst) supports the continuation of research performed in the two-year Institutional Computing allocation (w14 bigbangnucleosynthesis). The project has supported development and production runs resulting in several publications [1, 2, 3, 4] in peer-reviewed journals and talks. Most significantly, we have recently achieved a significant improvement in code performance. This improvement was essential to the prospect of making further progress on this heretofore unsolved multiphysics problem that lies at the intersection of nuclear and particle theory and the kinetic theory of energy transport in a system with internal (quantum) degrees of freedom.
High-Productivity Computing in Computational Physics Education
NASA Astrophysics Data System (ADS)
Tel-Zur, Guy
2011-03-01
We describe the development of a new course in Computational Physics at the Ben-Gurion University. This elective course for 3rd year undergraduates and MSc. students is taught during one semester. Computational Physics is by now well accepted as the Third Pillar of Science. This paper's claim is that modern Computational Physics education should also deal with High-Productivity Computing. The traditional approach of teaching Computational Physics emphasizes ``Correctness'' and then ``Accuracy,'' and we add also ``Performance.'' Along with topics in Mathematical Methods and case studies in Physics, the course devotes a significant amount of time to ``Mini-Courses'' on topics such as: High-Throughput Computing - Condor, Parallel Programming - MPI and OpenMP, How to build a Beowulf, Visualization, and Grid and Cloud Computing. The course intends to teach neither new physics nor new mathematics; it is focused on an integrated approach for solving problems starting from the physics problem, the corresponding mathematical solution, the numerical scheme, writing an efficient computer code and finally analysis and visualization.
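To give a flavor of the Parallel Programming mini-course mentioned above, here is a minimal message-passing sketch; mpi4py is used as a stand-in (an assumption, since the course may well use C or Fortran MPI), and it simply splits a sum across ranks and reduces the partial results.

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank sums a strided slice of 0..999; rank 0 collects the total.
local = sum(range(rank, 1000, size))
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum over {size} ranks: {total}")

Run, for example, with: mpirun -np 4 python reduce_example.py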
Tablet computer enhanced training improves internal medicine exam performance.
Baumgart, Daniel C; Wende, Ilja; Grittner, Ulrike
2017-01-01
Traditional teaching concepts in medical education do not take full advantage of current information technology. We aimed to objectively determine the impact of Tablet PC enhanced training on learning experience and MKSAP® (medical knowledge self-assessment program) exam performance. In this single center, prospective, controlled study, final year medical students and medical residents doing an inpatient service rotation were alternatingly assigned to either the active test (Tablet PC with custom multimedia education software package) or traditional education (control) group, respectively. All completed an extensive questionnaire to collect their socio-demographic data and evaluate educational status, computer affinity and skills, problem solving, eLearning knowledge and self-rated medical knowledge. Both groups were MKSAP® tested at the beginning and the end of their rotation. The MKSAP® score at the final exam was the primary endpoint. Data of 55 participants (tablet n = 24, controls n = 31; 36.4% male, median age 28 years, 65.5% students) were evaluable. The mean MKSAP® score improved in the tablet PC group (score Δ +8, SD: 11), but not in the control group (score Δ -7, SD: 11). After adjustment for baseline score and confounders, the Tablet PC group showed on average 11% better MKSAP® test results compared to the control group (p<0.001). The most commonly used resources for medical problem solving were journal articles looked up on PubMed or Google®, and books. Our study provides evidence that tablet computer based integrated training and clinical practice enhances medical education and exam performance. Larger, multicenter trials are required to independently validate our data. Residency and fellowship directors are encouraged to consider adding portable computer devices and multimedia content and to introduce blended learning to their respective training programs.
Tablet computer enhanced training improves internal medicine exam performance
Wende, Ilja; Grittner, Ulrike
2017-01-01
Background: Traditional teaching concepts in medical education do not take full advantage of current information technology. We aimed to objectively determine the impact of Tablet PC enhanced training on learning experience and MKSAP® (medical knowledge self-assessment program) exam performance. Methods: In this single center, prospective, controlled study, final year medical students and medical residents doing an inpatient service rotation were alternatingly assigned to either the active test (Tablet PC with custom multimedia education software package) or traditional education (control) group, respectively. All completed an extensive questionnaire to collect their socio-demographic data and evaluate educational status, computer affinity and skills, problem solving, eLearning knowledge and self-rated medical knowledge. Both groups were MKSAP® tested at the beginning and the end of their rotation. The MKSAP® score at the final exam was the primary endpoint. Results: Data of 55 participants (tablet n = 24, controls n = 31; 36.4% male, median age 28 years, 65.5% students) were evaluable. The mean MKSAP® score improved in the tablet PC group (score Δ +8, SD: 11), but not in the control group (score Δ -7, SD: 11). After adjustment for baseline score and confounders, the Tablet PC group showed on average 11% better MKSAP® test results compared to the control group (p<0.001). The most commonly used resources for medical problem solving were journal articles looked up on PubMed or Google®, and books. Conclusions: Our study provides evidence that tablet computer based integrated training and clinical practice enhances medical education and exam performance. Larger, multicenter trials are required to independently validate our data. Residency and fellowship directors are encouraged to consider adding portable computer devices and multimedia content and to introduce blended learning to their respective training programs. PMID:28369063
Apollo LM guidance computer software for the final lunar descent.
NASA Technical Reports Server (NTRS)
Eyles, D.
1973-01-01
In all manned lunar landings to date, the lunar module Commander has taken partial manual control of the spacecraft during the final stage of the descent, below roughly 500 ft altitude. This report describes programs developed at the Charles Stark Draper Laboratory, MIT, for use in the LM's guidance computer during the final descent. At this time computational demands on the on-board computer are at a maximum, and particularly close interaction with the crew is necessary. The emphasis is on the design of the computer software rather than on justification of the particular guidance algorithms employed. After the computer and the mission have been introduced, the current configuration of the final landing programs and an advanced version developed experimentally by the author are described.
Design for pressure regulating components
NASA Technical Reports Server (NTRS)
Wichmann, H.
1973-01-01
The design development for Pressure Regulating Components included a regulator component trade-off study with analog computer performance verification to arrive at a final optimized regulator configuration for the Space Storable Propulsion Module, under development for a Jupiter Orbiter mission. This application requires the pressure regulator to be capable of long-term fluorine exposure. In addition, individual but basically identical (for purposes of commonality) units are required for separate oxidizer and fuel pressurization. The need for dual units requires improvement in the regulation accuracy over present designs. An advanced regulator concept was prepared featuring redundant bellows, all metallic/ceramic construction, friction-free guidance of moving parts, gas damping, and the elimination of coil springs normally used for reference forces. The activities included testing of actual size seat/poppet components to determine actual discharge coefficients and flow forces. The resulting data was inserted into the computer model of the regulator. Computer simulation of the propulsion module performance over two mission profiles indicated satisfactory minimization of propellant residual requirements imposed by regulator performance uncertainties.
NASA Technical Reports Server (NTRS)
Elliott, Kenny B.; Ugoletti, Roberto; Sulla, Jeff
1992-01-01
The evolution and optimization of a real-time digital control system is presented. The control system is part of a testbed used to perform focused technology research on the interactions of spacecraft platform and instrument controllers with the flexible-body dynamics of the platform and platform appendages. The control system consists of Computer Automated Measurement and Control (CAMAC) standard data acquisition equipment interfaced to a workstation computer. The goal of this work is to optimize the control system's performance to support controls research using controllers with up to 50 states and frame rates above 200 Hz. The original system could support a 16-state controller operating at a rate of 150 Hz. By using simple yet effective software improvements, Input/Output (I/O) latencies and contention problems are reduced or eliminated in the control system. The final configuration can support a 16-state controller operating at 475 Hz. Effectively the control system's performance was increased by a factor of 3.
Information granules in image histogram analysis.
Wieclawek, Wojciech
2018-04-01
A concept of granular computing employed in intensity-based image enhancement is discussed. First, a weighted granular computing idea is introduced. Then, the implementation of this term in the image processing area is presented. Finally, multidimensional granular histogram analysis is introduced. The proposed approach is dedicated to digital images, especially to medical images acquired by Computed Tomography (CT). As the histogram equalization approach, this method is based on image histogram analysis. Yet, unlike the histogram equalization technique, it works on a selected range of the pixel intensity and is controlled by two parameters. Performance is tested on anonymous clinical CT series. Copyright © 2017 Elsevier Ltd. All rights reserved.
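For orientation, the sketch below implements classic global histogram equalization, the baseline technique this granular approach is contrasted with; the 8-bit grayscale image and its size are assumptions for the example, not data from the paper.

import numpy as np

def equalize_histogram(img, levels=256):
    # Classic global histogram equalization for an 8-bit grayscale image:
    # map intensities through the normalized cumulative histogram.
    hist, _ = np.histogram(img.ravel(), bins=levels, range=(0, levels))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[img]

# Synthetic low-contrast image used purely for illustration.
img = np.random.randint(0, 128, size=(64, 64), dtype=np.uint8)
print(equalize_histogram(img).min(), equalize_histogram(img).max())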
Lattice dynamics calculations based on density-functional perturbation theory in real space
NASA Astrophysics Data System (ADS)
Shang, Honghui; Carbogno, Christian; Rinke, Patrick; Scheffler, Matthias
2017-06-01
A real-space formalism for density-functional perturbation theory (DFPT) is derived and applied for the computation of harmonic vibrational properties in molecules and solids. The practical implementation using numeric atom-centered orbitals as basis functions is demonstrated exemplarily for the all-electron Fritz Haber Institute ab initio molecular simulations (FHI-aims) package. The convergence of the calculations with respect to numerical parameters is carefully investigated and a systematic comparison with finite-difference approaches is performed both for finite (molecules) and extended (periodic) systems. Finally, the scaling tests and scalability tests on massively parallel computer systems demonstrate the computational efficiency.
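A minimal sketch of the finite-difference reference approach mentioned above: the Hessian of a scalar energy function is built by central differences and diagonalized to obtain harmonic frequencies. The toy quadratic potential and unit masses are assumptions for illustration; this is not DFPT and not the FHI-aims implementation.

import numpy as np

def numerical_hessian(energy, x0, h=1e-4):
    # Central finite-difference Hessian of a scalar energy function.
    n = len(x0)
    hess = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            def shifted(si, sj):
                x = x0.copy()
                x[i] += si * h
                x[j] += sj * h
                return energy(x)
            hess[i, j] = (shifted(+1, +1) - shifted(+1, -1)
                          - shifted(-1, +1) + shifted(-1, -1)) / (4 * h * h)
    return hess

# Toy quadratic potential with known force constants k = (1, 4).
k = np.array([1.0, 4.0])
energy = lambda x: 0.5 * np.dot(k, x * x)
H = numerical_hessian(energy, np.zeros(2))
print(np.sqrt(np.linalg.eigvalsh(H)))   # harmonic frequencies for unit masses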
NASA Astrophysics Data System (ADS)
Pour Yousefian Barfeh, Davood; Ebron, Jonalyn G.; Pabico, Jaderick P.
2018-02-01
In this study, the researchers focus on the essence of Insertion Sort and propose a sorter in Membrane Computing. The research shows how a theoretical computing device such as Membrane Computing can perform a basic operation such as sorting. In this regard, the researchers introduce a conditional reproduction rule such that each membrane can reproduce another membrane having the same structure as the original membrane. The researchers use the functionality of a comparator P system as a basis, in which two multisets are compared and then stored in two adjacent membranes. Finally, the researchers present the process of sorting as a collection of transactions implemented in four levels, where each level has different steps.
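For reference, here is the conventional insertion sort whose compare-and-insert behavior the membrane-computing sorter emulates with multisets and membranes; the implementation below is a plain sequential version, not the P system construction itself.

def insertion_sort(seq):
    # Classic insertion sort: grow a sorted prefix, inserting each new key
    # after shifting larger elements one position to the right.
    out = list(seq)
    for i in range(1, len(out)):
        key = out[i]
        j = i - 1
        while j >= 0 and out[j] > key:
            out[j + 1] = out[j]
            j -= 1
        out[j + 1] = key
    return out

print(insertion_sort([5, 2, 9, 1, 5, 6]))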
NASA Technical Reports Server (NTRS)
Wattson, R. B.; Harvey, P.; Swift, R.
1975-01-01
An intrinsic silicon charge injection device (CID) television sensor array has been used in conjunction with a CaMoO4 collinear tunable acousto-optic filter, a 61 inch reflector, a sophisticated computer system, and a digital color TV scan converter/computer to produce near-IR images of Saturn and Jupiter with 10 Å spectral resolution and approximately 3 arcsec spatial resolution. The CID camera has successfully obtained digitized 100 x 100 array images with 5 minutes of exposure time and slow-scanned readout to a computer. Details of the equipment setup, innovations, problems, experience, data, and final equipment performance limits are given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madduri, Kamesh; Ediger, David; Jiang, Karl
2009-02-15
We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in HPCS SSCA#2, a benchmark extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the Threadstorm processor, and a single-socket Sun multicore server with the UltraSPARC T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
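As a small-scale illustration only (not the authors' lock-free, multithreaded implementation), exact and sampled betweenness centrality can be computed with NetworkX; the k parameter selects the number of sampled source vertices, the same approximation idea used for the IMDb-scale analysis. The Watts-Strogatz graph below is an assumed small-world stand-in for the benchmark inputs.

```python
import networkx as nx

# Small-world test graph; the benchmark graphs in the paper are vastly larger.
G = nx.watts_strogatz_graph(n=2000, k=8, p=0.1, seed=1)

bc_exact = nx.betweenness_centrality(G)                   # Brandes' algorithm over all sources
bc_approx = nx.betweenness_centrality(G, k=128, seed=1)   # approximate: 128 sampled sources

v = max(bc_exact, key=bc_exact.get)
print(v, bc_exact[v], bc_approx[v])
```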
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madduri, Kamesh; Ediger, David; Jiang, Karl
2009-05-29
We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in the HPCS SSCA#2 Graph Analysis benchmark, which has been extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the ThreadStorm processor, and a single-socket Sun multicore server with the UltraSparc T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
Now and Next-Generation Sequencing Techniques: Future of Sequence Analysis Using Cloud Computing
Thakur, Radhe Shyam; Bandopadhyay, Rajib; Chaudhary, Bratati; Chatterjee, Sourav
2012-01-01
Advances in the field of sequencing techniques have resulted in the greatly accelerated production of huge sequence datasets. This presents immediate challenges in database maintenance at datacenters. It provides additional computational challenges in data mining and sequence analysis. Together these represent a significant overburden on traditional stand-alone computer resources, and to reach effective conclusions quickly and efficiently, the virtualization of the resources and computation on a pay-as-you-go concept (together termed “cloud computing”) has recently appeared. The collective resources of the datacenter, including both hardware and software, can be available publicly, being then termed a public cloud, the resources being provided in a virtual mode to the clients who pay according to the resources they employ. Examples of public companies providing these resources include Amazon, Google, and Joyent. The computational workload is shifted to the provider, which also implements required hardware and software upgrades over time. A virtual environment is created in the cloud corresponding to the computational and data storage needs of the user via the internet. The task is then performed, the results transmitted to the user, and the environment finally deleted after all tasks are completed. In this discussion, we focus on the basics of cloud computing, and go on to analyze the prerequisites and overall working of clouds. Finally, the applications of cloud computing in biological systems, particularly in comparative genomics, genome informatics, and SNP detection are discussed with reference to traditional workflows. PMID:23248640
Multidisciplinary Shape Optimization of a Composite Blended Wing Body Aircraft
NASA Astrophysics Data System (ADS)
Boozer, Charles Maxwell
A multidisciplinary shape optimization tool coupling aerodynamics, structure, and performance was developed for battery-powered aircraft. Utilizing high-fidelity computational fluid dynamics analysis tools and a structural wing weight tool, coupled under the multidisciplinary feasible optimization architecture, the aircraft geometry is modified to optimize the aircraft's range or endurance. The developed tool is applied to three geometries: a hybrid blended-wing-body delta-wing UAS, the ONERA M6 wing, and a modified ONERA M6 wing. First, the optimization problem is presented with the objective function, constraints, and design vector. Next, the tool's architecture and the analysis tools that are utilized are described. Finally, various optimizations are described and their results analyzed for all test subjects. Results show that less computationally expensive inviscid optimizations yield positive performance improvements using planform, airfoil, and three-dimensional degrees of freedom. From the results obtained through a series of optimizations, it is concluded that the newly developed tool is effective at improving performance and serves as a platform ready to receive additional performance modules, further improving its computational design support potential.
Ferguson, Kristi J; Kreiter, Clarence D; Peterson, Michael W; Rowat, Jane A; Elliott, Scott T
2002-01-01
Whether examinees benefit from the opportunity to change answers to examination questions has been discussed widely. This study was undertaken to document the impact of answer changing on exam performance on a computer-based course examination in a second-year medical school course. This study analyzed data from a 2 hour, 80-item computer delivered multiple-choice exam administered to 190 students (166 second-year medical students and 24 physician's assistant students). There was a small but significant net improvement in overall score when answers were changed: one student's score increased by 7 points, 93 increased by 1 to 4 points, and 38 decreased by 1 to 3 points. On average, lower-performing students benefited slightly less than higher-performing students. Students spent more time on questions for which they changed the answers and were more likely to change items that were more difficult. Students should not be discouraged from changing answers, especially to difficult questions that require careful consideration, although the net effect is quite small.
Message Passing and Shared Address Space Parallelism on an SMP Cluster
NASA Technical Reports Server (NTRS)
Shan, Hongzhang; Singh, Jaswinder P.; Oliker, Leonid; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2002-01-01
Currently, message passing (MP) and shared address space (SAS) are the two leading parallel programming paradigms. MP has been standardized with MPI, and is the more common and mature approach; however, code development can be extremely difficult, especially for irregularly structured computations. SAS offers substantial ease of programming, but may suffer from performance limitations due to poor spatial locality and high protocol overhead. In this paper, we compare the performance of and the programming effort required for six applications under both programming models on a 32-processor PC-SMP cluster, a platform that is becoming increasingly attractive for high-end scientific computing. Our application suite consists of codes that typically do not exhibit scalable performance under shared-memory programming due to their high communication-to-computation ratios and/or complex communication patterns. Results indicate that SAS can achieve about half the parallel efficiency of MPI for most of our applications, while being competitive for the others. A hybrid MPI+SAS strategy shows only a small performance advantage over pure MPI in some cases. Finally, improved implementations of two MPI collective operations on PC-SMP clusters are presented.
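The applications in the study are not Python codes, but the message-passing model itself can be illustrated with a minimal mpi4py sketch (an assumed substitute for the C/MPI implementations): each rank owns its data and all coordination happens through explicit communication, here a collective reduction.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank computes on its own slice of the data; there is no shared address space.
local = np.arange(rank * 1000, (rank + 1) * 1000, dtype=np.float64)
local_sum = local.sum()

total = comm.allreduce(local_sum, op=MPI.SUM)   # explicit collective communication
if rank == 0:
    print(f"global sum over {size} ranks: {total}")
```

Launched with, e.g., `mpirun -n 4 python sum.py` (filename illustrative), every rank communicates explicitly; a shared-address-space version would instead read and update a common array, which is easier to write but exposes the locality and protocol overheads discussed above.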
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.
1989-01-01
Several static and dynamic load balancing techniques for vision systems are presented. These techniques are novel in the sense that they capture the computational requirements of a task by examining the data when it is produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same or have similar computational characteristics. These techniques are evaluated by applying them on a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from different time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains when these data decomposition and load balancing techniques are used are significant and the overhead of using these techniques is minimal.
Hardware Acceleration of Adaptive Neural Algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
James, Conrad D.
As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.
The role of voice input for human-machine communication.
Cohen, P R; Oviatt, S L
1995-01-01
Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent real-time speech recognition, and understanding of naturally spoken utterances with vocabularies of 1000 to 2000 words, and larger. Already, computer manufacturers are building speech recognition subsystems into their new product lines. However, before this technology can be broadly useful, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology. PMID:7479803
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, L.; Notkin, D.; Adams, L.
1990-03-31
This task relates to research on programming massively parallel computers. Previous work on the Ensemble concept of programming was extended, and an investigation into nonshared memory models of parallel computation was undertaken. Previous work on the Ensemble concept defined a set of programming abstractions and was used to organize the programming task into three distinct levels: composition of machine instructions, composition of processes, and composition of phases. It was applied to shared-memory models of computation. During the present research period, these concepts were extended to nonshared memory models. During the present research period, one PhD thesis was completed, and one book chapter and six conference papers were published.
Quantum-Enhanced Cyber Security: Experimental Computation on Quantum-Encrypted Data
2017-03-02
AFRL-AFOSR-UK-TR-2017-0020. Final report (covering 15 Oct 2015 to 31 Dec 2016) for grant FA9550-1-6-1-0004, Quantum-Enhanced Cyber Security: Experimental Computation on Quantum-Encrypted Data; Philip Walther, Universität Wien.
Internal aerodynamics of a generic three-dimensional scramjet inlet at Mach 10
NASA Technical Reports Server (NTRS)
Holland, Scott D.
1995-01-01
A combined computational and experimental parametric study of the internal aerodynamics of a generic three-dimensional sidewall compression scramjet inlet configuration at Mach 10 has been performed. The study was designed to demonstrate the utility of computational fluid dynamics as a design tool in hypersonic inlet flow fields, to provide a detailed account of the nature and structure of the internal flow interactions, and to provide a comprehensive surface property and flow field database to determine the effects of contraction ratio, cowl position, and Reynolds number on the performance of a hypersonic scramjet inlet configuration. The work proceeded in several phases: the initial inviscid assessment of the internal shock structure, the preliminary computational parametric study, the coupling of the optimized configuration with the physical limitations of the facility, the wind tunnel blockage assessment, and the computational and experimental parametric study of the final configuration. Good agreement between computation and experimentation was observed in the magnitude and location of the interactions, particularly for weakly interacting flow fields. Large-scale forward separations resulted when the interaction strength was increased by increasing the contraction ratio or decreasing the Reynolds number.
1993-12-01
Fragmentary report record (University of Texas at San Antonio; PL/VT Division). The recoverable text notes that modules can be primitive or compound, that primitive modules represent the elementary computation units and define their interfaces, and that the measured performance remained roughly linear under varying conditions across the range of processor numbers.
Portable multi-node LQCD Monte Carlo simulations using OpenACC
NASA Astrophysics Data System (ADS)
Bonati, Claudio; Calore, Enrico; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Sanfilippo, Francesco; Schifano, Sebastiano Fabio; Silvi, Giorgio; Tripiccione, Raffaele
This paper describes a state-of-the-art parallel Lattice QCD Monte Carlo code for staggered fermions, purposely designed to be portable across different computer architectures, including GPUs and commodity CPUs. Portability is achieved using the OpenACC parallel programming model, used to develop a code that can be compiled for several processor architectures. The paper focuses on parallelization on multiple computing nodes using OpenACC to manage parallelism within the node, and OpenMPI to manage parallelism among the nodes. We first discuss the available strategies to be adopted to maximize performances, we then describe selected relevant details of the code, and finally measure the level of performance and scaling-performance that we are able to achieve. The work focuses mainly on GPUs, which offer a significantly high level of performances for this application, but also compares with results measured on other processors.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-13
... false positive match rate of 10 percent. Making the match mandatory for the States who did not perform... number of prisoners from 1995 to 2013 and assumed a 10 percent false positive match rate. Finally, we... matches are false positives. We estimate that mandatory matches at certification will identify an...
Engineering Students Designing a Statistical Procedure for Quantifying Variability
ERIC Educational Resources Information Center
Hjalmarson, Margret A.
2007-01-01
The study examined first-year engineering students' responses to a statistics task that asked them to generate a procedure for quantifying variability in a data set from an engineering context. Teams used technological tools to perform computations, and their final product was a ranking procedure. The students could use any statistical measures,…
Low-Cost Terminal Alternative for Learning Center Managers. Final Report.
ERIC Educational Resources Information Center
Nix, C. Jerome; And Others
This study established the feasibility of replacing high performance and relatively expensive computer terminals with less expensive ones adequate for supporting specific tasks of Advanced Instructional System (AIS) at Lowry AFB, Colorado. Surveys of user requirements and available devices were conducted and the results used in a system analysis.…
Evaluation of the CMI Instructor Role Training Program in the Navy and Air Force. Final Report.
ERIC Educational Resources Information Center
McCombs, Barbara L.; And Others
A computer managed instruction (CMI) instructor role definition and training package was designed to help CMI teachers acquire the skills necessary to perform seven theoretically-based instructor roles: planner, implementer/monitor, evaluator/provider, diagnostician, remediator, counselor/advisor, and tutor/modeler. Data for the evaluation of the…
ERIC Educational Resources Information Center
Smyth, Carol B.; Grannell, Dorothy S.; Moore, Miriam
The Literacy Resource Center project, a program of the Wayne Township Public Library also known as the Morrisson-Reeves Library (Richmond, Indiana), involved recruitment, retention, coalition building, public awareness, training, basic literacy, collection development, tutoring, computer-assisted, other technology, employment oriented,…
ERIC Educational Resources Information Center
Nevels, Vada Germaine
The Hopkinsville-Christian County Library (Kentucky) conducted a project that involved recruitment, public awareness, basic literacy, collection development, tutoring, computer-assisted, other technology, and intergenerational/family programs. The project served a community of 50,000-100,000 people, and targeted the learning disabled,…
Algorithms for parallel and vector computations
NASA Technical Reports Server (NTRS)
Ortega, James M.
1995-01-01
This is a final report on work performed under NASA grant NAG-1-1112-FOP during the period March 1990 through February 1995. Four major topics are covered: (1) solution of nonlinear Poisson-type equations; (2) parallel reduced system conjugate gradient method; (3) orderings for conjugate gradient preconditioners; and (4) SOR as a preconditioner.
ERIC Educational Resources Information Center
Cole, Lucy; Fraser, Ruth
The Columbia County Public Library (Lake City, Florida) conducted a project that involved recruitment, retention, public awareness, training, basic literacy, collection development, tutoring, computer- assisted, other technology, intergenerational/family, and English as a Second Language (ESL) programs. The project served a community of…
The Development of Reading for Comprehension: An Information Processing Analysis. Final Report.
ERIC Educational Resources Information Center
Schadler, Margaret; Juola, James F.
This report summarizes research performed at the University of Kansas that involved several topics related to reading and learning to read, including the development of automatic word recognition processes, reading for comprehension, and the development of new computer technologies designed to facilitate the reading process. The first section…
Hammond Workforce 2000: Literacy for Older Adults. Final Performance Report.
ERIC Educational Resources Information Center
Hammond Public Library, IN.
From October 1993 to September 1994, a project provided equipment and materials to extend literacy efforts to older adults at the Hammond Public Library, Indiana. Notebook computers containing user-friendly software, used in coordination with the local Laubach Literacy Program, as well as books, audiocassettes, videocassettes, and BiFolkal media…
NASA Technical Reports Server (NTRS)
Thomas, P. D.
1980-01-01
A computer implemented numerical method for predicting the flow in and about an isolated three dimensional jet exhaust nozzle is summarized. The approach is based on an implicit numerical method to solve the unsteady Navier-Stokes equations in a boundary conforming curvilinear coordinate system. Recent improvements to the original numerical algorithm are summarized. Equations are given for evaluating nozzle thrust and discharge coefficient in terms of computed flowfield data. The final formulation of models that are used to simulate flow turbulence effect is presented. Results are presented from numerical experiments to explore the effect of various quantities on the rate of convergence to steady state and on the final flowfield solution. Detailed flowfield predictions for several two and three dimensional nozzle configurations are presented and compared with wind tunnel experimental data.
Di Girolamo, Nicola; Selleri, Paolo; Nardini, Giordano; Corlazzoli, Daniele; Fonti, Paolo; Rossier, Christophe; Della Salda, Leonardo; Schilliger, Lionel; Vignoli, Massimo; Bongiovanni, Laura
2014-12-01
Two boa constrictors (Boa constrictor imperator) presented with paresis of the trunk originating cranial to the cloaca. Radiographs were consistent with proliferative bone lesions involving several vertebrae. Computed tomography (CT) demonstrated the presence of lytic/expansile lesions. Computed tomography-guided biopsies of the lesions were performed without complications. Histology was consistent with bacterial osteomyelitis and osteoarthritis. Gram-negative bacteria (Salmonella sp. and Pseudomonas sp.) were isolated from cultures of the biopsies. Medical treatment with specific antibiotics was attempted for several weeks in both cases without clinical or radiographic improvements. The animals were euthanized, and necropsy confirmed the findings observed upon CT. To the authors' knowledge, this is the first report of the use of CT-guided biopsies to evaluate proliferative vertebral lesions in snakes. In the present report, CT-guided biopsies were easily performed, and both histologic and microbiologic results were consistent with the final diagnosis.
An empirical generative framework for computational modeling of language acquisition.
Waterfall, Heidi R; Sandbank, Ben; Onnis, Luca; Edelman, Shimon
2010-06-01
This paper reports progress in developing a computer model of language acquisition in the form of (1) a generative grammar that is (2) algorithmically learnable from realistic corpus data, (3) viable in its large-scale quantitative performance and (4) psychologically real. First, we describe new algorithmic methods for unsupervised learning of generative grammars from raw CHILDES data and give an account of the generative performance of the acquired grammars. Next, we summarize findings from recent longitudinal and experimental work that suggests how certain statistically prominent structural properties of child-directed speech may facilitate language acquisition. We then present a series of new analyses of CHILDES data indicating that the desired properties are indeed present in realistic child-directed speech corpora. Finally, we suggest how our computational results, behavioral findings, and corpus-based insights can be integrated into a next-generation model aimed at meeting the four requirements of our modeling framework.
Probabilistic Structural Analysis Theory Development
NASA Technical Reports Server (NTRS)
Burnside, O. H.
1985-01-01
The objective of the Probabilistic Structural Analysis Methods (PSAM) project is to develop analysis techniques and computer programs for predicting the probabilistic response of critical structural components for current and future space propulsion systems. This technology will play a central role in establishing system performance and durability. The first year's technical activity is concentrating on probabilistic finite element formulation strategy and code development. Work is also in progress to survey critical materials and Space Shuttle main engine components. The probabilistic finite element computer program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) is being developed. The final probabilistic code will have, in the general case, the capability of performing nonlinear dynamic analysis of stochastic structures. It is the goal of the approximate methods effort to increase problem-solving efficiency relative to finite element methods by using energy methods to generate trial solutions which satisfy the structural boundary conditions. These approximate methods will be less computer-intensive relative to the finite element approach.
A High-Performance Genetic Algorithm: Using Traveling Salesman Problem as a Case
Tsai, Chun-Wei; Tseng, Shih-Pang; Yang, Chu-Sing
2014-01-01
This paper presents a simple but efficient algorithm for reducing the computation time of genetic algorithm (GA) and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis on the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA. PMID:24892038
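The core observation, that genes shared by every individual can be frozen and excluded from later work, can be sketched as follows. This is an illustrative fragment with assumed names, not the authors' full GA.

```python
def common_genes(population):
    """Return {position: gene} for loci on which every individual in the population agrees."""
    first = population[0]
    return {i: g for i, g in enumerate(first)
            if all(ind[i] == g for ind in population)}

# Three candidate TSP tours encoded as city sequences.
pop = [
    [0, 3, 1, 4, 2],
    [0, 3, 2, 4, 1],
    [0, 3, 1, 4, 2],
]
print(common_genes(pop))   # {0: 0, 1: 3, 3: 4} -- these loci need no further recomputation
```

Once such loci are detected, crossover, mutation, and the corresponding parts of the fitness evaluation can skip them in later generations, which is where the reported reduction in computation time comes from.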
RFA Guardian: Comprehensive Simulation of Radiofrequency Ablation Treatment of Liver Tumors.
Voglreiter, Philip; Mariappan, Panchatcharam; Pollari, Mika; Flanagan, Ronan; Blanco Sequeiros, Roberto; Portugaller, Rupert Horst; Fütterer, Jurgen; Schmalstieg, Dieter; Kolesnik, Marina; Moche, Michael
2018-01-15
The RFA Guardian is a comprehensive application for high-performance patient-specific simulation of radiofrequency ablation of liver tumors. We address a wide range of usage scenarios. These include pre-interventional planning, sampling of the parameter space for uncertainty estimation, treatment evaluation and, in the worst case, failure analysis. The RFA Guardian is the first of its kind that exhibits sufficient performance for simulating treatment outcomes during the intervention. We achieve this by combining a large number of high-performance image processing, biomechanical simulation and visualization techniques into a generalized technical workflow. Further, we wrap the feature set into a single, integrated application, which exploits all available resources of standard consumer hardware, including massively parallel computing on graphics processing units. This allows us to predict or reproduce treatment outcomes on a single personal computer with high computational performance and high accuracy. The resulting low demand for infrastructure enables easy and cost-efficient integration into the clinical routine. We present a number of evaluation cases from the clinical practice where users performed the whole technical workflow from patient-specific modeling to final validation and highlight the opportunities arising from our fast, accurate prediction techniques.
Applied Computational Fluid Dynamics at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Kwak, Dochan (Technical Monitor)
1994-01-01
The field of Computational Fluid Dynamics (CFD) has advanced to the point where it can now be used for many applications in fluid mechanics research and aerospace vehicle design. A few applications being explored at NASA Ames Research Center will be presented and discussed. The examples presented will range in speed from hypersonic to low speed incompressible flow applications. Most of the results will be from numerical solutions of the Navier-Stokes or Euler equations in three space dimensions for general geometry applications. Computational results will be used to highlight the presentation as appropriate. Advances in computational facilities including those associated with NASA's CAS (Computational Aerosciences) Project of the Federal HPCC (High Performance Computing and Communications) Program will be discussed. Finally, opportunities for future research will be presented and discussed. All material will be taken from non-sensitive, previously-published and widely-disseminated work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, James C. Jr.; Mason, Thomas; Guerrieri, Bruno
1997-10-01
Programs have been established at Florida A&M University to attract minority students to research careers in mathematics and computational science. The primary goal of the program was to increase the number of such students studying computational science via an interactive multimedia learning environment. One mechanism used for meeting this goal was the development of educational modules. This academic-year program, established within the mathematics department at Florida A&M University, introduced students to computational science projects using high-performance computers. Additional activities were conducted during the summer; these included workshops, meetings, and lectures. Through the exposure this program provided to scientific ideas and research in computational science, it is likely that the students will go on to apply tools from this interdisciplinary field successfully.
Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M.
2009-09-09
SLURM is an open-source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small computer clusters. As a cluster resource manager, SLURM has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates conflicting requests for resources by managing a queue of pending work.
Iterative algorithms for computing the feedback Nash equilibrium point for positive systems
NASA Astrophysics Data System (ADS)
Ivanov, I.; Imsland, Lars; Bogdanova, B.
2017-03-01
The paper studies N-player linear quadratic differential games on an infinite time horizon with deterministic feedback information structure. It introduces two iterative methods (the Newton method as well as its accelerated modification) in order to compute the stabilising solution of a set of generalised algebraic Riccati equations. The latter is related to the Nash equilibrium point of the considered game model. Moreover, we derive the sufficient conditions for convergence of the proposed methods. Finally, we discuss two numerical examples so as to illustrate the performance of both of the algorithms.
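For orientation, in the standard (unconstrained) N-player LQ game the stabilising solution sought by such iterations satisfies the coupled algebraic Riccati equations below; the generalised equations treated in the paper additionally reflect the positivity of the system, so this is only the familiar special case:

\[
\Bigl(A - \sum_{j \neq i} S_j P_j\Bigr)^{T} P_i + P_i \Bigl(A - \sum_{j \neq i} S_j P_j\Bigr) - P_i S_i P_i + Q_i = 0, \qquad S_j = B_j R_j^{-1} B_j^{T}, \quad i = 1, \dots, N,
\]

with equilibrium feedback gains \(F_i = -R_i^{-1} B_i^{T} P_i\). Newton-type methods linearise these coupled equations around the current iterate and solve linear (Lyapunov-type) equations for the update at each step.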
Solar heating and cooling system installed at RKL Controls Company, Lumberton, New Jersey
NASA Technical Reports Server (NTRS)
1981-01-01
The final results of the design and operation of a computer controlled solar heated and cooled 40,000 square foot manufacturing building, sales office, and computer control center/display room are summarized. The system description, test data, major problems and resolutions, performance, operation and maintenance manual, equipment manufacturers' literature, and as-built drawings are presented. The solar system is composed of 6,000 square feet of flat plate collectors, external above ground storage subsystem, controls, absorption chiller, heat recovery, and a cooling tower.
Structure identification methods for atomistic simulations of crystalline materials
Stukowski, Alexander
2012-05-28
Here, we discuss existing and new computational analysis techniques to classify local atomic arrangements in large-scale atomistic computer simulations of crystalline solids. This article includes a performance comparison of typical analysis algorithms such as common neighbor analysis (CNA), centrosymmetry analysis, bond angle analysis, bond order analysis and Voronoi analysis. In addition we propose a simple extension to the CNA method that makes it suitable for multi-phase systems. Finally, we introduce a new structure identification algorithm, the neighbor distance analysis, which is designed to identify atomic structure units in grain boundaries.
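As a minimal sketch of the first step of common neighbor analysis (not the full algorithm, and not the proposed neighbor distance analysis), the following computes part of the per-bond CNA signature from a precomputed neighbor table; the helper name and data layout are assumptions.

```python
def cna_pair_signature(i, j, neighbors):
    """Partial CNA signature of the bonded pair (i, j).

    neighbors: dict mapping atom index -> set of neighbor indices from a cutoff search.
    Returns (ncn, nb): the number of common neighbors and the number of bonds among them.
    The full CNA triplet additionally records the longest bond chain among the common neighbors.
    """
    common = neighbors[i] & neighbors[j]
    ncn = len(common)
    nb = sum(1 for a in common for b in common if a < b and b in neighbors[a])
    return ncn, nb
```

Classifying each atom then amounts to tallying these signatures over its bonds; for example, an atom whose twelve bonds all carry the (4,2,1) triplet is labelled FCC, while a mixture of (4,2,1) and (4,2,2) indicates HCP.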
Healthcare4VideoStorm: Making Smart Decisions Based on Storm Metrics.
Zhang, Weishan; Duan, Pengcheng; Chen, Xiufeng; Lu, Qinghua
2016-04-23
Storm-based stream processing is widely used for real-time large-scale distributed processing. Knowing the run-time status and ensuring performance is critical to providing expected dependability for some applications, e.g., continuous video processing for security surveillance. Existing scheduling strategies are too coarse-grained to achieve good performance, and they consider mainly network resources, not computing resources, when scheduling. In this paper, we propose Healthcare4Storm, a framework that derives insights from Storm metrics about the health status of an application and ultimately arrives at smart scheduling decisions. It takes into account both network and computing resources and conducts scheduling at a fine-grained level using tuples instead of topologies. The comprehensive evaluation shows that the proposed framework has good performance and can improve the dependability of Storm-based applications.
Computer-Vision-Assisted Palm Rehabilitation With Supervised Learning.
Vamsikrishna, K M; Dogra, Debi Prosad; Desarkar, Maunendra Sankar
2016-05-01
Physical rehabilitation supported by computer-assisted interfaces is gaining popularity within the health-care community. In this paper, we have proposed a computer-vision-assisted contactless methodology to facilitate palm and finger rehabilitation. A Leap Motion controller has been interfaced with a computing device to record parameters describing 3-D movements of the palm of a user undergoing rehabilitation. We have proposed an interface using the Unity3D development platform. Our interface is capable of analyzing intermediate steps of rehabilitation without the help of an expert, and it can provide online feedback to the user. Isolated gestures are classified using linear discriminant analysis (DA) and support vector machines (SVM). Finally, a set of discrete hidden Markov models (HMM) have been used to classify gesture sequences performed during rehabilitation. Experimental validation using a large number of samples collected from healthy volunteers reveals that DA and SVM perform similarly when applied to isolated gesture recognition. We have compared the results of HMM-based sequence classification with CRF-based techniques. Our results confirm that both HMM and CRF perform quite similarly when tested on gesture sequences. The proposed system can be used for home-based palm or finger rehabilitation in the absence of experts.
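A hedged sketch of the isolated-gesture comparison using scikit-learn is shown below; the synthetic features merely stand in for the Leap-Motion-derived palm parameters, and the classifier settings are assumptions rather than the authors' configuration.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for per-gesture feature vectors (e.g., palm position/orientation statistics).
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           n_classes=4, random_state=0)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean cross-validated accuracy = {scores.mean():.3f}")
```

Sequence-level classification would add a temporal model on top of such per-gesture decisions; the paper uses discrete HMMs and compares them against CRFs for that step.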
A General Cross-Layer Cloud Scheduling Framework for Multiple IoT Computer Tasks.
Wu, Guanlin; Bao, Weidong; Zhu, Xiaomin; Zhang, Xiongtao
2018-05-23
The diversity of IoT services and applications poses enormous challenges for scheduling multiple computer tasks efficiently in cross-layer cloud computing systems. Unfortunately, the commonly employed frameworks fail to adapt to the new patterns of the cross-layer cloud. To solve this issue, we design a new computer task scheduling framework for multiple IoT services in cross-layer cloud computing systems. Specifically, we first analyze the features of the cross-layer cloud and computer tasks. Then, we design the scheduling framework based on the analysis and present detailed models to illustrate the procedures of using the framework. With the proposed framework, the IoT services deployed in cross-layer cloud computing systems can dynamically select suitable algorithms and use resources more effectively to finish computer tasks with different objectives. Finally, algorithms based on the framework are given, and extensive experiments validate its effectiveness as well as its superiority.
Performance of VPIC on Trinity
NASA Astrophysics Data System (ADS)
Nystrom, W. D.; Bergen, B.; Bird, R. F.; Bowers, K. J.; Daughton, W. S.; Guo, F.; Li, H.; Nam, H. A.; Pang, X.; Rust, W. N., III; Wohlbier, J.; Yin, L.; Albright, B. J.
2016-10-01
Trinity is a new major DOE computing resource which is going through final acceptance testing at Los Alamos National Laboratory. Trinity has several new and unique architectural features including two compute partitions, one with dual socket Intel Haswell Xeon compute nodes and one with Intel Knights Landing (KNL) Xeon Phi compute nodes. Additional unique features include use of on package high bandwidth memory (HBM) for the KNL nodes, the ability to configure the KNL nodes with respect to HBM model and on die network topology in a variety of operational modes at run time, and use of solid state storage via burst buffer technology to reduce time required to perform I/O. An effort is in progress to port and optimize VPIC to Trinity and evaluate its performance. Because VPIC was recently released as Open Source, it is being used as part of acceptance testing for Trinity and is participating in the Trinity Open Science Program which has resulted in excellent collaboration activities with both Cray and Intel. Results of this work will be presented on performance of VPIC on both Haswell and KNL partitions for both single node runs and runs at scale. Work performed under the auspices of the U.S. Dept. of Energy by the Los Alamos National Security, LLC Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.
Portable parallel stochastic optimization for the design of aeropropulsion components
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Rhodes, G. S.
1994-01-01
This report presents the results of Phase 1 research to develop a methodology for performing large-scale Multi-disciplinary Stochastic Optimization (MSO) for the design of aerospace systems ranging from aeropropulsion components to complete aircraft configurations. The current research recognizes that such design optimization problems are computationally expensive, and require the use of either massively parallel or multiple-processor computers. The methodology also recognizes that many operational and performance parameters are uncertain, and that uncertainty must be considered explicitly to achieve optimum performance and cost. The objective of this Phase 1 research was to initiate the development of an MSO methodology that is portable to a wide variety of hardware platforms, while achieving efficient, large-scale parallelism when multiple processors are available. The first effort in the project was a literature review of available computer hardware, as well as a review of portable, parallel programming environments. The second effort was to implement the MSO methodology for a problem using the portable parallel programming system Parallel Virtual Machine (PVM). The third and final effort was to demonstrate the example on a variety of computers, including a distributed-memory multiprocessor, a distributed-memory network of workstations, and a single-processor workstation. Results indicate the MSO methodology can be applied effectively to large-scale aerospace design problems. Nearly perfect linear speedup was demonstrated for computation of optimization sensitivity coefficients on both a 128-node distributed-memory multiprocessor (the Intel iPSC/860) and a network of workstations (speedups of almost 19 times achieved for 20 workstations). Very high parallel efficiencies (75 percent for 31 processors and 60 percent for 50 processors) were also achieved for computation of aerodynamic influence coefficients on the Intel. Finally, the multi-level parallelization strategy that will be needed for large-scale MSO problems was demonstrated to be highly efficient. The same parallel code instructions were used on both platforms, demonstrating portability. There are many applications to which MSO can be applied, including NASA's High-Speed Civil Transport and advanced propulsion systems. The use of MSO will reduce design and development time and testing costs dramatically.
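For reference, the reported figures follow directly from the usual definitions of speedup and parallel efficiency:

\[
S_p = \frac{T_1}{T_p}, \qquad E_p = \frac{S_p}{p},
\]

so a speedup of almost 19 on 20 workstations corresponds to an efficiency of roughly \(19/20 \approx 95\%\), and the quoted efficiencies of 75% on 31 processors and 60% on 50 processors correspond to speedups of about 23 and 30, respectively.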
2010-03-01
Report record covering October 2008 to October 2009: Performance and Power Optimization for Cognitive Processor Design Using... The recoverable front matter lists sections on cognitive models and algorithms for intelligent text recognition (including a Brain-State-in-a-Box neural network model), an ASIC-style design and synthesis flow for the FPU, screen shots of the final layouts, and a projected performance and power roadmap.
Lytro camera technology: theory, algorithms, performance analysis
NASA Astrophysics Data System (ADS)
Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio
2013-03-01
The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization aided by the increase in computational power characterizing mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of Lytro camera from a system level perspective, considering the Lytro camera as a black box, and uses our interpretation of Lytro image data saved by the camera. We present our findings based on our interpretation of Lytro camera file structure, image calibration and image rendering; in this context, artifacts and final image resolution are discussed.
Electromagnetic physics models for parallel computing architectures
Amadio, G.; Ananya, A.; Apostolakis, J.; ...
2016-11-21
The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Finally, the results of preliminary performance evaluation and physics validation are presented as well.
Performance evaluation of the Engineering Analysis and Data Systems (EADS) 2
NASA Technical Reports Server (NTRS)
Debrunner, Linda S.
1994-01-01
The Engineering Analysis and Data System (EADS) II (1) was installed in March 1993 to provide high-performance computing for science and engineering at Marshall Space Flight Center (MSFC). EADS II increased the computing capabilities over the existing EADS facility in the areas of throughput and mass storage. EADS II includes a Vector Processor Compute System (VPCS), a Virtual Memory Compute System (CFS), a Common Output System (COS), as well as an Image Processing Station, Mini Super Computers, and Intelligent Workstations. These facilities are interconnected by a sophisticated network system. This work considers only the performance of the VPCS and the CFS. The VPCS is a Cray Y-MP. The CFS is implemented on an RS 6000 using the UniTree Mass Storage System. To better meet the science and engineering computing requirements, EADS II must be monitored, its performance analyzed, and appropriate modifications for performance improvement made. Implementing this approach requires tool(s) to assist in performance monitoring and analysis. In Spring 1994, PerfStat 2.0 was purchased to meet these needs for the VPCS and the CFS. PerfStat (2) is a set of tools that can be used to analyze both historical and real-time performance data. Its flexible design allows significant user customization. The user identifies what data is collected, how it is classified, and how it is displayed for evaluation. Both graphical and tabular displays are supported. The capability of the PerfStat tool was evaluated, appropriate modifications to EADS II to optimize throughput and enhance productivity were suggested and implemented, and the effects of these modifications on the system's performance were observed. In this paper, the PerfStat tool is described; then its use with EADS II is outlined briefly. Next, the evaluation of the VPCS and the modifications made to the system are described. Finally, conclusions are drawn and recommendations for future work are outlined.
Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F
2011-09-01
Recent remarkable advances in computer performance have enabled us to estimate parameter values by the sheer power of numerical computation, so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who introduced the Gröbner basis. In this method, objective functions that combine symbolic computation techniques are formulated. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since this equivalent system is frequently composed of large equations, the system is further simplified by another symbolic computation. The performance of the authors' method for improving parameter accuracy is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed in terms of the possible power of 'Bruno force' for opening a new horizon in parameter estimation.
Experiences in autotuning matrix multiplication for energy minimization on GPUs
Anzt, Hartwig; Haugen, Blake; Kurzak, Jakub; ...
2015-05-20
In this study, we report extensive results and analysis of autotuning a computationally intensive graphics processing unit (GPU) kernel for dense matrix–matrix multiplication in double precision. In contrast to traditional autotuning and/or optimization for runtime performance only, we also take the energy efficiency into account. For kernels achieving equal performance, we show significant differences in their energy balance. We also identify the memory throughput as the most influential metric that trades off performance and energy efficiency. Finally, we find that the performance-optimal kernel ends up not being the most efficient one in overall resource use.
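The distinction between the performance-optimal and the energy-optimal kernel can be made concrete with a tiny sketch over hypothetical autotuning results (the configuration labels and numbers below are invented for illustration):

```python
# (configuration label, runtime in seconds, average power in watts)
results = [("cfg_A", 1.00, 250.0), ("cfg_B", 1.05, 210.0), ("cfg_C", 1.20, 205.0)]

fastest = min(results, key=lambda r: r[1])           # optimize runtime only
greenest = min(results, key=lambda r: r[1] * r[2])   # optimize energy = time * power

print("fastest kernel:       ", fastest[0], fastest[1] * fastest[2], "J")
print("most efficient kernel:", greenest[0], greenest[1] * greenest[2], "J")
```

Here the fastest configuration spends 250 J while a slightly slower one needs only about 220 J, mirroring the observation above that the runtime-optimal choice need not be the most resource-efficient one.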
Performance of concatenated Reed-Solomon/Viterbi channel coding
NASA Technical Reports Server (NTRS)
Divsalar, D.; Yuen, J. H.
1982-01-01
The concatenated Reed-Solomon (RS)/Viterbi coding system is reviewed. The performance of the system is analyzed and results are derived with a new simple approach. A functional model for the input RS symbol error probability is presented. Based on this new functional model, we compute the performance of a concatenated system in terms of RS word error probability, output RS symbol error probability, bit error probability due to decoding failure, and bit error probability due to decoding error. Finally we analyze the effects of the noisy carrier reference and the slow fading on the system performance.
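For reference, a standard textbook form of these quantities (not necessarily the exact expressions derived in the report) is, for an \((n,k)\) Reed-Solomon code correcting up to \(t=\lfloor (n-k)/2\rfloor\) symbol errors with independent input symbol error probability \(p_s\) supplied by the functional model:

\[
P_{\mathrm{word}} \le \sum_{i=t+1}^{n} \binom{n}{i}\, p_s^{\,i}\,(1-p_s)^{\,n-i},
\qquad
P_{\mathrm{symbol}}^{\mathrm{out}} \approx \frac{1}{n}\sum_{i=t+1}^{n} i\,\binom{n}{i}\, p_s^{\,i}\,(1-p_s)^{\,n-i},
\]

where, for example, the deep-space standard (255,223) code gives \(t=16\); a more refined count also adds up to \(t\) miscorrected symbols per word for decoder errors.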
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Baysal, Oktay
1997-01-01
A gradient-based shape optimization based on quasi-analytical sensitivities has been extended for practical three-dimensional aerodynamic applications. The flow analysis has been rendered by a fully implicit, finite-volume formulation of the Euler and Thin-Layer Navier-Stokes (TLNS) equations. Initially, the viscous laminar flow analysis for a wing has been compared with an independent computational fluid dynamics (CFD) code which has been extensively validated. The new procedure has been demonstrated in the design of a cranked arrow wing at Mach 2.4 with coarse- and fine-grid based computations performed with Euler and TLNS equations. The influence of the initial constraints on the geometry and aerodynamics of the optimized shape has been explored. Various final shapes generated for an identical initial problem formulation but with different optimization path options (coarse or fine grid, Euler or TLNS), have been aerodynamically evaluated via a common fine-grid TLNS-based analysis. The initial constraint conditions show significant bearing on the optimization results. Also, the results demonstrate that to produce an aerodynamically efficient design, it is imperative to include the viscous physics in the optimization procedure with the proper resolution. Based upon the present results, to better utilize the scarce computational resources, it is recommended that, a number of viscous coarse grid cases using either a preconditioned bi-conjugate gradient (PbCG) or an alternating-direction-implicit (ADI) method, should initially be employed to improve the optimization problem definition, the design space and initial shape. Optimized shapes should subsequently be analyzed using a high fidelity (viscous with fine-grid resolution) flow analysis to evaluate their true performance potential. Finally, a viscous fine-grid-based shape optimization should be conducted, using an ADI method, to accurately obtain the final optimized shape.
Final report for the Tera Computer TTI CRADA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, G.S.; Pavlakos, C.; Silva, C.
1997-01-01
Tera Computer and Sandia National Laboratories have completed a CRADA, which examined the Tera Multi-Threaded Architecture (MTA) for use with large codes of importance to industry and DOE. The MTA is an innovative architecture that uses parallelism to mask latency between memories and processors. The physical implementation is a parallel computer with high cross-section bandwidth and GaAs processors designed by Tera, which support many small computation threads and fast, lightweight context switches between them. When any thread blocks while waiting for memory accesses to complete, another thread immediately begins execution so that high CPU utilization is maintained. The Tera MTA parallel computer has a single, global address space, which is appealing when porting existing applications to a parallel computer. This ease of porting is further enabled by compiler technology that helps break computations into parallel threads. DOE and Sandia National Laboratories were interested in working with Tera to further develop this computing concept. While Tera Computer would continue the hardware development and compiler research, Sandia National Laboratories would work with Tera to ensure that their compilers worked well with important Sandia codes, most particularly CTH, a shock physics code used for weapon safety computations. In addition to that important code, Sandia National Laboratories would complete research on a robotic path planning code, SANDROS, which is important in manufacturing applications, and would evaluate the MTA performance on this code. Finally, Sandia would work directly with Tera to develop 3D visualization codes, which would be appropriate for use with the MTA. Each of these tasks has been completed to the extent possible, given that Tera has just completed the MTA hardware. All of the CRADA work had to be done on simulators.
Complex optimization for big computational and experimental neutron datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Feng; Oak Ridge National Lab.; Archibald, Richard
Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.
Automated error correction in IBM quantum computer and explicit generalization
NASA Astrophysics Data System (ADS)
Ghosh, Debjit; Agarwal, Pratik; Pandey, Pratyush; Behera, Bikash K.; Panigrahi, Prasanta K.
2018-06-01
Construction of a fault-tolerant quantum computer remains a challenging problem due to unavoidable noise and fragile quantum states. However, this goal can be achieved by introducing quantum error-correcting codes. Here, we experimentally realize an automated error correction code and demonstrate the nondestructive discrimination of GHZ states in IBM 5-qubit quantum computer. After performing quantum state tomography, we obtain the experimental results with a high fidelity. Finally, we generalize the investigated code for maximally entangled n-qudit case, which could both detect and automatically correct any arbitrary phase-change error, or any phase-flip error, or any bit-flip error, or combined error of all types of error.
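For readers unfamiliar with the underlying idea, the following NumPy sketch simulates the standard three-qubit bit-flip code in state-vector form: a stabilizer syndrome identifies which qubit was flipped and the error is undone. This is a generic textbook illustration, not the IBM-hardware protocol or the n-qudit generalization of the paper.

```python
# Sketch of the three-qubit bit-flip code: encode a logical qubit as a|000> + b|111>,
# inject an X error on one (unknown) qubit, read the Z1Z2 and Z2Z3 stabilizer
# syndromes, and apply the corrective X. Plain NumPy state vectors only.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])
def kron3(a, b, c): return np.kron(a, np.kron(b, c))

a, b = 0.6, 0.8                              # logical amplitudes (|a|^2 + |b|^2 = 1)
psi = np.zeros(8)
psi[0b000], psi[0b111] = a, b                # encoded state a|000> + b|111>

error_qubit = np.random.randint(3)           # unknown single bit-flip error
Xs = [kron3(X, I2, I2), kron3(I2, X, I2), kron3(I2, I2, X)]
psi = Xs[error_qubit] @ psi

S1 = kron3(Z, Z, I2)                         # stabilizer Z1Z2
S2 = kron3(I2, Z, Z)                         # stabilizer Z2Z3
s1 = int(round(psi @ S1 @ psi))              # +/-1 eigenvalues (deterministic here)
s2 = int(round(psi @ S2 @ psi))

syndrome_to_qubit = {(-1, +1): 0, (-1, -1): 1, (+1, -1): 2}   # which qubit flipped
if (s1, s2) in syndrome_to_qubit:
    psi = Xs[syndrome_to_qubit[(s1, s2)]] @ psi               # apply correction

print("injected error on qubit", error_qubit, "- syndrome", (s1, s2))
print("recovered encoded state?", np.isclose(psi[0b000], a) and np.isclose(psi[0b111], b))
```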
SRM Internal Flow Test and Computational Fluid Dynamic Analysis. Volume 1; Major Task Summaries
NASA Technical Reports Server (NTRS)
Whitesides, R. Harold; Dill, Richard A.; Purinton, David C.
1995-01-01
During the four-year period of performance for NASA contract NAS8-39095, ERC has performed a wide variety of tasks to support the design and continued development of new and existing solid rocket motors and the resolution of operational problems associated with existing solid rocket motors at NASA MSFC. This report summarizes the support provided to NASA MSFC during the contractual period of performance. The report is divided into three main sections. The first section presents summaries for the major tasks performed. These tasks are grouped into three major categories: full scale motor analysis, subscale motor analysis and cold flow analysis. The second section includes summaries describing the computational fluid dynamics (CFD) tasks performed. The third section, the appendices of the report, presents detailed descriptions of the analysis efforts as well as published papers, memoranda and final reports associated with specific tasks. These appendices are referenced in the summaries. The subsection numbers for the three sections correspond to the same topics for direct cross referencing.
The Secret Life of Quarks, Final Report for the University of North Carolina at Chapel Hill
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fowler, Robert J.
This final report summarizes activities and results at the University of North Carolina as part of the SciDAC-2 Project The Secret Life of Quarks: National Computational Infrastructure for Lattice Quantum Chromodynamics. The overall objective of the project is to construct the software needed to study quantum chromodynamics (QCD), the theory of the strong interactions of subatomic physics, and similar strongly coupled gauge theories anticipated to be of importance in the LHC era. It built upon the successful efforts of the SciDAC-1 project National Computational Infrastructure for Lattice Gauge Theory, in which a QCD Applications Programming Interface (QCD API) was developed that enables lattice gauge theorists to make effective use of a wide variety of massively parallel computers. In the SciDAC-2 project, optimized versions of the QCD API were being created for the IBM BlueGene/L (BG/L) and BlueGene/P (BG/P), the Cray XT3/XT4 and its successors, and clusters based on multi-core processors and Infiniband communications networks. The QCD API is being used to enhance the performance of the major QCD community codes and to create new applications. Software libraries of physics tools have been expanded to contain sharable building blocks for inclusion in application codes, performance analysis and visualization tools, and software for automation of physics work flow. New software tools were designed for managing the large data sets generated in lattice QCD simulations, and for sharing them through the International Lattice Data Grid consortium. As part of the overall project, researchers at UNC were funded through ASCR to work in three general areas. The main thrust has been performance instrumentation and analysis in support of the SciDAC QCD code base as it evolved and as it moved to new computation platforms. In support of the performance activities, performance data was to be collected in a database for the purpose of broader analysis. Third, the UNC work was done at RENCI (Renaissance Computing Institute), which has extensive expertise and facilities for scientific data visualization, so we acted in an ongoing consulting and support role in that area.
NASA Astrophysics Data System (ADS)
Ford, Gregory Scott
2007-12-01
Title. Effect of computer-aided instruction versus traditional modes on student PTs' learning of musculoskeletal special tests. Problem. Lack of quantitative evidence to support the use of computer-aided instruction (CAI) in PT education for both the cognitive and psychomotor domains, and lack of qualitative support for understanding why CAI may or may not be effective. Design. Three-group, single-blind pre-test, immediate post-test, final post-test repeated-measures design with a qualitative survey for the CAI group. Methods. Subjects were randomly assigned to CAI, live demonstration or textbook learning groups. Three novel special tests were instructed. Analysis of performance on written and practical examinations was conducted across the 3 repeated measures. A qualitative survey was completed by the CAI group post intervention. Results. CAI was equally as effective as live demonstration and textbook learning of musculoskeletal special tests in the cognitive domain; however, CAI was superior to live demonstration and textbook instruction at final post-testing. Significance. The significance of this research is that a gap in the literature of PT education needs to be bridged as it pertains to the effect of CAI on learning in both the cognitive and psychomotor domains, as well as to attempt to understand why CAI results in certain student performance. The methods of this study allowed for a wide range of generalizability to any and all PT programs across the country.
ERIC Educational Resources Information Center
Mooney, Sharon Lopez
The West Marin Literacy Project, a project of the Marin County Free Library (San Rafael, California), involved recruitment, retention, coalition building, public awareness, training, rural oriented, tutoring, computer- assisted, intergenerational/family, and English as a Second Language (ESL) programs. The project served a community of under…
Collection Development Analysis Using OCLC Archival Tapes. Final Report.
ERIC Educational Resources Information Center
Evans, Glyn T.; And Others
The purpose of this project is to develop a set of computer programs to perform a variety of collection development analyses on the machine-readable cataloging (MARC) records that are produced as a byproduct of use of the online cataloging subsystem of the Ohio College Library System (OCLC), and made available through the OCLC Distribution Tape…
Application of Simulation to Individualized Self-Paced Training. Final Report. TAEG Report No. 11-2.
ERIC Educational Resources Information Center
Lindahl, William H.; Gardner, James H.
Computer simulation is recognized as a valuable systems analysis research tool which enables the detailed examination, evaluation, and manipulation, under stated conditions, of a system without direct action on the system. This technique provides management with quantitative data on system performance and capabilities which can be used to compare…
ERIC Educational Resources Information Center
Hess, Therese M.
The Martinsburg-Berkeley County Public Library (West Virginia) conducted a project that involved recruitment, retention, coalition building, public awareness, training, basic literacy, collection development, tutoring, computer assisted, other technology, and English as a Second Language (ESL) programs. The project served a three-county community…
Computational Performance of Group IV Personnel in Vocational Training Programs. Final Report.
ERIC Educational Resources Information Center
Main, Ray E.; Harrigan, Robert J.
The document evaluates Navy Group Four personnel gains in basic arithmetic skills after taking experimental courses in linear measurement and recipe conversion. Categorized as Mental Group Four by receiving scores from the 10th to the 30th percentile of the Armed Forces Qualification Test, trainees received instruction tailored to the level of…
Computer-Based Feedback in Linear Algebra: Effects on Transfer Performance and Motivation
ERIC Educational Resources Information Center
Corbalan, Gemma; Paas, Fred; Cuypers, Hans
2010-01-01
Two studies investigated the effects on students' perceptions (Study 1) and learning and motivation (Study 2) of different levels of feedback in mathematical problems. In these problems, an error made in one step of the problem-solving procedure will carry over to the following steps and consequently to the final solution. Providing immediate…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Yunshan; DeVore, Peter T. S.; Jalali, Bahram
Optical computing accelerators help alleviate bandwidth and power consumption bottlenecks in electronics. In this paper, we show an approach to implementing logarithmic-type analog co-processors in silicon photonics and use it to perform the exponentiation operation and the recovery of a signal in the presence of multiplicative distortion. Finally, the function is realized by exploiting nonlinear-absorption-enhanced Raman amplification saturation in a silicon waveguide.
Next Generation Distributed Computing for Cancer Research
Agarwal, Pankaj; Owzar, Kouros
2014-01-01
Advances in next generation sequencing (NGS) and mass spectrometry (MS) technologies have provided many new opportunities and angles for extending the scope of translational cancer research while creating tremendous challenges in data management and analysis. The resulting informatics challenge is invariably not amenable to the use of traditional computing models. Recent advances in scalable computing and associated infrastructure, particularly distributed computing for Big Data, can provide solutions for addressing these challenges. In this review, the next generation of distributed computing technologies that can address these informatics problems is described from the perspective of three key components of a computational platform, namely computing, data storage and management, and networking. A broad overview of scalable computing is provided to set the context for a detailed description of Hadoop, a technology that is being rapidly adopted for large-scale distributed computing. A proof-of-concept Hadoop cluster, set up for performance benchmarking of NGS read alignment, is described as an example of how to work with Hadoop. Finally, Hadoop is compared with a number of other current technologies for distributed computing. PMID:25983539
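The sketch below illustrates, in plain Python, the map/shuffle/reduce pattern that Hadoop implements, applied to a toy count of aligned reads per chromosome; the records and keys are invented and no Hadoop API is used.

```python
# Minimal sketch of the map/shuffle/reduce pattern underlying Hadoop, applied to a
# toy count of aligned NGS reads per chromosome. The alignment records are invented.
from collections import defaultdict

records = [("read1", "chr1"), ("read2", "chr2"), ("read3", "chr1"), ("read4", "chrX")]

def map_phase(record):                 # emit one (key, 1) pair per aligned read
    read_id, chrom = record
    yield chrom, 1

def reduce_phase(key, values):         # sum the counts for each key
    return key, sum(values)

shuffled = defaultdict(list)           # shuffle: group intermediate pairs by key
for rec in records:
    for k, v in map_phase(rec):
        shuffled[k].append(v)

print(dict(reduce_phase(k, vs) for k, vs in shuffled.items()))
# {'chr1': 2, 'chr2': 1, 'chrX': 1}
```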
Interactive computer simulations of knee-replacement surgery.
Gunther, Stephen B; Soto, Gabriel E; Colman, William W
2002-07-01
Current surgical training programs in the United States are based on an apprenticeship model. This model is outdated because it does not provide conceptual scaffolding, promote collaborative learning, or offer constructive reinforcement. Our objective was to create a more useful approach by preparing students and residents for operative cases using interactive computer simulations of surgery. Total-knee-replacement surgery (TKR) is an ideal procedure to model on the computer because there is a systematic protocol for the procedure. Also, this protocol is difficult to learn by the apprenticeship model because of the multiple instruments that must be used in a specific order. We designed an interactive computer tutorial to teach medical students and residents how to perform knee-replacement surgery. We also aimed to reinforce the specific protocol of the operative procedure. Our final goal was to provide immediate, constructive feedback. We created a computer tutorial by generating three-dimensional wire-frame models of the surgical instruments. Next, we applied a surface to the wire-frame models using three-dimensional modeling. Finally, the three-dimensional models were animated to simulate the motions of an actual TKR. The tutorial is a step-by-step tutorial that teaches and tests the correct sequence of steps in a TKR. The student or resident must select the correct instruments in the correct order. The learner is encouraged to learn the stepwise surgical protocol through repetitive use of the computer simulation. Constructive feedback is acquired through a grading system, which rates the student's or resident's ability to perform the task in the correct order. The grading system also accounts for the time required to perform the simulated procedure. We evaluated the efficacy of this teaching technique by testing medical students who learned by the computer simulation and those who learned by reading the surgical protocol manual. Both groups then performed TKR on manufactured bone models using real instruments. Their technique was graded with the standard protocol. The students who learned on the computer simulation performed the task in a shorter time and with fewer errors than the control group. They were also more engaged in the learning process. Surgical training programs generally lack a consistent approach to preoperative education related to surgical procedures. This interactive computer tutorial has allowed us to make a quantum leap in medical student and resident teaching in our orthopedic department because the students actually participate in the entire process. Our technique provides a linear, sequential method of skill acquisition and direct feedback, which is ideally suited for learning stepwise surgical protocols. Since our initial evaluation has shown the efficacy of this program, we have implemented this teaching tool into our orthopedic curriculum. Our plans for future work with this simulator include modeling procedures involving other anatomic areas of interest, such as the hip and shoulder.
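A hedged sketch of the kind of grading logic described (sequence correctness plus a time penalty) is shown below; the step names, weights, and time budget are illustrative and are not taken from the actual tutorial.

```python
# Illustrative grading logic: score a trainee's instrument sequence against a
# reference stepwise protocol and penalize excess elapsed time. Step names,
# weights, and the time budget are invented for this example.
REFERENCE_STEPS = ["expose joint", "distal femoral cut", "tibial cut",
                   "size femur", "trial components", "cement implants"]

def grade_attempt(chosen_steps, elapsed_seconds, time_budget=600):
    correct_in_order = sum(1 for ref, got in zip(REFERENCE_STEPS, chosen_steps) if ref == got)
    sequence_score = 100.0 * correct_in_order / len(REFERENCE_STEPS)
    time_penalty = max(0.0, elapsed_seconds - time_budget) * 0.05   # points lost per extra second
    return max(0.0, sequence_score - time_penalty)

attempt = ["expose joint", "tibial cut", "distal femoral cut",
           "size femur", "trial components", "cement implants"]
print(grade_attempt(attempt, elapsed_seconds=660))   # two steps swapped, 60 s over budget
```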
A space radiation transport method development
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.
2004-01-01
Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we will discuss the progress towards a full three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest-order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes using standard finite element method (FEM) geometry common to engineering design practice enabling development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 ms and severely limits application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of re-configurable computing and could be utilized in the final design as verification of the deterministic method optimized design. Published by Elsevier Ltd on behalf of COSPAR.
Real-time dynamics and control strategies for space operations of flexible structures
NASA Technical Reports Server (NTRS)
Park, K. C.; Alvin, K. F.; Alexander, S.
1993-01-01
This project (NAG9-574) was meant to be a three-year research project. However, due to NASA's reorganizations during 1992, the project was funded only for one year. Accordingly, every effort was made to make the present final report as if the project was meant to be for one-year duration. Originally, during the first year we were planning to accomplish the following: we were to start with a three dimensional flexible manipulator beam with articulated joints and with a linear control-based controller applied at the joints; using this simple example, we were to design the software systems requirements for real-time processing, introduce the streamlining of various computational algorithms, perform the necessary reorganization of the partitioned simulation procedures, and assess the potential speed-up realization of the solution process by parallel computations. The three reports included as part of the final report address: the streamlining of various computational algorithms; the necessary reorganization of the partitioned simulation procedures, in particular the observer models; and an initial attempt of reconfiguring the flexible space structures.
Computation of multi-dimensional viscous supersonic jet flow
NASA Technical Reports Server (NTRS)
Kim, Y. N.; Buggeln, R. C.; Mcdonald, H.
1986-01-01
A new method has been developed for two- and three-dimensional computations of viscous supersonic flows with embedded subsonic regions adjacent to solid boundaries. The approach employs a reduced form of the Navier-Stokes equations which allows solution as an initial-boundary value problem in space, using an efficient noniterative forward marching algorithm. Numerical instability associated with forward marching algorithms for flows with embedded subsonic regions is avoided by approximation of the reduced form of the Navier-Stokes equations in the subsonic regions of the boundary layers. Supersonic and subsonic portions of the flow field are simultaneously calculated by a consistently split linearized block implicit computational algorithm. The results of computations for a series of test cases relevant to internal supersonic flow are presented and compared with data. Comparisons between data and computation are in general excellent, thus indicating that the computational technique has great promise as a tool for calculating supersonic flow with embedded subsonic regions. Finally, a User's Manual is presented for the computer code used to perform the calculations.
Multi-chain Markov chain Monte Carlo methods for computationally expensive models
NASA Astrophysics Data System (ADS)
Huang, M.; Ray, J.; Ren, H.; Hou, Z.; Bao, J.
2017-12-01
Markov chain Monte Carlo (MCMC) methods are used to infer model parameters from observational data. The parameters are inferred as probability densities, thus capturing estimation error due to sparsity of the data, and the shortcomings of the model. Multiple communicating chains executing the MCMC method have the potential to explore the parameter space better, and conceivably accelerate the convergence to the final distribution. We present results from tests conducted with the multi-chain method to show how the acceleration occurs, i.e., for loose convergence tolerances, the multiple chains do not make much of a difference. The ensemble of chains also seems to have the ability to accelerate the convergence of a few chains that might start from suboptimal starting points. Finally, we show the performance of the chains in the estimation of O(10) parameters using computationally expensive forward models such as the Community Land Model, where the sampling burden is distributed over multiple chains.
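As a minimal illustration of the multi-chain idea, the sketch below runs several independent random-walk Metropolis chains on a toy one-dimensional target and evaluates an R-hat style convergence diagnostic; the target, chain count, and tolerances are placeholders rather than the Community Land Model setup.

```python
# Minimal multi-chain random-walk Metropolis sketch with a Gelman-Rubin style
# convergence check. The target is a toy 1-D Gaussian, not a land-surface model.
import numpy as np

def log_post(theta):                     # toy log-posterior: standard normal
    return -0.5 * theta ** 2

def run_chain(theta0, n_steps, step=0.8, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    samples, theta, lp = [], theta0, log_post(theta0)
    for _ in range(n_steps):
        prop = theta + step * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta)
    return np.array(samples)

rng = np.random.default_rng(1)
chains = [run_chain(theta0, 5000, rng=rng) for theta0 in (-5.0, 0.0, 5.0, 10.0)]

def gelman_rubin(chains):                # potential scale reduction factor, R-hat
    m, n = len(chains), len(chains[0])
    means = np.array([c.mean() for c in chains])
    W = np.mean([c.var(ddof=1) for c in chains])     # within-chain variance
    B = n * means.var(ddof=1)                        # between-chain variance
    var_hat = (n - 1) / n * W + B / n
    return np.sqrt(var_hat / W)

print("R-hat:", gelman_rubin([c[1000:] for c in chains]))   # values near 1 indicate convergence
```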
Abbey, Craig K.; Zemp, Roger J.; Liu, Jie; Lindfors, Karen K.; Insana, Michael F.
2009-01-01
We investigate and extend the ideal observer methodology developed by Smith and Wagner to detection and discrimination tasks related to breast sonography. We provide a numerical approach for evaluating the ideal observer acting on radio-frequency (RF) frame data, which involves inversion of large nonstationary covariance matrices, and we describe a power-series approach to computing this inverse. Considering a truncated power series suggests that the RF data be Wiener-filtered before forming the final envelope image. We have compared human performance for Wiener-filtered and conventional B-mode envelope images using psychophysical studies for five tasks related to breast cancer classification. We find significant improvements in visual detection and discrimination efficiency in four of these five tasks. We also use the Smith-Wagner approach to distinguish between human and processing inefficiencies, and find that generally the principal limitation comes from the information lost in computing the final envelope image. PMID:16468454
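The following NumPy sketch illustrates the general idea of Wiener-filtering an RF line before envelope detection, under the assumption of a known signal spectrum and white noise; the synthetic pulse and noise level are invented and none of the paper's ideal-observer covariance machinery is reproduced.

```python
# Illustrative sketch of Wiener-filtering an RF A-line before envelope detection.
# The pulse, noise level, and spectra are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
n = 1024
t = np.arange(n)
pulse = np.exp(-0.5 * ((t - 512) / 20.0) ** 2) * np.cos(0.4 * (t - 512))  # toy RF echo
rf = pulse + 0.3 * rng.normal(size=n)                                     # noisy RF data

S = np.abs(np.fft.fft(pulse)) ** 2          # (assumed known) signal power spectrum
N = (0.3 ** 2) * n                          # white-noise power spectrum under NumPy's FFT convention
H = S / (S + N)                             # Wiener filter transfer function
rf_filtered = np.real(np.fft.ifft(H * np.fft.fft(rf)))

def envelope(x):                            # envelope via the analytic signal
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1
    h[1:len(x) // 2] = 2
    h[len(x) // 2] = 1
    return np.abs(np.fft.ifft(X * h))

print("peak envelope, raw vs Wiener-filtered:",
      envelope(rf).max(), envelope(rf_filtered).max())
```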
Computer-Supported Feedback Message Tailoring for Healthcare Providers in Malawi: Proof-of-Concept.
Landis-Lewis, Zach; Douglas, Gerald P; Hochheiser, Harry; Kam, Matthew; Gadabu, Oliver; Bwanali, Mwatha; Jacobson, Rebecca S
2015-01-01
Although performance feedback has the potential to help clinicians improve the quality and safety of care, healthcare organizations generally lack knowledge about how this guidance is best provided. In low-resource settings, tools for theory-informed feedback tailoring may enhance limited clinical supervision resources. Our objectives were to establish proof-of-concept for computer-supported feedback message tailoring in Malawi, Africa. We conducted this research in five stages: clinical performance measurement, modeling the influence of feedback on antiretroviral therapy (ART) performance, creating a rule-based message tailoring process, generating tailored messages for recipients, and finally analysis of performance and message tailoring data. We retrospectively generated tailored messages for 7,448 monthly performance reports from 11 ART clinics. We found that tailored feedback could be routinely generated for four guideline-based performance indicators, with 35% of reports having messages prioritized to optimize the effect of feedback. This research establishes proof-of-concept for a novel approach to improving the use of clinical performance feedback in low-resource settings and suggests possible directions for prospective evaluations comparing alternative designs of feedback messages.
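A minimal sketch of rule-based feedback message tailoring is shown below: each rule maps monthly indicator values to a candidate message with a priority, and the highest-priority applicable message is returned. The indicator names, thresholds, and message text are invented for illustration.

```python
# Hedged sketch of rule-based feedback message tailoring. Indicator names,
# thresholds, and message wording are illustrative, not the Malawi rule base.
RULES = [
    # (priority, predicate, message template) - priorities must be unique
    (1, lambda r: r["pct_on_time_visits"] < 0.70,
        "Only {pct_on_time_visits:.0%} of ART visits were on time this month; consider reviewing appointment reminders."),
    (2, lambda r: r["pct_missing_weight"] > 0.10,
        "Weight was missing for {pct_missing_weight:.0%} of patients; please record weight at every visit."),
    (3, lambda r: True,
        "Good work - all monitored ART indicators met their targets this month."),
]

def tailor_feedback(report):
    for priority, applies, template in sorted(RULES, key=lambda rule: rule[0]):
        if applies(report):
            return template.format(**report)

monthly_report = {"pct_on_time_visits": 0.64, "pct_missing_weight": 0.08}
print(tailor_feedback(monthly_report))
```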
Stocco, Andrea; Yamasaki, Brianna L; Prat, Chantel S
2018-04-01
This article describes the data analyzed in the paper "Individual differences in the Simon effect are underpinned by differences in the competitive dynamics in the basal ganglia: An experimental verification and a computational model" (Stocco et al., 2017) [1]. The data includes behavioral results from participants performing three cognitive tasks (Probabilistic Stimulus Selection (Frank et al., 2004) [2], Simon task (Craft and Simon, 1970) [3], and Automated Operation Span (Unsworth et al., 2005) [4]), as well as simulated traces generated by a computational neurocognitive model that accounts for individual variations in human performance across the tasks. The experimental data encompasses individual data files (in both preprocessed and native output format) as well as group-level summary files. The simulation data includes the entire model code, the results of a full-grid search of the model's parameter space, and the code used to partition the model space and parallelize the simulations. Finally, the repository includes the R scripts used to carry out the statistical analyses reported in the original paper.
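As an illustration of partitioning a parameter grid and parallelizing model runs, the sketch below evaluates a placeholder model over a small two-dimensional grid with a process pool; it is not the published model or its actual partitioning code.

```python
# Illustrative sketch of a full-grid parameter search distributed over worker
# processes. The model function and parameter ranges are placeholders.
from itertools import product
from multiprocessing import Pool

def run_model(params):                     # placeholder for one model simulation
    alpha, beta = params
    return params, (alpha - 0.3) ** 2 + (beta - 0.7) ** 2   # toy fit score (lower is better)

if __name__ == "__main__":
    grid = list(product([0.1 * i for i in range(11)], repeat=2))   # full 2-D grid
    with Pool(processes=4) as pool:                                # partition work across workers
        results = pool.map(run_model, grid)
    best_params, best_score = min(results, key=lambda r: r[1])
    print("best parameters:", best_params, "score:", round(best_score, 3))
```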
Implementation and analysis of a Navier-Stokes algorithm on parallel computers
NASA Technical Reports Server (NTRS)
Fatoohi, Raad A.; Grosch, Chester E.
1988-01-01
The results of the implementation of a Navier-Stokes algorithm on three parallel/vector computers are presented. The object of this research is to determine how well, or poorly, a single numerical algorithm would map onto three different architectures. The algorithm is a compact difference scheme for the solution of the incompressible, two-dimensional, time-dependent Navier-Stokes equations. The computers were chosen so as to encompass a variety of architectures. They are the following: the MPP, an SIMD machine with 16K bit serial processors; Flex/32, an MIMD machine with 20 processors; and Cray/2. The implementation of the algorithm is discussed in relation to these architectures and measures of the performance on each machine are given. The basic comparison is among SIMD instruction parallelism on the MPP, MIMD process parallelism on the Flex/32, and vectorization of a serial code on the Cray/2. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally, conclusions are presented.
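A simple performance model of the kind referred to can be written down directly; the sketch below evaluates an Amdahl's-law speedup estimate for a few assumed serial fractions and processor counts (the fractions are illustrative, not measured from the algorithm).

```python
# Simple performance-model sketch: Amdahl's-law speedup for an assumed serial
# fraction, evaluated at a few processor counts. The fractions are illustrative.
def amdahl_speedup(serial_fraction, processors):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

for f in (0.01, 0.05, 0.20):                     # assumed serial fractions
    speedups = [round(amdahl_speedup(f, p), 1) for p in (16, 20, 16384)]
    print(f"serial fraction {f:.0%}: speedup on 16, 20, 16384 procs =", speedups)
```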
NASA Astrophysics Data System (ADS)
Sánchez-Martínez, V.; Borges, G.; Borrego, C.; del Peso, J.; Delfino, M.; Gomes, J.; González de la Hoz, S.; Pacheco Pages, A.; Salt, J.; Sedov, A.; Villaplana, M.; Wolters, H.
2014-06-01
In this contribution we describe the performance of the Iberian (Spain and Portugal) ATLAS cloud during the first LHC running period (March 2010-January 2013) in the context of the GRID Computing and Data Distribution Model. The evolution of the resources for CPU, disk and tape in the Iberian Tier-1 and Tier-2s is summarized. The data distribution over all ATLAS destinations is shown, focusing on the number of files transferred and the size of the data. The status and distribution of simulation and analysis jobs within the cloud are discussed. The Distributed Analysis tools used to perform physics analysis are explained as well. Cloud performance in terms of the availability and reliability of its sites is discussed. The effect of the changes in the ATLAS Computing Model on the cloud is analyzed. Finally, the readiness of the Iberian Cloud towards the first Long Shutdown (LS1) is evaluated and an outline of the foreseen actions to take in the coming years is given. The shutdown will be a good opportunity to improve and evolve the ATLAS Distributed Computing system to prepare for the future challenges of the LHC operation.
NASA Technical Reports Server (NTRS)
Kavi, K. M.
1984-01-01
There have been a number of simulation packages developed for the purpose of designing, testing and validating computer systems, digital systems and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research, data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment which enables highly parallel complex systems to be defined, evaluated at all levels and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we will describe how data flow can be used to model computer systems.
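To make the firing semantics concrete, the toy sketch below executes a small data flow graph in which a node fires once all of its input tokens are available; the graph computing (a+b)*(a-b) is a made-up example.

```python
# Tiny illustration of data flow execution: a node "fires" as soon as all of its
# input tokens are available, so independent nodes could run in parallel.
graph = {
    "add": {"inputs": ["a", "b"], "op": lambda a, b: a + b},
    "sub": {"inputs": ["a", "b"], "op": lambda a, b: a - b},
    "mul": {"inputs": ["add", "sub"], "op": lambda x, y: x * y},
}

tokens = {"a": 7, "b": 3}                       # initial input tokens
fired = True
while fired:                                    # keep firing until no node is ready
    fired = False
    for node, spec in graph.items():
        ready = node not in tokens and all(i in tokens for i in spec["inputs"])
        if ready:
            tokens[node] = spec["op"](*[tokens[i] for i in spec["inputs"]])
            print("fired", node, "->", tokens[node])
            fired = True

print("result:", tokens["mul"])                 # (7+3)*(7-3) = 40
```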
Computation of Asteroid Proper Elements: Recent Advances
NASA Astrophysics Data System (ADS)
Knežević, Z.
2017-12-01
The recent advances in computation of asteroid proper elements are briefly reviewed. Although not representing real breakthroughs in computation and stability assessment of proper elements, these advances can still be considered as important improvements offering solutions to some practical problems encountered in the past. The problem of getting unrealistic values of perihelion frequency for very low eccentricity orbits is solved by computing frequencies using the frequency-modified Fourier transform. The synthetic resonant proper elements adjusted to a given secular resonance helped to prove the existence of Astraea asteroid family. The preliminary assessment of stability with time of proper elements computed by means of the analytical theory provides a good indication of their poorer performance with respect to their synthetic counterparts, and advocates in favor of ceasing their regular maintenance; the final decision should, however, be taken on the basis of more comprehensive and reliable direct estimate of their individual and sample average deviations from constancy.
Electrolytic hydrogen production: An analysis and review
NASA Technical Reports Server (NTRS)
Evangelista, J.; Phillips, B.; Gordon, L.
1975-01-01
The thermodynamics of water electrolysis cells is presented, followed by a review of current and future technology of commercial cells. The irreversibilities involved are analyzed and the resulting equations assembled into a computer simulation model of electrolysis cell efficiency. The model is tested by comparing predictions based on the model to actual commercial cell performance, and a parametric investigation of operating conditions is performed. Finally, the simulation model is applied to a study of electrolysis cell dynamics through consideration of an ideal pulsed electrolyzer.
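A back-of-the-envelope version of the efficiency calculation can be illustrated as follows, using the commonly quoted reversible (about 1.23 V) and thermoneutral (about 1.48 V) potentials for water splitting; the operating voltages and the HHV-based definition are assumptions for the example, not the report's full irreversibility model.

```python
# Back-of-the-envelope sketch of electrolysis cell voltage efficiency, using the
# commonly quoted reversible and thermoneutral potentials near room temperature.
# Operating voltages and the HHV-based definition are illustrative assumptions.
E_REVERSIBLE = 1.23      # V, minimum electrical work per electron pair
E_THERMONEUTRAL = 1.48   # V, includes the heat demand of the reaction (HHV basis)

def voltage_efficiency_hhv(cell_voltage):
    """Fraction of input electrical energy recovered as hydrogen heating value."""
    return E_THERMONEUTRAL / cell_voltage

for v_cell in (1.6, 1.8, 2.0):          # typical range of operating cell voltages
    print(f"cell voltage {v_cell:.1f} V -> ~{voltage_efficiency_hhv(v_cell):.0%} HHV efficiency")
```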
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brinkman, Kyle; Bordia, Rajendra; Reifsnider, Kenneth
This project fabricated model multiphase ceramic waste forms with processing-controlled microstructures followed by advanced characterization with synchrotron and electron microscopy-based 3D tomography to provide elemental and chemical state-specific information resulting in compositional phase maps of ceramic composites. Details of 3D microstructural features were incorporated into computer-based simulations using durability data for individual constituent phases as inputs in order to predict the performance of multiphase waste forms with varying microstructure and phase connectivity.
Radio Synthesis Imaging - A High Performance Computing and Communications Project
NASA Astrophysics Data System (ADS)
Crutcher, Richard M.
The National Science Foundation has funded a five-year High Performance Computing and Communications project at the National Center for Supercomputing Applications (NCSA) for the direct implementation of several of the computing recommendations of the Astronomy and Astrophysics Survey Committee (the "Bahcall report"). This paper is a summary of the project goals and a progress report. The project will implement a prototype of the next generation of astronomical telescope systems - remotely located telescopes connected by high-speed networks to very high performance, scalable architecture computers and on-line data archives, which are accessed by astronomers over Gbit/sec networks. Specifically, a data link has been installed between the BIMA millimeter-wave synthesis array at Hat Creek, California and NCSA at Urbana, Illinois for real-time transmission of data to NCSA. Data are automatically archived, and may be browsed and retrieved by astronomers using the NCSA Mosaic software. In addition, an on-line digital library of processed images will be established. BIMA data will be processed on a very high performance distributed computing system, with I/O, user interface, and most of the software system running on the NCSA Convex C3880 supercomputer or Silicon Graphics Onyx workstations connected by HiPPI to the high performance, massively parallel Thinking Machines Corporation CM-5. The very computationally intensive algorithms for calibration and imaging of radio synthesis array observations will be optimized for the CM-5 and new algorithms which utilize the massively parallel architecture will be developed. Code running simultaneously on the distributed computers will communicate using the Data Transport Mechanism developed by NCSA. The project will also use the BLANCA Gbit/s testbed network between Urbana and Madison, Wisconsin to connect an Onyx workstation in the University of Wisconsin Astronomy Department to the NCSA CM-5, for development of long-distance distributed computing. Finally, the project is developing 2D and 3D visualization software as part of the international AIPS++ project. This research and development project is being carried out by a team of experts in radio astronomy, algorithm development for massively parallel architectures, high-speed networking, database management, and Thinking Machines Corporation personnel. The development of this complete software, distributed computing, and data archive and library solution to the radio astronomy computing problem will advance our expertise in high performance computing and communications technology and the application of these techniques to astronomical data processing.
Final report for “Extreme-scale Algorithms and Solver Resilience”
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gropp, William Douglas
2017-06-30
This is a joint project with principal investigators at Oak Ridge National Laboratory, Sandia National Laboratories, the University of California at Berkeley, and the University of Tennessee. Our part of the project involves developing performance models for highly scalable algorithms and the development of latency tolerant iterative methods. During this project, we extended our performance models for the Multigrid method for solving large systems of linear equations and conducted experiments with highly scalable variants of conjugate gradient methods that avoid blocking synchronization. In addition, we worked with the other members of the project on alternative techniques for resilience and reproducibility. We also presented an alternative approach for reproducible dot-products in parallel computations that performs almost as well as the conventional approach by separating the order of computation from the details of the decomposition of vectors across the processes.
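The reproducibility idea can be sketched as follows: partial sums are formed over canonical fixed-size chunks and reduced in chunk order, so the result does not depend on how the vectors happen to be decomposed across processes. This is an illustrative single-process simulation, not the project's actual algorithm.

```python
# Sketch of a reproducible dot product: the order of computation is fixed by
# canonical chunks, independent of how many "processes" the chunks are assigned to.
import numpy as np

rng = np.random.default_rng(3)
x, y = rng.normal(size=10_000), rng.normal(size=10_000)

def reproducible_dot(x, y, n_procs, chunk=256):
    chunk_sums = {}
    for c0 in range(0, x.size, chunk):
        _owner = (c0 // chunk) % n_procs          # owning "process"; does not affect the result
        chunk_sums[c0] = float(np.dot(x[c0:c0 + chunk], y[c0:c0 + chunk]))
    total = 0.0
    for c0 in sorted(chunk_sums):                 # reduce in canonical chunk order
        total += chunk_sums[c0]
    return total

# Identical bits regardless of the simulated process count:
print(reproducible_dot(x, y, 4) == reproducible_dot(x, y, 13))
```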
Computational logic with square rings of nanomagnets
NASA Astrophysics Data System (ADS)
Arava, Hanu; Derlet, Peter M.; Vijayakumar, Jaianth; Cui, Jizhai; Bingham, Nicholas S.; Kleibert, Armin; Heyderman, Laura J.
2018-06-01
Nanomagnets are a promising low-power alternative to traditional computing. However, the successful implementation of nanomagnets in logic gates has been hindered so far by a lack of reliability. Here, we present a novel design with dipolar-coupled nanomagnets arranged on a square lattice to (i) support transfer of information and (ii) perform logic operations. We introduce a thermal protocol, using thermally active nanomagnets as a means to perform computation. Within this scheme, the nanomagnets are initialized by a global magnetic field and thermally relax on raising the temperature with a resistive heater. We demonstrate error-free transfer of information in chains of up to 19 square rings and we show a high level of reliability with successful gate operations of ∼94% across more than 2000 logic gates. Finally, we present a functionally complete prototype NAND/NOR logic gate that could be implemented for advanced logic operations. Here we support our experiments with simulations of the thermally averaged output and determine the optimal gate parameters. Our approach provides a new pathway to a long standing problem concerning reliability in the use of nanomagnets for computation.
Logic circuits from zero forcing.
Burgarth, Daniel; Giovannetti, Vittorio; Hogben, Leslie; Severini, Simone; Young, Michael
We design logic circuits based on the notion of zero forcing on graphs; each gate of the circuits is a gadget in which zero forcing is performed. We show that such circuits can evaluate every monotone Boolean function. By using two vertices to encode each logical bit, we obtain universal computation. We also highlight a phenomenon of "back forcing" as a property of each function. Such a phenomenon occurs in a circuit when the input of gates which have been already used at a given time step is further modified by a computation actually performed at a later stage. Finally, we show that zero forcing can also be used to implement reversible computation. The model introduced here provides a potentially new tool in the analysis of Boolean functions, with particular attention to monotonicity. Moreover, in the light of applications of zero forcing in quantum mechanics, the link with Boolean functions may suggest new directions in quantum control theory and in the study of engineered quantum spin systems. It is an open technical problem to verify whether there is a link between zero forcing and computation with contact circuits.
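For reference, the standard zero forcing color-change rule is easy to state in code: a filled vertex with exactly one unfilled neighbor forces that neighbor to fill, and the process repeats to closure. The sketch below applies the rule to a small example graph (a path); the gate gadgets of the paper are not reproduced.

```python
# Minimal implementation of the zero forcing color-change rule: a filled vertex
# with exactly one unfilled neighbor forces that neighbor to fill.
def zero_forcing_closure(adj, filled):
    filled = set(filled)
    changed = True
    while changed:
        changed = False
        for v in list(filled):
            white = [u for u in adj[v] if u not in filled]
            if len(white) == 1:                 # the forcing rule
                filled.add(white[0])
                changed = True
    return filled

path5 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(zero_forcing_closure(path5, {0}))         # {0, 1, 2, 3, 4}: one endpoint forces the whole path
```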
Propulsive efficiency of frog swimming with different feet and swimming patterns
Jizhuang, Fan; Wei, Zhang; Bowen, Yuan; Gangfeng, Liu
2017-01-01
Aquatic and terrestrial animals have different swimming performances and mechanical efficiencies based on their different swimming methods. To explore propulsion in swimming frogs, this study calculated mechanical efficiencies based on data describing aquatic and terrestrial webbed-foot shapes and swimming patterns. First, a simplified frog model and dynamic equation were established, and hydrodynamic forces on the foot were computed according to computational fluid dynamic calculations. Then, a two-link mechanism was used to stand in for the diverse and complicated hind legs found in different frog species, in order to simplify the input work calculation. Joint torques were derived based on the virtual work principle to compute the efficiency of foot propulsion. Finally, the two feet and swimming patterns were combined to compute propulsive efficiency. The aquatic frog demonstrated a propulsive efficiency (43.11%) between those of drag-based and lift-based propulsions, while the terrestrial frog efficiency (29.58%) fell within the range of drag-based propulsion. The results illustrate that swimming pattern is the main factor determining swimming performance and efficiency. PMID:28302669
Piro, M.H.A; Wassermann, F.; Grundmann, S.; ...
2017-05-23
The current work presents experimental and computational investigations of fluid flow through a 37 element CANDU nuclear fuel bundle. Experiments based on Magnetic Resonance Velocimetry (MRV) permit three-dimensional, three-component fluid velocity measurements to be made within the bundle with sub-millimeter resolution that are non-intrusive and do not require tracer particles or optical access of the flow field. Computational fluid dynamic (CFD) simulations of the foregoing experiments were performed with the hydra-th code using implicit large eddy simulation, which were in good agreement with experimental measurements of the fluid velocity. Greater understanding has been gained in the evolution of geometry-induced inter-subchannel mixing, the local effects of obstructed debris on the local flow field, and various turbulent effects, such as recirculation, swirl and separation. These capabilities are not available with conventional experimental techniques or thermal-hydraulic codes. Finally, the overall goal of this work is to continue developing experimental and computational capabilities for further investigations that reliably support nuclear reactor performance and safety.
A high-speed linear algebra library with automatic parallelism
NASA Technical Reports Server (NTRS)
Boucher, Michael L.
1994-01-01
Parallel or distributed processing is key to getting highest performance workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely small even though there are numerous computationally demanding programs that would significantly benefit from application of parallel processing. This paper describes DSSLIB, which is a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.
On the precision of aero-thermal simulations for TMT
NASA Astrophysics Data System (ADS)
Vogiatzis, Konstantinos; Thompson, Hugh
2016-08-01
Environmental effects on the Image Quality (IQ) of the Thirty Meter Telescope (TMT) are estimated by aero-thermal numerical simulations. These simulations utilize Computational Fluid Dynamics (CFD) to estimate, among others, thermal (dome and mirror) seeing as well as wind jitter and blur. As the design matures, guidance obtained from these numerical experiments can influence significant cost-performance trade-offs and even component survivability. The stochastic nature of environmental conditions results in the generation of a large computational solution matrix in order to statistically predict Observatory Performance. Moreover, the relative contribution of selected key subcomponents to IQ increases the parameter space and thus computational cost, while dictating a reduced prediction error bar. The current study presents the strategy followed to minimize prediction time and computational resources, the subsequent physical and numerical limitations and finally the approach to mitigate the issues experienced. In particular, the paper describes a mesh-independence study, the effect of interpolation of CFD results on the TMT IQ metric, and an analysis of the sensitivity of IQ to certain important heat sources and geometric features.
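A mesh-independence check of the sort mentioned is often summarized with Richardson extrapolation; the sketch below estimates an observed order of accuracy and a grid-converged value from three assumed metric values at a constant refinement ratio (the numbers are invented, not TMT results).

```python
# Hedged sketch of a mesh-independence check via Richardson extrapolation:
# estimate the observed order of accuracy and an extrapolated value from a metric
# computed on three grids with a constant refinement ratio. Values are invented.
import math

f_coarse, f_medium, f_fine = 0.640, 0.610, 0.601   # metric on coarse/medium/fine grids
r = 2.0                                            # grid refinement ratio

p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)   # observed order
f_extrap = f_fine + (f_fine - f_medium) / (r ** p - 1.0)                  # Richardson extrapolation

print(f"observed order of accuracy ~ {p:.2f}")
print(f"grid-converged estimate    ~ {f_extrap:.4f}")
```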
Optimization of thermal protection systems for the space shuttle vehicle. Volume 1: Final report
NASA Technical Reports Server (NTRS)
1972-01-01
A study performed to continue development of computational techniques for the Space Shuttle Thermal Protection System is reported. The resulting computer code was used to perform some additional optimization studies on several TPS configurations. The program was developed in Fortran 4 for the CDC 6400, and it was converted to Fortran 5 to be used for the Univac 1108. The computational methodology is developed in modular fashion to facilitate changes and updating of the techniques and to allow overlaying the computer code to fit into approximately 131,000 octal words of core storage. The program logic involves subroutines which handle input and output of information between computer and user, and the thermodynamic, stress, dynamic, and weight-estimate analyses of a variety of panel configurations. These include metallic, ablative, RSI (with and without an underlying phase change material), and a thermodynamic analysis only of carbon-carbon systems applied to the leading edge and flat cover panels. Two different thermodynamic analyses are used. The first is a two-dimensional, explicit procedure with variable time steps which is used to describe the behavior of metallic and carbon-carbon leading edges. The second is a one-dimensional implicit technique used to predict temperature in the charring ablator and the noncharring RSI. The latter analysis is performed simply by suppressing the chemical reactions and pyrolysis of the TPS material.
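The one-dimensional implicit technique can be illustrated with a minimal backward-Euler conduction step solved as a linear system at each time step, as in the sketch below; the slab thickness, material diffusivity, and boundary temperatures are made up, and no ablation or pyrolysis chemistry is modeled.

```python
# Minimal sketch of a one-dimensional implicit (backward Euler) conduction solve
# through a TPS-like slab. Material properties and boundary temperatures are
# invented; no ablation or pyrolysis chemistry is modeled.
import numpy as np

nx, dx, dt, alpha = 21, 0.001, 0.1, 1e-6      # nodes, spacing [m], step [s], diffusivity [m^2/s]
T = np.full(nx, 300.0)                        # initial temperature [K]
T_hot, T_back = 1500.0, 300.0                 # surface and backface boundary temperatures

r = alpha * dt / dx ** 2
A = np.zeros((nx, nx))
A[0, 0] = A[-1, -1] = 1.0                     # Dirichlet boundary rows
for i in range(1, nx - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1.0 + 2.0 * r, -r

for _ in range(600):                          # march 60 s of heating implicitly
    b = T.copy()
    b[0], b[-1] = T_hot, T_back
    T = np.linalg.solve(A, b)                 # one linear solve per time step

print("mid-thickness temperature after 60 s: %.1f K" % T[nx // 2])
```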
Pai, Yun Suen; Yap, Hwa Jen; Md Dawal, Siti Zawiah; Ramesh, S.; Phoon, Sin Ye
2016-01-01
This study presents a modular-based implementation of augmented reality to provide an immersive experience in learning or teaching the planning phase, control system, and machining parameters of a fully automated work cell. The architecture of the system consists of three code modules that can operate independently or combined to create a complete system that is able to guide engineers from the layout planning phase to the prototyping of the final product. The layout planning module determines the best possible arrangement in a layout for the placement of various machines, in this case a conveyor belt for transportation, a robot arm for pick-and-place operations, and a computer numerical control milling machine to generate the final prototype. The robotic arm module simulates the pick-and-place operation offline from the conveyor belt to a computer numerical control (CNC) machine utilising collision detection and inverse kinematics. Finally, the CNC module performs virtual machining based on the Uniform Space Decomposition method and axis aligned bounding box collision detection. The conducted case study revealed that given the situation, a semi-circle shaped arrangement is desirable, whereas the pick-and-place system and the final generated G-code produced the highest deviation of 3.83 mm and 5.8 mm respectively. PMID:27271840
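The axis-aligned bounding box (AABB) test used for collision detection is simple enough to show directly; the sketch below checks overlap of two boxes standing in for the tool and the stock, with arbitrary example dimensions.

```python
# Simple axis-aligned bounding box (AABB) overlap test of the kind used for
# collision detection in virtual machining. The box dimensions are arbitrary.
def aabb_overlap(box_a, box_b):
    """Each box is ((xmin, ymin, zmin), (xmax, ymax, zmax))."""
    (a_min, a_max), (b_min, b_max) = box_a, box_b
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

tool      = ((10.0, 10.0, 5.0), (12.0, 12.0, 30.0))   # milling tool bounding box [mm]
workpiece = ((0.0, 0.0, 0.0), (50.0, 50.0, 20.0))     # stock bounding box [mm]
print("tool intersects stock:", aabb_overlap(tool, workpiece))   # True: z ranges overlap at 5-20 mm
```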
NASA Astrophysics Data System (ADS)
Bird, Robert; Nystrom, David; Albright, Brian
2017-10-01
The ability of scientific simulations to effectively deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite the breakthroughs in the areas of mini-app development, portable performance, and cache-oblivious algorithms, the problem still remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsic-based vectorisation with compiler-generated auto-vectorization to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally, we include lessons learned. Work performed under the auspices of the U.S. Dept. of Energy by the Los Alamos National Security, LLC Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.
NASA Technical Reports Server (NTRS)
Collazo, Carlimar
2011-01-01
The statement of purpose is to analyze network monitoring logs to support the computer incident response team. Specifically, gain a clear understanding of the Uniform Resource Locator (URL) and its structure, and provide a way to break down a URL based on protocol, host name, domain name, path, and other attributes. Finally, provide a method to perform data reduction by identifying the different types of advertisements shown on a webpage for incident data analysis. The procedure used for analysis and data reduction will be a computer program which will analyze the URL and identify advertisement links from the actual content links.
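A sketch of the URL breakdown using Python's standard urllib.parse is shown below, together with a toy keyword filter for likely advertisement links; the ad-keyword list is illustrative and not part of the described program.

```python
# Sketch of breaking down a URL into protocol, host, path, and query with the
# standard library, plus a toy keyword filter for likely advertisement links.
from urllib.parse import urlparse

AD_HINTS = ("ads.", "doubleclick", "/banner", "adserver")   # illustrative keywords only

def breakdown(url):
    p = urlparse(url)
    return {"protocol": p.scheme, "host": p.hostname, "path": p.path, "query": p.query}

def looks_like_ad(url):
    return any(hint in url.lower() for hint in AD_HINTS)

for url in ("https://www.example.com/news/story.html?id=42",
            "http://ads.example.net/banner/728x90.gif"):
    print(breakdown(url), "| advertisement?", looks_like_ad(url))
```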
Computing Lives And Reliabilities Of Turboprop Transmissions
NASA Technical Reports Server (NTRS)
Coy, J. J.; Savage, M.; Radil, K. C.; Lewicki, D. G.
1991-01-01
Computer program PSHFT calculates lifetimes of variety of aircraft transmissions. Consists of main program, series of subroutines applying to specific configurations, generic subroutines for analysis of properties of components, subroutines for analysis of system, and common block. Main program selects routines used in analysis and causes them to operate in desired sequence. Series of configuration-specific subroutines put in configuration data, perform force and life analyses for components (with help of generic component-property-analysis subroutines), fill property array, call up system-analysis routines, and finally print out results of analysis for system and components. Written in FORTRAN 77(IV).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barieau, R.E.
1977-03-01
The PROP Program of Wilson and Lissaman has been modified by adding the Newton-Raphson Method and a Step Wise Search Method as options for the method of solution. In addition, an optimization method is included. Twist angles, tip speed ratio and the pitch angle may be varied to produce maximum power coefficient. The computer program listing is presented along with sample input and output data. Further improvements to the program are discussed.
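As a generic illustration of the Newton-Raphson option, the sketch below drives the derivative of a toy power-coefficient curve to zero to find the pitch angle of maximum Cp; the Cp model and starting guess are invented and are not the PROP aerodynamics.

```python
# Hedged sketch of a Newton-Raphson iteration: maximize a toy power-coefficient
# curve Cp(pitch) by driving dCp/dpitch to zero. The Cp model is invented.
def cp(pitch):                       # toy power coefficient vs. pitch angle [deg]
    return 0.45 - 0.002 * (pitch - 4.0) ** 2

def dcp(pitch, h=1e-5):              # first derivative (central difference)
    return (cp(pitch + h) - cp(pitch - h)) / (2 * h)

def d2cp(pitch, h=1e-4):             # second derivative
    return (cp(pitch + h) - 2 * cp(pitch) + cp(pitch - h)) / h ** 2

pitch = 10.0                         # initial guess
for _ in range(20):                  # Newton-Raphson on dCp = 0
    step = dcp(pitch) / d2cp(pitch)
    pitch -= step
    if abs(step) < 1e-8:
        break

print("optimal pitch ~ %.3f deg, Cp ~ %.3f" % (pitch, cp(pitch)))
```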
Li, Zhi; Yang, Rong-Tao; Li, Zu-Bing
2015-09-01
Computer-assisted navigation has been widely used in oral and maxillofacial surgery. The purpose of this study was to describe the applications of computer-assisted navigation for the minimally invasive reduction of isolated zygomatic arch fractures. All patients identified as having isolated zygomatic arch fractures presenting to the authors' department from April 2013 through November 2014 were included in this prospective study. Minimally invasive reductions of isolated zygomatic arch fractures were performed on these patients under the guidance of computer-assisted navigation. The reduction status was evaluated by postoperative computed tomography (CT) 1 week after the operation. Postoperative complications and facial contours were evaluated during follow-up. Functional recovery was evaluated by the difference between the preoperative maximum interincisal mouth opening and that at the final follow-up. Twenty-three patients were included in this case series. The operation proceeded well in all patients. Postoperatively, all patients displayed uneventful healing without postoperative complication. Postoperative CT showed exact reduction in all cases. Satisfactory facial contour and functional recovery were observed in all patients. The preoperative maximal mouth opening ranged from 8 to 25 mm, and the maximal mouth opening at the final follow-up ranged from 36 to 42 mm. Computer-assisted navigation can be used not only for guiding zygomatic arch fracture reduction, but also for assessing reduction. Computer-assisted navigation is an effective and minimally invasive technique that can be applied in the reduction of isolated zygomatic arch fractures. Copyright © 2015 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Opportunities and choice in a new vector era
NASA Astrophysics Data System (ADS)
Nowak, A.
2014-06-01
This work discusses the significant changes in computing landscape related to the progression of Moore's Law, and the implications on scientific computing. Particular attention is devoted to the High Energy Physics domain (HEP), which has always made good use of threading, but levels of parallelism closer to the hardware were often left underutilized. Findings of the CERN openlab Platform Competence Center are reported in the context of expanding "performance dimensions", and especially the resurgence of vectors. These suggest that data oriented designs are feasible in HEP and have considerable potential for performance improvements on multiple levels, but will rarely trump algorithmic enhancements. Finally, an analysis of upcoming hardware and software technologies identifies heterogeneity as a major challenge for software, which will require more emphasis on scalable, efficient design.
Learning with On-Line and Hardcopy Tutorials. A Final Report. CDC Technical Report No. 32.
ERIC Educational Resources Information Center
Duffy, T. M.; And Others
Intended to aid in the design of computer systems that promote efficient learning and performance, this study compared the effects of using hard copy and online format tutorials on the learning activities of 48 undergraduate students in either design or engineering. The tutorials, which provided instruction on the use of the equipment and basic…
ERIC Educational Resources Information Center
Semmel, Melvyn I.; And Others
The effects of Computer-Assisted Teacher Training System (CATTS) feedback in a preservice special education teacher training program are discussed. It is explained that a series of studies were conducted to test the efficacy of CATTS feedback in effecting teacher trainees' acquisition and performance of specific teaching skills. Chapter 1 presents…
ERIC Educational Resources Information Center
Martin, Elizabeth L.; Cataneo, Daniel F.
A study was conducted by the Air Force to determine the extent to which takeoff/landing skills learned in a simulator equipped with a night visual system would transfer to daytime performance in the aircraft. A transfer-of-training design was used to assess the differential effectiveness of simulator training with a day versus a night…
Algorithms and Architectures for Elastic-Wave Inversion Final Report CRADA No. TC02144.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, S.; Lindtjorn, O.
2017-08-15
This was a collaborative effort between Lawrence Livermore National Security, LLC as manager and operator of Lawrence Livermore National Laboratory (LLNL) and Schlumberger Technology Corporation (STC), to perform a computational feasibility study that investigates hardware platforms and software algorithms applicable to STC for Reverse Time Migration (RTM) / Reverse Time Inversion (RTI) of 3-D seismic data.
Running High-Throughput Jobs on Peregrine | High-Performance Computing
... give each task a unique name (using "name=") and use the task name to create a unique output file name. For runs on ..., specify how many tasks to give to each worker at a time using the NITRO_COORD_OPTIONS environment variable. Finally, you start Nitro by executing launch_nitro.sh. Sample Nitro job script: to run a job using the ...
Optic Glomeruli: Biological Circuits that Compute Target Identity
2013-11-01
Performing organization: Department of Neuroscience and Center for Insect Science, University of Arizona, Tucson, AZ 85721. Contract No. FA8651-10-1-0001. Final report, November 2013.
Merged Vision and GPS Control of a Semi-Autonomous, Small Helicopter
NASA Technical Reports Server (NTRS)
Rock, Stephen M.
1999-01-01
This final report documents the activities performed during the research period from April 1, 1996 to September 30, 1997. It contains three papers: Carrier Phase GPS and Computer Vision for Control of an Autonomous Helicopter; A Contestant in the 1997 International Aerospace Robotics Laboratory Stanford University; and Combined CDGPS and Vision-Based Control of a Small Autonomous Helicopter.
Optimum Design of Forging Process Parameters and Preform Shape under Uncertainties
NASA Astrophysics Data System (ADS)
Repalle, Jalaja; Grandhi, Ramana V.
2004-06-01
Forging is a highly complex non-linear process that is vulnerable to various uncertainties, such as variations in billet geometry, die temperature, material properties, workpiece and forging equipment positional errors and process parameters. A combination of these uncertainties could induce heavy manufacturing losses through premature die failure, final part geometric distortion and production risk. Identifying the sources of uncertainties, quantifying and controlling them will reduce risk in the manufacturing environment, which will minimize the overall cost of production. In this paper, various uncertainties that affect forging tool life and preform design are identified, and their cumulative effect on the forging process is evaluated. Since the forging process simulation is computationally intensive, the response surface approach is used to reduce time by establishing a relationship between the system performance and the critical process design parameters. Variability in system performance due to randomness in the parameters is computed by applying Monte Carlo Simulations (MCS) on generated Response Surface Models (RSM). Finally, a Robust Methodology is developed to optimize forging process parameters and preform shape. The developed method is demonstrated by applying it to an axisymmetric H-cross section disk forging to improve the product quality and robustness.
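The uncertainty-propagation step described above can be illustrated with a minimal sketch: a hypothetical quadratic response-surface model stands in for the expensive forging simulation, and Monte Carlo sampling of two uncertain process parameters estimates the spread of a performance measure. All coefficients and distributions below are invented for illustration.

    # Illustrative only: Monte Carlo sampling through a fitted response surface.
    import numpy as np

    rng = np.random.default_rng(0)

    def response_surface(t_die, friction):
        # hypothetical quadratic RSM for, e.g., peak die stress
        return 500 + 0.8 * t_die - 120 * friction + 0.002 * t_die**2 + 90 * friction**2

    n = 100_000
    t_die = rng.normal(300.0, 10.0, n)        # die temperature, deg C
    friction = rng.normal(0.30, 0.02, n)      # friction factor

    stress = response_surface(t_die, friction)
    print(f"mean = {stress.mean():.1f}, std = {stress.std():.1f}, "
          f"99th percentile = {np.percentile(stress, 99):.1f}")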
Computer assisted optical biopsy for colorectal polyps
NASA Astrophysics Data System (ADS)
Navarro-Avila, Fernando J.; Saint-Hill-Febles, Yadira; Renner, Janis; Klare, Peter; von Delius, Stefan; Navab, Nassir; Mateus, Diana
2017-03-01
We propose a method for computer-assisted optical biopsy for colorectal polyps, with the final goal of assisting the medical expert during the colonoscopy. In particular, we target the problem of automatic classification of polyp images in two classes: adenomatous vs non-adenoma. Our approach is based on recent advancements in convolutional neural networks (CNN) for image representation. In the paper, we describe and compare four different methodologies to address the binary classification task: a baseline with classical features and a Random Forest classifier, two methods based on features obtained from a pre-trained network, and finally, the end-to-end training of a CNN. With the pre-trained network, we show the feasibility of transferring a feature extraction mechanism trained on millions of natural images, to the task of classifying adenomatous polyps. We then demonstrate further performance improvements when training the CNN for our specific classification task. In our study, 776 polyp images were acquired and histologically analyzed after polyp resection. We report a performance increase of the CNN-based approaches with respect to both, the conventional engineered features and to a state-of-the-art method based on videos and 3D shape features.
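As a hedged sketch of the transfer-learning idea described above (not the authors' networks or data), the snippet below uses an ImageNet-pretrained ResNet-18 from torchvision as a fixed feature extractor and feeds the resulting features to a scikit-learn classifier; it assumes a recent torchvision with the weights-enum API, and the images and labels are stand-ins.

    import torch
    import torch.nn as nn
    import torchvision.models as models
    from sklearn.ensemble import RandomForestClassifier

    # Pretrained backbone with the classification head removed acts as a
    # fixed feature extractor (assumes torchvision >= 0.13).
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = nn.Identity()            # output 512-d features instead of logits
    backbone.eval()

    with torch.no_grad():
        images = torch.randn(8, 3, 224, 224)   # stand-in for polyp image crops
        feats = backbone(images).numpy()       # shape (8, 512)

    labels = [0, 1, 0, 1, 0, 1, 0, 1]          # toy adenoma / non-adenoma labels
    clf = RandomForestClassifier(n_estimators=200).fit(feats, labels)
    print(clf.predict(feats[:2]))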
Integrating Cache Performance Modeling and Tuning Support in Parallelization Tools
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
With the resurgence of distributed shared memory (DSM) systems based on cache-coherent Non Uniform Memory Access (ccNUMA) architectures and increasing disparity between memory and processors speeds, data locality overheads are becoming the greatest bottlenecks in the way of realizing potential high performance of these systems. While parallelization tools and compilers facilitate the users in porting their sequential applications to a DSM system, a lot of time and effort is needed to tune the memory performance of these applications to achieve reasonable speedup. In this paper, we show that integrating cache performance modeling and tuning support within a parallelization environment can alleviate this problem. The Cache Performance Modeling and Prediction Tool (CPMP), employs trace-driven simulation techniques without the overhead of generating and managing detailed address traces. CPMP predicts the cache performance impact of source code level "what-if" modifications in a program to assist a user in the tuning process. CPMP is built on top of a customized version of the Computer Aided Parallelization Tools (CAPTools) environment. Finally, we demonstrate how CPMP can be applied to tune a real Computational Fluid Dynamics (CFD) application.
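To make the idea of trace-driven cache modeling concrete, here is a toy sketch (not CPMP itself, which works at source level without managing full address traces): a direct-mapped cache model replays a synthetic address stream and tallies hits and misses. Cache geometry and the access pattern are arbitrary choices for illustration.

    # Toy direct-mapped cache model driven by a synthetic address trace.
    def simulate_direct_mapped(addresses, num_lines=256, line_size=64):
        tags = [None] * num_lines
        hits = misses = 0
        for addr in addresses:
            block = addr // line_size
            index = block % num_lines
            tag = block // num_lines
            if tags[index] == tag:
                hits += 1
            else:
                misses += 1
                tags[index] = tag
        return hits, misses

    # strided access pattern as a stand-in for an array traversal
    trace = [i * 8 for i in range(100_000)]
    h, m = simulate_direct_mapped(trace)
    print(f"hit rate = {h / (h + m):.2%}")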
Heads in the Cloud: A Primer on Neuroimaging Applications of High Performance Computing.
Shatil, Anwar S; Younas, Sohail; Pourreza, Hossein; Figley, Chase R
2015-01-01
With larger data sets and more sophisticated analyses, it is becoming increasingly common for neuroimaging researchers to push (or exceed) the limitations of standalone computer workstations. Nonetheless, although high-performance computing platforms such as clusters, grids and clouds are already in routine use by a small handful of neuroimaging researchers to increase their storage and/or computational power, the adoption of such resources by the broader neuroimaging community remains relatively uncommon. Therefore, the goal of the current manuscript is to: 1) inform prospective users about the similarities and differences between computing clusters, grids and clouds; 2) highlight their main advantages; 3) discuss when it may (and may not) be advisable to use them; 4) review some of their potential problems and barriers to access; and finally 5) give a few practical suggestions for how interested new users can start analyzing their neuroimaging data using cloud resources. Although the aim of cloud computing is to hide most of the complexity of the infrastructure management from end-users, we recognize that this can still be an intimidating area for cognitive neuroscientists, psychologists, neurologists, radiologists, and other neuroimaging researchers lacking a strong computational background. Therefore, with this in mind, we have aimed to provide a basic introduction to cloud computing in general (including some of the basic terminology, computer architectures, infrastructure and service models, etc.), a practical overview of the benefits and drawbacks, and a specific focus on how cloud resources can be used for various neuroimaging applications.
Wells, I G; Cartwright, R Y; Farnan, L P
1993-12-15
The computing strategy in our laboratories evolved from research in Artificial Intelligence, and is based on powerful software tools running on high performance desktop computers with a graphical user interface. This allows most tasks to be regarded as design problems rather than implementation projects, and both rapid prototyping and an object-oriented approach to be employed during the in-house development and enhancement of the laboratory information systems. The practical application of this strategy is discussed, with particular reference to the system designer, the laboratory user and the laboratory customer. Routine operation covers five departments, and the systems are stable, flexible and well accepted by the users. Client-server computing, currently undergoing final trials, is seen as the key to further development, and this approach to Pathology computing has considerable potential for the future.
Liu, Jun; Zhang, Liqun; Cao, Dapeng; Wang, Wenchuan
2009-12-28
Polymer nanocomposites (PNCs) often exhibit excellent mechanical, thermal, electrical and optical properties, because they combine the performances of both polymers and inorganic or organic nanoparticles. Recently, computer modeling and simulation are playing an important role in exploring the reinforcement mechanism of the PNCs and even the design of functional PNCs. This report provides an overview of the progress made in past decades in the investigation of the static, rheological and mechanical properties of polymer nanocomposites studied by computer modeling and simulation. Emphases are placed on exploring the mechanisms at the molecular level for the dispersion of nanoparticles in nanocomposites, the effects of nanoparticles on chain conformation and glass transition temperature (T(g)), as well as viscoelastic and mechanical properties. Finally, some future challenges and opportunities in computer modeling and simulation of PNCs are addressed.
Performance Analysis of Scientific and Engineering Applications Using MPInside and TAU
NASA Technical Reports Server (NTRS)
Saini, Subhash; Mehrotra, Piyush; Taylor, Kenichi Jun Haeng; Shende, Sameer Suresh; Biswas, Rupak
2010-01-01
In this paper, we present performance analysis of two NASA applications using performance tools like Tuning and Analysis Utilities (TAU) and SGI MPInside. MITgcmUV and OVERFLOW are two production-quality applications used extensively by scientists and engineers at NASA. MITgcmUV is a global ocean simulation model, developed by the Estimating the Circulation and Climate of the Ocean (ECCO) Consortium, for solving the fluid equations of motion using the hydrostatic approximation. OVERFLOW is a general-purpose Navier-Stokes solver for computational fluid dynamics (CFD) problems. Using these tools, we analyze the MPI functions (MPI_Sendrecv, MPI_Bcast, MPI_Reduce, MPI_Allreduce, MPI_Barrier, etc.) with respect to message size of each rank, time consumed by each function, and how ranks communicate. MPI communication is further analyzed by studying the performance of MPI functions used in these two applications as a function of message size and number of cores. Finally, we present the compute time, communication time, and I/O time as a function of the number of cores.
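A minimal sketch of the kind of per-rank measurement that tools such as TAU and MPInside automate is shown below: using mpi4py, it times a local compute phase and an allreduce separately on each rank. The array size and the reduction are placeholders, not the applications' actual workloads.

    # Run with, e.g.: mpiexec -n 4 python timing_sketch.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    local = np.random.rand(1_000_000)

    t0 = MPI.Wtime()
    partial = local.sum()                          # "compute" phase
    t1 = MPI.Wtime()
    total = comm.allreduce(partial, op=MPI.SUM)    # "communication" phase
    t2 = MPI.Wtime()

    print(f"rank {rank}: compute {t1 - t0:.4f}s, allreduce {t2 - t1:.4f}s")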
Improving scanner wafer alignment performance by target optimization
NASA Astrophysics Data System (ADS)
Leray, Philippe; Jehoul, Christiane; Socha, Robert; Menchtchikov, Boris; Raghunathan, Sudhar; Kent, Eric; Schoonewelle, Hielke; Tinnemans, Patrick; Tuffy, Paul; Belen, Jun; Wise, Rich
2016-03-01
In the process nodes of 10nm and below, the patterning complexity along with the processing and materials required has resulted in a need to optimize alignment targets in order to achieve the required precision, accuracy and throughput performance. Recent industry publications on the metrology target optimization process have shown a move from the expensive and time consuming empirical methodologies, towards a faster computational approach. ASML's Design for Control (D4C) application, which is currently used to optimize YieldStar diffraction based overlay (DBO) metrology targets, has been extended to support the optimization of scanner wafer alignment targets. This allows the necessary process information and design methodology, used for DBO target designs, to be leveraged for the optimization of alignment targets. In this paper, we show how we applied this computational approach to wafer alignment target design. We verify the correlation between predictions and measurements for the key alignment performance metrics and finally show the potential alignment and overlay performance improvements that an optimized alignment target could achieve.
Some issues related to simulation of the tracking and communications computer network
NASA Technical Reports Server (NTRS)
Lacovara, Robert C.
1989-01-01
The Communications Performance and Integration branch of the Tracking and Communications Division has an ongoing involvement in the simulation of its flight hardware for Space Station Freedom. Specifically, the communication process between central processor(s) and orbital replaceable units (ORU's) is simulated with varying degrees of fidelity. The results of investigations into three aspects of this simulation effort are given. The most general area involves the use of computer assisted software engineering (CASE) tools for this particular simulation. The second area of interest is simulation methods for systems of mixed hardware and software. The final area investigated is the application of simulation methods to one of the proposed computer network protocols for space station, specifically IEEE 802.4.
Web-Based Integrated Research Environment for Aerodynamic Analyses and Design
NASA Astrophysics Data System (ADS)
Ahn, Jae Wan; Kim, Jin-Ho; Kim, Chongam; Cho, Jung-Hyun; Hur, Cinyoung; Kim, Yoonhee; Kang, Sang-Hyun; Kim, Byungsoo; Moon, Jong Bae; Cho, Kum Won
e-AIRS[1,2], an abbreviation of ‘e-Science Aerospace Integrated Research System,' is a virtual organization designed to support aerodynamic flow analyses in aerospace engineering using the e-Science environment. As the first step toward a virtual aerospace engineering organization, e-AIRS intends to give a full support of aerodynamic research process. Currently, e-AIRS can handle both the computational and experimental aerodynamic research on the e-Science infrastructure. In detail, users can conduct a full CFD (Computational Fluid Dynamics) research process, request wind tunnel experiment, perform comparative analysis between computational prediction and experimental measurement, and finally, collaborate with other researchers using the web portal. The present paper describes those services and the internal architecture of the e-AIRS system.
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; ...
2016-09-29
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. Finally, this paper details the process by which Alpgen was adapted from a single-processor serial-application to a large-scale parallel-application and the performance that was achieved.
Pilot interaction with automated airborne decision making systems
NASA Technical Reports Server (NTRS)
Rouse, W. B.; Chu, Y. Y.; Greenstein, J. S.; Walden, R. S.
1976-01-01
An investigation was made of interaction between a human pilot and automated on-board decision making systems. Research was initiated on the topic of pilot problem solving in automated and semi-automated flight management systems and attempts were made to develop a model of human decision making in a multi-task situation. A study was made of allocation of responsibility between human and computer, and discussed were various pilot performance parameters with varying degrees of automation. Optimal allocation of responsibility between human and computer was considered and some theoretical results found in the literature were presented. The pilot as a problem solver was discussed. Finally the design of displays, controls, procedures, and computer aids for problem solving tasks in automated and semi-automated systems was considered.
NASA Technical Reports Server (NTRS)
Tezduyar, Tayfun E.
1998-01-01
This is a final report as far as our work at University of Minnesota is concerned. The report describes our research progress and accomplishments in development of high performance computing methods and tools for 3D finite element computation of aerodynamic characteristics and fluid-structure interactions (FSI) arising in airdrop systems, namely ram-air parachutes and round parachutes. This class of simulations involves complex geometries, flexible structural components, deforming fluid domains, and unsteady flow patterns. The key components of our simulation toolkit are a stabilized finite element flow solver, a nonlinear structural dynamics solver, an automatic mesh moving scheme, and an interface between the fluid and structural solvers; all of these have been developed within a parallel message-passing paradigm.
Study 2.5 final report. DORCA computer program. Volume 5: Analysis report
NASA Technical Reports Server (NTRS)
Campbell, N.
1972-01-01
A modification of the Dynamic Operational Requirements and Cost Analysis Program to perform traffic analyses of the automated satellite program is described. Inherent in the analyses of the automated satellite program was the assumption that a number of vehicles were available to perform any or all of the missions within the satellite program. The objective of the modification was to select a vehicle or group of vehicles for performing all of the missions at the lowest possible cost. A vehicle selection routine and the capability to simulate ground based vehicle operational modes were incorporated into the program.
Design and Analysis of Self-Adapted Task Scheduling Strategies in Wireless Sensor Networks
Guo, Wenzhong; Xiong, Naixue; Chao, Han-Chieh; Hussain, Sajid; Chen, Guolong
2011-01-01
In a wireless sensor network (WSN), the usage of resources is usually highly related to the execution of tasks which consume a certain amount of computing and communication bandwidth. Parallel processing among sensors is a promising solution to provide the demanded computation capacity in WSNs. Task allocation and scheduling is a typical problem in the area of high performance computing. Although task allocation and scheduling in wired processor networks has been well studied in the past, their counterparts for WSNs remain largely unexplored. Existing traditional high performance computing solutions cannot be directly implemented in WSNs due to the limitations of WSNs such as limited resource availability and the shared communication medium. In this paper, a self-adapted task scheduling strategy for WSNs is presented. First, a multi-agent-based architecture for WSNs is proposed and a mathematical model of dynamic alliance is constructed for the task allocation problem. Then an effective discrete particle swarm optimization (PSO) algorithm for the dynamic alliance (DPSO-DA) with a well-designed particle position code and fitness function is proposed. A mutation operator which can effectively improve the algorithm’s ability of global search and population diversity is also introduced in this algorithm. Finally, the simulation results show that the proposed solution can achieve significantly better performance than other algorithms. PMID:22163971
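The following toy sketch illustrates a particle swarm approach to task allocation in the same spirit; it is not the paper's DPSO-DA algorithm and omits the dynamic-alliance model and mutation operator. Continuous "random-key" positions are decoded into task-to-node assignments, and the swarm minimizes the makespan over an invented cost matrix.

    # Toy random-key PSO for assigning tasks to nodes (illustrative only).
    import numpy as np

    rng = np.random.default_rng(1)
    n_tasks, n_nodes, n_particles, iters = 20, 5, 30, 200
    cost = rng.uniform(1.0, 10.0, (n_tasks, n_nodes))   # per-node execution cost

    def makespan(keys):
        assign = np.clip(keys.astype(int), 0, n_nodes - 1)
        loads = np.zeros(n_nodes)
        for t, node in enumerate(assign):
            loads[node] += cost[t, node]
        return loads.max()

    pos = rng.uniform(0, n_nodes, (n_particles, n_tasks))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([makespan(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, n_nodes - 1e-9)
        vals = np.array([makespan(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()

    print("best makespan:", pbest_val.min())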
Computation of multi-dimensional viscous supersonic flow
NASA Technical Reports Server (NTRS)
Buggeln, R. C.; Kim, Y. N.; Mcdonald, H.
1986-01-01
A method has been developed for two- and three-dimensional computations of viscous supersonic jet flows interacting with an external flow. The approach employs a reduced form of the Navier-Stokes equations which allows solution as an initial-boundary value problem in space, using an efficient noniterative forward marching algorithm. Numerical instability associated with forward marching algorithms for flows with embedded subsonic regions is avoided by approximation of the reduced form of the Navier-Stokes equations in the subsonic regions of the boundary layers. Supersonic and subsonic portions of the flow field are simultaneously calculated by a consistently split linearized block implicit computational algorithm. The results of computations for a series of test cases associated with supersonic jet flow are presented and compared with other calculations for axisymmetric cases. Demonstration calculations indicate that the computational technique has great promise as a tool for calculating a wide range of supersonic flow problems including jet flow. Finally, a User's Manual is presented for the computer code used to perform the calculations.
Final Report for ALCC Allocation: Predictive Simulation of Complex Flow in Wind Farms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barone, Matthew F.; Ananthan, Shreyas; Churchfield, Matt
This report documents work performed using ALCC computing resources granted under a proposal submitted in February 2016, with the resource allocation period spanning the period July 2016 through June 2017. The award allocation was 10.7 million processor-hours at the National Energy Research Scientific Computing Center. The simulations performed were in support of two projects: the Atmosphere to Electrons (A2e) project, supported by the DOE EERE office; and the Exascale Computing Project (ECP), supported by the DOE Office of Science. The project team for both efforts consists of staff scientists and postdocs from Sandia National Laboratories and the National Renewable Energy Laboratory. At the heart of these projects is the open-source computational-fluid-dynamics (CFD) code, Nalu. Nalu solves the low-Mach-number Navier-Stokes equations using an unstructured-grid discretization. Nalu leverages the open-source Trilinos solver library and the Sierra Toolkit (STK) for parallelization and I/O. This report documents baseline computational performance of the Nalu code on problems of direct relevance to the wind plant physics application - namely, Large Eddy Simulation (LES) of an atmospheric boundary layer (ABL) flow and wall-modeled LES of a flow past a static wind turbine rotor blade. Parallel performance of Nalu and its constituent solver routines residing in the Trilinos library has been assessed previously under various campaigns. However, both Nalu and Trilinos have been, and remain, in active development and resources have not been available previously to rigorously track code performance over time. With the initiation of the ECP, it is important to establish and document baseline code performance on the problems of interest. This will allow the project team to identify and target any deficiencies in performance, as well as highlight any performance bottlenecks as we exercise the code on a greater variety of platforms and at larger scales. The current study is rather modest in scale, examining performance on problem sizes of O(100 million) elements and core counts up to 8k cores. This will be expanded as more computational resources become available to the projects.
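Baseline scaling results such as those described above are typically summarized with speedup and parallel-efficiency bookkeeping; the sketch below shows the arithmetic with invented timings, not measurements from Nalu.

    # Strong-scaling bookkeeping with placeholder wall-clock times.
    cores = [512, 1024, 2048, 4096, 8192]
    times = [820.0, 430.0, 232.0, 131.0, 84.0]   # seconds per simulated interval

    base_cores, base_time = cores[0], times[0]
    for c, t in zip(cores, times):
        speedup = base_time / t
        efficiency = speedup * base_cores / c
        print(f"{c:5d} cores: speedup {speedup:5.2f}, efficiency {efficiency:4.0%}")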
A systematic investigation of computation models for predicting Adverse Drug Reactions (ADRs).
Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong
2014-01-01
Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computation models to obtain general conclusions that can provide useful guidance to construct more effective computational models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods to predict ADRs, by implementing and evaluating additional algorithms that were previously used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that their final formulas can all be converted to a linear form; based on this finding, we propose a new algorithm, the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms.
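The role of the Jaccard coefficient can be illustrated with a small, hedged sketch (not necessarily the paper's exact "general weighted profile" formulation): a candidate drug-ADR pair is scored by the known ADR labels of other drugs, weighted by Jaccard similarity between drug feature sets. All drug names, features, and labels below are toy placeholders.

    # Jaccard-weighted profile scoring on toy data (illustrative only).
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if (a | b) else 0.0

    drug_features = {
        "drugA": {"targetT1", "pathwayP1", "scaffoldS1"},
        "drugB": {"targetT1", "pathwayP2", "scaffoldS1"},
        "drugC": {"targetT2", "pathwayP3"},
    }
    known_adr = {"drugA": 1, "drugC": 0}   # 1 = known to cause the ADR of interest

    def score(query):
        sims = {d: jaccard(drug_features[query], f)
                for d, f in drug_features.items() if d != query and d in known_adr}
        total = sum(sims.values())
        return sum(s * known_adr[d] for d, s in sims.items()) / total if total else 0.0

    print("predicted ADR score for drugB:", round(score("drugB"), 3))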
Automated analysis and classification of melanocytic tumor on skin whole slide images.
Xu, Hongming; Lu, Cheng; Berendt, Richard; Jha, Naresh; Mandal, Mrinal
2018-06-01
This paper presents a computer-aided technique for automated analysis and classification of melanocytic tumor on skin whole slide biopsy images. The proposed technique consists of four main modules. First, skin epidermis and dermis regions are segmented by a multi-resolution framework. Next, epidermis analysis is performed, where a set of epidermis features reflecting nuclear morphologies and spatial distributions is computed. In parallel with epidermis analysis, dermis analysis is also performed, where dermal cell nuclei are segmented and a set of textural and cytological features are computed. Finally, the skin melanocytic image is classified into different categories such as melanoma, nevus or normal tissue by using a multi-class support vector machine (mSVM) with extracted epidermis and dermis features. Experimental results on 66 skin whole slide images indicate that the proposed technique achieves more than 95% classification accuracy, which suggests that the technique has the potential to be used for assisting pathologists on skin biopsy image analysis and classification. Copyright © 2018 Elsevier Ltd. All rights reserved.
Final Report. Institute for Ultrascale Visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Kwan-Liu; Galli, Giulia; Gygi, Francois
The SciDAC Institute for Ultrascale Visualization brought together leading experts from visualization, high-performance computing, and science application areas to make advanced visualization solutions for SciDAC scientists and the broader community. Over the five-year project, the Institute introduced many new enabling visualization techniques, which have significantly enhanced scientists’ ability to validate their simulations, interpret their data, and communicate with others about their work and findings. This Institute project involved a large number of junior and student researchers, who received the opportunities to work on some of the most challenging science applications and gain access to the most powerful high-performance computing facilities in the world. They were readily trained and prepared for facing the greater challenges presented by extreme-scale computing. The Institute’s outreach efforts, through publications, workshops and tutorials, successfully disseminated the new knowledge and technologies to the SciDAC and the broader scientific communities. The scientific findings and experience of the Institute team helped plan the SciDAC3 program.
Implementing Molecular Dynamics on Hybrid High Performance Computers - Three-Body Potentials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, W Michael; Yamada, Masako
The use of coprocessors or accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, defined as machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. Although there has been extensive research into methods to efficiently use accelerators to improve the performance of molecular dynamics (MD) employing pairwise potential energy models, little is reported in the literature for models that include many-body effects. 3-body terms are required for many popular potentials such as MEAM, Tersoff, REBO, AIREBO, Stillinger-Weber, Bond-Order Potentials, and others. Because the per-atom simulation times are much higher for models incorporating 3-body terms, there is a clear need for efficient algorithms usable on hybrid high performance computers. Here, we report a shared-memory force-decomposition for 3-body potentials that avoids memory conflicts to allow for a deterministic code with substantial performance improvements on hybrid machines. We describe modifications necessary for use in distributed memory MD codes and show results for the simulation of water with Stillinger-Weber on the hybrid Titan supercomputer. We compare performance of the 3-body model to the SPC/E water model when using accelerators. Finally, we demonstrate that our approach can attain a speedup of 5.1 with acceleration on Titan for production simulations to study water droplet freezing on a surface.
Augmented Reality-Guided Lumbar Facet Joint Injections.
Agten, Christoph A; Dennler, Cyrill; Rosskopf, Andrea B; Jaberg, Laurenz; Pfirrmann, Christian W A; Farshad, Mazda
2018-05-08
The aim of this study was to assess feasibility and accuracy of augmented reality-guided lumbar facet joint injections. A spine phantom completely embedded in hardened opaque agar with 3 ring markers was built. A 3-dimensional model of the phantom was uploaded to an augmented reality headset (Microsoft HoloLens). Two radiologists independently performed 20 augmented reality-guided and 20 computed tomography (CT)-guided facet joint injections each: for each augmented reality-guided injection, the hologram was manually aligned with the phantom container using the ring markers. The radiologists targeted the virtual facet joint and tried to place the needle tip in the holographic joint space. Computed tomography was performed after each needle placement to document final needle tip position. Time needed from grabbing the needle to final needle placement was measured for each simulated injection. An independent radiologist rated images of all needle placements in a randomized order blinded to modality (augmented reality vs CT) and performer as perfect, acceptable, incorrect, or unsafe. Accuracy and time to place needles were compared between augmented reality-guided and CT-guided facet joint injections. In total, 39/40 (97.5%) of augmented reality-guided needle placements were either perfect or acceptable compared with 40/40 (100%) CT-guided needle placements (P = 0.5). One augmented reality-guided injection missed the facet joint space by 2 mm. No unsafe needle placements occurred. Time to final needle placement was substantially faster with augmented reality guidance (mean 14 ± 6 seconds vs 39 ± 15 seconds, P < 0.001 for both readers). Augmented reality-guided facet joint injections are feasible and accurate without potentially harmful needle placement in an experimental setting.
NASA Astrophysics Data System (ADS)
Grinberg, Horacio; Freed, Karl F.; Williams, Carl J.
1997-08-01
The analytical infinite order sudden (IOS) quantum theory of triatomic photodissociation, developed in paper I, is applied to study the indirect photodissociation of NOCl through a real or virtual intermediate state. The theory uses the IOS approximation for the dynamics in the final dissociative channels and an Airy function approximation for the continuum functions. The transition is taken as polarized in the plane of the molecule; symmetric top wave functions are used for both the initial and intermediate bound states; and simple semiempirical model potentials are employed for each state. The theory provides analytical expressions for the photofragment yield spectrum for producing particular final fragment ro-vibrational states as a function of the photon excitation energy. Computations are made of the photofragment excitation spectrum of NOCl in the region of the T1(13A″)←S0(11A') transition for producing the NO fragment in the vibrational states nNO=0, 1, and 2. The computed spectra for the unexcited nNO=0 and excited nNO=2 states are in reasonable agreement with experiment. However, some discrepancies are observed for the singly excited nNO=1 vibrational state, indicating deficiencies in the semiempirical potential energy surface. Computations for two different orientations of the in-plane transition dipole moment produce very similar excitation spectra. Calculations of fragment rotational distributions are performed for high values of the total angular momentum J, a feature that would be very difficult to perform with close-coupled methods. Computations are also made of the thermally averaged rotational energy distributions to simulate the conditions in actual supersonic jet experiments.
Compute as Fast as the Engineers Can Think! ULTRAFAST COMPUTING TEAM FINAL REPORT
NASA Technical Reports Server (NTRS)
Biedron, R. T.; Mehrotra, P.; Nelson, M. L.; Preston, M. L.; Rehder, J. J.; Rogers, J. L.; Rudy, D. H.; Sobieski, J.; Storaasli, O. O.
1999-01-01
This report documents findings and recommendations by the Ultrafast Computing Team (UCT). In the period 10-12/98, UCT reviewed design case scenarios for a supersonic transport and a reusable launch vehicle to derive computing requirements necessary for support of a design process with efficiency so radically improved that human thought rather than the computer paces the process. Assessment of the present computing capability against the above requirements indicated a need for further improvement in computing speed by several orders of magnitude to reduce time to solution from tens of hours to seconds in major applications. Evaluation of the trends in computer technology revealed a potential to attain the postulated improvement by further increases of single processor performance combined with massively parallel processing in a heterogeneous environment. However, utilization of massively parallel processing to its full capability will require redevelopment of the engineering analysis and optimization methods, including invention of new paradigms. To that end UCT recommends initiation of a new activity at LaRC called Computational Engineering for development of new methods and tools geared to the new computer architectures in disciplines, their coordination, and validation and benefit demonstration through applications.
Efficient computation of kinship and identity coefficients on large pedigrees.
Cheng, En; Elliott, Brendan; Ozsoyoglu, Z Meral
2009-06-01
With the rapidly expanding field of medical genetics and genetic counseling, genealogy information is becoming increasingly abundant. An important computation on pedigree data is the calculation of identity coefficients, which provide a complete description of the degree of relatedness of a pair of individuals. The areas of application of identity coefficients are numerous and diverse, from genetic counseling to disease tracking, and thus, the computation of identity coefficients merits special attention. However, the computation of identity coefficients is not done directly, but rather as the final step after computing a set of generalized kinship coefficients. In this paper, we first propose a novel Path-Counting Formula for calculating generalized kinship coefficients, which is motivated by Wright's path-counting method for computing inbreeding coefficient. We then present an efficient and scalable scheme for calculating generalized kinship coefficients on large pedigrees using NodeCodes, a special encoding scheme for expediting the evaluation of queries on pedigree graph structures. Furthermore, we propose an improved scheme using Family NodeCodes for the computation of generalized kinship coefficients, which is motivated by the significant improvement of using Family NodeCodes for inbreeding coefficient over the use of NodeCodes. We also perform experiments for evaluating the efficiency of our method, and compare it with the performance of the traditional recursive algorithm for three individuals. Experimental results demonstrate that the resulting scheme is more scalable and efficient than the traditional recursive methods for computing generalized kinship coefficients.
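For context, the classical recursion that such encoding schemes accelerate can be written in a few lines; the sketch below computes pairwise kinship and an inbreeding coefficient on a toy pedigree. It is the textbook recursion, not the authors' NodeCodes-based method, and it assumes individuals are numbered so that parents have smaller ids than their children.

    # Textbook recursive kinship computation on a toy pedigree.
    from functools import lru_cache

    # id -> (father_id, mother_id); founders use None
    pedigree = {1: (None, None), 2: (None, None), 3: (1, 2),
                4: (1, 2), 5: (3, 4)}   # individual 5 is inbred

    @lru_cache(maxsize=None)
    def kinship(i, j):
        if i is None or j is None:
            return 0.0
        if i == j:
            f, m = pedigree[i]
            return 0.5 * (1.0 + kinship(f, m))
        if i > j:                      # recurse through the younger individual
            i, j = j, i
        f, m = pedigree[j]
        return 0.5 * (kinship(i, f) + kinship(i, m))

    inbreeding_5 = kinship(*pedigree[5])      # F = kinship of the parents
    print("kinship(3,4) =", kinship(3, 4), " inbreeding(5) =", inbreeding_5)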
Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue
NASA Astrophysics Data System (ADS)
Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey; Bagliesi, Giuseppe; Belforte, Stephano; Campana, Simone; Dimou, Maria; Flix, Jose; Forti, Alessandra; di Girolamo, A.; Karavakis, Edward; Lammel, Stephan; Litmaath, Maarten; Sciaba, Andrea; Valassi, Andrea
2017-10-01
The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow to validate topology and configuration information easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments are more and more relying on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue) which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be able to be quickly adapted to new types of computing resources, new information sources, and allow for new data structures to be implemented easily following the evolution of the computing models and operations of the experiments.
Medical student web-based formative assessment tool for renal pathology.
Bijol, Vanesa; Byrne-Dugan, Cathryn J; Hoenig, Melanie P
2015-01-01
Background: Web-based formative assessment tools have become widely recognized in medical education as valuable resources for self-directed learning. Objectives: To explore the educational value of formative assessment using online quizzes for kidney pathology learning in our renal pathophysiology course. Methods: Students were given unrestricted and optional access to quizzes. Performance on quizzed and non-quizzed materials of those who used ('quizzers') and did not use the tool ('non-quizzers') was compared. Frequency of tool usage was analyzed and satisfaction surveys were utilized at the end of the course. Results: In total, 82.6% of the students used quizzes. The greatest usage was observed on the day before the final exam. Students repeated interactive and more challenging quizzes more often. Mean final exam scores on quizzed and unrelated materials were almost equal for 'quizzers' and 'non-quizzers', but 'quizzers' performed statistically better than 'non-quizzers' on both quizzed (p=0.001) and non-quizzed (p=0.024) topics. In total, 89% of surveyed students thought quizzes improved their learning experience in this course. Conclusions: Our new computer-assisted learning tool is popular, and although its use can predict the final exam outcome, it does not provide strong evidence for direct improvement in academic performance. Students who chose to use quizzes did well on all aspects of the final exam and most commonly used quizzes to practice for the final exam. Our efforts to revitalize the course material and promote learning by adding interactive online formative assessments improved students' learning experience overall.
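The kind of group comparison reported above (quiz users versus non-users on the final exam) reduces to a two-sample test; the sketch below shows the computation with invented score arrays rather than the study's data.

    # Two-sample t-test on placeholder exam scores (not the study's data).
    import numpy as np
    from scipy import stats

    quizzers = np.array([88, 92, 79, 85, 90, 84, 91, 87])
    non_quizzers = np.array([78, 83, 74, 80, 76, 81])

    t, p = stats.ttest_ind(quizzers, non_quizzers, equal_var=False)
    print(f"t = {t:.2f}, p = {p:.3f}")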
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brust, Frederick W.; Punch, Edward F.; Twombly, Elizabeth Kurth
This report summarizes the final product developed for the US DOE Small Business Innovation Research (SBIR) Phase II grant made to Engineering Mechanics Corporation of Columbus (Emc2) between April 16, 2014 and August 31, 2016 titled ‘Adoption of High Performance Computational (HPC) Modeling Software for Widespread Use in the Manufacture of Welded Structures’. Many US companies have moved fabrication and production facilities offshore because of cheaper labor costs. A key aspect in bringing these jobs back to the US is the use of technology to render US-made fabrications more cost-efficient overall with higher quality. One significant advantage that has emerged in the US over the last two decades is the use of virtual design for fabrication of small and large structures in weld fabrication industries. Industries that use virtual design and analysis tools have reduced material part size, developed environmentally-friendly fabrication processes, improved product quality and performance, and reduced manufacturing costs. Indeed, Caterpillar Inc. (CAT), one of the partners in this effort, continues to have a large fabrication presence in the US because of the use of weld fabrication modeling to optimize fabrications by controlling weld residual stresses and distortions and improving fatigue, corrosion, and fracture performance. This report describes Emc2’s DOE SBIR Phase II final results to extend an existing, state-of-the-art software code, Virtual Fabrication Technology (VFT®), currently used to design and model large welded structures prior to fabrication - to a broader range of products with widespread applications for small and medium-sized enterprises (SMEs). VFT® helps control distortion, can minimize and/or control residual stresses, control welding microstructure, and pre-determine welding parameters such as weld-sequencing, pre-bending, thermal-tensioning, etc. VFT® uses material properties, consumable properties, etc. as inputs. Through VFT®, manufacturing companies can avoid costly design changes after fabrication. This leads to the concept of joint design/fabrication where these important disciplines are intimately linked to minimize fabrication costs. Finally, service performance (such as fatigue, corrosion, and fracture/damage) can be improved using this product. Emc2’s DOE SBIR Phase II effort successfully adapted VFT® to perform efficiently in an HPC environment independent of commercial software on a platform to permit easy and cost effective access to the code. This provides the key for SMEs to access this sophisticated and proven methodology that is quick, accurate, cost effective and available “on-demand” to address weld-simulation and fabrication problems prior to manufacture. In addition, other organizations, such as Government agencies and large companies, may have a need for spot use of such a tool. The open source code, WARP3D, a high performance finite element code used in fracture and damage assessment of structures, was significantly modified so computational weld problems can be solved efficiently on multiple processors and threads with VFT®. The thermal solver for VFT®, based on a series of closed form solution approximations, was extensively enhanced for solution on multiple processors greatly increasing overall speed. In addition, the graphical user interface (GUI) was re-written to permit SMEs access to an HPC environment at the Ohio Super Computer Center (OSC) to integrate these solutions with WARP3D.
The GUI is used to define all weld pass descriptions, number of passes, material properties, consumable properties, weld speed, etc. for the structure to be modeled. The GUI was enhanced to make it more user-friendly so that non-experts can perform weld modeling. Finally, an extensive outreach program to market this capability to fabrication companies was performed. This access will permit SMEs to perform weld modeling to improve their competitiveness at a reasonable cost.
Performance predictors of brain-computer interfaces in patients with amyotrophic lateral sclerosis
NASA Astrophysics Data System (ADS)
Geronimo, A.; Simmons, Z.; Schiff, S. J.
2016-04-01
Objective. Patients with amyotrophic lateral sclerosis (ALS) may benefit from brain-computer interfaces (BCI), but the utility of such devices likely will have to account for the functional, cognitive, and behavioral heterogeneity of this neurodegenerative disorder. Approach. In this study, a heterogeneous group of patients with ALS participated in a study on BCI based on the P300 event related potential and motor-imagery. Results. The presence of cognitive impairment in these patients significantly reduced the quality of the control signals required to use these communication systems, subsequently impairing performance, regardless of progression of physical symptoms. Loss in performance among the cognitively impaired was accompanied by a decrease in the signal-to-noise ratio of task-relevant EEG band power. There was also evidence that behavioral dysfunction negatively affects P300 speller performance. Finally, older participants achieved better performance on the P300 system than the motor-imagery system, indicating a preference of BCI paradigm with age. Significance. These findings highlight the importance of considering the heterogeneity of disease when designing BCI augmentative and alternative communication devices for clinical applications.
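A rough sketch of a band-power signal-to-noise calculation of the sort used to characterize control-signal quality is shown below; the synthetic 10 Hz oscillation plus noise stands in for a single EEG channel, and the band limits are illustrative choices.

    # Band-power SNR estimate on a synthetic single-channel signal.
    import numpy as np
    from scipy.signal import welch

    fs = 250.0                                   # sampling rate, Hz
    t = np.arange(0, 10, 1 / fs)
    eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)  # 10 Hz rhythm + noise

    f, pxx = welch(eeg, fs=fs, nperseg=512)
    band = (f >= 8) & (f <= 12)                  # task-relevant band
    neighbors = ((f >= 4) & (f < 8)) | ((f > 12) & (f <= 30))
    snr = pxx[band].mean() / pxx[neighbors].mean()
    print(f"band-power SNR = {snr:.2f}")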
ERIC Educational Resources Information Center
Goclowski, John C.; Baran, H. Anthony
This report gives a managerial overview of the Life Cycle Cost Impact Modeling System (LCCIM), which was designed to provide the Air Force with an in-house capability of assessing the life cycle cost impact of weapon system design alternatives. LCCIM consists of computer programs and the analyses which the user must perform to generate input data.…
MHD Advanced Power Train Phase I, Final Report, Volume 7
DOE Office of Scientific and Technical Information (OSTI.GOV)
A. R. Jones
This appendix provides additional data in support of the MHD/Steam Power Plant Analyses reported in report Volume 5. The data is in the form of 3PA/SUMARY computer code printouts. The order of presentation in all four cases is as follows: (1) Overall Performance; (2) Component/Subsystem Information; (3) Plant Cost Accounts Summary; and (4) Plant Costing Details and Cost of Electricity.
ERIC Educational Resources Information Center
General Learning Corp., Washington, DC.
The COST-ED model (Costs of Schools, Training, and Education) of the instructional process encourages the recognition of management alternatives and potential cost-savings. It is used to calculate the minimum cost of performing specified instructional tasks. COST-ED components are presented as cost modules in a flowchart format for manpower,…
NASA Technical Reports Server (NTRS)
Levison, W. H.; Baron, S.
1984-01-01
Preliminary results in the application of a closed loop pilot/simulator model to the analysis of some simulator fidelity issues are discussed in the context of an air to air target tracking task. The closed loop model is described briefly. Then, problem simplifications that are employed to reduce computational costs are discussed. Finally, model results showing sensitivity of performance to various assumptions concerning the simulator and/or the pilot are presented.
SLEEC: Semantics-Rich Libraries for Effective Exascale Computation. Final Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milind, Kulkarni
SLEEC (Semantics-rich Libraries for Effective Exascale Computation) was a project funded by the Department of Energy X-Stack Program, award number DE-SC0008629. The initial project period was September 2012–August 2015. The project was renewed for an additional year, expiring August 2016. Finally, the project received a no-cost extension, leading to a final expiry date of August 2017. Modern applications, especially those intended to run at exascale, are not written from scratch. Instead, they are built by stitching together various carefully-written, hand-tuned libraries. Correctly composing these libraries is difficult, but traditional compilers are unable to effectively analyze and transform across abstraction layers. Domain-specific compilers integrate semantic knowledge into compilers, allowing them to transform applications that use particular domain-specific languages, or domain libraries. But they do not help when new domains are developed, or applications span multiple domains. SLEEC aims to fix these problems. To do so, we are building generic compiler and runtime infrastructures that are semantics-aware but not domain-specific. By performing optimizations related to the semantics of a domain library, the same infrastructure can be made generic and apply across multiple domains.
CFD Analysis in Advance of the NASA Juncture Flow Experiment
NASA Technical Reports Server (NTRS)
Lee, H. C.; Pulliam, T. H.; Neuhart, D. H.; Kegerise, M. A.
2017-01-01
NASA, through its Transformational Tools and Technologies Project (TTT) under the Advanced Air Vehicle Program, is supporting a substantial effort to investigate the formation and origin of separation bubbles found on wing-body juncture zones. The flow behavior in these regions is highly complex, difficult to measure experimentally, and challenging to model numerically. Multiple wing configurations were designed and evaluated using Computational Fluid Dynamics (CFD), and a series of wind tunnel risk reduction tests were performed to further down-select the candidates for the final experiment. This paper documents the CFD analysis done in conjunction with the 6 percent scale risk reduction experiment performed in NASA Langley's 14- by 22-Foot Subsonic Tunnel. The combined CFD and wind tunnel results ultimately helped the Juncture Flow committee select the wing configurations for the final experiment.
Thermoelectric-Driven Autonomous Sensors for a Biomass Power Plant
NASA Astrophysics Data System (ADS)
Rodríguez, A.; Astrain, D.; Martínez, A.; Gubía, E.; Sorbet, F. J.
2013-07-01
This work presents the design and development of a thermoelectric generator intended to harness waste heat in a biomass power plant, and generate electric power to operate sensors and the required electronics for wireless communication. The first objective of the work is to design the optimum thermoelectric generator to harness heat from a hot surface, and generate electric power to operate a flowmeter and a wireless transmitter. The process is conducted by using a computational model, presented in previous papers, to determine the final design that meets the requirements of electric power consumption and number of transmissions per minute. Finally, the thermoelectric generator is simulated to evaluate its performance. The final device transmits information every 5 s. Moreover, it is completely autonomous and can be easily installed, since no electric wires are required.
NASA Astrophysics Data System (ADS)
Wong-Loya, J. A.; Santoyo, E.; Andaverde, J. A.; Quiroz-Ruiz, A.
2015-12-01
A Web-Based Computer System (RPM-WEBBSYS) has been developed for the application of the Rational Polynomial Method (RPM) to estimate static formation temperatures (SFT) of geothermal and petroleum wells. The system is also capable of reproducing the full thermal recovery processes that occur during well completion. RPM-WEBBSYS has been programmed using advances in information technology to perform SFT computations more efficiently. RPM-WEBBSYS can be easily and rapidly executed from any computing device (e.g., personal computers and portable devices such as tablets or smartphones) with Internet access and a web browser. The computer system was validated using bottomhole temperature (BHT) measurements logged in a synthetic heat transfer experiment, where good agreement between predicted and true SFT was achieved. RPM-WEBBSYS was finally applied to BHT logs collected during well drilling and shut-in operations, where the typical under- and over-estimation of SFT exhibited by most existing analytical methods was effectively corrected.
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Kavi, Srinu
1984-01-01
This Working Paper Series entry presents a detailed survey of knowledge-based systems. After being in a relatively dormant state for many years, Artificial Intelligence (AI) - that branch of computer science that attempts to have machines emulate intelligent behavior - has only recently begun to accomplish practical results. Most of these results can be attributed to the design and use of Knowledge-Based Systems, KBSs (or expert systems) - problem-solving computer programs that can reach a level of performance comparable to that of a human expert in some specialized problem domain. These systems can act as consultants for various requirements such as medical diagnosis, military threat analysis, project risk assessment, etc. These systems possess knowledge that enables them to make intelligent decisions. They are, however, not meant to replace the human specialists in any particular domain. A critical survey of recent work in interactive KBSs is reported. A case study (MYCIN) of a KBS, a list of existing KBSs, and an introduction to the Japanese Fifth Generation Computer Project are provided as appendices. Finally, an extensive set of KBS-related references is provided at the end of the report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bordival, M.; Schmidt, F. M.; Le Maoult, Y.
In the Stretch-Blow Molding (SBM) process, the temperature distribution of the reheated preform drastically affects the blowing kinematics, the bottle thickness distribution, and the orientation induced by stretching. Consequently, mechanical and optical properties of the final bottle are closely related to the heating conditions. In order to predict the 3D temperature distribution of a rotating preform, numerical software using the control-volume method has been developed. Since PET behaves like a semi-transparent medium, the radiative flux absorption was computed using the Beer-Lambert law. In a second step, 2D axisymmetric simulations of the SBM process were developed using the finite element package ABAQUS®. Temperature profiles through the preform wall thickness and along its length were computed and applied as the initial condition. Air pressure inside the preform was not considered as an input variable, but was automatically computed using a thermodynamic model. The heat transfer coefficient applied between the mold and the polymer was also measured. Finally, the G'sell law was used for modeling PET behavior. For both the heating and blowing stage simulations, good agreement was observed with experimental measurements. This work is part of the European project 'APT_PACK' (Advanced knowledge of Polymer deformation for Tomorrow's PACKaging).
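The radiative heating step in this abstract relies on the Beer-Lambert law for absorption in a semi-transparent medium. The following is a minimal sketch, not the authors' control-volume code, of how transmitted and absorbed flux through a PET wall might be estimated; the incident flux, absorption coefficient, and wall thickness below are illustrative assumptions only.

```python
import numpy as np

def beer_lambert_flux(i0, alpha, z):
    """Transmitted radiative flux after a path length z in a semi-transparent
    medium with absorption coefficient alpha (Beer-Lambert law)."""
    return i0 * np.exp(-alpha * z)

# Illustrative values only (not taken from the paper):
i0 = 5.0e3        # incident flux from the IR lamps, W/m^2
alpha = 2.0e3     # effective absorption coefficient of PET, 1/m
thickness = 3e-3  # preform wall thickness, m

z = np.linspace(0.0, thickness, 50)     # positions through the wall
flux = beer_lambert_flux(i0, alpha, z)  # local transmitted flux
absorbed = -np.gradient(flux, z)        # volumetric absorption rate, W/m^3

print(f"flux reaching the inner wall: {flux[-1]:.1f} W/m^2")
```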
NASA Technical Reports Server (NTRS)
Kelley, H. J.; Lefton, L.
1976-01-01
The numerical analysis of composite differential-turn trajectory pairs was studied for 'fast-evader' and 'neutral-evader' attitude dynamics idealization for attack aircraft. Transversality and generalized corner conditions are examined and the joining of trajectory segments discussed. A criterion is given for the screening of 'tandem-motion' trajectory segments. Main focus is upon the computation of barrier surfaces. Fortunately, from a computational viewpoint, the trajectory pairs defining these surfaces need not be calculated completely, the final subarc of multiple-subarc pairs not being required. Some calculations for pairs of example aircraft are presented. A computer program used to perform the calculations is included.
NASA Technical Reports Server (NTRS)
Roumeliotis, Chris; Grinblat, Jonathan; Reeves, Glenn
2013-01-01
Second Chance (SECC) was a bare-bones version of Mars Science Laboratory's (MSL) Entry Descent & Landing (EDL) flight software that ran on Curiosity's backup computer, which could have taken over swiftly in the event of a reset of Curiosity's prime computer in order to land her safely on Mars. Without SECC, a reset of Curiosity's prime computer would have led to catastrophic mission failure. Even though a reset of the prime computer never occurred, SECC had the important responsibility of serving as EDL's guardian angel, and this responsibility would not have seen such success without unparalleled systems engineering. This paper will focus on the systems engineering behind SECC: covering a brief overview of SECC's design, the intense schedule to use SECC as a backup system, the verification and validation of the system's "Do No Harm" mandate, the system's overall functional performance, and finally, its use on the fateful day of August 5th, 2012.
The development and application of CFD technology in mechanical engineering
NASA Astrophysics Data System (ADS)
Wei, Yufeng
2017-12-01
Computational Fluid Dynamics (CFD) is the analysis of physical phenomena involving fluid flow and heat conduction by numerical computation and graphical display. The fidelity with which the numerical method captures the physical problem, and the precision of the numerical solution, are directly related to computer hardware such as processor speed and memory. With the continuous improvement of computer performance and CFD technology, CFD has been widely applied in water conservancy engineering, environmental engineering, and industrial engineering. This paper summarizes the development of CFD, its theoretical basis, and the governing equations of fluid mechanics, and introduces the various methods of numerical calculation and related developments in CFD technology. Finally, applications of CFD technology in mechanical engineering are summarized. It is hoped that this review will help researchers in the field of mechanical engineering.
Application of CFD to a generic hypersonic flight research study
NASA Technical Reports Server (NTRS)
Green, Michael J.; Lawrence, Scott L.; Dilley, Arthur D.; Hawkins, Richard W.; Walker, Mary M.; Oberkampf, William L.
1993-01-01
Computational analyses have been performed for the initial assessment of flight research vehicle concepts that satisfy requirements for potential hypersonic experiments. Results were obtained from independent analyses at NASA Ames, NASA Langley, and Sandia National Labs, using sophisticated time-dependent Navier-Stokes and parabolized Navier-Stokes methods. Careful study of a common problem consisting of hypersonic flow past a slightly blunted conical forebody was undertaken to estimate the level of uncertainty in the computed results, and to assess the capabilities of current computational methods for predicting boundary-layer transition onset. Results of this study in terms of surface pressure and heat transfer comparisons, as well as comparisons of boundary-layer edge quantities and flow-field profiles are presented here. Sensitivities to grid and gas model are discussed. Finally, representative results are presented relating to the use of Computational Fluid Dynamics in the vehicle design and the integration/support of potential experiments.
Peng, Fei; Li, Jiao-ting; Long, Min
2015-03-01
To discriminate the acquisition pipelines of digital images, a novel scheme for distinguishing natural images from computer-generated graphics is proposed based on statistical and textural features. First, the differences between them are investigated from the viewpoint of statistics and texture, and a 31-dimensional feature vector is derived for identification. Then, LIBSVM is used for the classification. Finally, the experimental results are presented. The results show that the method achieves an identification accuracy of 97.89% for computer-generated graphics and 97.75% for natural images. The analyses also demonstrate that the proposed method has excellent performance compared with some existing methods based only on statistical features or other features. The method has great potential to be implemented for the identification of natural images and computer-generated graphics. © 2014 American Academy of Forensic Sciences.
ERIC Educational Resources Information Center
Anderson-Inman, Lynne; Ditson, Mary
This final report describes activities and accomplishments of the four-year Computer-Based Study Strategies (CBSS) Outreach Project at the University of Oregon. This project disseminated information about using computer-based study strategies as an intervention for students with learning disabilities and provided teachers in participating outreach…
THE DEVELOPMENT AND PRESENTATION OF FOUR COLLEGE COURSES BY COMPUTER TELEPROCESSING. FINAL REPORT.
ERIC Educational Resources Information Center
MITZEL, HAROLD E.
THIS IS A FINAL REPORT ON THE DEVELOPMENT AND PRESENTATION OF FOUR COLLEGE COURSES BY COMPUTER TELEPROCESSING FROM APRIL 1964 TO JUNE 1967. IT OUTLINES THE PROGRESS MADE TOWARDS THE PREPARATION, DEVELOPMENT, AND EVALUATION OF MATERIALS FOR COMPUTER PRESENTATION OF COURSES IN AUDIOLOGY, MANAGEMENT ACCOUNTING, ENGINEERING ECONOMICS, AND MODERN…
Additive Manufacturing and High-Performance Computing: a Disruptive Latent Technology
NASA Astrophysics Data System (ADS)
Goodwin, Bruce
2015-03-01
This presentation will discuss the relationship between recent advances in Additive Manufacturing (AM) technology, High-Performance Computing (HPC) simulation and design capabilities, and related advances in Uncertainty Quantification (UQ), and then examine their impacts upon national and international security. The presentation surveys how AM accelerates the fabrication process, while HPC combined with UQ provides a fast track for the engineering design cycle. The combination of AM and HPC/UQ almost eliminates the engineering design and prototype iterative cycle, thereby dramatically reducing the cost of production and time-to-market. These methods thereby present significant benefits for US national interests, both civilian and military, in an age of austerity. Finally, considering cyber security issues and the advent of the "cloud," these disruptive, currently latent technologies may well enable proliferation and so challenge both nuclear and non-nuclear aspects of international security.
Real-time simulation of the retina allowing visualization of each processing stage
NASA Astrophysics Data System (ADS)
Teeters, Jeffrey L.; Werblin, Frank S.
1991-08-01
The retina computes to let us see, but can we see the retina compute? Until now, the answer has been no, because the unconscious nature of the processing hides it from our view. Here the authors describe a method of seeing computations performed throughout the retina. This is achieved by using neurophysiological data to construct a model of the retina, and using a special-purpose image processing computer (PIPE) to implement the model in real time. Processing in the model is organized into stages corresponding to computations performed by each retinal cell type. The final stage is the transient (change detecting) ganglion cell. A CCD camera forms the input image, and the activity of a selected retinal cell type is the output which is displayed on a TV monitor. By changing the retina cell driving the monitor, the progressive transformations of the image by the retina can be observed. These simulations demonstrate the ubiquitous presence of temporal and spatial variations in the patterns of activity generated by the retina which are fed into the brain. The dynamical aspects make these patterns very different from those generated by the common DOG (Difference of Gaussian) model of receptive field. Because the retina is so successful in biological vision systems, the processing described here may be useful in machine vision.
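The abstract contrasts this dynamic, stage-by-stage retina model with the conventional static Difference-of-Gaussians (DOG) receptive-field model mentioned at its end. For reference, below is a minimal sketch of a static DOG spatial filter; the sigma values and test image are illustrative assumptions, not parameters from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_filter(image, sigma_center=1.0, sigma_surround=3.0):
    """Static Difference-of-Gaussians receptive field: a narrow excitatory
    center minus a broader inhibitory surround."""
    center = gaussian_filter(image, sigma_center)
    surround = gaussian_filter(image, sigma_surround)
    return center - surround

# Example: response to a small bright spot on a dark background.
img = np.zeros((64, 64))
img[30:34, 30:34] = 1.0
response = dog_filter(img)   # positive at the spot, negative ring around it
```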
The GeantV project: Preparing the future of simulation
Amadio, G.; Apostolakis, J.; Bandieramonte, M.; ...
2015-12-23
Detector simulation is consuming at least half of the HEP computing cycles, and even so, experiments have to take hard decisions on what to simulate, as their needs greatly surpass the availability of computing resources. New experiments still in the design phase such as FCC, CLIC and ILC as well as upgraded versions of the existing LHC detectors will push further the simulation requirements. Since the increase in computing resources is not likely to keep pace with our needs, it is therefore necessary to explore innovative ways of speeding up simulation in order to sustain the progress of High Energy Physics. The GeantV project aims at developing a high performance detector simulation system integrating fast and full simulation that can be ported on different computing architectures, including CPU accelerators. After more than two years of R&D the project has produced a prototype capable of transporting particles in complex geometries exploiting micro-parallelism, SIMD and multithreading. Portability is obtained via C++ template techniques that allow the development of machine-independent computational kernels. Furthermore, a set of tables derived from Geant4 for cross sections and final states provides a realistic shower development and, having been ported into a Geant4 physics list, can be used as a basis for a direct performance comparison.
Methane Adsorption in Zr-Based MOFs: Comparison and Critical Evaluation of Force Fields
2017-01-01
The search for nanoporous materials that are highly performing for gas storage and separation is one of the contemporary challenges in material design. The computational tools to aid these experimental efforts are widely available, and adsorption isotherms are routinely computed for huge sets of (hypothetical) frameworks. Clearly the computational results depend on the interactions between the adsorbed species and the adsorbent, which are commonly described using force fields. In this paper, an extensive comparison and in-depth investigation of several force fields from literature is reported for the case of methane adsorption in the Zr-based Metal–Organic Frameworks UiO-66, UiO-67, DUT-52, NU-1000, and MOF-808. Significant quantitative differences in the computed uptake are observed when comparing different force fields, but most qualitative features are common which suggests some predictive power of the simulations when it comes to these properties. More insight into the host–guest interactions is obtained by benchmarking the force fields with an extensive number of ab initio computed single molecule interaction energies. This analysis at the molecular level reveals that especially ab initio derived force fields perform well in reproducing the ab initio interaction energies. Finally, the high sensitivity of uptake predictions on the underlying potential energy surface is explored. PMID:29170687
Performance monitoring for brain-computer-interface actions.
Schurger, Aaron; Gale, Steven; Gozel, Olivia; Blanke, Olaf
2017-02-01
When presented with a difficult perceptual decision, human observers are able to make metacognitive judgements of subjective certainty. Such judgements can be made independently of and prior to any overt response to a sensory stimulus, presumably via internal monitoring. Retrospective judgements about one's own task performance, on the other hand, require first that the subject perform a task and thus could potentially be made based on motor processes, proprioceptive, and other sensory feedback rather than internal monitoring. With this dichotomy in mind, we set out to study performance monitoring using a brain-computer interface (BCI), with which subjects could voluntarily perform an action - moving a cursor on a computer screen - without any movement of the body, and thus without somatosensory feedback. Real-time visual feedback was available to subjects during training, but not during the experiment where the true final position of the cursor was only revealed after the subject had estimated where s/he thought it had ended up after 6s of BCI-based cursor control. During the first half of the experiment subjects based their assessments primarily on the prior probability of the end position of the cursor on previous trials. However, during the second half of the experiment subjects' judgements moved significantly closer to the true end position of the cursor, and away from the prior. This suggests that subjects can monitor task performance when the task is performed without overt movement of the body. Copyright © 2016 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strout, Michelle
Programming parallel machines is fraught with difficulties: the obfuscation of algorithms due to implementation details such as communication and synchronization, the need for transparency between language constructs and performance, the difficulty of performing program analysis to enable automatic parallelization techniques, and the existence of important "dusty deck" codes. The SAIMI project developed abstractions that enable the orthogonal specification of algorithms and implementation details within the context of existing DOE applications. The main idea is to enable the injection of small programming models such as expressions involving transcendental functions, polyhedral iteration spaces with sparse constraints, and task graphs into full programs through the use of pragmas. These smaller, more restricted programming models enable orthogonal specification of many implementation details such as how to map the computation onto parallel processors, how to schedule the computation, and how to allocate storage for the computation. At the same time, these small programming models enable the expression of the most computationally intense and communication-heavy portions in many scientific simulations. The ability to orthogonally manipulate the implementation for such computations will significantly ease performance programming efforts and expose transformation possibilities and parameters to automated approaches such as autotuning. At Colorado State University, the SAIMI project was supported through DOE grant DE-SC3956 from April 2010 through August 2015. The SAIMI project has contributed a number of important results to programming abstractions that enable the orthogonal specification of implementation details in scientific codes. This final report summarizes the research that was funded by the SAIMI project.
The coupling of fluids, dynamics, and controls on advanced architecture computers
NASA Technical Reports Server (NTRS)
Atwood, Christopher
1995-01-01
This grant provided for the demonstration of coupled controls, body dynamics, and fluids computations in a workstation cluster environment; and an investigation of the impact of peer-peer communication on flow solver performance and robustness. The findings of these investigations were documented in the conference articles. The attached publication, 'Towards Distributed Fluids/Controls Simulations', documents the solution and scaling of the coupled Navier-Stokes, Euler rigid-body dynamics, and state feedback control equations for a two-dimensional canard-wing. The poor scaling shown was due to serialized grid connectivity computation and Ethernet bandwidth limits. The scaling of a peer-to-peer communication flow code on an IBM SP-2 was also shown. The scaling of the code on the switched fabric-linked nodes was good, with a 2.4 percent loss due to communication of intergrid boundary point information. The code performance on 30 worker nodes was 1.7 μs/point/iteration, or a factor of three over a Cray C-90 head. The attached paper, 'Nonlinear Fluid Computations in a Distributed Environment', documents the effect of several computational rate enhancing methods on convergence. For the cases shown, the highest throughput was achieved using boundary updates at each step, with the manager process performing communication tasks only. Constrained domain decomposition of the implicit fluid equations did not degrade the convergence rate or final solution. The scaling of a coupled body/fluid dynamics problem on an Ethernet-linked cluster was also shown.
A scalable approach to solving dense linear algebra problems on hybrid CPU-GPU systems
Song, Fengguang; Dongarra, Jack
2014-10-01
Aiming to fully exploit the computing power of all CPUs and all graphics processing units (GPUs) on hybrid CPU-GPU systems to solve dense linear algebra problems, in this paper we design a class of heterogeneous tile algorithms to maximize the degree of parallelism, to minimize the communication volume, and to accommodate the heterogeneity between CPUs and GPUs. The new heterogeneous tile algorithms are executed upon our decentralized dynamic scheduling runtime system, which schedules a task graph dynamically and transfers data between compute nodes automatically. The runtime system uses a new distributed task assignment protocol to solve data dependencies between tasks without any coordination between processing units. By overlapping computation and communication through dynamic scheduling, we are able to attain scalable performance for the double-precision Cholesky factorization and QR factorization. Finally, our approach demonstrates a performance comparable to Intel MKL on shared-memory multicore systems and better performance than both vendor (e.g., Intel MKL) and open source libraries (e.g., StarPU) in the following three environments: heterogeneous clusters with GPUs, conventional clusters without GPUs, and shared-memory systems with multiple GPUs.
Counterfactual quantum computation through quantum interrogation
NASA Astrophysics Data System (ADS)
Hosten, Onur; Rakher, Matthew T.; Barreiro, Julio T.; Peters, Nicholas A.; Kwiat, Paul G.
2006-02-01
The logic underlying the coherent nature of quantum information processing often deviates from intuitive reasoning, leading to surprising effects. Counterfactual computation constitutes a striking example: the potential outcome of a quantum computation can be inferred, even if the computer is not run. Relying on similar arguments to interaction-free measurements (or quantum interrogation), counterfactual computation is accomplished by putting the computer in a superposition of 'running' and 'not running' states, and then interfering the two histories. Conditional on the as-yet-unknown outcome of the computation, it is sometimes possible to counterfactually infer information about the solution. Here we demonstrate counterfactual computation, implementing Grover's search algorithm with an all-optical approach. It was believed that the overall probability of such counterfactual inference is intrinsically limited, so that it could not perform better on average than random guesses. However, using a novel 'chained' version of the quantum Zeno effect, we show how to boost the counterfactual inference probability to unity, thereby beating the random guessing limit. Our methods are general and apply to any physical system, as illustrated by a discussion of trapped-ion systems. Finally, we briefly show that, in certain circumstances, counterfactual computation can eliminate errors induced by decoherence.
Support Vector Machine-Based Endmember Extraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Filippi, Anthony M; Archibald, Richard K
Introduced in this paper is the utilization of Support Vector Machines (SVMs) to automatically perform endmember extraction from hyperspectral data. The strengths of SVM are exploited to provide a fast and accurate calculated representation of high-dimensional data sets that may consist of multiple distributions. Once this representation is computed, the number of distributions can be determined without prior knowledge. For each distribution, an optimal transform can be determined that preserves informational content while reducing the data dimensionality, and hence, the computational cost. Finally, endmember extraction for the whole data set is accomplished. Results indicate that this Support Vector Machine-Based Endmember Extraction (SVM-BEE) algorithm has the capability of autonomously determining endmembers from multiple clusters with computational speed and accuracy, while maintaining a robust tolerance to noise.
A software control system for the ACTS high-burst-rate link evaluation terminal
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Daugherty, Elaine S.
1991-01-01
Control and performance monitoring of NASA's High Burst Rate Link Evaluation Terminal (HBR-LET) is accomplished by using several software control modules. Different software modules are responsible for controlling remote radio frequency (RF) instrumentation, supporting communication between a host and a remote computer, controlling the output power of the Link Evaluation Terminal and data display. Remote commanding of microwave RF instrumentation and the LET digital ground terminal allows computer control of various experiments, including bit error rate measurements. Computer communication allows system operators to transmit and receive from the Advanced Communications Technology Satellite (ACTS). Finally, the output power control software dynamically controls the uplink output power of the terminal to compensate for signal loss due to rain fade. Included is a discussion of each software module and its applications.
NASA Astrophysics Data System (ADS)
Aishah Syed Ali, Sharifah
2017-09-01
This paper considers the economic lot sizing problem in remanufacturing with separate setups (ELSRs), where remanufactured and new products are produced on dedicated production lines. Since this problem is NP-hard in general, and standard approaches can be computationally inefficient and yield low-quality solutions, we present (a) a multicommodity formulation and (b) a strengthened formulation based on a priori addition of valid inequalities in the space of the original variables, which are then compared with the Wagner-Whitin based formulation available in the literature. Computational experiments on a large number of test data sets are performed to evaluate the different approaches. The numerical results show that our strengthened formulation outperforms all the other tested approaches in terms of linear relaxation bounds. Finally, we conclude with future research directions.
Symbolic Computational Approach to the Marangoni Convection Problem With Soret Diffusion
NASA Technical Reports Server (NTRS)
Skarda, J. Raymond
1998-01-01
A recently reported solution for stationary stability of a thermosolutal system with Soret diffusion is re-derived and examined using a symbolic computational package. Symbolic computational languages are well suited for such an analysis and facilitate a pragmatic approach that is adaptable to similar problems. Linearization of the equations, normal mode analysis, and extraction of the final solution are performed in a Mathematica notebook format. An exact solution is obtained for stationary stability in the limit of zero gravity. A closed form expression is also obtained for the location of asymptotes in the relevant parameter (Sm_c, Ma_c) space. The stationary stability behavior is conveniently examined within the symbolic language environment. An abbreviated version of the Mathematica notebook is given in the Appendix.
Data association approaches in bearings-only multi-target tracking
NASA Astrophysics Data System (ADS)
Xu, Benlian; Wang, Zhiquan
2008-03-01
To meet the requirements on computational complexity and data association correctness in multi-target tracking, two algorithms are proposed in this paper. The proposed Algorithm 1 is developed from a modified version of the dual simplex method, and it has the advantage of a direct and explicit form of the optimal solution. Algorithm 2 is based on the idea of Algorithm 1 and a rotational sort method; it not only retains the advantages of Algorithm 1 but also reduces the computational burden, with a complexity only 1/N that of Algorithm 1. Finally, numerical analyses are carried out to evaluate the performance of the two data association algorithms.
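The abstract does not spell out Algorithm 1 or 2, but the underlying measurement-to-track association step can be posed as a linear assignment problem over a cost matrix of bearing residuals. Below is a generic sketch using SciPy's standard assignment solver, not the authors' dual-simplex or rotational-sort algorithms; the cost values are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost matrix: cost[i, j] is the bearing-residual cost of
# assigning measurement j to track i (values are illustrative only).
cost = np.array([
    [0.2, 1.5, 3.0],
    [1.1, 0.3, 2.2],
    [2.8, 1.9, 0.4],
])

track_idx, meas_idx = linear_sum_assignment(cost)   # optimal one-to-one assignment
for t, m in zip(track_idx, meas_idx):
    print(f"track {t} <- measurement {m} (cost {cost[t, m]:.1f})")
```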
The symmetric MSD encoder for one-step adder of ternary optical computer
NASA Astrophysics Data System (ADS)
Kai, Song; LiPing, Yan
2016-08-01
Symmetric Modified Signed-Digit (MSD) encoding is important for achieving the one-step MSD adder of a Ternary Optical Computer (TOC). This paper describes the symmetric MSD encoding algorithm in detail and develops its truth table, which has nine rows and nine columns. According to the truth table, the state table was developed, and the optical-path structure and circuit-implementation scheme of the symmetric MSD encoder (SME) for the one-step adder of a TOC were proposed. Finally, a series of experiments was designed and performed. The experimental results showed that the scheme to implement the SME is correct, feasible, and efficient.
Microscopic approaches to liquid nitromethane detonation properties.
Hervouët, Anaïs; Desbiens, Nicolas; Bourasseau, Emeric; Maillet, Jean-Bernard
2008-04-24
In this paper, thermodynamic and chemical properties of nitromethane are investigated using microscopic simulations. The Hugoniot curve of the inert explosive is computed using Monte Carlo simulations with a modified version of the adaptive Erpenbeck equation of state and a recently developed intermolecular potential. Molecular dynamics simulations of nitromethane decomposition have been performed using a reactive potential, allowing the calculation of kinetic rate constants and activation energies. Finally, the Crussard curve of the detonation products as well as thermodynamic properties at the Chapman-Jouguet (CJ) point are computed using reactive ensemble Monte Carlo simulations. Results are in good agreement with both thermochemical calculations and experimental measurements.
3D Parallel Multigrid Methods for Real-Time Fluid Simulation
NASA Astrophysics Data System (ADS)
Wan, Feifei; Yin, Yong; Zhang, Suiyu
2018-03-01
The multigrid method is widely used in fluid simulation because of its strong convergence properties. In addition to accuracy, computational efficiency is an important factor to consider in order to enable real-time fluid simulation in computer graphics. For this problem, we compare the performance of the Algebraic Multigrid and the Geometric Multigrid methods in the V-Cycle and Full-Cycle schemes, respectively, and analyze the convergence and speed of the different methods. All calculations in this paper are performed in parallel on the GPU. Finally, we run experiments on 3D grids at several scales and report the experimental results.
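For readers unfamiliar with the V-cycle scheme compared in this abstract, the sketch below shows a minimal one-dimensional geometric-multigrid V-cycle for the Poisson equation. It is illustrative only and assumes a uniform grid with homogeneous Dirichlet boundaries; it does not reproduce the paper's 3D GPU implementation or its algebraic-multigrid variant.

```python
import numpy as np

def smooth(u, f, h, iters=3):
    """Weighted-Jacobi smoothing for -u'' = f on a uniform 1D grid."""
    for _ in range(iters):
        u[1:-1] += 0.8 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    """One geometric-multigrid V-cycle for -u'' = f with Dirichlet boundaries."""
    u = smooth(u, f, h)                       # pre-smoothing
    if u.size <= 3:
        return u                              # coarsest grid: smoothing suffices
    r = residual(u, f, h)
    r_coarse = r[::2].copy()                  # restriction by injection
    e_coarse = v_cycle(np.zeros_like(r_coarse), r_coarse, 2 * h)
    e = np.zeros_like(u)                      # prolongation by linear interpolation
    e[::2] = e_coarse
    e[1::2] = 0.5 * (e_coarse[:-1] + e_coarse[1:])
    u += e                                    # coarse-grid correction
    return smooth(u, f, h)                    # post-smoothing

n = 129                                       # 2^k + 1 points so coarsening is exact
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)            # exact solution is sin(pi * x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)                      # error shrinks by a large factor per cycle
```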
Interactive collision detection for deformable models using streaming AABBs.
Zhang, Xinyu; Kim, Young J
2007-01-01
We present an interactive and accurate collision detection algorithm for deformable, polygonal objects based on the streaming computational model. Our algorithm can detect all possible pairwise primitive-level intersections between two severely deforming models at highly interactive rates. In our streaming computational model, we consider a set of axis aligned bounding boxes (AABBs) that bound each of the given deformable objects as an input stream and perform massively-parallel pairwise, overlapping tests onto the incoming streams. As a result, we are able to prevent performance stalls in the streaming pipeline that can be caused by expensive indexing mechanism required by bounding volume hierarchy-based streaming algorithms. At runtime, as the underlying models deform over time, we employ a novel, streaming algorithm to update the geometric changes in the AABB streams. Moreover, in order to get only the computed result (i.e., collision results between AABBs) without reading back the entire output streams, we propose a streaming en/decoding strategy that can be performed in a hierarchical fashion. After determining overlapped AABBs, we perform a primitive-level (e.g., triangle) intersection checking on a serial computational model such as CPUs. We implemented the entire pipeline of our algorithm using off-the-shelf graphics processors (GPUs), such as nVIDIA GeForce 7800 GTX, for streaming computations, and Intel Dual Core 3.4G processors for serial computations. We benchmarked our algorithm with different models of varying complexities, ranging from 15K up to 50K triangles, under various deformation motions, and the timings were obtained as 30-100 FPS depending on the complexity of models and their relative configurations. Finally, we made comparisons with a well-known GPU-based collision detection algorithm, CULLIDE [4] and observed about three times performance improvement over the earlier approach. We also made comparisons with a SW-based AABB culling algorithm [2] and observed about two times improvement.
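The core test in such a pipeline is the pairwise overlap check between axis-aligned bounding boxes: two boxes overlap exactly when their intervals overlap on every axis. Below is a minimal CPU sketch of that test, vectorized with NumPy rather than implemented as the GPU stream processing described in the paper; the example boxes are illustrative.

```python
import numpy as np

def aabb_overlaps(mins_a, maxs_a, mins_b, maxs_b):
    """Return a boolean matrix O where O[i, j] is True iff AABB i of set A
    overlaps AABB j of set B (overlap required on every axis)."""
    lo = np.maximum(mins_a[:, None, :], mins_b[None, :, :])
    hi = np.minimum(maxs_a[:, None, :], maxs_b[None, :, :])
    return np.all(lo <= hi, axis=-1)

# Two tiny example sets of 3D boxes given as (min corner, max corner).
mins_a = np.array([[0.0, 0.0, 0.0], [2.0, 2.0, 2.0]])
maxs_a = np.array([[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]])
mins_b = np.array([[0.5, 0.5, 0.5]])
maxs_b = np.array([[2.5, 2.5, 2.5]])

print(aabb_overlaps(mins_a, maxs_a, mins_b, maxs_b))  # both boxes in A touch the box in B
```

Pairs flagged here would then go to the exact triangle-level intersection test, as in the paper's final stage.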
NASA Astrophysics Data System (ADS)
Holden, Jacob R.
Descending maple seeds generate lift to slow their fall and remain aloft in a blowing wind; have the wings of these seeds evolved to descend as slowly as possible? A unique energy balance equation, experimental data, and computational fluid dynamics simulations have all been developed to explore this question from a turbomachinery perspective. The computational fluid dynamics in this work is the first to be performed in the relative reference frame. Maple seed performance has been analyzed for the first time based on principles of wind turbine analysis. Application of the Betz Limit and one-dimensional momentum theory allowed for empirical and computational power and thrust coefficients to be computed for maple seeds. It has been determined that the investigated species of maple seeds perform near the Betz limit for power conversion and thrust coefficient. The power coefficient for a maple seed is found to be in the range of 48-54% and the thrust coefficient in the range of 66-84%. From Betz theory, the stream tube area expansion of the maple seed is necessary for power extraction. Further investigation of computational solutions and mechanical analysis find three key reasons for high maple seed performance. First, the area expansion is driven by maple seed lift generation changing the fluid momentum and requiring area to increase. Second, radial flow along the seed surface is promoted by a sustained leading edge vortex that centrifuges low momentum fluid outward. Finally, the area expansion is also driven by the spanwise area variation of the maple seed imparting a radial force on the flow. These mechanisms result in a highly effective device for the purpose of seed dispersal. However, the maple seed also provides insight into fundamental questions about how turbines can most effectively change the momentum of moving fluids in order to extract useful power or dissipate kinetic energy.
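Since the abstract reports maple-seed performance in terms of wind-turbine power and thrust coefficients, the underlying definitions are worth stating: Cp = P / (1/2 ρ A V³) and Ct = T / (1/2 ρ A V²), with the Betz limit of 16/27 bounding Cp in one-dimensional momentum theory. The sketch below shows that bookkeeping with purely illustrative numbers, not the dissertation's measured data.

```python
import numpy as np

RHO = 1.225           # air density, kg/m^3
BETZ_LIMIT = 16 / 27  # ~0.593, the momentum-theory maximum for Cp

def power_coefficient(power, area, v):
    """Cp = P / (0.5 * rho * A * V^3): fraction of the incoming
    kinetic-energy flux through the disk that is extracted."""
    return power / (0.5 * RHO * area * v ** 3)

def thrust_coefficient(thrust, area, v):
    """Ct = T / (0.5 * rho * A * V^2)."""
    return thrust / (0.5 * RHO * area * v ** 2)

# Illustrative values for a descending seed "rotor" of 3 cm radius.
area = np.pi * 0.03 ** 2   # swept disk area, m^2
v = 1.0                    # descent speed (relative wind), m/s
power = 0.9e-3             # hypothetical extracted power, W
thrust = 1.3e-3            # hypothetical thrust (roughly the seed weight), N

print(f"Cp = {power_coefficient(power, area, v):.2f} (Betz limit {BETZ_LIMIT:.2f})")
print(f"Ct = {thrust_coefficient(thrust, area, v):.2f}")
```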
Flügge, Tabea Viktoria; Nelson, Katja; Schmelzeisen, Rainer; Metzger, Marc Christian
2013-08-01
To present an efficient workflow for the production of implant drilling guides using virtual planning tools. For this purpose, laser surface scanning, cone beam computed tomography, computer-aided design and manufacturing, and 3-dimensional (3D) printing were combined. Intraoral optical impressions (iTero, Align Technologies, Santa Clara, CA) and digital 3D radiographs (cone beam computed tomography) were performed at the first consultation of 1 exemplary patient. With image processing techniques, the intraoral surface data, acquired using an intraoral scanner, and radiologic 3D data were fused. The virtual implant planning process (using virtual library teeth) and the in-office production of the implant drilling guide was performed after only 1 clinical consultation of the patient. Implant surgery with a computer-aided design and manufacturing produced implant drilling guide was performed during the second consultation. The production of a scan prosthesis and multiple preoperative consultations of the patient were unnecessary. The presented procedure offers another step in facilitating the production of drilling guides in dental implantology. Four main advantages are realized with this procedure. First, no additional scan prosthesis is needed. Second, data acquisition can be performed during the first consultation. Third, the virtual planning is directly transferred to the drilling guide without a loss of accuracy. Finally, the treatment cost and time required are reduced with this facilitated production process. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
A Wearable Channel Selection-Based Brain-Computer Interface for Motor Imagery Detection.
Lo, Chi-Chun; Chien, Tsung-Yi; Chen, Yu-Chun; Tsai, Shang-Ho; Fang, Wai-Chi; Lin, Bor-Shyh
2016-02-06
Motor imagery-based brain-computer interface (BCI) is a communication interface between an external machine and the brain. Many kinds of spatial filters are used in BCIs to enhance the electroencephalography (EEG) features related to motor imagery. The approach of channel selection, developed to reserve meaningful EEG channels, is also an important technique for the development of BCIs. However, current BCI systems require a conventional EEG machine and EEG electrodes with conductive gel to acquire multi-channel EEG signals and then transmit these EEG signals to the back-end computer to perform the approach of channel selection. This reduces the convenience of use in daily life and increases the limitations of BCI applications. In order to address these issues, a novel wearable channel selection-based brain-computer interface is proposed. Here, retractable comb-shaped active dry electrodes are designed to measure the EEG signals on a hairy site without conductive gel. Through the design of analog CAR spatial filters and the firmware of the EEG acquisition module, the function of spatial filtering could be performed without any calculation, and channel selection could be performed in the front-end device to improve the practicability of detecting motor imagery in the wearable EEG device directly or in commercial mobile phones or tablets, which may have relatively low system specifications. Finally, the performance of the proposed BCI is investigated, and the experimental results show that the proposed system is a good wearable BCI system prototype.
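The common average reference (CAR) spatial filter that the authors implement in analog hardware has a simple digital counterpart: each channel is re-referenced by subtracting, sample by sample, the mean of all channels. A minimal sketch of that operation (not the authors' analog front-end; the synthetic data are illustrative):

```python
import numpy as np

def common_average_reference(eeg):
    """Apply a common average reference (CAR) spatial filter.

    eeg: array of shape (n_channels, n_samples). Each channel is
    re-referenced by subtracting the per-sample mean over all channels."""
    return eeg - eeg.mean(axis=0, keepdims=True)

# Example: 4 channels, 1 s of synthetic data at 250 Hz.
rng = np.random.default_rng(0)
eeg = rng.normal(size=(4, 250))
eeg_car = common_average_reference(eeg)
print(eeg_car.mean(axis=0)[:5])   # ~0 at every sample after CAR
```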
Toward Petascale Biologically Plausible Neural Networks
NASA Astrophysics Data System (ADS)
Long, Lyle
This talk will describe an approach to achieving petascale neural networks. Artificial intelligence has been oversold for many decades. Computers in the beginning could only do about 16,000 operations per second. Computer processing power, however, has been doubling every two years thanks to Moore's law, and growing even faster due to massively parallel architectures. Finally, 60 years after the first AI conference we have computers on the order of the performance of the human brain (10^16 operations per second). The main issues now are algorithms, software, and learning. We have excellent models of neurons, such as the Hodgkin-Huxley model, but we do not know how the human neurons are wired together. With careful attention to efficient parallel computing, event-driven programming, table lookups, and memory minimization massive scale simulations can be performed. The code that will be described was written in C++ and uses the Message Passing Interface (MPI). It uses the full Hodgkin-Huxley neuron model, not a simplified model. It also allows arbitrary network structures (deep, recurrent, convolutional, all-to-all, etc.). The code is scalable, and has, so far, been tested on up to 2,048 processor cores using 10^7 neurons and 10^9 synapses.
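The talk's C++/MPI code is not reproduced here, but a minimal single-neuron sketch of the full Hodgkin-Huxley model it relies on (classic 1952 squid-axon parameters, forward-Euler integration) shows the state that each simulated neuron carries; the stimulus current and step size below are illustrative assumptions.

```python
import numpy as np

# Classic Hodgkin-Huxley (1952) parameters: mV, ms, uF/cm^2, mS/cm^2.
C_M = 1.0
G_NA, G_K, G_L = 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.4

def alpha_beta(v):
    """Voltage-dependent gating rate constants (1/ms)."""
    a_m = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(v + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(v + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
    a_n = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(v + 65.0) / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n

def simulate(i_ext=10.0, dt=0.01, t_max=50.0):
    """Forward-Euler integration of one Hodgkin-Huxley neuron."""
    v, m, h, n = -65.0, 0.05, 0.6, 0.32   # resting state
    trace = []
    for _ in np.arange(0.0, t_max, dt):
        a_m, b_m, a_h, b_h, a_n, b_n = alpha_beta(v)
        i_ion = (G_NA * m**3 * h * (v - E_NA)
                 + G_K * n**4 * (v - E_K)
                 + G_L * (v - E_L))
        v += dt * (i_ext - i_ion) / C_M
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        trace.append(v)
    return np.array(trace)

vm = simulate()   # membrane potential trace; shows repetitive spiking
```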
Optimal Filter Estimation for Lucas-Kanade Optical Flow
Sharmin, Nusrat; Brad, Remus
2012-01-01
Optical flow algorithms offer a way to estimate motion from a sequence of images. The computation of optical flow plays a key role in several computer vision applications, including motion detection and segmentation, frame interpolation, three-dimensional scene reconstruction, robot navigation and video compression. In gradient-based optical flow implementations, the pre-filtering step plays a vital role, not only for accurate computation of optical flow, but also for the improvement of performance. Generally, in optical flow computation, filtering is applied first to the original input images and afterwards the images are resized. In this paper, we propose an image filtering approach as a pre-processing step for the Lucas-Kanade pyramidal optical flow algorithm. Based on a study of different types of filtering methods applied to the Iterative Refined Lucas-Kanade algorithm, we have identified the best filtering practice. As the Gaussian smoothing filter was selected, an empirical approach for the Gaussian variance estimation was introduced. Tested on the Middlebury image sequences, a correlation between the image intensity value and the standard deviation value of the Gaussian function was established. Finally, we have found that our selection method offers better performance for the Lucas-Kanade optical flow algorithm.
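The building blocks discussed here (Gaussian pre-filtering followed by the per-window 2x2 least-squares solve of Lucas-Kanade) can be sketched compactly. The code below is a minimal single-level version under assumed default parameters; it is not the authors' pyramidal, iteratively refined implementation or their variance-selection rule.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def lucas_kanade(frame0, frame1, sigma=1.5, window=5):
    """Single-level Lucas-Kanade optical flow with Gaussian pre-filtering.
    Returns per-pixel flow components (u, v)."""
    f0 = gaussian_filter(frame0.astype(float), sigma)   # pre-filtering step
    f1 = gaussian_filter(frame1.astype(float), sigma)

    ix = np.gradient(f0, axis=1)        # spatial gradients
    iy = np.gradient(f0, axis=0)
    it = f1 - f0                        # temporal gradient

    # Window sums of the structure-tensor terms (box window for simplicity).
    sxx = uniform_filter(ix * ix, window)
    sxy = uniform_filter(ix * iy, window)
    syy = uniform_filter(iy * iy, window)
    sxt = uniform_filter(ix * it, window)
    syt = uniform_filter(iy * it, window)

    det = sxx * syy - sxy ** 2
    det[np.abs(det) < 1e-9] = np.inf    # skip near-singular windows
    u = (-syy * sxt + sxy * syt) / det  # solve the 2x2 normal equations
    v = (sxy * sxt - sxx * syt) / det
    return u, v
```

The paper's contribution amounts to choosing `sigma` from the image intensity statistics rather than fixing it as assumed here.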
Study on the effects of flow in the volute casing on the performance of a sirocco fan
NASA Astrophysics Data System (ADS)
Adachi, Tsutomu; Sugita, Naohiro; Ohomori, Satoshi
2004-08-01
The flow at the exit from the runner blade of a centrifugal fan with forward curved blades (a sirocco fan) sometimes separates and becomes unstable. We have conducted extensive research on the impeller shape of the sirocco fan, in which proper inlet and exit blade angles were considered to obtain optimum performance. In this paper, the casing shape was varied by changing the circumferential angle, the magnifying angle, and the width; 21 casing variants were used. Performance tests were conducted, and internal flow velocity and pressure distributions were measured as well. Computational fluid dynamics calculations were also made and compared with the experimental results. Finally, the most suitable casing shape for best performance is considered.
CCOMP: An efficient algorithm for complex roots computation of determinantal equations
NASA Astrophysics Data System (ADS)
Zouros, Grigorios P.
2018-01-01
In this paper a free Python algorithm, entitled CCOMP (Complex roots COMPutation), is developed for the efficient computation of complex roots of determinantal equations inside a prescribed complex domain. The key to the presented method is the efficient determination of candidate points inside the domain in whose close neighborhood a complex root may lie. Once these points are detected, the algorithm proceeds to a two-dimensional minimization problem with respect to the minimum modulus eigenvalue of the system matrix. At the core of CCOMP are three sub-algorithms whose tasks are the efficient estimation of the minimum modulus eigenvalues of the system matrix inside the prescribed domain, the efficient computation of candidate points that guarantee the existence of minima, and finally, the computation of minima via bound-constrained minimization algorithms. Theoretical results and heuristics support the development and the performance of the algorithm, which is discussed in detail. CCOMP supports general complex matrices, and its efficiency, applicability, and validity are demonstrated on a variety of microwave applications.
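The central quantity in the method is the minimum-modulus eigenvalue of the system matrix evaluated over the complex domain. Below is a minimal sketch of that scan for a toy 2x2 determinantal system whose determinant vanishes at z = ±1j; it is not the CCOMP code itself, which adds candidate-point selection and bound-constrained refinement on top of such a scan.

```python
import numpy as np

def system_matrix(z):
    """Toy determinantal system A(z); det A(z) = z**2 + 1 vanishes at z = +/- 1j."""
    return np.array([[z, 1.0], [-1.0, z]], dtype=complex)

def min_modulus_eigenvalue(z):
    """Modulus of the smallest-magnitude eigenvalue of A(z)."""
    return np.abs(np.linalg.eigvals(system_matrix(z))).min()

# Scan a rectangle of the complex plane and keep the most promising grid point.
re = np.linspace(-2.0, 2.0, 81)
im = np.linspace(-2.0, 2.0, 81)
vals = np.array([[min_modulus_eigenvalue(r + 1j * i) for r in re] for i in im])

i0, j0 = np.unravel_index(vals.argmin(), vals.shape)
print(f"candidate root near z = {re[j0]:+.2f}{im[i0]:+.2f}j "
      f"(min |eig| = {vals[i0, j0]:.2e})")
```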
Cognitive performance deficits in a simulated climb of Mount Everest - Operation Everest II
NASA Technical Reports Server (NTRS)
Kennedy, R. S.; Dunlap, W. P.; Banderet, L. E.; Smith, M. G.; Houston, C. S.
1989-01-01
Cognitive function at simulated altitude was investigated in a repeated-measures within-subject study of performance by seven volunteers in a hypobaric chamber, in which atmospheric pressure was systematically lowered over a period of 40 d to finally reach a pressure equivalent to 8845 m, the approximate height of Mount Everest. The automated performance test system employed compact computer design; automated test administrations, data storage, and retrieval; psychometric properties of stability and reliability; and factorial richness. Significant impairments of cognitive function were seen for three of the five tests in the battery; on two tests, grammatical reasoning and pattern comparison, every subject showed a substantial decrement.
Performance of DPSK with convolutional encoding on time-varying fading channels
NASA Technical Reports Server (NTRS)
Mui, S. Y.; Modestino, J. W.
1977-01-01
The bit error probability performance of a differentially-coherent phase-shift keyed (DPSK) modem with convolutional encoding and Viterbi decoding on time-varying fading channels is examined. Both the Rician and the lognormal channels are considered. Bit error probability upper bounds on fully-interleaved (zero-memory) fading channels are derived and substantiated by computer simulation. It is shown that the resulting coded system performance is a relatively insensitive function of the choice of channel model provided that the channel parameters are related according to the correspondence developed as part of this paper. Finally, a comparison of DPSK with a number of other modulation strategies is provided.
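On an ideal non-fading AWGN channel, uncoded binary DPSK has the closed-form bit error probability P_b = (1/2) exp(-Eb/N0), the usual baseline against which coded, fading-channel results of this kind are compared. A small sketch of that baseline follows; it does not reproduce the paper's Rician/lognormal fading analysis or its Viterbi-decoded bounds.

```python
import numpy as np

def dpsk_ber_awgn(ebn0_db):
    """Uncoded binary DPSK bit error probability on an AWGN channel:
    Pb = 0.5 * exp(-Eb/N0)."""
    ebn0 = 10.0 ** (np.asarray(ebn0_db) / 10.0)   # dB -> linear
    return 0.5 * np.exp(-ebn0)

for snr in (4, 6, 8, 10):
    print(f"Eb/N0 = {snr:2d} dB  ->  Pb = {dpsk_ber_awgn(snr):.2e}")
```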
Prediction and Stability of Mathematics Skill and Difficulty
Martin, Rebecca B.; Cirino, Paul T.; Barnes, Marcia A.; Ewing-Cobbs, Linda; Fuchs, Lynn S.; Stuebing, Karla K.; Fletcher, Jack M.
2016-01-01
The present study evaluated the stability of math learning difficulties over a 2-year period and investigated several factors that might influence this stability (categorical vs. continuous change, liberal vs. conservative cut point, broad vs. specific math assessment); the prediction of math performance over time and by performance level was also evaluated. Participants were 144 students initially identified as having a math difficulty (MD) or no learning difficulty according to low achievement criteria in the spring of Grade 3 or Grade 4. Students were reassessed 2 years later. For both measure types, a similar proportion of students changed whether assessed categorically or continuously. However, categorical change was heavily dependent on distance from the cut point and so more common for MD, who started closer to the cut point; reliable change index change was more similar across groups. There were few differences with regard to severity level of MD on continuous metrics or in terms of prediction. Final math performance on a broad computation measure was predicted by behavioral inattention and working memory while considering initial performance; for a specific fluency measure, working memory was not uniquely related, and behavioral inattention more variably related to final performance, again while considering initial performance. PMID:22392890
A Systematic Investigation of Computation Models for Predicting Adverse Drug Reactions (ADRs)
Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong
2014-01-01
Background: Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance for constructing more effective computational models to predict ADRs. Principal Findings: In the current study, the main work is to compare and analyze the performance of existing computational methods to predict ADRs, by implementing and evaluating additional algorithms that have previously been used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that the final formulas of these algorithms could all be converted to linear models in form; based on this finding, we propose a new algorithm called the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Conclusion: Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms. PMID:25180585
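The Jaccard coefficient highlighted in the findings is the intersection-over-union of two drugs' known ADR sets, and a weighted-profile-style prediction scores a candidate drug-ADR pair by similarity-weighted voting over known associations. The sketch below illustrates those two ingredients on hypothetical data; it is a generic profile method, not the paper's exact "general weighted profile method".

```python
import numpy as np

def jaccard(a, b):
    """Jaccard coefficient between two sets of known ADR indices."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical known drug-ADR associations (rows: drugs, columns: ADRs).
drugs = ["drugA", "drugB", "drugC"]
adrs = ["nausea", "rash", "dizziness"]
known = np.array([[1, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]], dtype=float)

# Drug-drug similarity from the Jaccard coefficient of their ADR profiles.
sim = np.array([[jaccard(np.flatnonzero(known[i]), np.flatnonzero(known[j]))
                 for j in range(len(drugs))] for i in range(len(drugs))])

# Weighted-profile score: similarity-weighted average of neighbours' profiles.
scores = sim @ known / sim.sum(axis=1, keepdims=True)
print(scores.round(2))   # higher score -> more likely drug-ADR association
```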
A Talking Computers System for Persons with Vision and Speech Handicaps. Final Report.
ERIC Educational Resources Information Center
Visek & Maggs, Urbana, IL.
This final report contains a detailed description of six software systems designed to assist individuals with blindness and/or speech disorders in using inexpensive, off-the-shelf computers rather than expensive custom-made devices. The developed software is not written in the native machine language of any particular brand of computer, but in the…
Software for Brain Network Simulations: A Comparative Study
Tikidji-Hamburyan, Ruben A.; Narayana, Vikram; Bozkus, Zeki; El-Ghazawi, Tarek A.
2017-01-01
Numerical simulations of brain networks are a critical part of our efforts in understanding brain functions under pathological and normal conditions. For several decades, the community has developed many software packages and simulators to accelerate research in computational neuroscience. In this article, we select the three most popular simulators, as determined by the number of models in the ModelDB database, namely NEURON, GENESIS, and BRIAN, and perform an independent evaluation of these simulators. In addition, we study NEST, one of the lead simulators of the Human Brain Project. First, we study them based on one of the most important characteristics, the range of supported models. Our investigation reveals that brain network simulators may be biased toward supporting a specific set of models. However, all simulators tend to expand the supported range of models by providing a universal environment for the computational study of individual neurons and brain networks. Next, our investigations on the characteristics of computational architecture and efficiency indicate that all simulators compile the most computationally intensive procedures into binary code, with the aim of maximizing their computational performance. However, not all simulators provide the simplest method for module development and/or guarantee efficient binary code. Third, a study of their amenability for high-performance computing reveals that NEST can almost transparently map an existing model on a cluster or multicore computer, while NEURON requires code modification if a model developed for a single computer has to be mapped onto a computational cluster. Interestingly, parallelization is the weakest characteristic of BRIAN, which provides no support for cluster computations and limited support for multicore computers. Fourth, we identify the level of user support and frequency of usage for all simulators. Finally, we carry out an evaluation using two case studies: a large network with simplified neural and synaptic models and a small network with detailed models. These two case studies allow us to avoid any bias toward a particular software package. The results indicate that BRIAN provides the most concise language for both cases considered. Furthermore, as expected, NEST mostly favors large network models, while NEURON is better suited for detailed models. Overall, the case studies reinforce our general observation that simulators have a bias in computational performance toward specific types of brain network models. PMID:28775687
[Activities of the Department of Electrical Engineering, Howard University
NASA Technical Reports Server (NTRS)
Yalamanchili, Raj C.
1997-01-01
Theoretical derivations, computer analysis and test data are provided to demonstrate that the cavity model is a feasible one to analyze thin-substrate, rectangular-patch microstrip antennas. Seven separate antennas were tested. Most of the antennas were designed to resonate at L-band frequencies (1-2 GHz). One antenna was designed to resonate at an S-band (2-4 GHz) frequency of 2.025 GHz. All dielectric substrates were made of Duroid, and were of varying thicknesses and relative dielectric constant values. Theoretical derivations to calculate radiated free space electromagnetic fields and antenna input impedance were performed. MATHEMATICA 2.2 software was used to generate Smith Chart input impedance plots, normalized relative power radiation plots and to perform other numerical manipulations. Network Analyzer tests were used to verify the data from the computer programming (such as input impedance and VSWR). Finally, tests were performed in an anechoic chamber to measure receive-mode polar power patterns in the E and H planes. Agreement between computer analysis and test data is presented. The antenna with the thickest substrate (εr = 2.33, 62 mils thick) showed the worst match to theoretical impedance data. This is anticipated due to the fact that the cavity model generally loses accuracy when the dielectric substrate thickness exceeds 5% of the antenna's free space wavelength. A method of reducing computer execution time for impedance calculations is also presented.
A baroclinic quasigeostrophic open ocean model
NASA Technical Reports Server (NTRS)
Miller, R. N.; Robinson, A. R.; Haidvogel, D. B.
1983-01-01
A baroclinic quasigeostrophic open ocean model is presented, calibrated by a series of test problems, and demonstrated to be feasible and efficient for application to realistic mid-oceanic mesoscale eddy flow regimes. Two methods of treating the depth dependence of the flow, a finite difference method and a collocation method, are tested and intercompared. Sample Rossby wave calculations with and without advection are performed with constant stratification and two levels of nonlinearity, one weaker than and one typical of real ocean flows. Using exact analytical solutions for comparison, the accuracy and efficiency of the model is tabulated as a function of the computational parameters and stability limits set; typically, errors were controlled between 1 percent and 10 percent RMS after two wave periods. Further Rossby wave tests with realistic stratification and wave parameters chosen to mimic real ocean conditions were performed to determine computational parameters for use with real and simulated data. Finally, a prototype calculation with quasiturbulent simulated data was performed successfully, which demonstrates the practicality of the model for scientific use.
WTO — a deterministic approach to 4-fermion physics
NASA Astrophysics Data System (ADS)
Passarino, Giampiero
1996-09-01
The program WTO, which is designed for computing cross sections and other relevant observables in the e+e- annihilation into four fermions, is described. The various quantities are computed over both a completely inclusive experimental set-up and a realistic one, i.e. with cuts on the final state energies, final state angles, scattering angles and final state invariant masses. Initial state QED corrections are included by means of the structure function approach while final state QCD corrections are applicable in their naive formulation. A gauge restoring mechanism is included according to the Fermion-Loop scheme. The program structure is highly modular and particular care has been devoted to computing efficiency and speed.
Assessing the limitations of the Banister model in monitoring training
Hellard, Philippe; Avalos, Marta; Lacoste, Lucien; Barale, Frédéric; Chatard, Jean-Claude; Millet, Grégoire P.
2006-01-01
The aim of this study was to carry out a statistical analysis of the Banister model to verify how useful it is in monitoring the training programmes of elite swimmers. The accuracy, the ill-conditioning and the stability of this model were thus investigated. Training loads of nine elite swimmers, measured over one season, were related to performances with the Banister model. Firstly, to assess accuracy, the 95% bootstrap confidence interval (95% CI) of parameter estimates and modelled performances were calculated. Secondly, to study ill-conditioning, the correlation matrix of parameter estimates was computed. Finally, to analyse stability, iterative computation was performed with the same data but minus one performance, chosen randomly. Performances were significantly related to training loads in all subjects (R² = 0.79 ± 0.13, P < 0.05) and the estimation procedure seemed to be stable. Nevertheless, the 95% CI of the most useful parameters for monitoring training were wide: τa = 38 (17, 59), τf = 19 (6, 32), tn = 19 (7, 35), tg = 43 (25, 61). Furthermore, some parameters were highly correlated, making their interpretation worthless. The study suggested possible ways to deal with these problems and reviewed alternative methods to model the training-performance relationships. PMID:16608765
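The Banister impulse-response (fitness-fatigue) model referenced here is usually written as a baseline performance plus a fitness term minus a fatigue term, each an exponentially weighted sum of past training loads. A minimal sketch of that standard form; the loads and parameter values below are illustrative, not the swimmers' estimates:

```python
import math

def banister_performance(loads, p0, k1, k2, tau1, tau2):
    """Banister fitness-fatigue model.

    loads: daily training loads w_1..w_N
    p(n) = p0 + k1 * sum_{i<n} w_i * exp(-(n - i) / tau1)
              - k2 * sum_{i<n} w_i * exp(-(n - i) / tau2)
    """
    performance = []
    for n in range(1, len(loads) + 1):
        fitness = sum(w * math.exp(-(n - i) / tau1) for i, w in enumerate(loads[:n - 1], start=1))
        fatigue = sum(w * math.exp(-(n - i) / tau2) for i, w in enumerate(loads[:n - 1], start=1))
        performance.append(p0 + k1 * fitness - k2 * fatigue)
    return performance

loads = [100, 120, 0, 90, 110, 0, 0, 130]    # illustrative daily loads
print(banister_performance(loads, p0=500, k1=0.10, k2=0.25, tau1=42, tau2=7))
```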
2012-04-01
DOD INSIGHTS, Spring 2012: a publication of the Department of Defense High Performance Computing...
The time resolution of the St Petersburg paradox
Peters, Ole
2011-01-01
A resolution of the St Petersburg paradox is presented. In contrast to the standard resolution, utility is not required. Instead, the time-average performance of the lottery is computed. The final result can be phrased mathematically identically to Daniel Bernoulli's resolution, which uses logarithmic utility, but is derived using a conceptually different argument. The advantage of the time resolution is the elimination of arbitrary utility functions. PMID:22042904
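Peters' time-average argument can be reproduced numerically. Under the convention that the payout after k tosses is 2^(k-1), the expected exponential growth rate of wealth per round for a player with wealth w paying price c is a probability-weighted sum of log wealth ratios, which coincides with Bernoulli's log-utility expression. A small sketch of that computation, truncating the infinite sum:

```python
import math

def time_average_growth(wealth, price, max_rounds=60):
    """Expected per-round exponential growth rate of wealth for the St Petersburg
    lottery: sum_k 2^{-k} * ln((wealth - price + 2^{k-1}) / wealth).
    Requires price < wealth + 1 so the first log argument stays positive."""
    return sum(2.0 ** -k * math.log((wealth - price + 2.0 ** (k - 1)) / wealth)
               for k in range(1, max_rounds + 1))

# The growth rate is positive (lottery worth playing) only for low enough ticket prices.
for price in (2, 4, 8, 16):
    print(price, round(time_average_growth(wealth=100, price=price), 4))
```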
ERIC Educational Resources Information Center
Becker, David S.; Pyrce, Sharon R.
The goal of this project was to find ways of enhancing the efficiency of searching machine readable data bases. Ways are sought to transfer to the computer some of the tasks that are normally performed by the user, i.e., to further automate information retrieval. Four experiments were conducted to test the feasibility of a sequential processing…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spangler, Lee; Cunningham, Alfred; Lageson, David
2011-03-31
ZERT has made major contributions to five main areas of sequestration science: improvement of computational tools; measurement and monitoring techniques to verify storage and track migration of CO₂; development of a comprehensive performance and risk assessment framework; fundamental geophysical, geochemical and hydrological investigations of CO₂ storage; and investigation of innovative, bio-based mitigation strategies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freitez, Juan A.; Sanchez, Morella; Ruette, Fernando
Application of simulated annealing (SA) and simplified GSA (SGSA) techniques for parameter optimization of the parametric quantum chemistry method (CATIVIC) was performed. A set of organic molecules was selected to test these techniques. Comparison of the algorithms was carried out for error function minimization with respect to experimental values. Results show that SGSA is more efficient than SA with respect to computer time. Accuracy is similar in both methods; however, there are important differences in the final set of parameters.
ERIC Educational Resources Information Center
Tallman, Oliver H.
A digital simulation of a model for the processing of visual images is derived from known aspects of the human visual system. The fundamental principle of computation suggested by a biological model is a transformation that distributes information contained in an input stimulus everywhere in a transform domain. Each sensory input contributes under…
What’s Wrong With Automatic Speech Recognition (ASR) and How Can We Fix It?
2013-03-01
Jordan Cohen, International Computer Science Institute, 1947 Center Street, Suite 600, Berkeley, CA 94704. March 2013 Final Report. Cleared for public release by the 88th Air Base Wing Public Affairs Office; published by the 711th Human Performance Wing, Air Force Research Laboratory.
Knowledge modeling tool for evidence-based design.
Durmisevic, Sanja; Ciftcioglu, Ozer
2010-01-01
The aim of this study is to take evidence-based design (EBD) to the next level by activating available knowledge, integrating new knowledge, and combining them for more efficient use by the planning and design community. This article outlines a framework for a performance-based measurement tool that can provide the necessary decision support during the design or evaluation of a healthcare environment by estimating the overall design performance of multiple variables. New knowledge in EBD adds continuously to complexity (the "information explosion"), and it becomes impossible to consider all aspects (design features) at the same time, much less their impact on final building performance. How can existing knowledge and the information explosion in healthcare-specifically the domain of EBD-be rendered manageable? Is it feasible to create a computational model that considers many design features and deals with them in an integrated way, rather than one at a time? The found evidence is structured and readied for computation through a "fuzzification" process. The weights are calculated using an analytical hierarchy process. Actual knowledge modeling is accomplished through a fuzzy neural tree structure. The impact of all inputs on the outcome-in this case, patient recovery-is calculated using sensitivity analysis. Finally, the added value of the model is discussed using a hypothetical case study of a patient room. The proposed model can deal with the complexities of various aspects and the relationships among variables in a coordinated way, allowing existing and new pieces of evidence to be integrated in a knowledge tree structure that facilitates understanding of the effects of various design interventions on overall design performance.
Exploiting graphics processing units for computational biology and bioinformatics.
Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H
2010-09-01
Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.
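The all-pairs distance computation that the article uses as its running example can be expressed compactly on the CPU with NumPy; the sketch below mirrors the arithmetic a GPU kernel would parallelize, but it is not the article's CUDA code:

```python
import numpy as np

def all_pairs_euclidean(X):
    """Pairwise Euclidean distances between the rows of X (n_instances x n_features),
    using the expansion ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b."""
    sq = np.sum(X * X, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    np.maximum(d2, 0.0, out=d2)      # guard against small negative rounding errors
    return np.sqrt(d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))      # 1000 instances, 32 features
D = all_pairs_euclidean(X)
print(D.shape, float(D[0, 1]))
```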
A characterization of workflow management systems for extreme-scale applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia
The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over the last years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today’s computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed as extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.
A characterization of workflow management systems for extreme-scale applications
Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia; ...
2017-02-16
The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over the last years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today’s computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed as extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.
AnRAD: A Neuromorphic Anomaly Detection Framework for Massive Concurrent Data Streams.
Chen, Qiuwen; Luley, Ryan; Wu, Qing; Bishop, Morgan; Linderman, Richard W; Qiu, Qinru
2018-05-01
The evolution of high performance computing technologies has enabled the large-scale implementation of neuromorphic models and pushed the research in computational intelligence into a new era. Among the machine learning applications, unsupervised detection of anomalous streams is especially challenging due to the requirements of detection accuracy and real-time performance. Designing a computing framework that harnesses the growing computing power of the multicore systems while maintaining high sensitivity and specificity to the anomalies is an urgent research topic. In this paper, we propose anomaly recognition and detection (AnRAD), a bioinspired detection framework that performs probabilistic inferences. We analyze the feature dependency and develop a self-structuring method that learns an efficient confabulation network using unlabeled data. This network is capable of fast incremental learning, which continuously refines the knowledge base using streaming data. Compared with several existing anomaly detection approaches, our method provides competitive detection quality. Furthermore, we exploit the massive parallel structure of the AnRAD framework. Our implementations of the detection algorithm on the graphics processing unit and the Xeon Phi coprocessor both obtain substantial speedups over the sequential implementation on a general-purpose microprocessor. The framework provides real-time service to concurrent data streams within diversified knowledge contexts, and can be applied to large problems with multiple local patterns. Experimental results demonstrate high computing performance and memory efficiency. For vehicle behavior detection, the framework is able to monitor up to 16000 vehicles (data streams) and their interactions in real time with a single commodity coprocessor, and uses less than 0.2 ms for one testing subject. Finally, the detection network is ported to our spiking neural network simulator to show the potential of adapting to the emerging neuromorphic architectures.
Performance of the engineering analysis and data system 2 common file system
NASA Technical Reports Server (NTRS)
Debrunner, Linda S.
1993-01-01
The Engineering Analysis and Data System (EADS) was used from April 1986 to July 1993 to support large scale scientific and engineering computation (e.g. computational fluid dynamics) at Marshall Space Flight Center. The need for an updated system resulted in a RFP in June 1991, after which a contract was awarded to Cray Grumman. EADS II was installed in February 1993, and by July 1993 most users were migrated. EADS II is a network of heterogeneous computer systems supporting scientific and engineering applications. The Common File System (CFS) is a key component of this system. The CFS provides a seamless, integrated environment to the users of EADS II including both disk and tape storage. UniTree software is used to implement this hierarchical storage management system. The performance of the CFS suffered during the early months of the production system. Several of the performance problems were traced to software bugs which have been corrected. Other problems were associated with hardware. However, the use of NFS in UniTree UCFM software limits the performance of the system. The performance issues related to the CFS have led to a need to develop a greater understanding of the CFS organization. This paper will first describe the EADS II with emphasis on the CFS. Then, a discussion of mass storage systems will be presented, and methods of measuring the performance of the Common File System will be outlined. Finally, areas for further study will be identified and conclusions will be drawn.
NASA Astrophysics Data System (ADS)
Bu, Xiangwei; Wu, Xiaoyan; Huang, Jiaqi; Wei, Daozhi
2016-11-01
This paper investigates the design of a novel estimation-free prescribed performance non-affine control strategy for the longitudinal dynamics of an air-breathing hypersonic vehicle (AHV) via back-stepping. The proposed control scheme is capable of guaranteeing tracking errors of velocity, altitude, flight-path angle, pitch angle and pitch rate with prescribed performance. By prescribed performance, we mean that the tracking error is limited to a predefined arbitrarily small residual set, with convergence rate no less than a certain constant, exhibiting maximum overshoot less than a given value. Unlike traditional back-stepping designs, there is no need for an affine model in this paper. Moreover, both the tedious analytic and numerical computations of time derivatives of virtual control laws are completely avoided. In contrast to estimation-based strategies, the presented estimation-free controller possesses much lower computational costs, while successfully eliminating the potential problem of parameter drifting. Owing to its independence from an accurate AHV model, the studied methodology exhibits excellent robustness against system uncertainties. Finally, simulation results from a fully nonlinear model clarify and verify the design.
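Prescribed performance designs of this kind typically confine each tracking error e(t) inside a decaying envelope rho(t) = (rho0 - rho_inf) * exp(-l*t) + rho_inf, so that rho_inf sets the residual set, l the minimum convergence rate, and a fraction of rho(0) the maximum overshoot. The following is a minimal sketch of such an envelope check; it is a generic construction with made-up parameters, not the paper's specific controller:

```python
import math

def performance_envelope(t, rho0=1.0, rho_inf=0.05, decay=1.5):
    """Prescribed performance function rho(t) = (rho0 - rho_inf) * exp(-decay * t) + rho_inf."""
    return (rho0 - rho_inf) * math.exp(-decay * t) + rho_inf

def within_prescribed_bounds(error, t, delta=1.0, **kw):
    """Check -delta * rho(t) < error < rho(t); delta < 1 tightens the allowed undershoot."""
    rho = performance_envelope(t, **kw)
    return -delta * rho < error < rho

# Illustrative trajectory of a decaying tracking error:
for t in (0.0, 0.5, 1.0, 2.0, 4.0):
    e = 0.8 * math.exp(-2.0 * t)
    print(t, round(e, 4), within_prescribed_bounds(e, t))
```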
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beesing, M. E.; Buchholz, R. L.; Evans, R. A.
1980-01-01
An investigation of the optical performance of a variety of concentrating solar collectors is reported. The study addresses two important issues: the accuracy of reflective or refractive surfaces required to achieve specified performance goals, and the effect of environmental exposure on the performance of concentrators. To assess the importance of surface accuracy on optical performance, 11 tracking and nontracking concentrator designs were selected for detailed evaluation. Mathematical models were developed for each design and incorporated into a Monte Carlo ray trace computer program to carry out detailed calculations. Results for the 11 concentrators are presented in graphic form. The models and computer program are provided along with a user's manual. A survey data base was established on the effect of environmental exposure on the optical degradation of mirrors and lenses. Information on environmental and maintenance effects was found to be insufficient to permit specific recommendations for operating and maintenance procedures, but the available information is compiled and reported and does contain procedures that other workers have found useful.
ERIC Educational Resources Information Center
Peck, Greg
This document contains (1) the final report of a project to develop a computer-aided drafting (CAD) curriculum and (2) a competency-based unit of instruction for use with the CADAPPLE system. The final report states the problem and project objective, presents conclusions and recommendations, and includes survey instruments. The unit is designed…
Willaume, Thibault; Farrugia, Audrey; Kieffer, Estelle-Marie; Charton, Jeanne; Geraut, Annie; Berthelon, Laurent; Bierry, Guillaume; Raul, Jean-Sébastien
2018-05-01
Nowadays, post-mortem computed tomography (PMCT) has become an integral part of forensic practice. The purpose of the study was to determine the impact of PMCT on diagnosis of the cause of death within the context of the external examination of the body, when an autopsy has not initially been requested. We reviewed the records of 145 cases for which unenhanced PMCT was performed in addition to the external examination of the body from January 2014 to July 2015 at the Institute of Forensic Medicine in Strasbourg (France). We compared the forensic pathologist's final reports with the corresponding PMCT reports. Data were collected in a contingency table and the impact of PMCT on the final conclusions of the forensic pathologist was evaluated via a chi-squared test. PMCT results significantly impact the final conclusions of the forensic pathologist (p < 0.001). In some cases, PMCT permits an etiological diagnosis by revealing a cause of death hidden from external examination (mainly natural death) or by supporting the clinical findings of the forensic pathologist. In other cases (traumatic death), PMCT enables fast and exhaustive lesion assessment. Lastly, there are situations where PMCT may be ineffective (intoxication, hanging or some natural deaths). Performing PMCT within the context of the external examination of the body when an autopsy has not initially been requested could bring significant benefits in diagnosing the cause of death. The impact of PMCT varies depending on the circumstances of death.
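The chi-squared test on a contingency table described here can be reproduced with SciPy; the table below is purely illustrative and is not the study's data:

```python
from scipy.stats import chi2_contingency

# Rows: PMCT finding categories; columns: pathologist's final conclusion categories.
# Counts are hypothetical, for illustration only.
table = [[40, 10],
         [12, 38]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")
```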
Automatic small target detection in synthetic infrared images
NASA Astrophysics Data System (ADS)
Yardımcı, Ozan; Ulusoy, İlkay
2017-05-01
Automatic detection of targets at far distances is a very challenging problem. Background clutter and small target size are the main difficulties that must be overcome while reaching high detection performance as well as a low computational load. The choice of pre-processing, detection and post-processing approaches strongly affects the final results. In this study, first of all, various methods in the literature were evaluated separately for each of these stages using simulated test scenarios. Then, a full detection system was constructed from the available solutions that resulted in the best detection performance. However, although a precision rate of 100% was reached, the recall values stayed low, around 25-45%. Finally, a post-processing method was proposed which increased the recall value while keeping the precision at 100%. The proposed post-processing method, which is based on local operations, increased the recall value to 65-95% in all test scenarios.
Injector Design Tool Improvements: User's manual for FDNS V.4.5
NASA Technical Reports Server (NTRS)
Chen, Yen-Sen; Shang, Huan-Min; Wei, Hong; Liu, Jiwen
1998-01-01
The major emphasis of the current effort is the development and validation of an efficient parallel-machine computational model, based on the FDNS code, to analyze the fluid dynamics of a wide variety of liquid jet configurations for general liquid rocket engine injection system applications. This model includes physical models for droplet atomization, breakup/coalescence, evaporation, turbulence mixing and gas-phase combustion. Benchmark validation cases for liquid rocket engine chamber combustion conditions will be performed for model validation purposes. Test cases may include shear coaxial, swirl coaxial and impinging injection systems with combinations of LOX/H2 or LOX/SP-1 propellant injector elements used in rocket engine designs. As a final goal of this project, a well-tested parallel CFD performance methodology, together with a user's operation description, will be reported in a final technical report at the end of the proposed research effort.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-18
... device to function as a cloud computing device similar to a network storage RAID array (HDDs strung... contract. This final determination, in HQ H082476, was issued at the request of Scale Computing under... response to your request dated October 15, 2009, made on behalf of Scale Computing (``Scale''). You ask for...
Gross, Douglas P; Zhang, Jing; Steenstra, Ivan; Barnsley, Susan; Haws, Calvin; Amell, Tyler; McIntosh, Greg; Cooper, Juliette; Zaiane, Osmar
2013-12-01
To develop a classification algorithm and accompanying computer-based clinical decision support tool to help categorize injured workers toward optimal rehabilitation interventions based on unique worker characteristics. Population-based historical cohort design. Data were extracted from a Canadian provincial workers' compensation database on all claimants undergoing work assessment between December 2009 and January 2011. Data were available on: (1) numerous personal, clinical, occupational, and social variables; (2) type of rehabilitation undertaken; and (3) outcomes following rehabilitation (receiving time loss benefits or undergoing repeat programs). Machine learning, concerned with the design of algorithms to discriminate between classes based on empirical data, was the foundation of our approach to build a classification system with multiple independent and dependent variables. The population included 8,611 unique claimants. Subjects were predominantly employed (85 %) males (64 %) with diagnoses of sprain/strain (44 %). Baseline clinician classification accuracy was high (ROC = 0.86) for selecting programs that lead to successful return-to-work. Classification performance for machine learning techniques outperformed the clinician baseline classification (ROC = 0.94). The final classifiers were multifactorial and included the variables: injury duration, occupation, job attachment status, work status, modified work availability, pain intensity rating, self-rated occupational disability, and 9 items from the SF-36 Health Survey. The use of machine learning classification techniques appears to have resulted in classification performance better than clinician decision-making. The final algorithm has been integrated into a computer-based clinical decision support tool that requires additional validation in a clinical sample.
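The ROC figures quoted (0.86 for the clinician baseline, 0.94 for the learned classifier) are areas under the ROC curve; with scikit-learn, such a comparison reduces to calls like the ones below. The labels and scores are hypothetical and serve only to show the computation:

```python
from sklearn.metrics import roc_auc_score

# 1 = successful return-to-work after the recommended program, 0 = not (hypothetical data)
y_true            = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
clinician_scores  = [0.9, 0.4, 0.7, 0.6, 0.5, 0.8, 0.3, 0.6, 0.7, 0.2]
classifier_scores = [0.95, 0.20, 0.85, 0.75, 0.30, 0.90, 0.25, 0.40, 0.80, 0.15]

print("clinician  AUC:", roc_auc_score(y_true, clinician_scores))
print("classifier AUC:", roc_auc_score(y_true, classifier_scores))
```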
BridgeRank: A novel fast centrality measure based on local structure of the network
NASA Astrophysics Data System (ADS)
Salavati, Chiman; Abdollahpouri, Alireza; Manbari, Zhaleh
2018-04-01
Ranking nodes in complex networks has become an important task in many application domains. In a complex network, influential nodes are those that have the most spreading ability. Thus, identifying influential nodes based on their spreading ability is a fundamental task in different applications such as viral marketing. One of the most important centrality measures for ranking nodes is closeness centrality, which is effective but suffers from high computational complexity, O(n³). This paper improves on closeness centrality by utilizing the local structure of nodes and presents a new ranking algorithm, called BridgeRank centrality. The proposed method computes a local centrality value for each node. For this purpose, communities are first detected and the relationships between communities are ignored. Then, by applying a centrality measure within each community, one best critical node is extracted from each community. Finally, the nodes are ranked by computing the sum of the shortest path lengths from each node to the obtained critical nodes. We have also modified the proposed method by weighting the original BridgeRank and selecting several nodes from each community based on the density of that community. Our method can find the best nodes with high spreading ability and low time complexity, which makes it applicable to large-scale networks. To evaluate the performance of the proposed method, we use the SIR diffusion model. Finally, experiments on real and artificial networks show that our method identifies influential nodes efficiently and achieves better performance than other recent methods.
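A minimal sketch of the ranking idea described above, using NetworkX: detect communities, pick one high-centrality node per community, then rank all nodes by their total shortest-path distance to those critical nodes. Details such as the community detector and the per-community centrality are my assumptions for illustration, not necessarily the authors' exact choices:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def bridgerank_like(G):
    """Rank nodes by the sum of shortest-path lengths to one critical node per
    community (a smaller sum means a better rank)."""
    critical = []
    for community in greedy_modularity_communities(G):
        sub = G.subgraph(community)
        dc = nx.degree_centrality(sub)
        critical.append(max(dc, key=dc.get))      # most central node in its community
    scores = {}
    for node in G:
        total = 0
        for c in critical:
            try:
                total += nx.shortest_path_length(G, node, c)
            except nx.NetworkXNoPath:
                total += G.number_of_nodes()       # penalty for unreachable critical nodes
        scores[node] = total
    return sorted(scores, key=scores.get)          # best-ranked nodes first

G = nx.karate_club_graph()
print(bridgerank_like(G)[:5])
```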
Modeling high-temperature superconductors and metallic alloys on the Intel IPSC/860
NASA Astrophysics Data System (ADS)
Geist, G. A.; Peyton, B. W.; Shelton, W. A.; Stocks, G. M.
Oak Ridge National Laboratory has embarked on several computational Grand Challenges, which require the close cooperation of physicists, mathematicians, and computer scientists. One of these projects is the determination of the material properties of alloys from first principles and, in particular, the electronic structure of high-temperature superconductors. While the present focus of the project is on superconductivity, the approach is general enough to permit study of other properties of metallic alloys such as strength and magnetic properties. This paper describes the progress to date on this project. We include a description of a self-consistent KKR-CPA method, parallelization of the model, and the incorporation of a dynamic load balancing scheme into the algorithm. We also describe the development and performance of a consolidated KKR-CPA code capable of running on CRAYs, workstations, and several parallel computers without source code modification. Performance of this code on the Intel iPSC/860 is also compared to a CRAY 2, CRAY YMP, and several workstations. Finally, some density of state calculations of two perovskite superconductors are given.
Web-Based Computational Chemistry Education with CHARMMing I: Lessons and Tutorial
Miller, Benjamin T.; Singh, Rishi P.; Schalk, Vinushka; Pevzner, Yuri; Sun, Jingjun; Miller, Carrie S.; Boresch, Stefan; Ichiye, Toshiko; Brooks, Bernard R.; Woodcock, H. Lee
2014-01-01
This article describes the development, implementation, and use of web-based “lessons” to introduce students and other newcomers to computer simulations of biological macromolecules. These lessons, i.e., interactive step-by-step instructions for performing common molecular simulation tasks, are integrated into the collaboratively developed CHARMM INterface and Graphics (CHARMMing) web user interface (http://www.charmming.org). Several lessons have already been developed with new ones easily added via a provided Python script. In addition to CHARMMing's new lessons functionality, web-based graphical capabilities have been overhauled and are fully compatible with modern mobile web browsers (e.g., phones and tablets), allowing easy integration of these advanced simulation techniques into coursework. Finally, one of the primary objections to web-based systems like CHARMMing has been that “point and click” simulation set-up does little to teach the user about the underlying physics, biology, and computational methods being applied. In response to this criticism, we have developed a freely available tutorial to bridge the gap between graphical simulation setup and the technical knowledge necessary to perform simulations without user interface assistance. PMID:25057988
A Hybrid Shared-Memory Parallel Max-Tree Algorithm for Extreme Dynamic-Range Images.
Moschini, Ugo; Meijster, Arnold; Wilkinson, Michael H F
2018-03-01
Max-trees, or component trees, are graph structures that represent the connected components of an image in a hierarchical way. Nowadays, many application fields rely on images with high-dynamic range or floating point values. Efficient sequential algorithms exist to build trees and compute attributes for images of any bit depth. However, we show that the current parallel algorithms perform poorly already with integers at bit depths higher than 16 bits per pixel. We propose a parallel method combining the two worlds of flooding and merging max-tree algorithms. First, a pilot max-tree of a quantized version of the image is built in parallel using a flooding method. Later, this structure is used in a parallel leaf-to-root approach to efficiently compute the final max-tree and to drive the merging of the sub-trees computed by the threads. We present an analysis of the performance both on simulated and actual 2D images and 3D volumes. Execution times improve on those of the fastest sequential algorithm, and speed-up continues to grow up to 64 threads.
Modeling for IFOG Vibration Error Based on the Strain Distribution of Quadrupolar Fiber Coil
Gao, Zhongxing; Zhang, Yonggang; Zhang, Yunhao
2016-01-01
Improving the performance of interferometric fiber optic gyroscopes (IFOG) in harsh environments, especially in vibrational environments, is necessary for their practical applications. This paper presents a mathematical model for IFOG to theoretically compute the short-term rate errors caused by mechanical vibration. The computational procedures are mainly based on the strain distribution of the quadrupolar fiber coil measured by a stress analyzer. The definition of asymmetry of strain distribution (ASD) is given in the paper to evaluate the winding quality of the coil. The established model reveals that the high ASD and the variable fiber elastic modulus in large strain situations are two dominant reasons that give rise to nonreciprocity phase shift in IFOG under vibration. Furthermore, theoretical analysis and computational results indicate that vibration errors of both open-loop and closed-loop IFOG increase with increasing vibrational amplitude, vibrational frequency and ASD. Finally, an estimation of vibration-induced IFOG errors in aircraft is performed according to the proposed model. Our work is meaningful in designing IFOG coils to achieve a better anti-vibration performance. PMID:27455257
Web-based computational chemistry education with CHARMMing I: Lessons and tutorial.
Miller, Benjamin T; Singh, Rishi P; Schalk, Vinushka; Pevzner, Yuri; Sun, Jingjun; Miller, Carrie S; Boresch, Stefan; Ichiye, Toshiko; Brooks, Bernard R; Woodcock, H Lee
2014-07-01
This article describes the development, implementation, and use of web-based "lessons" to introduce students and other newcomers to computer simulations of biological macromolecules. These lessons, i.e., interactive step-by-step instructions for performing common molecular simulation tasks, are integrated into the collaboratively developed CHARMM INterface and Graphics (CHARMMing) web user interface (http://www.charmming.org). Several lessons have already been developed with new ones easily added via a provided Python script. In addition to CHARMMing's new lessons functionality, web-based graphical capabilities have been overhauled and are fully compatible with modern mobile web browsers (e.g., phones and tablets), allowing easy integration of these advanced simulation techniques into coursework. Finally, one of the primary objections to web-based systems like CHARMMing has been that "point and click" simulation set-up does little to teach the user about the underlying physics, biology, and computational methods being applied. In response to this criticism, we have developed a freely available tutorial to bridge the gap between graphical simulation setup and the technical knowledge necessary to perform simulations without user interface assistance.
NASA Astrophysics Data System (ADS)
Chen, Zuojing; Polizzi, Eric
2010-11-01
Effective modeling and numerical spectral-based propagation schemes are proposed for addressing the challenges in time-dependent quantum simulations of systems ranging from atoms, molecules, and nanostructures to emerging nanoelectronic devices. While time-dependent Hamiltonian problems can be formally solved by propagating the solutions along tiny simulation time steps, a direct numerical treatment is often considered too computationally demanding. In this paper, however, we propose to go beyond these limitations by introducing high-performance numerical propagation schemes to compute the solution of the time-ordered evolution operator. In addition to the direct Hamiltonian diagonalizations that can be efficiently performed using the new eigenvalue solver FEAST, we have designed a Gaussian propagation scheme and a basis-transformed propagation scheme (BTPS) which allow the simulation times required for a given time interval to be reduced considerably. It is shown that BTPS offers the best computational efficiency, allowing new perspectives in time-dependent simulations. Finally, these numerical schemes are applied to study the ac response of a (5,5) carbon nanotube within a three-dimensional real-space mesh framework.
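For a time-independent (or piecewise-constant) Hamiltonian, propagation by direct diagonalization amounts to psi(t + dt) = V exp(-i E dt) V† psi(t). A small NumPy sketch of that step, with hbar = 1 and a random Hermitian matrix standing in for a physical Hamiltonian; this illustrates the generic idea, not the FEAST-based solver or the BTPS scheme themselves:

```python
import numpy as np

def propagate(psi, H, dt, steps):
    """Propagate psi under a constant Hamiltonian H via its eigendecomposition
    (hbar = 1): psi(t + dt) = V exp(-i E dt) V^dagger psi(t)."""
    E, V = np.linalg.eigh(H)
    phase = np.exp(-1j * E * dt)
    for _ in range(steps):
        psi = V @ (phase * (V.conj().T @ psi))
    return psi

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
H = (A + A.conj().T) / 2                      # random Hermitian "Hamiltonian"
psi0 = np.zeros(6, complex); psi0[0] = 1.0
psi = propagate(psi0, H, dt=0.01, steps=1000)
print(abs(np.vdot(psi, psi)))                 # norm is conserved (unitary propagation)
```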
A joint precoding scheme for indoor downlink multi-user MIMO VLC systems
NASA Astrophysics Data System (ADS)
Zhao, Qiong; Fan, Yangyu; Kang, Bochao
2017-11-01
In this study, we aim to improve the system performance and reduce the implementation complexity of precoding schemes for visible light communication (VLC) systems. By combining the power-method algorithm and the block diagonalization (BD) algorithm, we propose a joint precoding scheme for indoor downlink multi-user multi-input-multi-output (MU-MIMO) VLC systems. In this scheme, we first apply the BD algorithm to eliminate the co-channel interference (CCI) among users. Second, the power-method algorithm is used to search for the precoding weight of each user based on the criterion of signal to interference plus noise ratio (SINR) maximization. Finally, the optical power restrictions of VLC systems are taken into account to constrain the precoding weight matrix. Comprehensive computer simulations in two scenarios indicate that the proposed scheme always has better bit error rate (BER) performance and lower computational complexity than the traditional scheme.
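After block diagonalization removes the other users' interference, the per-user precoder that maximizes SINR is typically the dominant eigenvector of an effective channel matrix, and the power method iterates toward it cheaply. A minimal sketch under that interpretation; the effective matrix, its dimensions, and the scaling step are hypothetical, not the paper's exact procedure:

```python
import numpy as np

def power_method(M, iterations=100, tol=1e-10):
    """Dominant eigenvector of a Hermitian positive semi-definite matrix M."""
    v = np.random.default_rng(0).normal(size=M.shape[0]) + 0j
    v /= np.linalg.norm(v)
    for _ in range(iterations):
        w = M @ v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            break
        v = w
    return v

# Hypothetical effective channel for one user after block diagonalization:
H_eff = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.5, 0.2],
                  [0.1, 0.2, 0.8]])
w = power_method(H_eff.conj().T @ H_eff)   # precoding weight ~ dominant right singular vector
w /= np.max(np.abs(w))                     # crude scaling toward an optical power constraint
print(np.round(w.real, 3))
```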
NASA Technical Reports Server (NTRS)
Carpenter, M. H.
1988-01-01
The generalized chemistry version of the computer code SPARK is extended to include two higher-order numerical schemes, yielding fourth-order spatial accuracy for the inviscid terms. The new and old formulations are used to study the influences of finite rate chemical processes on nozzle performance. A determination is made of the computationally optimum reaction scheme for use in high-enthalpy nozzles. Finite rate calculations are compared with the frozen and equilibrium limits to assess the validity of each formulation. In addition, the finite rate SPARK results are compared with the constant ratio of specific heats (gamma) SEAGULL code, to determine its accuracy in variable gamma flow situations. Finally, the higher-order SPARK code is used to calculate nozzle flows having species stratification. Flame quenching occurs at low nozzle pressures, while for high pressures, significant burning continues in the nozzle.
NASA Technical Reports Server (NTRS)
Bauer, Brent
1993-01-01
This paper discusses the development of a FORTRAN computer code to perform agility analysis on aircraft configurations. This code is to be part of the NASA-Ames ACSYNT (AirCraft SYNThesis) design code. This paper begins with a discussion of contemporary agility research in the aircraft industry and a survey of a few agility metrics. The methodology, techniques and models developed for the code are then presented. Finally, example trade studies using the agility module along with ACSYNT are illustrated. These trade studies were conducted using a Northrop F-20 Tigershark aircraft model. The studies show that the agility module is effective in analyzing the influence of common parameters such as thrust-to-weight ratio and wing loading on agility criteria. The module can compare the agility potential between different configurations. In addition, one study illustrates the module's ability to optimize a configuration's agility performance.
Reduction of Simulation Times for High-Q Structures using the Resonance Equation
Hall, Thomas Wesley; Bandaru, Prabhakar R.; Rees, Daniel Earl
2015-11-17
Simulating the steady-state performance of high quality factor (Q) resonant RF structures is computationally difficult for structures more than a few wavelengths in size because of the long times (on the order of ~0.1 ms) required to achieve steady state in comparison with the maximum time step that can be used in the simulation (typically on the order of ~1 ps). This paper presents analytical and computational approaches that can be used to accelerate the simulation of the steady-state performance of such structures. The basis of the proposed approach is the use of a larger amplitude signal at the beginning to achieve steady state earlier relative to the nominal input signal. Finally, the methodology for finding the necessary input signal is discussed in detail, and the validity of the approach is evaluated.
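The acceleration idea can be illustrated with the standard first-order envelope equation for a driven high-Q cavity, dA/dt = -A/tau + D(t): over-driving at the start brings A close to its steady-state value much sooner than the nominal drive does. This is a toy illustration of the principle with made-up numbers, not the paper's field solver or its method for constructing the input signal:

```python
def fill_cavity(drive, tau, dt, steps):
    """Integrate dA/dt = -A/tau + drive(t) with forward Euler."""
    A, history = 0.0, []
    for n in range(steps):
        A += dt * (-A / tau + drive(n * dt))
        history.append(A)
    return history

tau, dt, steps = 1.0e-4, 1.0e-6, 400           # tau ~ 0.1 ms, as in the abstract's estimate
A_ss = 1.0 * tau                               # steady state for a unit nominal drive

nominal = fill_cavity(lambda t: 1.0, tau, dt, steps)
# Boosted drive: 5x amplitude early on, then switch back to the nominal level.
boosted = fill_cavity(lambda t: 5.0 if t < 0.25 * tau else 1.0, tau, dt, steps)

for name, hist in (("nominal", nominal), ("boosted", boosted)):
    first = next(i for i, a in enumerate(hist) if a >= 0.95 * A_ss)
    print(name, "reaches 95% of steady state after", first, "steps")
```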
Computer simulations of phase field drops on super-hydrophobic surfaces
NASA Astrophysics Data System (ADS)
Fedeli, Livio
2017-09-01
We present a novel quasi-Newton continuation procedure that efficiently solves the system of nonlinear equations arising from the discretization of a phase field model for wetting phenomena. We perform a comparative numerical analysis that shows the improved speed of convergence gained with respect to other numerical schemes. Moreover, we discuss the conditions that, on a theoretical level, guarantee the convergence of this method. At each iterative step, a suitable continuation procedure develops an accurate initial guess and passes it to the nonlinear solver. Discretization is performed through cell-centered finite differences. The resulting system of equations is solved on a composite grid that uses dynamic mesh refinement and multi-grid techniques. The final code achieves three-dimensional, realistic computer experiments comparable to those produced in laboratory settings. This code offers not only new insights into the phenomenology of super-hydrophobicity, but also serves as a reliable predictive tool for the study of hydrophobic surfaces.
Load management strategy for Particle-In-Cell simulations in high energy particle acceleration
NASA Astrophysics Data System (ADS)
Beck, A.; Frederiksen, J. T.; Dérouillat, J.
2016-09-01
In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performances. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue as well as milestones towards a modern, accurate high-performance PIC code for high energy particle acceleration.
Perspectives in numerical astrophysics:
NASA Astrophysics Data System (ADS)
Reverdy, V.
2016-12-01
In this discussion paper, we investigate the current and future status of numerical astrophysics and highlight key questions concerning the transition to the exascale era. We first discuss the fact that one of the main motivations behind high performance simulations should not be the reproduction of observational or experimental data, but the understanding of the emergence of complexity from fundamental laws. This motivation is put into perspective regarding the quest for more computational power, and we argue that extra computational resources can be used to gain in abstraction. Then, the readiness level of present-day simulation codes with regard to upcoming exascale architectures is examined and two major challenges are raised, concerning both the central role of data movement for performance and the growing complexity of codes. Software architecture is finally presented as a key component to make the most of upcoming architectures while solving original physics problems.
Eisenbach, Markus; Larkin, Jeff; Lutjens, Justin; ...
2016-07-12
The Locally Self-consistent Multiple Scattering (LSMS) code solves the first-principles Density Functional Theory Kohn–Sham equation for a wide range of materials, with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. In this paper, we present our efforts to exploit GPUs to accelerate the LSMS code to enable first-principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. We reimplement the scattering matrix calculation for GPUs with a block matrix inversion algorithm that only uses accelerator memory. Finally, using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code.
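Block matrix inversion of the kind mentioned for the scattering matrix can be sketched with the Schur-complement identity: for M = [[A, B], [C, D]] and S = D - C A^{-1} B, the inverse of M is assembled from A^{-1} and S^{-1}. The NumPy check below verifies that generic identity; it is not the LSMS GPU kernel:

```python
import numpy as np

def block_inverse(A, B, C, D):
    """Invert M = [[A, B], [C, D]] via the Schur complement S = D - C A^{-1} B."""
    Ainv = np.linalg.inv(A)
    S = D - C @ Ainv @ B
    Sinv = np.linalg.inv(S)
    top_left = Ainv + Ainv @ B @ Sinv @ C @ Ainv
    top_right = -Ainv @ B @ Sinv
    bottom_left = -Sinv @ C @ Ainv
    return np.block([[top_left, top_right], [bottom_left, Sinv]])

rng = np.random.default_rng(2)
A, B = rng.normal(size=(3, 3)), rng.normal(size=(3, 2))
C, D = rng.normal(size=(2, 3)), rng.normal(size=(2, 2))
M = np.block([[A, B], [C, D]])
print(np.allclose(block_inverse(A, B, C, D), np.linalg.inv(M)))   # True
```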
Parallel Unsteady Turbopump Simulations for Liquid Rocket Engines
NASA Technical Reports Server (NTRS)
Kiris, Cetin C.; Kwak, Dochan; Chan, William
2000-01-01
This paper reports the progress being made towards complete turbo-pump simulation capability for liquid rocket engines. Space Shuttle Main Engine (SSME) turbo-pump impeller is used as a test case for the performance evaluation of the MPI and hybrid MPI/Open-MP versions of the INS3D code. Then, a computational model of a turbo-pump has been developed for the shuttle upgrade program. Relative motion of the grid system for rotor-stator interaction was obtained by employing overset grid techniques. Time-accuracy of the scheme has been evaluated by using simple test cases. Unsteady computations for SSME turbo-pump, which contains 136 zones with 35 Million grid points, are currently underway on Origin 2000 systems at NASA Ames Research Center. Results from time-accurate simulations with moving boundary capability, and the performance of the parallel versions of the code will be presented in the final paper.
Takalo, Jouni; Piironen, Arto; Honkanen, Anna; Lempeä, Mikko; Aikio, Mika; Tuukkanen, Tuomas; Vähäsöyrinki, Mikko
2012-01-01
Ideally, neuronal functions would be studied by performing experiments with unconstrained animals whilst they behave in their natural environment. Although this is not feasible currently for most animal models, one can mimic the natural environment in the laboratory by using a virtual reality (VR) environment. Here we present a novel VR system based upon a spherical projection of computer generated images using a modified commercial data projector with an add-on fish-eye lens. This system provides equidistant visual stimulation with extensive coverage of the visual field, high spatio-temporal resolution and flexible stimulus generation using a standard computer. It also includes a track-ball system for closed-loop behavioural experiments with walking animals. We present a detailed description of the system and characterize it thoroughly. Finally, we demonstrate the VR system's performance whilst operating in closed-loop conditions by showing the movement trajectories of the cockroaches during exploratory behaviour in a VR forest.
Hemakom, Apit; Goverdovsky, Valentin; Looney, David; Mandic, Danilo P
2016-04-13
An extension to multivariate empirical mode decomposition (MEMD), termed adaptive-projection intrinsically transformed MEMD (APIT-MEMD), is proposed to cater for power imbalances and inter-channel correlations in real-world multichannel data. It is shown that the APIT-MEMD exhibits similar or better performance than MEMD for a large number of projection vectors, whereas it outperforms MEMD for the critical case of a small number of projection vectors within the sifting algorithm. We also employ the noise-assisted APIT-MEMD within our proposed intrinsic multiscale analysis framework and illustrate the advantages of such an approach in notoriously noise-dominated cooperative brain-computer interface (BCI) based on the steady-state visual evoked potentials and the P300 responses. Finally, we show that for a joint cognitive BCI task, the proposed intrinsic multiscale analysis framework improves system performance in terms of the information transfer rate. © 2016 The Author(s).
NASA Astrophysics Data System (ADS)
Durner, Maximilian; Márton, Zoltán; Hillenbrand, Ulrich; Ali, Haider; Kleinsteuber, Martin
2017-03-01
In this work, a new ensemble method for the task of category recognition in different environments is presented. The focus is on service robotic perception in an open environment, where the robot's task is to recognize previously unseen objects of predefined categories, based on training on a public dataset. We propose an ensemble learning approach to be able to flexibly combine complementary sources of information (different state-of-the-art descriptors computed on color and depth images), based on a Markov Random Field (MRF). By exploiting its specific characteristics, the MRF ensemble method can also be executed as a Dynamic Classifier Selection (DCS) system. In the experiments, the committee- and topology-dependent performance boost of our ensemble is shown. Despite reduced computational costs and using less information, our strategy performs on the same level as common ensemble approaches. Finally, the impact of large differences between datasets is analyzed.
Static aeroelastic analysis and tailoring of a single-element racing car wing
NASA Astrophysics Data System (ADS)
Sadd, Christopher James
This thesis presents the research from an Engineering Doctorate research programme in collaboration with Reynard Motorsport Ltd, a manufacturer of racing cars. Racing car wing design has traditionally considered structures to be rigid. However, structures are never perfectly rigid and the interaction between aerodynamic loading and structural flexibility has a direct impact on aerodynamic performance. This interaction is often referred to as static aeroelasticity and the focus of this research has been the development of a computational static aeroelastic analysis method to improve the design of a single-element racing car wing. A static aeroelastic analysis method has been developed by coupling a Reynolds-Averaged Navier-Stokes CFD analysis method with a Finite Element structural analysis method using an iterative scheme. Development of this method has included assessment of CFD and Finite Element analysis methods and development of data transfer and mesh deflection methods. Experimental testing was also completed to further assess the computational analyses. The computational and experimental results show a good correlation and these studies have also shown that a Navier-Stokes static aeroelastic analysis of an isolated wing can be performed at an acceptable computational cost. The static aeroelastic analysis tool was used to assess methods of tailoring the structural flexibility of the wing to increase its aerodynamic performance. These tailoring methods were then used to produce two final wing designs to increase downforce and reduce drag respectively. At the average operating dynamic pressure of the racing car, the computational analysis predicts that the downforce-increasing wing has a downforce coefficient of C_L = -1.377 in comparison to C_L = -1.265 for the original wing. The computational analysis predicts that the drag-reducing wing has a drag coefficient of C_D = 0.115 in comparison to C_D = 0.143 for the original wing.
Parallel programming of industrial applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heroux, M; Koniges, A; Simon, H
1998-07-21
In the introductory material, we overview the typical MPP environment for real application computing and the special tools available such as parallel debuggers and performance analyzers. Next, we draw from a series of real applications codes and discuss the specific challenges and problems that are encountered in parallelizing these individual applications. The application areas drawn from include biomedical sciences, materials processing and design, plasma and fluid dynamics, and others. We show how it was possible to get a particular application to run efficiently and what steps were necessary. Finally we end with a summary of the lessons learned from these applications and predictions for the future of industrial parallel computing. This tutorial is based on material from a forthcoming book entitled: "Industrial Strength Parallel Computing" to be published by Morgan Kaufmann Publishers (ISBN 1-55860-54).
NASA Astrophysics Data System (ADS)
Park, Kwan-Woo; Na, Suck-Joo
2010-06-01
A computational model for UV pulsed-laser scribing of a silicon target is presented and compared with experimental results. The experiments were performed with a high-power Q-switched diode-pumped solid state laser which was operated at 355 nm. They were conducted on n-type 500 μm thick silicon wafers. The scribing width and depth were measured using scanning electron microscopy. The model takes into account major physics, such as heat transfer, evaporation, multiple reflections, and Rayleigh scattering. It also considers the attenuation and redistribution of laser energy due to Rayleigh scattering. In particular, the influence of the average particle size in the model is investigated. Finally, it is shown that the computational model describing the laser scribing of silicon is valid at an average particle size of about 10 nm.
Final Report, DE-FG01-06ER25718 Domain Decomposition and Parallel Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Widlund, Olof B.
2015-06-09
The goal of this project is to develop and improve domain decomposition algorithms for a variety of partial differential equations such as those of linear elasticity and electromagnetics. These iterative methods are designed for massively parallel computing systems and allow the fast solution of the very large systems of algebraic equations that arise in large scale and complicated simulations. A special emphasis is placed on problems arising from Maxwell's equations. The approximate solvers, the preconditioners, are combined with the conjugate gradient method and must always include a solver of a coarse model in order to have a performance which is independent of the number of processors used in the computer simulation. A recent development allows for an adaptive construction of this coarse component of the preconditioner.
Reduced-Order Modeling: Cooperative Research and Development at the NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Beran, Philip S.; Cesnik, Carlos E. S.; Guendel, Randal E.; Kurdila, Andrew; Prazenica, Richard J.; Librescu, Liviu; Marzocca, Piergiovanni; Raveh, Daniella E.
2001-01-01
Cooperative research and development activities at the NASA Langley Research Center (LaRC) involving reduced-order modeling (ROM) techniques are presented. Emphasis is given to reduced-order methods and analyses based on Volterra series representations, although some recent results using Proper Orthogonal Decomposition (POD) are discussed as well. Results are reported for a variety of computational and experimental nonlinear systems to provide clear examples of the use of reduced-order models, particularly within the field of computational aeroelasticity. The need for and the relative performance (speed, accuracy, and robustness) of reduced-order modeling strategies are documented. The development of unsteady aerodynamic state-space models directly from computational fluid dynamics analyses is presented in addition to analytical and experimental identifications of Volterra kernels. Finally, future directions for this research activity are summarized.
Annealed importance sampling with constant cooling rate
NASA Astrophysics Data System (ADS)
Giovannelli, Edoardo; Cardini, Gianni; Gellini, Cristina; Pietraperzia, Giangaetano; Chelli, Riccardo
2015-02-01
Annealed importance sampling is a simulation method devised by Neal [Stat. Comput. 11, 125 (2001)] to assign weights to configurations generated by simulated annealing trajectories. In particular, the equilibrium average of a generic physical quantity can be computed by a weighted average exploiting weights and estimates of this quantity associated with the final configurations of the annealed trajectories. Here, we review annealed importance sampling from the perspective of nonequilibrium path-ensemble averages [G. E. Crooks, Phys. Rev. E 61, 2361 (2000)]. The equivalence of Neal's and Crooks' treatments highlights the generality of the method, which goes beyond merely thermal-based protocols. Furthermore, we show that a temperature schedule based on a constant cooling rate outperforms stepwise cooling schedules and that, for a given elapsed computer time, the performance of annealed importance sampling is, in general, improved by increasing the number of intermediate temperatures.
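As a rough illustration of the weighting scheme reviewed above, the following sketch (hypothetical Python, with a toy one-dimensional energy function, Metropolis moves standing in for a real annealing trajectory, and a linear schedule in inverse temperature chosen only for simplicity) accumulates the log importance weight along each trajectory and forms the weighted equilibrium average of an observable.

    import numpy as np

    rng = np.random.default_rng(0)

    def energy(x):                      # toy 1-D potential (assumption, not from the paper)
        return 0.5 * x**2

    def metropolis_step(x, beta, step=0.5):
        xp = x + rng.normal(scale=step)
        if rng.random() < np.exp(-beta * (energy(xp) - energy(x))):
            return xp
        return x

    def ais_trajectory(betas, n_sweeps=10):
        """Run one annealing trajectory and return (final configuration, log importance weight)."""
        x = rng.normal(scale=3.0)       # sample from the broad, high-temperature starting distribution
        logw = 0.0
        for b_prev, b_next in zip(betas[:-1], betas[1:]):
            logw += -(b_next - b_prev) * energy(x)   # weight update at each temperature change
            for _ in range(n_sweeps):                # partially equilibrate at the new temperature
                x = metropolis_step(x, b_next)
        return x, logw

    betas = np.linspace(0.1, 1.0, 50)                # illustrative annealing schedule
    samples, logws = zip(*(ais_trajectory(betas) for _ in range(500)))
    w = np.exp(np.array(logws) - max(logws))         # stabilize before normalizing
    estimate = np.sum(w * np.array([energy(x) for x in samples])) / np.sum(w)
    print("weighted estimate of <E> at the target temperature:", estimate)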
McGarry, J P
2009-11-01
A substantial body of work has been reported in which the mechanical properties of adherent cells were characterized using compression testing in tandem with computational modeling. However, a number of important issues remain to be addressed. In the current study, using computational analyses, the effect of cell compressibility on the force required to deform spread cells is investigated and the possibility that stiffening of the cell cytoplasm occurs during spreading is examined based on published experimental compression test data. The effect of viscoelasticity on cell compression is considered and difficulties in performing a complete characterization of the viscoelastic properties of a cell nucleus and cytoplasm by this method are highlighted. Finally, a non-linear force-deformation response is simulated using differing linear viscoelastic properties for the cell nucleus and the cell cytoplasm.
Wind Plant Performance Prediction (WP3) Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craig, Anna
The methods for analysis of operational wind plant data are highly variable across the wind industry, leading to high uncertainties in the validation and bias-correction of preconstruction energy estimation methods. Lack of credibility in the preconstruction energy estimates leads to significant impacts on project financing and therefore the final levelized cost of energy for the plant. In this work, the variation in the evaluation of a wind plant's operational energy production as a result of variations in the processing methods applied to the operational data is examined. Preliminary results indicate that selection of the filters applied to the data and the filter parameters can have significant impacts in the final computed assessment metrics.
Computational methods and software systems for dynamics and control of large space structures
NASA Technical Reports Server (NTRS)
Park, K. C.; Felippa, C. A.; Farhat, C.; Pramono, E.
1990-01-01
This final report on computational methods and software systems for dynamics and control of large space structures covers progress to date, projected developments in the final months of the grant, and conclusions. Pertinent reports and papers that have not appeared in scientific journals (or have not yet appeared in final form) are enclosed. The grant has supported research in two key areas of crucial importance to the computer-based simulation of large space structures. The first area involves multibody dynamics (MBD) of flexible space structures, with applications directed to deployment, construction, and maneuvering. The second area deals with advanced software systems, with emphasis on parallel processing. The latest research thrust in the second area, as reported here, involves massively parallel computers.
DOT National Transportation Integrated Search
2003-10-01
The purpose of this document is to expand upon the evaluation components presented in "Computer-aided dispatch--traffic management center field operational test final evaluation plan : WSDOT deployment". This document defines the objective, approach,...
DOT National Transportation Integrated Search
2006-05-01
This document provides the final report for the evaluation of the USDOT-sponsored Computer-Aided Dispatch - Traffic Management Center Integration Field Operations Test in the State of Washington. The document discusses evaluation findings in the foll...
MCloud: Secure Provenance for Mobile Cloud Users
2016-10-03
Final report: MCloud: Secure Provenance for Mobile Cloud Users (Bogdan Carbunar, Florida International University; reporting period 31-May-2013 to 30-May-2016; approved for public release, distribution unlimited). Related work: "Feasibility of Smartphone Clouds," 2015 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), 04-May-2015, Shenzhen, China.
Transitional flow in thin tubes for space station freedom radiator
NASA Technical Reports Server (NTRS)
Loney, Patrick; Ibrahim, Mounir
1995-01-01
A two dimensional finite volume method is used to predict the film coefficients in the transitional flow region (laminar or turbulent) for the radiator panel tubes. The code used to perform this analysis is CAST (Computer Aided Simulation of Turbulent Flows). The information gathered from this code is then used to augment a Sinda85 model that predicts overall performance of the radiator. A final comparison is drawn between the results generated with a Sinda85 model using the Sinda85 provided transition region heat transfer correlations and the Sinda85 model using the CAST generated data.
Development history of the Hybrid Test Vehicle
NASA Technical Reports Server (NTRS)
Trummel, M. C.; Burke, A. F.
1983-01-01
Phase I of a joint Department of Energy/Jet Propulsion Laboratory Program undertook the development of the Hybrid Test Vehicle (HTV), which has subsequently progressed through design, fabrication, and testing and evaluation phases. Attention is presently given to the design and test experience gained during the HTV development program, and a discussion is presented of the design features and performance capabilities of the various 'mule' vehicles, devoted to the separate development of engine microprocessor control, vehicle structure, and mechanical components, whose elements were incorporated into the final HTV design. Computer projections of the HTV's performance are given.
Quantitative Tools for Examining the Vocalizations of Juvenile Songbirds
Wellock, Cameron D.; Reeke, George N.
2012-01-01
The singing of juvenile songbirds is highly variable and not well stereotyped, a feature that makes it difficult to analyze with existing computational techniques. We present here a method suitable for analyzing such vocalizations, windowed spectral pattern recognition (WSPR). Rather than performing pairwise sample comparisons, WSPR measures the typicality of a sample against a large sample set. We also illustrate how WSPR can be used to perform a variety of tasks, such as sample classification, song ontogeny measurement, and song variability measurement. Finally, we present a novel measure, based on WSPR, for quantifying the apparent complexity of a bird's singing. PMID:22701474
NASA Astrophysics Data System (ADS)
Kim, Jeong-Man; Koo, Min-Mo; Jeong, Jae-Hoon; Hong, Keyyong; Cho, Il-Hyoung; Choi, Jang-Young
2017-05-01
This paper reports the design and analysis of a tubular permanent magnet linear generator (TPMLG) for a small-scale wave-energy converter. The analytical field computation is performed by applying a magnetic vector potential and a 2-D analytical model to determine design parameters. Based on analytical solutions, parametric analysis is performed to meet the design specifications of a wave-energy converter (WEC). Then, 2-D FEA is employed to validate the analytical method. Finally, the experimental result confirms the predictions of the analytical and finite element analysis (FEA) methods under regular and irregular wave conditions.
NASA Astrophysics Data System (ADS)
Yu, Yali; Wang, Mengxia; Lima, Dimas
2018-04-01
In order to develop a novel alcoholism detection method, we proposed a magnetic resonance imaging (MRI)-based computer vision approach. We first use contrast equalization to increase the contrast of brain slices. Then, we perform Haar wavelet transform and principal component analysis. Finally, we use back propagation neural network (BPNN) as the classification tool. Our method yields a sensitivity of 81.71±4.51%, a specificity of 81.43±4.52%, and an accuracy of 81.57±2.18%. The Haar wavelet gives better performance than db4 wavelet and sym3 wavelet.
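A minimal sketch of such a slice-classification pipeline is given below, assuming PyWavelets and scikit-learn are available and using scikit-learn's MLPClassifier as a stand-in for a hand-written back propagation network; the array shapes, placeholder data, and parameter values are illustrative only, not the study's configuration.

    import numpy as np
    import pywt                                   # 2-D Haar wavelet transform
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    def haar_features(slice_2d, level=2):
        """Contrast-normalize a brain slice, apply a 2-D Haar wavelet, keep the approximation band."""
        eq = (slice_2d - slice_2d.min()) / (np.ptp(slice_2d) + 1e-9)   # crude contrast equalization
        coeffs = pywt.wavedec2(eq, "haar", level=level)
        return coeffs[0].ravel()                   # low-frequency approximation coefficients

    # X_slices: (n_samples, H, W) MRI slices; y: 0 = control, 1 = alcoholism (random placeholder data)
    X_slices = np.random.rand(40, 64, 64)
    y = np.random.randint(0, 2, size=40)
    X = np.array([haar_features(s) for s in X_slices])

    model = make_pipeline(PCA(n_components=10),
                          MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0))
    model.fit(X, y)
    print("training accuracy:", model.score(X, y))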
A Simple XML Producer-Consumer Protocol
NASA Technical Reports Server (NTRS)
Smith, Warren; Gunter, Dan; Quesnel, Darcy; Biegel, Bryan (Technical Monitor)
2001-01-01
There are many different projects from government, academia, and industry that provide services for delivering events in distributed environments. The problem with these event services is that they are not general enough to support all uses and they speak different protocols so that they cannot interoperate. We require such interoperability when we, for example, wish to analyze the performance of an application in a distributed environment. Such an analysis might require performance information from the application, computer systems, networks, and scientific instruments. In this work we propose and evaluate a standard XML-based protocol for the transmission of events in distributed systems. One recent trend in government and academic research is the development and deployment of computational grids. Computational grids are large-scale distributed systems that typically consist of high-performance compute, storage, and networking resources. Examples of such computational grids are the DOE Science Grid, the NASA Information Power Grid (IPG), and the NSF Partnerships for Advanced Computing Infrastructure (PACIs). The major effort to deploy these grids is in the area of developing the software services to allow users to execute applications on these large and diverse sets of resources. These services include security, execution of remote applications, managing remote data, access to information about resources and services, and so on. There are several toolkits for providing these services such as Globus, Legion, and Condor. As part of these efforts to develop computational grids, the Global Grid Forum is working to standardize the protocols and APIs used by various grid services. This standardization will allow interoperability between the client and server software of the toolkits that are providing the grid services. The goal of the Performance Working Group of the Grid Forum is to standardize protocols and representations related to the storage and distribution of performance data. These standard protocols and representations must support tasks such as profiling parallel applications, monitoring the status of computers and networks, and monitoring the performance of services provided by a computational grid. This paper describes a proposed protocol and data representation for the exchange of events in a distributed system. The protocol exchanges messages formatted in XML and it can be layered atop any low-level communication protocol such as TCP or UDP. Further, we describe Java and C++ implementations of this protocol and discuss their performance. The next section will provide some further background information. Section 3 describes the main communication patterns of our protocol. Section 4 describes how we represent events and related information using XML. Section 5 describes our protocol and Section 6 discusses the performance of two implementations of the protocol. Finally, an appendix provides the XML Schema definition of our protocol and event information.
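To make the producer-consumer pattern concrete, here is a small hypothetical sketch (not the Grid Forum schema and not the paper's Java or C++ implementation) in which a producer formats a performance event as XML and ships it to a consumer over TCP; the element names, port, and length-prefixed framing are all assumptions chosen for illustration.

    import socket, struct, time
    import xml.etree.ElementTree as ET

    def make_event(source, name, value):
        """Build a simple XML event; the element names here are illustrative, not a standard schema."""
        ev = ET.Element("event", attrib={"timestamp": repr(time.time()), "source": source})
        metric = ET.SubElement(ev, "metric", attrib={"name": name})
        metric.text = str(value)
        return ET.tostring(ev)

    def send_event(host, port, payload):
        """Producer side: length-prefix the XML document so the consumer can frame messages on TCP."""
        with socket.create_connection((host, port)) as s:
            s.sendall(struct.pack("!I", len(payload)) + payload)

    def recv_event(conn):
        """Consumer side: read one length-prefixed XML event from an accepted connection and parse it."""
        (length,) = struct.unpack("!I", conn.recv(4))
        data = b""
        while len(data) < length:
            data += conn.recv(length - len(data))
        return ET.fromstring(data)

    # Example producer call (assumes a consumer is listening on localhost:9000):
    # send_event("localhost", 9000, make_event("nodeA", "cpu_load", 0.73))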
TADSim: Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics
Mniszewski, Susan M.; Junghans, Christoph; Voter, Arthur F.; ...
2015-04-16
Next-generation high-performance computing will require more scalable and flexible performance prediction tools to evaluate software--hardware co-design choices relevant to scientific applications and hardware architectures. Here, we present a new class of tools called application simulators—parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation. Parameterized choices for the algorithmic method and hardware options provide a rich space for design exploration and allow us to quickly find well-performing software--hardware combinations. We demonstrate our approach with a TADSim simulator that models the temperature-accelerated dynamics (TAD) method, an algorithmically complex and parameter-rich member of the accelerated molecular dynamics (AMD) family of molecular dynamics methods. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We accomplish this by identifying the time-intensive elements, quantifying algorithm steps in terms of those elements, abstracting them out, and replacing them by the passage of time. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We extend TADSim to model algorithm extensions, such as speculative spawning of the compute-bound stages, and predict performance improvements without having to implement such a method. Validation against the actual TAD code shows close agreement for the evolution of an example physical system, a silver surface. Finally, focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights and suggested extensions.
A design methodology for portable software on parallel computers
NASA Technical Reports Server (NTRS)
Nicol, David M.; Miller, Keith W.; Chrisman, Dan A.
1993-01-01
This final report for research that was supported by grant number NAG-1-995 documents our progress in addressing two difficulties in parallel programming. The first difficulty is developing software that will execute quickly on a parallel computer. The second difficulty is transporting software between dissimilar parallel computers. In general, we expect that more hardware-specific information will be included in software designs for parallel computers than in designs for sequential computers. This inclusion is an instance of portability being sacrificed for high performance. New parallel computers are being introduced frequently. Trying to keep one's software on the current high performance hardware, a software developer almost continually faces yet another expensive software transportation. The problem of the proposed research is to create a design methodology that helps designers to more precisely control both portability and hardware-specific programming details. The proposed research emphasizes programming for scientific applications. We completed our study of the parallelizability of a subsystem of the NASA Earth Radiation Budget Experiment (ERBE) data processing system. This work is summarized in section two. A more detailed description is provided in Appendix A ('Programming Practices to Support Eventual Parallelism'). Mr. Chrisman, a graduate student, wrote and successfully defended a Ph.D. dissertation proposal which describes our research associated with the issues of software portability and high performance. The list of research tasks are specified in the proposal. The proposal 'A Design Methodology for Portable Software on Parallel Computers' is summarized in section three and is provided in its entirety in Appendix B. We are currently studying a proposed subsystem of the NASA Clouds and the Earth's Radiant Energy System (CERES) data processing system. This software is the proof-of-concept for the Ph.D. dissertation. We have implemented and measured the performance of a portion of this subsystem on the Intel iPSC/2 parallel computer. These results are provided in section four. Our future work is summarized in section five, our acknowledgements are stated in section six, and references for published papers associated with NAG-1-995 are provided in section seven.
Private genome analysis through homomorphic encryption
2015-01-01
Background The rapid development of genome sequencing technology allows researchers to access large genome datasets. However, outsourcing the data processing to the cloud poses high risks for personal privacy. The aim of this paper is to give a practical solution for this problem using homomorphic encryption. In our approach, all the computations can be performed in an untrusted cloud without requiring the decryption key or any interaction with the data owner, which preserves the privacy of genome data. Methods We present evaluation algorithms for secure computation of the minor allele frequencies and χ2 statistic in a genome-wide association study setting. We also describe how to privately compute the Hamming distance and approximate Edit distance between encrypted DNA sequences. Finally, we compare performance details of using two practical homomorphic encryption schemes - the BGV scheme by Gentry, Halevi and Smart and the YASHE scheme by Bos, Lauter, Loftus and Naehrig. Results The approach with the YASHE scheme analyzes data from 400 people within about 2 seconds and picks a variant associated with disease from 311 spots. For another task, using the BGV scheme, it took about 65 seconds to securely compute the approximate Edit distance for DNA sequences of size 5K and figure out the differences between them. Conclusions The performance numbers for BGV are better than YASHE when homomorphically evaluating deep circuits (like the Hamming distance algorithm or approximate Edit distance algorithm). On the other hand, it is more efficient to use the YASHE scheme for a low-degree computation, such as minor allele frequencies or χ2 test statistic in a case-control study. PMID:26733152
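For orientation, the statistics named above are simple enough to sketch in the clear; a homomorphic version would evaluate the same additions and multiplications over ciphertexts using a scheme such as BGV or YASHE. The plaintext sketch below (illustrative only, not the paper's encrypted circuits) computes a minor allele frequency and an allelic 2x2 case-control chi-square statistic from genotype counts.

    def minor_allele_frequency(genotypes):
        """genotypes: list of 0/1/2 minor-allele counts per person at one variant site."""
        alt = sum(genotypes)
        total = 2 * len(genotypes)
        freq = alt / total
        return min(freq, 1.0 - freq)

    def chi_square_allelic(case_genotypes, control_genotypes):
        """Allelic 2x2 chi-square for a case-control study (no continuity correction)."""
        a = sum(case_genotypes);    b = 2 * len(case_genotypes) - a      # case minor / major alleles
        c = sum(control_genotypes); d = 2 * len(control_genotypes) - c   # control minor / major alleles
        n = a + b + c + d
        return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

    cases    = [0, 1, 2, 1, 1, 0]        # toy genotype vectors
    controls = [0, 0, 1, 0, 1, 0]
    print(minor_allele_frequency(cases + controls), chi_square_allelic(cases, controls))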
Rotary engine performance limits predicted by a zero-dimensional model
NASA Technical Reports Server (NTRS)
Bartrand, Timothy A.; Willis, Edward A.
1992-01-01
A parametric study was performed to determine the performance limits of a rotary combustion engine. This study shows how well increasing the combustion rate, insulating, and turbocharging increase brake power and decrease fuel consumption. Several generalizations can be made from the findings. First, it was shown that the fastest combustion rate is not necessarily the best combustion rate. Second, several engine insulation schemes were employed for a turbocharged engine. Performance improved only for a highly insulated engine. Finally, the variability of turbocompounding and the influence of exhaust port shape were calculated. Rotary engine performance was predicted by an improved zero-dimensional computer model based on a model developed at the Massachusetts Institute of Technology in the 1980's. Independent variables in the study include turbocharging, manifold pressures, wall thermal properties, leakage area, and exhaust port geometry. Additions to the computer program since its results were last published include turbocharging, manifold modeling, and improved friction power loss calculation. The baseline engine for this study is a single rotor 650 cc direct-injection stratified-charge engine with aluminum housings and a stainless steel rotor. Engine maps are provided for the baseline and turbocharged versions of the engine.
Representation and alignment of sung queries for music information retrieval
NASA Astrophysics Data System (ADS)
Adams, Norman H.; Wakefield, Gregory H.
2005-09-01
The pursuit of robust and rapid query-by-humming systems, which search melodic databases using sung queries, is a common theme in music information retrieval. The retrieval aspect of this database problem has received considerable attention, whereas the front-end processing of sung queries and the data structure to represent melodies has been based on musical intuition and historical momentum. The present work explores three time series representations for sung queries: a sequence of notes, a ``smooth'' pitch contour, and a sequence of pitch histograms. The performance of the three representations is compared using a collection of naturally sung queries. It is found that the most robust performance is achieved by the representation with highest dimension, the smooth pitch contour, but that this representation presents a formidable computational burden. For all three representations, it is necessary to align the query and target in order to achieve robust performance. The computational cost of the alignment is quadratic, hence it is necessary to keep the dimension small for rapid retrieval. Accordingly, iterative deepening is employed to achieve both robust performance and rapid retrieval. Finally, the conventional iterative framework is expanded to adapt the alignment constraints based on previous iterations, further expediting retrieval without degrading performance.
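The quadratic-cost alignment referred to above is essentially dynamic time warping; a compact sketch, assuming simple one-dimensional pitch contours and an absolute-difference local cost, is given below. The iterative-deepening idea would wrap repeated calls to such a routine at increasing contour resolutions, stopping once the ranking of candidate targets stabilizes.

    import numpy as np

    def dtw_distance(query, target):
        """O(len(query) * len(target)) dynamic time warping between two pitch contours."""
        n, m = len(query), len(target)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(query[i - 1] - target[j - 1])           # local pitch mismatch
                D[i, j] = cost + min(D[i - 1, j],                  # insertion
                                     D[i, j - 1],                  # deletion
                                     D[i - 1, j - 1])              # match
        return D[n, m]

    # Toy example: a sung query slightly offset in tempo from a target melody contour (MIDI pitch values)
    query  = np.array([60.0, 60.5, 62.0, 64.0, 64.2, 62.1])
    target = np.array([60.0, 62.0, 64.0, 62.0])
    print(dtw_distance(query, target))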
Theoretical L-shell Coster-Kronig energies, 11 ≤ Z ≤ 103
NASA Technical Reports Server (NTRS)
Chen, M. H.; Crasemann, B.; Huang, K. N.; Aoyagi, M.; Mark, H.
1976-01-01
Relativistic relaxed-orbital calculations of L-shell Coster-Kronig transition energies have been performed for all possible transitions in atoms with atomic numbers 11 ≤ Z ≤ 103. Hartree-Fock-Slater wave functions served as zeroth-order eigenfunctions to compute the expectation of the total Hamiltonian. A first-order approximation to the local approximation was thus included. Quantum-electrodynamic corrections were made. Each transition energy was computed as the difference between results of separate self-consistent-field calculations for the initial, singly ionized state and the final two-hole state. The following quantities are listed: total transition energy, 'electric' (Dirac-Hartree-Fock-Slater) contribution, magnetic and retardation contributions, and contributions due to vacuum polarization and self energy.
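In symbols, the tabulated quantity is the difference of two self-consistent-field total energies, with the radiative terms added on top; the notation below is chosen here for illustration and simply restates the procedure described in the abstract:

    E_{CK} \;=\; E_{\mathrm{SCF}}\big(\text{initial: single L-shell vacancy}\big)
            \;-\; E_{\mathrm{SCF}}\big(\text{final: two-hole state}\big)
            \;+\; \Delta E_{\mathrm{QED}},

where \Delta E_{\mathrm{QED}} collects the vacuum-polarization and self-energy corrections mentioned above.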
A Computer Based Moire Technique To Measure Very Small Displacements
NASA Astrophysics Data System (ADS)
Sciammarella, Cesar A.; Amadshahi, Mansour A.; Subbaraman, B.
1987-02-01
The accuracy that can be achieved in the measurement of very small displacements in techniques such as moire, holography and speckle is limited by the noise inherent to the utilized optical devices. To reduce the noise-to-signal ratio, the moire method can be utilized. Two systems of carrier fringes are introduced, an initial system before the load is applied and a final system when the load is applied. The moire pattern of these two systems contains the sought displacement information and the noise common to the two patterns is eliminated. The whole process is performed by a computer on digitized versions of the patterns. Examples of application are given.
Recursive Gradient Estimation Using Splines for Navigation of Autonomous Vehicles.
1985-07-01
Final report by C. N. Shen, U.S. Army Armament Research and Development Center, Large Caliber Weapon Systems Laboratory, July 1985. Only fragments of the abstract survive: "...which require autonomous vehicles. Essential to these robotic vehicles is an adequate and efficient computer vision system. A potentially more..."
Intelligent Decentralized Control In Large Distributed Computer Systems
1988-04-01
decentralized. The goal is to find a way for the agents to coordinate their actions to maximize some index of system performance. (Our main... shown in Figure 4.13. The controller observes the environment through sensors, and then may issue a command (i.e., take action) to affect the... the Hypothesis Generator and the Belief Manager, and finally actions are issued by the Action Generator, the Experiment Generator, or the Reflex
ERIC Educational Resources Information Center
Dwyer, Daniel J.
Designed to assess the effect of alternative display (CRT) screen sizes and resolution levels on user ability to identify and locate printed circuit (PC) board points, this study is the first in a protracted research program on the legibility of graphics in computer-based job aids. Air Force maintenance training pipeline students (35 male and 1…
NASA Technical Reports Server (NTRS)
Magana, Mario E.
1989-01-01
The digital position controller implemented in the control computer of the 3-axis attitude motion simulator is mathematically reconstructed and documented, since the information supplied with the executable code of this controller was insufficient to make substantial modifications to it. Also developed were methodologies to introduce changes in the controller which do not require rewriting the software. Finally, recommendations are made on possible improvements to the control system performance.
Making Computing on Encrypted Data Secure and Practical
2013-06-01
Final technical report, June 2013. Performing organization located in Irvine, CA; sponsoring/monitoring agency: Air Force Research Laboratory (AFRL). The report is available to the general public.
NASA Astrophysics Data System (ADS)
Ford, Eric B.; Dindar, Saleh; Peters, Jorg
2015-08-01
The realism of astrophysical simulations and statistical analyses of astronomical data are set by the available computational resources. Thus, astronomers and astrophysicists are constantly pushing the limits of computational capabilities. For decades, astronomers benefited from massive improvements in computational power that were driven primarily by increasing clock speeds and required relatively little attention to details of the computational hardware. For nearly a decade, increases in computational capabilities have come primarily from increasing the degree of parallelism, rather than increasing clock speeds. Further increases in computational capabilities will likely be led by many-core architectures such as Graphical Processing Units (GPUs) and Intel Xeon Phi. Successfully harnessing these new architectures requires significantly more understanding of the hardware architecture, cache hierarchy, compiler capabilities and network characteristics. I will provide an astronomer's overview of the opportunities and challenges provided by modern many-core architectures and elastic cloud computing. The primary goal is to help an astronomical audience understand what types of problems are likely to yield more than order-of-magnitude speed-ups and which problems are unlikely to parallelize sufficiently efficiently to be worth the development time and/or costs. I will draw on my experience leading a team in developing the Swarm-NG library for parallel integration of large ensembles of small n-body systems on GPUs, as well as several smaller software projects. I will share lessons learned from collaborating with computer scientists, including both technical and soft skills. Finally, I will discuss the challenges of training the next generation of astronomers to be proficient in this new era of high-performance computing, drawing on experience teaching a graduate class on High-Performance Scientific Computing for Astrophysics and organizing a 2014 advanced summer school on Bayesian Computing for Astronomical Data Analysis with support of the Penn State Center for Astrostatistics and Institute for CyberScience.
The Magellan Final Report on Cloud Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coghlan, Susan; Yelick, Katherine
The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR), was to investigate the potential role of cloud computing in addressing the computing needs for the DOE Office of Science (SC), particularly related to serving the needs of mid-range computing and future data-intensive computing workloads. A set of research questions was formed to probe various aspects of cloud computing from performance, usability, and cost. To address these questions, a distributed testbed infrastructure was deployed at the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computing Center (NERSC). The testbed was designed to be flexible and capable enough to explore a variety of computing models and hardware design points in order to understand the impact for various scientific applications. During the project, the testbed also served as a valuable resource to application scientists. Applications from a diverse set of projects such as MG-RAST (a metagenomics analysis server), the Joint Genome Institute, the STAR experiment at the Relativistic Heavy Ion Collider, and the Laser Interferometer Gravitational Wave Observatory (LIGO), were used by the Magellan project for benchmarking within the cloud, but the project teams were also able to accomplish important production science utilizing the Magellan cloud resources.
Problems Related to Parallelization of CFD Algorithms on GPU, Multi-GPU and Hybrid Architectures
NASA Astrophysics Data System (ADS)
Biazewicz, Marek; Kurowski, Krzysztof; Ludwiczak, Bogdan; Napieraia, Krystyna
2010-09-01
Computational Fluid Dynamics (CFD) is one of the branches of fluid mechanics, which uses numerical methods and algorithms to solve and analyze fluid flows. CFD is used in various domains, such as oil and gas reservoir uncertainty analysis, aerodynamic body shape optimization (e.g. planes, cars, ships, sport helmets, skis), natural phenomena analysis, numerical simulation for weather forecasting or realistic visualizations. CFD problems are very complex and need a lot of computational power to obtain results in a reasonable time. We have implemented a parallel application for two-dimensional CFD simulation with a free surface approximation (MAC method) using new hardware architectures, in particular multi-GPU and hybrid computing environments. For this purpose we decided to use NVIDIA graphic cards with the CUDA environment due to its simplicity of programming and good computational performance. We used finite difference discretization of the Navier-Stokes equations, where fluid is propagated over an Eulerian grid. In this model, the behavior of the fluid inside the cell depends only on the properties of local, surrounding cells, therefore it is well suited for the GPU-based architecture. In this paper we demonstrate how to efficiently use the computing power of GPUs for CFD. Additionally, we present some best practices to help users analyze and improve the performance of CFD applications executed on GPU. Finally, we discuss various challenges around the multi-GPU implementation on the example of matrix multiplication.
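The locality argument made above (each cell updated only from its immediate neighbors on an Eulerian grid) is what makes the stencil map cleanly onto GPU threads. The sketch below illustrates the idea for a Jacobi-style pressure update using NumPy array slicing as a stand-in for a CUDA kernel, with one array element per (hypothetical) GPU thread; the grid size and forcing are arbitrary illustrations, not the paper's MAC solver.

    import numpy as np

    def pressure_iteration(p, rhs, dx):
        """One Jacobi sweep of a pressure Poisson equation on a uniform grid.
        Each interior point depends only on its four neighbors, so every point can be
        updated independently -- the same structure a one-thread-per-cell CUDA kernel exploits."""
        p_new = p.copy()
        p_new[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] +
                                    p[1:-1, 2:] + p[1:-1, :-2] -
                                    dx * dx * rhs[1:-1, 1:-1])
        return p_new

    p   = np.zeros((128, 128))
    rhs = np.random.rand(128, 128) * 1e-3          # illustrative source term
    for _ in range(200):
        p = pressure_iteration(p, rhs, dx=1.0 / 128)
    print("max |p| after 200 sweeps:", np.abs(p).max())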
NASA Astrophysics Data System (ADS)
Deng, Liang; Bai, Hanli; Wang, Fang; Xu, Qingxin
2016-06-01
CPU/GPU computing allows scientists to tremendously accelerate their numerical codes. In this paper, we port and optimize a double precision alternating direction implicit (ADI) solver for three-dimensional compressible Navier-Stokes equations from our in-house Computational Fluid Dynamics (CFD) software to a heterogeneous platform. First, we implement a full GPU version of the ADI solver to remove a lot of redundant data transfers between CPU and GPU, and then design two fine-grain schemes, namely “one-thread-one-point” and “one-thread-one-line”, to maximize the performance. Second, we present a dual-level parallelization scheme using the CPU/GPU collaborative model to exploit the computational resources of both multi-core CPUs and many-core GPUs within the heterogeneous platform. Finally, considering the fact that memory on a single node becomes inadequate when the simulation size grows, we present a tri-level hybrid programming pattern MPI-OpenMP-CUDA that merges fine-grain parallelism using OpenMP and CUDA threads with coarse-grain parallelism using MPI for inter-node communication. We also propose a strategy to overlap the computation with communication using the advanced features of CUDA and MPI programming. We obtain a speedup of 6.0 for the ADI solver on one Tesla M2050 GPU in contrast to two Xeon X5670 CPUs. Scalability tests show that our implementation can offer significant performance improvement on the heterogeneous platform.
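A hedged sketch of the communication/computation overlap described above, using mpi4py non-blocking calls and NumPy arithmetic in place of CUDA kernels; the halo-exchange layout, array sizes, and the split between "interior" and "boundary" work are assumptions chosen for illustration, not the paper's actual decomposition.

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    left, right = (rank - 1) % size, (rank + 1) % size

    field = np.full((4, 1024), float(rank))            # local slab plus one halo row on each side
    send_lo, send_hi = field[1].copy(), field[-2].copy()
    recv_lo, recv_hi = np.empty(1024), np.empty(1024)

    # Post non-blocking halo exchanges, then overlap them with interior work.
    reqs = [comm.Isend(send_lo, dest=left),   comm.Isend(send_hi, dest=right),
            comm.Irecv(recv_lo, source=left), comm.Irecv(recv_hi, source=right)]

    interior = 0.5 * (field[1:-2] + field[2:-1])       # "compute" on data that needs no halo

    MPI.Request.Waitall(reqs)                          # halos have arrived; finish boundary work
    field[0], field[-1] = recv_lo, recv_hi
    boundary = 0.5 * (field[0] + field[1]), 0.5 * (field[-2] + field[-1])
    if rank == 0:
        print("overlap sketch finished on", size, "rank(s)")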
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zi-Kui; Gleeson, Brian; Shang, Shunli
This project developed computational tools that can complement and support experimental efforts in order to enable discovery and more efficient development of Ni-base structural materials and coatings. The project goal was reached through an integrated computation-predictive and experimental-validation approach, including first-principles calculations, thermodynamic CALPHAD (CALculation of PHAse Diagram), and experimental investigations on compositions relevant to Ni-base superalloys and coatings in terms of oxide layer growth and microstructure stabilities. The developed description included composition ranges typical for coating alloys and, hence, allows for prediction of thermodynamic properties for these material systems. The calculation of phase compositions, phase fraction, and phase stabilities, which are directly related to properties such as ductility and strength, was a valuable contribution, along with the collection of computational tools that are required to meet the increasing demands for strong, ductile and environmentally-protective coatings. Specifically, a suitable thermodynamic description for the Ni-Al-Cr-Co-Si-Hf-Y system was developed for bulk alloy and coating compositions. Experiments were performed to validate and refine the thermodynamics from the CALPHAD modeling approach. Additionally, alloys produced using predictions from the current computational models were studied in terms of their oxidation performance. Finally, results obtained from experiments aided in the development of a thermodynamic modeling automation tool called ESPEI/pycalphad for more rapid discovery and development of new materials.
High speed civil transport: Sonic boom softening and aerodynamic optimization
NASA Technical Reports Server (NTRS)
Cheung, Samson
1994-01-01
An improvement in sonic boom extrapolation techniques has been the desire of aerospace designers for years. This is because the linear acoustic theory developed in the 60's is incapable of predicting the nonlinear phenomenon of shock wave propagation. On the other hand, CFD techniques are too computationally expensive to employ on sonic boom problems. Therefore, this research focused on the development of a fast and accurate sonic boom extrapolation method that solves the Euler equations for axisymmetric flow. This new technique has brought the sonic boom extrapolation techniques up to the standards of the 90's. Parallel computing is a fast growing subject in the field of computer science because of its promising speed. A new optimizer (IIOWA) for the parallel computing environment has been developed and tested for aerodynamic drag minimization. This is a promising method for CFD optimization making use of the computational resources of workstations, which unlike supercomputers can spend most of their time idle. Finally, the OAW concept is attractive because of its overall theoretical performance. In order to fully understand the concept, a wind-tunnel model was built and is currently being tested at NASA Ames Research Center. The CFD calculations performed under this cooperative agreement helped to identify the problem of the flow separation, and also aided the design by optimizing the wing deflection for roll trim.
Brain-Computer Interfaces in Medicine
Shih, Jerry J.; Krusienski, Dean J.; Wolpaw, Jonathan R.
2012-01-01
Brain-computer interfaces (BCIs) acquire brain signals, analyze them, and translate them into commands that are relayed to output devices that carry out desired actions. BCIs do not use normal neuromuscular output pathways. The main goal of BCI is to replace or restore useful function to people disabled by neuromuscular disorders such as amyotrophic lateral sclerosis, cerebral palsy, stroke, or spinal cord injury. From initial demonstrations of electroencephalography-based spelling and single-neuron-based device control, researchers have gone on to use electroencephalographic, intracortical, electrocorticographic, and other brain signals for increasingly complex control of cursors, robotic arms, prostheses, wheelchairs, and other devices. Brain-computer interfaces may also prove useful for rehabilitation after stroke and for other disorders. In the future, they might augment the performance of surgeons or other medical professionals. Brain-computer interface technology is the focus of a rapidly growing research and development enterprise that is greatly exciting scientists, engineers, clinicians, and the public in general. Its future achievements will depend on advances in 3 crucial areas. Brain-computer interfaces need signal-acquisition hardware that is convenient, portable, safe, and able to function in all environments. Brain-computer interface systems need to be validated in long-term studies of real-world use by people with severe disabilities, and effective and viable models for their widespread dissemination must be implemented. Finally, the day-to-day and moment-to-moment reliability of BCI performance must be improved so that it approaches the reliability of natural muscle-based function. PMID:22325364
DOT National Transportation Integrated Search
2006-07-01
This document provides the final report for the evaluation of the USDOT-sponsored Computer-Aided Dispatch Traffic Management Center Integration Field Operations Test in the State of Utah. The document discusses evaluation findings in the followin...
DOT National Transportation Integrated Search
2004-01-01
The purpose of this document is to expand upon the evaluation components presented in "Computer-aided dispatch--traffic management center field operational test final evaluation plan : state of Utah". This document defines the objective, approach, an...
Simulation of the Two Stages Stretch-Blow Molding Process: Infrared Heating and Blowing Modeling
NASA Astrophysics Data System (ADS)
Bordival, M.; Schmidt, F. M.; Le Maoult, Y.; Velay, V.
2007-05-01
In the Stretch-Blow Molding (SBM) process, the temperature distribution of the reheated preform drastically affects the blowing kinematics, the bottle thickness distribution, as well as the orientation induced by stretching. Consequently, mechanical and optical properties of the final bottle are closely related to heating conditions. In order to predict the 3D temperature distribution of a rotating preform, numerical software using a control-volume method has been developed. Since PET behaves like a semi-transparent medium, the radiative flux absorption was computed using the Beer-Lambert law. In a second step, 2D axi-symmetric simulations of the SBM have been developed using the finite element package ABAQUS®. Temperature profiles through the preform wall thickness and along its length were computed and applied as the initial condition. Air pressure inside the preform was not considered as an input variable, but was automatically computed using a thermodynamic model. The heat transfer coefficient applied between the mold and the polymer was also measured. Finally, the G'sell law was used for modeling PET behavior. For both heating and blowing stage simulations, a good agreement has been observed with experimental measurements. This work is part of the European project "APT_PACK" (Advanced knowledge of Polymer deformation for Tomorrow's PACKaging).
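For the radiative part of the heating stage, the Beer-Lambert attenuation used in such a model can be written out directly; the short sketch below (illustrative absorption coefficient, flux, and wall thickness, not the project's values) splits the flux deposited across the control volumes through the preform wall.

    import numpy as np

    def absorbed_flux(q0, alpha, thickness, n_cells):
        """Split an incident radiative flux q0 (W/m^2) over n_cells slabs of a semi-transparent
        wall using Beer-Lambert attenuation: q(z) = q0 * exp(-alpha * z)."""
        z = np.linspace(0.0, thickness, n_cells + 1)
        q = q0 * np.exp(-alpha * z)
        return q[:-1] - q[1:]            # flux absorbed inside each slab

    # Illustrative numbers only: 3 mm wall, alpha = 500 1/m in the lamp's spectral band
    dq = absorbed_flux(q0=20e3, alpha=500.0, thickness=3e-3, n_cells=10)
    print("fraction of incident flux absorbed in the wall:", dq.sum() / 20e3)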
Probing short-range nucleon-nucleon interactions with an electron-ion collider
Miller, Gerald A.; Sievert, Matthew D.; Venugopalan, Raju
2016-04-07
For this research, we derive the cross section for exclusive vector meson production in high-energy deeply inelastic scattering off a deuteron target that disintegrates into a proton and a neutron carrying large relative momentum in the final state. This cross section can be expressed in terms of a novel gluon transition generalized parton distribution (T-GPD); the hard scale in the final state makes the T-GPD sensitive to the short-distance nucleon-nucleon interaction. We perform a toy model computation of this process in a perturbative framework and discuss the time scales that allow the separation of initial- and final-state dynamics in the T-GPD. We outline the more general computation based on the factorization suggested by the toy computation: In particular, we discuss the relative role of “pointlike” and “geometric” Fock configurations that control the parton dynamics of short-range nucleon-nucleon scattering. With the aid of exclusive J/ψ production data at the Hadron-Electron Ring Accelerator at DESY, as well as elastic nucleon-nucleon cross sections, we estimate rates for exclusive deuteron photodisintegration at a future Electron-Ion Collider (EIC). Our results, obtained using conservative estimates of EIC integrated luminosities, suggest that center-of-mass energies s_NN ~ 12 GeV^2 of the neutron-proton subsystem can be accessed. We argue that the high energies of the EIC can address outstanding dynamical questions regarding the short-range quark-gluon structure of nuclear forces by providing clean gluon probes of such “knockout” exclusive reactions in light and heavy nuclei.
Synergia: an accelerator modeling tool with 3-D space charge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amundson, James F.; Spentzouris, P. (Fermilab)
2004-07-01
High precision modeling of space-charge effects, together with accurate treatment of single-particle dynamics, is essential for designing future accelerators as well as optimizing the performance of existing machines. We describe Synergia, a high-fidelity parallel beam dynamics simulation package with fully three dimensional space-charge capabilities and a higher order optics implementation. We describe the computational techniques, the advanced human interface, and the parallel performance obtained using large numbers of macroparticles. We also perform code benchmarks comparing to semi-analytic results and other codes. Finally, we present initial results on particle tune spread, beam halo creation, and emittance growth in the Fermilab booster accelerator.
Parallel task processing of very large datasets
NASA Astrophysics Data System (ADS)
Romig, Phillip Richardson, III
This research concerns the use of distributed computer technologies for the analysis and management of very large datasets. Improvements in sensor technology, an emphasis on global change research, and greater access to data warehouses all increase the number of non-traditional users of remotely sensed data. We present a framework for distributed solutions to the challenges of datasets which exceed the online storage capacity of individual workstations. This framework, called parallel task processing (PTP), incorporates both the task- and data-level parallelism exemplified by many image processing operations. An implementation based on the principles of PTP, called Tricky, is also presented. Additionally, we describe the challenges and practical issues in modeling the performance of parallel task processing with large datasets. We present a mechanism for estimating the running time of each unit of work within a system and an algorithm that uses these estimates to simulate the execution environment and produce estimated runtimes. Finally, we describe and discuss experimental results which validate the design. Specifically, the system (a) is able to perform computation on datasets which exceed the capacity of any one disk, (b) provides reduction of overall computation time as a result of the task distribution even with the additional cost of data transfer and management, and (c) in the simulation mode accurately predicts the performance of the real execution environment.
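The per-unit runtime estimation and scheduling simulation described above can be caricatured in a few lines; the cost model (a setup term plus a per-byte term) and the greedy assignment of each chunk to the earliest-free worker are assumptions chosen for illustration, not Tricky's actual performance model.

    import heapq

    def estimate_runtime(chunk_bytes, setup_s=0.5, bytes_per_s=25e6):
        """Simple linear cost model for one unit of work (transfer plus processing)."""
        return setup_s + chunk_bytes / bytes_per_s

    def simulate_schedule(chunk_sizes, n_workers):
        """Greedy simulation: each chunk goes to the worker that frees up first.
        Returns the predicted makespan for the whole dataset."""
        workers = [0.0] * n_workers               # time at which each worker becomes free
        heapq.heapify(workers)
        for size in chunk_sizes:
            free_at = heapq.heappop(workers)
            heapq.heappush(workers, free_at + estimate_runtime(size))
        return max(workers)

    chunks = [200e6] * 64                          # a 12.8 GB dataset split into 200 MB chunks
    for n in (4, 8, 16):
        print(n, "workers -> predicted runtime", round(simulate_schedule(chunks, n), 1), "s")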
Session on High Speed Civil Transport Design Capability Using MDO and High Performance Computing
NASA Technical Reports Server (NTRS)
Rehder, Joe
2000-01-01
Since the inception of CAS in 1992, NASA Langley has been conducting research into applying multidisciplinary optimization (MDO) and high performance computing toward reducing aircraft design cycle time. The focus of this research has been the development of a series of computational frameworks and associated applications that increased in capability, complexity, and performance over time. The culmination of this effort is an automated high-fidelity analysis capability for a high speed civil transport (HSCT) vehicle installed on a network of heterogeneous computers with a computational framework built using Common Object Request Broker Architecture (CORBA) and Java. The main focus of the research in the early years was the development of the Framework for Interdisciplinary Design Optimization (FIDO) and associated HSCT applications. While the FIDO effort was eventually halted, work continued on HSCT applications of ever increasing complexity. The current application, HSCT4.0, employs high fidelity CFD and FEM analysis codes. For each analysis cycle, the vehicle geometry and computational grids are updated using new values for design variables. Processes for aeroelastic trim, loads convergence, displacement transfer, stress and buckling, and performance have been developed. In all, a total of 70 processes are integrated in the analysis framework. Many of the key processes include automatic differentiation capabilities to provide sensitivity information that can be used in optimization. A software engineering process was developed to manage this large project. Defining the interactions among 70 processes turned out to be an enormous, but essential, task. A formal requirements document was prepared that defined data flow among processes and subprocesses. A design document was then developed that translated the requirements into actual software design. A validation program was defined and implemented to ensure that codes integrated into the framework produced the same results as their standalone counterparts. Finally, a Commercial Off the Shelf (COTS) configuration management system was used to organize the software development. A computational environment, CJOpt, based on the Common Object Request Broker Architecture, CORBA, and the Java programming language has been developed as a framework for multidisciplinary analysis and optimization. The environment exploits the parallelisms inherent in the application and distributes the constituent disciplines on machines best suited to their needs. In CJOpt, a discipline code is "wrapped" as an object. An interface to the object identifies the functionality (services) provided by the discipline, defined in Interface Definition Language (IDL) and implemented using Java. The results of using the HSCT4.0 capability are described. A summary of lessons learned is also presented. The use of some of the processes, codes, and techniques by industry is highlighted. The application of the methodology developed in this research to other aircraft is described. Finally, we show how the experience gained is being applied to entirely new vehicles, such as the Reusable Space Transportation System. Additional information is contained in the original.
ERIC Educational Resources Information Center
Frick, Theodore W.; And Others
The document is part of the final report on Project STEEL (Special Teacher Education and Evaluation Laboratory) intended to extend the utilization of technology in the training of preservice special education teachers. This volume focuses on the second of four project objectives, the development of a special education teacher computer literacy…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, A.D.; Ayoub, A.K.; Singh, C.
1982-07-01
Existing methods for generating capacity reliability evaluation do not explicitly recognize a number of operating considerations which may have important effects on system reliability performance. Thus, current methods may yield estimates of system reliability which differ appreciably from actual observed reliability. Further, current methods offer no means of accurately studying or evaluating alternatives which may differ in one or more operating considerations. Operating considerations which are considered to be important in generating capacity reliability evaluation include: unit duty cycles as influenced by load cycle shape, reliability performance of other units, unit commitment policy, and operating reserve policy; unit start-up failures distinct from unit running failures; unit start-up times; and unit outage postponability and the management of postponable outages. A detailed Monte Carlo simulation computer model called GENESIS and two analytical models called OPCON and OPPLAN have been developed which are capable of incorporating the effects of many operating considerations including those noted above. These computer models have been used to study a variety of actual and synthetic systems and are available from EPRI. The new models are shown to produce system reliability indices which differ appreciably from index values computed using traditional models which do not recognize operating considerations.
Requirements for a network storage service
NASA Technical Reports Server (NTRS)
Kelly, Suzanne M.; Haynes, Rena A.
1991-01-01
Sandia National Laboratories provides a high performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), comprises multiple distributed local area networks (LAN's) residing in New Mexico and California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers, running UNIX-based operating systems, compose most LAN's. One LAN, operated by the Sandia Corporate Computing Directorate, is a general purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File Server (CFS). Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Service (NSS) and its requirements are described. An application or functional description of the NSS is given. The final section adds performance, capacity, and access constraints to the requirements.
Computation in Dynamically Bounded Asymmetric Systems
Rutishauser, Ueli; Slotine, Jean-Jacques; Douglas, Rodney
2015-01-01
Previous explanations of computations performed by recurrent networks have focused on symmetrically connected saturating neurons and their convergence toward attractors. Here we analyze the behavior of asymmetrically connected networks of linear threshold neurons, whose positive response is unbounded. We show that, for a wide range of parameters, this asymmetry brings interesting and computationally useful dynamical properties. When driven by input, the network explores potential solutions through highly unstable ‘expansion’ dynamics. This expansion is steered and constrained by negative divergence of the dynamics, which ensures that the dimensionality of the solution space continues to reduce until an acceptable solution manifold is reached. Then the system contracts stably on this manifold towards its final solution trajectory. The unstable positive feedback and cross inhibition that underlie expansion and divergence are common motifs in molecular and neuronal networks. Therefore we propose that very simple organizational constraints that combine these motifs can lead to spontaneous computation and so to the spontaneous modification of entropy that is characteristic of living systems. PMID:25617645
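A minimal simulation of the class of networks analyzed above: linear threshold (rectified) units with asymmetric connectivity, integrated with forward Euler. The particular weight matrix and input below are arbitrary illustrations, not the parameter regime studied in the paper; for a well-behaved demo the off-diagonal weights are inhibitory and the self-excitation is kept below unity, which keeps the trajectories bounded.

    import numpy as np

    def simulate_ltn(W, b, x0, dt=0.01, steps=5000, tau=1.0):
        """Integrate dx/dt = (-x + W f(x) + b) / tau with the linear threshold f(x) = max(x, 0)."""
        x = x0.copy()
        for _ in range(steps):
            x += dt / tau * (-x + W @ np.maximum(x, 0.0) + b)
        return x

    rng = np.random.default_rng(1)
    n = 6
    W = -np.abs(rng.normal(scale=0.4, size=(n, n)))   # cross-inhibition; asymmetric by construction
    np.fill_diagonal(W, 0.5)                          # local positive feedback, kept < 1 for stability
    b = rng.uniform(0.0, 1.0, size=n)                 # external drive
    x_final = simulate_ltn(W, b, x0=np.zeros(n))
    print("steady-state activity:", np.round(x_final, 3))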
3D CFD Quantification of the Performance of a Multi-Megawatt Wind Turbine
NASA Astrophysics Data System (ADS)
Laursen, J.; Enevoldsen, P.; Hjort, S.
2007-07-01
This paper presents the results of 3D CFD rotor computations of a Siemens SWT-2.3-93 variable speed wind turbine with 45m blades. In the paper CFD is applied to a rotor at stationary wind conditions without wind shear, using the commercial multi-purpose CFD-solvers ANSYS CFX 10.0 and 11.0. When comparing modelled mechanical effects with findings from other models and measurements, good agreement is obtained. Similarly the computed force distributions compare very well, whereas some discrepancies are found when comparing with an in-house BEM model. By applying the reduced axial velocity method the local angle of attack has been derived from the CFD solutions, and from this knowledge and the computed force distributions, local airfoil profile coefficients have been computed and compared to BEM airfoil coefficients. Finally, the transition model of Langtry and Menter is tested on the rotor, and the results are compared with the results from the fully turbulent setup.
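For reference, a local angle of attack recovered from velocities sampled at the rotor plane is usually written with the standard blade-element geometry below; the symbols (u_a for the reduced axial velocity, u_theta for the tangential induced velocity, Omega r for the local rotational speed, and the twist and pitch angles) are assumed notation, not the paper's.

$$\phi = \arctan\!\left(\frac{u_a}{\Omega r - u_\theta}\right), \qquad \alpha = \phi - \left(\theta_{\mathrm{twist}} + \theta_{\mathrm{pitch}}\right)$$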
The change in critical technologies for computational physics
NASA Technical Reports Server (NTRS)
Watson, Val
1990-01-01
It is noted that the types of technology required for computational physics are changing as the field matures. Emphasis has shifted from computer technology to algorithm technology and, finally, to visual analysis technology as areas of critical research for this field. High-performance graphical workstations tied to a supercomputer with high-speed communications, along with the development of specially tailored visualization software, have enabled analysis of highly complex fluid-dynamics simulations. Particular reference is made here to the development of visual analysis tools at NASA's Numerical Aerodynamics Simulation Facility. The next technology which this field requires is one that would eliminate visual clutter by extracting key features of simulations of physics and technology in order to create displays that clearly portray these key features. Research in the tuning of visual displays to human cognitive abilities is proposed. The immediate transfer of technology to all levels of computers, specifically the inclusion of visualization primitives in basic software developments for all workstations and PCs, is recommended.
NASA Astrophysics Data System (ADS)
López, J.; Hernández, J.; Gómez, P.; Faura, F.
2018-02-01
The VOFTools library includes efficient analytical and geometrical routines for (1) area/volume computation, (2) truncation operations that typically arise in VOF (volume of fluid) methods, (3) area/volume conservation enforcement (VCE) in PLIC (piecewise linear interface calculation) reconstruction and (4) computation of the distance from a given point to the reconstructed interface. The computation of a polyhedron volume uses an efficient formula based on a quadrilateral decomposition and a 2D projection of each polyhedron face. The analytical VCE method is based on coupling an interpolation procedure to bracket the solution with an improved final calculation step based on the above volume computation formula. Although the library was originally created to help develop highly accurate advection and reconstruction schemes in the context of VOF methods, it may have more general applications. To assess the performance of the supplied routines, different tests, which are provided in FORTRAN and C, were implemented for several 2D and 3D geometries.
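As a point of comparison for the volume computation described above, the short sketch below evaluates a polyhedron volume with the generic divergence-theorem formula over fan-triangulated faces; it is illustrative only and does not reproduce VOFTools' quadrilateral-decomposition and 2D-projection formula.

```python
# Illustrative only -- not VOFTools' routine. Volume of a closed polyhedron
# with outward-oriented polygonal faces via the divergence theorem:
#   V = (1/6) * sum over boundary triangles of v0 . (v1 x v2).
import numpy as np

def polyhedron_volume(vertices, faces):
    """vertices: (N, 3) array; faces: list of vertex-index loops in outward order."""
    V = np.asarray(vertices, dtype=float)
    vol = 0.0
    for face in faces:
        v0 = V[face[0]]
        for i in range(1, len(face) - 1):          # fan-triangulate each face
            v1, v2 = V[face[i]], V[face[i + 1]]
            vol += np.dot(v0, np.cross(v1, v2))
    return vol / 6.0

# Unit cube: 8 vertices, 6 quadrilateral faces ordered with outward normals.
verts = [(0,0,0),(1,0,0),(1,1,0),(0,1,0),(0,0,1),(1,0,1),(1,1,1),(0,1,1)]
faces = [(0,3,2,1),(4,5,6,7),(0,1,5,4),(1,2,6,5),(2,3,7,6),(3,0,4,7)]
print(polyhedron_volume(verts, faces))             # -> 1.0
```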
[Computer aided diagnosis model for lung tumor based on ensemble convolutional neural network].
Wang, Yuanyuan; Zhou, Tao; Lu, Huiling; Wu, Cuiying; Yang, Pengfei
2017-08-01
The convolutional neural network (CNN) can be used for computer-aided diagnosis of lung tumors with positron emission tomography (PET)/computed tomography (CT), providing accurate quantitative analysis to compensate for visual inertia and limitations in gray-scale sensitivity, and helping doctors diagnose accurately. Firstly, a parameter migration method is used to build three CNNs (CT-CNN, PET-CNN, and PET/CT-CNN) for lung tumor recognition in CT, PET, and PET/CT images, respectively. Then, CT-CNN is used to obtain appropriate model parameters for CNN training by analyzing the influence of model parameters such as epochs, batch size and image scale on recognition rate and training time. Finally, the three single CNNs are used to construct an ensemble CNN, lung tumor PET/CT recognition is completed through the relative majority vote method, and the performance of the ensemble CNN and the single CNNs is compared. The experimental results show that the ensemble CNN is better than a single CNN for computer-aided diagnosis of lung tumor.
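The combination step can be illustrated in a few lines of code. The sketch below applies a relative majority vote to the label predictions of three stand-in classifiers; the predictions and the function name are made up for illustration.

```python
# Minimal sketch of relative-majority-vote combination of three base
# classifiers (stand-ins for CT-CNN, PET-CNN and PET/CT-CNN).
import numpy as np

def relative_majority_vote(*predictions):
    """Each argument is an (n_samples,) array of integer class labels."""
    P = np.stack(predictions, axis=0)                  # (n_models, n_samples)
    n_classes = P.max() + 1
    votes = np.apply_along_axis(np.bincount, 0, P, minlength=n_classes)
    return votes.argmax(axis=0)                        # class with the most votes

ct_pred    = np.array([0, 1, 1, 0])
pet_pred   = np.array([0, 1, 0, 1])
petct_pred = np.array([1, 1, 0, 0])
print(relative_majority_vote(ct_pred, pet_pred, petct_pred))   # [0 1 0 0]
```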
On the computation of molecular surface correlations for protein docking using fourier techniques.
Sakk, Eric
2007-08-01
The computation of surface correlations using a variety of molecular models has been applied to the unbound protein docking problem. Because of the computational complexity involved in examining all possible molecular orientations, the fast Fourier transform (FFT) (a fast numerical implementation of the discrete Fourier transform (DFT)) is generally applied to minimize the number of calculations. This approach is rooted in the convolution theorem which allows one to inverse transform the product of two DFTs in order to perform the correlation calculation. However, such a DFT calculation results in a cyclic or "circular" correlation which, in general, does not lead to the same result as the linear correlation desired for the docking problem. In this work, we provide computational bounds for constructing molecular models used in the molecular surface correlation problem. The derived bounds are then shown to be consistent with various intuitive guidelines previously reported in the protein docking literature. Finally, these bounds are applied to different molecular models in order to investigate their effect on the correlation calculation.
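The circular-versus-linear distinction is easy to demonstrate numerically. The sketch below correlates two short 1D signals through the FFT, once at the original length (circular, with wrap-around contamination) and once zero-padded to N + M - 1 (linear); the signals are arbitrary stand-ins for molecular surface functions.

```python
# Circular vs. linear correlation via the DFT product conj(X) * Y.
import numpy as np

a = np.array([1.0, 2.0, 3.0])            # stand-ins for surface signals
b = np.array([0.0, 1.0, 0.5])

def fft_correlation(x, y, size):
    X, Y = np.fft.fft(x, size), np.fft.fft(y, size)
    return np.real(np.fft.ifft(np.conj(X) * Y))

circular = fft_correlation(a, b, a.size)               # cyclic (wraps around)
linear   = fft_correlation(a, b, a.size + b.size - 1)  # zero-padded to N + M - 1

# Direct linear correlation at non-negative lags, for comparison.
direct = [sum(a[n] * b[n + k] for n in range(a.size) if n + k < b.size)
          for k in range(b.size)]

print(circular)       # approx. [3.5, 2.0, 3.5] -- lag 2 contaminated by wrap-around
print(linear[:3])     # approx. [3.5, 2.0, 0.5] -- matches the direct linear result
print(direct)         # [3.5, 2.0, 0.5]
```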
Face Recognition in Humans and Machines
NASA Astrophysics Data System (ADS)
O'Toole, Alice; Tistarelli, Massimo
The study of human face recognition by psychologists and neuroscientists has run parallel to the development of automatic face recognition technologies by computer scientists and engineers. In both cases, there are analogous steps of data acquisition, image processing, and the formation of representations that can support the complex and diverse tasks we accomplish with faces. These processes can be understood and compared in the context of their neural and computational implementations. In this chapter, we present the essential elements of face recognition by humans and machines, taking a perspective that spans psychological, neural, and computational approaches. From the human side, we overview the methods and techniques used in the neurobiology of face recognition, the underlying neural architecture of the system, the role of visual attention, and the nature of the representations that emerge. From the computational side, we discuss face recognition technologies and the strategies they use to overcome challenges to robust operation over viewing parameters. Finally, we conclude the chapter with a look at some recent studies that compare human and machine performance at face recognition.
Computer-assisted revision total knee replacement.
Sikorski, J M
2004-05-01
A technique for performing allograft-augmented revision total knee replacement (TKR) using computer assistance is described, on the basis of the results in 14 patients. Bone deficits were made up with impaction grafting. Femoral grafting was made possible by the construction of a retaining wall or dam which allowed pressurisation and retention of the graft. Tibial grafting used a mixture of corticocancellous and morsellised allograft. The position of the implants was monitored by the computer system and adjusted while the cement was setting. The outcome was determined using a six-parameter, quantitative technique (the Perth CT protocol) which measured the alignment of the prosthesis and provided an objective score. The final outcomes were not perfect with errors being made in femoral rotation and in producing a mismatch between the femoral and tibial components. In spite of the shortcomings the alignments were comparable in accuracy with those after primary TKR. Computer assistance shows considerable promise in producing accurate alignment in revision TKR with bone deficits.
Optimizing the Four-Index Integral Transform Using Data Movement Lower Bounds Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajbhandari, Samyam; Rastello, Fabrice; Kowalski, Karol
The four-index integral transform is a fundamental and computationally demanding calculation used in many computational chemistry suites such as NWChem. It transforms a four-dimensional tensor from an atomic basis to a molecular basis. This transformation is most efficiently implemented as a sequence of four tensor contractions that each contract a four-dimensional tensor with a two-dimensional transformation matrix. Differing degrees of permutation symmetry in the intermediate and final tensors in the sequence of contractions cause intermediate tensors to be much larger than the final tensor and limit the number of electronic states in the modeled systems. Loop fusion, in conjunction with tiling, can be very effective in reducing the total space requirement, as well as data movement. However, the large number of possible choices for loop fusion and tiling, and data/computation distribution across a parallel system, make it challenging to develop an optimized parallel implementation for the four-index integral transform. We develop a novel approach to address this problem, using lower bounds modeling of data movement complexity. We establish relationships between available aggregate physical memory in a parallel computer system and ineffective fusion configurations, enabling their pruning and consequent identification of effective choices and a characterization of optimality criteria. This work has resulted in the development of a significantly improved implementation of the four-index transform that enables higher performance and the ability to model larger electronic systems than the current implementation in the NWChem quantum chemistry software suite.
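A dense, single-node caricature of the transform helps fix ideas: the sketch below performs the four quarter-transformations as successive tensor contractions with NumPy's einsum. The basis sizes and arrays are made up, and none of NWChem's distribution, tiling, or permutation symmetry is represented.

```python
# Schematic four-index transform as four successive tensor contractions.
import numpy as np

nao, nmo = 10, 6                            # atomic / molecular basis sizes (made up)
rng = np.random.default_rng(1)
eri_ao = rng.random((nao, nao, nao, nao))   # (pq|rs) in the atomic basis
C = rng.random((nao, nmo))                  # AO -> MO coefficient matrix

# Each step contracts one AO index with the transformation matrix C.
t1 = np.einsum('pqrs,pi->iqrs', eri_ao, C)
t2 = np.einsum('iqrs,qj->ijrs', t1, C)
t3 = np.einsum('ijrs,rk->ijks', t2, C)
eri_mo = np.einsum('ijks,sl->ijkl', t3, C)

print(eri_mo.shape)                         # (6, 6, 6, 6)
```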
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lorenz, Daniel; Wolf, Felix
2016-02-17
The PRIMA-X (Performance Retargeting of Instrumentation, Measurement, and Analysis Technologies for Exascale Computing) project is the successor of the DOE PRIMA (Performance Refactoring of Instrumentation, Measurement, and Analysis Technologies for Petascale Computing) project, which addressed the challenge of creating a core measurement infrastructure that would serve as a common platform for both integrating leading parallel performance systems (notably TAU and Scalasca) and developing next-generation scalable performance tools. The PRIMA-X project shifts the focus away from refactorization of robust performance tools towards a re-targeting of the parallel performance measurement and analysis architecture for extreme scales. The massive concurrency, asynchronous execution dynamics, hardware heterogeneity, and multi-objective prerequisites (performance, power, resilience) that identify exascale systems introduce fundamental constraints on the ability to carry forward existing performance methodologies. In particular, there must be a deemphasis of per-thread observation techniques to significantly reduce the otherwise unsustainable flood of redundant performance data. Instead, it will be necessary to assimilate multi-level resource observations into macroscopic performance views, from which resilient performance metrics can be attributed to the computational features of the application. This requires a scalable framework for node-level and system-wide monitoring and runtime analyses of dynamic performance information. Also, the interest in optimizing parallelism parameters with respect to performance and energy drives the integration of tool capabilities in the exascale environment further. Initially, PRIMA-X was a collaborative project between the University of Oregon (lead institution) and the German Research School for Simulation Sciences (GRS). Because Prof. Wolf, the PI at GRS, accepted a position as full professor at Technische Universität Darmstadt (TU Darmstadt) starting February 1st, 2015, the project ended at GRS on January 31st, 2015. This report reflects the work accomplished at GRS until then. The work of GRS is expected to be continued at TU Darmstadt. The first main accomplishment of GRS is the design of different thread-level aggregation techniques. We created a prototype capable of aggregating the thread-level information in performance profiles using these techniques. The next step will be the integration of the most promising techniques into the Score-P measurement system and their evaluation. The second main accomplishment is a substantial increase of Score-P’s scalability, achieved by improving the design of the system-tree representation in Score-P’s profile format. We developed a new representation and a distributed algorithm to create the scalable system tree representation. Finally, we developed a lightweight approach to MPI wait-state profiling. Former algorithms either needed piggy-backing, which can cause significant runtime overhead, or tracing, which comes with its own set of scaling challenges. Our approach works with local data only and, thus, is scalable and has very little overhead.
Solving large mixed linear models using preconditioned conjugate gradient iteration.
Strandén, I; Lidauer, M
1999-12-01
Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20 and 435% more time to solve the univariate and multivariate animal models, respectively. The second best iteration-on-data program took approximately three and five times longer than the new program for the animal and test-day models, respectively. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
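For readers unfamiliar with the underlying solver, the sketch below is a generic Jacobi-preconditioned conjugate gradient iteration for a small dense system; the paper's implementation instead iterates on data and never assembles the mixed model equations explicitly.

```python
# Generic Jacobi-preconditioned conjugate gradient for A x = b (A symmetric
# positive definite); illustrative only.
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=1000):
    M_inv = 1.0 / np.diag(A)                # Jacobi (diagonal) preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])      # small SPD test system
b = np.array([1.0, 2.0])
print(pcg(A, b))                            # approx. [0.0909, 0.6364]
```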
Monitoring performance of a highly distributed and complex computing infrastructure in LHCb
NASA Astrophysics Data System (ADS)
Mathe, Z.; Haen, C.; Stagni, F.
2017-10-01
In order to ensure an optimal performance of the LHCb Distributed Computing, based on LHCbDIRAC, it is necessary to be able to inspect the behavior over time of many components: firstly the agents and services on which the infrastructure is built, but also all the computing tasks and data transfers that are managed by this infrastructure. This consists of recording and then analyzing time series of a large number of observables, for which the usage of SQL relational databases is far from optimal. Therefore, within DIRAC we have been studying novel possibilities based on NoSQL databases (ElasticSearch, OpenTSDB and InfluxDB); as a result of this study we developed a new monitoring system based on ElasticSearch. It has been deployed on the LHCb Distributed Computing infrastructure, for which it collects data from all the components (agents, services, jobs) and allows creating reports through Kibana and a web user interface, which is based on the DIRAC web framework. In this paper we describe this new implementation of the DIRAC monitoring system. We give details on the ElasticSearch implementation within the DIRAC general framework, as well as an overview of the advantages of the pipeline aggregation used for creating a dynamic bucketing of the time series. We present the advantages of using the ElasticSearch DSL high-level library for creating and running queries. Finally, we present the performance of the system.
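A flavour of the kind of query and bucketed aggregation described here can be given with the elasticsearch-dsl Python package; the index name, field names, and aggregation below are invented for illustration and are not the actual DIRAC monitoring schema or client code.

```python
# Hypothetical sketch using the elasticsearch / elasticsearch-dsl packages.
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search

es = Elasticsearch(["http://localhost:9200"])

# Query jobs from the last 24 hours and bucket them into hourly time bins.
s = Search(using=es, index="lhcb-jobs") \
        .filter("range", timestamp={"gte": "now-24h"})
s.aggs.bucket("per_hour", "date_histogram",
              field="timestamp", fixed_interval="1h") \
      .metric("running", "avg", field="running_jobs")

response = s.execute()
for bucket in response.aggregations.per_hour.buckets:
    print(bucket.key_as_string, bucket.running.value)
```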
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Leary, Patrick
The primary challenge motivating this project is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who can perform analysis only on a small fraction of the data they calculate, resulting in the substantial likelihood of lost or missed science, when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing on data while it is still resident in memory, which is known as in situ processing. The idea of in situ processing was not new at the time of the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community that aimed to foster production-quality software tools suitable for use by Department of Energy (DOE) science projects. Our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE high-performance computing (HPC) facilities, though we expected to have an impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve this objective, we engaged in software technology research and development (R&D), in close partnerships with DOE science code teams, to produce software technologies that were shown to run efficiently at scale on DOE HPC platforms.
Service Migration from Cloud to Multi-tier Fog Nodes for Multimedia Dissemination with QoE Support
Camargo, João; Rochol, Juergen; Gerla, Mario
2018-01-01
A wide range of multimedia services is expected to be offered to mobile users via various wireless access networks. Even the integration of Cloud Computing in such networks does not support an adequate Quality of Experience (QoE) in areas with high demands for multimedia contents. Fog computing has been conceptualized to facilitate the deployment of new services that cloud computing cannot provide, particularly those demanding QoE guarantees. These services are provided using fog nodes located at the network edge, which are capable of virtualizing their functions/applications. Service migration from the cloud to fog nodes can be actuated by request patterns and timing issues. To the best of our knowledge, existing works on fog computing focus on architecture and fog node deployment issues. In this article, we describe the operational impacts and benefits associated with service migration from the cloud to multi-tier fog computing for video distribution with QoE support. Besides that, we evaluate such service migration for video services. Finally, we present potential research challenges and trends. PMID:29364172
NASA Technical Reports Server (NTRS)
Greathouse, James S.; Schwing, Alan M.
2015-01-01
This paper explores the use of computational fluid dynamics to study the effect of geometric porosity on static stability and drag for NASA's Multi-Purpose Crew Vehicle main parachute. Both of these aerodynamic characteristics are of interest in parachute design, and computational methods promise designers the ability to perform detailed parametric studies and other design iterations with a level of control previously unobtainable using ground or flight testing. The approach presented here uses a canopy structural analysis code to define the inflated parachute shapes on which structured computational grids are generated. These grids are used by the computational fluid dynamics code OVERFLOW and are modeled as rigid, impermeable bodies for this analysis. Comparisons to Apollo drop test data are shown as preliminary validation of the technique. Results include several parametric sweeps through design variables in order to better understand the trade between static stability and drag. Finally, designs that maximize static stability with a minimal loss in drag are suggested for further study in subscale ground and flight testing.
Service Migration from Cloud to Multi-tier Fog Nodes for Multimedia Dissemination with QoE Support.
Rosário, Denis; Schimuneck, Matias; Camargo, João; Nobre, Jéferson; Both, Cristiano; Rochol, Juergen; Gerla, Mario
2018-01-24
A wide range of multimedia services is expected to be offered to mobile users via various wireless access networks. Even the integration of Cloud Computing in such networks does not support an adequate Quality of Experience (QoE) in areas with high demands for multimedia contents. Fog computing has been conceptualized to facilitate the deployment of new services that cloud computing cannot provide, particularly those demanding QoE guarantees. These services are provided using fog nodes located at the network edge, which are capable of virtualizing their functions/applications. Service migration from the cloud to fog nodes can be actuated by request patterns and timing issues. To the best of our knowledge, existing works on fog computing focus on architecture and fog node deployment issues. In this article, we describe the operational impacts and benefits associated with service migration from the cloud to multi-tier fog computing for video distribution with QoE support. Besides that, we evaluate such service migration for video services. Finally, we present potential research challenges and trends.
Xu, Qun; Wang, Xianchao; Xu, Chao
2017-06-01
Multiplication on traditional electronic computers suffers from limited calculation accuracy and long computation delays. To overcome these problems, a modified signed digit (MSD) multiplication routine is established based on the MSD system and a carry-free adder, and its parallel algorithm and optimization techniques are studied in detail. Exploiting the characteristics of a ternary optical computer, a structured data processor is designed especially for the multiplication routine. Several ternary optical operators are constructed to perform M transformations and summations in parallel, which accelerates the iterative process of multiplication. In particular, the routine allocates data bits of the ternary optical processor based on the digits of the multiplication input, so the accuracy of the calculation results can always satisfy the users. Finally, the routine is verified by simulation experiments, and the results are in full compliance with expectations. Compared with an electronic computer, the MSD multiplication routine is not only good at dealing with large-value data and high-precision arithmetic, but also maintains lower power consumption and shorter calculation delays.
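To give a feel for signed-digit arithmetic, the sketch below converts non-negative integers to a signed-digit representation with digits in {-1, 0, 1} (the standard non-adjacent form) and back; it illustrates the number system only, not the paper's MSD encoding, its M transformations, or the optical carry-free adder.

```python
# Signed-digit (non-adjacent form) encoding with digits in {-1, 0, 1};
# assumes n >= 0. Illustrative of the number system only.
def to_signed_digits(n):
    """Return least-significant-first digits d_i in {-1, 0, 1} with n = sum d_i * 2**i."""
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)      # -1 if n = 3 (mod 4), else +1
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits or [0]

def from_signed_digits(digits):
    return sum(d * (1 << i) for i, d in enumerate(digits))

for value in (7, 46, 255):
    d = to_signed_digits(value)
    print(value, d, from_signed_digits(d) == value)
```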
Influence of urban pattern on inundation flow in floodplains of lowland rivers.
Bruwier, M; Mustafa, A; Aliaga, D G; Archambeau, P; Erpicum, S; Nishida, G; Zhang, X; Pirotton, M; Teller, J; Dewals, B
2018-05-01
The objective of this paper is to investigate the respective influence of various urban pattern characteristics on inundation flow. A set of 2000 synthetic urban patterns were generated using an urban procedural model providing locations and shapes of streets and buildings over a square domain of 1×1 km². Steady two-dimensional hydraulic computations were performed over the 2000 urban patterns with identical hydraulic boundary conditions. To run such a large number of simulations, the computational efficiency of the hydraulic model was improved by using an anisotropic porosity model. This model computes on relatively coarse computational cells, but preserves information from the detailed topographic data through porosity parameters. Relationships between urban characteristics and the computed inundation water depths were established using multiple linear regressions. Finally, a simple mechanistic model based on two district-scale porosity parameters, combining several urban characteristics, is shown to capture satisfactorily the influence of urban characteristics on inundation water depths. The findings of this study give guidelines for more flood-resilient urban planning. Copyright © 2017 Elsevier B.V. All rights reserved.
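A toy version of the regression step might look like the sketch below, which fits a multiple linear regression of a synthetic water depth on two invented urban descriptors; the feature names, coefficients, and data are illustrative only and do not come from the paper.

```python
# Toy multiple linear regression: water depth vs. two invented urban descriptors.
import numpy as np

rng = np.random.default_rng(2)
n = 200
street_width   = rng.uniform(5, 25, n)        # m (made up)
building_cover = rng.uniform(0.1, 0.6, n)     # plan-area fraction (made up)
mean_depth = 0.8 - 0.02 * street_width + 1.5 * building_cover \
             + 0.05 * rng.standard_normal(n)  # synthetic "hydraulic" response

X = np.column_stack([np.ones(n), street_width, building_cover])
coef, *_ = np.linalg.lstsq(X, mean_depth, rcond=None)
print("intercept, width, cover coefficients:", np.round(coef, 3))
```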
Computation of Asteroid Proper Elements on the Grid
NASA Astrophysics Data System (ADS)
Novakovic, B.; Balaz, A.; Knezevic, Z.; Potocnik, M.
2009-12-01
A procedure for gridification of the computation of asteroid proper orbital elements is described. The need to speed up the time-consuming computations and make them more efficient is justified by the large increase in observational data expected from the next generation of all-sky surveys. We give the basic notion of proper elements and of the contemporary theories and methods used to compute them for different populations of objects. Proper elements for nearly 70,000 asteroids have been derived since the Grid infrastructure was first used for this purpose. The average time for the catalog updates is significantly shortened with respect to the time needed with stand-alone workstations. We also present the basics of Grid computing, the concepts of Grid middleware and its Workload Management System. The practical steps we undertook to efficiently gridify our application are described in full detail. We present the results of a comprehensive testing of the performance of different Grid sites, and offer some practical conclusions based on the benchmark results and on our experience. Finally, we propose some possibilities for future work.
Toward performance portability of the Albany finite element analysis code using the Kokkos library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demeshko, Irina; Watkins, Jerry; Tezaur, Irina K.
Performance portability on heterogeneous high-performance computing (HPC) systems is a major challenge faced today by code developers: parallel code needs to be executed correctly as well as with high performance on machines with different architectures, operating systems, and software libraries. The finite element method (FEM) is a popular and flexible method for discretizing partial differential equations arising in a wide variety of scientific, engineering, and industrial applications that require HPC. This paper presents some preliminary results pertaining to our development of a performance portable implementation of the FEM-based Albany code. Performance portability is achieved using the Kokkos library. We present performance results for the Aeras global atmosphere dynamical core module in Albany. Finally, numerical experiments show that our single code implementation gives reasonable performance across three multicore/many-core architectures: NVIDIA General Processing Units (GPUs), Intel Xeon Phis, and multicore CPUs.
Flow Control Under Low-Pressure Turbine Conditions Using Pulsed Jets
NASA Technical Reports Server (NTRS)
Volino, Ralph J.; Ibrahim, Mounir B.
2012-01-01
This publication is the final report of research performed under an NRA/Cooperative Interagency Agreement, and includes a supplemental CD-ROM with detailed data. It is complemented by NASA/CR-2012-217416 and NASA/CR-2012-217417 which include a Ph.D. Dissertation and an M.S. thesis respectively, performed under this contract. In this study the effects of unsteady wakes and flow control using vortex generator jets (VGJs) were studied experimentally and computationally on the flow over the L1A low pressure turbine (LPT) airfoil. The experimental facility was a six passage linear cascade in a low speed wind tunnel at the U.S. Naval Academy. In parallel, computational work using the commercial code FLUENT (ANSYS, Inc.) was performed at Cleveland State University, using Unsteady Reynolds Averaged Navier Stokes (URANS) and Large Eddy Simulations (LES) methods. In the first phase of the work, the baseline flow was documented under steady inflow conditions without flow control. URANS calculations were done using a variety of turbulence models. In the second phase of the work, flow control was added using steady and pulsed vortex generator jets. The VGJs successfully suppressed separation and reduced aerodynamic losses. Pulsed operation was more effective and mass flow requirements are very low. Numerical simulations of the VGJs cases showed that URANS failed to capture the effect of the jets. LES results were generally better. In the third phase, effects of unsteady wakes were studied. Computations with URANS and LES captured the wake effect and generally predicted separation and reattachment to match the experiments. Quantitatively the results were mixed. In the final phase of the study, wakes and VGJs were combined and synchronized using various timing schemes. The timing of the jets with respect to the wakes had some effect, but in general once the disturbance frequency was high enough to control separation, the timing was not very important.
Flow Control Under Low-Pressure Turbine Conditions Using Pulsed Jets: Experimental Data Archive
NASA Technical Reports Server (NTRS)
Volino, Ralph J.; Ibrahim, Mounir B.
2012-01-01
This publication is the final report of research performed under an NRA/Cooperative Interagency Agreement, and includes a supplemental CD-ROM with detailed data. It is complemented by NASA/CR-2012-217416 and NASA/CR-2012-217417 which include a Ph.D. Dissertation and an M.S. thesis respectively, performed under this contract. In this study the effects of unsteady wakes and flow control using vortex generator jets (VGJs) were studied experimentally and computationally on the flow over the L1A low pressure turbine (LPT) airfoil. The experimental facility was a six passage linear cascade in a low speed wind tunnel at the U.S. Naval Academy. In parallel, computational work using the commercial code FLUENT (ANSYS, Inc.) was performed at Cleveland State University, using Unsteady Reynolds Averaged Navier Stokes (URANS) and Large Eddy Simulations (LES) methods. In the first phase of the work, the baseline flow was documented under steady inflow conditions without flow control. URANS calculations were done using a variety of turbulence models. In the second phase of the work, flow control was added using steady and pulsed vortex generator jets. The VGJs successfully suppressed separation and reduced aerodynamic losses. Pulsed operation was more effective and mass flow requirements are very low. Numerical simulations of the VGJs cases showed that URANS failed to capture the effect of the jets. LES results were generally better. In the third phase, effects of unsteady wakes were studied. Computations with URANS and LES captured the wake effect and generally predicted separation and reattachment to match the experiments. Quantitatively the results were mixed. In the final phase of the study, wakes and VGJs were combined and synchronized using various timing schemes. The timing of the jets with respect to the wakes had some effect, but in general once the disturbance frequency was high enough to control separation, the timing was not very important. This is the supplemental CD-ROM
Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi
2011-11-01
Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks. Copyright © 2011 Elsevier Ltd. All rights reserved.
Computers in medical education 1: evaluation of a problem-orientated learning package.
Devitt, P; Palmer, E
1998-04-01
A computer-based learning package has been developed, aimed at expanding students' knowledge base as well as improving data-handling abilities and clinical problem-solving skills. The program was evaluated by monitoring its use by students, canvassing users' opinions and measuring its effectiveness as a learning tool compared to tutorials on the same material. Evaluation was undertaken using three methods: initially, by a questionnaire on computers as a learning tool and the applicability of the content; second, through monitoring by the computer of student use, decisions and performance; finally, through pre- and post-test assessment of fifth-year students who either used the computer package or attended a tutorial on equivalent material. Most students provided positive comments on the learning material and expressed a willingness to see computer-aided learning (CAL) introduced into the curriculum. Over a 3-month period, 26 modules in the program were used on 1246 occasions. Objective measurement showed a significant gain in knowledge, data handling and problem-solving skills. Computer-aided learning is a valuable learning resource that deserves better attention in medical education. When used appropriately, the computer can be an effective learning resource, not only for the delivery of knowledge, but also to help students develop their problem-solving skills.
Discovering Synergistic Drug Combination from a Computational Perspective.
Ding, Pingjian; Luo, Jiawei; Liang, Cheng; Xiao, Qiu; Cao, Buwen; Li, Guanghui
2018-03-30
Synergistic drug combinations play an important role in the treatment of complex diseases. The identification of effective drug combinations is vital to further reduce side effects and improve therapeutic efficiency. In previous years, the in vitro method has been the main route to discover synergistic drug combinations; however, it carries many limitations in time and resource consumption. Therefore, with the rapid development of computational models and the explosive growth of large-scale phenotypic data, computational methods for discovering synergistic drug combinations are an efficient and promising tool and contribute to precision medicine. How the computational model is constructed is key for these methods, and different computational strategies yield different performance. In this review, recent advancements in computational methods for predicting effective drug combinations are summarized from multiple aspects. First, various datasets utilized to discover synergistic drug combinations are summarized. Second, we discuss feature-based approaches and partition these methods into two classes: feature-based methods in terms of similarity measures, and feature-based methods in terms of machine learning. Third, we discuss network-based approaches for uncovering synergistic drug combinations. Finally, we analyze and offer prospects for computational methods for predicting effective drug combinations. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Ship Detection from Ocean SAR Image Based on Local Contrast Variance Weighted Information Entropy
Huang, Yulin; Pei, Jifang; Zhang, Qian; Gu, Qin; Yang, Jianyu
2018-01-01
Ship detection from synthetic aperture radar (SAR) images is one of the crucial issues in maritime surveillance. However, due to the varying ocean waves and the strong echo of the sea surface, it is very difficult to detect ships against heterogeneous and strong clutter backgrounds. In this paper, an innovative ship detection method is proposed to effectively distinguish vessels from complex backgrounds in a SAR image. First, the input SAR image is pre-screened by the maximally-stable extremal region (MSER) method, which can obtain the ship candidate regions with low computational complexity. Then, the proposed local contrast variance weighted information entropy (LCVWIE) is adopted to evaluate the complexity of those candidate regions and the dissimilarity between the candidate regions and their neighborhoods. Finally, the LCVWIE values of the candidate regions are compared with an adaptive threshold to obtain the final detection result. Experimental results based on measured ocean SAR images have shown that the proposed method can obtain stable detection performance in both strong clutter and heterogeneous backgrounds. Meanwhile, it has a low computational complexity compared with some existing detection methods. PMID:29652863
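The paper's exact LCVWIE formula is not reproduced here, so the sketch below shows one plausible reading of a contrast-variance-weighted entropy score: the Shannon entropy of a candidate region's intensity histogram scaled by a local contrast-variance weight. The weighting, bin count, and test patches are all assumptions for illustration.

```python
# Hypothetical contrast-variance-weighted entropy for a candidate region.
import numpy as np

def weighted_entropy(region, n_bins=64):
    """region: 2D array of SAR intensities for one MSER candidate region."""
    hist, _ = np.histogram(region, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))                  # Shannon entropy of the region
    contrast_var = np.var(region) / (np.mean(region) ** 2 + 1e-12)
    return contrast_var * entropy                      # contrast-variance weighting

rng = np.random.default_rng(3)
clutter = rng.rayleigh(1.0, size=(32, 32))             # sea-clutter-like patch
ship = clutter.copy(); ship[12:20, 12:20] += 6.0       # bright target embedded in clutter
print(weighted_entropy(clutter), weighted_entropy(ship))
```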
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vance, J.N.; Holderness, J.H.; James, D.W.
1992-12-01
Waste stream scaling factors based on sampling programs are vulnerable to one or more of the following factors: sample representativeness, analytic accuracy, and measurement sensitivity. As an alternative to sample analyses or as a verification of the sampling results, this project proposes the use of the RADSOURCE code, which accounts for the release of fuel-source radionuclides. Once the release rates of these nuclides from fuel are known, the code develops scaling factors for waste streams based on easily measured Cobalt-60 (Co-60) and Cesium-137 (Cs-137). The project team developed mathematical models to account for the appearance rate of 10CFR61 radionuclides in reactor coolant. They based these models on the chemistry and nuclear physics of the radionuclides involved. Next, they incorporated the models into a computer code that calculates plant waste stream scaling factors based on reactor coolant gamma-isotopic data. Finally, the team performed special sampling at 17 reactors to validate the models in the RADSOURCE code.
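The arithmetic behind a scaling factor can be sketched with hypothetical numbers: the ratio of a hard-to-measure nuclide's activity to that of an easily measured key gamma emitter, derived from characterized samples, is applied to routine Co-60 measurements. All values below are invented for illustration.

```python
# Hypothetical numbers only: scaling-factor arithmetic for a hard-to-measure nuclide.
co60_ref, ni63_ref = 4.0e2, 1.2e1        # Bq/g in a characterized waste sample
scaling_factor = ni63_ref / co60_ref     # Ni-63 activity per unit Co-60 activity

co60_measured = 2.5e3                    # Bq/g from routine gamma spectroscopy
ni63_estimate = scaling_factor * co60_measured
print(f"SF = {scaling_factor:.3f}, estimated Ni-63 = {ni63_estimate:.1f} Bq/g")
```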
Modelling mid-course corrections for optimality conditions along interplanetary transfers
NASA Astrophysics Data System (ADS)
Iorfida, Elisabetta; Palmer, Phil; Roberts, Mark
2014-12-01
Within the field of trajectory optimisation, Lawden developed primer vector theory, which defines a set of necessary conditions to characterise whether a transfer trajectory, in the two-body problem context, is optimal with respect to propellant usage. If the conditions are not satisfied, a region of the transfer trajectory is identified in which one or more potential intermediate impulses are performed in order to lower the overall cost. The method is computationally complex owing to having to solve a boundary value problem. This paper presents a new propagator that reduces the mathematical complexity and the computational cost of the problem; in particular, it exploits a separation between the in-plane and out-of-plane components of the primer vector along the transfer trajectory. Using this propagator, the optimality of the transfer arc has been investigated while varying the departure and arrival orbits. In particular, keeping the transfer trajectory fixed, optimality has been extensively analysed while varying both the initial and final positions on the orbit, together with the directions of the initial and final thrust impulses.
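For reference, Lawden's necessary conditions for an optimal impulsive transfer are commonly stated as follows (generic notation, not the paper's): the primer vector $\mathbf p$ obeys the variational equation

$$\ddot{\mathbf p} = G(\mathbf r)\,\mathbf p, \qquad G = \frac{\partial \mathbf g}{\partial \mathbf r},$$

where $\mathbf g$ is the gravitational acceleration, with (i) $\mathbf p$ and $\dot{\mathbf p}$ continuous along the trajectory, (ii) $\|\mathbf p\| \le 1$ everywhere and $\|\mathbf p\| = 1$ at every impulse, (iii) $\mathbf p$ aligned with the impulse direction at each impulse, and (iv) $\mathrm d\|\mathbf p\|/\mathrm dt = 0$ at interior impulses.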
N3LO corrections to jet production in deep inelastic scattering using the Projection-to-Born method
NASA Astrophysics Data System (ADS)
Currie, J.; Gehrmann, T.; Glover, E. W. N.; Huss, A.; Niehues, J.; Vogt, A.
2018-05-01
Computations of higher-order QCD corrections for processes with exclusive final states require a subtraction method for real-radiation contributions. We present the first-ever generalisation of a subtraction method for third-order (N3LO) QCD corrections. The Projection-to-Born method is used to combine inclusive N3LO coefficient functions with an exclusive second-order (NNLO) calculation for a final state with an extra jet. The input requirements, advantages, and potential applications of the method are discussed, and validations at lower orders are performed. As a test case, we compute the N3LO corrections to kinematical distributions and production rates for single-jet production in deep inelastic scattering in the laboratory frame, and compare them with data from the ZEUS experiment at HERA. The corrections are small in the central rapidity region, where they stabilize the predictions to sub per-cent level. The corrections increase substantially towards forward rapidity where large logarithmic effects are expected, thereby yielding an improved description of the data in this region.
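Schematically, the Projection-to-Born construction combines the exclusive lower-order X+jet calculation with an inclusive higher-order calculation assigned to Born kinematics; in generic form (notation assumed, not the paper's),

$$\mathrm{d}\sigma^{\mathrm{N^3LO}}_{\mathrm{P2B}} \;=\; \mathrm{d}\sigma^{\mathrm{NNLO}}_{X+\mathrm{jet}} \;-\; \mathrm{d}\sigma^{\mathrm{NNLO}}_{X+\mathrm{jet}\,\to\,\mathrm{Born}} \;+\; \mathrm{d}\sigma^{\mathrm{N^3LO}}_{\mathrm{incl.},\,\mathrm{Born}},$$

where the Born-projected subtraction removes the double counting between the exclusive X+jet result and the inclusive coefficient functions.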
Horn-Ritzinger, Sabine; Bernhardt, Johannes; Horn, Michael; Smolle, Josef
2011-04-01
The importance of inductive instruction in medical education is increasingly growing. Little is known about the relevance of prior knowledge regarding students' inductive reasoning abilities. The purpose is to evaluate this inductive teaching method as a means of fostering higher levels of learning and to explore how individual differences in prior knowledge (high [HPK] vs. low [LPK]) contribute to students' inductive reasoning skills. Twenty-six LPK and 18 HPK students could train twice with an interactive computer-based training object to discover the underlying concept before doing the final comprehension check. Students had a median of 76.9% of correct answers in the first, 90.9% in the second training, and answered 92% of the final assessment questions correctly. More important, 86% of all students succeeded with inductive learning, among them 83% of the HPK students and 89% of the LPK students. Prior knowledge did not predict performance on overall comprehension. This inductive instructional strategy fostered students' deep approaches to learning in a time-effective way.
Distributed Issues for Ada Real-Time Systems
1990-07-23
Report documentation fragment: contract MDA 903-87-C-0056; author Thomas E. Griest. ...considerations. Adding to the problem of distributed real-time systems is the issue of maintaining a common sense of time among all of the processors ... because someone is waiting for the final output of a very large set of computations. However, in real-time systems, consistent meeting of short-term
Adapting a Navier-Stokes code to the ICL-DAP
NASA Technical Reports Server (NTRS)
Grosch, C. E.
1985-01-01
The results of an experiment to adapt a Navier-Stokes code, originally developed on a serial computer, to concurrent processing on the ICL Distributed Array Processor (DAP) are reported. The algorithm used in solving the Navier-Stokes equations is briefly described. The architecture of the DAP and DAP FORTRAN are also described. The modifications of the algorithm to fit the DAP are given and discussed. Finally, performance results are given and conclusions are drawn.
Hybrid Parallelization of Adaptive MHD-Kinetic Module in Multi-Scale Fluid-Kinetic Simulation Suite
Borovikov, Sergey; Heerikhuisen, Jacob; Pogorelov, Nikolai
2013-04-01
The Multi-Scale Fluid-Kinetic Simulation Suite has a computational tool set for solving partially ionized flows. In this paper we focus on recent developments of the kinetic module which solves the Boltzmann equation using the Monte-Carlo method. The module has been recently redesigned to utilize intra-node hybrid parallelization. We describe in detail the redesign process, implementation issues, and modifications made to the code. Finally, we conduct a performance analysis.
Coincident Extraction of Line Objects from Stereo Image Pairs.
1983-09-01
[Table of contents fragment: 4.4.3 Reconstruction of intersections; 4.5 Final result processing; 5. Presentation of the results; 5.1 FIM image processing system; 5.2 Extraction results in ... image.] To achieve this goal, the existing software system had to be modified and extended considerably. The following sections of this report will give ... 8000 pixels of each image without explicit loading of subimages could not yet be performed due to computer system software problems.
An efficient annealing in Boltzmann machine in Hopfield neural network
NASA Astrophysics Data System (ADS)
Kin, Teoh Yeong; Hasan, Suzanawati Abu; Bulot, Norhisam; Ismail, Mohammad Hafiz
2012-09-01
This paper proposes and implements a Boltzmann machine in a Hopfield neural network doing logic programming based on energy minimization. Temperature scheduling in the Boltzmann machine enhances the performance of logic programming in the Hopfield neural network. The best temperature is determined by observing the ratio of global solutions and the final Hamming distance using computer simulations. The study shows that the Boltzmann machine model is more stable and competent in terms of representing and solving difficult combinatorial problems.
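As a minimal sketch of the annealing idea (not the paper's logic-programming network), the code below performs stochastic Boltzmann updates on a Hopfield-style energy E = -1/2 s'Ws - b's, accepting single-unit flips with probability 1/(1 + exp(dE/T)) under a geometric cooling schedule; the weights and schedule parameters are arbitrary.

```python
# Boltzmann-machine-style stochastic updates on a Hopfield energy with annealing.
import numpy as np

rng = np.random.default_rng(4)
n = 16
W = rng.standard_normal((n, n)); W = 0.5 * (W + W.T); np.fill_diagonal(W, 0.0)
b = rng.standard_normal(n)
s = rng.choice([-1.0, 1.0], size=n)

def energy(s):
    return -0.5 * s @ W @ s - b @ s

T = 5.0
while T > 0.01:
    for _ in range(n):
        i = rng.integers(n)
        dE = 2.0 * s[i] * (W[i] @ s + b[i])        # energy change if s[i] flips
        if rng.random() < 1.0 / (1.0 + np.exp(dE / T)):
            s[i] = -s[i]
    T *= 0.95                                      # geometric cooling schedule

print("final energy:", round(energy(s), 3))
```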
Blade Displacement Predictions for the Full-Scale UH-60A Airloads Rotor
NASA Technical Reports Server (NTRS)
Bledron, Robert T.; Lee-Rausch, Elizabeth M.
2014-01-01
An unsteady Reynolds-Averaged Navier-Stokes solver for unstructured grids is loosely coupled to a rotorcraft comprehensive code and used to simulate two different test conditions from a wind-tunnel test of a full-scale UH-60A rotor. Performance data and sectional airloads from the simulation are compared with corresponding tunnel data to assess the level of fidelity of the aerodynamic aspects of the simulation. The focus then turns to a comparison of the blade displacements, both rigid (blade root) and elastic. Comparisons of computed root motions are made with data from three independent measurement systems. Finally, comparisons are made between computed elastic bending and elastic twist, and the corresponding measurements obtained from a photogrammetry system. Overall the correlation between computed and measured displacements was good, especially for the root pitch and lag motions and the elastic bending deformation. The correlation of root lead-lag motion and elastic twist deformation was less favorable.
Navier-Stokes calculations of scramjet-nozzle-afterbody flowfields
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1991-01-01
A comprehensive computational fluid dynamics effort was conducted from 1987 to 1990 to properly design a nozzle and lower aft end of a generic hypersonic vehicle powered by a scramjet engine. The interference of the exhaust on the control surfaces of the vehicle can have adverse effects on its stability. Two-dimensional Navier-Stokes computations were performed, where the exhaust gas was assumed to be air behaving as a perfect gas. Then the exhaust was simulated by a mixture of Freon-12 and argon, which required solving the Navier-Stokes equations for four species (nitrogen, oxygen, Freon-12, and argon). This allowed gamma to be a field variable during the mixing of the multispecies gases. Two different mixing models were used and comparisons between them as well as the perfect gas air calculations were made to assess their relative merits. Finally, the three-dimensional Navier-Stokes computations were made for the full-span scramjet nozzle afterbody module.
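Treating gamma as a field variable amounts to evaluating the local ratio of mixture specific heats from the species mass fractions; for a mixture of thermally perfect gases this is the standard relation (notation assumed, not the report's)

$$\gamma(\mathbf{x},t) \;=\; \frac{\sum_i Y_i\, c_{p,i}}{\sum_i Y_i\, c_{v,i}},$$

where the Y_i are the local mass fractions of nitrogen, oxygen, Freon-12, and argon.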
A pervasive parallel framework for visualization: final report for FWP 10-014707
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth D.
2014-01-01
We are on the threshold of a transformative change in the basic architecture of high-performance computing. The use of accelerator processors, characterized by large core counts, shared but asymmetrical memory, and heavy thread loading, is quickly becoming the norm in high performance computing. These accelerators represent significant challenges in updating our existing base of software. An intrinsic problem with this transition is a fundamental programming shift from message passing processes to much finer thread scheduling with memory sharing. Another problem is the lack of stability in accelerator implementation; processor and compiler technology is currently changing rapidly. This report documents the results of our three-year ASCR project to address these challenges. Our project includes the development of the Dax toolkit, which contains the beginnings of new algorithms for a new generation of computers and the underlying infrastructure to rapidly prototype and build further algorithms as necessary.
Improved computer simulation of the TCAS 3 circular array mounted on an aircraft
NASA Astrophysics Data System (ADS)
Rojas, R. G.; Chen, Y. C.; Burnside, Walter D.
1989-03-01
The Traffic advisory and Collision Avoidance System (TCAS) is being developed by the Federal Aviation Administration (FAA) to assist aircraft pilots in mid-air collision avoidance. This report concentrates on the computer simulation of the enhanced TCAS 2 system mounted on a Boeing 727. First, the moment method is used to obtain an accurate model for the enhanced TCAS 2 antenna array. Then, the OSU Aircraft Code is used to generate theoretical radiation patterns of this model mounted on a simulated Boeing 727 model. Scattering error curves obtained from these patterns can be used to evaluate the performance of this system in determining the angular position of another aircraft with respect to the TCAS-equipped aircraft. Finally, the tracking of another aircraft is simulated when the TCAS-equipped aircraft follows a prescribed escape curve. In short, the computer models developed in this report have generality and completeness, and yield reasonable results.
Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory
NASA Astrophysics Data System (ADS)
Dichter, W.; Doris, K.; Conkling, C.
1982-06-01
A full color, Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.
LTCP 2D Graphical User Interface. Application Description and User's Guide
NASA Technical Reports Server (NTRS)
Ball, Robert; Navaz, Homayun K.
1996-01-01
A graphical user interface (GUI) written for NASA's LTCP (Liquid Thrust Chamber Performance) two-dimensional computational fluid dynamics code is described. The GUI is written in C++ for a desktop personal computer running under a Microsoft Windows operating environment. Through the use of common and familiar dialog boxes, features, and tools, the user can easily and quickly create and modify input files for the LTCP code. In addition, old input files used with the LTCP code can be opened and modified using the GUI. The program and its capabilities are presented, followed by a detailed description of each menu selection and the method of creating an input file for LTCP. A cross reference is included to help experienced users quickly find the variables which commonly need changes. Finally, the system requirements and installation instructions are provided.
A preliminary design for flight testing the FINDS algorithm
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Godiwala, P. M.
1986-01-01
This report presents a preliminary design for flight testing the FINDS (Fault Inferring Nonlinear Detection System) algorithm on a target flight computer. The FINDS software was ported onto the target flight computer by reducing the code size by 65%. Several modifications were made to the computational algorithms resulting in a near real-time execution speed. Finally, a new failure detection strategy was developed resulting in a significant improvement in the detection time performance. In particular, low level MLS, IMU and IAS sensor failures are detected instantaneously with the new detection strategy, while accelerometer and the rate gyro failures are detected within the minimum time allowed by the information generated in the sensor residuals based on the point mass equations of motion. All of the results have been demonstrated by using five minutes of sensor flight data for the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment.
CFD Design and Analysis of a Passively Suspended Tesla Pump Left Ventricular Assist Device
Medvitz, Richard B.; Boger, David A.; Izraelev, Valentin; Rosenberg, Gerson; Paterson, Eric G.
2012-01-01
This paper summarizes the use of computational fluid dynamics (CFD) to design a novel, passively suspended Tesla LVAD. Several design variants were analyzed to study the parameters affecting device performance. CFD was performed at pump speeds of 6500, 6750, and 7000 RPM and at flow rates varying from 3 to 7 liters per minute (LPM). The CFD showed that shortening the plates nearest the pump inlet reduced the separations formed beneath the upper plate leading edges and provided a more uniform flow distribution through the rotor gaps, both of which positively affected the device hydrodynamic performance. The final pump design was found to produce a head rise of 77 mmHg with a hydraulic efficiency of 16% at the design conditions of 6 LPM throughflow and a 6750 RPM rotation rate. To assess the device hemodynamics, the strain rate fields were evaluated. The computed wall shear stresses were likely adequate to inhibit thrombus deposition. Finally, an integrated field hemolysis model was applied to the CFD results to assess the effects of design variation and operating conditions on the device hemolytic performance. PMID: 21595722
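As a back-of-the-envelope check using only the figures quoted in the abstract (77 mmHg head rise, 6 LPM flow, 16% hydraulic efficiency), the short calculation below estimates the hydraulic power delivered by the pump and the shaft power that efficiency implies; it is not taken from the paper itself.

    # Back-of-the-envelope check from the quoted design-point values:
    # hydraulic power = pressure rise x volumetric flow, and the shaft power
    # implied by the reported 16% hydraulic efficiency.
    MMHG_TO_PA = 133.322

    head_rise_pa = 77 * MMHG_TO_PA          # 77 mmHg design head rise
    flow_m3s = 6.0 / 1000.0 / 60.0          # 6 LPM in m^3/s
    efficiency = 0.16

    hydraulic_power = head_rise_pa * flow_m3s          # ~1.0 W
    shaft_power = hydraulic_power / efficiency         # ~6.4 W

    print(f"hydraulic power ~= {hydraulic_power:.2f} W")
    print(f"implied shaft power ~= {shaft_power:.1f} W")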
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iavarone, Salvatore; Smith, Sean T.; Smith, Philip J.
2017-06-03
Oxy-coal combustion is an emerging low-cost “clean coal” technology for emissions reduction and Carbon Capture and Sequestration (CCS). The use of Computational Fluid Dynamics (CFD) tools is crucial for the development of cost-effective oxy-fuel technologies and the minimization of environmental concerns at industrial scale. The coupling of detailed chemistry models and CFD simulations is still challenging, especially for large-scale plants, because of the high computational effort required. The development of scale-bridging models is therefore necessary to find a good compromise between computational effort and physical-chemical modeling precision. This paper presents a procedure for scale-bridging modeling of coal devolatilization, in the presence of experimental error, that puts emphasis on the thermodynamic aspect of devolatilization, namely the final volatile yield of coal, rather than kinetics. The procedure consists of an engineering approach based on dataset consistency and Bayesian methodology including Gaussian-Process Regression (GPR). Experimental data from devolatilization tests carried out in an oxy-coal entrained flow reactor were considered and CFD simulations of the reactor were performed. Jointly evaluating experiments and simulations, a novel yield model was validated against the data via consistency analysis. In parallel, a Gaussian-Process Regression was performed to improve the understanding of the uncertainty associated with devolatilization, based on the experimental measurements. Potential model forms that could predict yield during devolatilization were obtained. The set of model forms obtained via GPR includes the yield model that was proven to be consistent with the data. Finally, the overall procedure has resulted in a novel yield model for coal devolatilization and in a valuable evaluation of uncertainty in the data, in the model form, and in the model parameters.
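To make the Gaussian-Process Regression step more concrete, the sketch below fits a minimal GP with an RBF kernel to made-up (peak temperature, volatile yield) points; it only illustrates the kind of surrogate model form discussed above and is not the authors' model, kernel choice, or data.

    # Minimal Gaussian-process regression sketch with an RBF kernel, fitted to
    # made-up (temperature, volatile yield) points purely for illustration.
    import numpy as np

    def rbf(xa, xb, length=150.0, variance=0.02):
        d = xa[:, None] - xb[None, :]
        return variance * np.exp(-0.5 * (d / length) ** 2)

    # hypothetical devolatilization data: peak temperature (K) vs. volatile yield
    x_train = np.array([900.0, 1100.0, 1300.0, 1500.0, 1700.0])
    y_train = np.array([0.35, 0.44, 0.52, 0.56, 0.58])
    noise = 0.01 ** 2                                   # assumed measurement noise

    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    x_test = np.linspace(850.0, 1800.0, 5)
    K_s = rbf(x_train, x_test)

    alpha = np.linalg.solve(K, y_train)
    mean = K_s.T @ alpha                                # posterior mean
    var = rbf(x_test, x_test).diagonal() - np.einsum(
        "ij,ij->j", K_s, np.linalg.solve(K, K_s))       # posterior variance

    for t, m, s in zip(x_test, mean, np.sqrt(np.maximum(var, 0.0))):
        print(f"T={t:6.1f} K  yield={m:.3f} +/- {2*s:.3f}")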