2013-07-01
www.le.ac.uk/ge/collagen/), in addition to mutations in COL1A1 and COL5A2. These mutations result in amino acid substitutions, RNA splicing...fibrillogenesis of type I collagen resulting in thinner collagen fibrils [29]. In the absence of PIIINP, Romanic and colleagues demonstrated that COL1A1 is...larger, shorter, and apparently stiffer; whereas in the presence of PIIINP/COL1A1 copolymer, the type I collagen was longer, thinner, and more
2009-01-01
a potential military conflict against China in the future. 20 1 Russell Ong, China's Security Interests in the 21st Century. (New York, NY: Routledge...Conflict from 1500 to 2000 (New York: Random House, 1987), xxii. 11 Ong, China's Security Interests in the 21st Century, 124. 12 Qiao Liang, and Wang Xiangsui...Strategies and their Implications for the United States. xiv. 16 Ibid. xv. 17 Ibid. 18 Ong, China's Security Interests in the 21st Century, 124. 19 Ibid. 20
NASA Astrophysics Data System (ADS)
Marchi, S.; A'Hearn, M. F.; Barbieri, C.; Barucci, M. A.; Besse, S.; Cremonese, G.; Ip, W. H.; Keller, H. U.; Koschny, D.; Kuhrt, E.; Lamy, P. L.; Marzari, F.; Massironi, M.; Pajola, M.; Rickman, H.; Rodrigo, R.; Sierks, H.; Snodgrass, C.; Thomas, N.; Vincent, J. B.
2014-12-01
In this paper we present the major geomorphological features of comet Churyumov-Gerasimenko (C-G), with emphasis on those that may have formed through collisional processes. The C-G nucleus has been imaged with the Rosetta/OSIRIS camera system at varying spatial resolution. At the time of writing, the maximum spatial resolution achieved is ~20 meters per pixel, and it will improve to reach an unprecedented centimeter scale in November 2014. This resolution should allow us to identify and characterize pits, lineaments and blocks that could be the result of collisional evolution. Indeed, C-G has spent some 1000 years on orbits crossing the main asteroid belt, and a much longer time in the outer solar system. Collisions may therefore have shaped the morphology of the nucleus in various ways. Previously imaged Jupiter Family Comets (e.g., Tempel 1) show significant numbers of pits and lineaments, some of which could be due to collisions. Additional proposed formation mechanisms are related to cometary activity processes, such as volatile outgassing. In addition to small-scale features, the overall shape of C-G could also provide insights into the role of collisional processes. A striking feature is that C-G's shape is that of a contact binary. Similar shapes have been observed on rocky asteroids (e.g., Itokawa) and are generally interpreted as an indication of their rubble-pile nature. A possibility is that C-G underwent similar processes, and therefore it may be constituted of reaccumulated fragments ejected from a larger precursor. An alternative view is that the current shape is the result of inhomogeneous outgassing activity, which may have dug a ~1-km-deep trench responsible for the apparent contact binary shape. The role of the various proposed formation mechanisms (collisional vs outgassing) for both small-scale and global features will be investigated and their implications for the evolution of C-G will be discussed.
Microsoft, libraries and open source
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2010-04-26
We are finally starting to see the early signs of transformation in scholarly publishing. The innovations we've been expecting for years are slowly being adopted, but we can also expect the pace of change to accelerate in the coming 3 to 5 years. At the same time, many of the rituals and artifacts of the scholarly communication lifecycle are still rooted in a centuries-old model. What are the primary goals of scholarly communication, and what will be the future role of librarians in that cycle? What are the obstacles in information flow (many of our own design) that can be removed? Is the library profession moving fast enough to stay ahead of the curve... or are we ever going to be struggling to keep up? With the advent of the data deluge, all-XML workflows, the semantic Web, cloud services and increasingly intelligent mobile devices - what are the implications for libraries, archivists, publishers, scholarly societies as well as individual researchers and scholars? The opportunities are many - but capitalizing on this ever-evolving landscape will require significant changes to our field, changes that we are not currently well-positioned to enact. This talk will map the current scholarly communication landscape - highlighting recent exciting developments - and will focus on the repercussions and some specific recommendations for the broader field of information management. About the speaker: Alex Wade is the Director for Scholarly Communication within Microsoft's External Research division, where he oversees several projects related to researcher productivity tools, semantic information capture, and the interoperability of information systems. Alex holds a Bachelor's degree in Philosophy from U.C. Berkeley, and a Masters of Librarianship degree from the University of Washington. During his career at Microsoft, Alex has managed the corporate search and taxonomy management services; has shipped a SharePoint-based document and workflow management solution for Sarbanes-Oxley compliance; and served as Senior Program Manager for Windows Search in Windows Vista and Windows 7. Prior to joining Microsoft, Alex held Systems Librarian, Engineering Librarian, Philosophy Librarian, and technical library positions at the University of Washington, the University of Michigan, and U.C. Berkeley. Web: http://research.microsoft.com/en-us/people/awade/
Han, Eui-Ryoung; Chung, Eun-Kyung
2016-02-01
This study examines the relationship between the clinical performance of medical students and their performance as doctors during their internships. This retrospective study involved 63 applicants to a residency programme conducted at the Chonnam National University Hospital, South Korea, in November 2012. We compared the performance of the applicants during their internship with the clinical performance of the applicants during their fourth year of medical school. The performance of the applicants as interns was periodically evaluated by the faculty of each department, while the clinical performance of the applicants as fourth year medical students was assessed using the Clinical Performance Examination (CPX) and the Objective Structured Clinical Examination (OSCE). The performance of the applicants as interns was positively correlated with their clinical performance as fourth year medical students, as measured by CPX and OSCE. The performance of the applicants as interns was moderately correlated with the patient-physician interactions items addressing communication and interpersonal skills in the CPX. The clinical performance of medical students during their fourth year in medical school was related to their performance as medical interns. Medical students should be trained to develop good clinical skills, through actual encounters with patients or simulated encounters using manikins, so that they are able to become competent doctors. Copyright © Singapore Medical Association.
12 CFR 228.29 - Effect of CRA performance on applications.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 3 2010-01-01 2010-01-01 false Effect of CRA performance on applications. 228... account the record of performance under the CRA of: (1) Each applicant bank for the: (i) Establishment of... approval of application. A bank's record of performance may be the basis for denying or conditioning...
12 CFR 25.29 - Effect of CRA performance on applications.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 1 2010-01-01 2010-01-01 false Effect of CRA performance on applications. 25... takes into account the record of performance under the CRA of each applicant bank in considering an... application. A bank's record of performance may be the basis for denying or conditioning approval of an...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armstrong, Robert C.; Ray, Jaideep; Malony, A.
2003-11-01
We present a case study of performance measurement and modeling of a CCA (Common Component Architecture) component-based application in a high performance computing environment. We explore issues peculiar to component-based HPC applications and propose a performance measurement infrastructure for HPC based loosely on recent work done for Grid environments. A prototypical implementation of the infrastructure is used to collect data for three components in a scientific application and to construct performance models for two of them. Both computational and message-passing performance are addressed.
12 CFR 345.29 - Effect of CRA performance on applications.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Effect of CRA performance on applications. 345... OF GENERAL POLICY COMMUNITY REINVESTMENT Standards for Assessing Performance § 345.29 Effect of CRA performance on applications. (a) CRA performance. Among other factors, the FDIC takes into account the record...
User-level framework for performance monitoring of HPC applications
NASA Astrophysics Data System (ADS)
Hristova, R.; Goranov, G.
2013-10-01
HP-SEE is an infrastructure that links the existing HPC facilities in South East Europe into a common infrastructure. Analysis of the performance monitoring of High-Performance Computing (HPC) applications in the infrastructure can be useful to the end user as a diagnostic of the overall performance of their applications. The existing monitoring tools for HP-SEE provide the end user only with aggregated information across all applications; usually, the user does not have permission to select only the information relevant to them and their applications. In this article we present a framework for performance monitoring of HPC applications in the HP-SEE infrastructure. The framework provides standardized performance metrics, which every user can use to monitor their applications. Furthermore, a programming interface has been developed as part of the framework. The interface allows the user to publish metrics data from their application and to read and analyze the gathered information. Publishing and reading through the framework are possible only with a grid certificate valid for the infrastructure, so each user is authorized to access only the data for their own applications.
Use of Continuous Integration Tools for Application Performance Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vergara Larrea, Veronica G; Joubert, Wayne; Fuson, Christopher B
High performance computing systems are becoming increasingly complex, both in node architecture and in the multiple layers of software stack required to compile and run applications. As a consequence, the likelihood is increasing for application performance regressions to occur as a result of routine upgrades of system software components which interact in complex ways. The purpose of this study is to evaluate the effectiveness of continuous integration tools for application performance monitoring on HPC systems. In addition, this paper also describes a prototype system for application performance monitoring based on Jenkins, a Java-based continuous integration tool. The monitoring system described leverages several features in Jenkins to track application performance results over time. Preliminary results and lessons learned from monitoring applications on Cray systems at the Oak Ridge Leadership Computing Facility are presented.
NASA Astrophysics Data System (ADS)
Fang, Juan; Hao, Xiaoting; Fan, Qingwen; Chang, Zeqing; Song, Shuying
2017-05-01
In heterogeneous multi-core architectures, the CPU and GPU are integrated on the same chip, which poses a new challenge for last-level cache (LLC) management. In this architecture, CPU and GPU applications execute concurrently and both access the LLC, but they have different memory access characteristics and therefore differ in their sensitivity to LLC capacity. For many CPU applications, a reduced share of the LLC can lead to significant performance degradation. GPU applications, by contrast, can tolerate increased memory access latency when there is sufficient thread-level parallelism. Exploiting this latency tolerance, this paper presents a method that lets GPU applications access memory directly, bypassing the LLC and leaving most of its capacity to CPU applications, thereby improving CPU application performance without hurting GPU application performance. When the CPU application is cache-sensitive and the GPU application is cache-insensitive, the overall performance of the system improves significantly.
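A minimal sketch of the bypass decision logic described in this abstract, assuming hypothetical sensitivity thresholds and warp counts (none of the numbers below come from the paper):

```python
# Illustrative sketch (not the paper's simulator): decide whether GPU
# requests should bypass the shared LLC, based on each side's cache
# sensitivity. All thresholds and figures here are hypothetical.

def cpu_cache_sensitive(ipc_full_llc, ipc_half_llc, threshold=0.10):
    """CPU app is cache-sensitive if losing half the LLC costs >10% IPC."""
    return (ipc_full_llc - ipc_half_llc) / ipc_full_llc > threshold

def gpu_latency_tolerant(active_warps, warps_to_hide_dram=24):
    """GPU app tolerates DRAM latency if enough warps are in flight."""
    return active_warps >= warps_to_hide_dram

def route_gpu_to_memory(cpu_ipcs, gpu_warps):
    # Bypass the LLC for GPU traffic only when the CPU side benefits from
    # the freed capacity and the GPU side can absorb the extra latency.
    return cpu_cache_sensitive(*cpu_ipcs) and gpu_latency_tolerant(gpu_warps)

# Example: CPU loses 18% IPC with half the LLC; GPU runs 32 active warps.
print(route_gpu_to_memory((2.2, 1.8), 32))  # True -> bypass the LLC
```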
HPC Profiling with the Sun Studio™ Performance Tools
NASA Astrophysics Data System (ADS)
Itzkowitz, Marty; Maruyama, Yukon
In this paper, we describe how to use the Sun Studio Performance Tools to understand the nature and causes of application performance problems. We first explore CPU and memory performance problems for single-threaded applications, giving some simple examples. Then, we discuss multi-threaded performance issues, such as locking and false-sharing of cache lines, in each case showing how the tools can help. We go on to describe OpenMP applications and the support for them in the performance tools. Then we discuss MPI applications, and the techniques used to profile them. Finally, we present our conclusions.
Performance Evaluation Model for Application Layer Firewalls.
Xuan, Shichang; Yang, Wu; Dong, Hui; Zhang, Jiangchuan
2016-01-01
Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.
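The Erlang queuing analysis mentioned in this abstract can be illustrated with the standard M/M/c (Erlang C) formulas; the arrival and service rates below are made-up placeholders, not figures from the paper:

```python
# A minimal sketch of a per-layer Erlang (M/M/c) queuing computation of
# the kind the model applies; the example rates are invented.
from math import factorial

def erlang_c(c, lam, mu):
    """Probability an arriving request must wait (M/M/c, FIFO)."""
    a = lam / mu                  # offered load in Erlangs
    rho = a / c                   # per-server utilization, must be < 1
    assert rho < 1, "unstable: add servers or reduce load"
    top = a**c / (factorial(c) * (1 - rho))
    bottom = sum(a**k / factorial(k) for k in range(c)) + top
    return top / bottom

def mean_delay(c, lam, mu):
    """Mean waiting time in queue plus service time (seconds)."""
    return erlang_c(c, lam, mu) / (c * mu - lam) + 1 / mu

# Example: an application layer with 4 inspection workers, 300 req/s
# arriving, each worker able to inspect 100 req/s.
print(mean_delay(4, 300.0, 100.0))
```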
Understanding the Performance and Potential of Cloud Computing for Scientific Applications
Sadooghi, Iman; Martin, Jesus Hernandez; Li, Tonglin; ...
2015-02-19
Commercial clouds bring a great opportunity to the scientific computing area. Scientific applications usually require significant resources, yet not all scientists have access to sufficient high-end computing systems, many of which can be found in the Top500 list. Cloud computing has gained the attention of scientists as a competitive resource to run HPC applications at a potentially lower cost. But as a different infrastructure, it is unclear whether clouds are capable of running scientific applications with reasonable performance per money spent. This work studies the performance of public clouds and places this performance in context to price. We evaluate the raw performance of different services of the AWS cloud in terms of the basic resources: compute, memory, network and I/O. We also evaluate the performance of scientific applications running in the cloud. This paper aims to assess the ability of the cloud to perform well, as well as to evaluate the cost of the cloud for running scientific applications. We developed a full set of metrics and conducted a comprehensive performance evaluation of the Amazon cloud. We evaluated EC2, S3, EBS and DynamoDB among the many Amazon AWS services. We evaluated the memory sub-system performance with CacheBench, the network performance with iperf, processor and network performance with the HPL benchmark application, and shared storage with NFS and PVFS in addition to S3. We also evaluated a real scientific computing application through the Swift parallel scripting system at scale. Armed with both detailed benchmarks to gauge expected performance and a detailed monetary cost analysis, we expect this paper will be a recipe cookbook for scientists to help them decide where to deploy and run their scientific applications: public clouds, private clouds, or hybrid clouds.
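As a rough illustration of placing "performance in context to price", one can normalize a measured benchmark score by hourly instance cost; the instance names, GFLOPS figures and prices below are hypothetical stand-ins, not measured AWS values:

```python
# Hedged sketch of a performance-per-dollar metric; all numbers invented.

def perf_per_dollar(gflops, dollars_per_hour):
    return gflops / dollars_per_hour   # GFLOPS-hours per dollar

instances = {
    # name: (measured HPL GFLOPS, on-demand $/hour) -- placeholder values
    "small": (90.0, 0.10),
    "large": (650.0, 0.90),
    "hpc":   (1500.0, 2.40),
}
for name, (gflops, price) in instances.items():
    print(f"{name:>6}: {perf_per_dollar(gflops, price):8.1f} GFLOPS per $/h")
```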
Multi-Purpose, Application-Centric, Scalable I/O Proxy Application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, M. C.
2015-06-15
MACSio is a Multi-purpose, Application-Centric, Scalable I/O proxy application. It is designed to support a number of goals with respect to parallel I/O performance testing and benchmarking, including the ability to test and compare various I/O libraries and I/O paradigms, to predict scalable performance of real applications, and to help identify where improvements in I/O performance can be made within the HPC I/O software stack.
Han, Eui-Ryoung; Chung, Eun-Kyung
2016-01-01
INTRODUCTION This study examines the relationship between the clinical performance of medical students and their performance as doctors during their internships. METHODS This retrospective study involved 63 applicants to a residency programme conducted at Chonnam National University Hospital, South Korea, in November 2012. We compared the performance of the applicants during their internship with their clinical performance during their fourth year of medical school. The performance of the applicants as interns was periodically evaluated by the faculty of each department, while their clinical performance as fourth-year medical students was assessed using the Clinical Performance Examination (CPX) and the Objective Structured Clinical Examination (OSCE). RESULTS The performance of the applicants as interns was positively correlated with their clinical performance as fourth-year medical students, as measured by the CPX and OSCE. The performance of the applicants as interns was moderately correlated with the patient-physician interaction items addressing communication and interpersonal skills in the CPX. CONCLUSION The clinical performance of medical students during their fourth year in medical school was related to their performance as medical interns. Medical students should be trained to develop good clinical skills through actual encounters with patients or simulated encounters using manikins, to enable them to become more competent doctors. PMID:26768172
Barrett, R. F.; Crozier, P. S.; Doerfler, D. W.; ...
2014-09-28
Computational science and engineering application programs are typically large, complex, and dynamic, and are often constrained by distribution limitations. As a means of making tractable rapid explorations of scientific and engineering application programs in the context of new, emerging, and future computing architectures, a suite of miniapps has been created to serve as proxies for full scale applications. Each miniapp is designed to represent a key performance characteristic that does or is expected to significantly impact the runtime performance of an application program. In this paper we introduce a methodology for assessing the ability of these miniapps to effectively represent these performance issues. We applied this methodology to four miniapps, examining the linkage between them and an application they are intended to represent. Herein we evaluate the fidelity of that linkage. This work represents the initial steps required to begin to answer the question, ''Under what conditions does a miniapp represent a key performance characteristic in a full app?''
NASA Astrophysics Data System (ADS)
Weiss, Brian A.; Fronczek, Lisa; Morse, Emile; Kootbally, Zeid; Schlenoff, Craig
2013-05-01
Transformative Apps (TransApps) is a Defense Advanced Research Projects Agency (DARPA) funded program whose goal is to develop a range of militarily-relevant software applications ("apps") to enhance the operational effectiveness of military personnel on (and off) the battlefield. TransApps is also developing a military apps marketplace to facilitate rapid development and dissemination of applications that address user needs by connecting engaged communities of end users with development groups. The National Institute of Standards and Technology's (NIST) role in the TransApps program is to design and implement evaluation procedures to assess the performance of: 1) the various software applications, 2) software-hardware interactions, and 3) the supporting online application marketplace. Specifically, NIST is responsible for evaluating 50+ tactically-relevant applications operating on numerous Android™-powered platforms. NIST efforts include functional regression testing and quantitative performance testing. This paper discusses the evaluation methodologies employed to assess the performance of three key program elements: 1) handheld-based applications and their integration with various hardware platforms, 2) client-based applications, and 3) network technologies operating on both the handheld and client systems along with their integration into the application marketplace. Handheld-based applications are assessed using a combination of utility- and usability-based checklists and quantitative performance tests. Client-based applications are assessed to replicate current overseas disconnected (i.e. no network connectivity between handhelds) operations and to assess connected operations envisioned for later use. Finally, networked applications are assessed on handhelds to establish performance baselines for when connectivity becomes common usage.
38 CFR 62.22 - Scoring criteria for supportive services grant applicants.
Code of Federal Regulations, 2011 CFR
2011-07-01
... for Veteran Families Program. (3) Organizational qualifications and past performance. (i) Applicant... background, qualifications, experience, and past performance, of the applicant, and any subcontractors...: (1) Background and organizational history. (i) Applicant's, and any identified subcontractors...
2012-06-01
Maritime Platform Sleep and Performance Study: Evaluating the SAFTE Model for Maritime Workplace Application. Master's thesis by Stephanie A. T. Brown, June 2012.
R&D100: Lightweight Distributed Metric Service
Gentile, Ann; Brandt, Jim; Tucker, Tom; Showerman, Mike
2018-06-12
On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.
Transparent wood for functional and structural applications
NASA Astrophysics Data System (ADS)
Li, Yuanyuan; Fu, Qiliang; Yang, Xuan; Berglund, Lars
2017-12-01
Optically transparent wood, which combines mechanical performance with optical functionalities, is an emerging candidate for applications in smart buildings and in structural optics and photonics. The present review summarizes transparent wood preparation methods, optical and mechanical performance, and functionalization routes, and discusses potential applications. The various challenges are discussed with a view to improved performance, scaled-up production and the realization of advanced applications. This article is part of a discussion meeting issue `New horizons for cellulose nanotechnology'.
Aesthetic coatings for concrete bridge components
NASA Astrophysics Data System (ADS)
Kriha, Brent R.
This thesis evaluated the durability and aesthetic performance of coating systems for use in concrete bridge applications. The principal objectives of this thesis were to: 1) identify aesthetic coating systems appropriate for concrete bridge applications; 2) evaluate the performance of the selected systems through a laboratory testing regimen; and 3) develop guidelines for coating selection, surface preparation, and application. A series of site visits to various bridges throughout the State of Wisconsin provided insight into the performance of common coating systems and allowed problematic structural details to be identified. To aid in the selection of appropriate coating systems, questionnaires were distributed to coating manufacturers, bridge contractors, and various DOT offices to identify high-performing coating systems and best practices for surface preparation and application. These efforts supplemented a literature review investigating recent publications related to formulation, selection, surface preparation, application, and performance evaluation of coating materials.
Towards New Metrics for High-Performance Computing Resilience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hukerikar, Saurabh; Ashraf, Rizwan A; Engelmann, Christian
Ensuring the reliability of applications is becoming an increasingly important challenge as high-performance computing (HPC) systems experience an ever-growing number of faults, errors and failures. While the HPC community has made substantial progress in developing various resilience solutions, it continues to rely on platform-based metrics to quantify application resiliency improvements. The resilience of an HPC application is concerned with the reliability of the application outcome as well as the fault handling efficiency. To understand the scope of impact, effective coverage and performance efficiency of existing and emerging resilience solutions, there is a need for new metrics. In this paper, we develop new ways to quantify resilience that consider both the reliability and the performance characteristics of the solutions from the perspective of HPC applications. As HPC systems continue to evolve in terms of scale and complexity, it is expected that applications will experience various types of faults, errors and failures, which will require applications to apply multiple resilience solutions across the system stack. The proposed metrics are intended to be useful for understanding the combined impact of these solutions on an application's ability to produce correct results and to evaluate their overall impact on an application's performance in the presence of various modes of faults.
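One plausible way to fold outcome reliability and fault-handling overhead into a single figure, sketched here with invented numbers; the paper's actual metric definitions may differ:

```python
# Illustrative sketch (not the paper's exact metrics): combine application
# reliability with the performance cost of fault handling.

def resilience_efficiency(p_correct, t_fault_free, t_with_faults):
    """
    p_correct     : fraction of runs producing a correct result
    t_fault_free  : runtime with no faults injected (s)
    t_with_faults : runtime including faults and recovery overhead (s)
    Returns a value in (0, 1]; 1.0 means perfect outcomes at no overhead.
    """
    perf_efficiency = t_fault_free / t_with_faults
    return p_correct * perf_efficiency

# Example: checkpoint/restart yields 98% correct runs at 15% time overhead.
print(resilience_efficiency(0.98, 3600.0, 3600.0 * 1.15))  # ~0.85
```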
Teodoro, George; Kurc, Tahsin; Kong, Jun; Cooper, Lee; Saltz, Joel
2014-01-01
We study and characterize the performance of operations in an important class of applications on GPUs and Many Integrated Core (MIC) architectures. Our work is motivated by applications that analyze low-dimensional spatial datasets captured by high resolution sensors, such as image datasets obtained from whole slide tissue specimens using microscopy scanners. Common operations in these applications involve the detection and extraction of objects (object segmentation), the computation of features of each extracted object (feature computation), and characterization of objects based on these features (object classification). In this work, we have identified the data access and computation patterns of operations in the object segmentation and feature computation categories. We systematically implement and evaluate the performance of these operations on modern CPUs, GPUs, and MIC systems for a microscopy image analysis application. Our results show that the performance on a MIC of operations that perform regular data access is comparable to, and sometimes better than, that on a GPU. On the other hand, GPUs are significantly more efficient than MICs for operations that access data irregularly. This is a result of the low performance of MICs for random data access. We have also examined the coordinated use of MICs and CPUs. Our experiments show that using a performance-aware task scheduling strategy for application operations improves performance by about 1.29× over a first-come-first-served strategy. This allows applications to obtain high performance efficiency on CPU-MIC systems - the example application attained an efficiency of 84% on 192 nodes (3072 CPU cores and 192 MICs). PMID:25419088
Application-specific coarse-grained reconfigurable array: architecture and design methodology
NASA Astrophysics Data System (ADS)
Zhou, Li; Liu, Dongpei; Zhang, Jianfeng; Liu, Hengzhu
2015-06-01
Coarse-grained reconfigurable arrays (CGRAs) have shown potential for application in embedded systems in recent years. Numerous reconfigurable processing elements (PEs) in CGRAs provide flexibility while maintaining high performance by exploiting different levels of parallelism. However, a performance gap remains between CGRAs and application-specific integrated circuits (ASICs). Some application domains, such as software-defined radios (SDRs), require flexibility as performance demands increase, so more effective CGRA architectures are needed. Customisation of a CGRA according to its application can improve performance and efficiency. This study proposes an application-specific CGRA architecture template composed of generic PEs (GPEs) and special PEs (SPEs). The hardware of the SPE can be customised to accelerate specific computational patterns. An automatic design methodology that includes pattern identification and application-specific function unit generation is also presented. A mapping algorithm based on ant colony optimisation is provided. Experimental results on the SDR target domain show that, compared with other ordinary and application-specific reconfigurable architectures, the CGRA generated by the proposed method performs more efficiently for the given applications.
Modeling the Office of Science Ten Year Facilities Plan: The PERI Architecture Tiger Team
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Supinski, B R; Alam, S R; Bailey, D H
2009-05-27
The Performance Engineering Institute (PERI) originally proposed a tiger team activity as a mechanism to target significant effort to the optimization of key Office of Science applications, a model that was successfully realized with the assistance of two JOULE metric teams. However, the Office of Science requested a new focus beginning in 2008: assistance in forming its ten year facilities plan. To meet this request, PERI formed the Architecture Tiger Team, which is modeling the performance of key science applications on future architectures, with S3D, FLASH and GTC chosen as the first application targets. In this activity, we have measured the performance of these applications on current systems in order to understand their baseline performance and to ensure that our modeling activity focuses on the right versions and inputs of the applications. We have applied a variety of modeling techniques to anticipate the performance of these applications on a range of anticipated systems. While our initial findings predict that Office of Science applications will continue to perform well on future machines from major hardware vendors, we have also encountered several areas in which we must extend our modeling techniques in order to fulfill our mission accurately and completely. In addition, we anticipate that models of a wider range of applications will reveal critical differences between expected future systems, thus providing guidance for future Office of Science procurement decisions, and will enable DOE applications to exploit machines in future facilities fully.
40 CFR 63.7 - Performance testing requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Performance testing requirements. (a) Applicability and performance test dates. (1) The applicability of this... or operator of the affected source must perform such tests within 180 days of the compliance date for... standard initially, the owner or operator shall conduct a second performance test within 3 years and 180...
NASA Technical Reports Server (NTRS)
Lawson, Gary; Sosonkina, Masha; Baurle, Robert; Hammond, Dana
2017-01-01
In many fields, real-world applications for High Performance Computing have already been developed. For these applications to stay up-to-date, new parallel strategies must be explored to yield the best performance; however, restructuring or modifying a real-world application may be daunting depending on the size of the code. In this case, a mini-app may be employed to quickly explore such options without modifying the entire code. In this work, several mini-apps have been created to enhance a real-world application performance, namely the VULCAN code for complex flow analysis developed at the NASA Langley Research Center. These mini-apps explore hybrid parallel programming paradigms with Message Passing Interface (MPI) for distributed memory access and either Shared MPI (SMPI) or OpenMP for shared memory accesses. Performance testing shows that MPI+SMPI yields the best execution performance, while requiring the largest number of code changes. A maximum speedup of 23 was measured for MPI+SMPI, but only 11 was measured for MPI+OpenMP.
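The "Shared MPI" approach referred to above maps naturally onto MPI-3 shared-memory windows. Below is a minimal mpi4py sketch of that pattern; it illustrates the paradigm only and is not code from the VULCAN mini-apps:

```python
# A minimal sketch of the Shared MPI idea: ranks on the same node share
# one MPI-3 window instead of exchanging messages. Requires mpi4py and an
# MPI launcher (e.g. mpirun -n 4 python this_script.py).
from mpi4py import MPI
import numpy as np

node = MPI.COMM_WORLD.Split_type(MPI.COMM_TYPE_SHARED)  # ranks on this node
n = 1_000_000
itemsize = MPI.DOUBLE.Get_size()

# Rank 0 on each node allocates the buffer; others attach with zero bytes.
win = MPI.Win.Allocate_shared(n * itemsize if node.rank == 0 else 0,
                              itemsize, comm=node)
buf, _ = win.Shared_query(0)               # everyone maps rank 0's memory
field = np.ndarray(buffer=buf, dtype='d', shape=(n,))

# Each rank fills its slice directly -- no on-node send/recv needed.
lo = node.rank * n // node.size
hi = (node.rank + 1) * n // node.size
field[lo:hi] = node.rank
win.Fence()                                 # synchronize before reading
if node.rank == 0:
    print("slice owners:", np.unique(field))
```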
A Method for Evaluation of Microcomputers for Tactical Applications.
1980-06-01
application. The computational requirements of a tactical application are specified in terms of performance parameters. The presently marketed microcomputer and multi...also to provide a method to evaluate microcomputer systems for tactical applications, i.e., Command Control Communications (C3), weapon systems, etc
MOGO: Model-Oriented Global Optimization of Petascale Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malony, Allen D.; Shende, Sameer S.
The MOGO project was initiated in 2008 under the DOE Program Announcement for Software Development Tools for Improved Ease-of-Use on Petascale Systems (LAB 08-19). The MOGO team consisted of Oak Ridge National Lab, Argonne National Lab, and the University of Oregon. The overall goal of MOGO was to attack petascale performance analysis by developing a general framework in which empirical performance data could be efficiently and accurately compared with performance expectations at various levels of abstraction. This information could then be used to automatically identify and remediate performance problems. MOGO was based on performance models derived from application knowledge, performance experiments, and symbolic analysis. MOGO was able to make a reasonable impact on existing DOE applications and systems. New tools and techniques were developed which, in turn, were used on important DOE applications on DOE LCF systems to show significant performance improvements.
Organ, Brock; Liu, Hao; Bromwich, Matthew
2015-01-01
The Epley particle repositioning maneuver (PRM) is an effective treatment for benign paroxysmal positional vertigo (BPPV), the most common cause of peripheral vertigo in primary care settings. The goal of this study was to determine whether the use of an iPhone application (DizzyFIX; Clearwater Clinical Ltd, Ottawa, Ontario, Canada) by medical students had a significant impact on the performance of the PRM. We recruited senior medical students who had previously been trained in the management of BPPV and asked them to perform the PRM on a healthy volunteer. One half of the students used a real iPhone application, whereas the others used a sham application. The PRM performance scores of the 2 groups were compared. iPhone application users scored significantly higher on their PRM performance compared with controls (P < .0001) and performed the PRM significantly more slowly (P < .0001). Senior medical students performed a more correct PRM when assisted by the iPhone application. This application represents a significant improvement from standard medical school training using written instructions. Family physicians could also use this iPhone application for the quick and effective treatment of BPPV. © Copyright 2015 by the American Board of Family Medicine.
Code of Federal Regulations, 2013 CFR
2013-07-01
... repackaging of agricultural pesticides performed by refilling establishments subcategory. 455.60 Section 455... STANDARDS (CONTINUED) PESTICIDE CHEMICALS Repackaging of Agricultural Pesticides Performed at Refilling Establishments § 455.60 Applicability; description of repackaging of agricultural pesticides performed by...
Code of Federal Regulations, 2012 CFR
2012-07-01
... repackaging of agricultural pesticides performed by refilling establishments subcategory. 455.60 Section 455... STANDARDS (CONTINUED) PESTICIDE CHEMICALS Repackaging of Agricultural Pesticides Performed at Refilling Establishments § 455.60 Applicability; description of repackaging of agricultural pesticides performed by...
Code of Federal Regulations, 2011 CFR
2011-07-01
... repackaging of agricultural pesticides performed by refilling establishments subcategory. 455.60 Section 455... STANDARDS PESTICIDE CHEMICALS Repackaging of Agricultural Pesticides Performed at Refilling Establishments § 455.60 Applicability; description of repackaging of agricultural pesticides performed by refilling...
Code of Federal Regulations, 2014 CFR
2014-07-01
... repackaging of agricultural pesticides performed by refilling establishments subcategory. 455.60 Section 455... STANDARDS (CONTINUED) PESTICIDE CHEMICALS Repackaging of Agricultural Pesticides Performed at Refilling Establishments § 455.60 Applicability; description of repackaging of agricultural pesticides performed by...
Code of Federal Regulations, 2010 CFR
2010-07-01
... repackaging of agricultural pesticides performed by refilling establishments subcategory. 455.60 Section 455... STANDARDS PESTICIDE CHEMICALS Repackaging of Agricultural Pesticides Performed at Refilling Establishments § 455.60 Applicability; description of repackaging of agricultural pesticides performed by refilling...
Integrated multi sensors and camera video sequence application for performance monitoring in archery
NASA Astrophysics Data System (ADS)
Taha, Zahari; Arif Mat-Jizat, Jessnor; Amirul Abdullah, Muhammad; Muazu Musa, Rabiu; Razali Abdullah, Mohamad; Fauzi Ibrahim, Mohamad; Hanafiah Shaharudin, Mohd Ali
2018-03-01
This paper explains the development of a comprehensive archery performance monitoring software system consisting of three camera views and five body sensors. The five body sensors evaluate biomechanical variables of flexor and extensor muscle activity, heart rate, postural sway and bow movement during archery performance. The three camera views and the five body sensors are integrated into a single computer application which enables the user to view all the data in a single user interface. The body sensors' data are displayed in numerical and graphical form in real time. The information transmitted by the body sensors is processed by an embedded algorithm that automatically summarizes the athlete's biomechanical performance and displays it in the application interface. This performance is later compared to the psycho-fitness performance pre-computed from data pre-filled into the application. All the data - camera views, body sensors, and performance computations - are recorded for further analysis by a sports scientist. The developed application serves as a powerful tool for assisting the coach and athletes to observe and identify any wrong technique employed during training, which gives room for correction and re-evaluation to improve overall performance in the sport of archery.
A Comparative Study of Multi-material Data Structures for Computational Physics Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garimella, Rao Veerabhadra; Robey, Robert W.
The data structures used to represent the multi-material state of a computational physics application can have a drastic impact on the performance of the application. We look at efficient data structures for sparse applications where there may be many materials, but only one or a few in most computational cells. We develop simple performance models for use in selecting possible data structures and programming patterns. We verify the analytic models of performance through a small test program covering the representative cases.
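As an illustration of the kind of sparse layout under study, here is a hedged sketch of a compact cell-centric structure in which pure cells store a material id inline and mixed cells index side arrays; the class and field names are invented for this example, not taken from the paper:

```python
# Illustrative compact cell-centric multi-material layout: pure cells are
# stored inline, mixed cells reference (material, volume fraction) lists.
import numpy as np

class CompactMultiMat:
    def __init__(self, ncells):
        # tag >= 0: pure cell, tag is the material id
        # tag <  0: mixed cell, -(tag + 1) indexes mix_start
        self.cell_tag = np.zeros(ncells, dtype=np.int64)
        self.mix_start = []   # (offset, count) per mixed cell
        self.mix_mat = []     # material ids, all mixed cells concatenated
        self.mix_vf = []      # volume fractions, same layout

    def set_pure(self, cell, mat):
        self.cell_tag[cell] = mat

    def set_mixed(self, cell, mats, vfs):
        self.cell_tag[cell] = -(len(self.mix_start) + 1)
        self.mix_start.append((len(self.mix_mat), len(mats)))
        self.mix_mat.extend(mats)
        self.mix_vf.extend(vfs)

    def materials_in(self, cell):
        tag = int(self.cell_tag[cell])
        if tag >= 0:
            return [(tag, 1.0)]
        off, cnt = self.mix_start[-(tag + 1)]
        return list(zip(self.mix_mat[off:off + cnt],
                        self.mix_vf[off:off + cnt]))

mesh = CompactMultiMat(ncells=5)
for c in range(4):
    mesh.set_pure(c, mat=7)                      # mostly pure cells
mesh.set_mixed(4, mats=[7, 3], vfs=[0.6, 0.4])   # one interface cell
print(mesh.materials_in(2), mesh.materials_in(4))
```

The design point this illustrates: storage stays O(ncells + mixed entries) rather than O(ncells × nmaterials), which is what makes such layouts attractive when most cells hold a single material.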
Sikder, A K; Sikder, Nirmala
2004-08-09
Energetic materials are used extensively for both civil and military applications. There are continuous research programmes worldwide to develop new materials with higher performance and enhanced insensitivity to thermal or shock insults than the existing ones, in order to meet the requirements of future military and space applications. This review concentrates on recent advances in the syntheses, potential formulations and space applications of candidate compounds with respect to safety, performance and stability.
PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iandola, F N; O'Brien, M J; Procassini, R J
2010-11-29
Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.
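To illustrate why an interactive Python layer is attractive for Monte Carlo transport, the toy below estimates uncollided transmission through a 1-D slab and can be re-run from a prompt with new parameters. It is a stand-in for the workflow PyMercury enables, not Mercury's actual API:

```python
# Toy analog Monte Carlo: fraction of particles crossing a 1-D slab
# without a collision. Everything here is illustrative, not Mercury code.
import math
import random

def transmit_fraction(sigma_t, thickness, histories=100_000, seed=1):
    """MC estimate of exp(-sigma_t * thickness) via free-flight sampling."""
    rng = random.Random(seed)
    crossed = sum(1 for _ in range(histories)
                  if -math.log(1.0 - rng.random()) / sigma_t > thickness)
    return crossed / histories

# Interactive use: tweak cross sections and re-run without recompiling,
# which is the kind of loop an embedded Python interface makes cheap.
for sigma_t in (0.5, 1.0, 2.0):
    print(sigma_t, transmit_fraction(sigma_t, thickness=2.0))
```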
Sankaran, Ramanan; Angel, Jordan; Brown, W. Michael
2015-04-08
The growth in size of networked high performance computers along with novel accelerator-based node architectures has further emphasized the importance of communication efficiency in high performance computing. The world's largest high performance computers are usually operated as shared user facilities due to the costs of acquisition and operation. Applications are scheduled for execution in a shared environment and are placed on nodes that are not necessarily contiguous on the interconnect. Furthermore, the placement of tasks on the nodes allocated by the scheduler is sub-optimal, leading to performance loss and variability. Here, we investigate the impact of task placement on the performance of two massively parallel application codes on the Titan supercomputer, a turbulent combustion flow solver (S3D) and a molecular dynamics code (LAMMPS). Benchmark studies show a significant deviation from ideal weak scaling and variability in performance. The inter-task communication distance was determined to be one of the significant contributors to the performance degradation and variability. A genetic algorithm-based parallel optimization technique was used to optimize the task ordering. This technique provides an improved placement of the tasks on the nodes, taking into account the application's communication topology and the system interconnect topology. As a result, application benchmarks after task reordering through the genetic algorithm show a significant improvement in performance and reduction in variability, therefore enabling the applications to achieve better time to solution and scalability on Titan during production.
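A toy version of the genetic-algorithm task reordering described above: a mutation-only GA searches task-to-node permutations that minimize hop-weighted communication cost. The ring interconnect and nearest-neighbor task graph are assumptions standing in for Titan's real topology and the S3D/LAMMPS communication patterns:

```python
# Sketch of GA-based task reordering; topology and task graph are invented.
import random

N = 32                                         # tasks == allocated node slots
rng = random.Random(0)
edges = [(i, (i + 1) % N) for i in range(N)]   # nearest-neighbor exchanges

def hop(a, b):                                 # hop distance on a ring
    return min((a - b) % N, (b - a) % N)

def cost(perm):                                # perm[task] -> node slot
    return sum(hop(perm[i], perm[j]) for i, j in edges)

def mutate(perm):                              # swap two task placements
    p = perm[:]
    i, j = rng.sample(range(N), 2)
    p[i], p[j] = p[j], p[i]
    return p

pop = [rng.sample(range(N), N) for _ in range(50)]
for _ in range(300):                           # mutation-only GA, for brevity
    pop.sort(key=cost)                         # elitism: keep the 10 best
    pop = pop[:10] + [mutate(rng.choice(pop[:10])) for _ in range(40)]
print("best cost:", cost(min(pop, key=cost)), "ideal:", N)
```

A production version would add crossover and use the application's measured communication volumes as edge weights; the structure of the search is the same.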
Performance Support in Internet Time: The State of the Practice.
ERIC Educational Resources Information Center
Gery, Gloria; Malcolm, Stan; Cichelli, Janet; Christensen, Hal; Raybould, Barry; Rosenberg, Marc J.
2000-01-01
Relates a discussion held via teleconference that addressed trends relating to performance support. Topics include computer-based training versus performance support; knowledge management; Internet and Web-based applications; dynamics and human activities; enterprise application integration; intrinsic performance support; and future possibilities.…
Code of Federal Regulations, 2013 CFR
2013-10-01
..., certificate for provider-performed microscopy (PPM) procedures, and certificate of compliance. 493.43 Section... Provider-performed Microscopy Procedures, and Certificate of Compliance § 493.43 Application for registration certificate, certificate for provider-performed microscopy (PPM) procedures, and certificate of...
Code of Federal Regulations, 2010 CFR
2010-10-01
..., certificate for provider-performed microscopy (PPM) procedures, and certificate of compliance. 493.43 Section... Provider-performed Microscopy Procedures, and Certificate of Compliance § 493.43 Application for registration certificate, certificate for provider-performed microscopy (PPM) procedures, and certificate of...
Code of Federal Regulations, 2012 CFR
2012-10-01
..., certificate for provider-performed microscopy (PPM) procedures, and certificate of compliance. 493.43 Section... Provider-performed Microscopy Procedures, and Certificate of Compliance § 493.43 Application for registration certificate, certificate for provider-performed microscopy (PPM) procedures, and certificate of...
Code of Federal Regulations, 2011 CFR
2011-10-01
..., certificate for provider-performed microscopy (PPM) procedures, and certificate of compliance. 493.43 Section... Provider-performed Microscopy Procedures, and Certificate of Compliance § 493.43 Application for registration certificate, certificate for provider-performed microscopy (PPM) procedures, and certificate of...
Code of Federal Regulations, 2014 CFR
2014-10-01
..., certificate for provider-performed microscopy (PPM) procedures, and certificate of compliance. 493.43 Section... Provider-performed Microscopy Procedures, and Certificate of Compliance § 493.43 Application for registration certificate, certificate for provider-performed microscopy (PPM) procedures, and certificate of...
Teodoro, George; Kurc, Tahsin; Andrade, Guilherme; Kong, Jun; Ferreira, Renato; Saltz, Joel
2015-01-01
We carry out a comparative performance study of multi-core CPUs, GPUs and the Intel Xeon Phi (Many Integrated Core, or MIC) with a microscopy image analysis application. We experimentally evaluate the performance of these computing devices on the core operations of the application, and correlate the observed performance with the characteristics of the devices, the data access patterns, the computation complexities, and the parallelization forms of the operations. The results show significant variability in the performance of operations with respect to the device used. The performance of operations with regular data access is comparable, and sometimes better, on a MIC than on a GPU, while GPUs are more efficient than MICs for operations that access data irregularly, because of the lower bandwidth of the MIC for random data accesses. We propose new performance-aware scheduling strategies that consider variabilities in operation speedups. Our scheduling strategies significantly improve application performance compared to classic strategies in hybrid configurations. PMID:28239253
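A hedged sketch of performance-aware placement as opposed to first-come-first-served: each operation goes to the device with the best measured speedup for its access pattern. The speedup table is hypothetical, loosely echoing the regular-vs-irregular finding above, and contention between operations is ignored:

```python
# Illustrative performance-aware placement; all speedup values invented.

SPEEDUP = {  # (access pattern, device) -> speedup vs. one CPU core
    ("regular",   "mic"): 10.0, ("regular",   "gpu"): 9.0,
    ("irregular", "mic"):  2.0, ("irregular", "gpu"): 8.0,
}

def assign(ops, devices=("gpu", "mic")):
    """Pick the fastest device per operation (contention ignored here)."""
    plan = {}
    for name, pattern, work in sorted(ops, key=lambda o: -o[2]):
        plan[name] = max(devices, key=lambda d: SPEEDUP[(pattern, d)])
    return plan

ops = [("segmentation", "irregular", 40.0),
       ("feature_comp", "regular",   60.0)]
print(assign(ops))   # {'feature_comp': 'mic', 'segmentation': 'gpu'}
```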
THE IMMEDIATE AND LONG-TERM EFFECTS OF KINESIOTAPE® ON BALANCE AND FUNCTIONAL PERFORMANCE.
Wilson, Victoria; Douris, Peter; Fukuroku, Taryn; Kuzniewski, Michael; Dias, Joe; Figueiredo, Patrick
2016-04-01
The application of Kinesio Tex® tape (KT) results, in theory, in the improvement of muscle contractibility by supporting weakened muscles. The effect of KT on muscle strength has been investigated by numerous researchers who have theorized that KT facilitates an immediate increase in muscle strength by generating a concentric pull on the fascia. The effect of KT on balance and functional performance has been controversial because of the inconsistencies of tension and direction of pull required during application of KT and whether its use on healthy individuals provides therapeutic benefits. The purpose of the present study was to investigate the immediate and long-term effects of the prescribed application (for facilitation) of KT when applied to the dominant lower extremity of healthy individuals. The hypothesis was that balance and functional performance would improve with the prescribed application of KT versus the sham application. Pretest-posttest repeated measures control group design. Seventeen healthy subjects (9 males; 8 females) ranging from 18-35 years of age (mean age 23.3 ± 0.72) volunteered to participate in this study. KT was applied to the gastrocnemius of the participant's dominant leg using a prescribed application to facilitate muscle performance for the experimental group versus a sham application for the control group. The Biodex Balance System and four hop tests were utilized to assess balance, proprioception, and functional performance beginning on the first day, including pre- and immediately post-KT application measurements. Subsequent measurements were performed 24, 72, and 120 hours after tape application. Repeated measures ANOVAs were performed for each individual dependent variable. There were no significant differences for main and interaction effects between the KT and sham groups for the balance and four hop tests. The results of the present study did not indicate any significant differences in balance and functional performance when KT was applied to the gastrocnemius muscle of the lower extremity. Level 1 - Randomized Clinical Trial.
High Performance Computing Software Applications for Space Situational Awareness
NASA Astrophysics Data System (ADS)
Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.
The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was in improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative de-convolution (PCID) image enhancement software tool. Specifically, we have demonstrated an order-of-magnitude speed-up in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Paul T.; Heroux, Michael A.; Barrett, Richard F.
2015-07-30
The performance of a large-scale, production-quality science and engineering application (‘app’) is often dominated by a small subset of the code. Even within that subset, computational and data access patterns are often repeated, so that an even smaller portion can represent the performance-impacting features. If application developers, parallel computing experts, and computer architects can together identify this representative subset and then develop a small mini-application (‘miniapp’) that can capture these primary performance characteristics, then this miniapp can be used to both improve the performance of the app as well as provide a tool for co-design for the high-performance computing community. However, a critical question is whether a miniapp can effectively capture key performance behavior of an app. This study provides a comparison of an implicit finite element semiconductor device modeling app on unstructured meshes with an implicit finite element miniapp on unstructured meshes. The goal is to assess whether the miniapp is predictive of the performance of the app. Finally, single compute node performance will be compared, as well as scaling up to 16,000 cores. Results indicate that the miniapp can be reasonably predictive of the performance characteristics of the app for a single iteration of the solver on a single compute node.
Profiling and Improving I/O Performance of a Large-Scale Climate Scientific Application
NASA Technical Reports Server (NTRS)
Liu, Zhuo; Wang, Bin; Wang, Teng; Tian, Yuan; Xu, Cong; Wang, Yandong; Yu, Weikuan; Cruz, Carlos A.; Zhou, Shujia; Clune, Tom
2013-01-01
Exascale computing systems will soon emerge, posing great challenges because of the huge gap between computing and I/O performance. Many large-scale scientific applications play an important role in our daily life. The huge amounts of data generated by such applications require highly parallel and efficient I/O management policies. In this paper, we adopt a mission-critical scientific application, GEOS-5, as a case to profile and analyze the communication and I/O issues that are preventing applications from fully utilizing the underlying parallel storage systems. Through detailed architectural and experimental characterization, we observe that current legacy I/O schemes incur significant network communication overheads and are unable to fully parallelize the data access, thus degrading applications' I/O performance and scalability. To address these inefficiencies, we redesign its I/O framework along with a set of parallel I/O techniques to achieve high scalability and performance. Evaluation results on the NASA Discover cluster show that our optimization of GEOS-5 with ADIOS has led to significant performance improvements compared to the original GEOS-5 implementation.
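The redesign described above centers on parallelizing data access rather than funneling output through a single writer. The following C++ sketch shows the general pattern with plain MPI-IO collective writes; it is a generic illustration under assumed file names and sizes, not the GEOS-5/ADIOS implementation itself.

#include <mpi.h>
#include <vector>

// Each rank writes its contiguous slice of a global array with one
// collective call, letting the MPI-IO layer aggregate small requests;
// this is the parallel output path that serial gather-to-rank-0
// schemes lack.
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int localCount = 1 << 20;                 // doubles per rank
    std::vector<double> field(localCount, rank);    // stand-in local field data

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "field.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_Offset offset = (MPI_Offset)rank * localCount * sizeof(double);
    MPI_File_write_at_all(fh, offset, field.data(), localCount,
                          MPI_DOUBLE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);
    MPI_Finalize();
}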
Space Station Application of Simulator-Developed Aircrew Coordination and Performance Measures
NASA Technical Reports Server (NTRS)
Murphy, Miles
1985-01-01
This paper summarizes a study in progress at NASA/Ames Research Center to develop measures of aircrew coordination and decision-making factors and to relate them to flight task performance, that is, to crew and system performance measures. The existence of some similar interpersonal process and task performance requirements suggests a potential application of these methods in space station crew research -- particularly research conducted in ground-based mock-ups. The secondary objective of this study should also be of interest: to develop information on crew process and performance for application in developing crew training programs.
45 CFR 305.33 - Determination of applicable percentages based on performance levels.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., DEPARTMENT OF HEALTH AND HUMAN SERVICES PROGRAM PERFORMANCE MEASURES, STANDARDS, FINANCIAL INCENTIVES, AND PENALTIES § 305.33 Determination of applicable percentages based on performance levels. (a) A State's... performance levels. 305.33 Section 305.33 Public Welfare Regulations Relating to Public Welfare OFFICE OF...
An Application-Based Performance Evaluation of NASAs Nebula Cloud Computing Platform
NASA Technical Reports Server (NTRS)
Saini, Subhash; Heistand, Steve; Jin, Haoqiang; Chang, Johnny; Hood, Robert T.; Mehrotra, Piyush; Biswas, Rupak
2012-01-01
The high performance computing (HPC) community has shown tremendous interest in exploring cloud computing because of its high potential. In this paper, we examine the feasibility, performance, and scalability of production quality scientific and engineering applications of interest to NASA on NASA's cloud computing platform, called Nebula, hosted at Ames Research Center. This work presents a comprehensive evaluation of Nebula using NUTTCP, HPCC, NPB, I/O, and MPI function benchmarks as well as four applications representative of the NASA HPC workload. Specifically, we compare Nebula performance on some of these benchmarks and applications to that of NASA's Pleiades supercomputer, a traditional HPC system. We also investigate the impact of virtIO and jumbo frames on interconnect performance. Overall results indicate that on Nebula (i) virtIO and jumbo frames improve network bandwidth by a factor of 5x, (ii) there is a significant virtualization layer overhead of about 10% to 25%, (iii) write performance is lower by a factor of 25x, (iv) latency for short MPI messages is very high, and (v) overall performance is 15% to 48% lower than that on Pleiades for NASA HPC applications. We also comment on the usability of the cloud platform.
Performance implications from sizing a VM on multi-core systems: A data analytic application's view
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Seung-Hwan; Horey, James L; Begoli, Edmon
In this paper, we present a quantitative performance analysis of data analytics applications running on multi-core virtual machines. Such environments form the core of cloud computing. In addition, data analytics applications, such as Cassandra and Hadoop, are becoming increasingly popular on cloud computing platforms. This convergence necessitates a better understanding of the performance and cost implications of such hybrid systems. For example, the very first step in hosting applications in virtualized environments requires the user to configure the number of virtual processors and the size of memory. To understand performance implications of this step, we benchmarked three Yahoo Cloud Serving Benchmark (YCSB) workloads in a virtualized multi-core environment. Our measurements indicate that the performance of Cassandra for YCSB workloads does not heavily depend on the processing capacity of a system, while the size of the data set is critical to performance relative to allocated memory. We also identified a strong relationship between the running time of workloads and various hardware events (last level cache loads, misses, and CPU migrations). From this analysis, we provide several suggestions to improve the performance of data analytics applications running on cloud computing environments.
Piezoelectric Actuator/Sensor Technology at Rockwell
NASA Technical Reports Server (NTRS)
Neurgaonkar, Ratnakar R.
1996-01-01
We describe the state of the art of piezoelectric materials based on the perovskite and tungsten bronze families for sensor, actuator and smart structure applications. The microstructural defects in these materials have been eliminated to a large extent and the resulting materials exhibit exceedingly high performance for various applications. The performance of Rockwell actuators/sensors is at least 3 times better than that of commercially available products. These high performance actuators are being incorporated into a variety of applications, including DOD, NASA, and commercial uses. The multilayer actuator stacks fabricated from our piezoceramics are advantageous for sensing and high capacitance applications. In this presentation, we will describe the use of our high performance piezo-ceramics for actuators and sensors, including multilayer stacks and composite structures.
NASA Astrophysics Data System (ADS)
Adams, Matthew; Salgaonkar, Vasant; Jones, Peter; Plata, Juan; Chen, Henry; Pauly, Kim Butts; Sommer, Graham; Diederich, Chris
2017-03-01
An MR-guided endoluminal ultrasound applicator has been proposed for palliative and potentially curative thermal therapy of pancreatic tumors. Minimally invasive ablation or hyperthermia treatment of pancreatic tumor tissue would be performed with the applicator positioned in the gastrointestinal (GI) lumen, and sparing of the luminal tissue would be achieved with a water-cooled balloon surrounding the ultrasound transducers. This approach offers the capability of conformal volumetric therapy for fast treatment times, with control over the 3D spatial deposition of energy. Prototype endoluminal ultrasound applicators have been fabricated using 3D printed fixtures that seat two 3.2 or 5.6 MHz planar or curvilinear transducers and contain channels for wiring and water flow. Spiral surface coils have been integrated onto the applicator body to allow for device localization and tracking for therapies performed under MR guidance. Heating experiments with a tissue-mimicking phantom in a 3T MR scanner were performed and demonstrated the capability of the prototype to perform volumetric heating through duodenal luminal tissue under real-time PRF-based MR temperature imaging (MRTI). Additional experiments were performed in ex vivo pig carcasses with the applicator inserted into the esophagus and aimed towards liver or soft tissue surrounding the spine under MR guidance. These experiments verified the capacity to heat targets up to 20-25 mm from the GI tract. Active device tracking and automated prescription of imaging and temperature monitoring planes through the applicator were made possible by using Hadamard-encoded tracking sequences to obtain the coordinates of the applicator tracking coils. The prototype applicators have been integrated with an MR software suite that performs real-time device tracking and temperature monitoring.
12 CFR 563e.29 - Effect of CRA performance on applications.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Effect of CRA performance on applications. 563e.29 Section 563e.29 Banks and Banking OFFICE OF THRIFT SUPERVISION, DEPARTMENT OF THE TREASURY COMMUNITY REINVESTMENT Standards for Assessing Performance § 563e.29 Effect of CRA performance on...
What does the multiple mini interview have to offer over the panel interview?
Pau, Allan; Chen, Yu Sui; Lee, Verna Kar Mun; Sow, Chew Fei; De Alwis, Ranjit
2016-01-01
This paper compares the panel interview (PI) performance with the multiple mini interview (MMI) performance and indication of behavioural concerns of a sample of medical school applicants. The acceptability of the MMI was also assessed. All applicants shortlisted for a PI were invited to an MMI. Applicants attended a 30-min PI with two faculty interviewers followed by an MMI consisting of ten 8-min stations. Applicants were assessed on their performance at each MMI station by one faculty. The interviewer also indicated if they perceived the applicant to be a concern. Finally, applicants completed an acceptability questionnaire. From the analysis of 133 (75.1%) completed MMI scoresheets, the MMI scores correlated statistically significantly with the PI scores (r=0.438, p=0.001). Both were not statistically associated with sex, age, race, or pre-university academic ability to any significance. Applicants assessed as a concern at two or more stations performed statistically significantly less well at the MMI when compared with those who were assessed as a concern at one station or none at all. However, there was no association with PI performance. Acceptability scores were generally high, and comparison of mean scores for each of the acceptability questionnaire items did not show statistically significant differences between sex and race categories. Although PI and MMI performances are correlated, the MMI may have the added advantage of more objectively generating multiple impressions of the applicant's interpersonal skill, thoughtfulness, and general demeanour. Results of the present study indicated that the MMI is acceptable in a multicultural context.
What does the multiple mini interview have to offer over the panel interview?
Pau, Allan; Chen, Yu Sui; Lee, Verna Kar Mun; Sow, Chew Fei; Alwis, Ranjit De
2016-01-01
Introduction This paper compares the panel interview (PI) performance with the multiple mini interview (MMI) performance and indication of behavioural concerns of a sample of medical school applicants. The acceptability of the MMI was also assessed. Materials and methods All applicants shortlisted for a PI were invited to an MMI. Applicants attended a 30-min PI with two faculty interviewers followed by an MMI consisting of ten 8-min stations. Applicants were assessed on their performance at each MMI station by one faculty. The interviewer also indicated if they perceived the applicant to be a concern. Finally, applicants completed an acceptability questionnaire. Results From the analysis of 133 (75.1%) completed MMI scoresheets, the MMI scores correlated statistically significantly with the PI scores (r=0.438, p=0.001). Both were not statistically associated with sex, age, race, or pre-university academic ability to any significance. Applicants assessed as a concern at two or more stations performed statistically significantly less well at the MMI when compared with those who were assessed as a concern at one station or none at all. However, there was no association with PI performance. Acceptability scores were generally high, and comparison of mean scores for each of the acceptability questionnaire items did not show statistically significant differences between sex and race categories. Conclusions Although PI and MMI performances are correlated, the MMI may have the added advantage of more objectively generating multiple impressions of the applicant's interpersonal skill, thoughtfulness, and general demeanour. Results of the present study indicated that the MMI is acceptable in a multicultural context. PMID:26873337
High-performance silicon photonics technology for telecommunications applications.
Yamada, Koji; Tsuchizawa, Tai; Nishi, Hidetaka; Kou, Rai; Hiraki, Tatsurou; Takeda, Kotaro; Fukuda, Hiroshi; Ishikawa, Yasuhiko; Wada, Kazumi; Yamamoto, Tsuyoshi
2014-04-01
By way of a brief review of Si photonics technology, we show that significant improvements in device performance are necessary for practical telecommunications applications. In order to improve device performance in Si photonics, we have developed a Si-Ge-silica monolithic integration platform, on which compact Si-Ge-based modulators/detectors and silica-based high-performance wavelength filters are monolithically integrated. The platform features low-temperature silica film deposition, which cannot damage Si-Ge-based active devices. Using this platform, we have developed various integrated photonic devices for broadband telecommunications applications.
High-performance silicon photonics technology for telecommunications applications
Yamada, Koji; Tsuchizawa, Tai; Nishi, Hidetaka; Kou, Rai; Hiraki, Tatsurou; Takeda, Kotaro; Fukuda, Hiroshi; Ishikawa, Yasuhiko; Wada, Kazumi; Yamamoto, Tsuyoshi
2014-01-01
By way of a brief review of Si photonics technology, we show that significant improvements in device performance are necessary for practical telecommunications applications. In order to improve device performance in Si photonics, we have developed a Si-Ge-silica monolithic integration platform, on which compact Si-Ge–based modulators/detectors and silica-based high-performance wavelength filters are monolithically integrated. The platform features low-temperature silica film deposition, which cannot damage Si-Ge–based active devices. Using this platform, we have developed various integrated photonic devices for broadband telecommunications applications. PMID:27877659
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreepathi, Sarat; D'Azevedo, Eduardo; Philip, Bobby
On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify a suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize communication behavior of critical US DOE (United States Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor-joining tree, etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership class supercomputer at Oak Ridge National Laboratory.
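Standard MPI already exposes a hook for this kind of topology-aware placement: distributed graph communicators with reordering enabled. The sketch below is a minimal, hypothetical example using a toy 1-D halo pattern; it is not the authors' mpiAproxy tool or their spectral-bisection reordering.

#include <mpi.h>
#include <vector>

// Describe each rank's communication neighbors and let the MPI library
// reorder ranks onto the allocated nodes (reorder = 1). The neighbor
// lists here are illustrative.
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    // Toy 1-D halo pattern: each rank talks to rank-1 and rank+1.
    std::vector<int> nbrs;
    if (rank > 0) nbrs.push_back(rank - 1);
    if (rank < nprocs - 1) nbrs.push_back(rank + 1);

    MPI_Comm comm_reordered;
    MPI_Dist_graph_create_adjacent(
        MPI_COMM_WORLD,
        (int)nbrs.size(), nbrs.data(), MPI_UNWEIGHTED,   // who sends to me
        (int)nbrs.size(), nbrs.data(), MPI_UNWEIGHTED,   // whom I send to
        MPI_INFO_NULL, /*reorder=*/1, &comm_reordered);

    int newRank;
    MPI_Comm_rank(comm_reordered, &newRank);  // may differ from the old rank
    MPI_Comm_free(&comm_reordered);
    MPI_Finalize();
}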
A Cross-Platform Infrastructure for Scalable Runtime Application Performance Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dongarra, Jack; Moore, Shirley; Miller, Bart; Hollingsworth, Jeffrey
2005-03-15
The purpose of this project was to build an extensible cross-platform infrastructure to facilitate the development of accurate and portable performance analysis tools for current and future high performance computing (HPC) architectures. Major accomplishments include tools and techniques for multidimensional performance analysis, as well as improved support for dynamic performance monitoring of multithreaded and multiprocess applications. Previous performance tool development has been limited by the burden of having to re-write a platform-dependent low-level substrate for each architecture/operating system pair in order to obtain the necessary performance data from the system. Manual interpretation of performance data is not scalable for large-scale, long-running applications. The infrastructure developed by this project provides a foundation for building portable and scalable performance analysis tools, with the end goal being to provide application developers with the information they need to analyze, understand, and tune the performance of terascale applications on HPC architectures. The backend portion of the infrastructure provides runtime instrumentation capability and access to hardware performance counters, with thread-safety for shared memory environments and a communication substrate to support instrumentation of multiprocess and distributed programs. Front-end interfaces provide tool developers with a well-defined, platform-independent set of calls for requesting performance data. End-user tools have been developed that demonstrate runtime data collection, on-line and off-line analysis of performance data, and multidimensional performance analysis. The infrastructure is based on two underlying performance instrumentation technologies. These technologies are the PAPI cross-platform library interface to hardware performance counters and the cross-platform Dyninst library interface for runtime modification of executable images. The Paradyn and KOJAK projects have made use of this infrastructure to build performance measurement and analysis tools that scale to long-running programs on large parallel and distributed systems and that automate much of the search for performance bottlenecks.
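As a concrete illustration of the counter-access layer such an infrastructure builds on, the following C++ sketch reads two hardware counters around a region of interest using the PAPI event-set API. The measured loop is a stand-in and error handling is trimmed for brevity.

#include <papi.h>
#include <cstdio>

// Count total cycles and instructions around a region of interest.
int main() {
    if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT) return 1;

    int eventSet = PAPI_NULL;
    PAPI_create_eventset(&eventSet);
    PAPI_add_event(eventSet, PAPI_TOT_CYC);   // total cycles
    PAPI_add_event(eventSet, PAPI_TOT_INS);   // total instructions

    PAPI_start(eventSet);
    double acc = 0.0;                          // region under measurement
    for (int i = 0; i < 10000000; ++i) acc += i * 0.5;
    long long counts[2];
    PAPI_stop(eventSet, counts);

    std::printf("acc=%g cycles=%lld instructions=%lld\n",
                acc, counts[0], counts[1]);
    return 0;
}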
40 CFR 60.40b - Applicability and delegation of authority.
Code of Federal Regulations, 2012 CFR
2012-07-01
... applicability requirements under subpart D (Standards of performance for fossil-fuel-fired steam generators... meeting the applicability requirements under subpart D (Standards of performance for fossil-fuel-fired... fossil fuel. If the affected facility (i.e. heat recovery steam generator) is subject to this subpart...
40 CFR 60.40b - Applicability and delegation of authority.
Code of Federal Regulations, 2014 CFR
2014-07-01
... applicability requirements under subpart D (Standards of performance for fossil-fuel-fired steam generators... meeting the applicability requirements under subpart D (Standards of performance for fossil-fuel-fired... fossil fuel. If the affected facility (i.e. heat recovery steam generator) is subject to this subpart...
40 CFR 60.40b - Applicability and delegation of authority.
Code of Federal Regulations, 2011 CFR
2011-07-01
... applicability requirements under subpart D (Standards of performance for fossil-fuel-fired steam generators... meeting the applicability requirements under subpart D (Standards of performance for fossil-fuel-fired...) heat input of fossil fuel. If the heat recovery steam generator is subject to this subpart, only...
40 CFR 60.40b - Applicability and delegation of authority.
Code of Federal Regulations, 2010 CFR
2010-07-01
... applicability requirements under subpart D (Standards of performance for fossil-fuel-fired steam generators... meeting the applicability requirements under subpart D (Standards of performance for fossil-fuel-fired...) heat input of fossil fuel. If the heat recovery steam generator is subject to this subpart, only...
40 CFR 60.40b - Applicability and delegation of authority.
Code of Federal Regulations, 2013 CFR
2013-07-01
... applicability requirements under subpart D (Standards of performance for fossil-fuel-fired steam generators... meeting the applicability requirements under subpart D (Standards of performance for fossil-fuel-fired... fossil fuel. If the affected facility (i.e. heat recovery steam generator) is subject to this subpart...
DOT National Transportation Integrated Search
2015-08-01
This document is the first of a seven volume report that describes performance requirements for connected vehicle vehicle-to-infrastructure (V2I) Safety Applications developed for the U.S. Department of Transportation (U.S. DOT). The applications add...
ERIC Educational Resources Information Center
Tang, Thomas Li-Ping; Austin, M. Jill
2009-01-01
This study examined business students' perceptions of four objectives (i.e., Enjoyment, Learning, Motivation, and Career Application) across five teaching technologies (i.e., Projector, PowerPoint, Video, the Internet, and Lecture), business professors' effective application of technologies, and students' academic performance. We collected data…
THE IMMEDIATE AND LONG-TERM EFFECTS OF KINESIOTAPE® ON BALANCE AND FUNCTIONAL PERFORMANCE
Douris, Peter; Fukuroku, Taryn; Kuzniewski, Michael; Dias, Joe; Figueiredo, Patrick
2016-01-01
Background The application of Kinesio Tex® tape (KT) results, in theory, in the improvement of muscle contractibility by supporting weakened muscles. The effect of KT on muscle strength has been investigated by numerous researchers who have theorized that KT facilitates an immediate increase in muscle strength by generating a concentric pull on the fascia. The effect of KT on balance and functional performance has been controversial because of the inconsistencies of tension and direction of pull required during application of KT and whether its use on healthy individuals provides therapeutic benefits. Hypotheses/Purpose The purpose of the present study was to investigate the immediate and long-term effects of the prescribed application (for facilitation) of KT when applied to the dominant lower extremity of healthy individuals. The hypothesis was that balance and functional performance would improve with the prescribed application of KT versus the sham application. Study Design Pretest-posttest repeated measures control group design. Methods Seventeen healthy subjects (9 males; 8 females) ranging from 18-35 years of age (mean age 23.3 ± 0.72), volunteered to participate in this study. KT was applied to the gastrocnemius of the participant's dominant leg using a prescribed application to facilitate muscle performance for the experimental group versus a sham application for the control group. The Biodex Balance System and four hop tests were utilized to assess balance, proprioception, and functional performance beginning on the first day including pre- and immediately post-KT application measurements. Subsequent measurements were performed 24, 72, and 120 hours after tape application. Repeated measures ANOVAs were performed for each individual dependent variable. Results There were no significant differences for main and interaction effects between KT and sham groups for the balance and four hop tests. Conclusion The results of the present study did not indicate any significant differences in balance and functional performance when KT was applied to the gastrocnemius muscle of the lower extremity. Level of evidence Level 1 - Randomized Clinical Trial. PMID:27104058
An examination of OLED display application to military equipment
NASA Astrophysics Data System (ADS)
Thomas, J.; Lorimer, S.
2010-04-01
OLED display technology has developed sufficiently to support small format commercial applications such as cell-phone main display functions. Revenues seem sufficient to finance both performance improvements and the development of new applications. This suggests that OLED technology is on the threshold of credibility for military applications. This paper will examine both performance and some possible applications for the military ground mobile environment, identifying the advantages and disadvantages of this promising new technology.
Marcolin, Giuseppe; Buriani, Alessandro; Giacomelli, Andrea; Blow, David; Grigoletto, Davide; Gesi, Marco
2017-06-24
Kinesiologic elastic tape is widely used for both clinical and sport applications, although its efficacy in enhancing agonistic performance is still controversial. The aim of the study was to verify, in a group of healthy basketball players, whether a neuromuscular taping application (NMT) on the ankle and knee joints could affect the kinematic and kinetic parameters of the jump, either by enhancing or inhibiting the functional performance. Fourteen healthy male basketball players without any ongoing pathologies at upper limbs, lower limbs and trunk volunteered in the study. They randomly performed 2 sets of 5 counter movement jumps (CMJ) with and without application of kinesiologic tape. The best 3 jumps of each set were considered for the analysis. The kinematic parameters analyzed were: maximal knee flexion and maximal ankle dorsiflexion during the push-off phase, jump height, and take-off velocity. Vertical ground reaction force and maximal power expressed in the push-off phase of the jump were also investigated. The NMT application in both knees and ankles showed no statistically significant differences in the kinematic and kinetic parameters and did not interfere with the CMJ performance. Bilateral NMT application in the group of healthy male basketball players did not change kinematic and kinetic jump parameters, thus suggesting that its routine use should have no negative effect on functional performance. Similarly, the combined application of the tape on both knees and ankles did not affect jump performance either way.
Marcolin, Giuseppe; Buriani, Alessandro; Giacomelli, Andrea; Blow, David; Grigoletto, Davide; Gesi, Marco
2017-01-01
Kinesiologic elastic tape is widely used for both clinical and sport applications, although its efficacy in enhancing agonistic performance is still controversial. The aim of the study was to verify, in a group of healthy basketball players, whether a neuromuscular taping application (NMT) on the ankle and knee joints could affect the kinematic and kinetic parameters of the jump, either by enhancing or inhibiting the functional performance. Fourteen healthy male basketball players without any ongoing pathologies at upper limbs, lower limbs and trunk volunteered in the study. They randomly performed 2 sets of 5 counter movement jumps (CMJ) with and without application of kinesiologic tape. The best 3 jumps of each set were considered for the analysis. The kinematic parameters analyzed were: maximal knee flexion and maximal ankle dorsiflexion during the push-off phase, jump height, and take-off velocity. Vertical ground reaction force and maximal power expressed in the push-off phase of the jump were also investigated. The NMT application in both knees and ankles showed no statistically significant differences in the kinematic and kinetic parameters and did not interfere with the CMJ performance. Bilateral NMT application in the group of healthy male basketball players did not change kinematic and kinetic jump parameters, thus suggesting that its routine use should have no negative effect on functional performance. Similarly, the combined application of the tape on both knees and ankles did not affect jump performance either way. PMID:28713536
Revel8or: Model Driven Capacity Planning Tool Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Liming; Liu, Yan; Bui, Ngoc B.
2007-05-31
Designing complex multi-tier applications that must meet strict performance requirements is a challenging software engineering problem. Ideally, the application architect could derive accurate performance predictions early in the project life-cycle, leveraging initial application design-level models and a description of the target software and hardware platforms. To this end, we have developed a capacity planning tool suite for component-based applications, called Revel8tor. The tool adheres to the model driven development paradigm and supports benchmarking and performance prediction for J2EE, .Net and Web services platforms. The suite is composed of three different tools: MDAPerf, MDABench and DSLBench. MDAPerf allows annotation of design diagrams and derives performance analysis models. MDABench allows a customized benchmark application to be modeled in the UML 2.0 Testing Profile and automatically generates a deployable application, with measurement automatically conducted. DSLBench allows the same benchmark modeling and generation to be conducted using a simple performance engineering Domain Specific Language (DSL) in Microsoft Visual Studio. DSLBench integrates with Visual Studio and reuses its load testing infrastructure. Together, the tool suite can assist capacity planning across platforms in an automated fashion.
Application submission date reflects applicant quality.
Fuhrman, George M; Dada, Stephen; Ehleben, Carole
2008-01-01
Applications for general surgery residency are submitted through the Electronic Residency Application Service (ERAS) beginning in early September. The purpose of this study was to determine whether the date of application submission could be used in the screening of an applicant for general surgery residency. The 2007 ERAS data for an independent program that accepts 2 categorical residents per year was evaluated. International medical graduates were excluded because no international applicants were considered for interviews. Applicants for preliminary positions were also excluded. The remaining graduates from medical schools accredited by the Liaison Committee on Medical Education (LCME) who applied for categorical positions were evaluated based on United States Medical Licensing Examination (USMLE) scores and on medical school performance, as well as on the quality of their personal statements and letters of recommendation. Medical school performance was determined from dean's letters and transcript information, and each applicant was classified as outstanding, average, or poor. The date of application submission was compared with USMLE scores and medical school performance. The lag time to submit an application was also evaluated and compared with whether a student was offered an interview and the assessment of the quality of that interview. Results were evaluated using analysis of variance and the Pearson correlation test to evaluate for significance. A total of 155 applications from LCME-accredited schools for categorical positions were received. The mean lag time to application for students with an outstanding medical school performance was 15.2 ± 15.5 days compared with 37.4 ± 26.2 days for poorly performing students (p < 0.01). A negative correlation between USMLE score and the lag time to application was noted (p < 0.01 USMLE I and USMLE II). Applicants offered an interview demonstrated a lag time to submit their application of 19.2 ± 21.7 days versus 34.0 ± 25.8 days for applicants not selected to interview (p < 0.01). The results of our study suggest that the date of application submission can provide important screening information about an applicant for general surgery residency. If nearly all high-quality applications are received in September, programs could begin the interview process in early November, which gives students an opportunity to visit more programs and increase their exposure to a broader variety of training options.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gentile, Ann C.; Brandt, James M.; Tucker, Thomas
2011-09-01
This report provides documentation for the completion of the Sandia Level II milestone 'Develop feedback system for intelligent dynamic resource allocation to improve application performance'. This milestone demonstrates the use of a scalable data collection, analysis, and feedback system that enables insight into how an application is utilizing the hardware resources of a high performance computing (HPC) platform in a lightweight fashion. Further, we demonstrate utilizing the same mechanisms used for transporting data for remote analysis and visualization to provide low-latency run-time feedback to applications. The ultimate goal of this body of work is performance optimization in the face of the ever-increasing size and complexity of HPC systems.
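A minimal sketch of the feedback idea, under the assumption of a Linux /proc interface, is shown below: a background thread samples the process's resident-set size at a fixed period and publishes it through an atomic, so the application can read its own resource usage at run time. This illustrates the concept only; it is not the Sandia collection and transport system.

#include <atomic>
#include <chrono>
#include <fstream>
#include <iostream>
#include <thread>

// Lightweight in-process sampler: periodically read resident-set size
// from Linux /proc and make it available for run-time decisions.
std::atomic<long> g_rss_pages{0};
std::atomic<bool> g_run{true};

void sampler(int period_ms) {
    while (g_run.load()) {
        std::ifstream statm("/proc/self/statm");
        long total = 0, resident = 0;
        if (statm >> total >> resident) g_rss_pages.store(resident);
        std::this_thread::sleep_for(std::chrono::milliseconds(period_ms));
    }
}

int main() {
    std::thread t(sampler, 100);
    // ... application work; it can poll g_rss_pages to adapt ...
    std::this_thread::sleep_for(std::chrono::seconds(1));
    std::cout << "resident pages: " << g_rss_pages.load() << "\n";
    g_run.store(false);
    t.join();
}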
Accelerated Application Development: The ORNL Titan Experience
Joubert, Wayne; Archibald, Richard K.; Berrill, Mark A.; ...
2015-05-09
The use of computational accelerators such as NVIDIA GPUs and Intel Xeon Phi processors is now widespread in the high performance computing community, with many applications delivering impressive performance gains. However, programming these systems for high performance, performance portability and software maintainability has been a challenge. In this paper we discuss experiences porting applications to the Titan system. Titan, which began planning in 2009 and was deployed for general use in 2013, was the first multi-petaflop system based on accelerator hardware. To ready applications for accelerated computing, a preparedness effort was undertaken prior to delivery of Titan. In this paper we report experiences and lessons learned from this process and describe how users are currently making use of computational accelerators on Titan.
Accelerated application development: The ORNL Titan experience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joubert, Wayne; Archibald, Rick; Berrill, Mark
2015-08-01
The use of computational accelerators such as NVIDIA GPUs and Intel Xeon Phi processors is now widespread in the high performance computing community, with many applications delivering impressive performance gains. However, programming these systems for high performance, performance portability and software maintainability has been a challenge. In this paper we discuss experiences porting applications to the Titan system. Titan, which began planning in 2009 and was deployed for general use in 2013, was the first multi-petaflop system based on accelerator hardware. To ready applications for accelerated computing, a preparedness effort was undertaken prior to delivery of Titan. In this paper we report experiences and lessons learned from this process and describe how users are currently making use of computational accelerators on Titan.
Schripsema, Nienke R; van Trigt, Anke M; Borleffs, Jan C C; Cohen-Schotanus, Janke
2017-05-01
Situational Judgement Tests (SJTs) are increasingly implemented in medical school admissions. In this paper, we investigate the effects of vocational interests, previous academic experience, gender and age on SJT performance. The SJT was part of the selection process for the Bachelor's degree programme in Medicine at University of Groningen, the Netherlands. All applicants for the academic year 2015-2016 were included and had to choose between learning communities Global Health (n = 126), Sustainable Care (n = 149), Intramural Care (n = 225), or Molecular Medicine (n = 116). This choice was used as a proxy for vocational interest. In addition, all graduate-entry applicants for academic year 2015-2016 (n = 213) were included to examine the effect of previous academic experience on performance. We used MANCOVA analyses with Bonferroni post hoc multiple comparisons tests for applicant performance on a six-scenario SJT. The MANCOVA analyses showed that for all scenarios, the independent variables were significantly related to performance (Pillai's Trace: 0.02-0.47, p < .01). Vocational interest was related to performance on three scenarios (p < .01). Graduate-entry applicants outperformed all other groups on three scenarios (p < .01) and at least one other group on the other three scenarios (p < .01). Female applicants outperformed male applicants on three scenarios (p < .01) and age was positively related to performance on two scenarios (p < .05). A good fit between applicants' vocational interests and SJT scenario was related to better performance, as was previous academic experience. Gender and age were related to performance on SJT scenarios in different settings. Especially the first effect might be helpful in selecting appropriate candidates for areas of health care in which more professionals are needed.
Calculating Reuse Distance from Source Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narayanan, Sri Hari Krishna; Hovland, Paul
The efficient use of a system is of paramount importance in high-performance computing. Applications need to be engineered for future systems even before the architecture of such a system is clearly known. Static performance analysis that generates performance bounds is one way to approach the task of understanding application behavior. Performance bounds provide an upper limit on the performance of an application on a given architecture. Predicting cache hierarchy behavior and accesses to main memory is a requirement for accurate performance bounds. This work presents our static reuse distance algorithm to generate reuse distance histograms. We then use these histograms to predict cache miss rates. Experimental results for kernels studied show that the approach is accurate.
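For reference, the trace-based definition that such a static analysis approximates can be computed with an LRU stack: the reuse distance of an access is the number of distinct addresses touched since the previous access to the same address. The C++ sketch below builds the histogram for a toy trace; a fully associative cache of C lines misses exactly the accesses with distance at least C, which is how histograms predict miss rates.

#include <cstdio>
#include <iterator>
#include <list>
#include <unordered_map>
#include <vector>

// LRU-stack computation of reuse distance (infinite on first touch).
int main() {
    std::vector<long> trace = {1, 2, 3, 1, 2, 4, 1};
    std::list<long> stack;                                   // most recent at front
    std::unordered_map<long, std::list<long>::iterator> pos; // addr -> stack slot
    std::unordered_map<long, long> histogram;                // distance -> count

    for (long addr : trace) {
        auto it = pos.find(addr);
        if (it == pos.end()) {
            ++histogram[-1];                                  // -1 marks cold (infinite)
        } else {
            long d = (long)std::distance(stack.begin(), it->second);
            ++histogram[d];
            stack.erase(it->second);
        }
        stack.push_front(addr);
        pos[addr] = stack.begin();
    }
    for (auto [d, n] : histogram) std::printf("distance %ld: %ld\n", d, n);
}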
Performance Engineering Research Institute SciDAC-2 Enabling Technologies Institute Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Robert
2013-04-20
Enhancing the performance of SciDAC applications on petascale systems had high priority within DOE SC at the start of the second phase of the SciDAC program, SciDAC-2, as it does today. Achieving expected levels of performance on high-end computing (HEC) systems is growing ever more challenging due to enormous scale, increasing architectural complexity, and increasing application complexity. To address these challenges, the University of Southern California's Information Sciences Institute organized the Performance Engineering Research Institute (PERI). PERI implemented a unified, tripartite research plan encompassing: (1) performance modeling and prediction; (2) automatic performance tuning; and (3) performance engineering of high profile applications. Within PERI, USC's primary research activity was automatic tuning (autotuning) of scientific software. This activity was spurred by the strong user preference for automatic tools and was based on previous successful activities such as ATLAS, which automatically tuned components of the LAPACK linear algebra library, and other recent work on autotuning domain-specific libraries. Our other major component was application engagement, to which we devoted approximately 30% of our effort to work directly with SciDAC-2 applications. This report is a summary of the overall results of the USC PERI effort.
Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh
Optimizing applications simultaneously for energy and performance is a complex problem. High performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy efficient, multicore and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.
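Of the three tuning dimensions, the loop-schedule dimension is easy to illustrate portably. The C++/OpenMP sketch below times an unevenly weighted loop under several schedules and chunk sizes; a real autotuner such as OpenTuner would search this space rather than enumerate it, and the kernel and sizes here are stand-ins.

#include <omp.h>
#include <cstdio>
#include <vector>

// Sweep parallel-loop schedules (one tuning dimension from the abstract)
// and time an irregular-weight loop under each.
int main() {
    const int N = 1 << 22;
    std::vector<double> w(N);
    for (int i = 0; i < N; ++i) w[i] = (i % 97) * 1e-6;  // uneven "work"

    struct { omp_sched_t kind; const char* name; } kinds[] = {
        {omp_sched_static, "static"},
        {omp_sched_dynamic, "dynamic"},
        {omp_sched_guided, "guided"},
    };
    for (auto k : kinds) {
        for (int chunk : {1, 64, 1024}) {
            omp_set_schedule(k.kind, chunk);   // honored by schedule(runtime)
            double t0 = omp_get_wtime(), sum = 0.0;
            #pragma omp parallel for schedule(runtime) reduction(+:sum)
            for (int i = 0; i < N; ++i)
                for (int r = 0; r < (int)(w[i] * 1e6); ++r) sum += 1e-9;
            std::printf("%-8s chunk %-5d: %.3fs (sum=%g)\n",
                        k.name, chunk, omp_get_wtime() - t0, sum);
        }
    }
}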
Agelastos, Anthony; Allan, Benjamin; Brandt, Jim; ...
2016-05-18
A detailed understanding of HPC applications' resource needs and their complex interactions with each other and with HPC platform resources is critical to achieving scalability and performance. Such understanding has been difficult to achieve because typical application profiling tools do not capture the behaviors of codes under the potentially wide spectrum of actual production conditions and because typical monitoring tools do not capture system resource usage information with high enough fidelity to gain sufficient insight into application performance and demands. In this paper we present both system and application profiling results based on data obtained through synchronized system-wide monitoring on a production HPC cluster at Sandia National Laboratories (SNL). We demonstrate analytic and visualization techniques that we are using to characterize application and system resource usage under production conditions for better understanding of application resource needs. Furthermore, our goals are to improve application performance (through understanding application-to-resource mapping and system throughput) and to ensure that future system capabilities match their intended workloads.
40 CFR 60.2110 - What operating limits must I meet and by when?
Code of Federal Regulations, 2010 CFR
2010-07-01
... during the most recent performance test demonstrating compliance with all applicable emission limitations... most recent performance test demonstrating compliance with all applicable emission limitations. (2... drop across the wet scrubber measured during the most recent performance test demonstrating compliance...
40 CFR 60.2110 - What operating limits must I meet and by when?
Code of Federal Regulations, 2011 CFR
2011-07-01
... during the most recent performance test demonstrating compliance with all applicable emission limitations... most recent performance test demonstrating compliance with all applicable emission limitations. (2... drop across the wet scrubber measured during the most recent performance test demonstrating compliance...
40 CFR 60.2110 - What operating limits must I meet and by when?
Code of Federal Regulations, 2012 CFR
2012-07-01
... during the most recent performance test demonstrating compliance with all applicable emission limitations... most recent performance test demonstrating compliance with all applicable emission limitations. (2... drop across the wet scrubber measured during the most recent performance test demonstrating compliance...
Developing Information Power Grid Based Algorithms and Software
NASA Technical Reports Server (NTRS)
Dongarra, Jack
1998-01-01
This exploratory study initiated our effort to understand performance modeling on parallel systems. The basic goal of performance modeling is to understand and predict the performance of a computer program or set of programs on a computer system. Performance modeling has numerous applications, including evaluation of algorithms, optimization of code implementations, parallel library development, comparison of system architectures, parallel system design, and procurement of new systems. Our work lays the basis for the construction of parallel libraries that allow for the reconstruction of application codes on several distinct architectures so as to assure performance portability. Following our strategy, once the requirements of applications are well understood, one can then construct a library in a layered fashion. The top level of this library will consist of architecture-independent geometric, numerical, and symbolic algorithms that are needed by the sample of applications. These routines should be written in a language that is portable across the targeted architectures.
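The simplest instance of such a performance model is a roofline-style bound, shown in the C++ sketch below: attainable throughput is capped by the smaller of peak compute and memory bandwidth times arithmetic intensity. The peak numbers are illustrative assumptions, not figures from this study.

#include <algorithm>
#include <cstdio>

// Roofline-style bound: GFLOP/s <= min(peak compute, bandwidth * intensity).
int main() {
    const double peak_gflops = 500.0;   // assumed machine peak
    const double bw_gbytes   = 100.0;   // assumed memory bandwidth (GB/s)

    // Example kernel: daxpy (y = a*x + y) moves 24 bytes and does
    // 2 flops per element, so intensity = 2/24 flop/byte.
    double intensity = 2.0 / 24.0;
    double bound = std::min(peak_gflops, bw_gbytes * intensity);
    std::printf("daxpy bound: %.1f GFLOP/s (memory-bound)\n", bound);
}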
Performance Engineering Research Institute SciDAC-2 Enabling Technologies Institute Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, Mary
2014-09-19
Enhancing the performance of SciDAC applications on petascale systems has high priority within DOE SC. As we look to the future, achieving expected levels of performance on high-end computing (HEC) systems is growing ever more challenging due to enormous scale, increasing architectural complexity, and increasing application complexity. To address these challenges, PERI has implemented a unified, tripartite research plan encompassing: (1) performance modeling and prediction; (2) automatic performance tuning; and (3) performance engineering of high profile applications. The PERI performance modeling and prediction activity is developing and refining performance models, significantly reducing the cost of collecting the data upon which the models are based, and increasing model fidelity, speed and generality. Our primary research activity is automatic tuning (autotuning) of scientific software. This activity is spurred by the strong user preference for automatic tools and is based on previous successful activities such as ATLAS, which has automatically tuned components of the LAPACK linear algebra library, and other recent work on autotuning domain-specific libraries. Our third major component is application engagement, to which we are devoting approximately 30% of our effort to work directly with SciDAC-2 applications. This last activity not only helps DOE scientists meet their near-term performance goals, but also helps keep PERI research focused on the real challenges facing DOE computational scientists as they enter the Petascale Era.
NASA wiring for space applications program
NASA Technical Reports Server (NTRS)
Schulze, Norman
1995-01-01
An overview of the NASA Wiring for Space Applications Program and its relationship to NASA's space technology enterprise is given in viewgraph format. The mission of the space technology enterprise is to pioneer, with industry, the development and use of space technology to secure national economic competitiveness, promote industrial growth, and to support space missions. The objectives of the NASA Wiring for Space Applications Program are to improve the safety, performance, and reliability of wiring systems for space applications and to develop improved wiring technologies for NASA flight programs and commercial applications. Wiring system failures in space and commercial applications have shown the need for arc track resistant wiring constructions. A matrix of tests performed versus wiring constructions is presented. Preliminary data indicate that the Tensolite and Filotex hybrid constructions perform best among the various candidates.
High-Performance Java Codes for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2001-01-01
The computational science community is reluctant to write large-scale computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.
1983-09-01
AD-A133 592: Artificial Intelligence: An Analysis of Potential Applications to Training (U). Denver Research Institute, CO; J. Richardson; September 1983; AFHRL-TP-83-28; interim report. Keywords: artificial intelligence, military research, computer-aided diagnosis, performance tests.
Recent Niobium Developments for High Strength Steel Energy Applications
NASA Astrophysics Data System (ADS)
Jansto, Steven G.
Niobium-containing high strength steel materials have been developed for oil and gas pipelines, offshore platforms, nuclear plants, boilers, and alternative energy applications. Recent research and the commercialization of alternative energy applications, such as wind-tower structural supports and power-transmission gear components, have delivered enhanced performance. Through the application of these Nb-bearing steels in demanding energy-related applications, designers and end users gain improved toughness at low temperature, excellent fatigue resistance and fracture toughness, and excellent weldability. These enhancements give structural engineers the opportunity to further improve structural design and performance. For example, through the adoption of these Nb-containing structural materials, several design-manufacturing companies are initiating new wind-tower designs that operate at higher energy efficiency and lower cost with improved overall material performance.
NASA Technical Reports Server (NTRS)
Carchedi, C. H.; Gough, T. L.; Huston, H. A.
1983-01-01
The results of a variety of tests designed to demonstrate and evaluate the performance of several commercially available data base management system (DBMS) products compatible with the Digital Equipment Corporation VAX 11/780 computer system are summarized. The tests were performed on the INGRES, ORACLE, and SEED DBMS products, employing applications similar to scientific applications under development by NASA. The objectives of this testing included determining the strengths and weaknesses of the candidate systems, the performance trade-offs of various design alternatives, and the impact of some installation and environmental (computer-related) influences.
NASA Astrophysics Data System (ADS)
Bandara, Sumith V.
2009-11-01
Advancements in III-V semiconductor based, Quantum-well infrared photodetector (QWIP) and Type-II Strained-Layer Superlattice detector (T2SLS) technologies have yielded highly uniform, large-format long-wavelength infrared (LWIR) QWIP FPAs and high quantum efficiency (QE), small-format LWIR T2SLS FPAs. In this article, we analyze the QWIP and T2SLS detector-level performance requirements and readout integrated circuit (ROIC) noise levels for several staring-array long-wavelength infrared (LWIR) imaging applications at various background levels. As a result of lower absorption QE and less-than-unity photoconductive gain, QWIP FPAs are appropriate for high-background tactical applications. However, if the application restricts the integration time, QWIP FPA performance may be limited by the read noise of the ROIC. Rapid progress in T2SLS detector material has already demonstrated LWIR detectors with sufficient performance for tactical applications and potential for strategic applications. However, significant research is needed to suppress surface leakage currents in order to reproduce this performance at the pixel level in T2SLS FPAs.
Kokkos: Enabling manycore performance portability through polymorphic memory access patterns
Carter Edwards, H.; Trott, Christian R.; Sunderland, Daniel
2014-07-22
The manycore revolution can be characterized by increasing thread counts, decreasing memory per thread, and diversity of continually evolving manycore architectures. High performance computing (HPC) applications and libraries must exploit increasingly finer levels of parallelism within their codes to sustain scalability on these devices. We found that a major obstacle to performance portability is the diverse and conflicting set of constraints on memory access patterns across devices. Contemporary portable programming models address manycore parallelism (e.g., OpenMP, OpenACC, OpenCL) but fail to address memory access patterns. The Kokkos C++ library enables applications and domain libraries to achieve performance portability on diverse manycore architectures by unifying abstractions for both fine-grain data parallelism and memory access patterns. In this paper we describe Kokkos' abstractions, summarize its application programmer interface (API), present performance results for unit-test kernels and mini-applications, and outline an incremental strategy for migrating legacy C++ codes to Kokkos. Furthermore, the Kokkos library is under active research and development to incorporate capabilities from new generations of manycore architectures, and to address a growing list of applications and domain libraries.
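To make the abstraction concrete, here is a minimal sketch of a Kokkos data-parallel kernel (illustrative only; the kernel name and sizes are arbitrary). The View hides the device-specific memory layout, and the same lambda runs on whichever execution space the library was configured for:

    #include <Kokkos_Core.hpp>

    int main(int argc, char* argv[]) {
      Kokkos::initialize(argc, argv);
      {
        const int n = 1 << 20;
        // Views abstract the memory access pattern: layout is chosen per
        // device (row-major on CPUs, coalescing-friendly on GPUs).
        Kokkos::View<double*> x("x", n), y("y", n);

        // Fine-grain data parallelism over whatever execution space
        // Kokkos was built for (OpenMP, CUDA, ...).
        Kokkos::parallel_for("axpy", n, KOKKOS_LAMBDA(const int i) {
          y(i) = 2.0 * x(i) + y(i);
        });
        Kokkos::fence();
      }
      Kokkos::finalize();
      return 0;
    }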
Telerobotic system performance measurement - Motivation and methods
NASA Technical Reports Server (NTRS)
Kondraske, George V.; Khoury, George J.
1992-01-01
A systems performance-based strategy for modeling and conducting experiments relevant to the design and performance characterization of telerobotic systems is described. A developmental testbed consisting of a distributed telerobotics network and initial efforts to implement the strategy described is presented. Consideration is given to the general systems performance theory (GSPT) to tackle human performance problems as a basis for: measurement of overall telerobotic system (TRS) performance; task decomposition; development of a generic TRS model; and the characterization of performance of subsystems comprising the generic model. GSPT employs a resource construct to model performance and resource economic principles to govern the interface of systems to tasks. It provides a comprehensive modeling/measurement strategy applicable to complex systems including both human and artificial components. Application is presented within the framework of a distributed telerobotics network as a testbed. Insight into the design of test protocols which elicit application-independent data is described.
Irregular Applications: Architectures & Algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feo, John T.; Villa, Oreste; Tumeo, Antonino
Irregular applications are characterized by irregular data structures and irregular control and communication patterns. Novel irregular high performance applications that deal with large data sets have recently appeared. Unfortunately, current high performance systems and software infrastructures execute irregular algorithms poorly. Only coordinated efforts by end users, domain specialists, and computer scientists that consider both the architecture and the software stack are likely to provide solutions to the challenges of modern irregular applications.
Applications of CFD and visualization techniques
NASA Technical Reports Server (NTRS)
Saunders, James H.; Brown, Susan T.; Crisafulli, Jeffrey J.; Southern, Leslie A.
1992-01-01
In this paper, three applications are presented to illustrate current techniques for flow calculation and visualization. The first two applications use a commercial computational fluid dynamics (CFD) code, FLUENT, performed on a Cray Y-MP. The results are animated with the aid of data visualization software, apE. The third application simulates a particulate deposition pattern using techniques inspired by developments in nonlinear dynamical systems. These computations were performed on personal computers.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 34 2013-07-01 2013-07-01 false What emission testing must I perform... emission testing must I perform for my application for a certificate of conformity? This section describes the emission testing you must perform to show compliance with the emission standards in subpart B of...
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 33 2014-07-01 2014-07-01 false What emission testing must I perform... emission testing must I perform for my application for a certificate of conformity? This section describes the emission testing you must perform to show compliance with the emission standards in subpart B of...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false What emission testing must I perform... emission testing must I perform for my application for a certificate of conformity? This section describes the emission testing you must perform to show compliance with the emission standards in subpart B of...
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 34 2012-07-01 2012-07-01 false What emission testing must I perform... emission testing must I perform for my application for a certificate of conformity? This section describes the emission testing you must perform to show compliance with the emission standards in subpart B of...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 33 2011-07-01 2011-07-01 false What emission testing must I perform... emission testing must I perform for my application for a certificate of conformity? This section describes the emission testing you must perform to show compliance with the emission standards in subpart B of...
Performance Thresholds for Application of MEMS Inertial Sensors in Space
NASA Technical Reports Server (NTRS)
Smit, Geoffrey N.
1995-01-01
We review the types of inertial sensors available, current usage of inertial sensors in space, and the performance requirements for these applications. We then assess the performance available from micro-electro-mechanical systems (MEMS) devices, both in the near and far term. Opportunities for the application of these devices are identified. A key point is that although the performance available from MEMS inertial sensors is significantly lower than that achieved by existing macroscopic devices (at least in the near term), the low cost, small size, and low power of MEMS devices open up a number of applications. In particular, we show that there are substantial benefits to using MEMS devices to provide vibration sensing and, for some missions, attitude sensing. In addition, augmentation of global positioning system (GPS) navigation systems holds much promise.
NASA Astrophysics Data System (ADS)
El Akbar, R. Reza; Anshary, Muhammad Adi Khairul; Hariadi, Dennis
2018-02-01
Model MACP for HE ver.1 describes how to measure and monitor performance in higher education. A review of research related to the model identified several components that warrant further development, so this research has four main objectives. The first is to differentiate the CSF (critical success factor) components of the previous model; the second is to explore the KPIs (key performance indicators) in the previous model; the third, building on the first two, is to design a new and more detailed model. The final objective is to design a prototype application for performance measurement in higher education based on the new model. The methods used are exploratory research and prototype-based application design. The results of this study are, first, a more detailed new model for measurement and monitoring of performance in higher education, obtained through differentiation and exploration of Model MACP for HE ver.1; second, a dictionary of college performance measurement, compiled by re-evaluating the existing indicators; and third, the design of a prototype application for performance measurement in higher education.
Institute for Sustained Performance, Energy, and Resilience (SuPER)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jagode, Heike; Bosilca, George; Danalis, Anthony
The University of Tennessee (UTK) and University of Texas at El Paso (UTEP) partnership supported the three main thrusts of the SUPER project---performance, energy, and resilience. The UTK-UTEP effort thus helped advance the main goal of SUPER, which was to ensure that DOE's computational scientists can successfully exploit the emerging generation of high performance computing (HPC) systems. This goal is being met by providing application scientists with strategies and tools to productively maximize performance, conserve energy, and attain resilience. The primary vehicle through which UTK provided performance measurement support to SUPER and the larger HPC community is the Performance Application Programming Interface (PAPI). PAPI is an ongoing project that provides a consistent interface and methodology for collecting hardware performance information from various hardware and software components, including most major CPUs, GPUs and accelerators, interconnects, I/O systems, and power interfaces, as well as virtual cloud environments. The PAPI software is widely used for performance modeling of scientific and engineering applications---for example, the HOMME (High Order Methods Modeling Environment) climate code, and the GAMESS and NWChem computational chemistry codes---on DOE supercomputers. PAPI is widely deployed as middleware for use by higher-level profiling, tracing, and sampling tools (e.g., CrayPat, HPCToolkit, Scalasca, Score-P, TAU, Vampir, PerfExpert), making it the de facto standard for hardware counter analysis. PAPI has established itself as fundamental software infrastructure in every application domain (spanning academia, government, and industry), where improving performance can be mission critical. Ultimately, as more application scientists migrate their applications to HPC platforms, they will benefit from the extended capabilities this grant brought to PAPI to analyze and optimize performance in these environments, whether they use PAPI directly, or via third-party performance tools. Capabilities added to PAPI through this grant include support for new architectures such as the latest GPU and Xeon Phi accelerators, and advanced power measurement and management features. Another important topic for the UTK team was providing support for a rich ecosystem of different fault management strategies in the context of parallel computing. Our long term efforts have been oriented toward proposing flexible strategies and providing building blocks that application developers can use to build the most efficient fault management technique for their application. These efforts span the entire software spectrum, from theoretical models of existing strategies to easily assess their performance, to algorithmic modifications to take advantage of specific mathematical properties for data redundancy, to extensions to widely used programming paradigms to empower application developers to deal with all types of faults. We have also continued our tight collaborations with users to help them adopt these technologies and to ensure their applications always deliver meaningful scientific data. Large supercomputer systems are becoming more and more power and energy constrained, and future systems and applications running on them will need to be optimized to run under power caps and/or minimize energy consumption. The UTEP team contributed to the SUPER energy thrust by developing power modeling methodologies and investigating power management strategies.
Scalability modeling results showed that some applications can scale better with respect to an increasing power budget than with respect to only the number of processors. Power management, in particular shifting power to processors on the critical path of an application execution, can reduce perturbation due to system noise and other sources of runtime variability, which are growing problems on large-scale power-constrained computer systems.
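For readers unfamiliar with direct PAPI use, a minimal sketch against the C API follows (the preset event shown is one example; availability varies by platform):

    #include <cstdio>
    #include <papi.h>

    int main() {
      // Initialize the library and build an event set.
      if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT) return 1;
      int evset = PAPI_NULL;
      PAPI_create_eventset(&evset);
      PAPI_add_event(evset, PAPI_TOT_CYC);  // total-cycles preset

      long long cycles = 0;
      PAPI_start(evset);
      volatile double s = 0.0;
      for (int i = 0; i < 1000000; ++i) s += i * 0.5;  // region of interest
      PAPI_stop(evset, &cycles);
      std::printf("total cycles: %lld\n", cycles);
      return 0;
    }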
2013 R&D 100 Award: "Miniapps" Bolster High Performance Computing
Belak, Jim; Richards, David
2018-06-12
Two Livermore computer scientists served on a Sandia National Laboratories-led team that developed Mantevo Suite 1.0, the first integrated suite of small software programs, also called "miniapps," to be made available to the high performance computing (HPC) community. These miniapps facilitate the development of new HPC systems and the applications that run on them. Miniapps (miniature applications) serve as stripped-down surrogates for complex, full-scale applications that can require a great deal of time and effort to port to a new HPC system because they often consist of hundreds of thousands of lines of code. A miniapp is a prototype that contains some or all of the essentials of the real application but many fewer lines of code, making it more versatile for experimentation. This allows researchers to more rapidly explore options and optimize system design, greatly improving the chances the full-scale application will perform successfully. Miniapps have become essential tools for exploring complex design spaces because they can reliably predict the performance of full applications.
Web-based application on employee performance assessment using exponential comparison method
NASA Astrophysics Data System (ADS)
Maryana, S.; Kurnia, E.; Ruyani, A.
2017-02-01
Employee performance assessment, also called a performance review, performance evaluation, or employee appraisal, is an effort to assess staff achievement with the aim of increasing the productivity of employees and companies. This application supports the assessment of employee performance using five criteria: presence, quality of work, quantity of work, discipline, and teamwork. The system uses the exponential comparison method with Eckenrode weighting. Calculation results are presented as graphs showing the assessment of each employee. The system was written using the Notepad++ editor with a MySQL database. Testing indicates that the application corresponds to its design and runs properly; the tests conducted were structural tests, functional tests, validation, sensitivity analysis, and SUMI testing.
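For reference, the exponential comparison method is commonly stated as follows (this is the textbook formulation; the abstract itself does not reproduce it): each employee i receives a total score by raising the rating on each criterion to the power of that criterion's weight and summing,

    TN_i = \sum_{j=1}^{m} (RK_{ij})^{TKK_j}

where RK_ij is the rating of employee i on criterion j and TKK_j is the (here, Eckenrode-derived) weight of criterion j.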
Code of Federal Regulations, 2014 CFR
2014-07-01
... grant application package, as described in the NOFA. (b) Renewal application. After receiving an initial... evaluations of renewal applications rely on performance data related to the initial grant, the application and...
7 CFR 1778.31 - Performing development.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 12 2011-01-01 2011-01-01 false Performing development. 1778.31 Section 1778.31... development. (a) Applicable provisions of subpart C of part 1780 of this chapter will be followed in performing development for grants made under this part. (b) After filing an application in accordance with...
12 CFR 345.29 - Effect of CRA performance on applications.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 4 2011-01-01 2011-01-01 false Effect of CRA performance on applications. 345.29 Section 345.29 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION REGULATIONS AND STATEMENTS OF GENERAL POLICY COMMUNITY REINVESTMENT Standards for Assessing Performance § 345.29 Effect of CRA...
40 CFR 60.290 - Applicability and designation of affected facility.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Glass Manufacturing Plants § 60.290 Applicability and designation of affected facility. (a...
40 CFR 60.80 - Applicability and designation of affected facility.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Sulfuric Acid Plants § 60.80 Applicability and designation of affected facility. (a) The...
40 CFR 60.80 - Applicability and designation of affected facility.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Sulfuric Acid Plants § 60.80 Applicability and designation of affected facility. (a) The...
Routing performance analysis and optimization within a massively parallel computer
Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen
2013-04-16
An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.
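A minimal sketch of the selection step (all names hypothetical; the patent does not disclose its data structures): actual performance data is reduced to a pattern label, and an algorithm matching that pattern is looked up from a registry held in memory.

    #include <algorithm>
    #include <functional>
    #include <string>
    #include <unordered_map>
    #include <vector>

    // Hypothetical: reduce raw per-node timings to a coarse pattern label.
    std::string classify(const std::vector<double>& node_times) {
      double mn = node_times[0], mx = node_times[0];
      for (double t : node_times) { mn = std::min(mn, t); mx = std::max(mx, t); }
      return (mx > 1.5 * mn) ? "imbalanced" : "uniform";
    }

    int main() {
      // Registry of routing algorithms keyed by the pattern they address.
      std::unordered_map<std::string, std::function<void()>> algorithms = {
        {"imbalanced", [] { /* e.g., switch to adaptive routing */ }},
        {"uniform",    [] { /* e.g., keep static minimal routing */ }},
      };
      std::vector<double> measured = {1.0, 1.1, 2.4, 1.0};  // actual performance data
      algorithms[classify(measured)]();  // select and apply
      return 0;
    }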
Spiral microstrip hyperthermia applicators: technical design and clinical performance.
Samulski, T V; Fessenden, P; Lee, E R; Kapp, D S; Tanabe, E; McEuen, A
1990-01-01
Spiral microstrip microwave (MW) antennas have been developed and adapted for use as clinical hyperthermia applicators. The design has been configured in a variety of forms including single fixed antenna applicators, multi-element arrays, and mechanically scanned single or paired antennas. The latter three configurations have been used to allow an expansion of the effective heating area. Specific absorption rate (SAR) distributions measured in phantom have been used to estimate the depth and volume of effective heating. The estimates are made using the bioheat equation assuming uniformly perfused tissue. In excess of 500 treatments of patients with advanced or recurrent localized superficial tumors have been performed using this applicator technology. Data from clinical treatments have been analyzed to quantify the heating performance and verify the suitability of these applicators for clinical use. Good microwave coupling efficiency, together with the compact applicator size, has proved to be a valuable clinical asset.
Performance Evaluation in Network-Based Parallel Computing
NASA Technical Reports Server (NTRS)
Dezhgosha, Kamyar
1996-01-01
Network-based parallel computing is emerging as a cost-effective alternative for solving many problems which require use of supercomputers or massively parallel computers. The primary objective of this project has been to conduct experimental research on performance evaluation for clustered parallel computing. First, a testbed was established by augmenting our existing network of Sun SPARCs with PVM (Parallel Virtual Machine), a software system for linking clusters of machines. Second, a set of three basic applications was selected, consisting of a parallel search, a parallel sort, and a parallel matrix multiplication. These application programs were implemented in the C programming language under PVM. Third, we conducted performance evaluation under various configurations and problem sizes. Alternative parallel computing models and workload allocations for application programs were explored. The performance metric was limited to elapsed time or response time, which in the context of parallel computing can be expressed in terms of speedup. The results reveal that the overhead of communication latency between processes is in many cases the restricting factor to performance. That is, coarse-grain parallelism, which requires less frequent communication between processes, will result in higher performance in network-based computing. Finally, we are in the final stages of installing an Asynchronous Transfer Mode (ATM) switch and four ATM interfaces (each 155 Mbps), which will allow us to extend our study to newer applications, performance metrics, and configurations.
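For reference, the speedup metric used here has the standard form, and the observed sensitivity to communication latency can be made explicit by splitting elapsed parallel time into compute and communication terms (a simplification; the study itself reports measured times):

    S(p) = \frac{T_1}{T_p}, \qquad T_p \approx \frac{T_1}{p} + T_{\mathrm{comm}}(p)

Coarse-grain decompositions shrink T_comm(p) relative to useful work, which is why they performed better on the testbed.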
Large-screen display technology assessment for military applications
NASA Astrophysics Data System (ADS)
Blaha, Richard J.
1990-08-01
Full-color, large screen display systems can enhance military applications that require group presentation, coordinated decisions, or interaction between decision makers. The technology already plays an important role in operations centers, simulation facilities, conference rooms, and training centers. Some applications display situational, status, or briefing information, while others portray instructional material for procedural training or depict realistic panoramic scenes that are used in simulators. While each specific application requires unique values of luminance, resolution, response time, reliability, and the video interface, suitable performance can be achieved with available commercial large screen displays. Advances in the technology of large screen displays are driven by the commercial applications because the military applications do not provide the significant market share enjoyed by high definition television (HDTV), entertainment, advertisement, training, and industrial applications. This paper reviews the status of full-color, large screen display technologies and includes the performance and cost metrics of available systems. For this discussion, performance data is based upon either measurements made by our personnel or extractions from vendors' data sheets.
Exploring Cloud Computing for Large-scale Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Guang; Han, Binh; Yin, Jian
This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on-demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often just require a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agelastos, Anthony; Allan, Benjamin; Brandt, Jim
A detailed understanding of HPC applications’ resource needs and their complex interactions with each other and HPC platform resources are critical to achieving scalability and performance. Such understanding has been difficult to achieve because typical application profiling tools do not capture the behaviors of codes under the potentially wide spectrum of actual production conditions and because typical monitoring tools do not capture system resource usage information with high enough fidelity to gain sufficient insight into application performance and demands. In this paper we present both system and application profiling results based on data obtained through synchronized system wide monitoring on a production HPC cluster at Sandia National Laboratories (SNL). We demonstrate analytic and visualization techniques that we are using to characterize application and system resource usage under production conditions for better understanding of application resource needs. Furthermore, our goals are to improve application performance (through understanding application-to-resource mapping and system throughput) and to ensure that future system capabilities match their intended workloads.
40 CFR 60.70 - Applicability and designation of affected facility.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Nitric Acid Plants § 60.70 Applicability and designation of affected facility. (a) The provisions...
40 CFR 60.70 - Applicability and designation of affected facility.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Nitric Acid Plants § 60.70 Applicability and designation of affected facility. (a) The provisions...
Improvement of the Performance of a Turbo-Ramjet Engine for UAV and Missile Applications
2003-12-01
Improvement of the Performance of a Turbo-Ramjet Engine for UAV and Missile Applications. Author: Dimitrios Krikellas. Performing organization: Naval Postgraduate School, Monterey, CA 93943-5000.
Calibration Modeling Methodology to Optimize Performance for Low Range Applications
NASA Technical Reports Server (NTRS)
McCollum, Raymond A.; Commo, Sean A.; Parker, Peter A.
2010-01-01
Calibration is a vital process in characterizing the performance of an instrument in an application environment and seeks to obtain acceptable accuracy over the entire design range. Often, project requirements specify a maximum total measurement uncertainty, expressed as a percent of full-scale. However in some applications, we seek to obtain enhanced performance at the low range, therefore expressing the accuracy as a percent of reading should be considered as a modeling strategy. For example, it is common to desire to use a force balance in multiple facilities or regimes, often well below its designed full-scale capacity. This paper presents a general statistical methodology for optimizing calibration mathematical models based on a percent of reading accuracy requirement, which has broad application in all types of transducer applications where low range performance is required. A case study illustrates the proposed methodology for the Mars Entry Atmospheric Data System that employs seven strain-gage based pressure transducers mounted on the heatshield of the Mars Science Laboratory mission.
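One standard way to encode a percent-of-reading requirement in a calibration fit, sketched here for orientation (the paper's methodology is more general), is to weight each residual by the inverse of the reading so that low-range errors are penalized as heavily as full-scale ones:

    \hat{\beta} = \arg\min_{\beta} \sum_{i=1}^{n} \left( \frac{y_i - f(x_i; \beta)}{y_i} \right)^2

Here y_i are the observed readings, x_i the applied inputs, and f the calibration model with parameters beta.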
Management of ATM-based networks supporting multimedia medical information systems
NASA Astrophysics Data System (ADS)
Whitman, Robert A.; Blaine, G. James; Fritz, Kevin; Goodgold, Ken; Heisinger, Patrick
1997-05-01
Medical information systems are acquiring the ability to collect and deliver many different types of medical information. In support of the increased network demands necessitated by these expanded capabilities, asynchronous transfer mode (ATM) based networks are being deployed in medical care systems. While ATM supplies a much greater line rate than currently deployed networks, the management and standards surrounding ATM are yet to mature. This paper explores the management and control issues surrounding an ATM network supporting medical information systems, and examines how management impacts network performance and robustness. A multivendor ATM network at the BJC Health System/Washington University and the applications using the network are discussed. Performance information for specific applications is presented and analyzed. Network management's influence on application reliability is outlined. The information collected is used to show how ATM network standards and management tools influence network reliability and performance. Performance of current applications using the ATM network is discussed. Special attention is given to issues encountered in implementation of hypertext transfer protocol over ATM internet protocol (IP) communications. A classical IP ATM implementation yields greater than twenty percent higher network performance than LANE. Maximum performance for a host's suite of applications can be obtained by establishing multiple individually engineered IP links through its ATM network connection.
DOE Centers of Excellence Performance Portability Meeting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neely, J. R.
2016-04-21
Performance portability is a phrase often used, but not well understood. The DOE is deploying systems at all of the major facilities across ASCR and ASC that are forcing application developers to confront head-on the challenges of running applications across these diverse systems. With GPU-based systems at the OLCF and LLNL, and Phi-based systems landing at NERSC, ACES (LANL/SNL), and the ALCF – the issue of performance portability is confronting the DOE mission like never before. A new best practice in the DOE is to include “Centers of Excellence” with each major procurement, with a goal of focusing efforts on preparing key applications to be ready for the systems coming to each site, and engaging the vendors directly in a “shared fate” approach to ensuring success. While each COE is necessarily focused on a particular deployment, applications almost invariably must be able to run effectively across the entire DOE HPC ecosystem. This tension between optimizing performance for a particular platform, while still being able to run with acceptable performance wherever the resources are available, is the crux of the challenge we call “performance portability”. This meeting was an opportunity to bring application developers, software providers, and vendors together to discuss this challenge and begin to chart a path forward.
Kim, Hyun Nam; Lee, Ju Hyuk; Park, Han Beom; Kim, Hyun Jin; Cho, Sung Oh
2018-01-01
We designed and fabricated a surface applicator of a novel carbon nanotube (CNT)-based miniature X-ray tube for use in superficial electronic brachytherapy of skin cancer. To investigate the effectiveness of the surface applicator, its performance was numerically and experimentally analyzed. The surface applicator consists of a graphite flattening filter and an X-ray shield. A Monte Carlo radiation transport code, MCNP6, was used to optimize the geometries of both the flattening filter and the shield so that X-rays are generated uniformly over the desired region. The performance of the graphite filter was compared with that of conventional aluminum (Al) filters of different geometries using the numerical simulations. After fabricating a surface applicator, the X-ray spatial distribution was measured to evaluate the performance of the applicator. The graphite filter shows better spatial dose uniformity and less dose distortion than Al filters. Moreover, graphite allows easy fabrication of the flattening filter due to its low X-ray attenuation property, which is particularly important for low-energy electronic brachytherapy. The applicator also shows that no further X-ray shielding is required for the application because unwanted X-rays are completely blocked. As a result, highly uniform X-ray dose distribution was achieved from the miniature X-ray tube mounted with the surface applicators. The measured values of both flatness and symmetry were less than 5% and the measured penumbra values were less than 1 mm. All these values satisfy the currently accepted tolerance criteria for radiation therapy. The surface applicator exhibits sufficient performance capability for application in electronic brachytherapy of skin cancers. © 2017 American Association of Physicists in Medicine.
Creep Performance of Oxide Ceramic Fiber Materials at Elevated Temperature in Air and in Steam
2011-03-24
Engineered materials are finding more and more applications in space, aeronautics, energy, automotive, and other industries. In particular, materials engineered for performance in harsh environments are prime candidates for such applications. Oxide ceramic materials have been used as constituents in CMCs...
30 CFR 7.83 - Application requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) Performance specifications of turbocharger, if applicable. (c) The application shall include dimensional...) Injector nozzle; (9) Injection fuel pump; (10) Governor; (11) Turbocharger, if applicable; (12) Aftercooler...
30 CFR 7.83 - Application requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) Performance specifications of turbocharger, if applicable. (c) The application shall include dimensional...) Injector nozzle; (9) Injection fuel pump; (10) Governor; (11) Turbocharger, if applicable; (12) Aftercooler...
30 CFR 7.83 - Application requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) Performance specifications of turbocharger, if applicable. (c) The application shall include dimensional...) Injector nozzle; (9) Injection fuel pump; (10) Governor; (11) Turbocharger, if applicable; (12) Aftercooler...
30 CFR 7.83 - Application requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) Performance specifications of turbocharger, if applicable. (c) The application shall include dimensional...) Injector nozzle; (9) Injection fuel pump; (10) Governor; (11) Turbocharger, if applicable; (12) Aftercooler...
Introduction of the UNIX International Performance Management Work Group
NASA Technical Reports Server (NTRS)
Newman, Henry
1993-01-01
In this paper we present the planned direction of the UNIX International Performance Management Work Group. This group consists of concerned system developers and users who have organized to synthesize recommendations for standard UNIX performance management subsystem interfaces and architectures. The purpose of these recommendations is to provide a core set of performance management functions that can be used to build tools by hardware system developers, vertical application software developers, and performance application software developers.
EPDM - Silicone blends - a high performance elastomeric composition for automotive applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, J.M.
1987-01-01
Styling and design changes have dramatically altered performance requirements for elastomers. High performance engines with electronic fuel injection have increased temperatures under the hood. Therefore, high performance elastomers are required to meet today's service conditions. New technology has been developed to compatibilize EPDM and silicone into high performance elastomeric compositions. These blends have the physical, electrical, and mechanical properties needed for 175 °C service. Formulations are discussed for applications which require heat and weather resistance.
Multicore Architecture-aware Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Srinivasa, Avinash
Modern high performance systems are becoming increasingly complex and powerful due to advancements in processor and memory architecture. In order to keep up with this increasing complexity, applications have to be augmented with certain capabilities to fully exploit such systems. These may be at the application level, such as static or dynamic adaptations, or at the system level, like having strategies in place to override some of the default operating system policies, the main objective being to improve the computational performance of the application. The current work proposes two such capabilities with respect to multi-threaded scientific applications, in particular a large-scale physics application computing ab-initio nuclear structure. The first involves using a middleware tool to invoke dynamic adaptations in the application, so as to be able to adjust to changing computational resource availability at run time. The second involves a strategy for effective placement of data in main memory, to optimize memory access latencies and bandwidth. These capabilities, when included, were found to have a significant impact on application performance, resulting in average speedups of as much as two to four times.
Machine Learning Based Online Performance Prediction for Runtime Parallelization and Task Scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, J; Ma, X; Singh, K
2008-10-09
With the emerging many-core paradigm, parallel programming must extend beyond its traditional realm of scientific applications. Converting existing sequential applications as well as developing next-generation software requires assistance from hardware, compilers and runtime systems to exploit parallelism transparently within applications. These systems must decompose applications into tasks that can be executed in parallel and then schedule those tasks to minimize load imbalance. However, many systems lack a priori knowledge about the execution time of all tasks to perform effective load balancing with low scheduling overhead. In this paper, we approach this fundamental problem using machine learning techniques first to generate performance models for all tasks and then applying those models to perform automatic performance prediction across program executions. We also extend an existing scheduling algorithm to use generated task cost estimates for online task partitioning and scheduling. We implement the above techniques in the pR framework, which transparently parallelizes scripts in the popular R language, and evaluate their performance and overhead with both a real-world application and a large number of synthetic representative test scripts. Our experimental results show that our proposed approach significantly improves task partitioning and scheduling, with maximum improvements of 21.8%, 40.3% and 22.1% and average improvements of 15.9%, 16.9% and 4.2% for LMM (a real R application) and synthetic test cases with independent and dependent tasks, respectively.
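A compact sketch of the overall idea (hypothetical names; the paper's models and scheduler are richer than this): predict each task's cost with a learned linear model over task features, then assign tasks greedily, largest predicted cost first, to the least-loaded worker.

    #include <algorithm>
    #include <functional>
    #include <queue>
    #include <utility>
    #include <vector>

    struct Task { std::vector<double> features; };

    // Hypothetical learned model: cost = w . features, fit offline from
    // profiled executions (the training phase described above).
    double predict_cost(const std::vector<double>& w, const Task& t) {
      double c = 0.0;
      for (size_t i = 0; i < w.size(); ++i) c += w[i] * t.features[i];
      return c;
    }

    // Greedy largest-cost-first assignment onto k workers; a stand-in
    // for the extended online scheduler.
    std::vector<int> schedule(const std::vector<Task>& tasks,
                              const std::vector<double>& w, int k) {
      std::vector<std::pair<double, int>> costs;
      for (int i = 0; i < (int)tasks.size(); ++i)
        costs.push_back({predict_cost(w, tasks[i]), i});
      std::sort(costs.rbegin(), costs.rend());  // largest predicted cost first
      // Min-heap of (current load, worker id).
      std::priority_queue<std::pair<double, int>,
                          std::vector<std::pair<double, int>>,
                          std::greater<>> load;
      for (int j = 0; j < k; ++j) load.push({0.0, j});
      std::vector<int> assign(tasks.size());
      for (auto& [c, i] : costs) {
        auto [l, j] = load.top(); load.pop();
        assign[i] = j;               // place task on least-loaded worker
        load.push({l + c, j});
      }
      return assign;
    }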
End-to-end performance measurement of Internet based medical applications.
Dev, P; Harris, D; Gutierrez, D; Shah, A; Senger, S
2002-01-01
We present a method to obtain an end-to-end characterization of the performance of an application over a network. This method is not dependent on any specific application or type of network. The method requires characterization of network parameters, such as latency and packet loss, between the expected server or client endpoints, as well as characterization of the application's constraints on these parameters. A subjective metric is presented that integrates these characterizations and that operates over a wide range of applications and networks. We believe that this method may be of wide applicability as research and educational applications increasingly make use of computation and data servers that are distributed over the Internet.
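As a toy illustration only, entirely hypothetical and not the paper's metric, one way to integrate the two characterizations is to score each measured network parameter against the application's stated tolerance and let the worst-satisfied constraint dominate:

    #include <algorithm>

    // Hypothetical score: 1.0 = fully usable, 0.0 = unusable.
    double score(double measured, double tolerable, double unusable) {
      if (measured <= tolerable) return 1.0;
      if (measured >= unusable) return 0.0;
      return (unusable - measured) / (unusable - tolerable);  // linear falloff
    }

    // Combine latency and loss; the application is only as usable as
    // its worst-satisfied constraint. Thresholds are example values.
    double subjective_metric(double latency_ms, double loss_pct) {
      double s_lat  = score(latency_ms, 50.0, 400.0);
      double s_loss = score(loss_pct, 0.1, 5.0);
      return std::min(s_lat, s_loss);
    }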
ASIC For Complex Fixed-Point Arithmetic
NASA Technical Reports Server (NTRS)
Petilli, Stephen G.; Grimm, Michael J.; Olson, Erlend M.
1995-01-01
Application-specific integrated circuit (ASIC) performs 24-bit, fixed-point arithmetic operations on arrays of complex-valued input data. High-performance, wide-band arithmetic logic unit (ALU) designed for use in computing fast Fourier transforms (FFTs) and for performing digital filtering functions. Other applications include general computations involved in analysis of spectra and digital signal processing.
DOT National Transportation Integrated Search
2017-09-01
A number of Connected and/or Automated Vehicle (CAV) applications have recently been designed to improve the performance of our transportation system. Safety, mobility and environmental sustainability are three cornerstone performance metrics when ev...
Boysen-Osborn, Megan; Yanuck, Justin; Mattson, James; Toohey, Shannon; Wray, Alisa; Wiechmann, Warren; Lahham, Shadi; Langdorf, Mark I
2017-01-01
The Medical Student Performance Evaluation (MSPE) appendices provide a program director with comparative performance for a student's academic and professional attributes, but they are frequently absent or incomplete. We reviewed MSPEs from applicants to our emergency medicine residency program from 134 of 136 (99%) U.S. allopathic medical schools, over two application cycles (2012-13, 2014-15). We determined the degree of compliance with each of the five recommended MSPE appendices. Only three (2%) medical schools were compliant with all five appendices. The medical school information page (MSIP, appendix E) was present most commonly (85%), followed by comparative clerkship performance (appendix B, 82%), overall performance (appendix D, 59%), preclinical performance (appendix A, 57%), and professional attributes (appendix C, 18%). Few schools (7%) provided student-specific, comparative professionalism assessments. Medical schools inconsistently provide graphic, comparative data for their students in the MSPE. Although program directors (PD) value evidence of an applicant's professionalism when selecting residents, medical schools rarely provide such useful, comparative professionalism data in their MSPEs. As PDs seek to evaluate applicants based on academic performance and professionalism, rather than standardized testing alone, medical schools must make MSPEs more consistent, objective, and comparative.
Natt, Neena; Chang, Alice Y; Berbari, Elie F; Kennel, Kurt A; Kearns, Ann E
2016-01-01
To determine which residency characteristics are associated with performance during endocrinology fellowship training as measured by competency-based faculty evaluation scores and faculty global ratings of trainee performance. We performed a retrospective review of interview applications from endocrinology fellows who graduated from a single academic institution between 2006 and 2013. Performance measures included competency-based faculty evaluation scores and faculty global ratings. The association between applicant characteristics and measures of performance during fellowship was examined by linear regression. The presence of a laudatory comparative statement in the residency program director's letter of recommendation (LoR) or experience as a chief resident was significantly associated with competency-based faculty evaluation scores (β = 0.22, P = .001; and β = 0.24, P = .009, respectively) and faculty global ratings (β = 0.85, P = .006; and β = 0.96, P = .015, respectively). The presence of a laudatory comparative statement in the residency program director's LoR or experience as a chief resident were significantly associated with overall performance during subspecialty fellowship training. Future studies are needed in other cohorts to determine the broader implications of these findings in the application and selection process.
2010-09-01
Application of existing assessment tools that may be applicable to Marine Air Ground Task Force (MAGTF) Command, Control, Communications and Computers (C4). The report surveys assessment tools and analysis concepts that may be extended to the Marine Corps' C4 System of Systems assessment methodology as a means to obtain a...
DOE Office of Scientific and Technical Information (OSTI.GOV)
The Profile Interface Generator (PIG) is a tool for loosely coupling applications and performance tools. It enables applications to write code that looks like standard C and Fortran function calls, without requiring that applications link to specific implementations of those function calls. Performance tools can register with PIG in order to listen to only the calls that give information they care about. This interface reduces the build and configuration burden on application developers and allows semantic instrumentation to live in production codes without interfering with production runs.
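A minimal sketch of this loose-coupling pattern (hypothetical names, not the actual PIG API): the application calls plain functions, tools opt in by registering listeners, and the call is effectively a no-op when no tool is attached.

    #include <functional>
    #include <string>
    #include <utility>
    #include <vector>

    // Hypothetical registry in the spirit of PIG: tools register for the
    // events they care about; applications never link against a tool.
    namespace pig {
      using Listener = std::function<void(const std::string&)>;
      inline std::vector<Listener>& listeners() {
        static std::vector<Listener> ls;
        return ls;
      }
      inline void register_listener(Listener l) {
        listeners().push_back(std::move(l));
      }
      // Looks like an ordinary function call at the application site.
      inline void annotate(const std::string& event) {
        for (auto& l : listeners()) l(event);  // no-op if no tool registered
      }
    }

    int main() {
      // A performance tool opts in at startup...
      pig::register_listener([](const std::string& e) { /* record e */ });
      // ...while application code stays tool-agnostic:
      pig::annotate("timestep.begin");
      pig::annotate("timestep.end");
      return 0;
    }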
Toward a Model-Based Predictive Controller Design in Brain–Computer Interfaces
Kamrunnahar, M.; Dias, N. S.; Schiff, S. J.
2013-01-01
A first step in designing a robust and optimal model-based predictive controller (MPC) for brain–computer interface (BCI) applications is presented in this article. An MPC has the potential to achieve improved BCI performance compared to the performance achieved by current ad hoc, nonmodel-based filter applications. The parameters in designing the controller were extracted as model-based features from motor imagery task-related human scalp electroencephalography. Although the parameters can be generated from any model, linear or non-linear, we here adopted a simple autoregressive model that has well-established applications in BCI task discriminations. It was shown that the parameters generated for the controller design can as well be used for motor imagery task discriminations with performance (with 8–23% task discrimination errors) comparable to the discrimination performance of the commonly used features such as frequency specific band powers and the AR model parameters directly used. An optimal MPC has significant implications for high performance BCI applications. PMID:21267657
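For reference, the order-p autoregressive model underlying the extracted features has the standard form

    x_t = \sum_{i=1}^{p} a_i \, x_{t-i} + e_t

where x_t is the EEG sample at time t, e_t is white noise, and the fitted coefficients a_i (and functions of them) serve as the model-based features supplied to the controller design.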
Vetter, Jeffrey S.
2005-02-01
The method and system described herein present a technique for performance analysis that helps users understand the communication behavior of their message-passing applications. The method and system may automatically classify individual communication operations and reveal the cause of communication inefficiencies in the application. This classification allows the developer to quickly focus on the culprits of truly inefficient behavior, rather than manually foraging through massive amounts of performance data. Specifically, the method and system trace the message operations of Message Passing Interface (MPI) applications and then classify each individual communication event using a supervised learning technique: decision tree classification. The decision tree may be trained using microbenchmarks that demonstrate both efficient and inefficient communication. Since the method and system adapt to the target system's configuration through these microbenchmarks, they simultaneously automate the performance analysis process and improve classification accuracy. The method and system may improve the accuracy of performance analysis and dramatically reduce the amount of data that users must encounter.
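Tracing of individual message operations of this kind is conventionally done through MPI's PMPI profiling layer; the following minimal sketch (illustrative, not the patented system) times every MPI_Send and hands the measurement to a classification hook:

    #include <cstdio>
    #include <mpi.h>

    // Stand-in for the decision-tree classification step.
    void classify_event(const char* op, int bytes, double seconds) {
      std::printf("%s %d bytes %.6f s\n", op, bytes, seconds);
    }

    // Interpose on MPI_Send: applications link against this unchanged,
    // and the real work is done by PMPI_Send.
    extern "C" int MPI_Send(const void* buf, int count, MPI_Datatype type,
                            int dest, int tag, MPI_Comm comm) {
      int size = 0;
      MPI_Type_size(type, &size);
      double t0 = MPI_Wtime();
      int rc = PMPI_Send(buf, count, type, dest, tag, comm);
      classify_event("MPI_Send", count * size, MPI_Wtime() - t0);
      return rc;
    }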
Symbiotic Sensing for Energy-Intensive Tasks in Large-Scale Mobile Sensing Applications.
Le, Duc V; Nguyen, Thuong; Scholten, Hans; Havinga, Paul J M
2017-11-29
Energy consumption is a critical performance and user experience metric when developing mobile sensing applications, especially with the significantly growing number of sensing applications in recent years. As proposed a decade ago when mobile applications were still not popular and most mobile operating systems were single-tasking, conventional sensing paradigms such as opportunistic sensing and participatory sensing do not explore the relationship among concurrent applications for energy-intensive tasks. In this paper, inspired by social relationships among living creatures in nature, we propose a symbiotic sensing paradigm that can conserve energy, while maintaining equivalent performance to existing paradigms. The key idea is that sensing applications should cooperatively perform common tasks to avoid acquiring the same resources multiple times. By doing so, this sensing paradigm executes sensing tasks with very little extra resource consumption and, consequently, extends battery life. To evaluate and compare the symbiotic sensing paradigm with the existing ones, we develop mathematical models in terms of the completion probability and estimated energy consumption. The quantitative evaluation results using various parameters obtained from real datasets indicate that symbiotic sensing performs better than opportunistic sensing and participatory sensing in large-scale sensing applications, such as road condition monitoring, air pollution monitoring, and city noise monitoring.
NASA Astrophysics Data System (ADS)
Indrayana, I. N. E.; P, N. M. Wirasyanti D.; Sudiartha, I. KG
2018-01-01
Mobile applications allow many users to access data without being limited by place and time. Over time, the data population of an application grows, and data access time becomes a problem once tables reach tens of thousands to millions of records. The objective of this research is to maintain query execution performance for large numbers of records. One effort to maintain access-time performance is to apply query optimization; the method used in this research is heuristic query optimization. The application built is a mobile-based financial application using a MySQL database with stored procedures. Because the application serves more than one business entity in a single database, rapid data growth is expected. The stored procedures contain queries optimized using the heuristic method; optimization is performed on SELECT queries that involve more than one table with multiple clauses. Evaluation is done by comparing the average access time of optimized and unoptimized queries, and access time is also measured as the data population in the database increases. The evaluation shows that execution with heuristic query optimization is consistently faster than execution without optimization.
I/O Performance Characterization of Lustre and NASA Applications on Pleiades
NASA Technical Reports Server (NTRS)
Saini, Subhash; Rappleye, Jason; Chang, Johnny; Barker, David Peter; Biswas, Rupak; Mehrotra, Piyush
2012-01-01
In this paper we study the performance of the Lustre file system using five scientific and engineering applications representative of the NASA workload on large-scale supercomputing systems such as NASA's Pleiades. In order to facilitate the collection of Lustre performance metrics, we have developed a software tool that exports a wide variety of client- and server-side metrics using SGI's Performance Co-Pilot (PCP), and generates a human-readable report on key metrics at the end of a batch job. These performance metrics are (a) amount of data read and written, (b) number of files opened and closed, and (c) remote procedure call (RPC) size distribution (4 KB to 1024 KB, in powers of 2) for I/O operations. The RPC size distribution measures the efficiency of the Lustre client and can pinpoint problems such as small write sizes, disk fragmentation, etc. These extracted statistics are useful in determining the I/O pattern of the application and can assist in identifying possible improvements for users' applications. Information on the number of file operations enables a scientist to optimize the I/O performance of their applications. The amount of I/O data helps users choose the optimal stripe size and stripe count to enhance I/O performance. In this paper, we demonstrate the usefulness of this tool on Pleiades for five production-quality NASA scientific and engineering applications. We compare the latency of read and write operations under Lustre to that with NFS by tracing system calls and signals. We also investigate the read and write policies and study the effect of page cache size on I/O operations. We examine the performance impact of Lustre stripe size and stripe count, along with a performance evaluation of file-per-process and single-shared-file access by all the processes, for a NASA workload using the parameterized IOR benchmark.
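The RPC size distribution reported above is a power-of-two bucketing; a minimal sketch of the tallying follows (illustrative only; the actual tool reads Lustre client counters exported through PCP rather than instrumenting calls):

    #include <cstdio>
    #include <map>

    // Buckets 4 KB .. 1024 KB in powers of two, as in the report.
    std::map<int, long> rpc_histogram;  // bucket size in KB -> count

    void record_rpc(long bytes) {
      long kb = bytes / 1024;
      int bucket = 4;
      while (bucket < 1024 && bucket < kb) bucket *= 2;  // round up
      ++rpc_histogram[bucket];
    }

    int main() {
      record_rpc(16 * 1024);   // lands in the 16 KB bucket
      record_rpc(300 * 1024);  // rounds up to the 512 KB bucket
      for (auto& [kb, n] : rpc_histogram)
        std::printf("%4d KB: %ld\n", kb, n);
      return 0;
    }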
Ntofon, Okung-Dike; Channegowda, Mayur P; Efstathiou, Nikolaos; Rashidi Fard, Mehdi; Nejabati, Reza; Hunter, David K; Simeonidou, Dimitra
2013-02-25
In this paper, a novel Software-Defined Networking (SDN) architecture is proposed for high-end Ultra High Definition (UHD) media applications. UHD media applications require huge amounts of bandwidth that can only be met with high-capacity optical networks. In addition, there are requirements for control frameworks capable of delivering effective application performance with efficient network utilization. A novel SDN-based Controller that tightly integrates application-awareness with network control and management is proposed for such applications. An OpenFlow-enabled test-bed demonstrator is reported with performance evaluations of advanced online and offline media- and network-aware schedulers.
ERIC Educational Resources Information Center
Richardson, J. Jeffrey
This paper is part of an Air Force planning effort to develop a research, development, and applications program for the use of artificial intelligence (AI) technology in three target areas: training, performance measurement, and job performance aiding. The paper is organized in five sections that (1) introduce the reader to AI and those subfields…
Performance Analysis Tool for HPC and Big Data Applications on Scientific Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Wucherl; Koo, Michelle; Cao, Yu
Big data is prevalent in HPC computing. Many HPC projects rely on complex workflows to analyze terabytes or petabytes of data. These workflows often require running over thousands of CPU cores and performing simultaneous data accesses, data movements, and computation. It is challenging to analyze the performance of complex workflows over a large number of nodes and multiple parallel task executions when terabytes or petabytes of workflow data or execution measurements are involved. To help identify performance bottlenecks or debug performance issues in large-scale scientific applications and scientific clusters, we have developed a performance analysis framework using state-of-the-art open-source big data processing tools. Our tool can ingest system logs and application performance measurements to extract key performance features, and apply sophisticated statistical tools and data mining methods to the performance data. It utilizes an efficient data processing engine to allow users to interactively analyze large amounts of different types of logs and measurements. To illustrate the functionality of the big data analysis framework, we conduct case studies on the workflows from an astronomy project known as the Palomar Transient Factory (PTF) and the job logs from a genome analysis scientific cluster. Our study processed many terabytes of system logs and application performance measurements collected on the HPC systems at NERSC. The implementation of our tool is generic enough to be used for analyzing the performance of other HPC systems and big data workflows.
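As an illustration of the feature-extraction stage such a framework performs, here is a hedged sketch using pandas; the log schema (job_id, host, op, bytes, latency_ms) and the file name are hypothetical, not the framework's actual format.

```python
import pandas as pd

# Hypothetical log schema: one row per I/O event with columns
# job_id, host, op ("read"/"write"), bytes, latency_ms.
events = pd.read_csv("io_events.csv")

# Aggregate per-job performance features of the kind such a
# framework might feed into statistical/data-mining stages.
features = events.groupby("job_id").agg(
    total_bytes=("bytes", "sum"),
    mean_latency_ms=("latency_ms", "mean"),
    p99_latency_ms=("latency_ms", lambda s: s.quantile(0.99)),
    read_fraction=("op", lambda s: (s == "read").mean()),
)

# Jobs with the worst tail latency are candidates for bottleneck analysis.
print(features.sort_values("p99_latency_ms", ascending=False).head())
```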
Information processing of earth resources data
NASA Technical Reports Server (NTRS)
Zobrist, A. L.; Bryant, N. A.
1982-01-01
Current trends in the use of remotely sensed data include integration of multiple data sources of various formats and use of complex models. These trends have placed a strain on information processing systems because an enormous number of capabilities are needed to perform a single application. A solution to this problem is to create a general set of capabilities which can perform a wide variety of applications. General capabilities for the Image-Based Information System (IBIS) are outlined in this report. They are then cross-referenced for a set of applications performed at JPL.
Performance analysis of medical video streaming over mobile WiMAX.
Alinejad, Ali; Philip, N; Istepanian, R H
2010-01-01
Wireless medical ultrasound streaming is considered one of the emerging applications within the broadband mobile healthcare domain. These applications are bandwidth-demanding services that require high data rates with acceptable diagnostic quality of the transmitted medical images. In this paper, we present a performance analysis of medical ultrasound video streaming acquired via a special robotic ultrasonography system over an emulated WiMAX wireless network. The experimental set-up of this application is described together with the performance of the relevant medical quality of service (m-QoS) metrics.
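For readers unfamiliar with such metrics, a minimal sketch of two common ones (one-way delay and RFC 3550-style interarrival jitter) computed from matched timestamps; this is illustrative and not the authors' measurement code.

```python
def mqos_metrics(send_ts, recv_ts):
    """Compute mean one-way delay and RFC 3550-style interarrival
    jitter from matched send/receive timestamps (seconds).
    Illustrative only; real m-QoS evaluation also covers loss and
    diagnostic image quality."""
    delays = [r - s for s, r in zip(send_ts, recv_ts)]
    jitter = 0.0
    for i in range(1, len(delays)):
        # Exponential smoothing with gain 1/16, as in RFC 3550.
        jitter += (abs(delays[i] - delays[i - 1]) - jitter) / 16.0
    return sum(delays) / len(delays), jitter

mean_delay, jitter = mqos_metrics([0.00, 0.02, 0.04], [0.05, 0.08, 0.09])
print(f"mean delay {mean_delay * 1e3:.1f} ms, jitter {jitter * 1e3:.2f} ms")
```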
Determination of Duty Cycle for Energy Storage Systems in a Renewables (Solar) Firming Application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schoenwald, David A.; Ellison, James
2016-04-01
This report supplements the document, “Protocol for Uniformly Measuring and Expressing the Performance of Energy Storage Systems,” issued in a revised version in April 2016, which will include the renewables (solar) firming application for an energy storage system (ESS). This report provides the background and documentation associated with the determination of a duty cycle for an ESS operated in a renewables (solar) firming application for the purpose of measuring and expressing ESS performance in accordance with the ESS performance protocol.
Performance Analysis of Multilevel Parallel Applications on Shared Memory Architectures
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Jin, Haoqiang; Labarta, Jesus; Gimenez, Judit; Caubet, Jordi; Biegel, Bryan A. (Technical Monitor)
2002-01-01
In this paper we describe how to apply powerful performance analysis techniques to understand the behavior of multilevel parallel applications. We use the Paraver/OMPItrace performance analysis system for our study. This system consists of two major components: the OMPItrace dynamic instrumentation mechanism, which allows the tracing of processes and threads, and the Paraver graphical user interface for inspection and analysis of the generated traces. We describe how to use the system to conduct a detailed comparative study of a benchmark code implemented in five different programming paradigms applicable to shared memory architectures.
NASA Technical Reports Server (NTRS)
Koenig, D. G.; Stoll, F.; Aoyagi, K.
1981-01-01
The status of ejector development in terms of application to V/STOL aircraft is reported in three categories: aircraft systems and ejector concepts; ejector performance including prediction techniques and experimental data base available; and, integration of the ejector with complete aircraft configurations. Available prediction techniques are reviewed and performance of three ejector designs with vertical lift capability is summarized. Applications of the 'fuselage' and 'short diffuser' ejectors to fighter aircraft are related to current and planned research programs. Recommendations are listed for effort needed to evaluate installed performance.
NASA Technical Reports Server (NTRS)
1974-01-01
The optimization of a thematic mapper for earth resources application is discussed in terms of cost versus performance. Performance tradeoffs and the cost impact are analyzed. The instrument design and radiometric performance are also described. The feasibility of a radiative cooler design for a scanning spectral radiometer is evaluated along with the charge coupled multiplex operation. Criteria for balancing the cost and complexity of data acquisition instruments against the requirements of the user, and a pushbroom scanner version of the thematic mapper are presented.
Optical design applications for enhanced illumination performance
NASA Astrophysics Data System (ADS)
Gilray, Carl; Lewin, Ian
1995-08-01
Nonimaging optical design techniques have been applied in the illumination industry for many years. Recently however, powerful software has been developed which allows accurate simulation and optimization of illumination devices. Wide experience has been obtained in using such design techniques for practical situations. These include automotive lighting where safety is of greatest importance, commercial lighting systems designed for energy efficiency, and numerous specialized applications. This presentation will discuss the performance requirements of a variety of illumination devices. It will further cover design methodology and present a variety of examples of practical applications for enhanced system performance.
Rotary-wing aerodynamics. Volume 2: Performance prediction of helicopters
NASA Technical Reports Server (NTRS)
Keys, C. N.; Stephniewski, W. Z. (Editor)
1979-01-01
Applications of theories, as well as special methods and procedures applicable to performance prediction, are illustrated first on an example of the conventional helicopter and then on winged and tandem configurations. Performance prediction of conventional helicopters in hover and vertical ascent is investigated, and various approaches to performance prediction in forward translation are presented. Performance problems are then revisited, only this time a wing is added to the baseline configuration, and both aircraft are compared with respect to their performance; this comparison is extended to a tandem. Appendices on methods for estimating performance guarantees and aircraft growth conclude this volume.
ERIC Educational Resources Information Center
Olaogun, Matthew O. B.
1986-01-01
J. Adams' application of the closed-loop theory (involving feedback and correction) on human learning and motor performance is described. The theory's applicability to behavioral kinesiology (the science of human movement) is discussed in the context of physical therapy, stressing the importance of knowledge of results as a motivating factor.…
Fault-Tolerant Computing: An Overview
1991-06-01
Addison-Wesley, Reading, MA, 1984. [8] J. Wakerly, Error Detecting Codes, Self-Checking Circuits and Applications (Elsevier North Holland, Inc., New York)... applicable to bit-sliced organizations of hardware. In the first time step, the normal computation is performed on the operands and the results... for error detection and fault tolerance in parallel processor systems while performing specific computation-intensive applications [11]. Contrary to
Benchmarking multimedia performance
NASA Astrophysics Data System (ADS)
Zandi, Ahmad; Sudharsanan, Subramania I.
1998-03-01
With the introduction of faster processors and special instruction sets tailored to multimedia, a number of exciting applications are now feasible on the desktop. Among these is DVD playback, consisting, among other things, of MPEG-2 video and Dolby Digital audio or MPEG-2 audio. Other multimedia applications such as video conferencing and speech recognition are also becoming popular on computer systems. In view of this tremendous interest in multimedia, a group of major computer companies has formed the Multimedia Benchmarks Committee as part of the Standard Performance Evaluation Corp. to address the performance issues of multimedia applications. The approach is multi-tiered, with three tiers of fidelity from minimal to fully compliant. In each case the fidelity of the bitstream reconstruction as well as the quality of the video or audio output are measured, and the system is classified accordingly. At the next step the performance of the system is measured. In many multimedia applications, such as DVD playback, the application needs to run at a specific rate; in this case the measurement of excess processing power makes all the difference. All of this makes a system-level, application-based multimedia benchmark very challenging. Several ideas and methodologies for each aspect of the problem will be presented and analyzed.
Hybrid monitoring scheme for end-to-end performance enhancement of multicast-based real-time media
NASA Astrophysics Data System (ADS)
Park, Ju-Won; Kim, JongWon
2004-10-01
As real-time media applications based on IP multicast networks spread widely, end-to-end QoS (quality of service) provisioning for these applications has become very important. To guarantee the end-to-end QoS of multi-party media applications, it is essential to monitor the time-varying status of both network metrics (i.e., delay, jitter and loss) and system metrics (i.e., CPU and memory utilization). In this paper, targeting the multicast-enabled AG (Access Grid), a next-generation group collaboration tool based on multi-party media services, the applicability of a hybrid monitoring scheme that combines active and passive monitoring is investigated. The active monitoring measures network-layer metrics (i.e., network condition) with probe packets, while the passive monitoring checks both application-layer metrics (i.e., user traffic condition, by analyzing RTCP packets) and system metrics. By comparing these hybrid results, we attempt to pinpoint the causes of performance degradation and explore corresponding reactions to improve the end-to-end performance. The experimental results show that the proposed hybrid monitoring can provide useful information to coordinate the performance improvement of multi-party real-time media applications.
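A toy sketch of the hybrid idea, with a TCP connect probe standing in for the paper's active probe packets and psutil (an assumption, not mentioned in the paper) supplying the passive system metrics.

```python
import socket
import time

def active_probe(host, port=80, timeout=1.0):
    """Active monitoring: measure TCP connect round-trip time as a
    coarse network-delay probe (stand-in for dedicated probe packets)."""
    t0 = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - t0
    except OSError:
        return None  # treat as loss

def passive_system_metrics():
    """Passive monitoring: sample local CPU/memory utilization.
    Uses psutil if available (an assumed dependency)."""
    import psutil
    return psutil.cpu_percent(interval=0.1), psutil.virtual_memory().percent

rtt = active_probe("example.org")
cpu, mem = passive_system_metrics()
print(f"probe RTT: {rtt}, CPU {cpu}%, mem {mem}%")
```

Correlating a rising RTT with flat CPU/memory (or vice versa) is the kind of comparison that lets a hybrid scheme separate network-side from host-side degradation.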
Gigaflop performance on a CRAY-2: Multitasking a computational fluid dynamics application
NASA Technical Reports Server (NTRS)
Tennille, Geoffrey M.; Overman, Andrea L.; Lambiotte, Jules J.; Streett, Craig L.
1991-01-01
The methodology is described for converting a large, long-running applications code that executed on a single processor of a CRAY-2 supercomputer to a version that executed efficiently on multiple processors. Although the conversion of every application is different, a discussion of the types of modification used to achieve gigaflop performance is included to assist others in the parallelization of applications for CRAY computers, especially those that were developed for other computers. An existing application, from the discipline of computational fluid dynamics, that had utilized over 2000 hrs of CPU time on CRAY-2 during the previous year was chosen as a test case to study the effectiveness of multitasking on a CRAY-2. The nature of dominant calculations within the application indicated that a sustained computational rate of 1 billion floating-point operations per second, or 1 gigaflop, might be achieved. The code was first analyzed and modified for optimal performance on a single processor in a batch environment. After optimal performance on a single CPU was achieved, the code was modified to use multiple processors in a dedicated environment. The results of these two efforts were merged into a single code that had a sustained computational rate of over 1 gigaflop on a CRAY-2. Timings and analysis of performance are given for both single- and multiple-processor runs.
Pay for Performance Proposals in Race to the Top Round II Applications. Briefing Memo
ERIC Educational Resources Information Center
Rose, Stephanie
2010-01-01
The Education Commission of the States reviewed all 36 Race to the Top (RttT) round II applications. Each of the 36 states that applied for round II funding referenced pay for performance under the heading of "Improving teacher and principal effectiveness based on performance." The majority of states outlined pay for performance…
Analytical Performance Modeling and Validation of Intel’s Xeon Phi Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chunduri, Sudheer; Balaprakash, Prasanna; Morozov, Vitali
Modeling the performance of scientific applications on emerging hardware plays a central role in achieving extreme-scale computing goals. Analytical models that capture the interaction between applications and hardware characteristics are attractive because even a reasonably accurate model can be useful for performance tuning before the hardware is made available. In this paper, we develop a hardware model for Intel's second-generation Xeon Phi architecture, code-named Knights Landing (KNL), for the SKOPE framework. We validate the KNL hardware model by projecting the performance of mini-benchmarks and application kernels. The results show that our KNL model can project the performance with prediction errors of 10% to 20%. The hardware model also provides informative recommendations for code transformations and tuning.
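The abstract does not give the model's form; a generic roofline-style projection conveys the flavor of such analytical hardware models. The peak rates below are stand-ins, not measured KNL parameters.

```python
def roofline_time(flops, bytes_moved, peak_gflops=3000.0, mem_gbs=400.0):
    """Project kernel runtime as the max of compute-bound and
    memory-bound times. Peak numbers are illustrative placeholders."""
    t_compute = flops / (peak_gflops * 1e9)
    t_memory = bytes_moved / (mem_gbs * 1e9)
    return max(t_compute, t_memory)

# A stream-like kernel: 2 flops and 24 bytes moved per element.
n = 10**8
t = roofline_time(2 * n, 24 * n)
bound = "memory-bound" if 24 * n / 400e9 > 2 * n / 3000e9 else "compute-bound"
print(f"projected {t * 1e3:.1f} ms, {bound}")
```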
Outline of CS application experiments
NASA Astrophysics Data System (ADS)
Otsu, Y.; Kondoh, K.; Matsumoto, M.
1985-09-01
To promote and investigate the practical application of satellite communications, CS application experiments addressing various social needs were performed, including those of public services such as the National Police Agency and the Japanese National Railways, computer network services, news material transmission, and advanced teleconference activities. Public service satellite communications systems were developed and tested. Based on the results obtained, several public services have implemented CS-2 for practical disaster back-up uses. Practical computer network and enhanced video-conference experiments have also been performed.
High-performance scientific computing in the cloud
NASA Astrophysics Data System (ADS)
Jorissen, Kevin; Vila, Fernando; Rehr, John
2011-03-01
Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.
PrismTech Data Distribution Service Java API Evaluation
NASA Technical Reports Server (NTRS)
Riggs, Cortney
2008-01-01
My internship duties with Launch Control Systems required me to start performance testing of the Object Management Group's (OMG) Data Distribution Service (DDS) specification implementation by PrismTech Limited, through the Java programming language application programming interface (API). DDS is a networking middleware for real-time data distribution. The performance testing involves latency, redundant publishers, extended duration, redundant failover, and read performance; time constraints allowed only for a data throughput test. I have designed the testing applications to perform all performance tests when time allows. Performance evaluation data such as megabits per second and central processing unit (CPU) time consumption were not easily attainable through the Java programming language; they required new methods and classes created in the test applications. Evaluation of this product showed the rate at which data can be sent across the network. Performance rates are better on Linux platforms than on AIX and Sun platforms. Compared to the previous C++ programming language API, the performance evaluation also shows the language differences for the implementation: the Java API of the DDS has lower throughput performance than the C++ API.
Combining Phase Identification and Statistic Modeling for Automated Parallel Benchmark Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Ye; Ma, Xiaosong; Liu, Qing Gary
2015-01-01
Parallel application benchmarks are indispensable for evaluating/optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering the most accurate performance evaluation, are expensive to compile, port, reconfigure, and often plainly inaccessible due to security or ownership concerns. This work contributes APPRIME, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication-I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPRIME benchmarks. They retain the original applications' performance characteristics, in particular the relative performance across platforms.
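The statistical-regeneration step can be pictured with a small sketch: fit a distribution to a traced event parameter and draw synthetic values from the fit. The lognormal choice and all values below are illustrative, not APPRIME's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for traced per-phase message sizes (bytes); in a real tool
# these would come from communication/I/O traces of the application.
observed_sizes = rng.lognormal(mean=10.0, sigma=0.8, size=5000)

# Fit a lognormal by moment-matching in log space, then regenerate
# synthetic parameters for the benchmark skeleton.
mu, sigma = np.log(observed_sizes).mean(), np.log(observed_sizes).std()
synthetic_sizes = rng.lognormal(mu, sigma, size=1000)

print(f"observed mean {observed_sizes.mean():.0f} B, "
      f"synthetic mean {synthetic_sizes.mean():.0f} B")
```

The synthetic benchmark then replays events drawn from the fitted distributions instead of shipping the raw (and possibly sensitive) trace.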
Space Debris Detection on the HPDP, a Coarse-Grained Reconfigurable Array Architecture for Space
NASA Astrophysics Data System (ADS)
Suarez, Diego Andres; Bretz, Daniel; Helfers, Tim; Weidendorfer, Josef; Utzmann, Jens
2016-08-01
Stream processing, widely used in communications and digital signal processing applications, requires high-throughput data processing that is achieved in most cases using Application-Specific Integrated Circuit (ASIC) designs. Lack of programmability is an issue especially in space applications, which use on-board components with long life-cycles requiring application updates. To this end, the High Performance Data Processor (HPDP) architecture integrates an array of coarse-grained reconfigurable elements to provide both flexible and efficient computational power suitable for stream-based data processing applications in space. In this work the capabilities of the HPDP architecture are demonstrated with the implementation of a real-time image processing algorithm for space debris detection in a space-based space surveillance system. The implementation challenges and alternatives are described, making trade-offs to improve performance at the expense of negligible degradation of detection accuracy. The proposed implementation uses over 99% of the available computational resources. Performance estimations based on simulations show that the HPDP can amply match the application requirements.
48 CFR 1509.170-3 - Applicability.
Code of Federal Regulations, 2011 CFR
2011-10-01
... PLANNING CONTRACTOR QUALIFICATIONS Contractor Performance Evaluations 1509.170-3 Applicability. (a) This....604 provides detailed instructions for architect-engineer contractor performance evaluations. (b) The... simplified acquisition procedures do not require the creation or existence of a formal database for past...
DOT National Transportation Integrated Search
1999-09-01
This report presents the results of the field test portion of the Development, Evaluation, and Application of Performance-Based Brake Testing Technologies program, sponsored by the Federal Highway Administration's (FHWA) Office of Motor Carriers.
Scalability Analysis of Gleipnir: A Memory Tracing and Profiling Tool, on Titan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janjusic, Tommy; Kartsaklis, Christos; Wang, Dali
2013-01-01
Application performance is hindered by a variety of factors, most notably the well-known CPU-memory speed gap (also known as the memory wall). Understanding an application's memory behavior is key when trying to optimize performance, and understanding application performance properties is facilitated by various performance profiling tools. The scope of profiling tools varies in complexity, ease of deployment, profiling performance, and the detail of profiled information. Specifically, using profiling tools for performance analysis is a common task when optimizing and understanding scientific applications on complex and large-scale systems such as Cray's XK7. This paper describes the performance characteristics of using Gleipnir, a memory tracing tool, on the Titan Cray XK7 system when instrumenting large applications such as the Community Earth System Model. Gleipnir is a memory tracing tool built as a plug-in for the Valgrind instrumentation framework. The goal of Gleipnir is to provide fine-grained trace information. The generated traces are a stream of executed memory transactions mapped to internal structures per process, thread, function, and finally the data structure or variable. Our focus was to expose tool performance characteristics when using Gleipnir in combination with external tools, such as the cache simulator Gl CSim, to characterize the tool's overall performance. In this paper we describe our experience with deploying Gleipnir on the Titan Cray XK7 system, report on the tool's ease of use, and analyze run-time performance characteristics under various workloads. While all performance aspects are important, we mainly focus on I/O characteristics due to the emphasis on the tool's output, which consists of trace files. Moreover, the tool depends on the run-time system to provide the necessary infrastructure to expose low-level system detail; therefore, we also discuss the theoretical benefits that could be achieved if such modules were present.
Guidelines for application of fluorescent lamps in high-performance avionic backlight systems
NASA Astrophysics Data System (ADS)
Syroid, Daniel D.
1997-07-01
Fluorescent lamps have proven to be well suited for use in high performance avionic backlight systems as demonstrated by numerous production applications for both commercial and military cockpit displays. Cockpit display applications include: Boeing 777, new 737s, F-15, F-16, F-18, F-22, C- 130, Navy P3, NASA Space Shuttle and many others. Fluorescent lamp based backlights provide high luminance, high lumen efficiency, precision chromaticity and long life for avionic active matrix liquid crystal display applications. Lamps have been produced in many sizes and shapes. Lamp diameters range from 2.6 mm to over 20 mm and lengths for the larger diameter lamps range to over one meter. Highly convoluted serpentine lamp configurations are common as are both hot and cold cathode electrode designs. This paper will review fluorescent lamp operating principles, discuss typical requirements for avionic grade lamps, compare avionic and laptop backlight designs and provide guidelines for the proper application of lamps and performance choices that must be made to attain optimum system performance considering high luminance output, system efficiency, dimming range and cost.
Likitlersuang, Jirapat; Leineweber, Matthew J; Andrysek, Jan
2017-10-01
Thin film force sensors are commonly used within biomechanical systems, and at the interface of the human body and medical and non-medical devices. However, limited information is available about their performance in such applications. The aims of this study were to evaluate and determine ways to improve the performance of thin film (FlexiForce) sensors at the body/device interface. Using a custom apparatus designed to load the sensors under simulated body/device conditions, two aspects were explored relating to sensor calibration and application. The findings revealed accuracy errors of 23.3±17.6% for force measurements at the body/device interface with conventional techniques of sensor calibration and application. Applying a thin rigid disc between the sensor and human body and calibrating the sensor using compliant surfaces was found to substantially reduce measurement errors to 2.9±2.0%. The use of alternative calibration and application procedures is recommended to gain acceptable measurement performance from thin film force sensors in body/device applications. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
Remote Performance Monitoring of a Thermoplastic Composite Bridge at Camp Mackall, NC
2011-11-01
level, flow, creep, and force for slope stability, subsidence, seismicity studies, structural restoration, or site assessment applications. • Mining: monitors mine ventilation, slope stability, convergence, and equipment performance. • Machinery testing: provides temperature, pressure, RPM, veloci... Contact an Applications Engineer for help in determining the best antenna for your application. • 21831 0 dBd, 1/4 Wave Dipole Whip Antenna
DURIP: High Performance Computing in Biomathematics Applications
2017-05-10
The goal of this award was to enhance the capabilities of the Department of Applied Mathematics and Statistics (AMS) at the University of California, Santa Cruz (UCSC) to conduct research and research-related education in areas of high performance computing in biomathematics applications.
NASA Technical Reports Server (NTRS)
Perkinson, J. A.
1974-01-01
The application of associative memory processor equipment to conventional host-processor systems is discussed. Efforts were made to demonstrate how such application relieves the task burden of conventional systems and enhances system speed and efficiency. Data cover comparative theoretical performance analysis, demonstration of expanded growth capabilities, and demonstrations of actual hardware in a simulated environment.
Applications Performance on NAS Intel Paragon XP/S - 15#
NASA Technical Reports Server (NTRS)
Saini, Subhash; Simon, Horst D.; Copper, D. M. (Technical Monitor)
1994-01-01
The Numerical Aerodynamic Simulation (NAS) Systems Division received an Intel Touchstone Sigma prototype model Paragon XP/S-15 in February 1993. The i860 XP microprocessor, with an integrated floating point unit and operating in dual-instruction mode, gives a peak performance of 75 million floating point operations per second (MFLOPS) for 64-bit floating point arithmetic. It is used in the Paragon XP/S-15 which has been installed at NAS, NASA Ames Research Center. The NAS Paragon has 208 nodes and its peak performance is 15.6 GFLOPS. Here, we report on early experience using the Paragon XP/S-15. We have tested its performance using both kernels and applications of interest to NAS. We have measured the performance of BLAS 1, 2 and 3, both assembly-coded and Fortran-coded, on the NAS Paragon XP/S-15. Furthermore, we have investigated the performance of a single-node one-dimensional FFT, a distributed two-dimensional FFT, and a distributed three-dimensional FFT. Finally, we measured the performance of the NAS Parallel Benchmarks (NPB) on the Paragon and compared it with the performance obtained on other highly parallel machines, such as the CM-5, CRAY T3D, IBM SP1, etc. In particular, we investigated the following issues, which can strongly affect the performance of the Paragon: (a) Impact of the operating system. Intel currently uses as a default the OSF/1 AD operating system from the Open Software Foundation (OSF). Paging of the OSF server at 22 MB, done to make more memory available for the application, degrades performance. We found that when the limit of 26 MB per node (out of 32 MB available) is reached, the application is paged out of main memory using virtual memory, and once the application starts paging, performance is considerably reduced. We found that dynamic memory allocation can help application performance under certain circumstances. (b) Impact of the data cache on the i860 XP. We measured the performance of the BLAS, both assembly-coded and Fortran-coded, and found that the measured performance of the assembly-coded BLAS is much less than what the memory bandwidth limitation would predict. The influence of the data cache on different vector sizes is also investigated using one-dimensional FFTs. (c) Impact of processor layout. There are several different ways processors can be laid out within the two-dimensional grid of processors on the Paragon. We have used the FFT example to investigate performance differences based on processor layout.
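Cache effects of the kind probed in item (b) can be exposed with a simple working-set sweep; a rough sketch follows (sizes and loop counts are arbitrary, and this runs on modern hardware rather than the i860 XP).

```python
import time
import numpy as np

# Streaming through working sets around a cache-size boundary shows a
# step in effective bandwidth: once the array no longer fits in cache,
# every traversal pays the memory-bandwidth cost.
for kib in (4, 8, 16, 32, 64, 128, 256):
    a = np.ones(kib * 1024 // 8)  # float64 working set of `kib` KiB
    t0 = time.perf_counter()
    for _ in range(200):
        a.sum()                   # repeatedly traverse the working set
    dt = time.perf_counter() - t0
    print(f"{kib:>4} KiB: {200 * a.nbytes / dt / 1e9:.2f} GB/s effective")
```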
Bornmann, Lutz; Wallon, Gerlind; Ledin, Anna
2008-01-01
Does peer review fulfill its declared objective of identifying the best science and the best scientists? In order to answer this question we analyzed the Long-Term Fellowship and the Young Investigator programmes of the European Molecular Biology Organization. Both programmes aim to identify and support the best postdoctoral fellows and young group leaders in the life sciences. We checked the association between the selection decisions and the scientific performance of the applicants. Our study involved publication and citation data for 668 applicants to the Long-Term Fellowship programme from the year 1998 (130 approved, 538 rejected) and 297 applicants to the Young Investigator programme (39 approved and 258 rejected applicants) from the years 2001 and 2002. If quantity and impact of research publications are used as a criterion for scientific achievement, the results of (zero-truncated) negative binomial models show that the peer review process indeed selects scientists who perform on a higher level than the rejected ones subsequent to application. We determined the extent of errors due to over-estimation (type I errors) and under-estimation (type II errors) of future scientific performance. Our statistical analyses indicate that between 26% and 48% of the decisions made to award or reject an application show one of the two error types. Even though the selection committee did not correctly estimate future performance for a portion of the applicants, the results show a statistically significant association between selection decisions and the applicants' scientific achievements, if quantity and impact of research publications are used as a criterion for scientific achievement. PMID:18941530
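A sketch of the style of count-data regression involved, using statsmodels' standard negative binomial model as a stand-in for the paper's zero-truncated variant; the data below are simulated, not the study's.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
approved = rng.integers(0, 2, n)  # selection decision (0 = rejected, 1 = approved)

# Simulated post-application publication counts: overdispersed and
# slightly higher for approved applicants (effect size invented).
lam = np.exp(1.0 + 0.4 * approved + rng.normal(0.0, 0.5, n))
papers = rng.poisson(lam)

X = sm.add_constant(approved.astype(float))
# Standard NB2 regression; the study used a zero-truncated variant.
result = sm.NegativeBinomial(papers, X).fit(disp=0)
print(result.summary().tables[1])
```

A positive, significant coefficient on the approval indicator is the pattern the study reports: approved applicants publish more, on average, after the decision.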
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curry, Matthew L.; Ferreira, Kurt Brian; Pedretti, Kevin Thomas Tauke
2012-03-01
This report documents thirteen of Sandia's contributions to the Computational Systems and Software Environment (CSSE) within the Advanced Simulation and Computing (ASC) program between fiscal years 2009 and 2012, and describes their impact on ASC applications. Most contributions are implemented in lower software levels, allowing for application improvement without source code changes. Improvements are identified in such areas as reduced run time, characterizing power usage, and Input/Output (I/O). Other experiments are more forward-looking, demonstrating potential bottlenecks using mini-application versions of the legacy codes and simulating their network activity on Exascale-class hardware. The purpose of this report is to prove that the team has completed milestone 4467, "Demonstration of a Legacy Application's Path to Exascale." Cielo is expected to be the last capability system on which existing ASC codes can run without significant modifications. This assertion will be tested to determine where the breaking point is for an existing highly scalable application. The goal is to stretch the performance boundaries of the application by applying recent CSSE R&D in areas such as resilience, power, I/O, visualization services, SMARTMAP, lightweight kernels (LWKs), virtualization, simulation, and feedback loops. Dedicated system time reservations and/or CCC allocations will be used to quantify the impact of system-level changes to extend the life and performance of the ASC code base. Finally, a simulation of anticipated exascale-class hardware will be performed using SST to supplement the calculations. Determine where the breaking point is for an existing highly scalable application: Chapter 15 presented the CSSE work that sought to identify the breaking point in two ASC legacy applications, Charon and CTH. Their mini-app versions were also employed to complete the task. There is no single breaking point, as more than one issue was found with the two codes. The results were that applications can expect to encounter performance issues related to the computing environment, system software, and algorithms. Careful profiling of runtime performance will be needed to identify the source of an issue, in strong combination with knowledge of system software and application source code.
By Hand or Not By-Hand: A Case Study of Alternative Approaches to Parallelize CFD Applications
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Bailey, David (Technical Monitor)
1997-01-01
While parallel processing promises to speed up applications by several orders of magnitude, the performance achieved still depends upon several factors, including the multiprocessor architecture, system software, data distribution and alignment, as well as the methods used for partitioning the application and mapping its components onto the architecture. The existence of the Gordon Bell Prize, given out at Supercomputing every year, suggests that while good performance can be attained for real applications on general-purpose multiprocessors, the large investment in man-power and time still has to be repeated for each application-machine combination. As applications and machine architectures become more complex, the cost and time delays of obtaining performance by hand will become prohibitive. Computer users today can turn to three possible avenues for help: parallel libraries, parallel languages and compilers, and interactive parallelization tools. The success of these methodologies, in turn, depends on proper application of data dependency analysis, program structure recognition and transformation, and performance prediction, as well as exploitation of user-supplied knowledge. NASA has been developing multidisciplinary applications on highly parallel architectures under the High Performance Computing and Communications Program. Over the past six years, transitions of the underlying hardware and system software have forced the scientists to spend a large effort migrating and recoding their applications. Various attempts to exploit software tools to automate the parallelization process have not produced favorable results. In this paper, we report our most recent experience with CAPTOOL, a package developed at Greenwich University. We have chosen CAPTOOL for three reasons: 1. CAPTOOL accepts a FORTRAN 77 program as input. This suggests its potential applicability to a large collection of legacy codes currently in use. 2. CAPTOOL employs domain decomposition to obtain parallelism. Although the fact that not all kinds of parallelism are handled may seem unappealing, many NASA applications in computational aerosciences as well as earth and space sciences are amenable to domain decomposition. 3. CAPTOOL generates code for a large variety of environments employed across NASA centers: MPI/PVM on networks of workstations to the IBM/SP2 and CRAY/T3D.
A Framework for Performing Verification and Validation in Reuse Based Software Engineering
NASA Technical Reports Server (NTRS)
Addy, Edward A.
1997-01-01
Verification and Validation (V&V) is currently performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. The system application provides the context under which the software artifacts are validated. This paper describes a framework that extends V&V from an individual application system to a product line of systems that are developed within an architecture-based software engineering environment. This framework includes the activities of traditional application-level V&V, and extends these activities into domain engineering and into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.
Using high-performance networks to enable computational aerosciences applications
NASA Technical Reports Server (NTRS)
Johnson, Marjory J.
1992-01-01
One component of the U.S. Federal High Performance Computing and Communications Program (HPCCP) is the establishment of a gigabit network to provide a communications infrastructure for researchers across the nation. This gigabit network will provide new services and capabilities, in addition to increased bandwidth, to enable future applications. An understanding of these applications is necessary to guide the development of the gigabit network and other high-performance networks of the future. In this paper we focus on computational aerosciences applications run remotely using the Numerical Aerodynamic Simulation (NAS) facility located at NASA Ames Research Center. We characterize these applications in terms of network-related parameters and relate user experiences that reveal limitations imposed by the current wide-area networking infrastructure. Then we investigate how the development of a nationwide gigabit network would enable users of the NAS facility to work in new, more productive ways.
Application of a hierarchical structure stochastic learning automaton
NASA Technical Reports Server (NTRS)
Neville, R. G.; Chrystall, M. S.; Mars, P.
1979-01-01
A hierarchical structure automaton was developed using a two-state stochastic learning automaton (SLA) in a time-shared model. Application of the hierarchical SLA to systems with multidimensional, multimodal performance criteria is described. Results of experiments performed with the hierarchical SLA using a performance index with a superimposed noise component of ±δ, distributed uniformly over the surface, are discussed.
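For concreteness, a minimal two-action SLA with a linear reward-inaction update is sketched below; the environment probabilities and learning rate are invented, and the paper's hierarchical, time-shared arrangement is not reproduced.

```python
import random

def two_state_sla(reward_prob=(0.4, 0.7), a=0.05, steps=4000, seed=0):
    """Two-action stochastic learning automaton with a linear
    reward-inaction (L_RI) update: on reward, shift probability
    toward the chosen action; on penalty, leave it unchanged."""
    rng = random.Random(seed)
    p = [0.5, 0.5]  # action probabilities
    for _ in range(steps):
        action = 0 if rng.random() < p[0] else 1
        rewarded = rng.random() < reward_prob[action]
        if rewarded:
            other = 1 - action
            p[other] *= (1 - a)        # shrink the other action's probability
            p[action] = 1 - p[other]   # keep the distribution normalized
    return p

print(two_state_sla())  # converges toward the better action (index 1)
```

A hierarchy stacks such automata so that each level selects among subtrees, which is what allows the scheme to scale to multidimensional, multimodal performance criteria.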
ERIC Educational Resources Information Center
Guerra-Lopez, Ingrid; Toker, Sacip
2012-01-01
This article illustrates the application of the Impact Evaluation Process for the design of a performance measurement and evaluation framework for an urban high school. One of the key aims of this framework is to enhance decision-making by providing timely feedback about the effectiveness of various performance improvement interventions. The…
ERIC Educational Resources Information Center
Darabi, A. Aubteen
2005-01-01
This article reports a case study describing how the principles of a cognitive apprenticeship (CA) model developed by Collins, Brown, and Holum (1991) were applied to a graduate course on performance systems analysis (PSA), and the differences this application made in student performance and evaluation of the course compared to the previous…
Characteristics of enhanced-mode AlGaN/GaN MIS HEMTs for millimeter wave applications
NASA Astrophysics Data System (ADS)
Lee, Jong-Min; Ahn, Ho-Kyun; Jung, Hyun-Wook; Shin, Min Jeong; Lim, Jong-Won
2017-09-01
In this paper, an enhanced-mode (E-mode) AlGaN/GaN high electron mobility transistor (HEMT) was developed using a 4-inch GaN HEMT process. We designed and fabricated E-mode HEMTs and characterized device performance. To assess their suitability for millimeter wave applications, we focused on the high frequency performance and power characteristics. To shift the threshold voltage of the HEMTs, we applied an Al2O3 insulator to the gate structure and adopted a gate recess technique; to increase the frequency performance, e-beam lithography was used to define the 0.15 um gate length. To evaluate the dc and high frequency performance, electrical characterization was performed. The threshold voltage, obtained by linear extrapolation from the transfer curve, was positive, and the device leakage current is comparable to that of a depletion-mode device. The current gain cut-off frequency and the maximum oscillation frequency of the E-mode device with a total gate width of 150 um were 55 GHz and 168 GHz, respectively. To confirm the power performance for mm-wave applications, a load-pull test was performed; a measured power density of 2.32 W/mm was achieved at frequencies of 28 and 30 GHz.
Natt, Neena; Chang, Alice Y.; Berbari, Elie F.; Kennel, Kurt A.; Kearns, Ann E.
2016-01-01
Objective To determine which residency characteristics are associated with performance during endocrinology fellowship training as measured by competency-based faculty evaluation scores and faculty global ratings of trainee performance. Method We performed a retrospective review of interview applications from endocrinology fellows who graduated from a single academic institution between 2006 and 2013. Performance measures included competency-based faculty evaluation scores and faculty global ratings. The association between applicant characteristics and measures of performance during fellowship was examined by linear regression. Results The presence of a laudatory comparative statement in the residency program director’s letter of recommendation (LoR) or experience as a chief resident was significantly associated with competency-based faculty evaluation scores (β = 0.22, P = 0.001; and β = 0.24, P = 0.009, respectively) and faculty global ratings (β = 0.85, P = 0.006; and β = 0.96, P = 0.015, respectively). Conclusion The presence of a laudatory comparative statement in the residency program director’s LoR or experience as a chief resident were significantly associated with overall performance during subspecialty fellowship training. Future studies are needed in other cohorts to determine the broader implications of these findings in the application and selection process. PMID:26437219
The BioMedical Admissions Test for medical student selection: issues of fairness and bias.
Emery, Joanne L; Bell, John F; Vidal Rodeiro, Carmen L
2011-01-01
The BioMedical Admissions Test (BMAT) forms part of the undergraduate medical admission process at the University of Cambridge. The fairness of admissions tests is an important issue. Aims were to investigate the relationships between applicants' background variables and BMAT scores, whether they were offered a place or rejected and, for those admitted, performance on the first year course examinations. Multilevel regression models were employed with data from three combined applicant cohorts. Admission rates for different groups were investigated with and without controlling for BMAT performance. The fairness of the BMAT was investigated by determining, for those admitted, whether scores predicted examination performance equitably. Despite some differences in applicants' BMAT performance (e.g. by school type and gender), BMAT scores predicted mean examination marks equitably for all background variables considered. The probability of achieving a 1st class examination result, however, was slightly under-predicted for those admitted from schools and colleges entering relatively few applicants. Not all differences in admission rates were accounted for by BMAT performance. However, the test constitutes only one part of a compensatory admission system in which other factors, such as interview performance, are important considerations. Results are in support of the equity of the BMAT.
Automated Cache Performance Analysis And Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohror, Kathryn
While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand", requiring significant time and expertise. To the best of our knowledge, no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and to create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on the infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters, cache behavior could only be measured reliably in the aggregate across tens or hundreds of thousands of instructions. With the newest iteration of PEBS technology, cache events can be tied to a tuple of instruction pointer, target address (for both loads and stores), memory hierarchy level, and observed latency. With this information we can now begin asking questions regarding the efficiency of not only regions of code, but how these regions interact with particular data structures and how these interactions evolve over time. In the short term, this information will be vital for performance analysts understanding and optimizing the behavior of their codes for the memory hierarchy. In the future, we can begin to ask how data layouts might be changed to improve performance and, for a particular application, what the theoretical optimal performance might be. The overall benefit to be produced by this effort was a commercial-quality, easy-to-use, and scalable performance tool that allows both beginner and experienced parallel programmers to automatically tune their applications for optimal cache usage. Effective use of such a tool can literally save weeks of performance tuning effort. Easy to use: with the proposed innovations, finding and fixing memory performance issues becomes more automated, hiding most of the performance-engineering expertise "under the hood" of the Open|SpeedShop performance tool. One of the biggest public benefits of the proposed innovations is that they make performance analysis usable by a larger group of application developers. Intuitive reporting of results: the Open|SpeedShop performance analysis tool has a rich set of intuitive yet detailed reports for presenting performance results to application developers; our goal was to leverage this existing technology to present the results from our memory performance addition to Open|SpeedShop. Suitable for experts as well as novices: application performance is getting more difficult to measure as the hardware platforms applications run on become more complicated. This makes life difficult for application developers, in that they need to know more about the hardware platform, including the memory system hierarchy, in order to understand the performance of their application. Some application developers are comfortable in that scenario, while others want to do their scientific research without having to understand all the nuances of the hardware platform they are running on. Our proposed innovations were aimed at supporting both expert and novice performance analysts. Useful in many markets: the enhancement to Open|SpeedShop appeals to a broader market space, as it is useful in scientific, commercial, and cloud computing environments. Our goal was to use technology developed initially at … and Lawrence Livermore National Laboratory, combined with the development and commercial software experience of Argo Navis Technologies, LLC (ANT), to form a powerful combination to deliver these objectives.
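Once per-address latency samples are available, attributing them to data structures is a simple aggregation; a hypothetical sketch follows (the sample tuples and address ranges are invented, not Open|SpeedShop output).

```python
from collections import defaultdict
from statistics import mean

# Hypothetical PEBS-style samples: (instruction_ptr, data_addr, latency_cycles).
samples = [(0x401A10, 0x7F0010, 310), (0x401A10, 0x7F0018, 12),
           (0x401B24, 0x9E2000, 450), (0x401A10, 0x7F0020, 15)]

# Data-structure address ranges, e.g. recovered from debug information.
structures = {"particles": (0x7F0000, 0x7FFFFF), "grid": (0x9E0000, 0x9EFFFF)}

by_struct = defaultdict(list)
for ip, addr, lat in samples:
    for name, (lo, hi) in structures.items():
        if lo <= addr <= hi:
            by_struct[name].append(lat)

for name, lats in by_struct.items():
    print(f"{name}: {len(lats)} samples, mean latency {mean(lats):.0f} cycles")
```

A data structure whose samples cluster at high latencies is the "problematic data structure" such a tool would surface to the analyst.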
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Jespersen, Dennis; Buning, Peter; Bailey, David (Technical Monitor)
1996-01-01
The Gordon Bell Prizes given out at Supercomputing every year include at least two categories: performance (highest GFLOP count) and price-performance (GFLOPs/million $) for real applications. In the past five years, the winners of the price-performance category all came from networks of workstations. This reflects three important facts: 1. supercomputers are still too expensive for the masses; 2. achieving high performance for real applications takes real work; and, most importantly, 3. it is possible to obtain acceptable performance for certain real applications on networks of workstations. With the continued advance of network technology as well as the increased performance of "desktop" workstations, the "Swarm of Ants vs. Herd of Elephants" debate, which began with vector multiprocessors (VPPs) against SIMD-type multiprocessors (e.g., CM2), is now recast as VPPs against Symmetric Multiprocessors (SMPs, e.g., SGI Power Challenge). This paper reports on performance studies we performed solving a large-scale (2-million grid point) CFD problem involving a Boeing 747, based on a parallel version of OVERFLOW that utilizes message passing on PVM. A performance monitoring tool developed under NASA HPCC, called AIMS, was used to instrument and analyze the performance data thus obtained. We plan to compare performance data obtained across a wide spectrum of architectures, including the Cray C90, IBM/SP2, and SGI/Power Challenge Cluster, down to a group of workstations connected over a simple network. The metrics of comparison include speed-up, price-performance, throughput, and turn-around time. We also plan to present a plan of attack for various issues that will make the execution of Grand Challenge Applications across the Global Information Infrastructure a reality.
The USEPA's National Homeland Security Research Center (NHSRC)Technology Testing and Evaluation Program (TTEP) is carrying out performance tests on homeland security technologies. Under TTEP, Battelle recently evaluated the performance of the Science Applications International Co...
High-Performance Liquid Chromatography-Mass Spectrometry.
ERIC Educational Resources Information Center
Vestal, Marvin L.
1984-01-01
Reviews techniques for online coupling of high-performance liquid chromatography with mass spectrometry, emphasizing those suitable for application to nonvolatile samples. Also summarizes the present status, strengths, and weaknesses of various techniques and discusses potential applications of recently developed techniques for combined liquid…
40 CFR 1065.415 - Durability demonstration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... than in-use operation, subject to any pre-approval requirements established in the applicable standard.... Perform emission tests following the provisions of the standard setting part and this part, as applicable. Perform emission tests to determine deterioration factors consistent with good engineering judgment...
40 CFR 1065.415 - Durability demonstration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... than in-use operation, subject to any pre-approval requirements established in the applicable standard.... Perform emission tests following the provisions of the standard setting part and this part, as applicable. Perform emission tests to determine deterioration factors consistent with good engineering judgment...
40 CFR 1065.415 - Durability demonstration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... than in-use operation, subject to any pre-approval requirements established in the applicable standard.... Perform emission tests following the provisions of the standard setting part and this part, as applicable. Perform emission tests to determine deterioration factors consistent with good engineering judgment...
NASA Astrophysics Data System (ADS)
Tsai, Li-Fen; Shaw, Jing-Chi; Wang, Pei-Wen; Shih, Meng-Long; Su, Yi-Jing
2011-10-01
This study examines customers' online word-of-mouth regarding cultural heritage applications and performance facilities in the Cultural and Creative Industries. Findings demonstrate that, in online word-of-mouth for art museums, museums, and art villages, the items valued by customers are the design aesthetics of displays and collections, educational functions, and environments and landscapes, with percentages of 10.102%, 11.208%, and 11.44%, respectively. In addition, cultural heritage application and performance facility industries in Taiwan are highly valued in online word-of-mouth.
The 4 phase VSR motor: The ideal prime mover for electric vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holling, G.H.; Yeck, M.M.
1994-12-31
4 phase variable switched reluctance motors are gaining acceptance in many applications due to their fault-tolerant characteristics. A 4 phase variable switched reluctance (VSR) motor is modelled and its performance is predicted at several operating points for an electric vehicle application. The 4 phase VSR offers fault tolerance, high performance, and an excellent torque-to-weight ratio. The actual system performance was measured both on a test stand and on an actual vehicle. While the system described is used in a production electric motor scooter, the technology is equally applicable to high-efficiency electric cars and buses. 4 refs.
Development and Performance Analysis of a Photonics-Assisted RF Converter for 5G Applications
NASA Astrophysics Data System (ADS)
Borges, Ramon Maia; Muniz, André Luiz Marques; Sodré Junior, Arismar Cerqueira
2017-03-01
This article presents a simple, ultra-wideband and tunable radiofrequency (RF) converter for 5G cellular networks. The proposed optoelectronic device performs broadband photonics-assisted upconversion and downconversion using a single optical modulator. Experimental results demonstrate RF conversion from DC to millimeter waves, including 28 and 38 GHz that are potential frequency bands for 5G applications. Narrow linewidth and low phase noise characteristics are observed in all generated RF carriers. An experimental digital performance analysis using different modulation schemes illustrates the applicability of the proposed photonics-based device in reconfigurable optical wireless communications.
Code of Federal Regulations, 2010 CFR
2010-04-01
... application for permission to perform such action as is necessary to bring the product into compliance with the Act, such application shall include the information required by § 1005.21. (c) If the application... such application. ...
Delaney, Declan T.; O’Hare, Gregory M. P.
2016-01-01
No single network solution for Internet of Things (IoT) networks can provide the required level of Quality of Service (QoS) for all applications in all environments. This leads to an increasing number of solutions created to fit particular scenarios. Given the increasing number and complexity of solutions available, it becomes difficult for an application developer to choose the solution which is best suited for an application. This article introduces a framework which autonomously chooses the best solution for the application given the current deployed environment. The framework utilises a performance model to predict the expected performance of a particular solution in a given environment. The framework can then choose an apt solution for the application from a set of available solutions. This article presents the framework with a set of models built using data collected from simulation. The modelling technique can determine with up to 85% accuracy the solution which performs the best for a particular performance metric given a set of solutions. The article highlights the fractured and disjointed practice currently in place for examining and comparing communication solutions and aims to open a discussion on harmonising testing procedures so that different solutions can be directly compared and offers a framework to achieve this within IoT networks. PMID:27916929
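The selection step the abstract above describes reduces to scoring candidate solutions with a performance model and taking the best. The sketch below illustrates that idea only; the solution names and the dict-lookup "model" are invented stand-ins, not the article's simulation-trained models.

```python
# Hedged sketch of model-driven solution selection for an IoT deployment.
# The predictor is a stand-in lookup table; the article builds its
# models from simulation data.

def choose_solution(environment, solutions, predict):
    """Return the solution with the best predicted metric (higher = better)."""
    return max(solutions, key=lambda s: predict(s, environment))

# Illustrative stand-in: predicted packet delivery ratio per (solution, env).
model = {
    ("rpl", "dense"): 0.91, ("rpl", "sparse"): 0.78,
    ("ctp", "dense"): 0.86, ("ctp", "sparse"): 0.83,
}

predict = lambda s, env: model[(s, env)]
print(choose_solution("sparse", ["rpl", "ctp"], predict))  # -> "ctp"
```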
Scheduling from the perspective of the application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berman, F.; Wolski, R.
1996-12-31
Metacomputing is the aggregation of distributed and high-performance resources on coordinated networks. With careful scheduling, resource-intensive applications can be implemented efficiently on metacomputing systems at the sizes of interest to developers and users. In this paper we focus on the problem of scheduling applications on metacomputing systems. We introduce the concept of application-centric scheduling in which everything about the system is evaluated in terms of its impact on the application. Application-centric scheduling is used by virtually all metacomputer programmers to achieve performance on metacomputing systems. We describe two successful metacomputing applications to illustrate this approach, and describe AppLeS scheduling agents which generalize the application-centric scheduling approach. Finally, we show preliminary results which compare AppLeS-derived schedules with conventional strip and blocked schedules for a two-dimensional Jacobi code.
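The application-centric idea lends itself to a very small sketch: score every candidate resource set purely by its predicted effect on this application's completion time. The cost model and all numbers below are illustrative assumptions, not AppLeS output.

```python
# Hedged sketch of application-centric scheduling: candidates are ranked
# only by predicted completion time for *this* application.

def predicted_time(work_units, cpu_rate, data_mb, bandwidth_mbps):
    # compute time + communication time for one candidate resource set
    return work_units / cpu_rate + data_mb * 8.0 / bandwidth_mbps

candidates = {
    "supercomputer_partition": predicted_time(1e6, 5e4, 200.0, 800.0),
    "campus_workstations":     predicted_time(1e6, 2e4, 200.0, 100.0),
}
best = min(candidates, key=candidates.get)
print(best, candidates[best])
```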
Reliability Assessment for COTS Components in Space Flight Applications
NASA Technical Reports Server (NTRS)
Krishnan, G. S.; Mazzuchi, Thomas A.
2001-01-01
Systems built for space flight applications usually demand a very high degree of performance and a very high level of accuracy. Hence, design engineers are often prone to selecting state-of-the-art technologies for inclusion in their system designs. Shrinking budgets also necessitate the use of COTS (Commercial Off-The-Shelf) components, which are construed as being less expensive. The performance and accuracy requirements for space flight applications are much more stringent than those for commercial applications, and the number of systems designed and developed for space applications is much lower than the number produced for commercial applications. With a given set of requirements, are these COTS components reliable? This paper presents a model for assessing the reliability of COTS components in space applications and the associated effect on system reliability. We illustrate the method with a real application.
Performance Analysis of a Hybrid Overset Multi-Block Application on Multiple Architectures
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Biswas, Rupak
2003-01-01
This paper presents a detailed performance analysis of a multi-block overset grid computational fluid dynamics application on multiple state-of-the-art computer architectures. The application is implemented using a hybrid MPI+OpenMP programming paradigm that exploits both coarse- and fine-grain parallelism; the former via MPI message passing and the latter via OpenMP directives. The hybrid model also extends the applicability of multi-block programs to large clusters of SMP nodes by overcoming the restriction that the number of processors be less than the number of grid blocks. A key kernel of the application, namely the LU-SGS linear solver, had to be modified to enhance the performance of the hybrid approach on the target machines. Investigations were conducted on cacheless Cray SX6 vector processors, cache-based IBM Power3 and Power4 architectures, and single-system-image SGI Origin3000 platforms. Overall results for complex vortex dynamics simulations demonstrate that the SX6 achieves the highest performance and outperforms the RISC-based architectures; however, the best scaling performance was achieved on the Power3.
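The two-level decomposition the abstract describes can be sketched structurally in Python with mpi4py: grid blocks go to MPI ranks (coarse grain) and per-block loops go to threads, standing in for OpenMP directives (fine grain). This is a shape-of-the-code illustration only, assuming mpi4py is installed; it is not the paper's solver, and Python threads do not give true OpenMP-style CPU parallelism.

```python
# Hedged structural sketch of hybrid MPI + threads over grid blocks.
# Lifts the "ranks <= number of blocks" restriction because several
# threads can work inside one block.
from concurrent.futures import ThreadPoolExecutor
from mpi4py import MPI  # assumes mpi4py is installed

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

blocks = list(range(8))            # hypothetical overset grid blocks
my_blocks = blocks[rank::size]     # coarse grain: blocks per MPI rank

def relax_row(args):               # fine-grain unit of work (one row)
    block, row = args
    return sum((block + row + i) ** 0.5 for i in range(1000))

with ThreadPoolExecutor(max_workers=4) as pool:
    for b in my_blocks:
        results = list(pool.map(relax_row, [(b, r) for r in range(64)]))
comm.Barrier()
if rank == 0:
    print("all blocks relaxed")
```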
Application of the high resolution return beam vidicon
NASA Technical Reports Server (NTRS)
Cantella, M. J.
1977-01-01
The Return Beam Vidicon (RBV) is a high-performance electronic image sensor and electrical storage component. It can accept continuous or discrete exposures. Information can be read out with a single scan or with many repetitive scans for either signal processing or display. Resolution capability is 10,000 TV lines/height, and at 100 lp/mm, performance matches or exceeds that of film, particularly with low-contrast imagery. Electronic zoom can be employed effectively for image magnification and data compression. The high performance and flexibility of the RBV permit wide application in systems for reconnaissance, scan conversion, information storage and retrieval, and automatic inspection and test. This paper summarizes the characteristics and performance parameters of the RBV and cites examples of feasible applications.
Engineering design of a high-temperature superconductor current lead
NASA Astrophysics Data System (ADS)
Niemann, R. C.; Cha, Y. S.; Hull, J. R.; Daugherty, M. A.; Buckles, W. E.
As part of the US Department of Energy's Superconductivity Pilot Center Program, Argonne National Laboratory and Superconductivity, Inc., are developing high-temperature superconductor (HTS) current leads suitable for application to superconducting magnetic energy storage systems. The principal objective of the development program is to design, construct, and evaluate the performance of HTS current leads suitable for near-term applications. Supporting objectives are to (1) develop performance criteria; (2) develop a detailed design; (3) analyze performance; (4) gain manufacturing experience in the areas of materials and components procurement, fabrication and assembly, quality assurance, and cost; (5) measure performance of critical components and the overall assembly; (6) identify design uncertainties and develop a program for their study; and (7) develop application-acceptance criteria.
1988 IEEE Aerospace Applications Conference, Park City, UT, Feb. 7-12, 1988, Digest
NASA Astrophysics Data System (ADS)
The conference presents papers on microwave applications, data and signal processing applications, related aerospace applications, and advanced microelectronic products for the aerospace industry. Topics include a high-performance antenna measurement system, microwave power beaming from earth to space, the digital enhancement of microwave component performance, and a GaAs vector processor based on parallel RISC microprocessors. Consideration is also given to unique techniques for reliable SBNR architectures, a linear analysis subsystem for CSSL-IV, and a structured singular value approach to missile autopilot analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y.; Gunasekaran, Raghul; Ma, Xiaosong
2016-01-01
Inter-application I/O contention and performance interference have been recognized as severe problems. In this work, we demonstrate, through measurements from Titan (the world's No. 3 supercomputer), that high I/O variance co-exists with the fact that individual storage units remain under-utilized for the majority of the time. This motivates us to propose AID, a system that performs automatic application I/O characterization and I/O-aware job scheduling. AID analyzes existing I/O traffic and batch job history logs, without any prior knowledge of applications or user/developer involvement. It identifies the small set of I/O-intensive candidates among all applications running on a supercomputer and subsequently mines their I/O patterns, using more detailed per-I/O-node traffic logs. Based on such auto-extracted information, AID provides online I/O-aware scheduling recommendations to steer I/O-intensive applications away from heavy ongoing I/O activities. We evaluate AID on Titan, using both real applications (with extracted I/O patterns validated by contacting users) and our own pseudo-applications. Our results confirm that AID is able to (1) identify I/O-intensive applications and their detailed I/O characteristics, and (2) significantly reduce these applications' I/O performance degradation/variance by jointly evaluating outstanding applications' I/O patterns and the real-time system I/O load.
Synthesizing parallel imaging applications using the CAP (computer-aided parallelization) tool
NASA Astrophysics Data System (ADS)
Gennart, Benoit A.; Mazzariol, Marc; Messerli, Vincent; Hersch, Roger D.
1997-12-01
Imaging applications such as filtering, image transforms and compression/decompression require vast amounts of computing power when applied to large data sets. These applications would potentially benefit from the use of parallel processing. However, dedicated parallel computers are expensive and their processing power per node lags behind that of the most recent commodity components. Furthermore, developing parallel applications remains a difficult task: writing and debugging the application is difficult (deadlocks), programs may not be portable from one parallel architecture to the other, and performance often comes short of expectations. In order to facilitate the development of parallel applications, we propose the CAP computer-aided parallelization tool which enables application programmers to specify at a high-level of abstraction the flow of data between pipelined-parallel operations. In addition, the CAP tool supports the programmer in developing parallel imaging and storage operations. CAP enables combining efficiently parallel storage access routines and image processing sequential operations. This paper shows how processing and I/O intensive imaging applications must be implemented to take advantage of parallelism and pipelining between data access and processing. This paper's contribution is (1) to show how such implementations can be compactly specified in CAP, and (2) to demonstrate that CAP specified applications achieve the performance of custom parallel code. The paper analyzes theoretically the performance of CAP specified applications and demonstrates the accuracy of the theoretical analysis through experimental measurements.
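The pipelining the CAP abstract emphasizes, overlapping data access with processing, can be sketched with a bounded queue between a reader stage and a compute stage. This imitates the flow-of-data style CAP expresses declaratively; it is not the CAP tool itself, and the tile strings are placeholders for real image data.

```python
# Hedged sketch: a reader thread streams image tiles while a worker
# processes previously read tiles, so I/O and compute overlap.
import queue
import threading

tiles = queue.Queue(maxsize=4)          # bounded buffer between stages

def read_tiles(n):
    for i in range(n):
        tiles.put(f"tile-{i}")          # stands in for a disk/parallel read
    tiles.put(None)                     # end-of-stream marker

def process_tiles():
    while (tile := tiles.get()) is not None:
        _ = tile.upper()                # stands in for a filter/transform

t = threading.Thread(target=read_tiles, args=(16,))
t.start()
process_tiles()
t.join()
```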
Block 4 solar cell module design and test specification for intermediate load center applications
NASA Technical Reports Server (NTRS)
1978-01-01
Requirements for performance of terrestrial solar cell modules intended for use in various test applications are established. During the 1979-80 time period, such applications are expected to be in the 20 to 500 kilowatt size range. A series of characterization and qualification tests necessary to certify the module design for production, and the necessary performance test for acceptance of modules are specified.
Performance and Scalability of the NAS Parallel Benchmarks in Java
NASA Technical Reports Server (NTRS)
Frumkin, Michael A.; Schultz, Matthew; Jin, Haoqiang; Yan, Jerry; Biegel, Bryan A. (Technical Monitor)
2002-01-01
Several features make Java an attractive choice for scientific applications. In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS (NASA Advanced Supercomputing) Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would position Java closer to Fortran in the competition for scientific applications.
Performance of OVERFLOW-D Applications based on Hybrid and MPI Paradigms on IBM Power4 System
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Biegel, Bryan (Technical Monitor)
2002-01-01
This report briefly discusses our preliminary performance experiments with parallel versions of OVERFLOW-D applications. These applications are based on MPI and hybrid paradigms on the IBM Power4 system here at the NAS Division. This work is part of an effort to determine the suitability of the system and its parallel libraries (MPI/OpenMP) for specific scientific computing objectives.
Exploiting GPUs in Virtual Machine for BioCloud
Jo, Heeseung; Jeong, Jinkyu; Lee, Myoungho; Choi, Dong Hoon
2013-01-01
Recently, biological applications have started to be reimplemented as applications which exploit the many cores of GPUs for better computation performance. Therefore, by providing virtualized GPUs to VMs in a cloud computing environment, many biological applications will willingly move into the cloud environment to enhance their computation performance and utilize the virtually infinite cloud computing resources while reducing expenses for computations. In this paper, we propose a BioCloud system architecture that enables VMs to use GPUs in a cloud environment. Because much of the previous research has focused on the sharing mechanism of GPUs among VMs, it cannot achieve enough performance for biological applications, for which computation throughput is more crucial than sharing. The proposed system exploits the pass-through mode of the PCI Express (PCI-E) channel. By making each VM able to access the underlying GPUs directly, applications can show almost the same performance as in a native environment. In addition, our scheme multiplexes GPUs by using the hot plug-in/out device features of the PCI-E channel. By adding or removing GPUs in each VM in an on-demand manner, VMs in the same physical host can time-share their GPUs. We implemented the proposed system using the Xen VMM and NVIDIA GPUs and showed that our prototype is highly effective for biological GPU applications in a cloud environment. PMID:23710465
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Y.; Cameron, K.W.
1998-11-24
Workload characterization has proven to be an essential tool for architecture design and performance evaluation in both scientific and commercial computing areas. Traditional workload characterization metrics include FLOPS rate, cache miss ratios, and CPI (cycles per instruction; or its inverse IPC, instructions per cycle). With the complexity of sophisticated modern superscalar microprocessors, these traditional characterization techniques are not powerful enough to pinpoint the performance bottleneck of an application on a specific microprocessor. They are also incapable of immediately demonstrating the potential performance benefit of any architectural or functional improvement in a new processor design. To solve these problems, many people rely on simulators, which have substantial constraints, especially on large-scale scientific computing applications. This paper presents a new technique for characterizing applications at the instruction level using hardware performance counters. It has the advantage of collecting instruction-level characteristics in a few runs virtually without overhead or slowdown. A variety of instruction counts can be utilized to calculate average abstract workload parameters corresponding to microprocessor pipelines or functional units. Based on the microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. In particular, the analysis results can provide some insight into why only a small percentage of processor peak performance can be achieved even for many very cache-friendly codes. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvements for certain workloads. Eventually, these abstract parameters can lead to the creation of an analytical microprocessor pipeline model and memory hierarchy model.
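The arithmetic behind the counter-derived metrics is compact enough to show directly. The counter values and the machine's issue width below are invented examples, not data from the paper; real counts would come from hardware performance counters.

```python
# Hedged sketch: deriving CPI/IPC and a crude peak-fraction estimate
# from raw counter-style counts. All counts are invented examples.

counters = {
    "cycles":       4.0e9,
    "instructions": 2.5e9,
    "flops":        8.0e8,
    "l1_misses":    6.0e7,
}
issue_width = 4                       # hypothetical superscalar width

cpi = counters["cycles"] / counters["instructions"]
ipc = 1.0 / cpi
peak_fraction = ipc / issue_width     # how close we run to peak issue
print(f"CPI {cpi:.2f}  IPC {ipc:.2f}  ~{100*peak_fraction:.0f}% of peak issue")
```

A run like this one (IPC well under the issue width) is exactly the "small percentage of peak even for cache-friendly codes" situation the abstract describes.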
15 CFR 996.21 - Performance of compliance testing.
Code of Federal Regulations, 2011 CFR
2011-01-01
... CERTIFICATION REQUIREMENTS FOR NOAA HYDROGRAPHIC PRODUCTS AND SERVICES QUALITY ASSURANCE AND CERTIFICATION REQUIREMENTS FOR NOAA HYDROGRAPHIC PRODUCTS AND SERVICES Certification of a Hydrographic Product and Decertification. § 996.21 Performance of compliance testing. (a) NOAA and the applicant shall submit the applicant...
15 CFR 996.21 - Performance of compliance testing.
Code of Federal Regulations, 2014 CFR
2014-01-01
... CERTIFICATION REQUIREMENTS FOR NOAA HYDROGRAPHIC PRODUCTS AND SERVICES QUALITY ASSURANCE AND CERTIFICATION REQUIREMENTS FOR NOAA HYDROGRAPHIC PRODUCTS AND SERVICES Certification of a Hydrographic Product and Decertification. § 996.21 Performance of compliance testing. (a) NOAA and the applicant shall submit the applicant...
15 CFR 996.21 - Performance of compliance testing.
Code of Federal Regulations, 2013 CFR
2013-01-01
... CERTIFICATION REQUIREMENTS FOR NOAA HYDROGRAPHIC PRODUCTS AND SERVICES QUALITY ASSURANCE AND CERTIFICATION REQUIREMENTS FOR NOAA HYDROGRAPHIC PRODUCTS AND SERVICES Certification of a Hydrographic Product and Decertification. § 996.21 Performance of compliance testing. (a) NOAA and the applicant shall submit the applicant...
15 CFR 996.21 - Performance of compliance testing.
Code of Federal Regulations, 2012 CFR
2012-01-01
... CERTIFICATION REQUIREMENTS FOR NOAA HYDROGRAPHIC PRODUCTS AND SERVICES QUALITY ASSURANCE AND CERTIFICATION REQUIREMENTS FOR NOAA HYDROGRAPHIC PRODUCTS AND SERVICES Certification of a Hydrographic Product and Decertification. § 996.21 Performance of compliance testing. (a) NOAA and the applicant shall submit the applicant...
Li-Ion Pouch Cell Designs; Performance and Issues for Crewed Vehicle Applications
NASA Technical Reports Server (NTRS)
Darcy, Eric
2011-01-01
The purpose of this work is to determine whether there are any performance show-stoppers for spinning these cell designs into spacecraft applications: (1) Are the seals compatible with extended vacuum operations? (2) How uniformly and cleanly are the cells made? (3) How durable are they?
40 CFR 60.180 - Applicability and designation of affected facility.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Primary Lead Smelters § 60.180 Applicability and designation of affected facility. (a) The...: sintering machine, sintering machine discharge end, blast furnace, dross reverberatory furnace, electric...
40 CFR 60.180 - Applicability and designation of affected facility.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Primary Lead Smelters § 60.180 Applicability and designation of affected facility. (a) The...: sintering machine, sintering machine discharge end, blast furnace, dross reverberatory furnace, electric...
GROUND-WATER MODEL TESTING: SYSTEMATIC EVALUATION AND TESTING OF CODE FUNCTIONALITY AND PERFORMANCE
Effective use of ground-water simulation codes as management decision tools requires the establishment of their functionality, performance characteristics, and applicability to the problem at hand. This is accomplished through application of a systematic code-testing protocol and...
2006-09-01
classification by making it applicant-centric while improving job satisfaction and performance, reducing attrition, and increasing continuation...produce greater job satisfaction, increase performance, and lengthen tenure. The difficulty the Navy faces is that enlisted applicants have limited work...P-J) fit. Empirically, job performance, employee satisfaction, and retention are contingent upon appropriately matching personnel with their desired
40 CFR Table 7 of Subpart Yyyy of... - Applicability of General Provisions to Subpart YYYY
Code of Federal Regulations, 2010 CFR
2010-07-01
... provisions Yes § 63.7(g) Performance test data analysis, recordkeeping, and reporting Yes § 63.7(h) Waiver of... conducting performance tests Yes § 63.7(e)(2) Conduct of performance tests and reduction of data Yes Subpart... Yes § 63.8(g) Data reduction Yes Except that provisions for COMS are not applicable. Averaging periods...
Review of Aircraft Engine Fan Noise Reduction
NASA Technical Reports Server (NTRS)
VanZante, Dale
2008-01-01
Aircraft turbofan engines incorporate multiple technologies to enhance performance and durability while reducing noise emissions. Both careful aerodynamic design of the fan and proper installation of the fan into the system are requirements for achieving the performance and acoustic objectives. The design and installation characteristics of high performance aircraft engine fans will be discussed along with some lessons learned that may be applicable to spaceflight fan applications.
A High Performance Image Data Compression Technique for Space Applications
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Venbrux, Jack
2003-01-01
A high-performance image data compression technique is currently being developed for space science applications under the requirements of high-speed and pushbroom scanning. The technique is also applicable to frame-based imaging data. The algorithm combines a two-dimensional transform with bitplane encoding; this results in an embedded bit string with the exact desired compression rate specified by the user. The compression scheme performs well on a suite of test images acquired from spacecraft instruments. It can also be applied to three-dimensional data cubes resulting from hyperspectral imaging instruments. Flight-qualifiable hardware implementations are in development. The implementation is being designed to compress data in excess of 20 Msamples/sec and support quantization from 2 to 16 bits. This paper presents the algorithm, its applications and the status of development.
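The embedded property the abstract mentions, where truncating the bit string at any point yields the requested rate, comes from emitting coefficient bitplanes most-significant first. The toy below shows only that ordering-and-truncation idea; the tiny "coefficient" array is invented, and a real coder (and the 2-D transform itself) is far more elaborate.

```python
# Hedged sketch of embedded bitplane encoding with rate control by
# truncation. Coefficients are invented stand-ins for transform output.
import numpy as np

coeffs = np.array([[37, -21], [5, -2]])         # pretend transform output
mag = np.abs(coeffs).astype(np.uint8)

bits = []
for plane in range(7, -1, -1):                  # MSB -> LSB bitplanes
    bits.extend(((mag >> plane) & 1).flatten().tolist())

target_bits = 10                                # user-chosen rate
embedded = bits[:target_bits]                   # truncation = rate control
print(embedded)
```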
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panyala, Ajay; Chavarría-Miranda, Daniel; Manzano, Joseph B.
High performance, parallel applications with irregular data accesses are becoming a critical workload class for modern systems. In particular, the execution of such workloads on emerging many-core systems is expected to be a significant component of applications in data mining, machine learning, scientific computing and graph analytics. However, power and energy constraints limit the capabilities of individual cores, the memory hierarchy and the on-chip interconnect of such systems, thus leading to architectural and software trade-offs that must be understood in the context of the intended application's behavior. Irregular applications are notoriously hard to optimize given their data-dependent access patterns, lack of structured locality and complex data structures and code patterns. We have ported two irregular applications, graph community detection using the Louvain method (Grappolo) and high-performance conjugate gradient (HPCCG), to the Tilera many-core system and have conducted a detailed study of platform-independent and platform-specific optimizations that improve their performance as well as reduce their overall energy consumption. To conduct this study, we employ an auto-tuning based approach that explores the optimization design space along three dimensions: memory layout schemes, GCC compiler flag choices and OpenMP loop scheduling options. We leverage MIT's OpenTuner auto-tuning framework to explore and recommend energy-optimal choices for different combinations of parameters. We then conduct an in-depth architectural characterization to understand the memory behavior of the selected workloads. Finally, we perform a correlation study to demonstrate the interplay between the hardware behavior and application characteristics. Using auto-tuning, we demonstrate whole-node energy savings and performance improvements of up to 49.6% and 60% relative to a baseline instantiation, and up to 31% and 45.4% relative to manually optimized variants.
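The three-dimensional design space in the abstract can be made concrete with a tiny tuning loop. The paper drives this search with OpenTuner; the sketch below substitutes an exhaustive search and a stand-in cost function for the real compile-run-measure step, and every option list and cost factor is an invented assumption.

```python
# Hedged sketch of the tuning loop: score each (layout, flag, schedule)
# combination and keep the best. A real harness would compile and run
# the application and measure time/energy instead of this stand-in.
import itertools

layouts   = ["row-major", "blocked"]
flags     = ["-O2", "-O3"]
schedules = ["static", "dynamic", "guided"]

def measure(layout, flag, schedule):
    # stand-in for a compile + run + energy measurement
    cost = {"row-major": 1.3, "blocked": 1.0}[layout]
    cost *= {"-O2": 1.1, "-O3": 1.0}[flag]
    cost *= {"static": 1.2, "dynamic": 1.0, "guided": 1.05}[schedule]
    return cost

best = min(itertools.product(layouts, flags, schedules),
           key=lambda cfg: measure(*cfg))
print("best configuration:", best)
```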
Applicability of common stomatal conductance models in maize under varying soil moisture conditions.
Wang, Qiuling; He, Qijin; Zhou, Guangsheng
2018-07-01
In the context of climate warming, varying soil moisture caused by changing precipitation patterns will affect the applicability of stomatal conductance models, thereby affecting the simulation accuracy of carbon-nitrogen-water cycles in ecosystems. We studied the applicability of four common stomatal conductance models, the Jarvis, Ball-Woodrow-Berry (BWB), Ball-Berry-Leuning (BBL) and unified stomatal optimization (USO) models, based on summer maize leaf gas exchange data from a soil moisture consecutive decrease manipulation experiment. The results showed that the USO model performed best, followed by the BBL and BWB models, while the Jarvis model performed worst under varying soil moisture conditions. The effects of soil moisture made a difference in the relative performance among the models. By introducing a water response function, the performance of the Jarvis, BWB, and USO models improved, decreasing the normalized root mean square error (NRMSE) by 15.7%, 16.6% and 3.9%, respectively; however, the effect on the BBL model was negative, increasing the NRMSE by 5.3%. The Jarvis, BWB, BBL and USO models were applicable within different ranges of soil relative water content (i.e., 55%-65%, 56%-67%, 37%-79% and 37%-95%, respectively) based on the 95% confidence limits. Moreover, introducing a water response function improved the applicability of the Jarvis and BWB models. The USO model performed best with or without the water response function and was applicable under varying soil moisture conditions. Our results provide a basis for selecting appropriate stomatal conductance models under drought conditions. Copyright © 2018 Elsevier B.V. All rights reserved.
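For reference, the four models compared above are commonly written as follows. These are the standard literature forms, not equations quoted from this paper; symbols: A is net assimilation, h_s and C_s the relative humidity and CO2 concentration at the leaf surface, D the vapour pressure deficit, Γ the CO2 compensation point, and g_0, g_1, a_1 fitted parameters.

```latex
% Commonly cited forms of the four stomatal conductance models
\begin{align}
g_s &= g_{max}\, f_1(Q)\, f_2(T)\, f_3(D)\, f_4(\theta)            && \text{(Jarvis)}\\
g_s &= g_0 + a_1 \frac{A\, h_s}{C_s}                               && \text{(BWB)}\\
g_s &= g_0 + \frac{a_1 A}{(C_s - \Gamma)\left(1 + D/D_0\right)}    && \text{(BBL)}\\
g_s &= g_0 + 1.6\left(1 + \frac{g_1}{\sqrt{D}}\right)\frac{A}{C_a} && \text{(USO)}
\end{align}
```

The water response function the abstract mentions would typically multiply the right-hand side by an empirical factor of soil relative water content.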
Neuromodulation research and application in the U.S. Department of Defense.
Nelson, Jeremy T; Tepe, Victoria
2015-01-01
Modern neuromodulatory techniques for military applications have been explored for the past decade, with an intent to optimize operator performance and, ultimately, to improve overall military effectiveness. In light of potential military applications, some researchers have voiced concern about national security agency involvement in this area of research, and possible exploitation of research findings to support military objectives. The aim of this article is to examine the U.S. Department of Defense's interest in and application of neuromodulation. We explored articles, cases, and historical context to identify critical considerations of debate concerning dual use (i.e., national security and civilian) technologies, specifically focusing on non-invasive brain stimulation (NIBS). We review the background and recent examples of DoD-sponsored neuromodulation research, framed in the more general context of research that aims to optimize and/or rehabilitate human performance. We propose that concerns about military exploitation of neuromodulatory science and technology are not unique, but rather are part of a larger philosophic debate pertaining to military application of human performance science and technology. We consider unique aspects of the Department of Defense research enterprise--which includes programs crucial to the advancement of military medicine--and why it is well-situated to fund and perform such research. We conclude that debate concerning DoD investment in human performance research must recognize the significant potential for dual use (civilian, medical) benefit as well as the need for civilian scientific insight and influence. Military interests in the health and performance of service members provide research funding and impetus to dual use applications that will benefit the civilian community. Copyright © 2015 Elsevier Inc. All rights reserved.
HSCT4.0 Application: Software Requirements Specification
NASA Technical Reports Server (NTRS)
Salas, A. O.; Walsh, J. L.; Mason, B. H.; Weston, R. P.; Townsend, J. C.; Samareh, J. A.; Green, L. L.
2001-01-01
The software requirements for the High Performance Computing and Communication Program High Speed Civil Transport application project, referred to as HSCT4.0, are described. The objective of the HSCT4.0 application project is to demonstrate the application of high-performance computing techniques to the problem of multidisciplinary design optimization of a supersonic transport configuration, using high-fidelity analysis simulations. Descriptions of the various functions (and the relationships among them) that make up the multidisciplinary application, as well as the constraints on the software design, are provided. This document serves to establish an agreement between the suppliers and the customer as to what the HSCT4.0 application should do and provides to the software developers the information necessary to design and implement the system.
Hukerikar, Saurabh; Teranishi, Keita; Diniz, Pedro C.; ...
2017-02-11
In the presence of accelerated fault rates, which are projected to be the norm on future exascale systems, it will become increasingly difficult for high-performance computing (HPC) applications to accomplish useful computation. Due to the fault-oblivious nature of current HPC programming paradigms and execution environments, HPC applications are insufficiently equipped to deal with errors. We believe that HPC applications should be enabled with capabilities to actively search for and correct errors in their computations. The redundant multithreading (RMT) approach offers lightweight replicated execution streams of program instructions within the context of a single application process. However, the use of complete redundancy incurs significant overhead to the application performance.
Virtual tape measure for the operating microscope: system specifications and performance evaluation.
Kim, M Y; Drake, J M; Milgram, P
2000-01-01
The Virtual Tape Measure for the Operating Microscope (VTMOM) was created to assist surgeons in making accurate 3D measurements of anatomical structures seen in the surgical field under the operating microscope. The VTMOM employs augmented reality techniques by combining stereoscopic video images with stereoscopic computer graphics, and functions by relying on an operator's ability to align a 3D graphic pointer, which serves as the end-point of the virtual tape measure, with designated locations on the anatomical structure being measured. The VTMOM was evaluated for its baseline and application performances as well as its application efficacy. Baseline performance was determined by measuring the mean error (bias) and standard deviation of error (imprecision) in measurements of non-anatomical objects. Application performance was determined by comparing the error in measuring the dimensions of aneurysm models with and without the VTMOM. Application efficacy was determined by comparing the error in selecting the appropriate aneurysm clip size with and without the VTMOM. Baseline performance indicated a bias of 0.3 mm and an imprecision of 0.6 mm. Application bias was 3.8 mm and imprecision was 2.8 mm for aneurysm diameter. The VTMOM did not improve aneurysm clip size selection accuracy. The VTMOM is a potentially accurate tool for use under the operating microscope. However, its performance when measuring anatomical objects is highly dependent on complex visual features of the object surfaces. Copyright 2000 Wiley-Liss, Inc.
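The two accuracy figures used in the abstract above have a simple definition: bias is the mean signed error over repeated measurements, and imprecision is their standard deviation. The sketch below just computes them; the error samples are invented, not the study's data.

```python
# Hedged sketch: bias (mean error) and imprecision (std of error)
# from repeated measurements. Sample errors are invented.
import statistics

errors_mm = [0.4, -0.2, 0.9, 0.1, 0.3, -0.5, 0.8, 0.6]  # measured - true
bias = statistics.mean(errors_mm)
imprecision = statistics.stdev(errors_mm)
print(f"bias {bias:.1f} mm, imprecision {imprecision:.1f} mm")
```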
Zygouris, Stelios; Ntovas, Konstantinos; Giakoumis, Dimitrios; Votis, Konstantinos; Doumpoulakis, Stefanos; Segkouli, Sofia; Karagiannidis, Charalampos; Tzovaras, Dimitrios; Tsolaki, Magda
2017-01-01
It has been demonstrated that virtual reality (VR) applications can be used for the detection of mild cognitive impairment (MCI). The aim of this study is to provide a preliminary investigation of whether a VR cognitive training application can be used to detect MCI in persons using the application at home without the help of an examiner. Two groups, one of healthy older adults (n = 6) and one of MCI patients (n = 6), were recruited from Thessaloniki day centers for cognitive disorders and provided with a tablet PC with custom software enabling the self-administration of the Virtual Super Market (VSM) cognitive training exercise. The average performance (from 20 administrations of the exercise) of the two groups was compared and was also correlated with performance in established neuropsychological tests. Average performance in terms of duration to complete the given exercise differed significantly between the healthy (μ = 247.41 s, sd = 89.006) and MCI (μ = 454.52 s, sd = 177.604) groups, yielding a correct classification rate of 91.8% with a sensitivity and specificity of 94% and 89%, respectively, for MCI detection. Average performance also correlated significantly with performance in the Functional Cognitive Assessment Scale (FUCAS), Test of Everyday Attention (TEA), and Rey-Osterrieth Complex Figure test (ROCFT). The VR application exhibited very high accuracy in detecting MCI, while all participants were able to operate the tablet and application on their own. Diagnostic accuracy was improved compared to a previous study using data from only one administration of the exercise. The results of the present study suggest that remote MCI detection through VR applications can be feasible.
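The classification arithmetic the abstract reports (accuracy, sensitivity, specificity) amounts to thresholding the average exercise duration. The sketch below shows that computation on invented durations; the real study's values and cut-off are not given here.

```python
# Hedged sketch: threshold average exercise duration and report
# accuracy, sensitivity, specificity. Durations are invented examples
# (the study had n = 6 per group).

healthy = [230.0, 260.5, 210.8, 301.2, 247.9, 362.0]   # seconds
mci     = [460.1, 390.7, 520.3, 410.9, 498.2, 444.6]

threshold = 350.0                       # hypothetical cut-off
tp = sum(d > threshold for d in mci)    # MCI correctly flagged
tn = sum(d <= threshold for d in healthy)
sensitivity = tp / len(mci)
specificity = tn / len(healthy)
accuracy = (tp + tn) / (len(mci) + len(healthy))
print(sensitivity, specificity, accuracy)
```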
Trends in HFE Methods and Tools and Their Applicability to Safety Reviews
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Hara, J.M.; Plott, C.; Milanski, J.
2009-09-30
The U.S. Nuclear Regulatory Commission (NRC) conducts human factors engineering (HFE) safety reviews of applicant submittals for new plants and for changes to existing plants. The reviews include the evaluation of the methods and tools (M&Ts) used by applicants as part of their HFE program. The technology used to perform HFE activities has been rapidly evolving, resulting in a whole new generation of HFE M&Ts. The objectives of this research were to identify the current trends in HFE methods and tools, determine their applicability to NRC safety reviews, and identify topics for which the NRC may need additional guidance to support its safety reviews. We conducted a survey that identified over 100 new HFE M&Ts. The M&Ts were assessed to identify general trends. Seven trends were identified: Computer Applications for Performing Traditional Analyses, Computer-Aided Design, Integration of HFE Methods and Tools, Rapid Development Engineering, Analysis of Cognitive Tasks, Use of Virtual Environments and Visualizations, and Application of Human Performance Models. We assessed each trend to determine its applicability to the NRC's reviews by considering (1) whether the nuclear industry is making use of M&Ts for each trend, and (2) whether M&Ts reflecting the trend can be reviewed using the current design review guidance. We concluded that M&T trends that are applicable to the commercial nuclear industry and are expected to impact safety reviews may be considered for review guidance development. Three trends fell into this category: Analysis of Cognitive Tasks, Use of Virtual Environments and Visualizations, and Application of Human Performance Models. The other trends do not need to be addressed at this time.
DOT National Transportation Integrated Search
2011-06-01
This study evaluated the longevity of corrosion inhibitors and the performance of inhibited deicer products under storage or after pavement application. No significant degradation of corrosion inhibitor or loss of chlorides was seen during the months...
40 CFR 60.190 - Applicability and designation of affected facility.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Primary Aluminum Reduction Plants § 60.190 Applicability and designation of affected facility. (a) The affected facilities in primary aluminum reduction plants to which this subpart applies are...
ERIC Educational Resources Information Center
Gao, Yuan
2015-01-01
This article emphasizes the urgent demand for measurements of university internationalization and proposes a new approach to develop a set of internationally applicable indicators for measuring university internationalization performance. The article looks into existing instruments developed for assessing university internationalization,…
Further applications for mosaic pixel FPA technology
NASA Astrophysics Data System (ADS)
Liddiard, Kevin C.
2011-06-01
In previous papers to this SPIE forum the development of novel technology for next generation PIR security sensors has been described. This technology combines the mosaic pixel FPA concept with low cost optics and purpose-designed readout electronics to provide a higher performance and affordable alternative to current PIR sensor technology, including an imaging capability. Progressive development has resulted in increased performance and transition from conventional microbolometer fabrication to manufacture on 8 or 12 inch CMOS/MEMS fabrication lines. A number of spin-off applications have been identified. In this paper two specific applications are highlighted: high performance imaging IRFPA design and forest fire detection. The former involves optional design for small pixel high performance imaging. The latter involves cheap expendable sensors which can detect approaching fire fronts and send alarms with positional data via mobile phone or satellite link. We also introduce to this SPIE forum the application of microbolometer IR sensor technology to IoT, the Internet of Things.
Integrating Reconfigurable Hardware-Based Grid for High Performance Computing
Dondo Gazzano, Julio; Sanchez Molina, Francisco; Rincon, Fernando; López, Juan Carlos
2015-01-01
FPGAs have shown several characteristics that make them very attractive for high performance computing (HPC). The impressive speed-up factors that they are able to achieve, the reduced power consumption, and the ease and flexibility of the design process with fast iterations between consecutive versions are examples of the benefits obtained with their use. However, there are still some difficulties when using reconfigurable platforms as accelerators that need to be addressed: the need for an in-depth application study to identify potential acceleration, the lack of tools for the deployment of computational problems on distributed hardware platforms, and the low portability of components, among others. This work proposes a complete grid infrastructure for distributed high performance computing based on dynamically reconfigurable FPGAs. Besides, a set of services designed to facilitate application deployment is described. An example application and a comparison with other hardware and software implementations are shown. Experimental results show that the proposed architecture offers encouraging advantages for the deployment of high performance distributed applications, simplifying the development process. PMID:25874241
Performance of the Cray T3D and Emerging Architectures on Canopy QCD Applications
NASA Astrophysics Data System (ADS)
Fischler, Mark; Uchima, Mike
1996-03-01
The Cray T3D, an MIMD system with NUMA shared memory capabilities and, in principle, very low communications latency, can support the Canopy framework for grid-oriented applications. Canopy has been ported to the T3D, with the intent of making it available to a spectrum of users. The performance of the T3D running Canopy has been benchmarked on five QCD applications extensively run on ACPMAPS at Fermilab, requiring a variety of data access patterns. The net performance and scaling behavior reveal an efficiency relative to peak Gflops almost identical to that achieved on ACPMAPS. Detailed studies of the major factors impacting performance are presented. Generalizations applying this analysis to the newly emerging crop of commercial systems reveal where their limitations will lie. On these applications, efficiencies above 25% are not to be expected; eliminating overheads due to Canopy will improve matters, but by less than a factor of two.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Washeleski, Robert L.; Meyer, Edmond J. IV; King, Lyon B.
2013-10-15
Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.
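The key idea, keeping the non-arrival (zero-photon) shots in the likelihood, can be illustrated with a Poisson model of per-shot photon counts: dropping the zeros would bias the estimated rate upward. The counts below are invented, and the real analysis is considerably more involved than this toy.

```python
# Hedged sketch: Poisson maximum likelihood estimation that retains
# zero-count (non-arrival) laser shots. Counts are invented examples.
import math

counts = [0, 1, 0, 0, 2, 0, 1, 0, 0, 0, 3, 0]   # photons per laser shot

def neg_log_likelihood(lam, data):
    # Poisson NLL: sum over shots of (lam - k*log(lam) + log(k!))
    return sum(lam - k * math.log(lam) + math.lgamma(k + 1) for k in data)

# Scan candidate rates; the minimum sits at the sample mean, zeros included.
lams = [i / 100 for i in range(1, 301)]
mle = min(lams, key=lambda l: neg_log_likelihood(l, counts))
print(mle, sum(counts) / len(counts))   # both ~0.58 photons/shot
```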
Performance Analysis of Scientific and Engineering Applications Using MPInside and TAU
NASA Technical Reports Server (NTRS)
Saini, Subhash; Mehrotra, Piyush; Taylor, Kenichi Jun Haeng; Shende, Sameer Suresh; Biswas, Rupak
2010-01-01
In this paper, we present performance analysis of two NASA applications using performance tools like Tuning and Analysis Utilities (TAU) and SGI MPInside. MITgcmUV and OVERFLOW are two production-quality applications used extensively by scientists and engineers at NASA. MITgcmUV is a global ocean simulation model, developed by the Estimating the Circulation and Climate of the Ocean (ECCO) Consortium, for solving the fluid equations of motion using the hydrostatic approximation. OVERFLOW is a general-purpose Navier-Stokes solver for computational fluid dynamics (CFD) problems. Using these tools, we analyze the MPI functions (MPI_Sendrecv, MPI_Bcast, MPI_Reduce, MPI_Allreduce, MPI_Barrier, etc.) with respect to message size of each rank, time consumed by each function, and how ranks communicate. MPI communication is further analyzed by studying the performance of MPI functions used in these two applications as a function of message size and number of cores. Finally, we present the compute time, communication time, and I/O time as a function of the number of cores.
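A minimal version of the kind of measurement behind such an analysis is timing a collective as a function of message size. The probe below is a toy using mpi4py (assumed installed), not MPInside or TAU instrumentation, and the chosen sizes are arbitrary.

```python
# Hedged sketch: time MPI_Allreduce versus message size.
import numpy as np
from mpi4py import MPI  # assumes mpi4py is installed

comm = MPI.COMM_WORLD
for n in (1024, 16384, 262144):          # message sizes in doubles
    send = np.ones(n)
    recv = np.empty(n)
    comm.Barrier()                       # align ranks before timing
    t0 = MPI.Wtime()
    comm.Allreduce(send, recv, op=MPI.SUM)
    dt = MPI.Wtime() - t0
    if comm.Get_rank() == 0:
        print(f"Allreduce of {n:7d} doubles: {dt*1e6:8.1f} us")
```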
Performance Evaluation of a Data Validation System
NASA Technical Reports Server (NTRS)
Wong, Edmond (Technical Monitor); Sowers, T. Shane; Santi, L. Michael; Bickford, Randall L.
2005-01-01
Online data validation is a performance-enhancing component of modern control and health management systems. It is essential that performance of the data validation system be verified prior to its use in a control and health management system. A new Data Qualification and Validation (DQV) Test-bed application was developed to provide a systematic test environment for this performance verification. The DQV Test-bed was used to evaluate a model-based data validation package known as the Data Quality Validation Studio (DQVS). DQVS was employed as the primary data validation component of a rocket engine health management (EHM) system developed under NASA's NGLT (Next Generation Launch Technology) program. In this paper, the DQVS and DQV Test-bed software applications are described, and the DQV Test-bed verification procedure for this EHM system application is presented. Test-bed results are summarized and implications for EHM system performance improvements are discussed.
REMOTE, a Wireless Sensor Network Based System to Monitor Rowing Performance
Llosa, Jordi; Vilajosana, Ignasi; Vilajosana, Xavier; Navarro, Nacho; Suriñach, Emma; Marquès, Joan Manuel
2009-01-01
In this paper, we take a hard look at the performance of REMOTE, a sensor network based application that provides a detailed picture of boat movement, individual rower performance, or a rower's performance compared with other crew members. The application analyzes data gathered with a WSN strategically deployed over a boat to obtain information on the boat and oar movements. Functionalities of REMOTE are compared to those of the RowX [1] outdoor instrument, a commercial wired sensor instrument designed for similar purposes. This study demonstrates that, with a smart geometrical configuration of the sensors, the rotation and translation of the oars and boat can be obtained. Three different tests are performed: laboratory calibration allows us to become familiar with the accelerometer readings and validate the theory, ergometer tests help us set the acquisition parameters, and on-boat tests show the application potential of these technologies in sports. PMID:22423204
Verifax: Biometric instruments measuring neuromuscular disorders/performance impairments
NASA Astrophysics Data System (ADS)
Morgenthaler, George W.; Shrairman, Ruth; Landau, Alexander
1998-01-01
VeriFax, founded in 1990 by Dr. Ruth Shrairman and Mr. Alex Landau, began operations with the aim of developing a biometric tool for the verification of signatures from a distance. In the course of developing this VeriFax Autograph technology, two other related applications for the technologies under development at VeriFax became apparent. The first application was in the use of biometric measurements as clinical monitoring tools for physicians investigating neuromuscular diseases (embodied in VeriFax's Neuroskill technology). The second application was to evaluate persons with critical skills (e.g., airline pilots, bus drivers) for physical and mental performance impairments caused by stress, physiological disorders, alcohol, drug abuse, etc. (represented by VeriFax's Impairoscope prototype instrument). This last application raised the possibility of using a space-qualified Impairoscope variant to evaluate astronaut performance with respect to the impacts of stress, fatigue, excessive workload, build-up of toxic chemicals within the space habitat, etc. The three applications of VeriFax's patented technology are accomplished by application-specific modifications of the customized VeriFax software. Strong commercial market potentials exist for all three VeriFax technology applications, and market progress will be presented in more detail below.
Development of Magneto-Resistive Angular Position Sensors for Space Applications
NASA Astrophysics Data System (ADS)
Hahn, Robert; Langendorf, Sven; Seifart, Klaus; Slatter, Rolf; Olberts, Bastian; Romera, Fernando
2015-09-01
Magnetic microsystems in the form of magneto-resistive (MR) sensors are firmly established in automobiles and industrial applications. They measure path, angle, electrical current, or magnetic fields. MR technology opens up new sensor possibilities in space applications and can be an enabling technology for optimal performance, high robustness and long lifetime at reasonable cost. In a recent assessment study performed by HTS GmbH and Sensitec GmbH under ESA contract, a market survey confirmed that the space industry has a very high interest in novel, contactless position sensors based on MR technology. Now, a detailed development stage is being pursued to advance the sensor design up to Engineering Qualification Model (EQM) level and to perform qualification testing for a representative pilot space application. The paper briefly reviews the basics of magneto-resistive effects and possible sensor applications and describes the key benefits of MR angular sensors with reference to currently operational industrial and space applications. The results of the assessment study are presented, and potential applications and uses of contactless magneto-resistive angular sensors for spacecraft are identified. The baseline mechanical and electrical sensor design will be discussed. An outlook on the EQM development and qualification tests is provided.
Validation and evaluation of common large-area display set (CLADS) performance specification
NASA Astrophysics Data System (ADS)
Hermann, David J.; Gorenflo, Ronald L.
1998-09-01
Battelle is under contract with Warner Robins Air Logistics Center to design a Common Large Area Display Set (CLADS) for use in multiple Command, Control, Communications, Computers, and Intelligence (C4I) applications that currently use 19-inch Cathode Ray Tubes (CRTs). Battelle engineers have built and fully tested pre-production prototypes of the CLADS design for AWACS and are completing pre-production prototype displays for three other platforms simultaneously. With the CLADS design, any display technology that can be packaged to meet the form, fit, and function requirements defined by the Common Large Area Display Head Assembly (CLADHA) performance specification is a candidate for CLADS applications. This technology-independent feature reduced the risk of CLADS development, permits life-long technology insertion upgrades without unnecessary redesign, and addresses many of the obsolescence problems associated with COTS technology-based acquisition. Performance and environmental testing were performed on the AWACS CLADS and continue on other platforms as part of the performance specification validation process. A simulator assessment and flight assessment were successfully completed for the AWACS CLADS, and lessons learned from these assessments are being incorporated into the performance specifications. Draft CLADS specifications were released to potential display integrators and manufacturers for review in 1997, and the final version of the performance specifications is scheduled to be released to display integrators and manufacturers in May 1998. Initial USAF applications include replacements for the E-3 AWACS color monitor assembly, E-8 Joint STARS graphics display unit, and ABCCC airborne color display. Initial U.S. Navy applications include the E-2C ACIS display. For these applications, reliability and maintainability are key objectives. The common design will reduce the cost of operation and maintenance by an estimated $3.3M per year on E-3 AWACS alone. It is realistic to anticipate savings of over $30M per year as CLADS is implemented widely across DoD applications. As commonality and open systems interfaces begin to surface in DoD applications, the CLADS architecture can easily and cost-effectively absorb the changes and avoid COTS obsolescence issues.
Design and Testing of a Combustor for a Turbo-Ramjet Engine for UAV and Missile Applications
2003-03-01
Master's thesis by Ross H. Piper III, Naval Postgraduate School, Monterey, CA 93943-5000. (The record retains only report-documentation-page fragments; cited work includes S. M. Al-Namani, Development of Shrouded Turbojet to Form a Turboramjet for Future Missile Applications, Master's Thesis, September 1999.)
Indoor Light Performance of Coil Type Cylindrical Dye Sensitized Solar Cells.
Kapil, Gaurav; Ogomi, Yuhei; Pandey, Shyam S; Ma, Tingli; Hayase, Shuzi
2016-04-01
Very good performance under low or diffuse light is one of the application areas in which dye-sensitized solar cells (DSSCs) can be utilized effectively compared with their inorganic silicon counterparts. In this article, we investigated the 1 SUN and low-intensity fluorescent light performance of titanium (Ti) coil-based cylindrical DSSCs (C-DSSCs) using the ruthenium-based N719 dye and the organic dyes D205 and Y123. Electrochemical impedance spectroscopy results were analyzed to explain the variation in solar cell performance. A reflecting mirror with parabolic geometry was also utilized as a concentrator to tap diffuse light for indoor applications. Fluorescent light at relatively low illumination intensities (0.2 mW/cm2 to 0.5 mW/cm2) was used to investigate TCO-less C-DSSC performance with and without the reflector geometry. Furthermore, the DSSC performance was analyzed and compared with a commercially available amorphous silicon solar cell for indoor applications.
Impact of memory bottleneck on the performance of graphics processing units
NASA Astrophysics Data System (ADS)
Son, Dong Oh; Choi, Hong Jun; Kim, Jong Myon; Kim, Cheol Hong
2015-12-01
Recent graphics processing units (GPUs) can process general-purpose applications as well as graphics applications with the help of various user-friendly application programming interfaces (APIs) supported by GPU vendors. Unfortunately, utilizing the hardware resources in the GPU efficiently is a challenging problem, since the GPU architecture is totally different from the traditional CPU architecture. To solve this problem, many studies have focused on techniques for improving system performance using GPUs. In this work, we analyze GPU performance while varying GPU parameters such as the number of cores and the clock frequency. According to our simulations, GPU performance improves by 125.8% and 16.2% on average as the number of cores and the clock frequency increase, respectively. However, performance saturates when memory bottleneck problems occur due to huge data requests to the memory. The performance of GPUs can be improved by changing GPU parameters dynamically to reduce the memory bottleneck.
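The saturation behavior described above can be illustrated with a simple roofline-style sketch (illustrative only, not the authors' simulator); all parameters, such as mem_bw_gbs and bytes_per_flop, are hypothetical values.

```python
# A roofline-style saturation sketch: attainable throughput is the smaller of
# the aggregate compute rate and the rate at which memory can feed operands.

def gpu_throughput(cores, clock_ghz, flops_per_cycle=2,
                   mem_bw_gbs=320, bytes_per_flop=0.25):
    """Attainable GFLOP/s for a hypothetical GPU configuration."""
    compute_rate = cores * clock_ghz * flops_per_cycle   # GFLOP/s
    memory_rate = mem_bw_gbs / bytes_per_flop            # GFLOP/s memory can sustain
    return min(compute_rate, memory_rate)

for cores in (128, 256, 512, 1024, 2048):
    print(cores, gpu_throughput(cores, clock_ghz=1.0))
# Throughput doubles with the core count until it hits the 1280 GFLOP/s
# memory ceiling; past that point, extra cores or clock speed buy nothing.
```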
NASA Astrophysics Data System (ADS)
Raj, Anil; Wins, K. Leo Dev; Varadarajan, A. S.
2016-09-01
Cutting fluid plays a significant role in the manufacturing industries, acting as both a coolant and a lubricant. The conventional flood application of cutting fluids not only increases production cost on account of the expenses involved in procurement, storage, and disposal, but also creates serious environmental and health hazards. To overcome these negative effects, techniques like minimum quantity lubrication (MQL) and minimal cutting fluid application (MCFA) have increasingly found their way into metal cutting and have been established as alternatives to conventional wet machining. This paper investigates the effect of MCFA, in which a high-velocity pulsed jet of proprietary cutting fluid is applied at the contact zones using a special fluid application system, on cutting temperature and tool wear during hard turning of oil-hardened non-shrinkable steel (OHNS), and compares its performance with MQL-assisted hard turning, in which the cutting fluid is carried in a high-velocity stream of air. Turning with MCFA and MQL was also compared with conventional wet and dry turning by analysing the tool wear pattern using SEM images.
Multiresource allocation and scheduling for periodic soft real-time applications
NASA Astrophysics Data System (ADS)
Gopalan, Kartik; Chiueh, Tzi-cker
2001-12-01
Real-time applications that utilize multiple system resources, such as CPU, disks, and network links, require coordinated scheduling of these resources in order to meet their end-to-end performance requirements. Most state-of-the-art operating systems support independent resource allocation and deadline-driven scheduling but lack coordination among multiple heterogeneous resources. This paper describes the design and implementation of an Integrated Real-time Resource Scheduler (IRS) that performs coordinated allocation and scheduling of multiple heterogeneous resources on the same machine for periodic soft real-time applications. The principal feature of IRS is a heuristic multi-resource allocation algorithm that reserves multiple resources for real-time applications in a manner that maximizes the number of applications admitted into the system in the long run. At run-time, a global scheduler dispatches the tasks of a soft real-time application to individual resource schedulers according to the precedence constraints between tasks. The individual resource schedulers, which could be any deadline-based schedulers, make scheduling decisions locally and yet collectively satisfy a real-time application's performance requirements. The tightness of the overall timing guarantees is ultimately determined by the properties of the individual resource schedulers. However, IRS maximizes overall system resource utilization efficiency by coordinating deadline assignment across multiple tasks in a soft real-time application.
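A minimal sketch of the admission-control idea, assuming each application declares a per-resource utilization demand; this greedy check is illustrative, not the paper's actual heuristic.

```python
# Multi-resource admission control in the spirit of IRS: an application is
# admitted only if every resource can still honor all existing reservations.

resources = {"cpu": 0.0, "disk": 0.0, "net": 0.0}  # reserved fractions so far

def try_admit(app_demand):
    """app_demand maps resource name -> fraction of that resource per period."""
    if all(resources[r] + d <= 1.0 for r, d in app_demand.items()):
        for r, d in app_demand.items():
            resources[r] += d        # record the reservation
        return True
    return False

print(try_admit({"cpu": 0.4, "disk": 0.3}))   # True: reservations recorded
print(try_admit({"cpu": 0.7, "net": 0.2}))    # False: CPU would exceed 100%
```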
Ephedrine QoS: An Antidote to Slow, Congested, Bufferless NoCs
Fang, Juan; Yao, Zhicheng; Sui, Xiufeng; Bao, Yungang
2014-01-01
Datacenters consolidate diverse applications to improve utilization. However, when multiple applications are colocated on such platforms, contention for shared resources like networks-on-chip (NoCs) can degrade the performance of latency-critical online services (high-priority applications). Recently proposed bufferless NoCs (Nychis et al.) have the advantages of requiring less area and power, but they pose challenges in quality-of-service (QoS) support, which usually relies on buffer-based virtual channels (VCs). We propose QBLESS, a QoS-aware bufferless NoC scheme for datacenters. QBLESS consists of two components: a routing mechanism (QBLESS-R) that can substantially reduce flit deflection for high-priority applications, and a congestion-control mechanism (QBLESS-CC) that guarantees performance for high-priority applications and improves overall system throughput. We use trace-driven simulation to model a 64-core system, finding that, compared to BLESS, a previous state-of-the-art bufferless NoC design, QBLESS improves performance of high-priority applications by an average of 33.2% and reduces network hops by an average of 42.8%. PMID:25250386
NASA Astrophysics Data System (ADS)
Khazaee, I.
2015-05-01
In this study, the performance of a proton exchange membrane (PEM) fuel cell in mobile applications is investigated analytically. At present, fuel cells have a particularly strong impact on mobile applications such as vehicles, mobile computers, and mobile telephones. External parameters such as the cell temperature (Tcell), the operating pressure of the gases (P), and the air stoichiometry (λair) affect the performance of and voltage losses in the PEM fuel cell. Because many theoretical, empirical, and semi-empirical models of the PEM fuel cell exist, it is necessary to compare their accuracy. Theoretical models derived from thermodynamic and electrochemical approaches are exact but complex, so it is easier to use empirical and semi-empirical models to forecast fuel cell system performance in applications such as mobile devices. The main purpose of this study is to obtain a semi-empirical relation for a PEM fuel cell with the least voltage losses. The results are compared with existing experimental results in the literature, and good agreement is seen.
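For concreteness, here is a sketch of one widely used semi-empirical polarization form (open-circuit value minus activation, ohmic, and concentration losses); the coefficient values are illustrative placeholders, not fitted results from this study.

```python
import math

# Semi-empirical polarization curve: V(i) = E0 - b*ln(i) - R*i - m*exp(n*i).
# All coefficients below are hypothetical, for illustration only.
def cell_voltage(i, E0=0.98, b=0.05, R=0.25, m=2.1e-5, n=8.0):
    """Cell voltage (V) at current density i (A/cm^2): open-circuit value
    minus activation, ohmic, and concentration losses."""
    return E0 - b * math.log(i) - R * i - m * math.exp(n * i)

for i in (0.1, 0.3, 0.6, 0.9):
    print(f"i = {i:.1f} A/cm^2 -> V = {cell_voltage(i):.3f} V")
```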
Unraveling Network-induced Memory Contention: Deeper Insights with Machine Learning
Groves, Taylor Liles; Grant, Ryan; Gonzales, Aaron; ...
2017-11-21
Remote Direct Memory Access (RDMA) is expected to be an integral communication mechanism for future exascale systems, enabling asynchronous data transfers so that applications may fully utilize CPU resources while simultaneously sharing data amongst remote nodes. We examine Network-induced Memory Contention (NiMC) on InfiniBand networks, expose the interactions between RDMA, main memory, and cache when applications and out-of-band services compete for memory resources, and explore NiMC's resulting impact on application-level performance. For a range of hardware technologies and HPC workloads, we quantify NiMC and show that its impact grows with scale, resulting in up to 3X performance degradation at scales as small as 8K processes, even in applications that previously have been shown to be performance resilient in the presence of noise. In addition, this work examines the problem of predicting NiMC's impact on applications by leveraging machine learning and easily accessible performance counters. This approach provides additional insights about the root cause of NiMC and facilitates dynamic selection of potential solutions. Finally, we evaluated three potential techniques to reduce NiMC's impact, namely hardware offloading, core reservation, and network throttling.
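The counter-based prediction idea can be sketched with an ordinary least-squares fit on synthetic data; the counter names and coefficients below are invented for illustration, while the study itself applies machine learning to measured counters.

```python
import numpy as np

# Fit a linear model from easily accessible performance counters to the
# observed slowdown caused by network-induced memory contention.
rng = np.random.default_rng(0)
# Hypothetical normalized counters per run: [LLC misses, memory BW, RDMA bytes]
X = rng.uniform(0.0, 1.0, size=(200, 3))
true_w = np.array([0.5, 1.2, 2.0])
y = X @ true_w + rng.normal(0.0, 0.05, size=200)   # synthetic slowdown factor

w, *_ = np.linalg.lstsq(np.c_[X, np.ones(200)], y, rcond=None)
print("learned weights:", w[:3], "intercept:", w[3])
# A large weight on the RDMA-traffic counter would point to NiMC as the
# root cause, which is the kind of insight the authors describe.
```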
Smartphone usage among ROTU and its relationship towards study performance
NASA Astrophysics Data System (ADS)
Redzuan, Muhammad Fazrul Ilahi Mohd; Roslan, Mohamad Amri; Rahman, Rosshairy Abd
2015-12-01
The Reserve Officer Training Unit (ROTU) is a cooperation program between the Ministry of Defense and the Ministry of Higher Education for undergraduate students in public universities. ROTU is known for its tight training schedule, which can leave limited time for learning. Smartphones, with their various applications, might assist these students in their learning activities. Therefore, this study aims to discover the rate of smartphone usage among ROTU students and to analyze the relationship between smartphone usage and their study performance. The results show that most ROTU students use a smartphone for five to eight hours a day. Only a very small positive and statistically non-significant correlation between smartphone usage and study performance was recorded, reflecting that frequent use of smartphone applications does not significantly help ROTU students in their studies. However, further study is needed, since this paper does not focus on specific types of applications. Future research should measure the usage rate of each application so that its impact on ROTU study performance can be seen clearly.
Cutter Connectivity Bandwidth Study
NASA Astrophysics Data System (ADS)
2002-10-01
The goal of this study was to determine how much bandwidth is required for cutters to meet emerging data transfer requirements. The Cutter Connectivity Business Solutions Team, with guidance from the Commandant's Innovation Council, sponsored this study. Today, many Coast Guard administrative and business functions are conducted via electronic means. Although the larger cutters can establish part-time connectivity using commercial satellite communications (SATCOM) while underway, there are numerous complaints regarding poor application performance, and smaller cutters do not have any standard means of underway connectivity. The R&D study shows that the most important factor affecting the performance of web and enterprise applications onboard cutters is latency, the time it takes a signal to reach the satellite and come back down. The latency of higher-orbit satellites causes poor application performance and inefficient use of expensive SATCOM links. To improve performance, the Coast Guard must (1) reduce latency by using alternate communications links such as low-earth orbit satellites, (2) tailor applications to the SATCOM link, and/or (3) optimize the protocols used for data communication to minimize the time required by present applications to establish communications between the user and the host systems.
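A back-of-the-envelope sketch of why orbit altitude dominates: propagation delay is distance divided by the speed of light, and a request/response exchange traverses the link four times. The altitudes below are nominal values, not figures from the study.

```python
# Round-trip propagation delay for a simple request/response over a satellite
# link: up and down for the request, up and down again for the response.
C_KM_S = 299_792.458  # speed of light, km/s

def round_trip_ms(altitude_km, hops=4):
    return altitude_km / C_KM_S * hops * 1000.0

print(f"GEO (~35,786 km): {round_trip_ms(35_786):.0f} ms")   # ~477 ms
print(f"LEO (~780 km):    {round_trip_ms(780):.0f} ms")      # ~10 ms
# Chatty protocols multiply these delays by the number of round trips,
# which is why protocol optimization is listed alongside LEO links.
```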
Lee, Seungwon; Lee, Jisuk; Nam, Kyusuk; Shin, Weon Gyu; Sohn, Youngku
2016-12-20
Performing diverse application tests on synthesized metal oxides is critical for identifying suitable application areas based on material performance. In the present study, Ni-oxide@TiO₂ core-shell materials were synthesized and applied to photocatalytic mixed-dye (methyl orange + rhodamine + methylene blue) degradation under ultraviolet (UV) and visible light, CO oxidation, and supercapacitors. Their physicochemical properties were examined by field-emission scanning electron microscopy, X-ray diffraction analysis, Fourier-transform infrared spectroscopy, and UV-visible absorption spectroscopy. Their performance was shown to be highly dependent on the morphology, the thermal treatment procedure, and the TiO₂ overlayer coating.
Uncooled microbolometer sensors for unattended applications
NASA Astrophysics Data System (ADS)
Kohin, Margaret; Miller, James E.; Leary, Arthur R.; Backer, Brian S.; Swift, William; Aston, Peter
2003-09-01
BAE SYSTEMS has been developing and producing uncooled microbolometer sensors since 1995. Recently, uncooled sensors have been used on Pointer Unattended Aerial Vehicles and considered for several unattended sensor applications including DARPA Micro-Internetted Unattended Ground Sensors (MIUGS), Army Modular Acoustic Imaging Sensors (MAIS), and Redeployable Unattended Ground Sensors (R-UGS). This paper describes recent breakthrough uncooled sensor performance at BAE SYSTEMS and how this improved performance has been applied to a new Standard Camera Core (SCC) that is ideal for these unattended applications. Video imagery from a BAE SYSTEMS 640x480 imaging camera flown in a Pointer UAV is provided. Recent performance results are also provided.
Particle simulation on heterogeneous distributed supercomputers
NASA Technical Reports Server (NTRS)
Becker, Jeffrey C.; Dagum, Leonardo
1993-01-01
We describe the implementation and performance of a three dimensional particle simulation distributed between a Thinking Machines CM-2 and a Cray Y-MP. These are connected by a combination of two high-speed networks: a high-performance parallel interface (HIPPI) and an optical network (UltraNet). This is the first application to use this configuration at NASA Ames Research Center. We describe our experience implementing and using the application and report the results of several timing measurements. We show that the distribution of applications across disparate supercomputing platforms is feasible and has reasonable performance. In addition, several practical aspects of the computing environment are discussed.
29 CFR 1926.700 - Scope, application, and definitions applicable to this subpart.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Concrete and Masonry Construction § 1926.700 Scope, application, and definitions applicable to this subpart... from the hazards associated with concrete and masonry construction operations performed in workplaces... parts 1910 and 1926 apply to concrete and masonry construction operations. (b) Definitions applicable to...
29 CFR 1926.700 - Scope, application, and definitions applicable to this subpart.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Concrete and Masonry Construction § 1926.700 Scope, application, and definitions applicable to this subpart... from the hazards associated with concrete and masonry construction operations performed in workplaces... parts 1910 and 1926 apply to concrete and masonry construction operations. (b) Definitions applicable to...
29 CFR 1926.700 - Scope, application, and definitions applicable to this subpart.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Concrete and Masonry Construction § 1926.700 Scope, application, and definitions applicable to this subpart... from the hazards associated with concrete and masonry construction operations performed in workplaces... parts 1910 and 1926 apply to concrete and masonry construction operations. (b) Definitions applicable to...
29 CFR 1926.700 - Scope, application, and definitions applicable to this subpart.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Concrete and Masonry Construction § 1926.700 Scope, application, and definitions applicable to this subpart... from the hazards associated with concrete and masonry construction operations performed in workplaces... parts 1910 and 1926 apply to concrete and masonry construction operations. (b) Definitions applicable to...
29 CFR 1926.700 - Scope, application, and definitions applicable to this subpart.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Concrete and Masonry Construction § 1926.700 Scope, application, and definitions applicable to this subpart... from the hazards associated with concrete and masonry construction operations performed in workplaces... parts 1910 and 1926 apply to concrete and masonry construction operations. (b) Definitions applicable to...
An Application-Based Performance Characterization of the Columbia Supercluster
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Djomehri, Jahed M.; Hood, Robert; Jin, Haoqiang; Kiris, Cetin; Saini, Subhash
2005-01-01
Columbia is a 10,240-processor supercluster consisting of 20 Altix nodes with 512 processors each, and currently ranked as the second-fastest computer in the world. In this paper, we present the performance characteristics of Columbia obtained on up to four computing nodes interconnected via the InfiniBand and/or NUMAlink4 communication fabrics. We evaluate floating-point performance, memory bandwidth, message passing communication speeds, and compilers using a subset of the HPC Challenge benchmarks, and some of the NAS Parallel Benchmarks including the multi-zone versions. We present detailed performance results for three scientific applications of interest to NASA, one from molecular dynamics, and two from computational fluid dynamics. Our results show that both the NUMAlink4 and the InfiniBand hold promise for application scaling to a large number of processors.
Workload Characterization of CFD Applications Using Partial Differential Equation Solvers
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
Workload characterization is used for modeling and evaluating computing systems at different levels of detail. We present workload characterization for a class of Computational Fluid Dynamics (CFD) applications that solve Partial Differential Equations (PDEs). This workload characterization focuses on three high performance computing platforms: SGI Origin2000, IBM SP-2, and a cluster of Intel Pentium Pro-based PCs. We execute extensive measurement-based experiments on these platforms to gather statistics of system resource usage, from which the workload characterization is derived. Our workload characterization approach yields a coarse-grain resource utilization behavior that is being applied for performance modeling and evaluation of distributed high performance metacomputing systems. In addition, this study enhances our understanding of interactions between PDE solver workloads and high performance computing platforms and is useful for tuning these applications.
Househ, Mowafa S.; Shubair, Mamdouh M.; Yunus, Faisel; Jamal, Amr; Aldossari, Bakheet
2015-01-01
Background: The aim of this paper is to present a usability analysis of the consumer ratings of key diabetes mHealth applications using an adapted Health IT Usability Evaluation Model (Health-ITUEM). Methods: A qualitative content analysis method was used to analyze publicly available consumer reported data posted on the Android Market and Google Play for four leading diabetes mHealth applications. Health-ITUEM concepts including information needs, flexibility/customizability, learnability, performance speed, and competency guided the categorization and analysis of the data. Health impact was an additional category that was included in the study. A total of 405 consumers’ ratings collected from January 9, 2014 to February 17, 2014 were included in the study. Results: Overall, the consumers’ ratings of the leading diabetes mHealth applications for both usability and health impacts were positive. The performance speed of the mHealth application and the information needs of the consumers were the primary usability factors impacting the use of the diabetes mHealth applications. There was also evidence on the positive health impacts of such applications. Conclusions: Consumers are more likely to use diabetes related mHealth applications that perform well and meet their information needs. Furthermore, there is preliminary evidence that diabetes mHealth applications can have positive impact on the health of patients. PMID:26635437
40 CFR 160.10 - Applicability to studies performed under grants and contracts.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Applicability to studies performed under grants and contracts. 160.10 Section 160.10 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS GOOD LABORATORY PRACTICE STANDARDS General Provisions § 160.10...
40 CFR 792.10 - Applicability to studies performed under grants and contracts.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 31 2010-07-01 2010-07-01 true Applicability to studies performed under grants and contracts. 792.10 Section 792.10 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT (CONTINUED) GOOD LABORATORY PRACTICE STANDARDS General...
USDA-ARS?s Scientific Manuscript database
Despite considerable interest in the application of land surface data assimilation systems (LDAS) for agricultural drought applications, relatively little is known about the large-scale performance of such systems and, thus, the optimal methodological approach for implementing them. To address this ...
40 CFR 60.150 - Applicability and designation of affected facility.
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Sewage Treatment Plants § 60.150 Applicability and designation of affected facility. (a) The... (dry basis) produced by municipal sewage treatment plants, or each incinerator that charges more than...
40 CFR 60.170 - Applicability and designation of affected facility.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Primary Zinc Smelters § 60.170 Applicability and designation of affected facility. (a) The...: roaster and sintering machine. (b) Any facility under paragraph (a) of this section that commences...
Mehridehnavi, Alireza
2015-01-01
Admission to PhD programs at universities under Iran's Ministry of Health and Medical Education includes a written examination and an interview. The present work examined the relationship between candidates' teaching experience and their interview performance in the Iranian national medical PhD admission for the year 1386-87. Applicants' exam results were extracted from their score workbooks for that year. Applicants fell into two categories: public (ordinary) applicants and employed lecturers. A total of 556 candidates from 29 different fields of study were invited for interview. Because the number of written subjects differed across fields of study, each group's score distribution was first normalized to one and then the groups were combined for the final analysis. The accept and reject percentages among public applicants were 45.1 and 54.9, respectively, while among lecturer applicants they were 66 and 34. Scores of all 29 groups were combined after normalization. The overall performance (test plus interview) was 1.02 ± 0.12 for public applicants and 0.95 ± 0.1 for lecturers. The mean and standard deviation of the written test were 1.04 ± 0.16 for public applicants and 0.91 ± 0.12 for lecturers, while those of the interview were 0.98 ± 0.18 and 1.04 ± 0.17, respectively. As the results show, the interview performance of lecturers was better than that of public applicants, and the imbalance in acceptance rates in favor of lecturers was increased by the weight given to the interview and by lecturers' higher interview scores. If written test performance is a reliable measure of an applicant's suitability, reducing this weighting would bring the acceptance rates closer to balance.
Chiang, Li-Chi; Chaubey, Indrajeet; Hong, Nien-Ming; Lin, Yu-Pin; Huang, Tao
2012-01-01
Implementing a suite of best management practices (BMPs) can reduce non-point source (NPS) pollutants from various land use activities. Watershed models are generally used to evaluate the effectiveness of BMPs in improving water quality, as the basis for watershed management recommendations. This study uses the Soil and Water Assessment Tool (SWAT) to evaluate 171 management practice combinations incorporating nutrient management, vegetated filter strips (VFS), and grazing management for their performance in improving water quality in a pasture-dominated watershed with dynamic land use changes during 1992–2007. The selected BMPs were further examined under future climate conditions (2010–2069) downscaled from three general circulation models (GCMs) to understand how climate change may affect BMP performance. Simulation results indicate that total nitrogen (TN) and total phosphorus (TP) losses increase with increasing litter application rates. Alum-treated litter applications resulted in greater TN losses and lower TP losses than untreated poultry litter applications. For the same litter application rates, sediment and TP losses are greater for summer applications than for fall and spring applications, while TN losses are greater for fall applications. Overgrazing resulted in the greatest sediment and phosphorus losses, and VFS is the most influential management practice in reducing pollutant losses. Simulations also indicate that climate change affects TSS losses the most, producing losses of larger magnitude. However, the performance of the selected BMPs in reducing TN and TP losses was more stable under future climate conditions than under the historical climate. We recommend that selection of BMPs to reduce TSS losses should be a priority concern when multiple BMPs that benefit nutrient reductions are considered in a watershed. The combination of spring litter application, optimum grazing management, and a filter strip with a VFS ratio of 42 could be a promising alternative for mitigating future climate change impacts. PMID:23202767
An MS-DOS-based program for analyzing plutonium gamma-ray spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruhter, W.D.; Buckley, W.M.
1989-09-07
A plutonium gamma-ray analysis system that operates on MS-DOS-based computers has been developed for the International Atomic Energy Agency (IAEA) to perform in-field analysis of plutonium gamma-ray spectra for plutonium isotopics. The program, titled IAEAPU, consists of three separate applications: a data-transfer application for transferring spectral data from a CICERO multichannel analyzer to a binary data file; a data-analysis application to analyze plutonium gamma-ray spectra for plutonium isotopic ratios and weight percents of total plutonium; and a data-quality assurance application to check spectral data for proper data-acquisition setup and performance. Volume 3 contains the software listings for these applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enghauser, Michael
2016-02-01
The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
LWIR detector requirements for low-background space applications
NASA Technical Reports Server (NTRS)
Deluccia, Frank J.
1990-01-01
Detection of cold bodies (200 to 300 K) against space backgrounds has many important applications, both military and non-military. The detector performance and design characteristics required to support low-background applications are discussed, with particular emphasis on those characteristics required for space surveillance. The status of existing detector technologies under active development for these applications is also discussed. In order to play a role in future systems, new, potentially competing detector technologies such as multiple quantum well detectors must not only meet system-derived requirements, but also offer distinct performance or other advantages over these incumbent technologies.
Scalable Performance Measurement and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamblin, Todd
2009-01-01
Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
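The compression idea can be sketched with a single hand-rolled Haar wavelet level (Libra's actual pipeline is multi-scale and combines compression with sampling): smooth load-balance data yields mostly near-zero detail coefficients, which can be dropped.

```python
import numpy as np

# One Haar level over synthetic per-task load data: constant load with two
# isolated imbalances. Detail coefficients are zero away from the spikes.
load = np.full(1024, 100.0)
load[300] += 7.0          # hypothetical overloaded task
load[700] -= 5.0          # hypothetical underloaded task
approx = (load[0::2] + load[1::2]) / np.sqrt(2)   # low-pass half
detail = (load[0::2] - load[1::2]) / np.sqrt(2)   # high-pass half

kept = np.abs(detail) > 1e-3                       # threshold tiny details
print(f"detail coefficients kept: {kept.sum()} of {detail.size}")  # 2 of 512
# Storing approx plus the few surviving details reconstructs the trace almost
# exactly, which is why systemwide load-balance data compresses so well.
```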
MaMR: High-performance MapReduce programming model for material cloud applications
NASA Astrophysics Data System (ADS)
Jing, Weipeng; Tong, Danyu; Wang, Yangang; Wang, Jingyuan; Liu, Yaqiu; Zhao, Peng
2017-02-01
With the increasing data size in materials science, existing programming models no longer satisfy the application requirements. MapReduce is a programming model that enables the easy development of scalable parallel applications to process big data on cloud computing systems. However, this model does not directly support the processing of multiple related datasets, and its processing performance does not reflect the advantages of cloud computing. To enhance the capability of workflow applications in material data processing, we defined MaMR, a programming model for material cloud applications that supports multiple different Map and Reduce functions running concurrently, based on a hybrid shared-memory BSP model. An optimized data sharing strategy to supply the shared data to the different Map and Reduce stages was also designed. We added a new merge phase to MapReduce that can efficiently merge data from the map and reduce modules. Experiments showed that the model and framework yield effective performance improvements compared to previous work.
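A toy sketch of the programming-model idea: two different Map functions run concurrently over shared in-memory data, and an extra merge phase joins their reduced outputs. Plain Python threads stand in for the framework here; the data and function names are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

# Shared read-only data visible to both Map functions.
shared = {"records": [("Fe", 2.1), ("Cu", 3.4), ("Fe", 1.7), ("Al", 0.9)]}

def map_count(data):            # Map function 1: count records per material
    return Counter(k for k, _ in data["records"])

def map_total(data):            # Map function 2: sum values per material
    c = Counter()
    for k, v in data["records"]:
        c[k] += v
    return c

with ThreadPoolExecutor() as pool:
    counts, totals = pool.map(lambda f: f(shared), (map_count, map_total))

# Merge phase: join the two reduced views into one result per material.
merged = {k: (counts[k], totals[k]) for k in counts}
print(merged)   # per material: (record count, value total)
```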
HgCdTe APDS for time resolved space applications
NASA Astrophysics Data System (ADS)
Rothman, J.; Lasfargues, G.; Delacourt, B.; Dumas, A.; Gibert, F.; Bardoux, A.; Boutillier, M.
2017-09-01
HgCdTe APDs have opened a new horizon in photon-starved applications due to their exceptional performance in terms of high linear gain, low excess noise, and high quantum efficiency. Both focal plane arrays (FPAs) and large-area single-element detectors using HgCdTe (MCT) APDs have been developed at CEA/Leti and Sofradir, and high-performance devices are at present available to detect, without deterioration, the spatial and/or temporal information in photon fluxes with a low number of photons in each spatio-temporal bin. The enhancement in performance that can be achieved with MCT has subsequently been demonstrated in a wide scope of applications such as astronomical observations, active imaging, deep space telecommunications, atmospheric LIDAR, and mid-IR (MIR) time-resolved photoluminescence measurements. Most of these applications can be used on space-borne platforms.
Code of Federal Regulations, 2014 CFR
2014-01-01
... ENERGY PERMITS FOR ACCESS TO RESTRICTED DATA Applications § 725.11 Applications. (a) Any person desiring access to Restricted Data pursuant to this part should submit an application (Form 378), in triplicate... access to Restricted Data for use in the performance of his duties as an employee, the application for an...
Code of Federal Regulations, 2013 CFR
2013-01-01
... ENERGY PERMITS FOR ACCESS TO RESTRICTED DATA Applications § 725.11 Applications. (a) Any person desiring access to Restricted Data pursuant to this part should submit an application (Form 378), in triplicate... access to Restricted Data for use in the performance of his duties as an employee, the application for an...
Code of Federal Regulations, 2011 CFR
2011-01-01
... ENERGY PERMITS FOR ACCESS TO RESTRICTED DATA Applications § 725.11 Applications. (a) Any person desiring access to Restricted Data pursuant to this part should submit an application (Form 378), in triplicate... access to Restricted Data for use in the performance of his duties as an employee, the application for an...
Code of Federal Regulations, 2012 CFR
2012-01-01
... ENERGY PERMITS FOR ACCESS TO RESTRICTED DATA Applications § 725.11 Applications. (a) Any person desiring access to Restricted Data pursuant to this part should submit an application (Form 378), in triplicate... access to Restricted Data for use in the performance of his duties as an employee, the application for an...
45 CFR 63.4 - Cooperative arrangements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... another State, to apply for assistance. (b) A joint application made by two or more applicants for... activities performed by each of the joint applicants or may have a combined budget. If joint applications... authorizing separate amounts for each of the joint applicants. (c) In the case of each cooperative arrangement...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-16
..., Incorporated; Notice of Preliminary Permit Application Accepted for Filing and Soliciting Comments, Motions To Intervene, and Competing Applications On February 25, 2013, ECOsponsible, Incorporated filed an application... application during the permit term. A preliminary permit does not authorize the permit holder to perform any...
Implementation of a multi-threaded framework for large-scale scientific applications
Sexton-Kennedy, E.; Gartung, Patrick; Jones, C. D.; ...
2015-05-22
The CMS experiment has recently completed the development of a multi-threaded capable application framework. In this paper, we discuss the design, implementation, and application of this framework to production applications in CMS. For the 2015 LHC run, this functionality is particularly critical for both our online and offline production applications, which depend on faster turn-around times and a reduced memory footprint. These applications are complex codes, each including a large number of physics-driven algorithms. While the framework is capable of running a mix of thread-safe and 'legacy' modules, algorithms running in our production applications need to be thread-safe for optimal use of this multi-threaded framework at large scale. Towards this end, we discuss the types of changes that were necessary for our algorithms to achieve good performance in a full-scale multithreaded application. Lastly, performance numbers for what has been achieved for the 2015 run are presented.
Skel: Generative Software for Producing Skeletal I/O Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Logan, J.; Klasky, S.; Lofstead, J.
2011-01-01
Massively parallel computations consist of a mixture of computation, communication, and I/O. As part of the co-design for the inevitable progress towards exascale computing, we must apply lessons learned from past work to succeed in this new age of computing. Of the three components listed above, an effective parallel I/O solution has often been overlooked by application scientists and was usually added to large-scale simulations only when existing serial techniques had failed. As science teams scaled their codes to run on hundreds of processors, it was common to call on an I/O expert to implement a set of more scalable I/O routines. These routines were easily separated from the calculations and communication, and in many cases an I/O kernel was derived from the application that could be used for testing I/O performance independent of the application. These I/O kernels developed a life of their own, used as a broad measure for comparing different I/O techniques. Unfortunately, as years passed and changes in computation and communication required changes to the I/O, the separate I/O kernels used for benchmarking remained static, no longer providing an accurate indicator of the I/O performance of the simulation and making I/O research less relevant for application scientists. In this paper we describe a new approach to this problem in which I/O kernels are replaced with skeletal I/O applications automatically generated from an abstract set of simulation I/O parameters. We realize this abstraction by leveraging the ADIOS middleware's XML I/O specification with additional runtime parameters. Skeletal applications offer all of the benefits of I/O kernels, including allowing I/O optimizations to focus on useful I/O patterns. Moreover, since they are automatically generated, it is easy to produce an updated I/O skeleton whenever the simulation's I/O changes. In this paper we analyze the performance of automatically generated skeletal I/O applications for the S3D and GTS codes. We show that these skeletal applications achieve performance comparable to that of the production applications. We wrap up the paper with a discussion of future changes to make skeletal applications better approximate the actual I/O performed in the simulation.
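The generative idea can be sketched in a few lines: parse an abstract I/O specification and emit skeleton source that reproduces the write pattern. The XML element names and the emitted write_bytes helper are invented for illustration, not the actual ADIOS schema or Skel's output.

```python
import xml.etree.ElementTree as ET

# A hypothetical abstract I/O spec: one output group, 512 writers, two vars.
SPEC = """
<io group="restart" writers="512">
  <var name="temperature" bytes="4194304"/>
  <var name="pressure" bytes="4194304"/>
</io>
"""

spec = ET.fromstring(SPEC)
lines = [f"# skeletal writer for group '{spec.get('group')}'",
         f"NUM_WRITERS = {spec.get('writers')}"]
for var in spec.findall("var"):
    lines.append(f"write_bytes('{var.get('name')}', {var.get('bytes')})")

print("\n".join(lines))   # emit the generated skeleton source
# Regenerating the skeleton whenever the XML changes keeps the benchmark
# in step with the application's real I/O pattern.
```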
29 CFR 778.418 - Pieceworkers.
Code of Federal Regulations, 2010 CFR
2010-07-01
... applicable maximum hours standard for the particular workweek; and (4) The compensation paid for the overtime... Principles Computing Overtime Pay on the Rate Applicable to the Type of Work Performed in Overtime Hours... the basis of a piece rate for the work performed during nonovertime hours may agree with his employer...
Code of Federal Regulations, 2011 CFR
2011-07-01
... performance test. 2. Carbon adsorber (regenerative) to which puncture sealant application spray booth emissions are ducted: a. Maintain the total regeneration mass, volumetric flow, and carbon bed temperature at the operating range established during the performance test. b. Reestablish the carbon bed temperature...
PERFORMANCE AND COST OF MERCURY EMISSION CONTROL TECHNOLOGY APPLICATIONS ON ELECTRIC UTILITY BOILERS
The report presents estimates of the performance and cost of powdered activated carbon (PAC) injection-based mercury control technologies and projections of costs for future applications. (NOTE: Under the Clean Air Act Amendments of 1990, the U.S. EPA has to determine whether mer...
Performance Evaluation and Community Application of Low-Cost Sensors for Ozone and Nitrogen Dioxide
This study reports on the performance of electrochemical-based low-cost sensors and their use in a community application. CairClip sensors were collocated with federal reference and equivalent methods and operated in a network of sites by citizen scientists (community members) in...
Six degree of freedom active vibration damping for space application
NASA Technical Reports Server (NTRS)
Haynes, Leonard S.
1993-01-01
Work performed during the period 1 Jan. - 31 Mar. 1993 on six degree of freedom active vibration damping for space application is presented. A performance and cost report is included. Topics covered include: actuator testing; mechanical amplifier design; and neural network control system development and experimental evaluation.
Performance modeling codes for the QuakeSim problem solving environment
NASA Technical Reports Server (NTRS)
Parker, J. W.; Donnellan, A.; Lyzenga, G.; Rundle, J.; Tullis, T.
2003-01-01
The QuakeSim Problem Solving Environment uses a web-services approach to unify and deploy diverse remote data sources and processing services within a browser environment. Here we focus on the high-performance crustal modeling applications that will be included in this set of remote but interoperable applications.
Code of Federal Regulations, 2011 CFR
2011-10-01
... applicant either fails the air brake component of the knowledge test, or performs the skills test in a... the skills test and the restriction, air brakes include any braking system operating fully or partially on the air brake principle. (b) Full air brake. (1) If an applicant performs the skills test in a...
DOT National Transportation Integrated Search
2015-08-01
This document is the third of a seven-volume report that describes the Performance Requirements for the connected vehicle vehicle-to-infrastructure (V2I) safety applications developed for the U.S. Department of Transportation (U.S. DOT). This volume d...
DOT National Transportation Integrated Search
2015-08-01
This document is the seventh of a seven-volume report that describes the Performance Requirements for the connected vehicle vehicle-to-infrastructure (V2I) safety applications developed for the U.S. Department of Transportation (U.S. DOT). This volume...
34 CFR 647.22 - How does the Secretary evaluate prior experience?
Code of Federal Regulations, 2011 CFR
2011-07-01
...'s performance under its expiring McNair project; (2) Uses the approved project objectives for the applicant's expiring McNair grant and the information the applicant submitted in its annual performance... and scholarly activities each academic year. (3) (3 points) Graduate school enrollment. Whether the...
DOT National Transportation Integrated Search
2015-08-01
This document is the second of a seven-volume report that describes the Performance Requirements for the connected vehicle vehicle-to-infrastructure (V2I) safety applications developed for the U.S. Department of Transportation (U.S. DOT). This volume ...
We present an application of the online coupled WRF-CMAQ modeling system to two annual simulations over North America performed under Phase 2 of the Air Quality Model Evaluation International Initiative (AQMEII). Operational evaluation shows that model performance is comparable t...
DOT National Transportation Integrated Search
1979-07-01
Tests were conducted to measure the effect generated by high-voltage transmission lines with and without supervisory carrier signals on the performance of typical LORAN-C receivers which might be used for land vehicle applications of the LORAN-C Navi...
Instruction-level performance modeling and characterization of multimedia applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Y.; Cameron, K.W.
1999-06-01
One of the challenges in characterizing and modeling realistic multimedia applications is the lack of access to source codes. On-chip performance counters effectively resolve this problem by monitoring run-time behavior at the instruction level. This paper presents a novel technique for characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from multimedia applications such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and a speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvements for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and the architectural bottleneck for each application. This technique also provides predictive insight into future architectural enhancements and their effect on current codes. In this paper the authors also attempt to model architectural effects on processor utilization without memory influence. They derive formulas for calculating CPI_0 (CPI without memory effect) and quantify the utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. Results show promise for code characterization and empirical/analytical modeling.
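The kind of decomposition the authors describe can be sketched by separating a memory-free CPI_0 from the measured CPI using counter readings; the single-penalty stall model and the numbers below are illustrative, not the paper's exact formulas.

```python
# Decompose measured CPI into a base CPI_0 plus memory-hierarchy stalls.
def cpi_measured(cycles, instructions):
    return cycles / instructions

def cpi0(cycles, instructions, misses, miss_penalty):
    """CPI with the memory-hierarchy contribution removed."""
    stall_cycles = misses * miss_penalty
    return (cycles - stall_cycles) / instructions

cycles, insns = 2_400_000_000, 1_600_000_000      # hypothetical counter values
misses, penalty = 6_000_000, 100                  # hypothetical miss statistics
print(f"CPI   = {cpi_measured(cycles, insns):.2f}")            # 1.50
print(f"CPI_0 = {cpi0(cycles, insns, misses, penalty):.2f}")   # 1.12
```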
Baruch, Erez N; Benov, Avi; Shina, Avi; Berg, Amy L; Shlaifer, Amir; Glassberg, Elon; Aden, James K; Bader, Tarif; Kragh, John F; Yitzhak, Avraham
2016-12-01
Although a lifesaving skill, currently, there is no consensus for the required amount of practice in tourniquet use. We compared the effect of 2 amounts of practice on performance of tourniquet use by nonmedical personnel. Israeli military recruits without previous medical training underwent their standard tactical first aid course, and their initial performance in use of the Combat Application Tourniquet (CAT; Composite Resources, Rock Hill, SC) was assessed. The educational intervention was to allocate the participants into a monthly tourniquet practice program: either a single-application practice (SAP) group or a triple-application practice (TAP) group. Each group practiced according to its program. After 3 months, the participants' tourniquet use performance was reassessed. Assessments were conducted using the HapMed Leg Tourniquet Trainer (CHI Systems, Fort Washington, PA), a mannequin which measures time and pressure. A total of 151 participants dropped out, leaving 87 in the TAP group and 69 in the SAP group. On initial assessment, the TAP group and the SAP group performed similarly. Both groups improved their performance from the initial to the final assessment. The TAP group improved more than the SAP group in mean application time (faster by 18 vs 8 seconds, respectively; P = .023) and in reducing the proportion of participants who were unable to apply any pressure to the mannequin (less by 18% vs 8%, respectively; P = .009). Three applications per monthly practice session were superior to one. This is the first prospective validation of a tourniquet practice program based on objective measurements. Copyright © 2016 Elsevier Inc. All rights reserved.
PPP effectiveness study. [automatic procedures recording and crew performance monitoring system
NASA Technical Reports Server (NTRS)
Arbet, J. D.; Benbow, R. L.
1976-01-01
This design note presents a study of the effectiveness of the Procedures and Performance Program (PPP). The intent of the study is to determine the manpower time savings and the improvements in job performance gained through PPP automated techniques. The discussion presents a synopsis of PPP capabilities, identifies potential users and associated applications, assesses PPP effectiveness, and considers PPP applications to other simulation/training facilities. Appendix A provides a detailed description of each PPP capability.
Evaluating Multi-Input/Multi-Output Digital Control Systems
NASA Technical Reports Server (NTRS)
Pototzky, Anthony S.; Wieseman, Carol D.; Hoadley, Sherwood T.; Mukhopadhyay, Vivek
1994-01-01
Controller-performance-evaluation (CPE) methodology for multi-input/multi-output (MIMO) digital control systems developed. Procedures identify potentially destabilizing controllers and confirm satisfactory performance of stabilizing ones. Methodology generic and used in many types of multi-loop digital-controller applications, including digital flight-control systems, digitally controlled spacecraft structures, and actively controlled wind-tunnel models. Also applicable to other complex, highly dynamic digital controllers, such as those in high-performance robot systems.
Collective operations in a file system based execution model
Shinde, Pravin; Van Hensbergen, Eric
2013-02-12
A mechanism is provided for group communications using a MULTI-PIPE synthetic file system. A master application creates a multi-pipe synthetic file in the MULTI-PIPE synthetic file system, the master application indicating a multi-pipe operation to be performed. The master application then writes a header-control block of the multi-pipe synthetic file specifying at least one of a multi-pipe synthetic file system name, a message type, a message size, a specific destination, or a specification of the multi-pipe operation. Any other application participating in the group communications then opens the same multi-pipe synthetic file. A MULTI-PIPE file system module then implements the multi-pipe operation as identified by the master application. The master application and the other applications then either read or write operation messages to the multi-pipe synthetic file and the MULTI-PIPE synthetic file system module performs appropriate actions.
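A rough sketch of the described protocol, using an ordinary file as a stand-in for the synthetic multi-pipe file: the real mechanism lives in the file system itself, and the header field names below follow the abstract's wording, not a real API.

```python
import json, os, tempfile

# Stand-in path for the multi-pipe synthetic file.
path = os.path.join(tempfile.mkdtemp(), "multipipe")

# Master: create the file and write the header-control block specifying the
# file system name, message type, message size, and multi-pipe operation.
header = {"fs_name": "mpipe0", "msg_type": "broadcast",
          "msg_size": 64, "operation": "reduce"}
with open(path, "w") as f:
    f.write(json.dumps(header) + "\n")

# Participant: open the same file and read the negotiated operation.
with open(path) as f:
    print(json.load(f))    # sees the master's header-control block
```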
40 CFR 17.5 - Eligibility of applicants.
Code of Federal Regulations, 2010 CFR
2010-07-01
... an applicant include all persons who regularly perform services for remuneration for the applicant... controls or owns a majority of the voting shares of another business' board of directors, trustees, or...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simiele, S; Micka, J; Culberson, W
2014-06-01
Purpose: A full TG-43 dosimetric characterization has not been performed for the Xoft Axxent® electronic brachytherapy source (Xoft, a subsidiary of iCAD, San Jose, CA) within the Xoft 30 mm diameter vaginal applicator. Currently, dose calculations are performed using the bare-source TG-43 parameters and do not account for the presence of the applicator. This work focuses on determining the difference between the bare-source and source-in-applicator TG-43 parameters. Both the radial dose function (RDF) and polar anisotropy function (PAF) were computationally determined for the source-in-applicator and bare-source models to determine the impact of using the bare-source dosimetry data. Methods: MCNP5 was used to model the source and the Xoft 30 mm diameter vaginal applicator. All simulations were performed using the 0.84p and 0.03e cross-section libraries. All models were developed based on specifications provided by Xoft. The applicator is made of a proprietary polymer material, and simulations were performed using the most conservative chemical composition. An F6 collision-kerma tally was used to determine the RDF and PAF values in water at various dwell positions. The RDF values were normalized to 2.0 cm from the source to accommodate the applicator radius. Source-in-applicator results were compared with bare-source results from this work as well as published bare-source results. Results: For a 0 mm source pullback distance, the updated bare-source model and source-in-applicator RDF values differ by 2% at 3 cm and 4% at 5 cm. The largest PAF disagreements were observed at the distal end of the source and applicator, with up to 17% disagreement at 2 cm and 8% at 8 cm. The bare-source model had RDF values within 2.6% of the published TG-43 data and PAF results within 7.2% at 2 cm. Conclusion: Results indicate that notable differences exist between the bare-source and source-in-applicator TG-43 simulated parameters. Xoft Inc. provided partial funding for this work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2013-09-26
The Gremlin software package is a performance analysis approach targeted to support the co-design process for future systems. It consists of a series of modules that can be used to alter a machine's behavior with the goal of emulating future machine properties. The modules can be divided into several classes; the most significant ones are detailed below. PowGre is a series of modules that help explore the power consumption properties of applications and determine the impact of power constraints on applications. Most of them use low-level processor interfaces to directly control voltage and frequency settings as well as per-node, socket, or memory power bounds. MemGre are memory Gremlins and implement a new performance analysis technique that captures the application's effective use of the storage capacity of different levels of the memory hierarchy as well as the bandwidth between adjacent levels. The approach models various memory components as resources and measures how much of each resource the application uses from the application's own perspective. To the application, a given amount of a resource is "used" if not having this amount would degrade the application's performance. This is in contrast to the hardware-centric perspective that considers "use" as any hardware action that utilizes the resource, even if it has no effect on performance. ResGre are Gremlins that use fault injection techniques to emulate higher fault rates than are present in today's systems. Faults can be injected through various means, including network interposition, static analysis and code modification, or direct application notification. ResGre also includes patches to previously released LLNL codes that can counteract and react to injected failures.
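To make the ResGre idea concrete, here is a minimal sketch (not the actual Gremlin code or interfaces) of a fault-injection wrapper that emulates an elevated fault rate around an application step:

```python
import random

def resilience_gremlin(step, fault_rate=1e-3, on_fault=None):
    """Wrap one application step; with probability `fault_rate` per call,
    either notify the application of a fault or raise an uncaught failure,
    emulating fault rates higher than today's hardware exhibits."""
    def wrapped(*args, **kwargs):
        if random.random() < fault_rate:
            if on_fault is not None:
                return on_fault(*args, **kwargs)   # direct application notification
            raise RuntimeError("injected fault")   # unhandled failure
        return step(*args, **kwargs)
    return wrapped

# Usage: solve = resilience_gremlin(solve, fault_rate=1e-2, on_fault=recover)
```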
Palm: Easing the Burden of Analytical Performance Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tallent, Nathan R.; Hoisie, Adolfy
2014-06-01
Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are `first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexity; (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.
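The core idea, formally tying an analytical model to a function and validating it against measurements, can be sketched as below. This is deliberately not Palm's annotation syntax or API, just an illustration of the coordination it automates:

```python
import time

def modeled(model):
    """Attach an analytical cost model to a function and compare the
    prediction with a measurement on each call (illustrative only)."""
    def deco(fn):
        def wrapped(n, *args, **kwargs):
            t0 = time.perf_counter()
            out = fn(n, *args, **kwargs)
            measured = time.perf_counter() - t0
            predicted = model(n)
            print(f"{fn.__name__}: predicted={predicted:.3g}s measured={measured:.3g}s")
            return out
        return wrapped
    return deco

@modeled(lambda n: 2e-8 * n)   # hypothesized linear cost model: t = a*n
def sum_squares(n):
    return sum(i * i for i in range(n))

sum_squares(1_000_000)   # prints predicted vs measured time
```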
An Analysis of Performance Enhancement Techniques for Overset Grid Applications
NASA Technical Reports Server (NTRS)
Djomehri, J. J.; Biswas, R.; Potsdam, M.; Strawn, R. C.; Biegel, Bryan (Technical Monitor)
2002-01-01
The overset grid methodology has significantly reduced time-to-solution of high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process resolves the geometrical complexity of the problem domain by using separately generated but overlapping structured discretization grids that periodically exchange information through interpolation. However, high performance computations of such large-scale realistic applications must be handled efficiently on state-of-the-art parallel supercomputers. This paper analyzes the effects of various performance enhancement techniques on the parallel efficiency of an overset grid Navier-Stokes CFD application running on an SGI Origin2000 machine. Specifically, the role of asynchronous communication, grid splitting, and grid grouping strategies are presented and discussed. Results indicate that performance depends critically on the level of latency hiding and the quality of load balancing across the processors.
Cost/Performance Ratio Achieved by Using a Commodity-Based Cluster
NASA Technical Reports Server (NTRS)
Lopez, Isaac
2001-01-01
Researchers at the NASA Glenn Research Center acquired a commodity cluster based on Intel Corporation processors to compare its performance with a traditional UNIX cluster in the execution of aeropropulsion applications. Since the cost differential of the clusters was significant, a cost/performance ratio was calculated. After executing a propulsion application on both clusters, the researchers demonstrated a 9.4 cost/performance ratio in favor of the Intel-based cluster. These researchers utilize the Aeroshark cluster as one of the primary testbeds for developing NPSS parallel application codes and system software. The Aeroshark cluster provides 64 Intel Pentium II 400-MHz processors, housed in 32 nodes. Recently, APNASA, a code developed by a Government/industry team for the design and analysis of turbomachinery systems, was used for a simulation on Glenn's Aeroshark cluster.
Lee, Seungwon; Lee, Jisuk; Nam, Kyusuk; Shin, Weon Gyu; Sohn, Youngku
2016-01-01
Performing diverse application tests on synthesized metal oxides is critical for identifying suitable application areas based on the material performances. In the present study, Ni-oxide@TiO2 core-shell materials were synthesized and applied to photocatalytic mixed dye (methyl orange + rhodamine + methylene blue) degradation under ultraviolet (UV) and visible lights, CO oxidation, and supercapacitors. Their physicochemical properties were examined by field-emission scanning electron microscopy, X-ray diffraction analysis, Fourier-transform infrared spectroscopy, and UV-visible absorption spectroscopy. It was shown that their performances were highly dependent on the morphology, thermal treatment procedure, and TiO2 overlayer coating. PMID:28774145
NASA Technical Reports Server (NTRS)
Birman, Kenneth; Cooper, Robert; Marzullo, Keith
1990-01-01
The ISIS project has developed a new methodology, virtual synchrony, for writing robust distributed software. High performance multicast, large scale applications, and wide area networks are the focus of interest. Several interesting applications that exploit the strengths of ISIS, including an NFS-compatible replicated file system, are being developed. The META project addresses distributed control in a soft real-time environment incorporating feedback. This domain encompasses examples as diverse as monitoring inventory and consumption on a factory floor, and performing load-balancing on a distributed computing system. One of the first uses of META is for distributed application management: the tasks of configuring a distributed program, dynamically adapting to failures, and monitoring its performance. Recent progress and current plans are reported.
Evaluating Mobile Graphics Processing Units (GPUs) for Real-Time Resource Constrained Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meredith, J; Conger, J; Liu, Y
2005-11-11
Modern graphics processing units (GPUs) can provide tremendous performance boosts for some applications beyond what a single CPU can accomplish, and their performance is growing at a rate faster than CPUs as well. Mobile GPUs available for laptops have the small form factor and low power requirements suitable for use in embedded processing. We evaluated several desktop and mobile GPUs and CPUs on traditional and non-traditional graphics tasks, as well as on the most time consuming pieces of a full hyperspectral imaging application. Accuracy remained high despite small differences in arithmetic operations like rounding. Performance improvements are summarized here relative to a desktop Pentium 4 CPU.
Optical filters for UV to near IR space applications
NASA Astrophysics Data System (ADS)
Begou, T.; Krol, H.; Hecquet, Christophe; Bondet, C.; Lumeau, J.; Grèzes-Besset, C.; Lequime, M.
2017-11-01
We present here the results of the fabrication of complex optical filters at the Institut Fresnel in close collaboration with CILAS. Bandpass optical filters dedicated to astronomy and space applications, with central wavelengths ranging from the ultraviolet to the near infrared, were deposited on both sides of glass substrates, with performance in very good agreement with the theoretical designs. For these applications, the required functions are particularly complex, as they must present a very narrow bandwidth as well as a high level of rejection over a broad spectral range. In addition to these demanding optical requirements, insensitivity to environmental conditions is necessary. For this purpose, robust solutions with particularly stable performance have to be proposed.
Optimization of microwire/glass-fibre reinforced polymer composites for wind turbine application
NASA Astrophysics Data System (ADS)
Qin, F. X.; Peng, H. X.; Chen, Z.; Wang, H.; Zhang, J. W.; Hilton, G.
2013-11-01
We report a comprehensive study of glass-fibre reinforced polymers (GFRP) incorporating ferromagnetic microwires for microwave absorption applications. With wire addition, the microwave absorption performance shows a remarkable dependence on the local properties of the wires, such as wire geometry, and on the mesostructure, such as inter-wire spacing and the embedding depth of the wire layer. Impact testing further demonstrates that the metallic microwires can, to some extent, improve the impact performance. Based on both the absorption and impact behavior, we propose an optimized design of the microwire/GFRP composites that achieves the best possible absorption and impact performance simultaneously, for multifunctional applications in aeronautical structures and wind turbines.
NASA Astrophysics Data System (ADS)
Dave, Gaurav P.; Sureshkumar, N.; Blessy Trencia Lincy, S. S.
2017-11-01
The current trend in processor manufacturing focuses on multi-core architectures rather than increasing the clock speed for performance improvement. Graphics processors have become commodity hardware for providing fast co-processing in computer systems. Developments in IoT, social networking web applications, and big data have created huge demand for data processing activities, and such throughput-intensive applications inherently contain data-level parallelism, which is well suited to SIMD-architecture-based GPUs. This paper reviews the architectural aspects of multi/many-core processors and graphics processors. Different case studies are taken to compare the performance of throughput computing applications using shared-memory programming in OpenMP and CUDA API based programming.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-11
..., Inc.; Notice of Preliminary Permit Application Accepted for Filing and Soliciting Comments, Motions To Intervene, and Competing Applications On February 11, 2013, the Archon Energy 1, Inc., filed an application... application during the permit term. A preliminary permit does not authorize the permit holder to perform any...
Performance Basis for Airborne Separation
NASA Technical Reports Server (NTRS)
Wing, David J.
2008-01-01
Emerging applications of Airborne Separation Assistance System (ASAS) technologies make possible new and powerful methods in Air Traffic Management (ATM) that may significantly improve the system-level performance of operations in the future ATM system. These applications typically involve the aircraft managing certain components of its Four Dimensional (4D) trajectory within the degrees of freedom defined by a set of operational constraints negotiated with the Air Navigation Service Provider. It is hypothesized that reliable individual performance by many aircraft will translate into higher total system-level performance. To actually realize this improvement, the new capabilities must be attracted to high demand and complexity regions where high ATM performance is critical. Operational approval for use in such environments will require participating aircraft to be certified to rigorous and appropriate performance standards. Currently, no formal basis exists for defining these standards. This paper provides a context for defining the performance basis for 4D-ASAS operations. The trajectory constraints to be met by the aircraft are defined, categorized, and assessed for performance requirements. A proposed extension of the existing Required Navigation Performance (RNP) construct into a dynamic standard (Dynamic RNP) is outlined. Sample data is presented from an ongoing high-fidelity batch simulation series that is characterizing the performance of an advanced 4D-ASAS application. Data of this type will contribute to the evaluation and validation of the proposed performance basis.
Graphics performance in rich Internet applications.
Hoetzlein, Rama C
2012-01-01
Rendering performance for rich Internet applications (RIAs) has recently focused on the debate between using Flash and HTML5 for streaming video and gaming on mobile devices. A key area not widely explored, however, is the scalability of raw bitmap graphics performance for RIAs. Does Flash render animated sprites faster than HTML5? How much faster is WebGL than Flash? Answers to these questions are essential for developing large-scale data visualizations, online games, and truly dynamic websites. A new test methodology analyzes graphics performance across RIA frameworks and browsers, revealing specific performance outliers in existing frameworks. The results point toward a future in which all online experiences might be GPU accelerated.
Supersonic through-flow fan assessment
NASA Technical Reports Server (NTRS)
Kepler, C. E.; Champagne, G. A.
1988-01-01
A study was conducted to assess the performance potential of a supersonic through-flow fan engine for supersonic cruise aircraft. It included a mean-line analysis of fans designed to operate with in-flow velocities ranging from subsonic to high supersonic speeds. The fan performance generated was used to estimate the performance of supersonic fan engines designed for four applications: a Mach 2.3 supersonic transport, a Mach 2.5 fighter, a Mach 3.5 cruise missile, and a Mach 5.0 cruise vehicle. For each application, an engine was conceptualized, fan and engine performance were calculated, weight estimates were made, the engine was installed in a hypothetical vehicle, and a mission analysis was conducted.
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. House Committee on Science, Space and Technology.
This document contains the transcript of three hearings on the High Speed Performance Computing and High Speed Networking Applications Act of 1993 (H.R. 1757). The hearings were designed to obtain specific suggestions for improvements to the legislation and alternative or additional application areas that should be pursued. Testimony and prepared…
Performance Characterization of Global Address Space Applications: A Case Study with NWChem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammond, Jeffrey R.; Krishnamoorthy, Sriram; Shende, Sameer
The use of global address space languages and one-sided communication for complex applications is gaining attention in the parallel computing community. However, lack of good evaluative methods to observe multiple levels of performance makes it difficult to isolate the cause of performance deficiencies and to understand the fundamental limitations of system and application design for future improvement. NWChem is a popular computational chemistry package which depends on the Global Arrays/ARMCI suite for partitioned global address space functionality to deliver high-end molecular modeling capabilities. A workload characterization methodology was developed to support NWChem performance engineering on large-scale parallel platforms. The research involved both the integration of performance instrumentation and measurement in the NWChem software, as well as the analysis of one-sided communication performance in the context of NWChem workloads. Scaling studies were conducted for NWChem on Blue Gene/P and on two large-scale clusters using different generation Infiniband interconnects and x86 processors. The performance analysis and results show how subtle changes in the runtime parameters related to the communication subsystem could have significant impact on performance behavior. The tool has successfully identified several algorithmic bottlenecks which are already being tackled by computational chemists to improve NWChem performance.
Astronaut Office Scheduling System Software
NASA Technical Reports Server (NTRS)
Brown, Estevancio
2010-01-01
AOSS is a highly efficient scheduling application that uses various tools to schedule astronauts' weekly appointment information. This program represents an integration of many technologies into a single application to facilitate schedule sharing and management. It is a Windows-based application developed in Visual Basic. Because the NASA standard office automation load environment is Microsoft-based, Visual Basic provides AOSS developers with the ability to interact with Windows collaboration components by accessing object models from applications like Outlook and Excel. This also gives developers the ability to create new customizable components that perform specialized tasks pertaining to scheduling and reporting inside the application. With this capability, AOSS can perform various asynchronous tasks, such as gathering, sending, and managing astronauts' schedule information directly to their Outlook calendars at any time.
Polakovič, Milan; Švitel, Juraj; Bučko, Marek; Filip, Jaroslav; Neděla, Vilém; Ansorge-Schumacher, Marion B; Gemeiner, Peter
2017-05-01
Viable microbial cells are important biocatalysts in the production of fine chemicals and biofuels, in environmental applications, and also in emerging applications such as biosensors or medicine. Their increasing significance is driven mainly by the intensive development of high performance recombinant strains supplying multienzyme cascade reaction pathways, and by advances in preservation of the native state and stability of whole-cell biocatalysts throughout their application. In many cases, the stability and performance of whole-cell biocatalysts can be greatly improved by controlled immobilization techniques. This review summarizes the current progress in the development of immobilized whole-cell biocatalysts, the immobilization methods, as well as the bioreaction engineering and economic aspects of their biocatalytic applications.
Hybrid cloud and cluster computing paradigms for life science applications.
Qiu, Judy; Ekanayake, Jaliya; Gunarathne, Thilina; Choi, Jong Youl; Bae, Seung-Hee; Li, Hui; Zhang, Bingjing; Wu, Tak-Lon; Ruan, Yang; Ekanayake, Saliya; Hughes, Adam; Fox, Geoffrey
2010-12-21
Clouds and MapReduce have shown themselves to be a broadly useful approach to scientific computing especially for parallel data intensive applications. However they have limited applicability to some areas such as data mining because MapReduce has poor performance on problems with an iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open source Iterative MapReduce system Twister. Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability comparisons in several important non iterative cases. These are linked to MPI applications for final stages of the data analysis. Further we have released the open source Twister Iterative MapReduce and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment while Twister promises a uniform programming environment for many Life Sciences applications. We used commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data intensive computing. Several applications were developed in MPI, MapReduce and Twister in these different environments.
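The iterative structure that stock MapReduce handles poorly is the kind of loop sketched below (a 1-D k-means step, purely illustrative): the same map and reduce are re-applied every pass, so a runtime like Twister that keeps workers alive and caches static data across iterations avoids Hadoop's per-iteration job-launch and data-reload costs.

```python
def kmeans_iteration(points, centroids):
    # map phase: emit (nearest-centroid-index, point) pairs
    pairs = [(min(range(len(centroids)), key=lambda k: abs(p - centroids[k])), p)
             for p in points]
    # reduce phase: recompute each centroid as the mean of its assigned points
    new = []
    for k in range(len(centroids)):
        members = [p for kk, p in pairs if kk == k]
        new.append(sum(members) / len(members) if members else centroids[k])
    return new

points = [1.0, 1.2, 0.8, 8.0, 8.2, 7.9]
centroids = [0.0, 5.0]
for _ in range(10):    # the outer loop plain MapReduce must re-launch each pass
    centroids = kmeans_iteration(points, centroids)
print(centroids)       # converges near [1.0, 8.03]
```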
Managing Scientific Software Complexity with Bocca and CCA
Allan, Benjamin A.; Norris, Boyana; Elwasif, Wael R.; ...
2008-01-01
In high-performance scientific software development, the emphasis is often on short time to first solution. Even when the development of new components mostly reuses existing components or libraries and only small amounts of new code must be created, dealing with the component glue code and software build processes to obtain complete applications is still tedious and error-prone. Component-based software meant to reduce complexity at the application level increases complexity to the extent that the user must learn and remember the interfaces and conventions of the component model itself. To address these needs, we introduce Bocca, the first tool to enable application developers to perform rapid component prototyping while maintaining robust software-engineering practices suitable to HPC environments. Bocca provides project management and a comprehensive build environment for creating and managing applications composed of Common Component Architecture components. Of critical importance for high-performance computing (HPC) applications, Bocca is designed to operate in a language-agnostic way, simultaneously handling components written in any of the languages commonly used in scientific applications: C, C++, Fortran, Python and Java. Bocca automates the tasks related to the component glue code, freeing the user to focus on the scientific aspects of the application. Bocca embraces the philosophy pioneered by Ruby on Rails for web applications: start with something that works, and evolve it to the user's purpose.
Autonomic and Coevolutionary Sensor Networking
NASA Astrophysics Data System (ADS)
Boonma, Pruet; Suzuki, Junichi
Wireless sensor network (WSN) applications are often required to balance the tradeoffs among conflicting operational objectives (e.g., latency and power consumption) and operate at an optimal tradeoff. This chapter proposes and evaluates an architecture, called BiSNET/e, which allows WSN applications to overcome this issue. BiSNET/e is designed to support three major types of WSN applications: data collection, event detection, and hybrid applications. Each application is implemented as a decentralized group of agents, which is analogous to a bee colony (application) consisting of bees (agents). Agents collect sensor data or detect an event (a significant change in sensor reading) on individual nodes, and carry sensor data to base stations. They perform these data collection and event detection functionalities by sensing their surrounding network conditions and adaptively invoking behaviors such as pheromone emission, reproduction, migration, swarming and death. Each agent has its own behavior policy, a set of genes, which defines how to invoke its behaviors. BiSNET/e allows agents to evolve their behavior policies (genes) across generations and autonomously adapt their performance to given objectives. Simulation results demonstrate that, in all three types of applications, agents evolve to find optimal tradeoffs among conflicting objectives and adapt to dynamic network conditions such as traffic fluctuations and node failures/additions. Simulation results also illustrate that, in hybrid applications, data collection agents and event detection agents coevolve to augment their adaptability and performance.
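A toy rendering of the evolutionary mechanism described above, genes encoding a behavior policy and selection against a latency/power tradeoff, with all objective functions and parameters invented for illustration:

```python
import random

def fitness(genes):
    """Lower is better: a toy weighted latency/power tradeoff."""
    migration_prob, sleep_frac = genes
    latency = 1.0 / (0.1 + migration_prob)            # migrating more cuts latency
    power = 1.0 - 0.8 * sleep_frac + migration_prob   # sleeping saves power
    return latency + power

# Each agent's behavior policy is a gene tuple (migration prob., sleep fraction).
population = [(random.random(), random.random()) for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness)
    parents = population[:10]                         # select the fittest policies
    population = parents + [
        tuple(min(1.0, max(0.0, g + random.gauss(0, 0.05))) for g in p)
        for p in random.choices(parents, k=10)        # mutated offspring
    ]
print(min(population, key=fitness))   # evolved policy near the optimal tradeoff
```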
Proprioception and throwing accuracy in the dominant shoulder after cryotherapy.
Wassinger, Craig A; Myers, Joseph B; Gatti, Joseph M; Conley, Kevin M; Lephart, Scott M
2007-01-01
Application of cryotherapy modalities is common after acute shoulder injury and as part of rehabilitation. During athletic events, athletes may return to play after this treatment. The effects of cryotherapy on dominant shoulder proprioception have been assessed, yet the effects on throwing performance are unknown. To determine the effects of a cryotherapy application on shoulder proprioception and throwing accuracy. Single-group, pretest-posttest control session design. University-based biomechanics laboratory. Healthy college-aged subjects (n = 22). Twenty-minute ice pack application to the dominant shoulder. Active joint position replication, path of joint motion replication, and the Functional Throwing Performance Index. Subjects demonstrated significant increases in deviation for path of joint motion replication when moving from 90 degrees of abduction with 90 degrees of external rotation to 20 degrees of flexion with neutral shoulder rotation after ice pack application. Also, subjects exhibited a decrease in Functional Throwing Performance Index after cryotherapy application. No differences were found in subjects for active joint position replication after cryotherapy application. Proprioception and throwing accuracy were decreased after ice pack application to the shoulder. It is important that clinicians understand the deficits that occur after cryotherapy, as this modality is commonly used following acute injury and during rehabilitation. This information should also be considered when attempting to return an athlete to play after treatment.
Monolithic silicon-photonic platforms in state-of-the-art CMOS SOI processes [Invited].
Stojanović, Vladimir; Ram, Rajeev J; Popović, Milos; Lin, Sen; Moazeni, Sajjad; Wade, Mark; Sun, Chen; Alloatti, Luca; Atabaki, Amir; Pavanello, Fabio; Mehta, Nandish; Bhargava, Pavan
2018-05-14
Integrating photonics with advanced electronics leverages transistor performance, process fidelity and package integration, to enable a new class of systems-on-a-chip for a variety of applications ranging from computing and communications to sensing and imaging. Monolithic silicon photonics is a promising solution to meet the energy efficiency, sensitivity, and cost requirements of these applications. In this review paper, we take a comprehensive view of the performance of the silicon-photonic technologies developed to date for photonic interconnect applications. We also present the latest performance and results of our "zero-change" silicon photonics platforms in 45 nm and 32 nm SOI CMOS. The results indicate that the 45 nm and 32 nm processes provide a "sweet-spot" for adding photonic capability and enhancing integrated system applications beyond the Moore-scaling, while being able to offload major communication tasks from more deeply-scaled compute and memory chips without complicated 3D integration approaches.
Recent progress in nanostructured next-generation field emission devices
NASA Astrophysics Data System (ADS)
Mittal, Gaurav; Lahiri, Indranil
2014-08-01
Field emission has been known to mankind for more than a century, and extensive research in this field for the last 40-50 years has led to development of exciting applications such as electron sources, miniature x-ray devices, display materials, etc. In the last decade, large-area field emitters were projected as an important material to revolutionize healthcare and medical devices, and space research. With the advent of nanotechnology and advancements related to carbon nanotubes, field emitters are demonstrating highly enhanced performance and novel applications. Next-generation emitters need ultra-high emission current density, high brightness, excellent stability and reproducible performance. Novel design considerations and application of new materials can lead to achievement of these capabilities. This article presents an overview of recent developments in this field and their effects on improved performance of field emitters. These advancements are demonstrated to hold great potential for application in next-generation field emission devices.
Characterizing and Mitigating Work Time Inflation in Task Parallel Programs
Olivier, Stephen L.; de Supinski, Bronis R.; Schulz, Martin; ...
2013-01-01
Task parallelism raises the level of abstraction in shared memory parallel programming to simplify the development of complex applications. However, task parallel applications can exhibit poor performance due to thread idleness, scheduling overheads, and work time inflation – additional time spent by threads in a multithreaded computation beyond the time required to perform the same work in a sequential computation. We identify the contributions of each factor to lost efficiency in various task parallel OpenMP applications and diagnose the causes of work time inflation in those applications. Increased data access latency can cause significant work time inflation in NUMA systems. Our locality framework for task parallel OpenMP programs mitigates this cause of work time inflation. Our extensions to the Qthreads library demonstrate that locality-aware scheduling can improve performance up to 3X compared to the Intel OpenMP task scheduler.
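Work time inflation, as defined above, is simple to compute once per-thread work time is measured; the numbers in this sketch are invented:

```python
def work_time_inflation(per_thread_work_s, sequential_work_s):
    """Extra time threads spend doing the *same* work, beyond the sequential
    cost; excludes idleness and scheduling overhead, which are separate
    factors in the decomposition above."""
    parallel_work = sum(per_thread_work_s)
    return parallel_work - sequential_work_s

# Illustrative: 8 threads each busy 1.4 s on work that takes 10 s sequentially
# -> ~1.2 s of inflation (e.g., from NUMA remote-access latency).
print(work_time_inflation([1.4] * 8, 10.0))
```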
Further weight reduction of applications in long glass reinforced polymers
NASA Astrophysics Data System (ADS)
Yanev, A.; Schijve, W.; Martin, C.; Brands, D.
2014-05-01
Long glass reinforced materials are broadly used in the automotive industry due to their good mechanical performance, competitive price, and options for functional integration in order to reduce weight. With rapidly changing environmental requirements, the demand for further weight reduction is growing constantly. Designs in LGF-PP can bring lightweight solutions in combination with system cost improvement. There are now many possibilities for applying weight reduction technologies. These technologies have to be evaluated based on weight reduction potential, but also on the mechanical performance of the end application, where the latter is often the key to success. Different weight reduction technologies are applied to SABIC® STAMAX™ material, a long glass fiber reinforced polypropylene (LGF-PP), in order to investigate and define the best application performance. These techniques include chemical foaming, physical foaming, and thin wall applications. Results from this research will be presented, giving a guideline for application development.
Development of heat-storage building materials for passive-solar applications
NASA Astrophysics Data System (ADS)
Fletcher, J. W.
A heat storage building material to be used for passive solar applications and general load leveling within building spaces was developed. Specifically, PCM-filled plastic panels are to be developed as wallboard and ceiling panels. Three PCMs (CaCl2·6H2O, Na2SO4·10H2O, and LiNO3·3H2O) are to be evaluated for use in the double walled, hollow channeled plastic panels. Laboratory development of the panels will include determination of filling and sealing techniques, behavior of the PCMs, container properties and materials compatibility. Testing will include vapor transmission, thermal cycle, dynamic performance, accelerated life and durability tests. In addition to development and testing, an applications analysis will be performed for specific passive solar applications. A conceptual design of a single family passive solar residence will be prepared and its performance evaluated. Screening of the three PCM candidates is essentially complete.
NASA Technical Reports Server (NTRS)
Tenney, Darrel R.
2008-01-01
AS&M performed a broad assessment survey and study to establish the potential composite materials and structures applications and benefits to the Constellation Program Elements. Trade studies were performed on selected elements to determine the potential weight or performance payoff from use of composites. Weight predictions were made for liquid hydrogen and oxygen tanks, interstage cylindrical shell, lunar surface access module, ascent module liquid methane tank, and lunar surface manipulator. A key part of this study was the evaluation of 88 different composite technologies to establish their criticality to applications for the Constellation Program. The overall outcome of this study shows that composites are viable structural materials which offer from 20% to 40% weight savings for many of the structural components that make up the Major Elements of the Constellation Program. NASA investment in advancing composite technologies for space structural applications is an investment in America's Space Exploration Program.
Space shuttle main engine computed tomography applications
NASA Technical Reports Server (NTRS)
Sporny, Richard F.
1990-01-01
For the past two years, the potential applications of computed tomography to the fabrication and overhaul of the Space Shuttle Main Engine have been evaluated. Application tests were performed at various government and manufacturer facilities with equipment produced by four different manufacturers. The hardware scanned varied in size and complexity from a small temperature sensor and turbine blades to an assembled heat exchanger and main injector oxidizer inlet manifold. The evaluation of capabilities included the ability to identify and locate internal flaws, measure the depth of surface cracks, measure wall thickness, compare manifold design contours to actual part contours, perform automatic dimensional inspections, generate 3D computer models of actual parts, and image the relationship of the details in a complex assembly. The capabilities evaluated, with the exception of measuring the depth of surface flaws, demonstrated the existing and potential ability to perform many beneficial Space Shuttle Main Engine applications.
40 CFR 60.330 - Applicability and designation of affected facility.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Performance for Stationary Gas Turbines § 60.330 Applicability and designation of affected facility. (a) The provisions of this subpart are applicable to the following affected facilities: All stationary gas turbines...
40 CFR 60.330 - Applicability and designation of affected facility.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Performance for Stationary Gas Turbines § 60.330 Applicability and designation of affected facility. (a) The provisions of this subpart are applicable to the following affected facilities: All stationary gas turbines...
40 CFR 60.330 - Applicability and designation of affected facility.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Performance for Stationary Gas Turbines § 60.330 Applicability and designation of affected facility. (a) The provisions of this subpart are applicable to the following affected facilities: All stationary gas turbines...
40 CFR 60.330 - Applicability and designation of affected facility.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Performance for Stationary Gas Turbines § 60.330 Applicability and designation of affected facility. (a) The provisions of this subpart are applicable to the following affected facilities: All stationary gas turbines...
40 CFR 60.170 - Applicability and designation of affected facility.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Performance for Primary Zinc Smelters § 60.170 Applicability and designation of affected facility. (a) The provisions of this subpart are applicable to the following affected facilities in primary zinc smelters...
40 CFR 60.330 - Applicability and designation of affected facility.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Performance for Stationary Gas Turbines § 60.330 Applicability and designation of affected facility. (a) The provisions of this subpart are applicable to the following affected facilities: All stationary gas turbines...
Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing
NASA Technical Reports Server (NTRS)
Ozguner, Fusun
1996-01-01
Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall parallel execution time T_par of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
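The bound alluded to here is Amdahl's law: if a fraction f of the work is inherently sequential, then no matter how many processors N are used, the speedup satisfies

```latex
S(N) = \frac{T_{\mathrm{seq}}}{T_{\mathrm{par}}(N)} = \frac{1}{f + \frac{1-f}{N}} \;\le\; \frac{1}{f}
```

so, for example, a 20% sequential fraction caps the speedup at 5 regardless of processor count.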
Nicoletti, Giovanni; Cornaglia, Antonia Icaro; Faga, Angela; Scevola, Silvia
2014-10-01
An experimental study was conducted to assess the effectiveness and safety of an innovative quadripolar variable electrode configuration radiofrequency device with objective measurements in an ex vivo and in vivo human experimental model. Nonablative radiofrequency applications are well-established anti-ageing procedures for cosmetic skin tightening. The study was performed in two steps: ex vivo and in vivo assessments. In the ex vivo assessments the radiofrequency applications were performed on human full-thickness skin and subcutaneous tissue specimens harvested during surgery for body contouring. In the in vivo assessments the applications were performed on two volunteer patients scheduled for body contouring surgery at the end of the study. The assessment methods were: clinical examination and medical photography, temperature measurement with thermal imaging scan, and light microscopy histological examination. The ex vivo assessments allowed for identification of the effective safety range for human application. The in vivo assessments allowed for demonstration of the biological effects of sequential radiofrequency applications. After a course of radiofrequency applications, the collagen fibers underwent an immediate heat-induced rearrangement and were partially denaturated and progressively metabolized by the macrophages. An overall thickening and spatial rearrangement was appreciated both in the collagen and elastic fibers, the latter displaying a juvenile reticular pattern. A late onset in the macrophage activation after sequential radiofrequency applications was appreciated. Our data confirm the effectiveness of sequential radiofrequency applications in obtaining attenuation of the skin wrinkles by an overall skin tightening.
Performance-based planning and programming guidebook.
DOT National Transportation Integrated Search
2013-09-01
"Performance-based planning and programming (PBPP) refers to the application of performance management principles within the planning and programming processes of transportation agencies to achieve desired performance outcomes for the multimodal tran...
USDA-ARS?s Scientific Manuscript database
In conventional and most IPM programs, application of insecticides continues to be the most important responsive pest control tactic. For both immediate and long-term optimization and sustainability of insecticide applications, it is paramount to study the factors affecting the performance of insect...
The Impact of Mobile Learning on ESP Learners' Performance
ERIC Educational Resources Information Center
Alkhezzi, Fahad; Al-Dousari, Wadha
2016-01-01
This study explores the impact of using mobile phone applications, namely Telegram Messenger, on teaching and learning English in an ESP context. The main objective is to test whether using mobile phone applications have an impact on ESP learners' performance by mainly investigating the influence such teaching technique can have on learning…
Performance Considerations for an Optical Jukebox in Document Archival/Retrieval Applications.
ERIC Educational Resources Information Center
Spenser, Peter
1991-01-01
Discusses the use of an optical jukebox in a retrieval-intensive application--i.e., for a law firm's litigation support--and examines factors affecting the performance of the jukebox. The imaging system's configuration is explained, document access from workstations is described, and expectations of retrieval times are discussed. (LRW)
beta-Aminoalcohols as Potential Reactivators of Aged Sarin-/Soman-Inhibited Acetylcholinesterase
2017-02-08
This approach includes high-quality quantum mechanical/molecular mechanical calculations, providing reliable reactivation steps and energetics... I. V. Khavrutskii, Department of Defense Biotechnology High Performance Computing Software Applications Institute, Telemedicine and Advanced... Dr. A. Wallqvist, Department of Defense Biotechnology High Performance Computing Software Applications Institute, Telemedicine and Advanced...
21 CFR 58.10 - Applicability to studies performed under grants and contracts.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Applicability to studies performed under grants and contracts. 58.10 Section 58.10 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH... nonclinical laboratory study intended to be submitted to or reviewed by the Food and Drug Administration...
21 CFR 58.10 - Applicability to studies performed under grants and contracts.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 1 2011-04-01 2011-04-01 false Applicability to studies performed under grants and contracts. 58.10 Section 58.10 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH... nonclinical laboratory study intended to be submitted to or reviewed by the Food and Drug Administration...
21 CFR 58.10 - Applicability to studies performed under grants and contracts.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 1 2013-04-01 2013-04-01 false Applicability to studies performed under grants and contracts. 58.10 Section 58.10 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH... nonclinical laboratory study intended to be submitted to or reviewed by the Food and Drug Administration...
21 CFR 58.10 - Applicability to studies performed under grants and contracts.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 1 2012-04-01 2012-04-01 false Applicability to studies performed under grants and contracts. 58.10 Section 58.10 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH... nonclinical laboratory study intended to be submitted to or reviewed by the Food and Drug Administration...
21 CFR 58.10 - Applicability to studies performed under grants and contracts.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 1 2014-04-01 2014-04-01 false Applicability to studies performed under grants and contracts. 58.10 Section 58.10 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH... nonclinical laboratory study intended to be submitted to or reviewed by the Food and Drug Administration...
2003-10-01
paper, which addresses the following questions: Is it worth it? What do we know about the value of technology applications in learning (education and...)? Technology-based systems for education, training, and performance aiding (including decision aiding) may pose the...
34 CFR 646.22 - How does the Secretary evaluate prior experience?
Code of Federal Regulations, 2011 CFR
2011-07-01
... application described in § 646.20(a)(2)(i), the Secretary— (1) Evaluates the applicant's performance under its... performance reports (APRs) to determine the number of prior PE points; and (3) May adjust a calculated PE...) of this section (Postsecondary retention) and paragraph (e)(3) of this section (Good academic...
Intermittent Punishment of Self-stimulation: Effectiveness During Application and Extinction
ERIC Educational Resources Information Center
Romanczyk, Raymond G.
1977-01-01
Two studies were performed comparing the effectiveness of fixed-ratio (FR) and variable-ratio (VR) schedules of punishment during application and extinction. Subjects were two young children. Both studies found significant positive "side effects" of punishment in terms of increased play and social behavior as well as increased performance of…
ERIC Educational Resources Information Center
Wang, Heng
2017-01-01
Construction project productivity typically lags other industries and it has been the focus of numerous studies in order to improve the project performance. This research investigated the application of Radio Frequency Identification (RFID) technology on construction projects' supply chain and determined that RFID technology can improve the…
DOT National Transportation Integrated Search
2015-08-01
This document is the fifth of a seven volume report that describe the Performance Requirements for the connected vehicle vehicle-to-infrastructure (V2I) safety applications developed for the U.S. Department of Transportation (U.S. DOT). This volume d...
DOT National Transportation Integrated Search
2015-08-01
This document is the sixth of a seven volume report that describe the Performance Requirements for the connected vehicle vehicle-to-infrastructure (V2I) safety applications developed for the U.S. Department of Transportation (U.S. DOT). This volume d...
DOT National Transportation Integrated Search
2015-08-01
This document is the fourth of a seven volume report that describe the Performance Requirements for the connected vehicle vehicle-to-infrastructure (V2I) safety applications developed for the U.S. Department of Transportation (U.S. DOT). This volume ...
Nonimaging applications for microbolometer arrays
NASA Astrophysics Data System (ADS)
Picard, Francis; Jerominek, Hubert; Pope, Timothy D.; Zhang, Rose; Ngo, Linh P.; Tremblay, Bruno; Tasker, Nick; Grenier, Carol; Bilodeau, Ghislain; Cayer, Felix; Lehoux, Mario; Alain, Christine; Larouche, Carl; Savard, Simon
2001-10-01
In an effort to leverage uncooled microbolometer technology, testing of bolometer performance in various nonimaging applications has been performed. One of these applications makes use of an uncooled microbolometer array as the sensing element for a laser beam analyzer. Results of the characterization of cw CO2 laser beams with this analyzer are given. A comparison with the results obtained with a commercial laser beam analyzer is made. Various advantages specific to microbolometer arrays for this application are identified. A second application makes use of microbolometers for absolute temperature measurements. The experimental method and results are described. The technique's limitations and possible implementations are discussed. Finally, the third application evaluated is related to the rapidly expanding field of biometry. It consists of using a modified microbolometer array for fingerprint sensing. The basic approach allowing the use of microbolometers for such an application is discussed. The results of a proof-of-principle experiment are described. Globally, the described work illustrates the fact that microbolometer array fabrication technology can be exploited for many important applications other than IR imaging.
Energy Efficient Graphene Based High Performance Capacitors.
Bae, Joonwon; Kwon, Oh Seok; Lee, Chang-Soo
2017-07-10
Graphene (GRP) is an interesting class of nano-structured electronic materials for various cutting-edge applications. To date, extensive research activities have been performed investigating the diverse properties of GRP. The incorporation of this elegant material can be very lucrative in terms of practical applications in energy storage/conversion systems. Among those systems, high performance electrochemical capacitors (ECs) have become popular due to the recent need for energy efficient and portable devices. Therefore, in this article, the application of GRP for capacitors is described succinctly. In particular, a concise summary of previous research activities regarding GRP-based capacitors is also covered extensively. It has been shown that many secondary materials, such as polymers and metal oxides, have been introduced to improve performance. Also, diverse devices have been combined with capacitors for better use. More importantly, recent patents related to the preparation and application of GRP-based capacitors are also introduced briefly. This article can provide essential information for future study.
A Biosequence-based Approach to Software Characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oehmen, Christopher S.; Peterson, Elena S.; Phillips, Aaron R.
For many applications, it is desirable to have some process for recognizing when software binaries are closely related without relying on them to be identical or have identical segments. Some examples include monitoring utilization of high performance computing centers or service clouds, detecting freeware in licensed code, and enforcing application whitelists. But doing so in a dynamic environment is a nontrivial task because most approaches to software similarity require extensive and time-consuming analysis of a binary, or they fail to recognize executables that are similar but nonidentical. Presented herein is a novel biosequence-based method for quantifying similarity of executable binaries. Using this method, it is shown in an example application on large-scale multi-author codes that 1) the biosequence-based method has a statistical performance in recognizing and distinguishing between a collection of real-world high performance computing applications better than 90% of ideal; and 2) an example of using family tree analysis to tune identification for a code subfamily can achieve better than 99% of ideal performance.
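The gist, treating a binary as a sequence and scoring alignment-style similarity instead of exact matching, can be sketched with a generic sequence matcher (the actual method maps binaries into biosequence alphabets and uses alignment tools from bioinformatics):

```python
from difflib import SequenceMatcher

def binary_similarity(a: bytes, b: bytes, gram: int = 4) -> float:
    """Score two binaries by aligning their n-gram sequences, so related but
    nonidentical executables still score highly (exact matching would not)."""
    seq_a = [a[i:i + gram] for i in range(0, len(a), gram)]
    seq_b = [b[i:i + gram] for i in range(0, len(b), gram)]
    return SequenceMatcher(None, seq_a, seq_b).ratio()

# Two "versions" of a binary: identical code except for a small patch.
v1 = bytes(range(256)) * 4
v2 = v1[:300] + b"PATCHED!" + v1[308:]
print(binary_similarity(v1, v1))   # 1.0 for identical binaries
print(binary_similarity(v1, v2))   # high score despite the patch
```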
Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics
NASA Astrophysics Data System (ADS)
Junghans, Christoph; Mniszewski, Susan; Voter, Arthur; Perez, Danny; Eidenbenz, Stephan
2014-03-01
We present an example of a new class of tools that we call application simulators, parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation (PDES). We demonstrate our approach with a TADSim application simulator that models the Temperature Accelerated Dynamics (TAD) method, which is an algorithmically complex member of the Accelerated Molecular Dynamics (AMD) family. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We further extend TADSim to model algorithm extensions to standard TAD, such as speculative spawning of the compute-bound stages of the algorithm, and predict performance improvements without having to implement such a method. Focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights into the TAD algorithm behavior and suggested extensions to the TAD method.
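In miniature, an application simulator replaces each compute-bound stage with a sampled duration and advances virtual time through an event queue, so algorithm variants such as speculative spawning can be compared without running the real code. Everything below (stage time distribution, overlap factor) is invented for illustration:

```python
import heapq
import random

def simulate_stages(events=100, speculative=False):
    """Advance virtual time over TAD-like compute stages; with speculation,
    a (hypothetical) fraction of each stage overlaps the previous event."""
    clock, queue = 0.0, [(0.0, 0)]
    while queue:
        t, n = heapq.heappop(queue)       # next event in virtual time
        clock = max(clock, t)
        if n >= events:
            break
        stage = random.expovariate(1.0)   # sampled stage duration (toy model)
        overlap = 0.7 * stage if speculative else 0.0
        heapq.heappush(queue, (clock + stage - overlap, n + 1))
    return clock

print(simulate_stages(speculative=False), simulate_stages(speculative=True))
```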
Combining instruction prefetching with partial cache locking to improve WCET in real-time systems.
Ni, Fan; Long, Xiang; Wan, Han; Gao, Xiaopeng
2013-01-01
Caches play an important role in embedded systems to bridge the performance gap between fast processor and slow memory, and prefetching mechanisms have been proposed to further improve cache performance. In real-time systems, however, the application of caches complicates the Worst-Case Execution Time (WCET) analysis due to their unpredictable behavior. Modern embedded processors often provide a locking mechanism to improve the timing predictability of the instruction cache. However, locking the whole cache may degrade cache performance and increase the WCET of the real-time application. In this paper, we propose an instruction-prefetching combined partial cache locking mechanism, which combines an instruction prefetching mechanism (termed BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have already proposed to improve the worst-case cache performance and, in turn, the worst-case execution time. The estimations on typical real-time applications show that the partial cache locking mechanism yields remarkable WCET improvement over static analysis and full cache locking.
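A toy model of the full-versus-partial locking tradeoff (not BBIP itself; the trace, cache geometry, and miss rule are invented): locked lines always hit, while the remaining ways behave as an analyzable LRU.

```python
from collections import OrderedDict

def worst_case_misses(trace, ways, locked):
    """Count misses with `locked` lines pinned and the remaining
    `ways - len(locked)` ways managed as LRU; pinned lines always hit."""
    lru, misses = OrderedDict(), 0
    free_ways = ways - len(locked)
    for line in trace:
        if line in locked:
            continue                      # pinned: guaranteed hit
        if line in lru:
            lru.move_to_end(line)         # hit in a dynamic way
        else:
            misses += 1
            lru[line] = True
            if len(lru) > free_ways:
                lru.popitem(last=False)   # evict least recently used
    return misses

trace = [0, 2, 2, 2, 0, 3, 3, 0, 2, 2]
print(worst_case_misses(trace, 2, {0, 1}))  # full lock: 7 misses (2, 3 never cached)
print(worst_case_misses(trace, 2, {0}))     # partial lock: 3 misses
```

Here full locking pins line 1, which is never used, so every access to lines 2 and 3 misses; leaving one way dynamic recovers most of the hits, which is the degradation the abstract attributes to whole-cache locking.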
Position Paper - pFLogger: The Parallel Fortran Logging framework for HPC Applications
NASA Technical Reports Server (NTRS)
Clune, Thomas L.; Cruz, Carlos A.
2017-01-01
In the context of high performance computing (HPC), software investments in support of text-based diagnostics, which monitor a running application, are typically limited compared to those for other types of IO. Examples of such diagnostics include reiteration of configuration parameters, progress indicators, simple metrics (e.g., mass conservation, convergence of solvers, etc.), and timers. To some degree, this difference in priority is justifiable as other forms of output are the primary products of a scientific model and, due to their large data volume, much more likely to be a significant performance concern. In contrast, text-based diagnostic content is generally not shared beyond the individual or group running an application and is most often used to troubleshoot when something goes wrong. We suggest that a more systematic approach enabled by a logging facility (or logger) similar to those routinely used by many communities would provide significant value to complex scientific applications. In the context of high-performance computing, an appropriate logger would provide specialized support for distributed and shared-memory parallelism and have low performance overhead. In this paper, we present our prototype implementation of pFlogger, a parallel Fortran-based logging framework, and assess its suitability for use in a complex scientific application.
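The key requirement, specialized support for distributed parallelism, can be illustrated with Python's standard logging module (this is NOT pFlogger's Fortran API, only a sketch of the idea): every record carries the process rank, and chatty levels are confined to rank 0 so large runs stay readable.

```python
import logging
import os

rank = int(os.environ.get("RANK", "0"))  # e.g. set by the MPI launcher

class RankFilter(logging.Filter):
    def filter(self, record):
        record.rank = rank
        # Only rank 0 emits INFO and below; warnings/errors pass everywhere.
        return record.levelno >= logging.WARNING or rank == 0

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s [rank %(rank)d] %(message)s"))
log = logging.getLogger("model")
log.setLevel(logging.INFO)
log.addHandler(handler)
log.addFilter(RankFilter())

log.info("solver converged in %d iterations", 42)   # printed by rank 0 only
log.warning("mass conservation drift: %.2e", 1e-9)  # printed by every rank
```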
Haidar, Azzam; Jagode, Heike; Vaccaro, Phil; ...
2018-03-22
The emergence of power efficiency as a primary constraint in processor and system design poses new challenges concerning power and energy awareness for numerical libraries and scientific applications. Power consumption also plays a major role in the design of data centers, which may house petascale or exascale-level computing systems. At these extreme scales, understanding and improving the energy efficiency of numerical libraries and their related applications becomes a crucial part of the successful implementation and operation of the computing system. In this paper, we study and investigate the practice of controlling a compute system's power usage, and we explore how different power caps affect the performance of numerical algorithms with different computational intensities. Further, we determine the impact, in terms of performance and energy usage, that these caps have on a system running scientific applications. This analysis will enable us to characterize the types of algorithms that benefit most from these power management schemes. Our experiments are performed using a set of representative kernels and several popular scientific benchmarks. Lastly, we quantify a number of power and performance measurements and draw observations and conclusions that can be viewed as a roadmap to achieving energy efficiency in the design and execution of scientific algorithms.
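A rough sketch of the kind of experiment described: sweep a package power cap and time a kernel under each cap. It uses the Linux powercap (intel-rapl) sysfs interface, which exists on many Intel systems, but the exact path below is an assumption that varies by machine, and writing it normally requires root.

```python
import time
import numpy as np

# Assumed sysfs path; check your system under /sys/class/powercap/.
CAP_FILE = "/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw"

def set_cap_watts(watts):
    with open(CAP_FILE, "w") as f:
        f.write(str(int(watts * 1e6)))  # interface is in microwatts

def time_kernel(n=4096):
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    t0 = time.perf_counter()
    np.dot(a, b)  # compute-bound kernel; swap in a memory-bound one to compare
    return time.perf_counter() - t0

for cap in (150, 120, 90, 60):
    set_cap_watts(cap)
    print(f"{cap:4d} W cap -> {time_kernel():.2f} s")
```

Compute-bound kernels like the matrix product above typically slow down almost linearly under tight caps, while memory-bound kernels are far less sensitive, which is the contrast in computational intensity the study examines.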
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gleicher, Frederick N.; Williamson, Richard L.; Ortensi, Javier
The MOOSE neutron transport application RATTLESNAKE was coupled to the fuels performance application BISON to provide a higher fidelity tool for fuel performance simulation. This project is motivated by the desire to couple a high fidelity core analysis program (based on the self-adjoint angular flux equations) to a high fidelity fuel performance program, both of which can simulate on unstructured meshes. RATTLESNAKE solves the self-adjoint angular flux transport equation and provides a sub-pin level resolution of the multigroup neutron flux with resonance treatment during burnup or a fast transient. BISON solves the coupled thermomechanical equations for the fuel on a sub-millimeter scale. Both applications are able to solve their respective systems on aligned and unaligned unstructured finite element meshes. The power density and local burnup were transferred from RATTLESNAKE to BISON with the MOOSE MultiApp transfer system. Multiple depletion cases were run with one-way data transfer from RATTLESNAKE to BISON. The eigenvalues are shown to agree well with values obtained from the lattice physics code DRAGON. The one-way data transfer of power density is shown to agree with the power density obtained from an internal Lassman-style model in BISON.
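The MultiApp transfer itself is C++ inside MOOSE; the following is only a small Python illustration of the underlying operation, interpolating a field (e.g., power density) from the nodes of one unstructured mesh onto the nodes of another, unaligned one.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
src_nodes = rng.random((500, 2))             # transport-mesh node coordinates
power = np.exp(-((src_nodes - 0.5) ** 2).sum(axis=1) / 0.05)  # toy field
dst_nodes = rng.random((200, 2))             # fuel-performance-mesh nodes

# Linear interpolation where possible, nearest-neighbour fill at the boundary.
p_dst = griddata(src_nodes, power, dst_nodes, method="linear")
mask = np.isnan(p_dst)
p_dst[mask] = griddata(src_nodes, power, dst_nodes[mask], method="nearest")
print("transferred field range:", p_dst.min(), p_dst.max())
```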
Performance evaluation of a distance learning program.
Dailey, D J; Eno, K R; Brinkley, J F
1994-01-01
This paper presents a performance metric which uses a single number to characterize the response time for a non-deterministic client-server application operating over the Internet. When applied to a Macintosh-based distance learning application called the Digital Anatomist Browser, the metric allowed us to observe that "A typical student doing a typical mix of Browser commands on a typical data set will experience the same delay if they use a slow Macintosh on a local network or a fast Macintosh on the other side of the country accessing the data over the Internet." The methodology presented is applicable to other client-server applications that are rapidly appearing on the Internet.
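The abstract does not give the metric's exact form; the sketch below shows one natural reading of a "single number" for a non-deterministic command mix: weight each command's measured response time by how often a typical student issues it. All latencies and mix weights are invented.

```python
def characteristic_delay(latencies, usage_mix):
    """latencies: command -> seconds; usage_mix: command -> probability."""
    assert abs(sum(usage_mix.values()) - 1.0) < 1e-9
    return sum(usage_mix[c] * latencies[c] for c in usage_mix)

slow_mac_local = {"load": 2.1, "rotate": 0.4, "label": 0.8}
fast_mac_remote = {"load": 2.0, "rotate": 0.5, "label": 0.8}
mix = {"load": 0.2, "rotate": 0.5, "label": 0.3}

print(characteristic_delay(slow_mac_local, mix))   # 0.86 s
print(characteristic_delay(fast_mac_remote, mix))  # 0.89 s: nearly the same
```

Under such a metric, a faster machine over a slower network and a slower machine on a local network can indeed yield the same characteristic delay, which is the paper's observation.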
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enghauser, Michael
2015-02-01
The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
Paramedir: A Tool for Programmable Performance Analysis
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Labarta, Jesus; Gimenez, Judit
2004-01-01
Performance analysis of parallel scientific applications is time consuming and requires great expertise in areas such as programming paradigms, system software, and computer hardware architectures. In this paper we describe a tool that facilitates the programmability of performance metric calculations thereby allowing the automation of the analysis and reducing the application development time. We demonstrate how the system can be used to capture knowledge and intuition acquired by advanced parallel programmers in order to be transferred to novice users.
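Paramedir's actual metric language is not reproduced here; the sketch below only conveys the general idea of programmable metric calculation: express a performance metric as a small program over trace records instead of computing it by hand. The trace fields are invented.

```python
trace = [
    {"thread": 0, "state": "compute", "us": 900},
    {"thread": 0, "state": "mpi_wait", "us": 100},
    {"thread": 1, "state": "compute", "us": 700},
    {"thread": 1, "state": "mpi_wait", "us": 300},
]

def metric(records, predicate, value=lambda r: r["us"]):
    # A "programmed" metric: any predicate/value pair defines a new metric
    # without touching the analysis tool itself.
    return sum(value(r) for r in records if predicate(r))

total = metric(trace, lambda r: True)
waiting = metric(trace, lambda r: r["state"] == "mpi_wait")
print(f"parallel efficiency ~ {1 - waiting / total:.0%}")  # 80%
```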
NASA Astrophysics Data System (ADS)
Ulfa, Andi Maria; Sugiyarto, Kristian H.; Ikhsan, Jaslin
2017-05-01
Poor student performance in chemistry may result from unfavourable learning processes, so innovation in the learning process is needed. Given the rapid development of mobile technology, learning processes cannot ignore its crucial role. This research and development (R&D) study was carried out to develop an Android-based application and to examine the effect of its integration into Learning Together (LT) on the improvement of students' learning creativity and cognitive achievement. The application was developed by adapting the Borg & Gall and Dick & Carey models. The developed product was reviewed by a chemist, learning-media practitioners, peer reviewers, and educators. After revision based on the reviews, the application was used in the LT model on the topic of stoichiometry in a senior high school. The instruments were a questionnaire to collect comments and suggestions from the reviewers about the application, another questionnaire to collect data on learning creativity, and a set of tests with which data on students' achievement were collected. The results showed that the use of the mobile application in Learning Together brought about significant improvement in students' performance, including creativity and cognitive achievement.
A review of digital microfluidics as portable platforms for lab-on a-chip applications.
Samiei, Ehsan; Tabrizian, Maryam; Hoorfar, Mina
2016-07-07
Following the development of microfluidic systems, there has been a high tendency towards developing lab-on-a-chip devices for biochemical applications. A great deal of effort has been devoted to improve and advance these devices with the goal of performing complete sets of biochemical assays on the device and possibly developing portable platforms for point-of-care applications. Among the different microfluidic systems used for such a purpose, digital microfluidics (DMF) shows high flexibility and capability of performing multiplex and parallel biochemical operations, and hence, has been considered as a suitable candidate for lab-on-a-chip applications. In this review, we discuss the most recent advances in the DMF platforms, and evaluate the feasibility of developing multifunctional packages for performing complete sets of processes of biochemical assays, particularly for point-of-care applications. The progress in the development of DMF systems is reviewed from eight different aspects, including device fabrication, basic fluidic operations, automation, manipulation of biological samples, advanced operations, detection, biological applications, and finally, packaging and portability of the DMF devices. Success in developing the lab-on-a-chip DMF devices will be concluded based on the advances achieved in each of these aspects.
Evaluation of Smartphone Inertial Sensor Performance for Cross-Platform Mobile Applications
Kos, Anton; Tomažič, Sašo; Umek, Anton
2016-01-01
Smartphone sensors are being increasingly used in mobile applications. The performance of sensors varies considerably among different smartphone models and the development of a cross-platform mobile application might be a very complex and demanding task. A publicly accessible resource containing real-life-situation smartphone sensor parameters could be of great help for cross-platform developers. To address this issue we have designed and implemented a pilot participatory sensing application for measuring, gathering, and analyzing smartphone sensor parameters. We start with smartphone accelerometer and gyroscope bias and noise parameters. The application database presently includes sensor parameters of more than 60 different smartphone models of different platforms. It is a modest, but important start, offering information on several statistical parameters of the measured smartphone sensors and insights into their performance. The next step, a large-scale cloud-based version of the application, is already planned. The large database of smartphone sensor parameters may prove particularly useful for cross-platform developers. It may also be interesting for individual participants who would be able to check and compare their smartphone sensors against a large number of similar or identical models. PMID:27049391
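A sketch of how the two parameters the application starts with, accelerometer bias and noise, can be estimated from a window of samples taken while the phone rests flat. The sample values and the 9.81 m/s^2 gravity convention are illustrative, not the app's actual procedure.

```python
import numpy as np

def bias_and_noise(samples, gravity=(0.0, 0.0, 9.81)):
    """samples: (N, 3) accelerometer readings from a stationary phone."""
    samples = np.asarray(samples)
    bias = samples.mean(axis=0) - np.asarray(gravity)  # systematic offset
    noise = samples.std(axis=0, ddof=1)                # per-axis RMS noise
    return bias, noise

rng = np.random.default_rng(1)
fake = rng.normal([0.02, -0.01, 9.83], 0.05, size=(2000, 3))
b, n = bias_and_noise(fake)
print("bias  [m/s^2]:", np.round(b, 3))
print("noise [m/s^2]:", np.round(n, 3))
```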
Python in the NERSC Exascale Science Applications Program for Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronaghi, Zahra; Thomas, Rollin; Deslippe, Jack
We describe a new effort at the National Energy Research Scientific Computing Center (NERSC) in performance analysis and optimization of scientific Python applications targeting the Intel Xeon Phi (Knights Landing, KNL) many-core architecture. The Python-centered work outlined here is part of a larger effort called the NERSC Exascale Science Applications Program (NESAP) for Data. NESAP for Data focuses on applications that process and analyze high-volume, high-velocity data sets from experimental/observational science (EOS) facilities supported by the US Department of Energy Office of Science. We present three case study applications from NESAP for Data that use Python. These codes vary in terms of “Python purity” from applications developed in pure Python to ones that use Python mainly as a convenience layer for scientists without expertise in lower level programming languages like C, C++ or Fortran. The science case, requirements, constraints, algorithms, and initial performance optimizations for each code are discussed. Our goal with this paper is to contribute to the larger conversation around the role of Python in high-performance computing today and tomorrow, highlighting areas for future work and emerging best practices.
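A generic example of the most common first optimization for "pure Python" codes on many-core nodes like KNL (not code from any NESAP project): replace an interpreted loop with a vectorized NumPy call, which dispatches to compiled, SIMD- and thread-capable kernels.

```python
import time
import numpy as np

x = np.random.rand(2_000_000)

t0 = time.perf_counter()
s_loop = 0.0
for v in x:                      # pure-Python loop: one bytecode dispatch per element
    s_loop += v * v
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
s_vec = float(np.dot(x, x))      # vectorized: one call into compiled code
t_vec = time.perf_counter() - t0

print(f"loop {t_loop:.2f}s  vectorized {t_vec:.4f}s  "
      f"speedup {t_loop / t_vec:.0f}x")
```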
Research-grade CMOS image sensors for remote sensing applications
NASA Astrophysics Data System (ADS)
Saint-Pe, Olivier; Tulet, Michel; Davancens, Robert; Larnaudie, Franck; Magnan, Pierre; Martin-Gonthier, Philippe; Corbiere, Franck; Belliot, Pierre; Estribeau, Magali
2004-11-01
Imaging detectors are key elements for optical instruments and sensors on board space missions dedicated to Earth observation (high resolution imaging, atmosphere spectroscopy...), Solar System exploration (micro cameras, guidance for autonomous vehicles...) and Universe observation (space telescope focal planes, guiding sensors...). This market was long dominated by CCD technology. Since the mid-90s, CMOS Image Sensors (CIS) have been competing with CCDs in consumer domains (webcams, cell phones, digital cameras...). Featuring significant advantages over CCD sensors for space applications (lower power consumption, smaller system size, better radiation behaviour...), CMOS technology is also expanding in this field, justifying specific R&D and development programs funded by national and European space agencies (mainly CNES, DGA and ESA). Throughout the 90s, thanks to their steadily improving performance, CIS began to be used successfully for more and more demanding space applications, from vision and control functions requiring low-level performance to guidance applications requiring medium-level performance. Recent technology improvements have made possible the manufacturing of research-grade CIS that are able to compete with CCDs in the high-performance arena. After an introduction outlining the growing interest of optical instrument designers in CMOS image sensors, this paper will present the existing and foreseen ways to reach high-level electro-optic performance for CIS. The developments and performances of CIS prototypes built using an imaging CMOS process will be presented in the corresponding section.
A Study of Vicon System Positioning Performance.
Merriaux, Pierre; Dupuis, Yohan; Boutteau, Rémi; Vasseur, Pascal; Savatier, Xavier
2017-07-07
Motion capture setups are used in numerous fields. Studies based on motion capture data can be found in biomechanics, sport and animal science. Clinical science studies include gait analysis as well as balance, posture and motor control. Robotic applications encompass object tracking. Everyday applications include entertainment and augmented reality. Still, few studies investigate the positioning performance of motion capture setups. In this paper, we study the positioning performance of one of the main players in marker-based optoelectronic motion capture: the Vicon system. Our protocol includes evaluations of static and dynamic performance. Mean error as well as positioning variability are studied with calibrated ground-truth setups that are not based on other motion capture modalities. We introduce a new setup that enables directly estimating the absolute positioning accuracy for dynamic experiments, contrary to state-of-the-art works that rely on inter-marker distances. The system performs well in static experiments, with a mean absolute error of 0.15 mm and a variability lower than 0.025 mm. Our dynamic experiments were carried out at speeds found in real applications. Our work suggests that the system error is less than 2 mm. We also found that marker size and Vicon sampling rate must be carefully chosen with respect to the speeds encountered in the application in order to reach the optimal positioning performance, which can reach 0.3 mm in our dynamic study.
Detection methods and performance criteria for genetically modified organisms.
Bertheau, Yves; Diolez, Annick; Kobilinsky, André; Magin, Kimberly
2002-01-01
Detection methods for genetically modified organisms (GMOs) are necessary for many applications, from seed purity assessment to compliance of food labeling in several countries. Numerous analytical methods are currently used or under development to support these needs. The currently used methods are bioassays and protein- and DNA-based detection protocols. To avoid discrepancy of results between such largely different methods and, for instance, the potential resulting legal actions, compatibility of the methods is urgently needed. Performance criteria of methods allow evaluation against a common standard. The more-common performance criteria for detection methods are precision, accuracy, sensitivity, and specificity, which together specifically address other terms used to describe the performance of a method, such as applicability, selectivity, calibration, trueness, precision, recovery, operating range, limit of quantitation, limit of detection, and ruggedness. Performance criteria should provide objective tools to accept or reject specific methods, to validate them, to ensure compatibility between validated methods, and be used on a routine basis to reject data outside an acceptable range of variability. When selecting a method of detection, it is also important to consider its applicability, its field of applications, and its limitations, by including factors such as its ability to detect the target analyte in a given matrix, the duration of the analyses, its cost effectiveness, and the necessary sample sizes for testing. Thus, the current GMO detection methods should be evaluated against a common set of performance criteria.
Integrating Cache Performance Modeling and Tuning Support in Parallelization Tools
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
With the resurgence of distributed shared memory (DSM) systems based on cache-coherent Non Uniform Memory Access (ccNUMA) architectures and the increasing disparity between memory and processor speeds, data locality overheads are becoming the greatest bottleneck in the way of realizing the potential high performance of these systems. While parallelization tools and compilers help users port their sequential applications to a DSM system, considerable time and effort are needed to tune the memory performance of these applications to achieve reasonable speedup. In this paper, we show that integrating cache performance modeling and tuning support within a parallelization environment can alleviate this problem. The Cache Performance Modeling and Prediction Tool (CPMP) employs trace-driven simulation techniques without the overhead of generating and managing detailed address traces. CPMP predicts the cache performance impact of source code level "what-if" modifications in a program to assist a user in the tuning process. CPMP is built on top of a customized version of the Computer Aided Parallelization Tools (CAPTools) environment. Finally, we demonstrate how CPMP can be applied to tune a real Computational Fluid Dynamics (CFD) application.
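CPMP itself avoids full address traces; the sketch below is only the textbook trace-driven core such tools build on: replay an access stream through a small cache model and compare miss counts for two source-level "what-if" variants (row-major versus column-major traversal of the same array). Cache geometry and array sizes are arbitrary.

```python
def misses(addresses, lines=256, line_bytes=64):
    # Direct-mapped cache model: one tag per line, no associativity.
    tags = [None] * lines
    miss = 0
    for a in addresses:
        block = a // line_bytes
        idx = block % lines
        if tags[idx] != block:
            tags[idx] = block
            miss += 1
    return miss

N, ELEM = 512, 8  # 512x512 array of 8-byte elements
row_major = (ELEM * (i * N + j) for i in range(N) for j in range(N))
col_major = (ELEM * (i * N + j) for j in range(N) for i in range(N))
print("row-major misses:", misses(row_major))  # ~1 per cache line
print("col-major misses:", misses(col_major))  # ~1 per access
```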
Henderson, Mark C; Kelly, Carolyn J; Griffin, Erin; Hall, Theodore R; Jerant, Anthony; Peterson, Ellena M; Rainwater, Julie A; Sousa, Francis J; Wofsy, David; Franks, Peter
2017-10-31
To examine applicant characteristics associated with multiple mini-interview (MMI) or traditional interview (TI) performance at five California public medical schools. Of the five California Longitudinal Evaluation of Admissions Practices (CA-LEAP) consortium schools, three used TIs and two used MMIs. Schools provided the following retrospective data on all 2011-2013 admissions cycle interviewees: age, gender, race/ethnicity (under-represented in medicine [UIM] or not), self-identified disadvantaged (DA) status, undergraduate GPA, Medical College Admission Test (MCAT) score, and interview score (standardized as z-score, mean = 0, SD = 1). Adjusted linear regression analyses, stratified by interview type, examined associations with interview performance. The 4,993 applicants who completed 7,516 interviews included 931 (18.6%) UIM and 962 (19.3%) DA individuals; 3,226 (64.6%) had one interview. Mean age was 24.4 (SD = 2.7); mean GPA and MCAT score were 3.72 (SD = 0.22) and 33.6 (SD = 3.7), respectively. Older age, female gender, and number of prior interviews were associated with better performance on both MMIs and TIs. Higher GPA was associated with lower MMI scores (z-score, per unit GPA = -0.26, 95% CI [-0.45, -0.06]), but unrelated to TI scores. DA applicants had higher TI scores (z-score = 0.17, 95% CI [0.07, 0.28]), but lower MMI scores (z-score = -0.18, 95% CI [-0.28, -0.08]) than non-DA applicants. Neither UIM status nor MCAT score was associated with interview performance. These findings have potentially important workforce implications, particularly regarding DA applicants, and illustrate the need for other multi-institutional studies of medical school admissions processes.
Performance Evaluation of an Enhanced Uplink 3.5G System for Mobile Healthcare Applications.
Komnakos, Dimitris; Vouyioukas, Demosthenes; Maglogiannis, Ilias; Constantinou, Philip
2008-01-01
The present paper studies the prospects and performance of a forthcoming high-speed third generation (3.5G) networking technology, called enhanced uplink, for delivering mobile health (m-health) applications. The performance of 3.5G networks is a critical factor for the successful development of m-health services as perceived by end users. In this paper, we propose a methodology for performance assessment based on the joint uplink transmission of voice, real-time video, biological data (such as electrocardiogram, vital signals, and heart sounds), and healthcare record file transfer. Various scenarios were considered, involving real-time, non-real-time, and emergency applications in random locations where no system other than 3.5G is available. The accomplishment of quality of service (QoS) was explored through a step-by-step improvement of the enhanced uplink system's parameters, tuning the network for the best performance in the context of the desired m-health services.
A Computational Framework for Efficient Low Temperature Plasma Simulations
NASA Astrophysics Data System (ADS)
Verma, Abhishek Kumar; Venkattraman, Ayyaswamy
2016-10-01
Over the past years, scientific computing has emerged as an essential tool for the investigation and prediction of low temperature plasma (LTP) applications, which include electronics, nanomaterial synthesis, metamaterials, etc. To explore LTP behavior with greater fidelity, we present a computational toolbox developed to perform LTP simulations. This framework allows us to enhance our understanding of multiscale plasma phenomena using high performance computing tools, mainly based on the OpenFOAM FVM distribution. Although aimed at microplasma simulations, the modular framework is able to perform multiscale, multiphysics simulations of physical systems comprising LTPs. Salient introductory features include the capability to perform parallel, 3D simulations of LTP applications on unstructured meshes. Performance of the solver is assessed with numerical results evaluating the accuracy and efficiency of benchmark problems in microdischarge devices. A numerical simulation of a microplasma reactor at atmospheric pressure with hemispherical dielectric-coated electrodes will be discussed, providing an overview of the applicability and future scope of this framework.
Performance modeling of terahertz (THz) and millimeter waves (mmW) pupil plane imaging
NASA Astrophysics Data System (ADS)
Mohammadian, Nafiseh; Furxhi, Orges; Zhang, Lei; Offermans, Peter; Ghazi, Galia; Driggers, Ronald
2018-05-01
Terahertz- (THz) and millimeter-wave sensors are becoming more important in industrial, security, medical, and defense applications. A major problem in these sensing areas is the resolution, sensitivity, and visual acuity of the imaging systems. There are different fundamental parameters in designing a system that have significant effects on the imaging performance. The performance of THz systems can be discussed in terms of two characteristics: sensitivity and spatial resolution. New approaches for design and manufacturing of THz imagers are a vital basis for developing future applications. Photonics solutions have been at the technological forefront in THz band applications. A single scan antenna does not provide reasonable resolution, sensitivity, and speed. An effective approach to imaging is placing a high-performance antenna in a two-dimensional antenna array to achieve higher radiation efficiency and higher resolution in the imaging systems. Here, we present the performance modeling of a pupil plane imaging system to find the resolution and sensitivity efficiency of the imaging system.
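As illustrative context for the resolution problem the abstract raises (this relation is standard optics, not the paper's own model), the diffraction limit fixes the achievable angular resolution of any single aperture, which is why dense antenna arrays are attractive:

```latex
\theta \approx 1.22\,\frac{\lambda}{D}
\qquad\Longrightarrow\qquad
\theta \approx 1.22 \times \frac{3\times10^{-4}\,\mathrm{m}}{0.3\,\mathrm{m}}
\approx 1.2\times10^{-3}\ \mathrm{rad}
\quad \text{at } f = 1\,\mathrm{THz}\ (\lambda = 0.3\,\mathrm{mm}),\ D = 30\,\mathrm{cm}.
```

At a 10 m standoff this corresponds to a spot of roughly 1.2 cm; halving the wavelength or doubling the aperture halves it.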
Research on dynamic performance design of mobile phone application based on context awareness
NASA Astrophysics Data System (ADS)
Bo, Zhang
2018-05-01
This work explores the dynamic (motion) design of different mobile phone applications and users' cognitive differences, aiming to reduce cognitive burden and enhance the sense of experience. It analyzes dynamic design performance in four different interactive contexts and constructs a framework for the information service process under interactive context perception, together with two perception principles for the cognitive consensus between designer and user. Analysis of the context helps users sense dynamic performance more intuitively, so that the details of interaction are performed more vividly and smoothly, enhancing the user's experience of the interactive process. A shared perceptual experience enables designers and users to produce emotional resonance in different interactive contexts and helps them achieve rapid understanding of interactive content and perceive the logic and hierarchy of content and structure, thereby improving the effectiveness of mobile applications.
DEVICE TECHNOLOGY. Nanomaterials in transistors: From high-performance to thin-film applications.
Franklin, Aaron D
2015-08-14
For more than 50 years, silicon transistors have been continuously shrunk to meet the projections of Moore's law but are now reaching fundamental limits on speed and power use. With these limits at hand, nanomaterials offer great promise for improving transistor performance and adding new applications through the coming decades. With different transistors needed in everything from high-performance servers to thin-film display backplanes, it is important to understand the targeted application needs when considering new material options. Here the distinction between high-performance and thin-film transistors is reviewed, along with the benefits and challenges to using nanomaterials in such transistors. In particular, progress on carbon nanotubes, as well as graphene and related materials (including transition metal dichalcogenides and X-enes), outlines the advances and further research needed to enable their use in transistors for high-performance computing, thin films, or completely new technologies such as flexible and transparent devices. Copyright © 2015, American Association for the Advancement of Science.
Performance and Architecture Lab Modeling Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-06-19
Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model -- an executable program -- is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.
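Palm's annotation language lives in the source of the modeled (C/Fortran) code; the Python decorator below is only an analogy for the core link it maintains: each annotated block contributes a measured term to a hierarchical, executable model. All block names and costs are invented.

```python
import time
from functools import wraps

model = {}  # block name -> list of measured durations

def annotate(name):
    def deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            t0 = time.perf_counter()
            out = fn(*args, **kwargs)
            model.setdefault(name, []).append(time.perf_counter() - t0)
            return out
        return wrapper
    return deco

@annotate("halo_exchange")
def halo_exchange(n):
    time.sleep(0.001 * n)       # stand-in for communication cost ~ c * n

@annotate("stencil")
def stencil(n):
    time.sleep(0.0001 * n * n)  # stand-in for compute cost ~ c * n^2

for n in (4, 8, 16):
    halo_exchange(n); stencil(n)

# The "model" is the measured hierarchy itself: a reproducible mapping from
# source blocks to cost terms that can then be fit, e.g. to a + b*n + c*n^2.
for block, ts in model.items():
    print(block, [round(t, 4) for t in ts])
```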
14 CFR 171.109 - Performance requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Performance requirements. 171.109 Section... Performance requirements. (a) The Simplified Directional Facility must perform in accordance with the... performance and compliance with applicable performance requirements must be conducted in accordance with the...
14 CFR 171.109 - Performance requirements.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 3 2012-01-01 2012-01-01 false Performance requirements. 171.109 Section... Performance requirements. (a) The Simplified Directional Facility must perform in accordance with the... performance and compliance with applicable performance requirements must be conducted in accordance with the...
14 CFR 171.109 - Performance requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 3 2014-01-01 2014-01-01 false Performance requirements. 171.109 Section... Performance requirements. (a) The Simplified Directional Facility must perform in accordance with the... performance and compliance with applicable performance requirements must be conducted in accordance with the...
14 CFR 171.109 - Performance requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 3 2013-01-01 2013-01-01 false Performance requirements. 171.109 Section... Performance requirements. (a) The Simplified Directional Facility must perform in accordance with the... performance and compliance with applicable performance requirements must be conducted in accordance with the...
Wu, Zhao; Xiong, Naixue; Huang, Yannong; Xu, Degang; Hu, Chunyang
2015-01-01
The services composition technology provides flexible methods for building service composition applications (SCAs) in wireless sensor networks (WSNs). The high reliability and high performance of SCAs help services composition technology promote the practical application of WSNs. The optimization methods for reliability and performance used for traditional software systems are mostly based on the instantiations of software components, which are inapplicable and inefficient for the ever-changing SCAs in WSNs. In this paper, we consider SCAs with fault tolerance in WSNs. Based on a Universal Generating Function (UGF), we propose a reliability and performance model of SCAs in WSNs, which generalizes a redundancy optimization problem to a multi-state system. Based on this model, an efficient optimization algorithm for the reliability and performance of SCAs in WSNs is developed using a Genetic Algorithm (GA) to find the optimal structure of SCAs with fault tolerance in WSNs. In order to examine the feasibility of our algorithm, we have evaluated its performance. Furthermore, the interrelationships between reliability, performance and cost are investigated. In addition, a distinct approach to determine the most suitable parameters in the suggested algorithm is proposed. PMID:26561818
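A minimal GA sketch in the spirit of the paper: choose a redundancy level for each composed service so as to maximise reliability minus a cost penalty. The UGF machinery is omitted, and all probabilities, costs and GA settings are simplified stand-ins.

```python
import random

N_SERVICES, MAX_RED = 5, 4
P_FAIL, COST, COST_W = 0.2, 1.0, 0.05

def fitness(genome):
    reliability = 1.0
    for r in genome:                      # r replicas fail independently
        reliability *= 1 - P_FAIL ** r
    return reliability - COST_W * COST * sum(genome)

def evolve(pop_size=40, gens=60):
    pop = [[random.randint(1, MAX_RED) for _ in range(N_SERVICES)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]    # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_SERVICES)      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                  # mutation
                child[random.randrange(N_SERVICES)] = random.randint(1, MAX_RED)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("redundancy per service:", best, "fitness:", round(fitness(best), 3))
```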
49 CFR 390.3 - General applicability.
Code of Federal Regulations, 2010 CFR
2010-10-01
... SAFETY REGULATIONS; GENERAL General Applicability and Definitions § 390.3 General applicability. Link to... accessories required by this subchapter shall be maintained in compliance with all applicable performance and... rules in this subchapter do not apply to— (1) All school bus operations as defined in § 390.5; (2...
Paraffin-based hybrid rocket engines applications: A review and a market perspective
NASA Astrophysics Data System (ADS)
Mazzetti, Alessandro; Merotto, Laura; Pinarello, Giordano
2016-09-01
Hybrid propulsion technology for aerospace applications has received growing attention in recent years due to its important advantages over competitive solutions. Hybrid rocket engines have a great potential for several aeronautics and aerospace applications because of their safety, reliability, low cost and high performance. As a consequence, this propulsion technology is feasible for a number of innovative missions, including space tourism. On the other hand, hybrid rocket propulsion's main drawback, i.e. the difficulty in reaching high regression rate values using standard fuels, has so far limited the maturity level of this technology. The complex physico-chemical processes involved in hybrid rocket engine combustion are of major importance for engine performance prediction and control. Therefore, further investigation is ongoing in order to achieve a more complete understanding of such phenomena. It is well known that one of the most promising solutions for overcoming the performance limits of hybrid rocket engines is the use of liquefying fuels. Such fuels can lead to notably increased solid fuel regression rates due to the so-called "entrainment phenomenon". Among liquefying fuels, paraffin-based formulations have great potential as solid fuels due to their low cost, availability (as they can be derived from industrial waste), low environmental impact and high performance. Despite the vast amount of literature available on this subject, a precise focus on the market potential of paraffins for hybrid propulsion aerospace applications is lacking. In this work a review of the state of the art of hybrid rocket engines was performed, together with a detailed analysis of the possible applications of such a technology. A market study was carried out in order to define the near-future foreseeable development needs for hybrid technology application to the aforementioned missions. Paraffin-based fuels are taken into account as the most promising segment for market development. The present study is useful for driving future investigation and testing of paraffin-based fuels as solid fuels for hybrid propulsion technology, taking into account the needs of industrial applications of this technology.
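The regression-rate limitation and the entrainment mechanism mentioned above are usually discussed in terms of the classical (Marxman-type) regression-rate law; the form below is standard in the hybrid literature rather than specific to this paper, and the coefficient and exponent are fuel-dependent:

```latex
\dot{r} = a\,G_{\mathrm{ox}}^{\,n},
\qquad
G_{\mathrm{ox}} = \frac{\dot{m}_{\mathrm{ox}}}{A_{\mathrm{port}}}
```

Liquefying paraffin-based fuels raise the effective coefficient a severalfold through droplet entrainment from the liquid melt layer, which is the performance gain the market analysis builds on.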
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-06
..., LLC; Notice of Preliminary Permit Application Accepted for Filing and Soliciting Comments, Motions To Intervene, and Competing Applications On January 1, 2013, FFP Project 111, LLC filed an application for a... application during the permit term. A preliminary permit does not authorize the permit holder to perform any...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-11
..., LLC; Notice of Preliminary Permit Application Accepted for Filing and Soliciting Comments, Motions to Intervene, and Competing Applications On February 1, 2013, FFP Project 118, LLC filed an application for a... application during the permit term. A preliminary permit does not authorize the permit holder to perform any...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-11
..., LLC; Notice of Preliminary Permit Application Accepted for Filing and Soliciting Comments, Motions To Intervene, and Competing Applications On February 1, 2013, FFP Project 119, LLC filed an application for a... application during the permit term. A preliminary permit does not authorize the permit holder to perform any...
Matching brain-machine interface performance to space applications.
Citi, Luca; Tonet, Oliver; Marinelli, Martina
2009-01-01
A brain-machine interface (BMI) is a particular class of human-machine interface (HMI). BMIs have so far been studied mostly as a communication means for people who have little or no voluntary control of muscle activity. For able-bodied users, such as astronauts, a BMI would only be practical if conceived as an augmenting interface. A method is presented for pointing out effective combinations of HMIs and applications of robotics and automation to space. Latency and throughput are selected as performance measures for a hybrid bionic system (HBS), that is, the combination of a user, a device, and a HMI. We classify and briefly describe HMIs and space applications and then compare the performance of classes of interfaces with the requirements of classes of applications, both in terms of latency and throughput. Regions of overlap correspond to effective combinations. Devices requiring simpler control, such as a rover, a robotic camera, or environmental controls are suitable to be driven by means of BMI technology. Free flyers and other devices with six degrees of freedom can be controlled, but only at low-interactivity levels. More demanding applications require conventional interfaces, although they could be controlled by BMIs once the same levels of performance as currently recorded in animal experiments are attained. Robotic arms and manipulators could be the next frontier for noninvasive BMIs. Integrating smart controllers in HBSs could improve interactivity and boost the use of BMI technology in space applications.
Adapting Wave-front Algorithms to Efficiently Utilize Systems with Deep Communication Hierarchies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerbyson, Darren J.; Lang, Michael; Pakin, Scott
2011-09-30
Large-scale systems increasingly exhibit a differential between intra-chip and inter-chip communication performance, especially in hybrid systems using accelerators. Processor cores on the same socket are able to communicate at lower latencies, and with higher bandwidths, than cores on different sockets either within the same node or between nodes. A key challenge is to efficiently use this communication hierarchy and hence optimize performance. We consider here the class of applications that contains wavefront processing. In these applications data can only be processed after their upstream neighbors have been processed. Similar dependencies result between processors in which communication is required to pass boundary data downstream and whose cost is typically impacted by the slowest communication channel in use. In this work we develop a novel hierarchical wave-front approach that reduces the use of slower communications in the hierarchy but at the cost of additional steps in the parallel computation and higher use of on-chip communications. This tradeoff is explored using a performance model. An implementation using the Reverse-acceleration programming model on the petascale Roadrunner system demonstrates a 27% performance improvement at full system-scale on a kernel application. The approach is generally applicable to large-scale multi-core and accelerated systems where a differential in system communication performance exists.
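A small sketch of the dependency pattern described above: cell (i, j) can be processed only after its upstream neighbours (i-1, j) and (i, j-1). Cells on the same anti-diagonal are independent, which is what a (hierarchical) wavefront schedule exploits across cores and sockets. The update rule is a toy stand-in.

```python
import numpy as np

N = 6
v = np.zeros((N, N))
v[0, :] = v[:, 0] = 1.0  # boundary data arriving from upstream neighbours

for d in range(2, 2 * N - 1):          # sweep anti-diagonals in order
    for i in range(max(1, d - N + 1), min(N, d)):
        j = d - i                       # all (i, j) on diagonal d are independent
        v[i, j] = 0.5 * (v[i - 1, j] + v[i, j - 1])  # toy wavefront update

print(np.round(v, 3))
```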
Adapting wave-front algorithms to efficiently utilize systems with deep communication hierarchies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerbyson, Darren J; Lang, Michael; Pakin, Scott
2009-01-01
Large-scale systems increasingly exhibit a differential between intra-chip and inter-chip communication performance. Processor-cores on the same socket are able to communicate at lower latencies, and with higher bandwidths, than cores on different sockets either within the same node or between nodes. A key challenge is to efficiently use this communication hierarchy and hence optimize performance. We consider here the class of applications that contain wave-front processing. In these applications data can only be processed after their upstream neighbors have been processed. Similar dependencies result between processors in which communication is required to pass boundary data downstream and whose cost is typically impacted by the slowest communication channel in use. In this work we develop a novel hierarchical wave-front approach that reduces the use of slower communications in the hierarchy but at the cost of additional computation and higher use of on-chip communications. This tradeoff is explored using a performance model and an implementation on the Petascale Roadrunner system demonstrates a 27% performance improvement at full system-scale on a kernel application. The approach is generally applicable to large-scale multi-core and accelerated systems where a differential in system communication performance exists.
Lee, Dong-Hoon; Lee, Do-Wan; Han, Bong-Soo
2016-01-01
The purpose of this study is the application of the scale invariant feature transform (SIFT) algorithm to stitch cervical-thoracic-lumbar (C-T-L) spine magnetic resonance (MR) images to provide a view of the entire spine in a single image. All MR images were acquired with a fast spin echo (FSE) pulse sequence using two MR scanners (1.5 T and 3.0 T). The stitching procedures for each part of the spine MR image were performed and implemented in a graphical user interface (GUI) configuration. The stitching process is performed in two modes: manual point-to-point (mPTP) selection, in which the user specifies corresponding matching points, and automated point-to-point (aPTP) selection, performed by the SIFT algorithm. The stitched images using the SIFT algorithm showed finely registered results, and quantitative measurements also showed small errors compared with the stitching algorithms commercially mounted in MRI systems. Our study presents a preliminary validation of applying the SIFT algorithm to spine MR images, and the results indicate that the proposed approach can perform well for the improvement of diagnosis. We believe that our approach can be helpful for clinical application and can be extended to other medical imaging modalities for image stitching. PMID:27064404
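A compressed sketch of the automated (aPTP) path using OpenCV's SIFT (cv2.SIFT_create is available in recent OpenCV builds). The file names are placeholders, and a clinical pipeline would add preprocessing and intensity blending; this shows only the feature-match-homography-warp skeleton.

```python
import cv2
import numpy as np

cerv = cv2.imread("cervical.png", cv2.IMREAD_GRAYSCALE)
thor = cv2.imread("thoracic.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(cerv, None)
k2, d2 = sift.detectAndCompute(thor, None)

# Match descriptors and keep pairs passing Lowe's ratio test.
matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(dst, src, cv2.RANSAC, 5.0)

# Warp the thoracic image into the cervical frame and paste both.
h, w = cerv.shape
canvas = cv2.warpPerspective(thor, H, (w, 2 * h))
canvas[:h, :w] = np.maximum(canvas[:h, :w], cerv)
cv2.imwrite("stitched.png", canvas)
```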
Characterizing MPI matching via trace-based simulation
Ferreira, Kurt Brian; Levy, Scott Larson Nicoll; Pedretti, Kevin; ...
2017-01-01
With the increased scale expected on future leadership-class systems, detailed information about the resource usage and performance of MPI message matching provides important insights into how to maintain application performance on next-generation systems. However, obtaining MPI message matching performance data is often not possible without significant effort. A common approach is to instrument an MPI implementation to collect relevant statistics. While this approach can provide important data, collecting matching data at runtime perturbs the application's execution, including its matching performance, and is highly dependent on the MPI library's matchlist implementation. In this paper, we introduce a trace-based simulation approach to obtain detailed MPI message matching performance data for MPI applications without perturbing their execution. Using a number of key parallel workloads, we demonstrate that this simulator approach can rapidly and accurately characterize matching behavior. Specifically, we use our simulator to collect several important statistics about the operation of the MPI posted and unexpected queues. For example, we present data about search lengths and the duration that messages spend in the queues waiting to be matched. Data gathered using this simulation-based approach have significant potential to aid hardware designers in determining resource allocation for MPI matching functions and provide application and middleware developers with insight into the scalability issues associated with MPI message matching.
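A toy version of the statistic such a simulator collects: replay receive posts and message arrivals from a trace and record how far each arrival walks down the posted-receive queue before matching. The trace records, and the simplified source/tag matching rule, are invented for illustration.

```python
from collections import deque

ANY = -1                  # wildcard source or tag
posted = deque()          # (source, tag) receives posted so far
search_lengths = []

def post_recv(source, tag):
    posted.append((source, tag))

def arrive(source, tag):
    for depth, (s, t) in enumerate(posted, start=1):
        if s in (source, ANY) and t in (tag, ANY):
            posted.remove((s, t))
            search_lengths.append(depth)
            return
    search_lengths.append(len(posted))  # unexpected message: full scan

post_recv(0, 7); post_recv(1, 7); post_recv(ANY, 9)
arrive(1, 7)    # walks past (0, 7): depth 2
arrive(3, 9)    # wildcard source matches at depth 2
arrive(2, 2)    # no match: lands in the unexpected queue
print("search lengths:", search_lengths)  # [2, 2, 1]
```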
von Heimburg, Erna; Medbø, Jon Ingulf; Sandsund, Mariann; Reinertsen, Randi Eidsmo
2013-01-01
Firefighters must meet minimum physical demands. The Norwegian Labour Inspection Authority (NLIA) has approved a standardised treadmill walking test and 3 simple strength tests for smoke divers. The results of the Trondheim test were compared with those of the NLIA tests, taking into account possible effects of age, experience level and gender. Four groups took part in the tests: 19 young experienced firefighters, 24 senior male firefighters, and inexperienced applicants (12 male and 8 female). Oxygen uptake (VO2) at exhaustion rose linearly with the duration of the treadmill test. Time spent on the Trondheim test was closely related to performance time and peak VO2 on the treadmill test. Senior experienced firefighters did not perform better than equally fit young applicants. However, female applicants performed more poorly on the Trondheim test than on the treadmill test. Performance on the Trondheim test was not closely related to muscle strength beyond a minimum. In conclusion, firefighters completing the Trondheim test in under 19 min meet the requirements of the NLIA treadmill test. The Trondheim test can be used as an alternative to the NLIA tests for testing aerobic fitness but not muscular strength. Women's results on the Trondheim test were poorer than their results on the NLIA treadmill test, probably because of their lower body mass.
Design and optimization of a portable LQCD Monte Carlo code using OpenACC
NASA Astrophysics Data System (ADS)
Bonati, Claudio; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Calore, Enrico; Schifano, Sebastiano Fabio; Silvi, Giorgio; Tripiccione, Raffaele
The present panorama of HPC architectures is extremely heterogeneous, ranging from traditional multi-core CPU processors, supporting a wide class of applications but delivering moderate computing performance, to many-core Graphics Processor Units (GPUs), exploiting aggressive data-parallelism and delivering higher performance for streaming computing applications. In this scenario, code portability (and performance portability) becomes necessary for easy maintainability of applications; this is very relevant in scientific computing, where code changes are very frequent, making it tedious and error-prone to keep different code versions aligned. In this work, we present the design and optimization of a state-of-the-art production-level LQCD Monte Carlo application, using the directive-based OpenACC programming model. OpenACC abstracts parallel programming to a descriptive level, relieving programmers from specifying how codes should be mapped onto the target architecture. We describe the implementation of a code fully written in OpenACC, and show that we are able to target several different architectures, including state-of-the-art traditional CPUs and GPUs, with the same code. We also measure performance, evaluating the computing efficiency of our OpenACC code on several architectures, comparing with GPU-specific implementations and showing that a good level of performance portability can be reached.
QoS support for end users of I/O-intensive applications using shared storage systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Marion Kei; Zhang, Xuechen; Jiang, Song
2011-01-19
I/O-intensive applications are becoming increasingly common on today's high-performance computing systems. While the performance of compute-bound applications can be effectively guaranteed with techniques such as space sharing or QoS-aware process scheduling, it remains a challenge to meet QoS requirements for end users of I/O-intensive applications using shared storage systems, because it is difficult to differentiate I/O services for different applications with individual quality requirements. Furthermore, it is difficult for end users to accurately specify performance goals to the storage system using I/O-related metrics such as request latency or throughput. As access patterns, request rates, and the system workload change over time, a fixed I/O performance goal, such as bounds on throughput or latency, can be expensive to achieve and may not lead to meaningful performance guarantees such as bounded program execution time. We propose a scheme supporting end users' QoS goals, specified in terms of program execution time, in shared storage environments. We automatically translate the users' performance goals into instantaneous I/O throughput bounds using a machine learning technique, and use dynamically determined service time windows to efficiently meet the throughput bounds. We have implemented this scheme in the PVFS2 parallel file system and have conducted an extensive evaluation. Our results show that this scheme can satisfy realistic end-user QoS requirements by making highly efficient use of the I/O resources. The scheme seeks to balance programs' attainment of QoS requirements, and saves as much of the remaining I/O capacity as possible for best-effort programs.
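A sketch of the translation step described above: learn, from past runs, a mapping between I/O throughput and program execution time, then invert it to turn a user's execution-time goal into a throughput bound. The linear-in-1/throughput model and the sample numbers are illustrative, not the paper's actual learner.

```python
import numpy as np

# (throughput MB/s, observed runtime s) pairs from profiling runs
runs = np.array([(20, 310), (40, 190), (80, 130), (160, 100), (320, 85)])

# Runtime ~ t_compute + bytes_moved / throughput => linear in 1/throughput.
X = np.vstack([np.ones(len(runs)), 1.0 / runs[:, 0]]).T
t_compute, data_term = np.linalg.lstsq(X, runs[:, 1], rcond=None)[0]

def required_throughput(runtime_goal_s):
    # Invert the fitted model to get the instantaneous throughput bound.
    return data_term / (runtime_goal_s - t_compute)

print(f"t_compute ~ {t_compute:.0f} s, data term ~ {data_term:.0f} MB")
print(f"need ~{required_throughput(120):.0f} MB/s for a 120 s goal")
```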
Membrane-based technologies for biogas separations.
Basu, Subhankar; Khan, Asim L; Cano-Odena, Angels; Liu, Chunqing; Vankelecom, Ivo F J
2010-02-01
Over the past two decades, membrane processes have gained a lot of attention for the separation of gases. They have been found to be very suitable for wide-scale applications owing to their reasonable cost, good selectivity and easily engineered modules. This critical review primarily focuses on the various aspects of membrane processes related to the separation of biogas, more specifically CO2 and H2S removal from CH4 and H2 streams. Considering the limitations of inorganic materials for membranes, the present review will only focus on work done with polymeric materials. An overview of the performance of commercial membranes and lab-made membranes, highlighting the problems associated with their applications, will be given first. The development studies carried out to enhance the performance of membranes for gas separation will be discussed in the subsequent section. This review has been broadly divided into three sections: (i) performance of commercial polymeric membranes, (ii) performance of lab-made polymeric membranes, and (iii) performance of mixed matrix membranes (MMMs) for gas separations. It will include structural modifications at the polymer level, polymer blending, as well as synthesis of mixed matrix membranes, for which the addition of silane-coupling agents and the selection of suitable fillers will receive special attention. Apart from an overview of the different membrane materials, the study will also highlight the effects of different operating conditions that eventually decide the performance and longevity of membrane applications in gas separations. The discussion will be largely restricted to studies carried out on polyimide (PI), cellulose acetate (CA), polysulfone (PSf) and polydimethylsiloxane (PDMS) membranes, as these membrane materials have been most widely used for commercial applications. Finally, the most important strategies that would ensure new commercial applications will be discussed (156 references).
Actuation Using Piezoelectric Materials: Application in Augmenters, Energy Harvesters, and Motors
NASA Technical Reports Server (NTRS)
Hasenoehrl, Jennifer
2012-01-01
Piezoelectric actuators are used in many manipulation, movement, and mobility applications as well as transducers and sensors. When used at the resonance frequencies of the piezoelectric stack, the actuator performs at its maximum actuation capability. In this Space Grant internship, three applications of piezoelectric actuators were investigated including hammering augmenters of rotary drills, energy harvesters, and piezo-motors. The augmenter shows improved drill performance over rotation only. The energy harvesters rely on moving fluid to convert mechanical energy into electrical power. Specific designs allow the harvesters more freedom to move, which creates more power. The motor uses the linear movement of the actuator with a horn applied to the side of a rotor to create rotational motion. Friction inhibits this motion and is to be minimized for best performance. Tests and measurements were made during this internship to determine the requirements for optimal performance of the studied mechanisms and devices.
Performance analysis and improvement of WPAN MAC for home networks.
Mehta, Saurabh; Kwak, Kyung Sup
2010-01-01
The wireless personal area network (WPAN) is an emerging wireless technology for future short range indoor and outdoor communication applications. The IEEE 802.15.3 medium access control (MAC) is proposed to coordinate the access to the wireless medium among the competing devices, especially for short range and high data rate applications in home networks. In this paper we use analytical modeling to study the performance analysis of WPAN (IEEE 802.15.3) MAC in terms of throughput, efficient bandwidth utilization, and delay with various ACK policies under error channel condition. This allows us to introduce a K-Dly-ACK-AGG policy, payload size adjustment mechanism, and Improved Backoff algorithm to improve the performance of the WPAN MAC. Performance evaluation results demonstrate the impact of our improvements on network capacity. Moreover, these results can be very useful to WPAN application designers and protocol architects to easily and correctly implement WPAN for home networking.
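A back-of-the-envelope sketch of why a delayed-ACK policy such as K-Dly-ACK-AGG raises throughput: one ACK exchange is amortized over K aggregated frames. The timing constants below are illustrative placeholders, not the IEEE 802.15.3 values.

```python
# Amortizing one ACK exchange over K aggregated frames raises MAC efficiency.
def throughput_mbps(k: int, payload_bits=8000, rate_mbps=55.0,
                    sifs_us=10.0, ack_us=30.0) -> float:
    tx_us = payload_bits / rate_mbps             # time to send one frame (us)
    cycle_us = k * tx_us + 2 * sifs_us + ack_us  # K frames share one ACK
    return k * payload_bits / cycle_us           # bits per microsecond == Mbps

for k in (1, 3, 5, 10):                          # Imm-ACK is the k == 1 case
    print(f"K={k:2d}: {throughput_mbps(k):.1f} Mbps")
```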
Martinez, Dani; Teixidó, Mercè; Font, Davinia; Moreno, Javier; Tresanchez, Marcel; Marco, Santiago; Palacín, Jordi
2014-03-27
This paper proposes the use of an autonomous assistant mobile robot in order to monitor the environmental conditions of a large indoor area and develop an ambient intelligence application. The mobile robot uses single high performance embedded sensors in order to collect and geo-reference environmental information such as ambient temperature, air velocity and orientation and gas concentration. The data collected with the assistant mobile robot is analyzed in order to detect unusual measurements or discrepancies and develop focused corrective ambient actions. This paper shows an example of the measurements performed in a research facility which have enabled the detection and location of an uncomfortable temperature profile inside an office of the research facility. The ambient intelligent application has been developed by performing some localized ambient measurements that have been analyzed in order to propose some ambient actuations to correct the uncomfortable temperature profile.
A Comparison of Three Programming Models for Adaptive Applications
NASA Technical Reports Server (NTRS)
Shan, Hong-Zhang; Singh, Jaswinder Pal; Oliker, Leonid; Biswas, Rupak; Kwak, Dochan (Technical Monitor)
2000-01-01
We study the performance and programming effort for two major classes of adaptive applications under three leading parallel programming models. We find that all three models can achieve scalable performance on state-of-the-art multiprocessor machines. The basic parallel algorithms needed for the different programming models to deliver their best performance are similar, but the implementations differ greatly, far beyond the fact of using explicit messages versus implicit loads/stores. Compared with MPI and SHMEM, CC-SAS (cache-coherent shared address space) provides substantial ease of programming at the conceptual and program-orchestration level, which often leads to performance gains. However, it may also suffer from the poor spatial locality of physically distributed shared data on large numbers of processors. Our CC-SAS implementation of the PARMETIS partitioner itself runs faster than in the other two programming models, and generates a more balanced result for our application.
Carbon and Carbon Hybrid Materials as Anodes for Sodium-Ion Batteries.
Zhong, Xiongwu; Wu, Ying; Zeng, Sifan; Yu, Yan
2018-02-12
Sodium-ion batteries (SIBs) have attracted much attention for application in large-scale grid energy storage owing to the abundance and low cost of sodium sources. However, low energy density and poor cycling life hinder practical application of SIBs. Recently, substantial efforts have been made to develop electrode materials to push forward large-scale practical applications. Carbon materials can be directly used as anode materials, and they show excellent sodium storage performance. Additionally, designing and constructing carbon hybrid materials is an effective strategy to obtain high-performance anodes for SIBs. In this review, we summarize recent research progress on carbon and carbon hybrid materials as anodes for SIBs. Nanostructural design to enhance the sodium storage performance of anode materials is discussed, and we offer some insight into the potential directions of and future high-performance anode materials for SIBs. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Performance Enhancement Strategies for Multi-Block Overset Grid CFD Applications
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Biswas, Rupak
2003-01-01
The overset grid methodology has significantly reduced time-to-solution of high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process resolves the geometrical complexity of the problem domain by using separately generated but overlapping structured discretization grids that periodically exchange information through interpolation. However, high performance computations of such large-scale realistic applications must be handled efficiently on state-of-the-art parallel supercomputers. This paper analyzes the effects of various performance enhancement strategies on the parallel efficiency of an overset grid Navier-Stokes CFD application running on an SGI Origin2000 machine. Specifically, the roles of asynchronous communication, grid splitting, and grid grouping strategies are presented and discussed. Details of a sophisticated graph partitioning technique for grid grouping are also provided. Results indicate that performance depends critically on the level of latency hiding and the quality of load balancing across the processors.
The performance of solar thermal electric power systems employing small heat engines
NASA Technical Reports Server (NTRS)
Pons, R. L.
1980-01-01
The paper presents a comparative analysis of small (10 to 100 KWe) heat engines for use with a solar thermal electric system employing the point-focusing, distributed receiver (PF-DR) concept. Stirling, Brayton, and Rankine cycle engines are evaluated for a nominal overall system power level of 1 MWe, although the concept is applicable to power levels up to at least 10 MWe. Multiple concentrators are electrically connected to achieve the desired plant output. Best performance is achieved with the Stirling engine, resulting in a system Levelized Busbar Energy Cost of just under 50 mills/kWH and a Capital Cost of $900/kW, based on the use of mass-produced components. Brayton and Rankine engines show somewhat less performance but are viable alternatives with particular benefits for special applications. All three engines show excellent performance for the small community application.
High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering
NASA Technical Reports Server (NTRS)
Maly, K.
1998-01-01
Monitoring is an essential process to observe and improve the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during execution or interaction with external objects (e.g., users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning, and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning. The filtering mechanism is an intrinsic component integrated with the monitoring architecture to reduce the volume of event traffic flow in the system, and thereby reduce the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications). This filtering architecture is used to monitor a collaborative distance-learning application to obtain debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work represents a major contribution by (1) surveying and evaluating existing event filtering mechanisms for monitoring LSD systems and (2) devising an integrated, scalable, high-performance event filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance, and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain the key characteristics of each technique. In addition, we discuss limitations of existing event filtering mechanisms and outline how our architecture improves key aspects of event filtering.
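A minimal sketch of the subscription-based filtering idea described above, assuming a simple publish/subscribe interface (all names are illustrative, not the paper's architecture): each management tool registers a predicate, and non-matching events are dropped at the monitor rather than forwarded.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Event:
    source: str
    kind: str
    value: float

@dataclass
class FilteringMonitor:
    subscriptions: list = field(default_factory=list)

    def subscribe(self, predicate: Callable[[Event], bool],
                  sink: Callable[[Event], None]):
        self.subscriptions.append((predicate, sink))

    def publish(self, event: Event):
        for predicate, sink in self.subscriptions:
            if predicate(event):          # drop non-matching events early
                sink(event)

monitor = FilteringMonitor()
monitor.subscribe(lambda e: e.kind == "latency" and e.value > 100.0,
                  lambda e: print(f"ALERT {e.source}: {e.value} ms"))
monitor.publish(Event("node42", "latency", 250.0))   # forwarded
monitor.publish(Event("node42", "cpu", 0.9))         # filtered out at the source
```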
PERI - Auto-tuning Memory Intensive Kernels for Multicore
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, David H; Williams, Samuel; Datta, Kaushik
2008-06-24
We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
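The search-based auto-tuning loop can be sketched as follows. This is a toy stand-in, not the paper's code generators: it benchmarks variants of a simple blocked stencil over a small parameter space and keeps the fastest, where the real system generates and times many kernel variants per platform.

```python
import time
import numpy as np

def stencil(u, block):
    """Blocked 3-point stencil sweep; the block size is the tuning parameter."""
    out = np.empty_like(u)
    out[0], out[-1] = u[0], u[-1]
    for lo in range(1, len(u) - 1, block):
        hi = min(lo + block, len(u) - 1)
        out[lo:hi] = 0.25 * u[lo-1:hi-1] + 0.5 * u[lo:hi] + 0.25 * u[lo+1:hi+1]
    return out

def autotune(blocks, u, trials=5):
    """Exhaustively time each candidate on this machine; keep the fastest."""
    best, best_t = None, float("inf")
    for b in blocks:
        t0 = time.perf_counter()
        for _ in range(trials):
            stencil(u, b)
        t = (time.perf_counter() - t0) / trials
        if t < best_t:
            best, best_t = b, t
    return best, best_t

u = np.random.default_rng(0).random(1_000_000)
print(autotune([1024, 8192, 65536, 262144], u))
```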
Center for Technology for Advanced Scientific Component Software (TASCS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Govindaraju, Madhusudhan
Advanced Scientific Computing Research Computer Science FY 2010 Report Center for Technology for Advanced Scientific Component Software: Distributed CCA State University of New York, Binghamton, NY, 13902 Summary The overall objective of Binghamton's involvement is to work on enhancements of the CCA environment, motivated by the applications and research initiatives discussed in the proposal. This year we are working on re-focusing our design and development efforts to develop proof-of-concept implementations that have the potential to significantly impact scientific components. We worked on developing parallel implementations for non-hydrostatic code and worked on a model coupling interface for biogeochemical computations coded in MATLAB. We also worked on the design and implementation of modules that will be required for the emerging MapReduce model to be effective for scientific applications. Finally, we focused on optimizing the processing of scientific datasets on multi-core processors. Research Details We worked on the following research projects that we are applying to CCA-based scientific applications. 1. Non-Hydrostatic Hydrodynamics: Non-hydrostatic hydrodynamics are significantly more accurate at modeling internal waves that may be important in lake ecosystems. Non-hydrostatic codes, however, are significantly more computationally expensive, often prohibitively so. We have worked with Chin Wu at the University of Wisconsin to parallelize non-hydrostatic code. We have obtained a maximum speedup of about 26 times. Although this is significant progress, we hope to improve the performance further, such that it becomes a practical alternative to hydrostatic codes. 2. Model coupling for water-based ecosystems: To answer pressing questions about water resources requires that physical models (hydrodynamics) be coupled with biological and chemical models. Most hydrodynamics codes are written in Fortran, however, while most ecologists work in MATLAB. This disconnect creates a great barrier. To address this, we are working on a model coupling interface that will allow biogeochemical computations written in MATLAB to couple with Fortran codes. This will greatly improve the productivity of ecosystem scientists. 3. Low-overhead and Elastic MapReduce Implementation Optimized for Memory- and CPU-Intensive Applications: Since its inception, MapReduce has frequently been associated with Hadoop and large-scale datasets. Its deployment at Amazon in the cloud, and its applications at Yahoo! for large-scale distributed document indexing and database building, among other tasks, have thrust MapReduce to the forefront of the data processing application domain. The applicability of the paradigm, however, extends far beyond its use with data-intensive applications and disk-based systems, and can also be brought to bear in processing small but CPU-intensive distributed applications. MapReduce, however, carries its own burdens. Through experiments using Hadoop in the context of diverse applications, we uncovered latencies and delay conditions potentially inhibiting the expected performance of a parallel execution in CPU-intensive applications. Furthermore, as it currently stands, MapReduce is favored for data-centric applications, and as such tends to be solely applied to disk-based applications. The paradigm falls short in bringing its novelty to diskless systems dedicated to in-memory applications, and to compute-intensive programs processing much smaller data but requiring intensive computations.
In this project, we focused both on the performance of processing large-scale hierarchical data in distributed scientific applications and on the processing of smaller but demanding input sizes primarily used in diskless, memory-resident I/O systems. We designed LEMO-MR [1], a low-overhead, elastic, on-demand fault-tolerant implementation of MapReduce, configurable for in-memory applications and optimized for both on-disk and in-memory workloads. We conducted experiments to identify not only the necessary components of this model, but also the trade-offs and factors to be considered. We have initial results to show the efficacy of our implementation in terms of the potential speedup that can be achieved for representative data sets used by cloud applications. We have quantified the performance gains exhibited by our MapReduce implementation over Apache Hadoop in a compute-intensive environment. 4. Cache Performance Optimization for Processing XML- and HDF-based Application Data on Multi-core Processors: It is important to design and develop scientific middleware libraries to harness the opportunities presented by emerging multi-core processors. Implementations of scientific middleware and applications that do not adapt to the programming paradigm when executing on emerging processors can severely impact overall performance. In this project, we focused on the utilization of the L2 cache, which is a critical shared resource on chip multiprocessors (CMP). The access pattern of the shared L2 cache, which is dependent on how the application schedules and assigns processing work to each thread, can either enhance or hurt the ability to hide memory latency on a multi-core processor. Therefore, while processing scientific datasets such as HDF5, it is essential to conduct fine-grained analysis of cache utilization to inform scheduling decisions in multi-threaded programming. In this project, using the TAU toolkit for performance feedback from dual- and quad-core machines, we conducted performance analysis and made recommendations on how processing threads can be scheduled on multi-core nodes to enhance the performance of a class of scientific applications that requires processing of HDF5 data. In particular, we quantified the gains associated with the use of the adaptations we have made to the Cache-Affinity and Balanced-Set scheduling algorithms to improve L2 cache performance, and hence the overall application execution time [2]. References: 1. Zacharia Fadika, Madhusudhan Govindaraju, ``MapReduce Implementation for Memory-Based and Processing Intensive Applications'', accepted in 2nd IEEE International Conference on Cloud Computing Technology and Science, Indianapolis, USA, Nov 30 - Dec 3, 2010. 2. Rajdeep Bhowmik, Madhusudhan Govindaraju, ``Cache Performance Optimization for Processing XML-based Application Data on Multi-core Processors'', in proceedings of The 10th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, May 17-20, 2010, Melbourne, Victoria, Australia. Contact Information: Madhusudhan Govindaraju Binghamton University State University of New York (SUNY) mgovinda@cs.binghamton.edu Phone: 607-777-4904
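The in-memory MapReduce style that LEMO-MR targets can be illustrated with a minimal sketch (hypothetical function names, word-count workload): map tasks run in a process pool, intermediate pairs are shuffled by key in memory, and reduce runs per key, with no disk-based staging as in Hadoop.

```python
from collections import defaultdict
from multiprocessing import Pool

def map_words(chunk: str):
    return [(w.lower(), 1) for w in chunk.split()]

def reduce_counts(key, values):
    return key, sum(values)

def mapreduce(chunks, mapper, reducer, workers=4):
    with Pool(workers) as pool:
        mapped = pool.map(mapper, chunks)      # parallel map phase
    groups = defaultdict(list)                 # in-memory shuffle by key
    for pairs in mapped:
        for k, v in pairs:
            groups[k].append(v)
    return dict(reducer(k, vs) for k, vs in groups.items())

if __name__ == "__main__":
    docs = ["the quick brown fox", "the lazy dog", "the fox"]
    print(mapreduce(docs, map_words, reduce_counts))
```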
Wetherell, A
1996-01-01
This paper discusses the use of psychological performance tests to assess the effects of environmental stressors. The large number and the variety of performance tests are illustrated, and the differences between performance tests and other psychological tests are described in terms of their design, construction, use, and purpose. The stressor emphasis is on the effects of drugs since that is where most performance tests have found their main application, although other stressors, e.g., fatigue, toxic chemicals, are mentioned where appropriate. Diazepam is used as an example. There is no particular performance emphasis since the tests are intended to have wide applicability. However, vehicle-driving performance is discussed because it has been the subject of a great deal of research and is probably one of the most important areas of application. Performance tests are discussed in terms of the four main underlying models--factor analysis, general information processing, multiple resource and strategy models, and processing-stage models--and in terms of their psychometric properties--sensitivity, reliability, and content, criterion, construct, and face validity. Some test taxonomies are presented. Standardization is also discussed with reference to the reaction time, mathematical processing, memory search, spatial processing, unstable tracking, verbal processing, and dual task tests used in the AGARD STRES battery. Some comments on measurement strengths and appropriate study designs and methods are included. PMID:9182033
1979-04-01
[Garbled report documentation page; only fragments are recoverable: a title fragment "...FRONT STEEL MILL CONNEAUT, OHIO"; authors Paul G. Leuchner and Gregory P. Keppel; performing organization U.S. Army Engineer District, Buffalo, 1776 Niagara...; subject: a Department of the Army permit to perform certain work in Lake Erie and its tributaries, with activities proposed by the applicant including the construction of a water...]
Asymmetric Core Computing for U.S. Army High-Performance Computing Applications
2009-04-01
[Garbled report form and table-of-contents residue; recoverable content: the report surveys asymmetric core computing technologies for U.S. Army high-performance computing, including the Cell processor (with a note on the PlayStation 4, "should one be announced") and FPGAs, where reconfigurable computing refers to performing computations using Field Programmable Gate Arrays; front-matter sections include Introduction, Relevant Technologies, Technical Approach, and Research and Development Highlights.]
Study of advanced techniques for determining the long-term performance of components
NASA Technical Reports Server (NTRS)
1972-01-01
A study was conducted of techniques having the capability of determining the performance and reliability of components for spacecraft liquid propulsion applications on long-term missions. The study utilized two major approaches: improvement of the existing technology, and the evolution of new technology. The criteria established and methods evolved are applicable to valve components. Primary emphasis was placed on the oxygen difluoride/diborane propellant combination. The investigation included analysis, fabrication, and tests of experimental equipment to provide data and performance criteria.
USDA-ARS?s Scientific Manuscript database
An aerial variable-rate application system consisting of a DGPS-based guidance system, automatic flow controller, and hydraulically controlled pump/valve was evaluated for response time to rapidly changing flow requirements and accuracy of application. Spray deposition position error was evaluated ...
Sealed-cell nickel-cadmium battery applications manual
NASA Technical Reports Server (NTRS)
Scott, W. R.; Rusta, D. W.
1979-01-01
The design, procurement, testing, and application of aerospace quality, hermetically sealed nickel-cadmium cells and batteries are presented. Cell technology, cell and battery development, and spacecraft applications are emphasized. Long term performance is discussed in terms of the effect of initial design, process, and application variables. Design guidelines and practices are given.
77 FR 51024 - Information Collection Being Reviewed by the Federal Communications Commission
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-23
... interference environment would allow the applicant to use a less stringent Category B antenna (although the applicant could choose to use a higher performance Category A antenna); The applicant specifically acknowledges its duty to upgrade to a Category A antenna and come into compliance with the applicable...
10 CFR 800.102 - Review by Application Evaluation Panel.
Code of Federal Regulations, 2011 CFR
2011-01-01
... arrange for risk analysis, independent of any such analysis submitted by or on behalf of the applicant. Risk analysis shall be directed both to the loan request and to applicant's prospective performance of... risk analysis, and shall give its conclusions in writing to the Application Approving Official, with...
10 CFR 800.102 - Review by Application Evaluation Panel.
Code of Federal Regulations, 2013 CFR
2013-01-01
... arrange for risk analysis, independent of any such analysis submitted by or on behalf of the applicant. Risk analysis shall be directed both to the loan request and to applicant's prospective performance of... risk analysis, and shall give its conclusions in writing to the Application Approving Official, with...
10 CFR 800.102 - Review by Application Evaluation Panel.
Code of Federal Regulations, 2012 CFR
2012-01-01
... arrange for risk analysis, independent of any such analysis submitted by or on behalf of the applicant. Risk analysis shall be directed both to the loan request and to applicant's prospective performance of... risk analysis, and shall give its conclusions in writing to the Application Approving Official, with...
10 CFR 800.102 - Review by Application Evaluation Panel.
Code of Federal Regulations, 2010 CFR
2010-01-01
... arrange for risk analysis, independent of any such analysis submitted by or on behalf of the applicant. Risk analysis shall be directed both to the loan request and to applicant's prospective performance of... risk analysis, and shall give its conclusions in writing to the Application Approving Official, with...
10 CFR 800.102 - Review by Application Evaluation Panel.
Code of Federal Regulations, 2014 CFR
2014-01-01
... arrange for risk analysis, independent of any such analysis submitted by or on behalf of the applicant. Risk analysis shall be directed both to the loan request and to applicant's prospective performance of... risk analysis, and shall give its conclusions in writing to the Application Approving Official, with...
Physical Properties and Durability of New Materials for Space and Commercial Applications
NASA Technical Reports Server (NTRS)
Hambourger, Paul D.
2003-01-01
To develop and test new materials for use in space power systems and related space and commercial applications, to assist industry in the application of these materials, and to achieve an adequate understanding of the mechanisms by which the materials perform in their intended applications.
A Performance-Oriented Approach to E-Learning in the Workplace
ERIC Educational Resources Information Center
Wang, Minhong; Ran, Weijia; Liao, Jian; Yang, Stephen J. H.
2010-01-01
Despite the ever-increasing practice of using e-learning in the workplace, most of the applications perform poorly in motivating employees to learn. Most workplace e-learning applications fail to meet the needs of learners and ultimately fail to serve the organization's quest for success. To solve this problem, we need to examine what workplace…
Code of Federal Regulations, 2010 CFR
2010-07-01
... 34 Education 3 2010-07-01 How does the Secretary use an applicant's performance under a previous development grant when awarding a development grant? 606.24 Section 606.24 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 34 Education 3 2010-07-01 How does the Secretary use an applicant's performance under a previous development grant when awarding a development grant? 607.24 Section 607.24 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION...
ERIC Educational Resources Information Center
Aljraiwi, Seham Salman
2017-01-01
The current study proposes a web-applications-based learning environment to promote teaching and learning activities in the classroom. It also helps teachers facilitate learners' contributions in the process of learning and improve their motivation and performance. The case study illustrated that female students were more interested in learning…
Modular HPC I/O characterization with Darshan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, Shane; Carns, Philip; Harms, Kevin
2016-11-13
Contemporary high-performance computing (HPC) applications encompass a broad range of distinct I/O strategies and are often executed on a number of different compute platforms in their lifetime. These large-scale HPC platforms employ increasingly complex I/O subsystems to provide a suitable level of I/O performance to applications. Tuning I/O workloads for such a system is nontrivial, and the results generally are not portable to other HPC systems. I/O profiling tools can help to address this challenge, but most existing tools instrument only specific components within the I/O subsystem, providing a limited perspective on I/O performance. The increasing diversity of scientific applications and computing platforms calls for greater flexibility and scope in I/O characterization.
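The counter-based style of characterization (as opposed to full tracing) can be illustrated with a generic sketch. This is not Darshan's actual API or instrumentation mechanism, just the idea: wrap an I/O layer so each operation updates per-file counters, and emit a compact summary at exit.

```python
import atexit
from collections import defaultdict

# Per-file operation counters, updated on every wrapped call.
counters = defaultdict(lambda: {"reads": 0, "writes": 0,
                                "bytes_read": 0, "bytes_written": 0})

class ProfiledFile:
    """Thin wrapper that counts I/O operations instead of tracing them."""
    def __init__(self, path, mode="r"):
        self._f, self._path = open(path, mode), path
    def read(self, n=-1):
        data = self._f.read(n)
        c = counters[self._path]
        c["reads"] += 1
        c["bytes_read"] += len(data)
        return data
    def write(self, data):
        n = self._f.write(data)
        c = counters[self._path]
        c["writes"] += 1
        c["bytes_written"] += n
        return n
    def close(self):
        self._f.close()

@atexit.register
def report():                  # compact per-file summary, not a full trace
    for path, c in counters.items():
        print(path, c)

f = ProfiledFile("demo.txt", "w")
f.write("hello\n")
f.close()
```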
A Review of the CMOS Buried Double Junction (BDJ) Photodetector and its Applications
Feruglio, Sylvain; Lu, Guo-Neng; Garda, Patrick; Vasilescu, Gabriel
2008-01-01
A CMOS Buried Double Junction (BDJ) photodetector consists of two vertically stacked PN photodiodes. It can be operated as a photodiode with improved performance and a wavelength-sensitive response. This paper presents a review of this device and its applications. The CMOS implementation and operating principle are described first, including several key aspects directly related to device performance, such as surface reflection, photon absorption and electron-hole pair generation, and photocurrent and dark current generation. SPICE modelling of the detector is then presented. Next, design and process considerations are proposed in order to improve BDJ performance. Finally, a survey of several BDJ-detector-based image sensors illustrates the device's applications. PMID:27873887
Software-defined Radio Based Measurement Platform for Wireless Networks
Chao, I-Chun; Lee, Kang B.; Candell, Richard; Proctor, Frederick; Shen, Chien-Chung; Lin, Shinn-Yan
2015-01-01
End-to-end latency is critical to many distributed applications and services that are based on computer networks. There has been a dramatic push to adopt wireless networking technologies and protocols (such as WiFi, ZigBee, WirelessHART, Bluetooth, ISA100.11a, etc.) into time-critical applications. Examples of such applications include industrial automation, telecommunications, power utility, and financial services. While performance measurement of wired networks has been extensively studied, measuring and quantifying the performance of wireless networks face new challenges and demand different approaches and techniques. In this paper, we describe the design of a measurement platform based on the technologies of software-defined radio (SDR) and IEEE 1588 Precision Time Protocol (PTP) for evaluating the performance of wireless networks. PMID:27891210
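The core measurement enabled by PTP-disciplined clocks can be sketched in a few lines: with transmitter and receiver synchronized, one-way latency is the receive timestamp minus the send timestamp carried in the probe packet. The packet format and in-process demonstration below are illustrative assumptions, not the platform's actual protocol.

```python
import struct
import time

def make_probe(seq: int) -> bytes:
    # 8-byte sequence number + 8-byte send timestamp (ns since epoch)
    return struct.pack("!Qq", seq, time.time_ns())

def one_way_latency_ns(packet: bytes) -> tuple[int, int]:
    seq, t_send = struct.unpack("!Qq", packet)
    # Only meaningful if both endpoint clocks are PTP-synchronized.
    return seq, time.time_ns() - t_send

pkt = make_probe(1)
time.sleep(0.002)              # stand-in for the wireless hop under test
print(one_way_latency_ns(pkt))
```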
Scientific Programming Using Java: A Remote Sensing Example
NASA Technical Reports Server (NTRS)
Prados, Don; Mohamed, Mohamed A.; Johnson, Michael; Cao, Changyong; Gasser, Jerry
1999-01-01
This paper presents results of a project to port remote sensing code from the C programming language to Java. The advantages and disadvantages of using Java versus C as a scientific programming language in remote sensing applications are discussed. Remote sensing applications deal with voluminous data that require effective memory management, such as buffering operations, when processed. Some of these applications also implement complex computational algorithms, such as Fast Fourier Transformation analysis, that are very performance intensive. Factors considered include performance, precision, complexity, rapidity of development, ease of code reuse, ease of maintenance, memory management, and platform independence. Performance of radiometric calibration code written in Java for the graphical user interface and of using C for the domain model are also presented.
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda A [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2012-01-10
Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Cambridge, MA; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2012-04-17
Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
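A minimal sketch of the mechanism both patents describe, using a nonblocking barrier as the blocking operation (mpi4py for illustration; the power-control hook is hypothetical): each node drops power as soon as it begins the operation and restores it once the barrier completes, i.e., once all nodes have begun.

```python
from mpi4py import MPI

def set_cpu_power(level: str):
    pass  # hypothetical hook: would call a DVFS/power-gating interface here

comm = MPI.COMM_WORLD
request = comm.Ibarrier()   # begin the synchronizing operation asynchronously
set_cpu_power("low")        # reduce power while waiting on the other nodes
request.Wait()              # completes once every rank has entered the barrier
set_cpu_power("nominal")    # restore power: all nodes have begun the operation
```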
Air Force electrochemical power research and technology program for space applications
NASA Technical Reports Server (NTRS)
Allen, Douglas
1987-01-01
An overview is presented of the existing Air Force electrochemical power, battery, and fuel cell programs for space application. Present thrusts are described along with anticipated technology availability dates. Critical problems to be solved before system applications occur are highlighted. Areas of needed performance improvement for the batteries and fuel cells presently used are outlined, including target dates for key demonstrations of advanced technology. Anticipated performance and current schedules for present technology programs are reviewed, as are programs that support conventional military satellite power systems and special high-power applications. Battery types include bipolar lead-acid, nickel-cadmium, silver-zinc, nickel-hydrogen, sodium-sulfur, and some candidate advanced couples. Fuel cells for pulsed and transportation power applications are discussed, as are some candidate advanced regenerative concepts.
Innovative ceramic slab lasers for high power laser applications
NASA Astrophysics Data System (ADS)
Lapucci, Antonio; Ciofini, Marco
2005-09-01
Diode Pumped Solid State Lasers (DPSSL) are gaining increasing interest for high-power industrial applications, given the continuous improvement in the reliability and affordability of high-power diode laser technology. These sources open new windows in the parameter space for traditional applications such as cutting, welding, marking, and engraving of highly reflective metallic materials. Other interesting applications for this kind of source include high-speed thermal printing, precision drilling, selective soldering, and thin-film etching. In this paper we examine the most important DPSS laser source types for industrial applications and describe in detail the performance of several slab laser configurations investigated at our facilities. The advantages and drawbacks of the different architectures are briefly compared in terms of performance, system complexity, and ease of scalability to the multi-kW level.
RAPPORT: running scientific high-performance computing applications on the cloud.
Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt
2013-01-28
Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.
NASA Astrophysics Data System (ADS)
Tyson, Eric J.; Buckley, James; Franklin, Mark A.; Chamberlain, Roger D.
2008-10-01
The imaging atmospheric Cherenkov technique for high-energy gamma-ray astronomy is emerging as an important new technique for studying the high-energy universe. Current experiments have data rates of ≈20 TB/year and duty cycles of about 10%. In the future, more sensitive experiments may produce up to 1000 TB/year. The data analysis task for these experiments requires keeping up with this data rate in close to real time. Such data analysis is a classic example of a streaming application with very high performance requirements. This class of application often benefits greatly from the use of non-traditional approaches to computation, including special-purpose hardware (FPGAs and ASICs) or sophisticated parallel processing techniques. However, designing, debugging, and deploying to these architectures is difficult, and thus they are not widely used by the astrophysics community. This paper presents the Auto-Pipe design toolset that has been developed to address many of the difficulties in taking advantage of complex streaming computer architectures for such applications. Auto-Pipe incorporates a high-level coordination language, functional and performance simulation tools, and the ability to deploy applications to sophisticated architectures. Using the Auto-Pipe toolset, we have implemented the front-end portion of an imaging Cherenkov data analysis application, suitable for real-time or offline analysis. The application operates on data from the VERITAS experiment, and shows how Auto-Pipe can greatly ease performance optimization and application deployment across a wide variety of platforms. We demonstrate a performance improvement over a traditional software approach of 32x using an FPGA solution and 3.6x using a multiprocessor-based solution.
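The streaming-pipeline structure that Auto-Pipe coordinates can be sketched with ordinary threads and queues (this is not Auto-Pipe's coordination language, and the stage functions are stand-ins for the Cherenkov front-end computations): stages communicate only through channels, which is what makes remapping a stage to an FPGA or another processor possible.

```python
import threading
import queue

STOP = object()                              # end-of-stream sentinel

def stage(fn, inq, outq):
    """Run one pipeline stage: pull, compute, push; forward the sentinel."""
    while True:
        item = inq.get()
        if item is STOP:
            outq.put(STOP)
            return
        out = fn(item)
        if out is not None:                  # a stage may drop items (filtering)
            outq.put(out)

calibrate = lambda x: x * 1.05               # placeholder gain calibration
clean = lambda x: x if x > 1.0 else None     # placeholder image-cleaning cut

q_in, q_mid, q_out = queue.Queue(), queue.Queue(), queue.Queue()
for fn, a, b in ((calibrate, q_in, q_mid), (clean, q_mid, q_out)):
    threading.Thread(target=stage, args=(fn, a, b), daemon=True).start()

for sample in (0.5, 1.2, 3.3):
    q_in.put(sample)
q_in.put(STOP)

while (item := q_out.get()) is not STOP:
    print(f"passed cleaning: {item:.2f}")
```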
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, Adam
2007-05-22
MpiGraph consists of an MPI application called mpiGraph, written in C, to measure message bandwidth, and an associated crunch_mpiGraph script, written in Perl, to process the application output into an HTML report. The mpiGraph application is designed to inspect the health and scalability of a high-performance interconnect while under heavy load. This is useful to detect hardware and software problems in a system, such as slow nodes, links, switches, or contention in switch routing. It is also useful to characterize how interconnect performance changes with different settings or how one interconnect type compares to another.
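A sketch of the style of measurement mpiGraph performs (not the actual tool, which times all rank pairs): every rank streams large messages around a ring and reports the sustained bandwidth to its neighbor, so slow nodes or links stand out as low entries.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
peer = (rank + 1) % size                     # ring: send right, receive from left
sendbuf = np.ones(1 << 20, dtype=np.uint8)   # 1 MiB payload
recvbuf = np.empty_like(sendbuf)

comm.Barrier()                               # start all ranks together
t0 = MPI.Wtime()
iters = 32
for _ in range(iters):                       # sustained stream around the ring
    comm.Sendrecv(sendbuf, dest=peer,
                  recvbuf=recvbuf, source=(rank - 1) % size)
elapsed = MPI.Wtime() - t0
print(f"rank {rank} -> {peer}: {iters * sendbuf.nbytes / elapsed / 1e6:.0f} MB/s")
```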
Marshall Application Realignment System (MARS) Architecture
NASA Technical Reports Server (NTRS)
Belshe, Andrea; Sutton, Mandy
2010-01-01
The Marshall Application Realignment System (MARS) Architecture project was established to meet the certification requirements of the Department of Defense Architecture Framework (DoDAF) V2.0 Federal Enterprise Architecture Certification (FEAC) Institute program and to provide added value to the Marshall Space Flight Center (MSFC) Application Portfolio Management process. The MARS Architecture aims to: (1) address the NASA MSFC Chief Information Officer (CIO) strategic initiative to improve Application Portfolio Management (APM) by optimizing investments and improving portfolio performance, and (2) develop a decision-aiding capability by which applications registered within the MSFC application portfolio can be analyzed and considered for retirement or decommission. The MARS Architecture describes a to-be target capability that supports application portfolio analysis against scoring measures (based on value) and overall portfolio performance objectives (based on enterprise needs and policies). This scoring and decision-aiding capability supports the process by which MSFC application investments are realigned or retired from the application portfolio. The MARS Architecture is a multi-phase effort to: (1) conduct strategic architecture planning and knowledge development based on the DoDAF V2.0 six-step methodology, (2) describe one architecture through multiple viewpoints, (3) conduct portfolio analyses based on a defined operational concept, and (4) enable a new capability to support the MSFC enterprise IT management mission, vision, and goals. This report documents Phase 1 (Strategy and Design), which includes discovery, planning, and development of initial architecture viewpoints. Phase 2 will move forward the process of building the architecture, widening the scope to include application realignment (in addition to application retirement), and validating the underlying architecture logic before moving into Phase 3. The MARS Architecture key stakeholders are most interested in Phase 3 because this is where the data analysis, scoring, and recommendation capability is realized. Stakeholders want to see the benefits derived from reducing the steady-state application base and identify opportunities for portfolio performance improvement and application realignment.
Breitkopf, Daniel M; Vaughan, Lisa E; Hopkins, Matthew R
To determine which individual residency applicant characteristics were associated with improved performance on standardized behavioral interviews. Behavioral interviewing has become a common technique for assessing resident applicants, yet few data exist on factors that predict success during the behavioral interview component of the residency application process. Interviewers were trained in behavioral interviewing techniques before each application season, and standardized questions were used. Behavioral interview scores and Electronic Residency Application Service data from residency applicants were collected prospectively for 3 years at the Accreditation Council for Graduate Medical Education-accredited obstetrics-gynecology residency program at a Midwestern academic medical center. Medical students applying to this single obstetrics-gynecology residency program from 2012 to 2014 participated in the study. Data were collected from 104 applicants during 3 successive interview seasons. Applicant age was associated with higher overall scores on questions about leadership, coping, and conflict management (for applicants aged ≤25, 26-27, or ≥28 y, mean scores were 15.2, 16.0, and 17.2, respectively; p = 0.03), as was a history of employment before medical school (16.8 vs 15.5; p = 0.03). Applicants who participated in collegiate team sports scored lower on questions about influence/persuasion, initiative, and relationship management compared with those who did not (mean, 15.5 vs 17.1; p = 0.02). Advanced applicant age and a history of work experience before medical school may improve skills in dealing with difficult situations and offer opportunities in leadership. In the behavioral interview format, having relevant examples from life experience to share during the interviews may improve the quality of the applicant's responses. Increased awareness of the factors predicting interview performance helps inform the selection process and allows program directors to prioritize the most appropriate candidates for the match. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Gong, Chuanhui; Zhang, Yuxi; Chen, Wei; Chu, Junwei; Lei, Tianyu; Pu, Junru; Dai, Liping; Wu, Chunyang; Cheng, Yuhua; Zhai, Tianyou; Li, Liang; Xiong, Jie
2017-12-01
With the continuous exploration of 2D transition metal dichalcogenides (TMDs), novel high-performance devices based on the remarkable electronic and optoelectronic natures of 2D TMDs are increasingly emerging. As fresh blood of the 2D TMD family, anisotropic MTe2 and ReX2 (M = Mo, W; X = S, Se) have drawn increasing attention owing to their low-symmetry structures and charming mechanical, electronic, and optoelectronic properties, which are suitable for applications in field-effect transistors (FETs), photodetectors, and thermoelectric and piezoelectric devices, especially catering to anisotropic devices. Herein, a comprehensive review is presented, concentrating on their recent progress and various applications in recent years. First, the crystalline structure and the origin of the strong anisotropy, characterized by various techniques, are discussed. Specifically, the preparation of these 2D materials is presented and various growth methods are summarized. Then, high-performance applications of these anisotropic TMDs, including FETs, photodetectors, and thermoelectric and piezoelectric applications, are discussed. Finally, the conclusion and outlook for these applications are proposed.
Kluge, Annette; Termer, Anatoli
2017-03-01
The present article describes the design process of a fault-finding application for mobile devices, which was built to support workers' performance by guiding them through a systematic strategy to stay focused during a fault-finding process. In collaboration with a project partner in the manufacturing industry, a fault diagnosis application was conceptualized based on a human-centered design approach (ISO 9241-210:2010). A field study with 42 maintenance workers was conducted for the purpose of evaluating the performance enhancement of fault finding in three different scenarios as well as for assessing the workers' acceptance of the technology. Workers using the mobile device application were twice as fast at fault finding as the control group without the application and perceived the application as very useful. The results indicate a vast potential of the mobile application for fault diagnosis in contemporary manufacturing systems. Copyright © 2016 Elsevier Ltd. All rights reserved.
Aggregate Interview Method of ranking orthopedic applicants predicts future performance.
Geissler, Jacqueline; VanHeest, Ann; Tatman, Penny; Gioe, Terence
2013-07-01
This article evaluates and describes a process of ranking orthopedic applicants using what the authors term the Aggregate Interview Method. The authors hypothesized that applicants ranked higher by this method at their institution would perform better than those ranked lower on multiple measures of resident performance. A retrospective review of 115 orthopedic residents was performed at the authors' institution. Residents were grouped into 3 categories by matching rank numbers: 1-5, 6-14, and 15 or higher. Each rank group was compared with resident performance as measured by faculty evaluations, the Orthopaedic In-Training Examination (OITE), and American Board of Orthopaedic Surgery (ABOS) test results. Residents ranked 1-5 scored significantly better on patient care, behavior, and overall competence by faculty evaluation (P<.05). Residents ranked 1-5 scored higher on the OITE compared with those ranked 6-14 during postgraduate years 2 and 3 (P≤.05). Graduates who had been ranked 1-5 had a 100% pass rate on the ABOS part 1 examination on the first attempt. The most favorably ranked residents performed at or above the level of other residents in the program; they did not score inferiorly on any measure. These results support the authors' method of ranking residents. The rigorous Aggregate Interview Method for ranking applicants consistently identified orthopedic resident candidates who scored highly on the Accreditation Council for Graduate Medical Education resident core competencies as measured by faculty evaluations, performed above the national average on the OITE, and passed the ABOS part 1 examination at rates exceeding the national average. Copyright 2013, SLACK Incorporated.
TADSim: Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics
Mniszewski, Susan M.; Junghans, Christoph; Voter, Arthur F.; ...
2015-04-16
Next-generation high-performance computing will require more scalable and flexible performance prediction tools to evaluate software--hardware co-design choices relevant to scientific applications and hardware architectures. Here, we present a new class of tools called application simulators—parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation. Parameterized choices for the algorithmic method and hardware options provide a rich space for design exploration and allow us to quickly find well-performing software--hardware combinations. We demonstrate our approach with a TADSim simulator that models the temperature-accelerated dynamics (TAD) method, an algorithmically complex and parameter-rich member of the accelerated molecular dynamics (AMD) family of molecular dynamics methods. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We accomplish this by identifying the time-intensive elements, quantifying algorithm steps in terms of those elements, abstracting them out, and replacing them by the passage of time. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We extend TADSim to model algorithm extensions, such as speculative spawning of the compute-bound stages, and predict performance improvements without having to implement such a method. Validation against the actual TAD code shows close agreement for the evolution of an example physical system, a silver surface. Finally, focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights and suggested extensions.
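The discrete-event core that an application simulator like TADSim builds on can be sketched briefly (illustrative event names and durations, not the actual TADSim model): events carry timestamps in a priority queue, and expensive computation is modeled by advancing simulated time rather than performing the work.

```python
import heapq

def simulate(events, horizon=20.0):
    """Pop the earliest event, log it, and model work by advancing time only."""
    log = []
    heapq.heapify(events)                    # (timestamp, event_name) pairs
    while events:
        clock, name = heapq.heappop(events)
        if clock > horizon:
            break
        log.append((clock, name))
        if name == "md_segment":             # the costly MD stage is represented
            heapq.heappush(events, (clock + 5.0, "md_segment"))  # by its duration
    return log

print(simulate([(0.0, "md_segment"), (2.0, "transition_check")]))
```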
Network protocols for real-time applications
NASA Technical Reports Server (NTRS)
Johnson, Marjory J.
1987-01-01
The Fiber Distributed Data Interface (FDDI) and the SAE AE-9B High Speed Ring Bus (HSRB) are emerging standards for high-performance token ring local area networks. FDDI was designed to be a general-purpose high-performance network; HSRB was designed specifically for military real-time applications. A workshop was conducted at NASA Ames Research Center in January 1987 to compare and contrast these protocols with respect to their ability to support real-time applications. This report summarizes the workshop presentations and includes an independent comparison of the two protocols. A conclusion reached at the workshop was that current protocols for the upper layers of the Open Systems Interconnection (OSI) network model are inadequate for real-time applications.
Tremor Frequency Assessment by iPhone® Applications: Correlation with EMG Analysis.
Araújo, Rui; Tábuas-Pereira, Miguel; Almendra, Luciano; Ribeiro, Joana; Arenga, Marta; Negrão, Luis; Matos, Anabela; Morgadinho, Ana; Januário, Cristina
2016-10-19
Tremor frequency analysis is usually performed by EMG studies, but accelerometers are progressively being used more. The iPhone® contains an accelerometer, and many applications claim to be capable of measuring tremor frequency. We tested three applications in twenty-two patients with a diagnosis of PD, ET, and Holmes' tremor. EMG needle assessment as well as accelerometry was performed at the same time. There was very strong correlation (Pearson >0.8, p < 0.001) between the three applications, the EMG needle recordings, and the accelerometry. Our data suggest the apps LiftPulse®, iSeismometer®, and Studymytremor® are a reliable alternative to EMG for tremor frequency assessment.
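Internally, such an app must estimate the dominant frequency from accelerometer samples; a minimal sketch of that step, with a synthetic 5 Hz tremor signal standing in for real iPhone accelerometer data:

```python
import numpy as np

fs = 100.0                                  # assumed accelerometer sample rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
accel = np.sin(2 * np.pi * 5.0 * t) + 0.3 * rng.normal(size=t.size)

spectrum = np.abs(np.fft.rfft(accel - accel.mean()))   # remove DC, then FFT
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
print(f"dominant tremor frequency: {freqs[spectrum.argmax()]:.2f} Hz")  # ~5 Hz
```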
Winkler-Schwartz, Alexander; Bajunaid, Khalid; Mullah, Muhammad A S; Marwa, Ibrahim; Alotaibi, Fahad E; Fares, Jawad; Baggiani, Marta; Azarnoush, Hamed; Zharni, Gmaan Al; Christie, Sommer; Sabbagh, Abdulrahman J; Werthner, Penny; Del Maestro, Rolando F
Current selection methods for neurosurgical residents fail to include objective measurements of bimanual psychomotor performance. Advancements in computer-based simulation provide opportunities to assess cognitive and psychomotor skills in surgically naive populations during complex simulated neurosurgical tasks in risk-free environments. This pilot study was designed to answer 3 questions: (1) What are the differences in bimanual psychomotor performance among neurosurgical residency applicants using NeuroTouch? (2) Are there exceptionally skilled medical students in the applicant cohort? and (3) Is there an influence of previous surgical exposure on surgical performance? Participants were instructed to remove 3 simulated brain tumors with identical visual appearance, stiffness, and random bleeding points. Validated tier 1, tier 2, and advanced tier 2 metrics were used to assess bimanual psychomotor performance. Demographic data included weeks of neurosurgical elective and prior operative exposure. This pilot study was carried out at the McGill Neurosurgical Simulation Research and Training Center immediately following neurosurgical residency interviews at McGill University, Montreal, Canada. All 17 medical students interviewed were asked to participate, of whom 16 agreed. Performances clustered into definable top, middle, and bottom groups, with significant differences for all metrics. Increased time spent playing music, higher applicant self-evaluation of technical skills, high self-ratings of confidence, and an increased number of skin closures statistically influenced performance on univariate analysis. A trend for both self-rated increased operating room confidence and increased weeks of neurosurgical exposure to be associated with increased blood loss was seen in multivariate analysis. Simulation technology identifies neurosurgical residency applicants with differing levels of technical ability. These results provide information for longitudinal studies being developed on the acquisition, development, and maintenance of psychomotor skills. Customized technical-abilities training programs that maximize individual residents' bimanual psychomotor training, dependent on continuously updated and validated metrics from virtual reality simulation studies, should be explored. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
High-Performance Compute Infrastructure in Astronomy: 2020 Is Only Months Away
NASA Astrophysics Data System (ADS)
Berriman, B.; Deelman, E.; Juve, G.; Rynge, M.; Vöckler, J. S.
2012-09-01
By 2020, astronomy will be awash with as much as 60 PB of public data. Full scientific exploitation of such massive volumes of data will require high-performance computing on server farms co-located with the data. Development of this computing model will be a community-wide enterprise that has profound cultural and technical implications. Astronomers must be prepared to develop environment-agnostic applications that support parallel processing. The community must investigate the applicability and cost-benefit of emerging technologies such as cloud computing to astronomy, and must engage the Computer Science community to develop science-driven cyberinfrastructure such as workflow schedulers and optimizers. We report here the results of collaborations between a science center, IPAC, and a Computer Science research institute, ISI. These collaborations may be considered pathfinders in developing a high-performance compute infrastructure in astronomy. They investigated two exemplar large-scale science-driver workflow applications: 1) calculation of an infrared atlas of the Galactic Plane at 18 different wavelengths, by placing data from multiple surveys on a common plate scale and co-registering all the pixels; and 2) calculation of an atlas of periodicities present in the public Kepler data sets, which currently contain 380,000 light curves. These products have been generated with two workflow applications, written in C for performance and designed to support parallel processing on multiple environments and platforms, but with different compute resource needs: the Montage image mosaic engine is I/O-bound, and the NASA Star and Exoplanet Database periodogram code is CPU-bound. Our presentation will report cost and performance metrics and lessons learned for continuing development. Applicability of Cloud Computing: Commercial cloud providers generally charge for all operations, including processing, transfer of input and output data, and storage of data, so the costs of running applications vary widely according to how they use resources. The cloud is well suited to processing CPU-bound (and memory-bound) workflows such as the periodogram code, given the relatively low cost of processing in comparison with I/O operations. I/O-bound applications such as Montage perform best on high-performance clusters with fast networks and parallel file systems. Science-driven Cyberinfrastructure: Montage has been widely used as a driver application to develop workflow management services, such as task scheduling in distributed environments, designing fault-tolerance techniques for job schedulers, and developing workflow orchestration techniques. Running Parallel Applications Across Distributed Cloud Environments: Data processing will eventually take place in parallel, distributed across cyberinfrastructure environments having different architectures. We have used the Pegasus Workflow Management System (WMS) to successfully run applications across three very different environments: TeraGrid, OSG (Open Science Grid), and FutureGrid. Provisioning resources across different grids and clouds (also referred to as Sky Computing) involves establishing a distributed environment in which issues such as remote job submission, data management, and security need to be addressed. This environment also requires building virtual machine images that can run in different environments. Usually, each cloud provides basic images that can be customized with additional software and services.
In most of our work, we provisioned compute resources using a custom application called Wrangler. Pegasus WMS abstracts the architectures of the compute environments away from the end user, and can be considered a first-generation tool suitable for scientists to run their applications on disparate environments.
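The periodogram application itself is written in C and is not reproduced in the abstract. As a hedged illustration of the underlying computation, the sketch below evaluates a Lomb-Scargle periodogram, which handles unevenly sampled light curves, using astropy; the light curve is synthetic and the code is a generic stand-in, not the NASA Star and Exoplanet Database implementation.

    import numpy as np
    from astropy.timeseries import LombScargle

    # Hypothetical light curve: a 2.5-day periodic signal with noise,
    # irregularly sampled as Kepler data can be after gap removal.
    rng = np.random.default_rng(1)
    t = np.sort(rng.uniform(0, 90, 2000))  # days
    flux = (1.0 + 0.01 * np.sin(2 * np.pi * t / 2.5)
            + 0.005 * rng.normal(size=t.size))

    # Lomb-Scargle tolerates uneven sampling; autopower chooses a
    # sensible frequency grid automatically.
    frequency, power = LombScargle(t, flux).autopower()
    best_period = 1.0 / frequency[np.argmax(power)]
    print(best_period)  # ~2.5 days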
Proprioception and Throwing Accuracy in the Dominant Shoulder After Cryotherapy
Wassinger, Craig A; Myers, Joseph B; Gatti, Joseph M; Conley, Kevin M; Lephart, Scott M
2007-01-01
Context: Application of cryotherapy modalities is common after acute shoulder injury and as part of rehabilitation. During athletic events, athletes may return to play after this treatment. The effects of cryotherapy on dominant shoulder proprioception have been assessed, yet the effects on throwing performance are unknown. Objective: To determine the effects of a cryotherapy application on shoulder proprioception and throwing accuracy. Design: Single-group, pretest-posttest control session design. Setting: University-based biomechanics laboratory. Patients or Other Participants: Healthy college-aged subjects (n = 22). Intervention(s): Twenty-minute ice pack application to the dominant shoulder. Main Outcome Measure(s): Active joint position replication, path of joint motion replication, and the Functional Throwing Performance Index. Results: Subjects demonstrated significant increases in deviation for path of joint motion replication when moving from 90° of abduction with 90° of external rotation to 20° of flexion with neutral shoulder rotation after ice pack application. Also, subjects exhibited a decrease in Functional Throwing Performance Index after cryotherapy application. No differences were found in subjects for active joint position replication after cryotherapy application. Conclusions: Proprioception and throwing accuracy were decreased after ice pack application to the shoulder. It is important that clinicians understand the deficits that occur after cryotherapy, as this modality is commonly used following acute injury and during rehabilitation. This information should also be considered when attempting to return an athlete to play after treatment. PMID:17597948
24 CFR 954.104 - Performance thresholds.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 4 2011-04-01 2011-04-01 false Performance thresholds. 954.104... DEVELOPMENT INDIAN HOME PROGRAM Applying for Assistance § 954.104 Performance thresholds. Applicants must have... HOME program must have performed adequately. In cases of previously documented deficient performance...
24 CFR 954.104 - Performance thresholds.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Performance thresholds. 954.104... DEVELOPMENT INDIAN HOME PROGRAM Applying for Assistance § 954.104 Performance thresholds. Applicants must have... HOME program must have performed adequately. In cases of previously documented deficient performance...
Evaluation Of Odors Associated With Land Application Of Biosolids
An odor study was performed at a biosolids application demonstration site using several different gas collection devices and analytical methods to determine changes in air concentration of several organic and inorganic compounds associated with biosolids application over various ...
Task Assignment Heuristics for Distributed CFD Applications
NASA Technical Reports Server (NTRS)
Lopez-Benitez, N.; Djomehri, M. J.; Biswas, R.; Biegel, Bryan (Technical Monitor)
2001-01-01
CFD applications require high-performance computational platforms: (1) complex physics and domain configurations demand strongly coupled solutions; (2) applications are CPU and memory intensive; and (3) huge resource requirements can only be satisfied by teraflop-scale machines or distributed computing.
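The abstract does not identify the specific heuristics studied. As a baseline illustration of the task-assignment problem, the sketch below implements the classic longest-processing-time-first greedy heuristic, assigning each task to the currently least-loaded machine; the per-block costs are hypothetical.

    from heapq import heappush, heappop

    def lpt_assign(task_costs, n_machines):
        """Greedy LPT heuristic: sort tasks by decreasing cost and always
        give the next task to the currently least-loaded machine."""
        heap = [(0.0, m) for m in range(n_machines)]  # (load, machine id)
        assignment = {}
        for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
            load, m = heappop(heap)
            assignment[task] = m
            heappush(heap, (load + cost, m))
        return assignment

    # Hypothetical per-block CFD solver costs (e.g., scaling with grid size).
    costs = {"block0": 8.0, "block1": 5.0, "block2": 5.0, "block3": 3.0}
    print(lpt_assign(costs, 2))
    # e.g. {'block0': 0, 'block1': 1, 'block2': 1, 'block3': 0}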
Transit bus stop pedestrian warning application : acceptance test plan : final report.
DOT National Transportation Integrated Search
2016-10-14
This document is the Acceptance Test Plan for the Transit Bus Stop Pedestrian Warning (TSPW) application. This report describes the test and demonstration plan to verify that the application meets its functional and performance requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... awarding a development grant? (a)(1) In addition to evaluating an application under the selection criteria..., including, but not limited to, the applicant's success in institutionalizing practices developed and...
High performance polymer development
NASA Technical Reports Server (NTRS)
Hergenrother, Paul M.
1991-01-01
The term "high performance," as applied to polymers, is generally associated with polymers that operate at high temperatures; here it describes polymers that perform at temperatures of 177 °C or higher. In addition to temperature, other factors obviously influence the performance of polymers, such as thermal cycling, stress level, and environmental effects. Some recent developments at NASA Langley in polyimides, poly(arylene ethers), and acetylenic-terminated materials are discussed. The high-performance/high-temperature polymers discussed are representative of the type of work underway at NASA Langley Research Center. Further improvement of these materials, as well as the development of new polymers, will provide technology to help meet NASA's future needs in high-performance/high-temperature applications. In addition, because of the combination of properties offered by many of these polymers, they should find use in many other applications.
Performance and scalability evaluation of "Big Memory" on Blue Gene Linux.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoshii, K.; Iskra, K.; Naik, H.
2011-05-01
We address memory performance issues observed in Blue Gene Linux and discuss the design and implementation of 'Big Memory' - an alternative, transparent memory space introduced to eliminate the memory performance issues. We evaluate the performance of Big Memory using custom memory benchmarks, NAS Parallel Benchmarks, and the Parallel Ocean Program, at a scale of up to 4,096 nodes. We find that Big Memory successfully resolves the performance issues normally encountered in Blue Gene Linux. For the ocean simulation program, we even find that Linux with Big Memory provides better scalability than does the lightweight compute node kernel designed solely for high-performance applications. Originally intended exclusively for compute node tasks, our new memory subsystem dramatically improves the performance of certain I/O node applications as well. We demonstrate this performance using the central processor of the LOw Frequency ARray radio telescope as an example.
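The custom memory benchmarks are not specified in the abstract. The sketch below is a hypothetical stand-in for the kind of measurement involved: the effective bandwidth of a bulk array copy, here with numpy; the buffer size and repeat count are illustrative.

    import time
    import numpy as np

    def copy_bandwidth_gib_s(n_bytes=1 << 30, repeats=5):
        """Measure effective memory bandwidth of a bulk array copy."""
        src = np.ones(n_bytes, dtype=np.uint8)
        dst = np.empty_like(src)
        best = float("inf")
        for _ in range(repeats):
            t0 = time.perf_counter()
            np.copyto(dst, src)
            best = min(best, time.perf_counter() - t0)
        # Each copy reads and writes n_bytes once.
        return 2 * n_bytes / best / 2**30

    print(f"{copy_bandwidth_gib_s():.1f} GiB/s")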
Moore, Eric J; Price, Daniel L; Van Abel, Kathryn M; Carlson, Matthew L
2015-02-01
Application to otolaryngology-head and neck surgery residency is highly competitive, and the interview process strives to select qualified applicants with a high aptitude for the specialty. Commonly employed criteria for applicant selection have failed to show correlation with proficiency during residency training. We evaluate the correlation between the results of a surgical aptitude test administered to otolaryngology resident applicants and their performance during residency. Retrospective study at an academic otolaryngology-head and neck surgery residency program. Between 2007 and 2013, 224 resident applicants participated in a previously described surgical aptitude test administered at a microvascular surgical station. The composite and attitudinal scores for the 24 consecutive residents who matched at our institution were recorded, and their residency performance was analyzed by faculty survey on a five-point scale. The composite and attitudinal scores were analyzed for correlation with the residency performance score by regression analysis. The 24 residents were evaluated for overall quality as a clinician by eight faculty members who were blinded to the results of surgical aptitude testing. The results of these surveys showed good inter-rater reliability. Both the overall aptitude test score and the attitudinal subscore showed reliability in predicting performance during residency training. The goal of the residency selection process is to evaluate the candidate's potential for success in residency and beyond. The results of this study suggest that a simple-to-administer clinical skills test may have predictive value for success in residency and clinician quality. Level of evidence: 4. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.
Comparison of dry-textile electrodes for electrical bioimpedance spectroscopy measurements
NASA Astrophysics Data System (ADS)
Márquez, J. C.; Seoane, F.; Välimäki, E.; Lindecrantz, K.
2010-04-01
Textile electrodes have been widely studied for biopotential recordings, especially for monitoring cardiac activity. Commercially available applications, such as the Adistar T-shirt and the Textronics Cardioshirt, have shown good performance for heart rate monitoring and are available worldwide. Textile technology can also be used for electrical bioimpedance spectroscopy (EBI) measurements, enabling home and personalized health monitoring applications; however, solid research on the measurement performance of the electrodes must be done before any textile-enabled EBI application is developed. In this work, the measurement performance of two different types of dry-textile electrodes from different manufacturers has been compared against standardized RedDot 3M Ag/AgCl electrolytic electrodes. 4-electrode, whole-body, ankle-to-wrist EBI measurements were taken with the Impedimed SFB7 spectrometer from healthy subjects in the frequency range of 3 kHz to 500 kHz. Measurements were taken with dry electrodes at different times to study the influence of the skin-electrode interface on the EBI measurements. The analysis of the obtained complex EBI spectra shows that the measurements performed with textile electrodes produce constant and reliable EBI spectra. A certain deviation can be observed at higher frequencies, and the measurements obtained with the Textronics and Ag/AgCl electrodes show closer agreement. Textile technology, if successfully integrated, may enable EBI measurements in new scenarios, opening the way for novel wearable monitoring applications for home and personal care as well as car safety.
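The spectral analysis is not detailed in the abstract. Complex EBI spectra are conventionally summarized by fitting the Cole model, Z(f) = R_inf + (R0 - R_inf) / (1 + (j·2πf·τ)^α); the sketch below fits these four parameters with scipy on a synthetic spectrum spanning the study's 3 kHz to 500 kHz range. All data and starting values are illustrative.

    import numpy as np
    from scipy.optimize import least_squares

    def cole_z(f, r0, rinf, tau, alpha):
        """Cole model impedance at frequencies f (Hz)."""
        return rinf + (r0 - rinf) / (1 + (1j * 2 * np.pi * f * tau) ** alpha)

    def fit_cole(f, z_measured):
        """Fit Cole parameters by stacking real and imaginary residuals."""
        def residuals(p):
            z = cole_z(f, *p)
            return np.concatenate([(z - z_measured).real,
                                   (z - z_measured).imag])
        p0 = [np.abs(z_measured[0]), np.abs(z_measured[-1]), 1e-6, 0.7]
        return least_squares(residuals, p0,
                             bounds=([0, 0, 1e-9, 0.3],
                                     [2e3, 2e3, 1e-3, 1.0])).x

    # Synthetic spectrum over the 3 kHz - 500 kHz measurement range.
    f = np.logspace(np.log10(3e3), np.log10(5e5), 50)
    z = cole_z(f, r0=500.0, rinf=300.0, tau=2e-6, alpha=0.75)
    print(fit_cole(f, z))  # ~[500, 300, 2e-6, 0.75]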
The Centre of High-Performance Scientific Computing, Geoverbund, ABC/J - Geosciences enabled by HPSC
NASA Astrophysics Data System (ADS)
Kollet, Stefan; Görgen, Klaus; Vereecken, Harry; Gasper, Fabian; Hendricks-Franssen, Harrie-Jan; Keune, Jessica; Kulkarni, Ketan; Kurtz, Wolfgang; Sharples, Wendy; Shrestha, Prabhakar; Simmer, Clemens; Sulis, Mauro; Vanderborght, Jan
2016-04-01
The Centre of High-Performance Scientific Computing (HPSC TerrSys) was founded in 2011 to establish a centre of competence in high-performance scientific computing for terrestrial systems and the geosciences, enabling fundamental and applied geoscientific research in the Geoverbund ABC/J (the geoscientific research alliance of the Universities of Aachen, Cologne, and Bonn and the Research Centre Jülich, Germany). The specific goals of HPSC TerrSys are to achieve relevance at the national and international level in (i) the development and application of HPSC technologies in the geoscientific community; (ii) student education; (iii) HPSC services and support, also for the wider geoscientific community; and (iv) the industry and public sectors via, e.g., useful applications and data products. A key feature of HPSC TerrSys is the Simulation Laboratory Terrestrial Systems, which is located at the Jülich Supercomputing Centre (JSC) and provides extensive capabilities with respect to porting, profiling, tuning, and performance monitoring of geoscientific software in JSC's supercomputing environment. We will present a summary of success stories of HPSC applications, including integrated terrestrial model development, parallel profiling and its application from watersheds to the continent, massively parallel data assimilation using physics-based models and ensemble methods, quasi-operational terrestrial water and energy monitoring, and convection-permitting climate simulations over Europe. These success stories stress the need for formalized education of students in the application of HPSC technologies in the future.