Precise and Efficient Static Array Bound Checking for Large Embedded C Programs
NASA Technical Reports Server (NTRS)
Venet, Arnaud
2004-01-01
In this paper we describe the design and implementation of a static array-bound checker for a family of embedded programs: the flight control software of recent Mars missions. These codes are large (up to 250 KLOC), pointer-intensive, heavily multithreaded, and written in an object-oriented style, which makes their analysis very challenging. We designed a tool called C Global Surveyor (CGS) that can analyze the largest code in a couple of hours with a precision of 80%. The scalability and precision of the analyzer are achieved by using an incremental framework in which a pointer analysis and a numerical analysis of array indices mutually refine each other. CGS has been designed so that it can distribute the analysis over several processors in a cluster of machines. To the best of our knowledge this is the first distributed implementation of static analysis algorithms. Throughout the paper we will discuss the scalability setbacks that we encountered during the construction of the tool and their impact on the initial design decisions.
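The core idea behind array-bound checking by abstract interpretation can be sketched with a small interval domain. This is an illustrative toy, not CGS's actual algorithm; the class and function names are hypothetical.

```python
# Hedged sketch: interval abstract domain for array-bound checking.
# Not CGS's implementation; names and rules are illustrative only.

class Interval:
    """Abstract value tracking the range [lo, hi] of an integer variable."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Interval addition: bounds add component-wise.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def join(self, other):
        # Least upper bound, used to merge control-flow branches.
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

def check_access(index: Interval, array_len: int) -> str:
    """Classify an access a[i] given the interval computed for i."""
    if index.lo >= 0 and index.hi < array_len:
        return "safe"
    if index.hi < 0 or index.lo >= array_len:
        return "always out of bounds"
    return "possibly out of bounds"  # imprecision: needs refinement or review

# i ranges over [0, 9] after a loop; a has 10 elements.
i = Interval(0, 9)
print(check_access(i, 10))                    # safe
print(check_access(i + Interval(1, 1), 10))   # possibly out of bounds
```

The "possibly out of bounds" verdicts are where the mutual refinement between the pointer and numerical analyses described above would pay off: tighter intervals turn warnings into proofs of safety.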
Automation of multi-agent control for complex dynamic systems in heterogeneous computational network
NASA Astrophysics Data System (ADS)
Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan
2017-01-01
The rapid progress of high-performance computing entails new challenges related to solving large scientific problems in various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving. Effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue, and control systems that operate in networks are especially affected by it. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. Our approach is founded on the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automating problem solving. The advantages of the proposed approach are demonstrated on an example: the parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for solving this problem include automated multi-agent control of such systems in parallel at various degrees of detail.
Precise and Scalable Static Program Analysis of NASA Flight Software
NASA Technical Reports Server (NTRS)
Brat, G.; Venet, A.
2005-01-01
Recent NASA mission failures (e.g., Mars Polar Lander and Mars Climate Orbiter) illustrate the importance of having an efficient verification and validation process for such systems. One software error, as simple as it may be, can cause the loss of an expensive mission or lead to budget overruns and crunched schedules. Unfortunately, traditional verification methods cannot guarantee the absence of errors in software systems. Therefore, we have developed the CGS static program analysis tool, which can exhaustively analyze large C programs. CGS analyzes the source code and identifies statements in which arrays are accessed out of bounds or pointers are used outside the memory region they should address. This paper gives a high-level description of CGS and its theoretical foundations. It also reports on the use of CGS on real NASA software systems used in Mars missions (from Mars Pathfinder to Mars Exploration Rover) and on the International Space Station.
Energy-absorption capability and scalability of square cross section composite tube specimens
NASA Technical Reports Server (NTRS)
Farley, Gary L.
1987-01-01
Static crushing tests were conducted on graphite/epoxy and Kevlar/epoxy square cross section tubes to study the influence of specimen geometry on the energy-absorption capability and scalability of composite materials. The tube inside width-to-wall thickness (W/t) ratio was determined to significantly affect the energy-absorption capability of composite materials. As W/t ratio decreases, the energy-absorption capability increases nonlinearly. The energy-absorption capability of Kevlar/epoxy tubes was found to be geometrically scalable, but the energy-absorption capability of graphite/epoxy tubes was not geometrically scalable.
Algorithmic Coordination in Robotic Networks
2010-11-29
…we envision designing and analyzing algorithms with appropriate performance, robustness, and scalability properties for various task allocation, surveillance, and information-gathering applications. …distributed algorithms for target assignments; based on the classic auction algorithms in static networks, we intend to design efficient algorithms in worst…
NASA Technical Reports Server (NTRS)
Aiken, Alexander
2001-01-01
The Scalable Analysis Toolkit (SAT) project aimed to demonstrate that it is feasible and useful to statically detect software bugs in very large systems. The technical focus of the project was on a relatively new class of constraint-based techniques for software analysis, where the desired facts about programs (e.g., the presence of a particular bug) are phrased as constraint problems to be solved. At the beginning of this project, the most successful forms of formal software analysis were limited forms of automatic theorem proving (as exemplified by the analyses used in language type systems and optimizing compilers), semi-automatic theorem proving for full verification, and model checking. With a few notable exceptions, these approaches had not been demonstrated to scale to software systems of even 50,000 lines of code. Realistic approaches to large-scale software analysis cannot hope to make every conceivable formal method scale. Thus, the SAT approach is to mix different methods in one application by using coarse and fast but still adequate methods at the largest scales, and reserving the use of more precise but also more expensive methods at smaller scales for critical aspects (that is, aspects critical to the analysis problem under consideration) of a software system. The principled method proposed for combining a heterogeneous collection of formal systems with different scalability characteristics is mixed constraints. This idea had been used previously in small-scale applications with encouraging results: using mostly coarse methods and narrowly targeted precise methods, useful information (meaning the discovery of bugs in real programs) was obtained with excellent scalability.
IKOS: A Framework for Static Analysis based on Abstract Interpretation (Tool Paper)
NASA Technical Reports Server (NTRS)
Brat, Guillaume P.; Laserna, Jorge A.; Shi, Nija; Venet, Arnaud Jean
2014-01-01
The RTCA standard (DO-178C) for developing avionic software and getting certification credits includes an extension (DO-333) that describes how developers can use static analysis in certification. In this paper, we give an overview of the IKOS static analysis framework, which helps in developing static analyses that are both precise and scalable. IKOS harnesses the power of Abstract Interpretation and makes it accessible to a larger class of static analysis developers by separating concerns such as code parsing, model development, abstract domain management, results management, and analysis strategy. The benefits of the approach are demonstrated by a buffer overflow analysis applied to flight control systems.
Monitoring Data-Structure Evolution in Distributed Message-Passing Programs
NASA Technical Reports Server (NTRS)
Sarukkai, Sekhar R.; Beers, Andrew; Woodrow, Thomas S. (Technical Monitor)
1996-01-01
Monitoring the evolution of data structures in parallel and distributed programs is critical for debugging their semantics and performance. However, the current state of the art in tracking and presenting data-structure information in parallel and distributed environments is cumbersome and does not scale. In this paper we present a methodology that automatically tracks memory bindings (not the actual contents) of static and dynamic data structures of message-passing C programs using PVM. With the help of a number of examples we show that, in addition to determining the impact of memory allocation overheads on program performance, graphical views can help in debugging the semantics of program execution. Scalable animations of virtual address bindings of source-level data structures are used for debugging the semantics of parallel programs across all processors. In conjunction with lightweight core files, this technique can be used to complement traditional debuggers on single processors. Detailed information (such as data-structure contents) on specific nodes can be determined using traditional debuggers after the data-structure evolution leading to the semantic error is observed graphically.
Khurshid, Madiha; Mulet-Sierra, Aillette; Adesida, Adetola; Sen, Arindom
2018-03-01
Osteoarthritis (OA) is a painful disease, characterized by progressive surface erosion of articular cartilage. The use of human articular chondrocytes (hACs) sourced from OA patients has been proposed as a potential therapy for cartilage repair, but this approach is limited by the lack of scalable methods to produce clinically relevant quantities of cartilage-generating cells. Previous studies in static culture have shown that hACs co-cultured with human mesenchymal stem cells (hMSCs) as 3D pellets can upregulate proliferation and generate neocartilage with enhanced functional matrix formation relative to that produced from either cell type alone. However, because static culture flasks are not readily amenable to scale up, scalable suspension bioreactors were investigated to determine if they could support the co-culture of hMSCs and OA hACs under serum-free conditions to facilitate clinical translation of this approach. When hACs and hMSCs (1:3 ratio) were inoculated at 20,000 cells/ml into 125-ml suspension bioreactors and fed weekly, they spontaneously formed 3D aggregates and proliferated, resulting in a 4.75-fold increase over 16 days. Whereas the apparent growth rate was lower than that achieved during co-culture as a 2D monolayer in static culture flasks, bioreactor co-culture as 3D aggregates resulted in a significantly lower collagen I to II mRNA expression ratio and more than double the glycosaminoglycan/DNA content (5.8 vs. 2.5 μg/μg). The proliferation of hMSCs and hACs as 3D aggregates in serum-free suspension culture demonstrates that scalable bioreactors represent an accessible platform capable of supporting the generation of clinical quantities of cells for use in cell-based cartilage repair. Copyright © 2017 John Wiley & Sons, Ltd.
Aagaard, Brad T.; Knepley, M.G.; Williams, C.A.
2013-01-01
We employ a domain decomposition approach with Lagrange multipliers to implement fault slip in a finite-element code, PyLith, for use in both quasi-static and dynamic crustal deformation applications. This integrated approach to solving both quasi-static and dynamic simulations leverages common finite-element data structures and implementations of various boundary conditions, discretization schemes, and bulk and fault rheologies. We have developed a custom preconditioner for the Lagrange multiplier portion of the system of equations that provides excellent scalability with problem size compared to conventional additive Schwarz methods. We demonstrate application of this approach using benchmarks for both quasi-static viscoelastic deformation and dynamic spontaneous rupture propagation that verify the numerical implementation in PyLith.
NASA Astrophysics Data System (ADS)
Mapakshi, N. K.; Chang, J.; Nakshatrala, K. B.
2018-04-01
Mathematical models for flow through porous media typically enjoy the so-called maximum principles, which place bounds on the pressure field. It is highly desirable to preserve these bounds on the pressure field in predictive numerical simulations, that is, one needs to satisfy discrete maximum principles (DMP). Unfortunately, many of the existing formulations for flow through porous media models do not satisfy DMP. This paper presents a robust, scalable numerical formulation based on variational inequalities (VI), to model non-linear flows through heterogeneous, anisotropic porous media without violating DMP. VI is an optimization technique that places bounds on the numerical solutions of partial differential equations. To crystallize the ideas, a modification to Darcy equations by taking into account pressure-dependent viscosity will be discretized using the lowest-order Raviart-Thomas (RT0) and Variational Multi-scale (VMS) finite element formulations. It will be shown that these formulations violate DMP, and, in fact, these violations increase with an increase in anisotropy. It will be shown that the proposed VI-based formulation provides a viable route to enforce DMP. Moreover, it will be shown that the proposed formulation is scalable, and can work with any numerical discretization and weak form. A series of numerical benchmark problems are solved to demonstrate the effects of heterogeneity, anisotropy and non-linearity on DMP violations under the two chosen formulations (RT0 and VMS), and that of non-linearity on solver convergence for the proposed VI-based formulation. Parallel scalability on modern computational platforms will be illustrated through strong-scaling studies, which will prove the efficiency of the proposed formulation in a parallel setting. Algorithmic scalability as the problem size is scaled up will be demonstrated through novel static-scaling studies. 
The performed static-scaling studies can serve as a guide for users to be able to select an appropriate discretization for a given problem size.
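The mechanics of enforcing a discrete maximum principle via a bound-constrained solve can be illustrated on a tiny 1D problem. This is a toy sketch, not the paper's RT0/VMS formulation: the discretization, the sink term, and the use of `scipy.optimize.lsq_linear` as a stand-in for the variational-inequality solver are all assumptions.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Toy 1D diffusion discretization (standard 3-point stencil) whose pressure
# should physically satisfy 0 <= p <= 1.  All choices here are illustrative.
n = 20
h = 1.0 / (n + 1)
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 2.0 / (h * h)
    if i > 0:
        A[i, i - 1] = -1.0 / (h * h)
    if i < n - 1:
        A[i, i + 1] = -1.0 / (h * h)
b = np.zeros(n)
b[0] = 1.0 / (h * h)   # Dirichlet boundary p(0) = 1 (and p(1) = 0)
b[10] = -50.0          # interior sink that drags the discrete solution negative

# Plain linear solve: violates the lower bound p >= 0.
p_unconstrained = np.linalg.solve(A, b)

# Bound-constrained least squares, standing in for the VI formulation:
# minimize ||A p - b|| subject to 0 <= p <= 1, so bounds hold by construction.
res = lsq_linear(A, b, bounds=(0.0, 1.0))
p_vi = res.x
print(p_unconstrained.min(), p_vi.min())
```

The unconstrained solution dips below zero near the sink (a DMP violation of exactly the kind the abstract describes), while the bounded solve cannot, which is the essential point of the VI approach.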
3D-printed conductive static mixers enable all-vanadium redox flow battery using slurry electrodes
NASA Astrophysics Data System (ADS)
Percin, Korcan; Rommerskirchen, Alexandra; Sengpiel, Robert; Gendel, Youri; Wessling, Matthias
2018-03-01
State-of-the-art all-vanadium redox flow batteries employ porous carbonaceous materials as electrodes. The battery cells possess non-scalable fixed electrodes inserted into a cell stack. In contrast, a conductive particle network dispersed in the electrolyte, known as a slurry electrode, may be beneficial for a scalable redox flow battery. In this work, slurry electrodes are successfully introduced to an all-vanadium redox flow battery. Activated carbon and graphite powder particles are dispersed up to 20 wt% in the vanadium electrolyte and charge-discharge behavior is inspected via polarization studies. Graphite powder slurry is superior to activated carbon, with a polarization behavior closer to the standard graphite felt electrodes. 3D-printed conductive static mixers introduced to the slurry channel improve the charge transfer via intensified slurry mixing and increased surface area. Consequently, a significant increase in the coulombic efficiency up to 95% and energy efficiency up to 65% is obtained. Our results show that slurry electrodes supported by conductive static mixers can be competitive with state-of-the-art electrodes, yielding an additional degree of freedom in battery design. Research into carbon properties (particle size, internal surface area, pore size distribution) tailored to the electrolyte system and optimization of the mixer geometry may yield even better battery properties.
Scalable isosurface visualization of massive datasets on commodity off-the-shelf clusters
Bajaj, Chandrajit
2009-01-01
Tomographic imaging and computer simulations are increasingly yielding massive datasets. Interactive and exploratory visualizations have rapidly become indispensable tools to study large volumetric imaging and simulation data. Our scalable isosurface visualization framework on commodity off-the-shelf clusters is an end-to-end parallel and progressive platform, from initial data access to the final display. Interactive browsing of extracted isosurfaces is made possible by using parallel isosurface extraction and rendering in conjunction with a new specialized piece of image-compositing hardware called the Metabuffer. In this paper, we focus on the back-end scalability by introducing a fully parallel and out-of-core isosurface extraction algorithm. It achieves scalability by using both parallel and out-of-core processing and parallel disks. It statically partitions the volume data to parallel disks with a balanced workload spectrum, and builds I/O-optimal external interval trees to minimize the number of I/O operations of loading large data from disk. We also describe an isosurface compression scheme that is efficient for progressive extraction, transmission, and storage of isosurfaces. PMID:19756231
Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array
NASA Astrophysics Data System (ADS)
Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul
2008-04-01
This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
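The work-farm pattern described above (a parallel set of worker objects with one input stream and one output stream) can be sketched in a few lines. This is an illustrative host-side sketch, not the MPPA toolchain; the function names and the per-item kernel are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of a "work farm": one input stream fans out to a homogeneous set of
# workers, and results are gathered into one ordered output stream.

def worker(item):
    # Stand-in for a per-block kernel (e.g., a compression or filter stage).
    return item * item

def work_farm(stream, n_workers=4):
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        # Executor.map preserves input order in the output stream, matching
        # the single-in / single-out contract of the pattern.
        return list(pool.map(worker, stream))

print(work_farm(range(5)))  # -> [0, 1, 4, 9, 16]
```

A heterogeneous farm, as mentioned in the abstract, would simply dispatch items to different worker functions while keeping the same stream interface.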
A TCP/IP framework for ethernet-based measurement, control and experiment data distribution
NASA Astrophysics Data System (ADS)
Ocaya, R. O.; Minny, J.
2010-11-01
A complete modular but scalable TCP/IP based scientific instrument control and data distribution system has been designed and realized. The system features an IEEE 802.3 compliant 10 Mbps Medium Access Controller (MAC) and Physical Layer Device that is suitable for the full-duplex monitoring and control of various physically widespread measurement transducers in the presence of a local network infrastructure. The cumbersomeness of exchanging and synchronizing data between the various transducer units using physical storage media led to the choice of TCP/IP as a logical alternative. The system and methods developed are scalable for broader usage over the Internet. The system comprises a PIC18f2620 and ENC28j60 based hardware and a software component written in C, Java/Javascript and Visual Basic.NET programming languages for event-level monitoring and browser user-interfaces respectively. The system exchanges data with the host network through IPv4 packets requested and received on a HTTP page. It also responds to ICMP echo, UDP and ARP requests through a user selectable integrated DHCP and static IPv4 address allocation scheme. The round-trip time, throughput and polling frequency are estimated and reported. A typical application to temperature monitoring and logging is also presented.
Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing
2012-12-14
Matei Zaharia; Tathagata Das; Haoyuan Li; Timothy Hunter; Scott Shenker; Ion…
…current programming models for distributed stream processing are relatively low-level, often leaving the user to worry about consistency of…
Stylized facts in social networks: Community-based static modeling
NASA Astrophysics Data System (ADS)
Jo, Hang-Hyun; Murase, Yohsuke; Török, János; Kertész, János; Kaski, Kimmo
2018-06-01
Past analyses of social-network datasets have enabled a number of empirical findings about human society, commonly featured as stylized facts of social networks, such as broad distributions of network quantities, the existence of communities, assortative mixing, and intensity-topology correlations. Since the understanding of the structure of these complex social networks is far from complete, more comprehensive datasets and models of the stylized facts are needed for deeper insight into human society. Although the existing dynamical and static models can generate some stylized facts, here we take an alternative approach by devising a community-based static model with heterogeneous community sizes, in which larger communities have smaller link density and weight. With these few assumptions we are able to generate realistic social networks that show most stylized facts for a wide range of parameters, as demonstrated numerically and analytically. Since our community-based static model is simple to implement and easily scalable, it can be used as a reference system, benchmark, or testbed for further applications.
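The two assumptions above (heterogeneous community sizes; larger communities are sparser) are enough to sketch a generator. This is a minimal illustration, not the authors' model: the size distribution, the density rule, and all parameter values are assumptions.

```python
import random

# Minimal community-based static model sketch: draw broad community sizes,
# then wire each community with a link density that decays with its size.
# Parameter choices are illustrative, not from the paper.

def generate_network(n_communities=50, density_exponent=1.0, seed=42):
    rng = random.Random(seed)
    edges, node = set(), 0
    for _ in range(n_communities):
        # Pareto draw gives a broad (heavy-tailed) community-size distribution.
        size = min(2 + int(rng.paretovariate(2.0)), 50)
        members = list(range(node, node + size))
        node += size
        # Larger community => smaller link density, per the model's assumption.
        p = min(1.0, 4.0 * size ** -density_exponent)
        for i, u in enumerate(members):
            for v in members[i + 1:]:
                if rng.random() < p:
                    edges.add((u, v))
    return node, edges

n_nodes, edges = generate_network()
print(n_nodes, len(edges))
```

Overlapping community memberships and link weights, which the full model needs for intensity-topology correlations, are omitted here for brevity.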
Field-Programmable Gate Array Computer in Structural Analysis: An Initial Exploration
NASA Technical Reports Server (NTRS)
Singleterry, Robert C., Jr.; Sobieszczanski-Sobieski, Jaroslaw; Brown, Samuel
2002-01-01
This paper reports on an initial assessment of using a Field-Programmable Gate Array (FPGA) computational device as a new tool for solving structural mechanics problems. A FPGA is an assemblage of binary gates arranged in logical blocks that are interconnected via software in a manner dependent on the algorithm being implemented and can be reprogrammed thousands of times per second. In effect, this creates a computer specialized for the problem that automatically exploits all the potential for parallel computing intrinsic in an algorithm. This inherent parallelism is the most important feature of the FPGA computational environment. It is therefore important that if a problem offers a choice of different solution algorithms, an algorithm of a higher degree of inherent parallelism should be selected. It is found that in structural analysis, an 'analog computer' style of programming, which solves problems by direct simulation of the terms in the governing differential equations, yields a more favorable solution algorithm than current solution methods. This style of programming is facilitated by a 'drag-and-drop' graphic programming language that is supplied with the particular type of FPGA computer reported in this paper. Simple examples in structural dynamics and statics illustrate the solution approach used. The FPGA system also allows linear scalability in computing capability. As the problem grows, the number of FPGA chips can be increased with no loss of computing efficiency due to data flow or algorithmic latency that occurs when a single problem is distributed among many conventional processors that operate in parallel. This initial assessment finds the FPGA hardware and software to be in their infancy in regard to the user conveniences; however, they have enormous potential for shrinking the elapsed time of structural analysis solutions if programmed with algorithms that exhibit inherent parallelism and linear scalability. 
This potential warrants further development of FPGA-tailored algorithms for structural analysis.
Parkison, Steven A.; Carlson, Jay D.; Chaudoin, Tammy R.; Hoke, Traci A.; Schenk, A. Katrin; Goulding, Evan H.; Pérez, Lance C.; Bonasera, Stephen J.
2016-01-01
Inexpensive, high-throughput, low maintenance systems for precise temporal and spatial measurement of mouse home cage behavior (including movement, feeding, and drinking) are required to evaluate products from large scale pharmaceutical design and genetic lesion programs. These measurements are also required to interpret results from more focused behavioral assays. We describe the design and validation of a highly-scalable, reliable mouse home cage behavioral monitoring system modeled on a previously described, one-of-a-kind system [1]. Mouse position was determined by solving static equilibrium equations describing the force and torques acting on the system strain gauges; feeding events were detected by a photobeam across the food hopper, and drinking events were detected by a capacitive lick sensor. Validation studies show excellent agreement between mouse position and drinking events measured by the system compared with video-based observation – a gold standard in neuroscience. PMID:23366406
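The static-equilibrium position solve mentioned above reduces, for a platform read by load cells at known points, to a torque balance: the load point is the force-weighted centroid of the gauge positions. The gauge layout below is an assumption for illustration, not the paper's exact hardware geometry.

```python
# Hedged sketch: recover 2D position from corner strain gauges by static
# equilibrium (sum of forces and torques = 0).  Corner layout is hypothetical.

def position_from_gauges(forces, positions):
    """forces: vertical reaction at each gauge; positions: (x, y) of each gauge.
    Torque balance about the x- and y-axes gives the load point as the
    force-weighted centroid of the gauge positions."""
    total = sum(forces)
    x = sum(f * p[0] for f, p in zip(forces, positions)) / total
    y = sum(f * p[1] for f, p in zip(forces, positions)) / total
    return x, y

corners = [(0, 0), (30, 0), (0, 30), (30, 30)]  # illustrative cage floor, cm
# A mouse standing nearer the (30, 30) corner loads that gauge most:
print(position_from_gauges([2.0, 5.5, 5.5, 12.0], corners))  # -> (21.0, 21.0)
```

In the real system the same equations are solved continuously, which is what gives the precise temporal and spatial movement record validated against video observation.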
Declarative language design for interactive visualization.
Heer, Jeffrey; Bostock, Michael
2010-01-01
We investigate the design of declarative, domain-specific languages for constructing interactive visualizations. By separating specification from execution, declarative languages can simplify development, enable unobtrusive optimization, and support retargeting across platforms. We describe the design of the Protovis specification language and its implementation within an object-oriented, statically-typed programming language (Java). We demonstrate how to support rich visualizations without requiring a toolkit-specific data model and extend Protovis to enable declarative specification of animated transitions. To support cross-platform deployment, we introduce rendering and event-handling infrastructures decoupled from the runtime platform, letting designers retarget visualization specifications (e.g., from desktop to mobile phone) with reduced effort. We also explore optimizations such as runtime compilation of visualization specifications, parallelized execution, and hardware-accelerated rendering. We present benchmark studies measuring the performance gains provided by these optimizations and compare performance to existing Java-based visualization tools, demonstrating scalability improvements exceeding an order of magnitude.
Scalability in Distance Education: "Can We Have Our Cake and Eat It Too?"
ERIC Educational Resources Information Center
Laws, R. Dwight; Howell, Scott L.; Lindsay, Nathan K.
2003-01-01
The decision to increase distance education enrollment hinges on the factors of pedagogical effectiveness, interactivity, audience, faculty incentives, retention, program type, and profitability. A complex interplay exists among these scalability concerns (i.e., issues related to meeting the growing enrollment demand), and any program's approach…
Directed Incremental Symbolic Execution
NASA Technical Reports Server (NTRS)
Person, Suzette; Yang, Guowei; Rungta, Neha; Khurshid, Sarfraz
2011-01-01
The last few years have seen a resurgence of interest in the use of symbolic execution -- a program analysis technique developed more than three decades ago to analyze program execution paths. Scaling symbolic execution and other path-sensitive analysis techniques to large systems remains challenging despite recent algorithmic and technological advances. An alternative to solving the problem of scalability is to reduce the scope of the analysis. One approach that is widely studied in the context of regression analysis is to analyze the differences between two related program versions. While such an approach is intuitive in theory, finding efficient and precise ways to identify program differences, and characterize their effects on how the program executes has proved challenging in practice. In this paper, we present Directed Incremental Symbolic Execution (DiSE), a novel technique for detecting and characterizing the effects of program changes. The novelty of DiSE is to combine the efficiencies of static analysis techniques to compute program difference information with the precision of symbolic execution to explore program execution paths and generate path conditions affected by the differences. DiSE is a complementary technique to other reduction or bounding techniques developed to improve symbolic execution. Furthermore, DiSE does not require analysis results to be carried forward as the software evolves -- only the source code for two related program versions is required. A case-study of our implementation of DiSE illustrates its effectiveness at detecting and characterizing the effects of program changes.
Supporting Adaptive Ubiquitous Applications With the Solar System
2001-05-31
…stackable operators to manage ubiquitous information sources. After developing a set of diverse adaptive applications, we expect to identify fun…performance. Solar provides flexibility by allowing applications to define and interconnect operator objects. Solar provides scalability by dis…children by publishing events. (Static directory nodes are sources and dynamic directory nodes are operators.) Alias nodes are publishers that announce…
A Numerical Study of Scalable Cardiac Electro-Mechanical Solvers on HPC Architectures
Colli Franzone, Piero; Pavarino, Luca F.; Scacchi, Simone
2018-01-01
We introduce and study some scalable domain decomposition preconditioners for cardiac electro-mechanical 3D simulations on parallel HPC (High Performance Computing) architectures. The electro-mechanical model of the cardiac tissue is composed of four coupled sub-models: (1) the static finite elasticity equations for the transversely isotropic deformation of the cardiac tissue; (2) the active tension model describing the dynamics of the intracellular calcium, cross-bridge binding and myofilament tension; (3) the anisotropic Bidomain model describing the evolution of the intra- and extra-cellular potentials in the deforming cardiac tissue; and (4) the ionic membrane model describing the dynamics of ionic currents, gating variables, ionic concentrations and stretch-activated channels. This strongly coupled electro-mechanical model is discretized in time with a splitting semi-implicit technique and in space with isoparametric finite elements. The resulting scalable parallel solver is based on Multilevel Additive Schwarz preconditioners for the solution of the Bidomain system and on BDDC preconditioned Newton-Krylov solvers for the non-linear finite elasticity system. The results of several 3D parallel simulations show the scalability of both linear and non-linear solvers and their application to the study of both physiological excitation-contraction cardiac dynamics and re-entrant waves in the presence of different mechano-electrical feedbacks. PMID:29674971
Parallel scalability of Hartree-Fock calculations
NASA Astrophysics Data System (ADS)
Chow, Edmond; Liu, Xing; Smelyanskiy, Mikhail; Hammond, Jeff R.
2015-03-01
Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree-Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
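The static-partitioning-plus-work-stealing scheme for Fock matrix construction can be sketched with a simple simulation. This is an illustrative model of the scheduling idea only, not the paper's distributed implementation; the block partition and stealing policy are assumptions.

```python
from collections import deque

# Sketch: tasks are first statically block-partitioned across "nodes"; a node
# that runs dry steals one task from the tail of the fullest remaining queue.

def run_with_stealing(tasks, n_nodes, cost):
    queues = [deque() for _ in range(n_nodes)]
    for i, t in enumerate(tasks):                 # static block partition
        queues[i * n_nodes // len(tasks)].append(t)
    done = [[] for _ in range(n_nodes)]
    loads = [0.0] * n_nodes
    while any(queues):                            # simulated execution rounds
        for n in range(n_nodes):
            if not queues[n]:
                victim = max(range(n_nodes), key=lambda v: len(queues[v]))
                if queues[victim]:
                    queues[n].append(queues[victim].pop())  # steal from tail
            if queues[n]:
                t = queues[n].popleft()
                done[n].append(t)
                loads[n] += cost(t)
    return done, loads

# Uneven per-task cost mimics the irregular shell-quartet work in Fock builds.
done, loads = run_with_stealing(list(range(12)), n_nodes=3,
                                cost=lambda t: 1 + t % 4)
assert sorted(t for d in done for t in d) == list(range(12))  # nothing lost
```

Stealing from the tail of the victim's queue preserves the locality of the victim's remaining statically assigned block, which is the usual rationale for this policy.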
An efficient and scalable deformable model for virtual reality-based medical applications.
Choi, Kup-Sze; Sun, Hanqiu; Heng, Pheng-Ann
2004-09-01
Modeling of tissue deformation is of great importance to virtual reality (VR)-based medical simulations. Considerable effort has been dedicated to the development of interactively deformable virtual tissues. In this paper, an efficient and scalable deformable model is presented for virtual-reality-based medical applications. It considers deformation as a localized force transmittal process which is governed by algorithms based on breadth-first search (BFS). The computational speed is scalable to facilitate real-time interaction by adjusting the penetration depth. Simulated annealing (SA) algorithms are developed to optimize the model parameters by using the reference data generated with the linear static finite element method (FEM). The mechanical behavior and timing performance of the model have been evaluated. The model has been applied to simulate the typical behavior of living tissues and anisotropic materials. Integration with a haptic device has also been achieved on a generic personal computer (PC) platform. The proposed technique provides a feasible solution for VR-based medical simulations and has the potential for multi-user collaborative work in virtual environment.
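The localized force-transmittal idea admits a compact sketch: propagate a contact force outward through the mesh with BFS, stopping after a fixed number of layers so cost is bounded by the penetration depth rather than the mesh size. The adjacency-dict mesh and the per-layer attenuation factor below are illustrative, not the paper's actual update rule.

```python
from collections import deque

def propagate_force(adjacency, contact_node, force, depth, decay=0.5):
    """Distribute a contact force over mesh nodes within `depth` BFS
    layers of the contact point. Force attenuates by `decay` per layer;
    nodes beyond `depth` stay static, which is what makes runtime
    scalable: work is bounded by penetration depth, not mesh size."""
    displacement = {contact_node: force}
    visited = {contact_node}
    frontier = deque([(contact_node, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == depth:               # penetration depth reached
            continue
        for nb in adjacency[node]:
            if nb not in visited:
                visited.add(nb)
                displacement[nb] = displacement[node] * decay
                frontier.append((nb, d + 1))
    return displacement

# A 5-node path mesh: only nodes within 2 hops of the contact move.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
moved = propagate_force(path, contact_node=0, force=1.0, depth=2)
```

Reducing `depth` trades visual fidelity for speed, which is the scalability knob the abstract describes.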
Jang, Jae-Wook; Yun, Jaesung; Mohaisen, Aziz; Woo, Jiyoung; Kim, Huy Kang
2016-01-01
Mass-market mobile security threats have increased recently due to the growth of mobile technologies and the popularity of mobile devices. Accordingly, techniques have been introduced for identifying, classifying, and defending against mobile threats utilizing static, dynamic, on-device, and off-device techniques. Static techniques are easy to evade, while dynamic techniques are expensive. On-device techniques are constrained by limited device resources, while off-device techniques require an always-on network connection. To address some of those shortcomings, we introduce Andro-profiler, a hybrid behavior-based analysis and classification system for mobile malware. Andro-profiler's main goals are efficiency, scalability, and accuracy. To that end, Andro-profiler classifies malware by exploiting behavior profiles extracted from integrated system logs, including system calls. Andro-profiler executes a malicious application on an emulator in order to generate the integrated system logs, and creates human-readable behavior profiles by analyzing those logs. By comparing the behavior profile of a malicious application with the representative behavior profile of each malware family using a weighted similarity matching technique, Andro-profiler detects and classifies it into a malware family. The experimental results demonstrate that Andro-profiler is scalable, performs well in detecting and classifying malware with accuracy greater than 98%, outperforms existing state-of-the-art work, and is capable of identifying 0-day mobile malware samples.
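Weighted similarity matching against per-family representative profiles can be sketched as a weighted Jaccard-style overlap of feature-count dictionaries. The feature names, weights, and threshold below are illustrative assumptions, not Andro-profiler's actual profile format.

```python
def weighted_similarity(profile, representative, weights):
    """Weighted overlap between two behavior profiles (feature -> count).
    Features stand in for items mined from integrated system logs
    (e.g. system calls); names and weights here are illustrative."""
    keys = set(profile) | set(representative)
    num = sum(weights.get(k, 1.0) * min(profile.get(k, 0), representative.get(k, 0))
              for k in keys)
    den = sum(weights.get(k, 1.0) * max(profile.get(k, 0), representative.get(k, 0))
              for k in keys)
    return num / den if den else 0.0

def classify(profile, family_profiles, weights, threshold=0.5):
    """Assign the profile to the most similar family, or reject it when
    no family clears the similarity threshold."""
    family, score = max(((f, weighted_similarity(profile, rep, weights))
                         for f, rep in family_profiles.items()),
                        key=lambda pair: pair[1])
    return family if score >= threshold else "benign/unknown"

families = {"FakeInst": {"send_sms": 5, "read_contacts": 1},
            "Spyware": {"read_contacts": 4, "net_post": 3}}
weights = {"send_sms": 2.0}   # weight suspicious behaviors more heavily
```

A sample heavy in SMS activity then matches the SMS-sending family, while a profile sharing no behaviors with any representative falls below the threshold.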
Scalability and Validation of Big Data Bioinformatics Software.
Yang, Andrian; Troup, Michael; Ho, Joshua W K
2017-01-01
This review examines two important aspects that are central to modern big data bioinformatics analysis - software scalability and validity. We argue that not only are the issues of scalability and validation common to all big data bioinformatics analyses, they can be tackled by conceptually related methodological approaches, namely divide-and-conquer (scalability) and multiple executions (validation). Scalability is defined as the ability for a program to scale based on workload. It has always been an important consideration when developing bioinformatics algorithms and programs. Nonetheless the surge of volume and variety of biological and biomedical data has posed new challenges. We discuss how modern cloud computing and big data programming frameworks such as MapReduce and Spark are being used to effectively implement divide-and-conquer in a distributed computing environment. Validation of software is another important issue in big data bioinformatics that is often ignored. Software validation is the process of determining whether the program under test fulfils the task for which it was designed. Determining the correctness of the computational output of big data bioinformatics software is especially difficult due to the large input space and complex algorithms involved. We discuss how state-of-the-art software testing techniques that are based on the idea of multiple executions, such as metamorphic testing, can be used to implement an effective bioinformatics quality assurance strategy. We hope this review will raise awareness of these critical issues in bioinformatics.
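Metamorphic testing can be illustrated with a toy stand-in for a bioinformatics program: a k-mer counter whose output must be invariant under reordering of the input reads. No oracle for the "correct" counts is needed, only consistency across multiple executions on transformed inputs. The program, relation, and data here are illustrative, not taken from the review.

```python
import random
from collections import Counter

def kmer_counts(reads, k=3):
    """Program under test: count k-mers across a set of sequencing reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def metamorphic_permutation_test(program, reads, trials=5, seed=42):
    """Metamorphic relation: permuting the input reads must not change
    the output. A violation reveals a bug without knowing the expected
    counts -- the 'multiple executions' idea the review describes."""
    baseline = program(reads)
    rng = random.Random(seed)
    for _ in range(trials):
        shuffled = reads[:]
        rng.shuffle(shuffled)
        if program(shuffled) != baseline:
            return False
    return True
```

Other relations for this program (e.g. duplicating a read should exactly double its k-mers' counts) can be checked the same way.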
μπ: A Scalable and Transparent System for Simulating MPI Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S
2010-01-01
μπ is a scalable, transparent system for experimenting with the execution of parallel programs on simulated computing platforms. The level of simulated detail can be varied for application behavior as well as for machine characteristics. Unique features of μπ are repeatability of execution, scalability to millions of simulated (virtual) MPI ranks, scalability to hundreds of thousands of host (real) MPI ranks, portability of the system to a variety of host supercomputing platforms, and the ability to experiment with scientific applications whose source code is available. The set of source-code interfaces supported by μπ is being expanded to support a wider set of applications, and MPI-based scientific computing benchmarks are being ported. In proof-of-concept experiments, μπ has been successfully exercised to spawn and sustain very large-scale executions of an MPI test program given in source-code form. Low slowdowns are observed, due to its use of a purely discrete-event style of execution, and due to the scalability and efficiency of the underlying parallel discrete event simulation engine, μsik. In the largest runs, μπ has been executed on up to 216,000 cores of a Cray XT5 supercomputer, successfully simulating over 27 million virtual MPI ranks, each virtual rank containing its own thread context, and all ranks fully synchronized by virtual time.
de Soure, António M; Fernandes-Platzgummer, Ana; Moreira, Francisco; Lilaia, Carla; Liu, Shi-Hwei; Ku, Chen-Peng; Huang, Yi-Feng; Milligan, William; Cabral, Joaquim M S; da Silva, Cláudia L
2017-05-01
Umbilical cord matrix (UCM)-derived mesenchymal stem/stromal cells (MSCs) are promising therapeutic candidates for regenerative medicine settings. UCM MSCs have advantages over adult cells as these can be obtained through a non-invasive harvesting procedure and display a higher proliferative capacity. However, the high cell doses required in the clinical setting make large-scale manufacturing of UCM MSCs mandatory. A commercially available human platelet lysate-based culture supplement (UltraGRO™, AventaCell BioMedical) (5%(v/v)) was tested to effectively isolate UCM MSCs and to expand these cells under (1) static conditions, using planar culture systems and (2) stirred culture using plastic microcarriers in a spinner flask. The MSC-like cells were isolated from UCM explant cultures after 11 ± 2 days. After five passages in static culture, UCM MSCs retained their immunophenotype and multilineage differentiation potential. The UCM MSCs cultured under static conditions using UltraGRO™-supplemented medium expanded more rapidly compared with UCM MSCs expanded using a previously established protocol. Importantly, UCM MSCs were successfully expanded under dynamic conditions on plastic microcarriers using UltraGRO™-supplemented medium in spinner flasks. Upon an initial 54% cell adhesion to the beads, UCM MSCs expanded by >13-fold after 5-6 days, maintaining their immunophenotype and multilineage differentiation ability. The present paper reports the establishment of an easily scalable integrated culture platform based on a human platelet lysate supplement for the effective isolation and expansion of UCM MSCs in a xenogeneic-free microcarrier-based system. This platform represents an important advance in obtaining safer and clinically meaningful MSC numbers for clinical translation. Copyright © 2016 John Wiley & Sons, Ltd.
A Hybrid EAV-Relational Model for Consistent and Scalable Capture of Clinical Research Data.
Khan, Omar; Lim Choi Keung, Sarah N; Zhao, Lei; Arvanitis, Theodoros N
2014-01-01
Many clinical research databases are built for specific purposes and their design is often guided by the requirements of their particular setting. Not only does this lead to issues of interoperability and reusability between research groups in the wider community but, within the project itself, changes and additions to the system could be implemented using an ad hoc approach, which may make the system difficult to maintain and even more difficult to share. In this paper, we outline a hybrid Entity-Attribute-Value and relational model approach for modelling data, in light of frequently changing requirements, which enables the back-end database schema to remain static, improving the extensibility and scalability of an application. The model also facilitates data reuse. The methods used build on the modular architecture previously introduced in the CURe project.
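The hybrid approach can be sketched concretely: stable, frequently queried fields stay in conventional relational columns, while study-specific, frequently changing items go into an entity-attribute-value table, so new attributes require no schema change. The table and attribute names below are illustrative, not the CURe project's schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Stable core fields remain relational: typed, indexed, queryable.
    CREATE TABLE patient (
        id INTEGER PRIMARY KEY,
        date_of_birth TEXT NOT NULL,
        sex TEXT NOT NULL
    );
    -- Changing, study-specific items go in an entity-attribute-value
    -- table: adding a new attribute needs no ALTER TABLE, so the
    -- back-end schema stays static as requirements evolve.
    CREATE TABLE observation (
        patient_id INTEGER REFERENCES patient(id),
        attribute TEXT NOT NULL,
        value TEXT NOT NULL
    );
""")
conn.execute("INSERT INTO patient VALUES (1, '1970-01-01', 'F')")
conn.execute("INSERT INTO observation VALUES (1, 'hba1c', '6.1')")
conn.execute("INSERT INTO observation VALUES (1, 'smoker', 'no')")  # no schema change

def attributes_for(patient_id):
    """Pivot a patient's EAV rows back into a dictionary."""
    rows = conn.execute(
        "SELECT attribute, value FROM observation WHERE patient_id = ?",
        (patient_id,)).fetchall()
    return dict(rows)
```

The trade-off is that EAV values lose column-level typing and constraints, which is why the core fields are kept relational.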
Validation of a Scalable Solar Sailcraft
NASA Technical Reports Server (NTRS)
Murphy, D. M.
2006-01-01
The NASA In-Space Propulsion (ISP) program sponsored intensive solar sail technology and systems design, development, and hardware demonstration activities over the past 3 years. Efforts to validate a scalable solar sail system by functional demonstration in relevant environments, together with test-analysis correlation activities, have recently been successfully completed. A review of the program is presented, with descriptions of the design, results of testing, and analytical model validations of component and assembly functional, strength, stiffness, shape, and dynamic behavior. The scaled performance of the validated system is projected to demonstrate the applicability to flight demonstration and important NASA road-map missions.
Temporally Scalable Visual SLAM using a Reduced Pose Graph
2012-05-25
MIT-CSAIL-TR-2012-013, MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, MA 02139, USA (www.csail.mit.edu), May 25, 2012. We demonstrate a system for temporally scalable visual SLAM using a reduced pose graph representation. Unlike previous visual SLAM approaches that use
Highlights of X-Stack ExM Deliverable Swift/T
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wozniak, Justin M.
Swift/T is a key success from the ExM (System support for extreme-scale, many-task applications) X-Stack project, which proposed to use concurrent dataflow as an innovative programming model to exploit extreme parallelism in exascale computers. The Swift/T component of the project reimplemented the Swift language from scratch to allow applications that compose scientific modules together to be built and run on available petascale computers (Blue Gene, Cray). Swift/T does this via a new compiler and runtime that generates and executes the application as an MPI program. We assume that mission-critical emerging exascale applications will be composed as scalable applications using existing software components, connected by data dependencies. Developers wrap native code fragments using a higher-level language, then build composite applications to form a computational experiment. This exemplifies hierarchical concurrency: lower-level messaging libraries are used for fine-grained parallelism; high-level control is used for inter-task coordination. These patterns are best expressed with dataflow, but static DAGs (i.e., other workflow languages) limit the applications that can be built; they do not provide the expressiveness of Swift, such as conditional execution, iteration, and recursive functions.
al3c: high-performance software for parameter inference using Approximate Bayesian Computation.
Stram, Alexander H; Marjoram, Paul; Chen, Gary K
2015-11-01
The development of Approximate Bayesian Computation (ABC) algorithms for parameter inference which are both computationally efficient and scalable in parallel computing environments is an important area of research. Monte Carlo rejection sampling, a fundamental component of ABC algorithms, is trivial to distribute over multiple processors but is inherently inefficient. While development of algorithms such as ABC Sequential Monte Carlo (ABC-SMC) help address the inherent inefficiencies of rejection sampling, such approaches are not as easily scaled on multiple processors. As a result, current Bayesian inference software offerings that use ABC-SMC lack the ability to scale in parallel computing environments. We present al3c, a C++ framework for implementing ABC-SMC in parallel. By requiring only that users define essential functions such as the simulation model and prior distribution function, al3c abstracts the user from both the complexities of parallel programming and the details of the ABC-SMC algorithm. By using the al3c framework, the user is able to scale the ABC-SMC algorithm in parallel computing environments for his or her specific application, with minimal programming overhead. al3c is offered as a static binary for Linux and OS-X computing environments. The user completes an XML configuration file and C++ plug-in template for the specific application, which are used by al3c to obtain the desired results. Users can download the static binaries, source code, reference documentation and examples (including those in this article) by visiting https://github.com/ahstram/al3c. astram@usc.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
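The rejection-sampling step the abstract calls trivially parallel but inherently inefficient can be sketched in a few lines; each prior draw is independent, so the loop below distributes across processors without coordination. This is a plain ABC rejection sketch on a toy Gaussian model, not al3c's ABC-SMC algorithm or API; all names are illustrative.

```python
import random

def abc_rejection(observed, simulate, prior_sample, distance, eps, n_accept, rng):
    """Plain ABC rejection: draw theta from the prior, simulate data,
    keep theta when the summary distance falls below eps. Every draw is
    independent, which is why this step distributes trivially over
    processors -- and why ABC-SMC refines it for efficiency."""
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_sample(rng)
        if distance(simulate(theta, rng), observed) < eps:
            accepted.append(theta)
    return accepted

# Toy model: infer the mean of a unit-variance Gaussian from its sample mean.
rng = random.Random(1)
posterior = abc_rejection(
    observed=2.0,
    simulate=lambda theta, r: sum(r.gauss(theta, 1.0) for _ in range(30)) / 30,
    prior_sample=lambda r: r.uniform(-5.0, 5.0),
    distance=lambda sim, obs: abs(sim - obs),
    eps=0.2, n_accept=50, rng=rng)
estimate = sum(posterior) / len(posterior)
```

ABC-SMC replaces the flat prior with a sequence of weighted, perturbed populations under shrinking eps, which is the harder-to-parallelize part that al3c packages up.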
TreeVector: scalable, interactive, phylogenetic trees for the web.
Pethica, Ralph; Barker, Gary; Kovacs, Tim; Gough, Julian
2010-01-28
Phylogenetic trees are complex data forms that need to be graphically displayed to be human-readable. Traditional techniques of plotting phylogenetic trees focus on rendering a single static image, but increases in the production of biological data and large-scale analyses demand scalable, browsable, and interactive trees. We introduce TreeVector, a Scalable Vector Graphics- and Java-based method that allows trees to be integrated and viewed seamlessly in standard web browsers with no extra software required, and can be modified and linked using standard web technologies. There are now many bioinformatics servers and databases with a range of dynamic processes and updates to cope with the increasing volume of data. TreeVector is designed as a framework to integrate with these processes and produce user-customized phylogenies automatically. We also address the strengths of phylogenetic trees as part of a linked-in browsing process rather than an end graphic for print. TreeVector is fast and easy to use and is available to download precompiled, but is also open source. It can also be run from the web server listed below or the user's own web server. It has already been deployed on two recognized and widely used database Web sites.
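The core idea, emitting a tree as Scalable Vector Graphics that any browser renders natively, can be sketched without TreeVector itself. The nested-tuple tree format and layout constants below are assumptions for illustration; this is not TreeVector's API.

```python
def tree_to_svg(tree, x_step=60, y_step=30):
    """Render a nested-tuple phylogeny, e.g. ("root", [leaf, subtree]),
    as a minimal SVG line drawing: leaves get successive rows, internal
    nodes are centred on their children. Vector output stays crisp at
    any zoom and each element can later be styled or hyperlinked."""
    lines, labels, y = [], [], [0]

    def layout(node, depth):
        x = depth * x_step
        if isinstance(node, str):            # leaf: take the next free row
            y[0] += y_step
            labels.append(f'<text x="{x + 5}" y="{y[0]}">{node}</text>')
            return x, y[0]
        name, children = node
        pts = [layout(c, depth + 1) for c in children]
        cy = sum(p[1] for p in pts) // len(pts)
        for cx, ys in pts:                   # vertical stem, then branch
            lines.append(f'<line x1="{x}" y1="{cy}" x2="{x}" y2="{ys}" stroke="black"/>')
            lines.append(f'<line x1="{x}" y1="{ys}" x2="{cx}" y2="{ys}" stroke="black"/>')
        labels.append(f'<text x="{x + 5}" y="{cy - 4}">{name}</text>')
        return x, cy

    layout(tree, 0)
    body = "\n".join(lines + labels)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="400" '
            f'height="{y[0] + y_step}">\n{body}\n</svg>')

svg = tree_to_svg(("root", ["A", ("clade", ["B", "C"])]))
```

Because the output is plain markup, server-side processes can regenerate or annotate trees automatically, the integration point the abstract emphasizes.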
Wafer-scalable high-performance CVD graphene devices and analog circuits
NASA Astrophysics Data System (ADS)
Tao, Li; Lee, Jongho; Li, Huifeng; Piner, Richard; Ruoff, Rodney; Akinwande, Deji
2013-03-01
Graphene field effect transistors (GFETs) will serve as an essential component for functional modules like amplifiers and frequency doublers in analog circuits. The performance of these modules is directly related to the mobility of charge carriers in GFETs, which per this study has been greatly improved. Low-field electrostatic measurements show field mobility values up to 12,000 cm2/Vs at ambient conditions with our newly developed scalable CVD graphene. For both hole and electron transport, fabricated GFETs offer substantial amplification for small and large signals at quasi-static frequencies, limited only by external capacitances at high frequencies. GFETs biased at the peak transconductance point featured high small-signal gain with eventual output power compression similar to conventional transistor amplifiers. GFETs operating around the Dirac voltage afforded positive conversion gain for the first time, to our knowledge, in experimental graphene frequency doublers. This work suggests a realistic prospect for high-performance linear and non-linear analog circuits based on the unique electron-hole symmetry and fast transport now accessible in wafer-scalable CVD graphene. *Support from the NSF CAREER award (ECCS-1150034) and the W. M. Keck Foundation is appreciated.
A Transparently-Scalable Metadata Service for the Ursa Minor Storage System
2010-06-25
provide application-level guarantees. For example, many document editing programs implement atomic updates by writing the new document version into a... operations that could involve multiple servers, how close existing systems come to transparent scalability, how systems that handle multi-server
NASA Technical Reports Server (NTRS)
West, Jeff; Yang, H. Q.
2014-01-01
There are many instances involving liquid/gas interfaces and their dynamics in the design of liquid engine powered rockets such as the Space Launch System (SLS). Some examples of these applications are: propellant tank draining and slosh, subcritical condition injector analysis for gas generators, preburners and thrust chambers, water deluge mitigation for launch induced environments, and even solid rocket motor liquid slag dynamics. Commercially available CFD programs simulating gas/liquid interfaces using the Volume of Fluid approach are currently limited in their parallel scalability. In 2010 for instance, an internal NASA/MSFC review of three commercial tools revealed that parallel scalability was seriously compromised at 8 cpus and no additional speedup was possible after 32 cpus. Other non-interface CFD applications at the time were demonstrating useful parallel scalability up to 4,096 processors or more. Based on this review, NASA/MSFC initiated an effort to implement a Volume of Fluid capability within the unstructured-mesh, pressure-based CFD program Loci-STREAM. After verification was achieved by comparing results to the commercial CFD program CFD-Ace+, and validation by direct comparison with data, Loci-STREAM-VoF is now the production CFD tool for propellant slosh force and slosh damping rate simulations at NASA/MSFC. In these applications, good parallel scalability has been demonstrated for problem sizes of tens of millions of cells and thousands of cpu cores. Ongoing efforts are focused on the application of Loci-STREAM-VoF to predict the transient flow patterns of water on the SLS Mobile Launch Platform in order to support the phasing of water for launch environment mitigation so that detrimental effects on the vehicle are avoided.
NASA Technical Reports Server (NTRS)
Whitaker, Mike
1991-01-01
Severe precipitation static problems affecting the communication equipment onboard the P-3B aircraft were recently studied. The study was conducted after precipitation static created potential safety-of-flight problems on Naval Reserve aircraft. A specially designed flight test program was conducted in order to measure, record, analyze, and characterize potential precipitation static problem areas. The test program successfully characterized the precipitation static interference problems while the P-3B was flown in moderate to extreme precipitation conditions. Data up to 400 MHz were collected on the effects of engine charging, precipitation static, and extreme cross fields. These data were collected using a computer-controlled acquisition system consisting of a signal generator, RF spectrum and audio analyzers, data recorders, and instrumented static dischargers. The test program is outlined, and the computer-controlled data acquisition system used during flight and ground testing is described in detail. The correlation between the test results recorded during the flight test program and those measured during ground testing is also discussed.
Performance tradeoffs in static and dynamic load balancing strategies
NASA Technical Reports Server (NTRS)
Iqbal, M. A.; Saltz, J. H.; Bokhart, S. H.
1986-01-01
The problem of uniformly distributing the load of a parallel program over a multiprocessor system was considered. A program was analyzed whose structure permits the computation of the optimal static solution. Then four strategies for load balancing were described and their performance compared. The strategies are: (1) the optimal static assignment algorithm which is guaranteed to yield the best static solution, (2) the static binary dissection method which is very fast but sub-optimal, (3) the greedy algorithm, a static fully polynomial time approximation scheme, which estimates the optimal solution to arbitrary accuracy, and (4) the predictive dynamic load balancing heuristic which uses information on the precedence relationships within the program and outperforms any of the static methods. It is also shown that the overhead incurred by the dynamic heuristic is reduced considerably if it is started off with a static assignment provided by either of the other three strategies.
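Strategy (3), the greedy algorithm, can be sketched as the classic longest-processing-time heuristic: sort tasks by decreasing cost and always hand the next task to the currently least-loaded processor. The task costs below are illustrative; the paper's actual scheme estimates the optimal solution to arbitrary accuracy, which this simple sketch does not.

```python
def greedy_assign(task_costs, n_procs):
    """Greedy static load balancing (LPT heuristic): largest task first,
    always onto the least-loaded processor. Returns per-processor loads
    and the task assignment; max(loads) is the makespan to minimize."""
    loads = [0.0] * n_procs
    assignment = [[] for _ in range(n_procs)]
    for cost in sorted(task_costs, reverse=True):
        p = min(range(n_procs), key=loads.__getitem__)  # least-loaded proc
        loads[p] += cost
        assignment[p].append(cost)
    return loads, assignment

# Seven tasks, two processors: greedy reaches a perfectly balanced split.
loads, assignment = greedy_assign([7, 5, 4, 3, 2, 2, 1], n_procs=2)
```

A dynamic heuristic would instead migrate work at run time; seeding it with a static assignment like this one is exactly the overhead reduction the paper reports.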
Scalable microcarrier-based manufacturing of mesenchymal stem/stromal cells.
de Soure, António M; Fernandes-Platzgummer, Ana; da Silva, Cláudia L; Cabral, Joaquim M S
2016-10-20
Due to their unique features, mesenchymal stem/stromal cells (MSC) have been exploited in clinical settings as therapeutic candidates for the treatment of a variety of diseases. However, the success in obtaining clinically-relevant MSC numbers for cell-based therapies is dependent on efficient isolation and ex vivo expansion protocols, able to comply with good manufacturing practices (GMP). In this context, the 2-dimensional static culture systems typically used for the expansion of these cells present several limitations that may lead to reduced cell numbers and compromise cell functions. Furthermore, many studies in the literature report the expansion of MSC using fetal bovine serum (FBS)-supplemented medium, which has been critically rated by regulatory agencies. Alternative platforms for the scalable manufacturing of MSC have been developed, namely using microcarriers in bioreactors, with also a considerable number of studies now reporting the production of MSC using xenogeneic/serum-free medium formulations. In this review we provide a comprehensive overview on the scalable manufacturing of human mesenchymal stem/stromal cells, depicting the various steps involved in the process from cell isolation to ex vivo expansion, using different cell tissue sources and culture medium formulations and exploiting bioprocess engineering tools namely microcarrier technology and bioreactors. Copyright © 2016 Elsevier B.V. All rights reserved.
Adapting for Scalability: Automating the Video Assessment of Instructional Learning
ERIC Educational Resources Information Center
Roberts , Amy M.; LoCasale-Crouch, Jennifer; Hamre, Bridget K.; Buckrop, Jordan M.
2017-01-01
Although scalable programs, such as online courses, have the potential to reach broad audiences, they may pose challenges to evaluating learners' knowledge and skills. Automated scoring offers a possible solution. In the current paper, we describe the process of creating and testing an automated means of scoring a validated measure of teachers'…
Scalability, Timing, and System Design Issues for Intrinsic Evolvable Hardware
NASA Technical Reports Server (NTRS)
Hereford, James; Gwaltney, David
2004-01-01
In this paper we address several issues pertinent to intrinsic evolvable hardware (EHW). The first issue is scalability; namely, how the design space scales as the programming string for the programmable device gets longer. We develop a model for population size and the number of generations as a function of the programming string length, L, and show that the number of circuit evaluations is an O(L²) process. We compare our model to several successful intrinsic EHW experiments and discuss the many implications of our model. The second issue that we address is the timing of intrinsic EHW experiments. We show that the processing time is a small part of the overall time to derive or evolve a circuit and that major improvements in processor speed alone will have only a minimal impact on improving the scalability of intrinsic EHW. The third issue we consider is the system-level design of intrinsic EHW experiments. We review what other researchers have done to break the scalability barrier and contend that the type of reconfigurable platform and the evolutionary algorithm are tied together and impose limits on each other.
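The O(L²) claim follows directly from the model's two linear factors: if population size and generation count each grow linearly with string length L, their product, the total number of circuit evaluations, grows quadratically. A toy version with illustrative constants (not the paper's fitted coefficients):

```python
def circuit_evaluations(L, pop_coeff=0.5, gen_coeff=2.0):
    """Toy scaling model: population ~ L and generations ~ L, so total
    circuit evaluations ~ L^2. Coefficients are illustrative only."""
    population = pop_coeff * L     # individuals per generation
    generations = gen_coeff * L    # generations to converge
    return population * generations

# Doubling the programming-string length quadruples the evaluation count,
# which is why faster evaluation hardware alone cannot fix scalability.
ratio = circuit_evaluations(2000) / circuit_evaluations(1000)
```

This is also why the paper argues processor speed is not the bottleneck: each evaluation involves physically configuring and measuring the device, a cost no CPU upgrade removes.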
Nanomanufacturing-related programs at NSF
NASA Astrophysics Data System (ADS)
Cooper, Khershed P.
2015-08-01
The National Science Foundation is meeting the challenge of transitioning lab-scale nanoscience and technology to commercial-scale through several nanomanufacturing-related research programs. The goal of the core Nanomanufacturing (NM) and the inter-disciplinary Scalable Nanomanufacturing (SNM) programs is to meet the barriers to manufacturability at the nano-scale by developing the fundamental principles for the manufacture of nanomaterials, nanostructures, nanodevices, and engineered nanosystems. These programs address issues such as scalability, reliability, quality, performance, yield, metrics, and cost, among others. The NM and SNM programs seek nano-scale manufacturing ideas that are transformative, that will be widely applicable and that will have far-reaching technological and societal impacts. It is envisioned that the results from these basic research programs will provide the knowledge base for larger programs such as the manufacturing Nanotechnology Science and Engineering Centers (NSECs) and the Nanosystems Engineering Research Centers (NERCs). Besides brief descriptions of these different programs, this paper will include discussions on novel
Equalizer: a scalable parallel rendering framework.
Eilemann, Stefan; Makhinya, Maxim; Pajarola, Renato
2009-01-01
Continuing improvements in CPU and GPU performance as well as increasing multi-core processor and cluster-based parallelism demand flexible and scalable parallel rendering solutions that can exploit multipipe hardware-accelerated graphics. In fact, to achieve interactive visualization, scalable rendering systems are essential to cope with the rapid growth of data sets. However, parallel rendering systems are non-trivial to develop and often only application-specific implementations have been proposed. The task of developing a scalable parallel rendering framework is even more difficult if it should be generic to support various types of data and visualization applications, and at the same time work efficiently on a cluster with distributed graphics cards. In this paper we introduce a novel system called Equalizer, a toolkit for scalable parallel rendering based on OpenGL which provides an application programming interface (API) to develop scalable graphics applications for a wide range of systems, ranging from large distributed visualization clusters and multi-processor multipipe graphics systems to single-processor single-pipe desktop machines. We describe the system architecture, the basic API, discuss its advantages over previous approaches, present example configurations and usage scenarios as well as scalability results.
A scalable parallel algorithm for multiple objective linear programs
NASA Technical Reports Server (NTRS)
Wiecek, Malgorzata M.; Zhang, Hong
1994-01-01
This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLPs). Job balance, speedup and scalability are of primary interest in evaluating efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLPs, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLPs are also included.
A software methodology for compiling quantum programs
NASA Astrophysics Data System (ADS)
Häner, Thomas; Steiger, Damian S.; Svore, Krysta; Troyer, Matthias
2018-04-01
Quantum computers promise to transform our notions of computation by offering a completely new paradigm. To achieve scalable quantum computation, optimizing compilers and a corresponding software design flow will be essential. We present a software architecture for compiling quantum programs from a high-level language program to hardware-specific instructions. We describe the necessary layers of abstraction and their differences and similarities to classical layers of a computer-aided design flow. For each layer of the stack, we discuss the underlying methods for compilation and optimization. Our software methodology facilitates more rapid innovation among quantum algorithm designers, quantum hardware engineers, and experimentalists. It enables scalable compilation of complex quantum algorithms and can be targeted to any specific quantum hardware implementation.
Component-cost and performance based comparison of flow and static batteries
NASA Astrophysics Data System (ADS)
Hopkins, Brandon J.; Smith, Kyle C.; Slocum, Alexander H.; Chiang, Yet-Ming
2015-10-01
Flow batteries are a promising grid-storage technology that is scalable, inherently flexible in power/energy ratio, and potentially low cost in comparison to conventional or "static" battery architectures. Recent advances in flow chemistries are enabling significantly higher energy density flow electrodes. When the same battery chemistry can arguably be used in either a flow or static electrode design, the relative merits of either design choice become of interest. Here, we analyze the costs of the electrochemically active stack for both architectures under the constraint of constant energy efficiency and charge and discharge rates, using as case studies the aqueous vanadium-redox chemistry, widely used in conventional flow batteries, and aqueous lithium-iron-phosphate (LFP)/lithium-titanium-phosphate (LTP) suspensions, an example of a higher energy density suspension-based electrode. It is found that although flow batteries always have a cost advantage (per kWh) at the stack level modeled, the advantage is a strong function of flow electrode energy density. For the LFP/LTP case, the cost advantage decreases from ∼50% to ∼10% over experimentally reasonable ranges of suspension loading. Such results are important input for design choices when both battery architectures are viable options.
Supporting secure programming in web applications through interactive static analysis.
Zhu, Jun; Xie, Jing; Lipford, Heather Richter; Chu, Bill
2014-07-01
Many security incidents are caused by software developers' failure to adhere to secure programming practices. Static analysis tools have been used to detect software vulnerabilities. However, their wide usage by developers is limited by the special training required to write rules customized to application-specific logic. Our approach is interactive static analysis, to integrate static analysis into Integrated Development Environment (IDE) and provide in-situ secure programming support to help developers prevent vulnerabilities during code construction. No additional training is required nor are there any assumptions on ways programs are built. Our work is motivated in part by the observation that many vulnerabilities are introduced due to failure to practice secure programming by knowledgeable developers. We implemented a prototype interactive static analysis tool as a plug-in for Java in Eclipse. Our technical evaluation of our prototype detected multiple zero-day vulnerabilities in a large open source project. Our evaluations also suggest that false positives may be limited to a very small class of use cases.
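The in-situ warning idea can be shown in miniature: scan code as it is written and flag any database `execute` call whose query is built by concatenation rather than passed as a constant, the moment an IDE plug-in would prompt the developer. This toy pass uses Python's `ast` module purely as a stand-in; the paper's tool targets Java in Eclipse, and the rule here is illustrative.

```python
import ast

def flag_sql_risks(source):
    """Toy 'interactive static analysis' pass: return the line numbers of
    .execute() calls whose query argument is not a plain string constant
    (e.g. built by concatenation or an f-string), a common precursor to
    SQL injection. Requires no user-written rules, mirroring the
    no-extra-training goal of the approach."""
    warnings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"
                and node.args
                and not isinstance(node.args[0], ast.Constant)):
            warnings.append(node.lineno)
    return warnings

snippet = 'db.execute("SELECT 1")\ndb.execute("SELECT * FROM t WHERE id=" + uid)\n'
risky_lines = flag_sql_risks(snippet)
```

A real interactive tool would run such passes incrementally on each keystroke and suggest the parameterized-query fix inline, rather than reporting after the fact.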
Kiranyaz, Serkan; Mäkinen, Toni; Gabbouj, Moncef
2012-10-01
In this paper, we propose a novel framework based on a collective network of evolutionary binary classifiers (CNBC) to address the problems of feature and class scalability. The main goal of the proposed framework is to achieve a high classification performance over dynamic audio and video repositories. The proposed framework adopts a "Divide and Conquer" approach in which an individual network of binary classifiers (NBC) is allocated to discriminate each audio class. An evolutionary search is applied to find the best binary classifier in each NBC with respect to a given criterion. Through the incremental evolution sessions, the CNBC framework can dynamically adapt to each new incoming class or feature set without resorting to a full-scale re-training or re-configuration. Therefore, the CNBC framework is particularly designed for dynamically varying databases where no conventional static classifiers can adapt to such changes. In short, it is entirely a novel topology, an unprecedented approach for dynamic, content/data adaptive and scalable audio classification. A large set of audio features can be effectively used in the framework, where the CNBCs make appropriate selections and combinations so as to achieve the highest discrimination among individual audio classes. Experiments demonstrate a high classification accuracy (above 90%) and efficiency of the proposed framework over large and dynamic audio databases. Copyright © 2012 Elsevier Ltd. All rights reserved.
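The divide-and-conquer idea of one independent network per class can be sketched as follows. The class and scorer names are hypothetical, and the per-class "networks" are reduced to plain scoring functions; this is only the structural essence of CNBC, not its evolutionary training.

```python
class ClassNetworkBank:
    """One independent binary scorer per class; adding a class introduces
    only a new scorer and leaves existing ones untouched, mirroring the
    incremental, no-full-retraining property described above."""

    def __init__(self):
        self.scorers = {}

    def add_class(self, label, scorer):
        # Incremental evolution: only the new class's scorer is trained/added.
        self.scorers[label] = scorer

    def classify(self, features):
        # Each per-class network votes; the highest-scoring class wins.
        return max(self.scorers, key=lambda lbl: self.scorers[lbl](features))
```

A new class can thus be registered against a live repository without reconfiguring the classifiers already in place.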
Novel high-fidelity realistic explosion damage simulation for urban environments
NASA Astrophysics Data System (ADS)
Liu, Xiaoqing; Yadegar, Jacob; Zhu, Youding; Raju, Chaitanya; Bhagavathula, Jaya
2010-04-01
Realistic building damage simulation plays a significant role in modern modeling and simulation systems, especially across the diverse panoply of military and civil applications where these systems are widely used for personnel training, critical mission planning, disaster management, etc. Realistic building damage simulation should incorporate accurate physics-based explosion models, rubble generation, rubble flyout, and interactions between flying rubble and surrounding entities. However, no existing building damage simulation system achieves the realism required for effective military applications. In this paper, we present a novel physics-based, high-fidelity, and runtime-efficient explosion simulation system that realistically simulates destruction to buildings. In the proposed system, a family of novel blast models is applied to accurately and realistically simulate explosions based on static and/or dynamic detonation conditions. The system also accounts for rubble pile formation and applies a generic and scalable multi-component-based object representation to describe scene entities, together with a highly scalable agent-subsumption architecture and scheduler to schedule clusters of sequential and parallel events. The proposed system utilizes a highly efficient and scalable tetrahedral decomposition approach to realistically simulate rubble formation. Experimental results demonstrate that the proposed system can realistically simulate rubble generation, rubble flyout, and their primary and secondary impacts on surrounding objects including buildings, constructions, vehicles, and pedestrians in clusters of sequential and parallel damage events.
Scalability of Classical Terramechanics Models for Lightweight Vehicle Applications
2013-08-01
Paramsothy Jayakumar; Daniel Melanz; Jamie MacLennan (U.S. Army TARDEC, Warren, MI, USA); Carmine Senatore; Karl Iagnemma
A Scalable Nonuniform Pointer Analysis for Embedded Programs
NASA Technical Reports Server (NTRS)
Venet, Arnaud
2004-01-01
In this paper we present a scalable pointer analysis for embedded applications that is able to distinguish between instances of recursively defined data structures and elements of arrays. The main contribution consists of an efficient yet precise algorithm that can handle multithreaded programs. We first perform an inexpensive flow-sensitive analysis of each function in the program that generates semantic equations describing the effect of the function on the memory graph. These equations bear numerical constraints that describe nonuniform points-to relationships. We then iteratively solve these equations in order to obtain an abstract storage graph that describes the shape of data structures at every point of the program for all possible thread interleavings. We present experimental evidence that this approach is tractable and precise for real-size embedded applications.
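The iterative solving of points-to equations can be illustrated with a minimal inclusion-constraint fixpoint. This is a generic Andersen-style sketch under simplifying assumptions (no numerical constraints, no nonuniform relationships), not the analysis described in the paper.

```python
def solve_points_to(constraints):
    """Iterate simple inclusion constraints to a fixpoint.

    constraints: list of ('addr', p, x)  meaning p = &x
                 or      ('copy', p, q)  meaning p = q (pts(q) ⊆ pts(p))
    Returns: dict mapping each pointer to its set of abstract locations."""
    pts = {}
    changed = True
    while changed:  # keep sweeping until no set grows (fixpoint reached)
        changed = False
        for kind, lhs, rhs in constraints:
            target = pts.setdefault(lhs, set())
            new = {rhs} if kind == "addr" else pts.get(rhs, set())
            if not new <= target:
                target |= new
                changed = True
    return pts
```

The real analysis solves far richer equations (carrying numerical index constraints), but the fixpoint-iteration skeleton is the same: re-apply every equation until the abstract storage graph stops changing.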
Structural Analysis and Test Comparison of a 20-Meter Inflation-Deployed Solar Sail
NASA Technical Reports Server (NTRS)
Sleight, David W.; Mann, Troy; Lichodziejewski, David; Derbes, Billy
2006-01-01
Under the direction of the NASA In-Space Propulsion Technology Office, the team of L'Garde, NASA Jet Propulsion Laboratory, Ball Aerospace, and NASA Langley Research Center has been developing a scalable solar sail configuration to address NASA's future space propulsion needs. Prior to a flight experiment of a full-scale solar sail, a comprehensive test program was implemented to advance the technology readiness level of the solar sail design. These tests consisted of solar sail component, subsystem, and sub-scale system ground tests that simulated aspects of the space environment such as vacuum and thermal conditions. In July 2005, a 20-m four-quadrant solar sail system test article was tested in the NASA Glenn Research Center's Space Power Facility to measure its static and dynamic structural responses. Key to the maturation of solar sail technology is the development of validated finite element analysis (FEA) models that can be used for design and analysis of solar sails. A major objective of the program was to utilize the test data to validate the FEA models simulating the solar sail ground tests. The FEA software, ABAQUS, was used to perform the structural analyses to simulate the ground tests performed on the 20-m solar sail test article. This paper presents the details of the FEA modeling, the structural analyses simulating the ground tests, and a comparison of the pretest and post-test analysis predictions with the ground test results for the 20-m solar sail system test article. The structural responses that are compared in the paper include load-deflection curves and natural frequencies for the beam structural assembly and static shape, natural frequencies, and mode shapes for the solar sail membrane. The analysis predictions were in reasonable agreement with the test data. Factors that precluded better correlation of the analyses and the tests were unmeasured initial conditions in the test set-up.
Hill, Kristian J; Robinson, Kendall P; Cuchna, Jennifer W; Hoch, Matthew C
2017-11-01
Clinical Scenario: Increasing hamstring flexibility through clinical stretching interventions may be an effective means to prevent hamstring injuries. However, the most effective method to increase hamstring flexibility has yet to be determined. For a healthy individual, are proprioceptive neuromuscular facilitation (PNF) stretching programs more effective in immediately improving hamstring flexibility when compared with static stretching programs? Summary of Key Findings: A thorough literature search returned 195 possible studies; 5 studies met the inclusion criteria and were included. Current evidence supports the use of PNF stretching or static stretching programs for increasing hamstring flexibility. However, neither program demonstrated superior effectiveness when examining immediate increases in hamstring flexibility. Clinical Bottom Line: There were consistent findings from multiple low-quality studies that indicate there is no difference in the immediate improvements in hamstring flexibility when comparing PNF stretching programs to static stretching programs in physically active adults. Strength of Recommendation: Grade B evidence exists that PNF and static stretching programs equally increase hamstring flexibility immediately following the stretching program.
Expansion of Human Induced Pluripotent Stem Cells in Stirred Suspension Bioreactors.
Almutawaa, Walaa; Rohani, Leili; Rancourt, Derrick E
2016-01-01
Human induced pluripotent stem cells (hiPSCs) hold great promise as a cell source for therapeutic applications and regenerative medicine. Traditionally, hiPSCs are expanded in two-dimensional static culture as colonies in the presence or absence of feeder cells. However, this expansion procedure is associated with lack of reproducibility and low cell yields. To fulfill the large cell number demand for clinical use, robust large-scale production of these cells under defined conditions is needed. Herein, we describe a scalable, low-cost protocol for expanding hiPSCs as aggregates in a lab-scale bioreactor.
Formal Verification of Large Software Systems
NASA Technical Reports Server (NTRS)
Yin, Xiang; Knight, John
2010-01-01
We introduce a scalable proof structure to facilitate formal verification of large software systems. In our approach, we mechanically synthesize an abstract specification from the software implementation, match its static operational structure to that of the original specification, and organize the proof as the conjunction of a series of lemmas about the specification structure. By setting up a different lemma for each distinct element and proving each lemma independently, we obtain the important benefit that the proof scales easily for large systems. We present details of the approach and an illustration of its application on a challenge problem from the security domain.
ERIC Educational Resources Information Center
SUTTON, MACK C.
This self-instructional programmed text is for individual student use in studying static control in electrical-electronic programs. It was developed by an instructional materials specialist and advisers and has been tested by student use. The objective of the course is to help the electrical technician develop an understanding of static control…
Evolution of a minimal parallel programming model
Lusk, Ewing; Butler, Ralph; Pieper, Steven C.
2017-04-30
Here, we take a historical approach to our presentation of self-scheduled task parallelism, a programming model with its origins in early irregular and nondeterministic computations encountered in automated theorem proving and logic programming. We show how an extremely simple task model has evolved into a system, asynchronous dynamic load balancing (ADLB), and a scalable implementation capable of supporting sophisticated applications on today's (and tomorrow's) largest supercomputers; and we illustrate the use of ADLB with a Green's function Monte Carlo application, a modern, mature nuclear physics code in production use. Our lesson is that by surrendering a certain amount of generality and thus applicability, a minimal programming model (in terms of its basic concepts and the size of its application programmer interface) can achieve extreme scalability without introducing complexity.
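The essence of self-scheduled task parallelism is that idle workers pull the next task from a shared pool rather than being assigned a fixed share up front. The thread-based sketch below is an assumption-laden stand-in: ADLB itself is an MPI library for distributed memory, so this only captures the shared-pool scheduling idea, not its implementation.

```python
import queue
import threading

def run_self_scheduled(tasks, worker_fn, n_workers=4):
    """Workers repeatedly pull tasks from a shared queue as they finish,
    so load balances dynamically even when task costs are irregular."""
    work = queue.Queue()
    for t in tasks:
        work.put(t)

    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                task = work.get_nowait()
            except queue.Empty:
                return  # pool drained; this worker retires
            r = worker_fn(task)
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results
```

Note how small the "API" is (put tasks, pull tasks, collect results); that minimality is precisely the trade-off the abstract describes.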
Alkaline static feed electrolyzer based oxygen generation system
NASA Technical Reports Server (NTRS)
Noble, L. D.; Kovach, A. J.; Fortunato, F. A.; Schubert, F. H.; Grigger, D. J.
1988-01-01
In preparation for the future deployment of the Space Station, an R and D program was established to demonstrate integrated operation of an alkaline Water Electrolysis System and a fuel cell as an energy storage device. The program's scope was revised when the Space Station Control Board changed the energy storage baseline for the Space Station. The new scope was aimed at the development of an alkaline Static Feed Electrolyzer for use in an Environmental Control/Life Support System as an oxygen generation system. As a result, the program was divided into two phases. The Phase 1 effort was directed at the development of the Static Feed Electrolyzer for application in a Regenerative Fuel Cell System. During this phase, the program emphasized incorporation of the Regenerative Fuel Cell System design requirements into the Static Feed Electrolyzer electrochemical module design and the mechanical components design. The mechanical components included a Pressure Control Assembly, a Water Supply Assembly and a Thermal Control Assembly. These designs were completed through manufacturing drawings during Phase 1. The Phase 2 effort was directed at advancing the Alkaline Static Feed Electrolyzer database for an oxygen generation system. This development was aimed at extending the Static Feed Electrolyzer database in areas which may be encountered from initial fabrication through transportation, storage, launch and eventual Space Station startup. During this phase, the program emphasized three major areas: materials evaluation, electrochemical module scaling and performance repeatability, and Static Feed Electrolyzer operational definition and characterization.
Di Maio, Dario
2017-01-01
The majority of currently published dispersion protocols of carbon nanotubes rely on techniques that are not scalable to an industrial level. This work shows how to obtain polymer nanocomposites with good mechanical characteristics using multi-walled carbon nanotubes epoxy resins obtained by mechanical mixing only. The mechanical dispersion method illustrated in this work is easily scalable to industrial level. The high shearing force due to the complex field of motion produces a good and reproducible carbon nanotube dispersion. We have tested an industrial epoxy matrix with good baseline mechanical characteristics at different carbon nanotube weight loads. ASTM-derived tensile and compressive tests show an increment in both Young's modulus and compressive strength compared with the pristine resin from a starting low wt %. Comparative vibration tests show improvement in the damping capacity. The new carbon nanotube enhanced epoxy resin has superior mechanical properties compared to the market average competitor, and is among the top products in the bi-components epoxy resins market. The new dispersion method shows significant potential for the industrial use of CNTs in epoxy matrices. PMID:29064400
NASA Astrophysics Data System (ADS)
Grasso, J. R.; Bachèlery, P.
Self-organized systems are often used to describe natural phenomena where power laws and scale invariant geometry are observed. The Piton de la Fournaise volcano shows power-law behavior in many aspects. These include the temporal distribution of eruptions and the frequency-size distributions of induced earthquakes, dikes, fissures, lava flows and interflow periods, all evidence of self-similarity over a finite scale range. We show that the bounds to scale-invariance can be used to derive geomechanical constraints on both the volcano structure and the volcano mechanics. We ascertain that the present magma bodies are multi-lens reservoirs in a quasi-eruptive condition, i.e., a marginally critical state. The scaling organization of dynamic fluid-induced observables on the volcano, such as fluid-induced earthquakes, dikes and surface fissures, appears to be controlled by an underlying static hierarchical structure (geology) similar to that proposed for fluid circulations in human physiology. The emergence of saturation lengths for the scalable volcanic observables argues for the finite scalability of complex naturally self-organized critical systems, including volcano dynamics.
NASA Astrophysics Data System (ADS)
Al Hadhrami, Tawfik; Nightingale, James M.; Wang, Qi; Grecos, Christos
2014-05-01
In emergency situations, the ability to remotely monitor unfolding events using high-quality video feeds will significantly improve the incident commander's understanding of the situation and thereby aid effective decision making. This paper presents a novel, adaptive video monitoring system for emergency situations where the normal communications network infrastructure has been severely impaired or is no longer operational. The proposed scheme, operating over a rapidly deployable wireless mesh network, supports real-time video feeds between first responders, forward operating bases and primary command and control centers. Video feeds captured on portable devices carried by first responders and by static visual sensors are encoded in H.264/SVC, the scalable extension to H.264/AVC, allowing efficient, standard-based temporal, spatial, and quality scalability of the video. A three-tier video delivery system is proposed, which balances the need to avoid overuse of mesh nodes with the operational requirements of the emergency management team. In the first tier, the video feeds are delivered at a low spatial and temporal resolution employing only the base layer of the H.264/SVC video stream. Routing in this mode is designed to employ all nodes across the entire mesh network. In the second tier, whenever operational considerations require that commanders or operators focus on a particular video feed, a 'fidelity control' mechanism at the monitoring station sends control messages to the routing and scheduling agents in the mesh network, which increase the quality of the received picture using SNR scalability while conserving bandwidth by maintaining a low frame rate. In this mode, routing decisions are based on reliable packet delivery with the most reliable routes being used to deliver the base and lower enhancement layers; as fidelity is increased and more scalable layers are transmitted they will be assigned to routes in descending order of reliability.
The third tier of video delivery transmits a high-quality video stream including all available scalable layers using the most reliable routes through the mesh network ensuring the highest possible video quality. The proposed scheme is implemented in a proven simulator, and the performance of the proposed system is numerically evaluated through extensive simulations. We further present an in-depth analysis of the proposed solutions and potential approaches towards supporting high-quality visual communications in such a demanding context.
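The three-tier policy above maps operational state to a set of H.264/SVC layers and a routing preference. The sketch below makes that mapping explicit; the dictionary keys, layer counts, and routing labels are illustrative assumptions, not the paper's actual control-message format.

```python
def tier_plan(tier, quality_layers, temporal_layers):
    """Return which scalable enhancement layers to transmit, and the
    routing policy, for each of the three delivery tiers described above."""
    if tier == 1:
        # Wide-area monitoring: base layer only, routed across all mesh nodes.
        return {"quality": 0, "temporal": 0, "routing": "all-nodes"}
    if tier == 2:
        # Operator focus: raise SNR quality, keep frame rate low to save
        # bandwidth; most reliable routes carry base and lower layers.
        return {"quality": quality_layers, "temporal": 0,
                "routing": "reliability-ordered"}
    # Tier 3: all available layers over the most reliable routes.
    return {"quality": quality_layers, "temporal": temporal_layers,
            "routing": "most-reliable"}
```

The key design point is that fidelity is a scheduling decision at the monitoring station, not a re-encode: the SVC stream already contains every layer, and the tiers simply choose how many to forward.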
XPRESS: eXascale PRogramming Environment and System Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brightwell, Ron; Sterling, Thomas; Koniges, Alice
The XPRESS Project is one of four major projects of the DOE Office of Science Advanced Scientific Computing Research X-stack Program initiated in September 2012. The purpose of XPRESS is to devise an innovative system software stack to enable practical and useful exascale computing around the end of the decade, with near-term contributions to efficient and scalable operation of trans-petaflops performance systems in the next two to three years, both for DOE mission-critical applications. To this end, XPRESS directly addresses the critical challenges of efficiency, scalability, and programmability in computing through introspective methods of dynamic adaptive resource management and task scheduling.
Lam, Alan Tin-Lun; Li, Jian; Chen, Allen Kuan-Liang; Reuveny, Shaul
2014-01-01
The expansion of human pluripotent stem cells (hPSC) for biomedical applications generally compels a defined, reliable, and scalable platform. Bioreactors offer a three-dimensional culture environment that relies on the implementation of microcarriers (MC), as supports for cell anchorage and their subsequent growth. Polystyrene microspheres/MC coated with adhesion-promoting extracellular matrix (ECM) protein, vitronectin (VN), or laminin (LN) have been shown to support hPSC expansion in a static environment. However, they are insufficient to promote human embryonic stem cells (hESC) seeding and their expansion in an agitated environment. The present study describes an innovative technology, consisting of a cationic charge that underlies the ECM coatings. By combining poly-L-lysine (PLL) with a coating of ECM protein, cell attachment efficiency and cell spreading are improved, thus enabling seeding under agitation in a serum-free medium. This coating combination also critically enables the subsequent formation and evolution of hPSC/MC aggregates, which ensure cell viability and generate high yields. Aggregate dimensions of at least 300 μm during early cell growth give rise to ≈15-fold expansion at 7 days' culture. Increasing aggregate numbers at a quasi-constant size of ≈300 μm indicates hESC growth within a self-regulating microenvironment. PLL+LN enables cell seeding and aggregate evolution under constant agitation, whereas PLL+VN requires an intermediate 2-day static pause to attain comparable aggregate sizes and correspondingly high expansion yields. The cells' highly reproducible bioresponse to these defined and characterized MC surface properties is universal across multiple cell lines, thus confirming the robustness of this scalable expansion process in a defined environment. PMID:24641164
Sawja: Static Analysis Workshop for Java
NASA Astrophysics Data System (ADS)
Hubert, Laurent; Barré, Nicolas; Besson, Frédéric; Demange, Delphine; Jensen, Thomas; Monfort, Vincent; Pichardie, David; Turpin, Tiphaine
Static analysis is a powerful technique for automatic verification of programs but raises major engineering challenges when developing a full-fledged analyzer for a realistic language such as Java. Efficiency and precision of such a tool rely partly on low level components which only depend on the syntactic structure of the language and therefore should not be redesigned for each implementation of a new static analysis. This paper describes the Sawja library: a static analysis workshop fully compliant with Java 6 which provides OCaml modules for efficiently manipulating Java bytecode programs. We present the main features of the library, including i) efficient functional data-structures for representing a program with implicit sharing and lazy parsing, ii) an intermediate stack-less representation, and iii) fast computation and manipulation of complete programs. We provide experimental evaluations of the different features with respect to time, memory and precision.
PLAStiCC: Predictive Look-Ahead Scheduling for Continuous dataflows on Clouds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumbhare, Alok; Simmhan, Yogesh; Prasanna, Viktor K.
2014-05-27
Scalable stream processing and continuous dataflow systems are gaining traction with the rise of big data due to the need for processing high velocity data in near real time. Unlike batch processing systems such as MapReduce and workflows, static scheduling strategies fall short for continuous dataflows due to the variations in the input data rates and the need for sustained throughput. The elastic resource provisioning of cloud infrastructure is valuable to meet the changing resource needs of such continuous applications. However, multi-tenant cloud resources introduce yet another dimension of performance variability that impacts the application's throughput. In this paper we propose PLAStiCC, an adaptive scheduling algorithm that balances resource cost and application throughput using a prediction-based look-ahead approach. It not only addresses variations in the input data rates but also the underlying cloud infrastructure. In addition, we also propose several simpler static scheduling heuristics that operate in the absence of accurate performance prediction model. These static and adaptive heuristics are evaluated through extensive simulations using performance traces obtained from public and private IaaS clouds. Our results show an improvement of up to 20% in the overall profit as compared to the reactive adaptation algorithm.
Dennis, Eslie; Banks, Peter; Murata, Lauren B; Sanchez, Stephanie A; Pennington, Christie; Hockersmith, Linda; Miller, Rachel; Lambe, Jess; Feng, Janine; Kapadia, Monesh; Clements, June; Loftin, Isabell; Singh, Shalini; Das-Gupta, Ashis; Lloyd, William; Bloom, Kenneth
2016-10-01
Companion diagnostic assay interpretation can identify the patients most likely to benefit from targeted therapies. We present the results from a prospective study demonstrating that pathologists can effectively learn immunohistochemical assay-interpretation skills from digital image-based electronic training (e-training). In this study, e-training was used to train board-certified pathologists to evaluate non-small cell lung carcinoma for eligibility for treatment with onartuzumab, a MET-inhibiting agent. The training program mimicked the live training that was previously validated in clinical trials for onartuzumab. A digital interface was developed for pathologists to review high-resolution, static images of stained slides. Sixty-four pathologists practicing in the United States enrolled while blinded to the type of training. After training, both groups completed a mandatory final test using glass slides. The results indicated both training modalities to be effective. Overall, 80.6% of e-trainees and 72.7% of live trainees achieved passing scores (at least 85%) on the final test. All study participants reported that their training experience was "good" and that they had received sufficient information to determine the adequacy of case slide staining to score each case. This study established that an e-training program conducted under highly controlled conditions can provide pathologists with the skills necessary to interpret a complex assay and that these skills can be equivalent to those achieved with face-to-face training using conventional microscopy. Programs of this type are scalable for global distribution and offer pathologists the potential for readily accessible and robust training in new companion diagnostic assays linked to novel, targeted, adjuvant therapies for cancer patients. Copyright © 2016 Elsevier Inc. All rights reserved.
Winning One Program at a Time: A Systemic Approach
ERIC Educational Resources Information Center
Schultz, Adam; Zimmerman, Kay
2016-01-01
Many universities are missing an opportunity to focus student recruitment marketing efforts and budget at the program level, which can offer lower-priced advertising opportunities with higher conversion rates than traditional university-level marketing initiatives. At NC State University, we have begun to deploy a scalable, low-cost, program level…
ACT-Vision: active collaborative tracking for multiple PTZ cameras
NASA Astrophysics Data System (ADS)
Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet
2009-04-01
We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.
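Covering all targets with a minimal number of PTZ sensors is, at its core, a set-cover problem. The greedy sketch below is a generic illustration under that framing (camera names and coverage sets are hypothetical), not ACT-Vision's actual assignment policy, which also weighs distance, occlusion, and image quality.

```python
def assign_cameras(coverage, targets):
    """Greedy set cover: repeatedly pick the camera whose field of regard
    covers the most still-unassigned targets.

    coverage: dict mapping camera id -> set of visible target ids
    Returns (chosen_cameras, uncoverable_targets)."""
    uncovered = set(targets)
    chosen = []
    while uncovered:
        best = max(coverage, key=lambda cam: len(coverage[cam] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:
            break  # remaining targets are not visible to any camera
        chosen.append(best)
        uncovered -= gained
    return chosen, uncovered
```

Greedy set cover is not optimal in general, but it gives the logarithmic approximation guarantee that makes it a common baseline for sensor-assignment problems like this one.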
NASA Technical Reports Server (NTRS)
Bittker, D. A.; Scullin, V. J.
1972-01-01
A general chemical kinetics program is described for complex, homogeneous ideal-gas reactions in any chemical system. Its main features are flexibility and convenience in treating many different reaction conditions. The program solves numerically the differential equations describing complex reactions in either a static system or one-dimensional inviscid flow. Applications include ignition and combustion, shock wave reactions, and general reactions in a flowing or static system. An implicit numerical solution method is used which works efficiently for the extreme conditions of a very slow or a very fast reaction. The theory is described, and the computer program and users' manual are included.
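Why an implicit method handles both very slow and very fast reactions can be shown on the simplest stiff case, a first-order decay dA/dt = -kA in a static system. This is a one-equation sketch of the implicit idea only, not the program's actual multi-species solver; the rate constant and step size below are arbitrary.

```python
def backward_euler_decay(a0, k, h, steps):
    """Implicit (backward) Euler for dA/dt = -k*A.  Solving the implicit
    update a_next = a - h*k*a_next gives a_next = a / (1 + k*h), which
    stays stable and positive even when k*h >> 1 (a very fast reaction)."""
    a = a0
    history = [a0]
    for _ in range(steps):
        a = a / (1.0 + k * h)
        history.append(a)
    return history
```

Contrast with explicit Euler, whose update factor (1 - k*h) would be -9999 for k = 1e6 and h = 0.01, so the explicit solution oscillates and explodes while the implicit one decays monotonically with the same large step.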
Scalable Light Module for Low-Cost, High-Efficiency Light- Emitting Diode Luminaires
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarsa, Eric
2015-08-31
During this two-year program Cree developed a scalable, modular optical architecture for low-cost, high-efficacy light-emitting diode (LED) luminaires. Stated simply, the goal of this architecture was to efficiently and cost-effectively convey light from LEDs (point sources) to broad luminaire surfaces (area sources). By simultaneously developing warm-white LED components and low-cost, scalable optical elements, a high system optical efficiency resulted. To meet program goals, Cree evaluated novel approaches to improve LED component efficacy at high color quality while not sacrificing LED optical efficiency relative to conventional packages. Meanwhile, efficiently coupling light from LEDs into modular optical elements, followed by optimally distributing and extracting this light, were challenges that were addressed via novel optical design coupled with frequent experimental evaluations. Minimizing luminaire bill-of-materials and assembly costs were two guiding principles for all design work, in the effort to achieve luminaires with significantly lower normalized cost ($/klm) than existing LED fixtures. Chief project accomplishments included the achievement of >150 lm/W warm-white LEDs having primary optics compatible with low-cost modular optical elements. In addition, a prototype Light Module optical efficiency of over 90% was measured, demonstrating the potential of this scalable architecture for ultra-high-efficacy LED luminaires. Since the project ended, Cree has continued to evaluate optical element fabrication and assembly methods in an effort to rapidly transfer this scalable, cost-effective technology to Cree production development groups. The Light Module concept is likely to make a strong contribution to the development of new cost-effective, high-efficacy luminaires, thereby accelerating widespread adoption of energy-saving SSL in the U.S.
ERIC Educational Resources Information Center
Blikstein, Paulo; Worsley, Marcelo; Piech, Chris; Sahami, Mehran; Cooper, Steven; Koller, Daphne
2014-01-01
New high-frequency, automated data collection and analysis algorithms could offer new insights into complex learning processes, especially for tasks in which students have opportunities to generate unique open-ended artifacts such as computer programs. These approaches should be particularly useful because the need for scalable project-based and…
Eccentric Training and Static Stretching Improve Hamstring Flexibility of High School Males
Bandy, William D.
2004-01-01
Objective: To determine if the flexibility of high-school-aged males would improve after a 6-week eccentric exercise program. In addition, the changes in hamstring flexibility that occurred after the eccentric program were compared with a 6-week program of static stretching and with a control group (no stretching). Design and Setting: We used a test-retest control group design in a laboratory setting. Subjects were assigned randomly to 1 of 3 groups: eccentric training, static stretching, or control. Subjects: A total of 69 subjects, with a mean age of 16.45 ± 0.96 years and with limited hamstring flexibility (defined as 20° loss of knee extension measured with the thigh held at 90° of hip flexion) were recruited for this study. Measurements: Hamstring flexibility was measured using the passive 90/90 test before and after the 6-week program. Results: Differences were significant for test and for the test-by-group interaction. Follow-up analysis indicated significant differences between the control group (gain = 1.67°) and both the eccentric-training (gain = 12.79°) and static-stretching (gain = 12.05°) groups. No difference was found between the eccentric and static-stretching groups. Conclusions: The gains achieved in range of motion of knee extension (indicating improvement in hamstring flexibility) with eccentric training were equal to those made by statically stretching the hamstring muscles. PMID:15496995
Eccentric Training and Static Stretching Improve Hamstring Flexibility of High School Males.
Nelson, Russell T; Bandy, William D
2004-09-01
OBJECTIVE: To determine if the flexibility of high-school-aged males would improve after a 6-week eccentric exercise program. In addition, the changes in hamstring flexibility that occurred after the eccentric program were compared with a 6-week program of static stretching and with a control group (no stretching). DESIGN AND SETTING: We used a test-retest control group design in a laboratory setting. Subjects were assigned randomly to 1 of 3 groups: eccentric training, static stretching, or control. SUBJECTS: A total of 69 subjects, with a mean age of 16.45 +/- 0.96 years and with limited hamstring flexibility (defined as 20 degrees loss of knee extension measured with the thigh held at 90 degrees of hip flexion) were recruited for this study. MEASUREMENTS: Hamstring flexibility was measured using the passive 90/90 test before and after the 6-week program. RESULTS: Differences were significant for test and for the test-by-group interaction. Follow-up analysis indicated significant differences between the control group (gain = 1.67 degrees) and both the eccentric-training (gain = 12.79 degrees) and static-stretching (gain = 12.05 degrees) groups. No difference was found between the eccentric and static-stretching groups. CONCLUSIONS: The gains achieved in range of motion of knee extension (indicating improvement in hamstring flexibility) with eccentric training were equal to those made by statically stretching the hamstring muscles.
Ostrowski, M; Paulevé, L; Schaub, T; Siegel, A; Guziolowski, C
2016-11-01
Boolean networks (and more general logic models) are useful frameworks for studying signal transduction across multiple pathways. Logic models can be learned from a prior knowledge network structure and multiplex phosphoproteomics data. However, most efficient and scalable training methods focus on the comparison of two time points and assume that the system has reached an early steady state. In this paper, we generalize such a learning procedure to take into account the time-series traces of phosphoproteomics data in order to discriminate Boolean networks according to their transient dynamics. To that end, we identify a necessary condition that the dynamics of a Boolean network must satisfy to be consistent with a discretized time-series trace. Based on this condition, we use Answer Set Programming to compute an over-approximation of the set of Boolean networks that best fit the experimental data, and we provide the corresponding encodings. Combined with model-checking approaches, we end up with a global learning algorithm. Our approach is able to learn logic models with a true positive rate higher than 78% in two case studies of mammalian signaling networks; for a larger case study, our method provides optimal answers after 7 min of computation. We quantified the gain in prediction precision of our method compared with learning approaches based on static data. Finally, as an application, our method identifies erroneous time points in the time-series data with respect to the optimal learned logic models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
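The core idea of checking a Boolean network against a discretized time-series trace can be sketched in a few lines. This toy version uses synchronous updates and exact state matching; the paper's actual condition is weaker and is encoded in Answer Set Programming, so everything here (names, update semantics) is an illustrative assumption:

```python
def step(state, rules):
    """One synchronous update of a Boolean network.

    state: {node: 0 or 1}; rules: {node: function(state) -> 0 or 1}.
    """
    return {node: f(state) for node, f in rules.items()}

def consistent_with_trace(rules, trace):
    """Toy consistency check: does the synchronous dynamics reproduce
    every consecutive pair of observed (discretized) states?"""
    return all(step(trace[i], rules) == trace[i + 1]
               for i in range(len(trace) - 1))
```

A learning procedure can then discard any candidate network failing this check before ranking the survivors by fit to the data.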
2010-05-01
Parallel and Scalable Rotor Dynamic Analysis. …connections near the hub end, and containing up to 0.48 million degrees of freedom. The models are analyzed for scalability and timing for hover and… will enable the modeling of critical couplings that occur in hingeless and bearingless hubs with advanced flex structures. Second, it will enable the…
ERIC Educational Resources Information Center
Giagazoglou, Paraskevi; Arabatzi, Fotini; Dipla, Konstantina; Liga, Maria; Kellis, Eleftherios
2012-01-01
The aim of this study was to assess the effects of a hippotherapy program on static balance and strength in adolescents with intellectual disability (ID). Nineteen adolescents with moderate ID were assigned to either an experimental group (n = 10) or a control group (n = 9). The experimental group attended a 10-week hippotherapy program. To assess…
Effects of virtual reality programs on balance in functional ankle instability
Kim, Ki-Jong; Heo, Myoung
2015-01-01
[Purpose] The aim of the present study was to identify the impact that recent virtual reality training programs used in a variety of fields have had on the ankle’s static and dynamic senses of balance among subjects with functional ankle instability. [Subjects and Methods] This study randomly divided research subjects into two groups, a strengthening exercise group (Group I) and a balance exercise group (Group II), with each group consisting of 10 people. A virtual reality program was performed three times a week for four weeks. Exercises from the Nintendo Wii Fit Plus program were applied to each group for twenty minutes along with ten minutes of warming up and wrap-up exercises. [Results] Group II showed a significant decrease of post-intervention static and dynamic balance overall in the anterior-posterior, and mediolateral directions, compared with the pre-intervention test results. In comparison of post-intervention static and dynamic balance between Group I and Group II, a significant decrease was observed overall. [Conclusion] Virtual reality programs improved the static balance and dynamic balance of subjects with functional ankle instability. Virtual reality programs can be used more safely and efficiently if they are implemented under appropriate monitoring by a physiotherapist. PMID:26644652
Effects of virtual reality programs on balance in functional ankle instability.
Kim, Ki-Jong; Heo, Myoung
2015-10-01
[Purpose] The aim of the present study was to identify the impact that recent virtual reality training programs used in a variety of fields have had on the ankle's static and dynamic senses of balance among subjects with functional ankle instability. [Subjects and Methods] This study randomly divided research subjects into two groups, a strengthening exercise group (Group I) and a balance exercise group (Group II), with each group consisting of 10 people. A virtual reality program was performed three times a week for four weeks. Exercises from the Nintendo Wii Fit Plus program were applied to each group for twenty minutes along with ten minutes of warming up and wrap-up exercises. [Results] Group II showed a significant decrease of post-intervention static and dynamic balance overall in the anterior-posterior, and mediolateral directions, compared with the pre-intervention test results. In comparison of post-intervention static and dynamic balance between Group I and Group II, a significant decrease was observed overall. [Conclusion] Virtual reality programs improved the static balance and dynamic balance of subjects with functional ankle instability. Virtual reality programs can be used more safely and efficiently if they are implemented under appropriate monitoring by a physiotherapist.
NASA Astrophysics Data System (ADS)
Xu, Boyi; Xu, Li Da; Fei, Xiang; Jiang, Lihong; Cai, Hongming; Wang, Shuai
2017-08-01
Facing rapidly changing business environments, implementation of flexible business processes is crucial but difficult, especially in data-intensive application areas. This study aims to provide scalable and easily accessible information resources to leverage business process management. In this article, with a resource-oriented approach, enterprise data resources are represented as data-centric Web services, grouped on demand according to business requirements, and configured dynamically to adapt to changing business processes. First, a configurable architecture, CIRPA, involving an information resource pool is proposed to act as a scalable and dynamic platform that virtualises enterprise information resources as data-centric Web services. By exposing data-centric resources as REST services at larger granularities, tenant-isolated information resources can be accessed during business process execution. Second, a dynamic information resource pool is designed to support configurable, on-demand data access during business process execution. CIRPA also isolates transaction data from business processes while supporting composition of diverse business processes. Finally, a case study applying our method to a logistics application shows that CIRPA provides enhanced performance in both static service encapsulation and dynamic service execution in a cloud computing environment.
Learning directed acyclic graphs from large-scale genomics data.
Nikolay, Fabio; Pesavento, Marius; Kritikos, George; Typas, Nassos
2017-09-20
In this paper, we consider the problem of learning the genetic interaction map, i.e., the topology of a directed acyclic graph (DAG) of genetic interactions from noisy double-knockout (DK) data. Based on a set of well-established biological interaction models, we detect and classify the interactions between genes. We propose a novel linear integer optimization program called the Genetic-Interactions-Detector (GENIE) to identify the complex biological dependencies among genes and to compute the DAG topology that matches the DK measurements best. Furthermore, we extend the GENIE program by incorporating genetic interaction profile (GI-profile) data to further enhance the detection performance. In addition, we propose a sequential scalability technique for large sets of genes under study, in order to provide statistically significant results for real measurement data. Finally, we show via numerical simulations that the GENIE program and the GI-profile data extended GENIE (GI-GENIE) program clearly outperform the conventional techniques and present real data results for our proposed sequential scalability technique.
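The detect-and-classify step above rests on comparing double-knockout fitness to what an interaction-free null model predicts. A common baseline is the multiplicative model; the sketch below uses it with a fixed tolerance, purely as an illustration (the function name, thresholds, and labels are assumptions; GENIE itself solves an integer program rather than thresholding pairs):

```python
def classify_interactions(single, double, tol=0.05):
    """Classify gene pairs against a multiplicative null model.

    single: {gene: single-knockout fitness}
    double: {(g1, g2): double-knockout fitness}
    A pair is 'neutral' if observed fitness is within tol of the product
    of single-knockout fitnesses; otherwise 'positive' (alleviating) or
    'negative' (aggravating), by the sign of the deviation.
    """
    labels = {}
    for (a, b), w_ab in double.items():
        expected = single[a] * single[b]  # multiplicative expectation
        if abs(w_ab - expected) <= tol:
            labels[(a, b)] = "neutral"
        elif w_ab > expected:
            labels[(a, b)] = "positive"
        else:
            labels[(a, b)] = "negative"
    return labels
```

The DAG-learning step then searches for an acyclic topology whose implied interaction classes match these observations as closely as possible.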
Automated Performance Prediction of Message-Passing Parallel Programs
NASA Technical Reports Server (NTRS)
Block, Robert J.; Sarukkai, Sekhar; Mehra, Pankaj; Woodrow, Thomas S. (Technical Monitor)
1995-01-01
The increasing use of massively parallel supercomputers to solve large-scale scientific problems has generated a need for tools that can predict scalability trends of applications written for these machines. Much work has been done to create simple models that represent important characteristics of parallel programs, such as latency, network contention, and communication volume. But many of these methods still require substantial manual effort to represent an application in the model's format. The MK toolkit described in this paper is the result of an ongoing effort to automate the formation of analytic expressions of program execution time, with a minimum of programmer assistance. In this paper we demonstrate the feasibility of our approach, by extending previous work to detect and model communication patterns automatically, with and without overlapped computations. The predictions derived from these models agree, within reasonable limits, with execution times of programs measured on the Intel iPSC/860 and Paragon. Further, we demonstrate the use of MK in selecting optimal computational grain size and studying various scalability metrics.
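An analytic expression of the kind the toolkit derives typically splits execution time into a compute term that shrinks with processor count and a communication term built from latency and volume. A minimal hand-written sketch of such a model follows; the parameter names and the additive form are illustrative assumptions, not the toolkit's generated output:

```python
def predicted_time(work, p, n_msgs, latency, volume, bandwidth):
    """Toy analytic model of parallel execution time.

    work:      total sequential compute time (s)
    p:         number of processors
    n_msgs:    messages sent per processor
    latency:   per-message startup cost (s)
    volume:    bytes communicated per processor
    bandwidth: bytes per second
    """
    compute = work / p                       # perfectly divided compute
    comm = n_msgs * latency + volume / bandwidth  # linear comm model
    return compute + comm

def speedup(work, p, **kw):
    """Predicted speedup relative to the sequential time."""
    return work / predicted_time(work, p, **kw)
```

Sweeping p with such a model exposes the point where communication overhead dominates, which is exactly the grain-size trade-off the abstract mentions.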
Scalable Static and Dynamic Community Detection Using Grappolo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halappanavar, Mahantesh; Lu, Hao; Kalyanaraman, Anantharaman
Graph clustering, popularly known as community detection, is a fundamental kernel for several applications of relevance to the Defense Advanced Research Projects Agency’s (DARPA) Hierarchical Identify Verify Exploit (HIVE) Program. Clusters or communities represent natural divisions within a network that are densely connected within a cluster and sparsely connected to the rest of the network. The need to compute clustering on large scale data necessitates the development of efficient algorithms that can exploit modern architectures that are fundamentally parallel in nature. However, due to their irregular and inherently sequential nature, many of the current algorithms for community detection are challenging to parallelize. In response to the HIVE Graph Challenge, we present several parallelization heuristics for fast community detection using the Louvain method as the serial template. We implement all the heuristics in a software library called Grappolo. Using the inputs from the HIVE Challenge, we demonstrate superior performance and high quality solutions based on four parallelization heuristics. We use Grappolo on static graphs as the first step towards community detection on streaming graphs.
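The Louvain method used as the serial template above greedily moves vertices between communities to maximize Newman modularity. The sketch below computes just that objective for a given partition (the quantity each Louvain move tries to increase); the O(n²) loop and data layout are for clarity only, not how Grappolo or any production code evaluates it:

```python
def modularity(adj, communities):
    """Newman modularity Q of a partition of an undirected graph.

    adj:         {node: set(neighbors)} (undirected, no self-loops)
    communities: {node: community id}
    Q = (1/2m) * sum_{i,j in same community} [A_ij - k_i * k_j / (2m)]
    """
    two_m = sum(len(nbrs) for nbrs in adj.values())  # 2m = degree sum
    q = 0.0
    for i in adj:
        for j in adj:
            if communities[i] != communities[j]:
                continue  # delta(c_i, c_j) = 0: pair contributes nothing
            a_ij = 1.0 if j in adj[i] else 0.0
            q += a_ij - len(adj[i]) * len(adj[j]) / two_m
    return q / two_m
```

Louvain's inner loop evaluates the change in Q from moving one vertex, which can be computed incrementally from community degree sums rather than re-running a full pass like this.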
Caridakis, G; Karpouzis, K; Drosopoulos, A; Kollias, S
2012-12-01
Modeling and recognizing spatiotemporal, as opposed to static input, is a challenging task since it incorporates input dynamics as part of the problem. The vast majority of existing methods tackle the problem as an extension of the static counterpart, using dynamics, such as input derivatives, at feature level and adopting artificial intelligence and machine learning techniques originally designed for solving problems that do not specifically address the temporal aspect. The proposed approach deals with temporal and spatial aspects of the spatiotemporal domain in a discriminative as well as coupling manner. Self Organizing Maps (SOM) model the spatial aspect of the problem and Markov models its temporal counterpart. Incorporation of adjacency, both in training and classification, enhances the overall architecture with robustness and adaptability. The proposed scheme is validated both theoretically, through an error propagation study, and experimentally, on the recognition of individual signs, performed by different, native Greek Sign Language users. Results illustrate the architecture's superiority when compared to Hidden Markov Model techniques and variations both in terms of classification performance and computational cost. Copyright © 2012 Elsevier Ltd. All rights reserved.
Liu, Chong; Xie, Xing; Zhao, Wenting; Yao, Jie; Kong, Desheng; Boehm, Alexandria B; Cui, Yi
2014-10-08
Safe water scarcity occurs mostly in developing regions that also suffer from energy shortages and infrastructure deficiencies. Low-cost and energy-efficient water disinfection methods have the potential to make great impacts on people in these regions. At the present time, most water disinfection methods being promoted to households in developing countries are aqueous chemical-reaction-based or filtration-based. Incorporating nanomaterials into these existing disinfection methods could improve the performance; however, the high cost of material synthesis and recovery as well as fouling and slow treatment speed is still limiting their application. Here, we demonstrate a novel flow device that enables fast water disinfection using one-dimensional copper oxide nanowire (CuONW) assisted electroporation powered by static electricity. Electroporation relies on a strong electric field to break down microorganism membranes and only consumes a very small amount of energy. Static electricity as the power source can be generated by an individual person's motion in a facile and low-cost manner, which ensures its application anywhere in the world. The CuONWs used were synthesized through a scalable one-step air oxidation of low-cost copper mesh. With a single filtration, we achieved complete disinfection of bacteria and viruses in both raw tap and lake water with a high flow rate of 3000 L/(h·m²), equivalent to only 1 s of contact time. Copper leaching from the nanowire mesh was minimal.
Ensuring That Family Engagement Initiatives Are Successful, Sustainable, and Scalable
ERIC Educational Resources Information Center
Geller, Joanna D.
2016-01-01
In 2009, the U.S. Department of Education launched the highly competitive Investing in Innovation (i3) initiative. School districts and nonprofit partners nationwide have competed for coveted funds to develop a new program, validate an existing program with some evidence of success, or scale up a program backed by ample evidence. Very quickly,…
Difficulties with True Interoperability in Modeling & Simulation
2011-12-01
2009. Programming Scala: Scalability = Functional Programming + Objects. 1st ed. O'Reilly Media. Gallant and Gaughan. …that develops a model or simulation has a specific purpose, set of requirements, and limited funding. These programs cannot afford to coordinate with…implementation. The program offices should budget for and plan for coordination across domain projects within a limited scope to improve interoperability with…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erez, Mattan; Yelick, Katherine; Sarkar, Vivek
The Dynamic, Exascale Global Address Space programming environment (DEGAS) project will develop the next generation of programming models and runtime systems to meet the challenges of Exascale computing. Our approach is to provide an efficient and scalable programming model that can be adapted to application needs through the use of dynamic runtime features and domain-specific languages for computational kernels. We address the following technical challenges: Programmability: Rich set of programming constructs based on a Hierarchical Partitioned Global Address Space (HPGAS) model, demonstrated in UPC++. Scalability: Hierarchical locality control, lightweight communication (extended GASNet), and efficient synchronization mechanisms (Phasers). Performance Portability: Just-in-time specialization (SEJITS) for generating hardware-specific code and scheduling libraries for domain-specific adaptive runtimes (Habanero). Energy Efficiency: Communication-optimal code generation to optimize energy efficiency by reducing data movement. Resilience: Containment Domains for flexible, domain-specific resilience, using state capture mechanisms and lightweight, asynchronous recovery mechanisms. Interoperability: Runtime and language interoperability with MPI and OpenMP to encourage broad adoption.
Mickey Leland Energy Fellowship Report: Development of Advanced Window Coatings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolton, Ladena A.; Alvine, Kyle J.; Schemer-Kohrn, Alan L.
2014-08-05
Advanced fenestration technologies for light and thermal management in building applications are of great recent research interest for improvements in energy efficiency. Of these technologies, there is specific interest in advanced window coating technologies that have tailored control over the visible and infrared (IR) scattering into a room for both static and dynamic applications. Recently, PNNL has investigated novel subwavelength nanostructured coatings for both daylighting and IR thermal management applications. Such coatings are still in the early stages and additional research is needed in terms of scalable manufacturing. This project investigates aspects of a potential new methodology for low-cost scalable manufacture of said subwavelength coatings.
Linear static structural and vibration analysis on high-performance computers
NASA Technical Reports Server (NTRS)
Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.
1993-01-01
Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively parallel computers, hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations, and calculation of the eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (i.e., models for the High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.
Parallel Transport Quantum Logic Gates with Trapped Ions.
de Clercq, Ludwig E; Lo, Hsiang-Yu; Marinelli, Matteo; Nadlinger, David; Oswald, Robin; Negnevitsky, Vlad; Kienzler, Daniel; Keitch, Ben; Home, Jonathan P
2016-02-26
We demonstrate single-qubit operations by transporting a beryllium ion with a controlled velocity through a stationary laser beam. We use these to perform coherent sequences of quantum operations, and to perform parallel quantum logic gates on two ions in different processing zones of a multiplexed ion trap chip using a single recycled laser beam. For the latter, we demonstrate individually addressed single-qubit gates by local control of the speed of each ion. The fidelities we observe are consistent with operations performed using standard methods involving static ions and pulsed laser fields. This work therefore provides a path to scalable ion trap quantum computing with reduced requirements on the optical control complexity.
Technology advancement of the static feed water electrolysis process
NASA Technical Reports Server (NTRS)
Schubert, F. H.; Wynveen, R. A.
1977-01-01
A program was conducted to advance the technology of oxygen- and hydrogen-generating subsystems based on water electrolysis. Major emphasis was placed on static feed water electrolysis, a concept characterized by low power consumption and high intrinsic reliability. The static-feed-based oxygen generation subsystem consists of three subassemblies: (1) a combined water electrolysis and product gas dehumidifier module; (2) a product gas pressure controller; and (3) a cyclically filled water feed tank. Development activities were completed at the subsystem as well as at the component level. An extensive test program including single-cell, subsystem, and integrated system testing was completed, with the required test support accessories designed, fabricated, and assembled. Mini-product assurance activities were included throughout all phases of program activities. An extensive number of supporting technology studies were conducted to advance the technology base of the static feed water electrolysis process and to resolve problems.
Decision Engines for Software Analysis Using Satisfiability Modulo Theories Solvers
NASA Technical Reports Server (NTRS)
Bjorner, Nikolaj
2010-01-01
The area of software analysis, testing and verification is now undergoing a revolution thanks to the use of automated and scalable support for logical methods. A well-recognized premise is that at the core of software analysis engines is invariably a component using logical formulas for describing states and transformations between system states. The process of using this information for discovering and checking program properties (including such important properties as safety and security) amounts to automatic theorem proving. In particular, theorem provers that directly support common software constructs offer a compelling basis. Such provers are commonly called satisfiability modulo theories (SMT) solvers. Z3 is a state-of-the-art SMT solver. It is developed at Microsoft Research. It can be used to check the satisfiability of logical formulas over one or more theories such as arithmetic, bit-vectors, lists, records and arrays. The talk describes some of the technology behind modern SMT solvers, including the solver Z3. Z3 is currently mainly targeted at solving problems that arise in software analysis and verification. It has been applied to various contexts, such as systems for dynamic symbolic simulation (Pex, SAGE, Vigilante), for program verification and extended static checking (Spec#/Boogie, VCC, HAVOC), for software model checking (Yogi, SLAM), model-based design (FORMULA), security protocol code (F7), program run-time analysis and invariant generation (VS3). We will describe how it integrates support for a variety of theories that arise naturally in the context of the applications. There are several new promising avenues and the talk will touch on some of these and the challenges related to SMT solvers.
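The question every SMT solver answers, stripped of theories, is whether some assignment makes a formula true. The brute-force sketch below makes that concrete for pure propositional formulas; it is a pedagogical stand-in, not how Z3 works (Z3 adds theory reasoning and vastly better search than enumeration, and its real API differs from these invented names):

```python
from itertools import product

def satisfiable(variables, formula):
    """Brute-force satisfiability check for a propositional formula.

    variables: list of variable names.
    formula:   callable taking {var: bool} and returning bool.
    Returns a satisfying assignment, or None if the formula is
    unsatisfiable. Enumeration is exponential in len(variables),
    which is exactly why real solvers use smarter search.
    """
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return assignment
    return None
```

A verification tool uses the same interface in reverse: to prove a property P, it asks whether "not P" is satisfiable, and a returned assignment is a concrete counterexample.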
Daneshjoo, Abdolhamid; Mokhtar, Abdul Halim; Rahnama, Nader; Yusof, Ashril
2012-01-01
Purpose The study investigated the effects of FIFA 11+ and HarmoKnee, both being popular warm-up programs, on proprioception, and on the static and dynamic balance of professional male soccer players. Methods Under 21 year-old soccer players (n = 36) were divided randomly into 11+, HarmoKnee and control groups. The programs were performed for 2 months (24 sessions). Proprioception was measured bilaterally at 30°, 45° and 60° knee flexion using the Biodex Isokinetic Dynamometer. Static and dynamic balances were evaluated using the stork stand test and Star Excursion Balance Test (SEBT), respectively. Results The proprioception error of dominant leg significantly decreased from pre- to post-test by 2.8% and 1.7% in the 11+ group at 45° and 60° knee flexion, compared to 3% and 2.1% in the HarmoKnee group. The largest joint positioning error was in the non-dominant leg at 30° knee flexion (mean error value = 5.047), (p<0.05). The static balance with the eyes opened increased in the 11+ by 10.9% and in the HarmoKnee by 6.1% (p<0.05). The static balance with eyes closed significantly increased in the 11+ by 12.4% and in the HarmoKnee by 17.6%. The results indicated that static balance was significantly higher in eyes opened compared to eyes closed (p = 0.000). Significant improvements in SEBT in the 11+ (12.4%) and HarmoKnee (17.6%) groups were also found. Conclusion Both the 11+ and HarmoKnee programs were proven to be useful warm-up protocols in improving proprioception at 45° and 60° knee flexion as well as static and dynamic balance in professional male soccer players. Data from this research may be helpful in encouraging coaches or trainers to implement the two warm-up programs in their soccer teams. PMID:23251579
Animation, audio, and spatial ability: Optimizing multimedia for scientific explanations
NASA Astrophysics Data System (ADS)
Koroghlanian, Carol May
This study investigated the effects of audio, animation and spatial ability in a computer based instructional program for biology. The program presented instructional material via text or audio with lean text and included eight instructional sequences presented either via static illustrations or animations. High school students enrolled in a biology course were blocked by spatial ability and randomly assigned to one of four treatments (Text-Static Illustration, Audio-Static Illustration, Text-Animation, Audio-Animation). The study examined the effects of instructional mode (Text vs. Audio), illustration mode (Static Illustration vs. Animation) and spatial ability (Low vs. High) on practice and posttest achievement, attitude and time. Results for practice achievement indicated that high spatial ability participants achieved more than low spatial ability participants. Similar results for posttest achievement and spatial ability were not found. Participants in the Static Illustration treatments achieved the same as participants in the Animation treatments on both the practice and posttest. Likewise, participants in the Text treatments achieved the same as participants in the Audio treatments on both the practice and posttest. In terms of attitude, participants responded favorably to the computer based instructional program. They found the program interesting, felt the static illustrations or animations made the explanations easier to understand and concentrated on learning the material. Furthermore, participants in the Animation treatments felt the information was easier to understand than participants in the Static Illustration treatments. However, no difference for any attitude item was found for participants in the Text as compared to those in the Audio treatments. Significant differences were found by Spatial Ability for three attitude items concerning concentration and interest.
In all three items, the low spatial ability participants responded more positively than high spatial ability participants. In addition, low spatial ability participants reported greater mental effort than high spatial ability participants. Findings for time-in-program and time-in-instruction indicated that participants in the Animation treatments took significantly more time than participants in the Static Illustration treatments. No time differences of any type were found for participants in the Text versus Audio treatments. Implications for the design of multimedia instruction and topics for future research are included in the discussion.
MSC products for the simulation of tire behavior
NASA Technical Reports Server (NTRS)
Muskivitch, John C.
1995-01-01
The modeling of tires and the simulation of tire behavior are complex problems. The MacNeal-Schwendler Corporation (MSC) has a number of finite element analysis products that can be used to address the complexities of tire modeling and simulation. While there are many similarities between the products, each product has a number of capabilities that uniquely enable it to be used for a specific aspect of tire behavior. This paper discusses the following programs: (1) MSC/NASTRAN - general purpose finite element program for linear and nonlinear static and dynamic analysis; (2) MSC/ABAQUS - nonlinear statics and dynamics finite element program; (3) MSC/PATRAN AFEA (Advanced Finite Element Analysis) - general purpose finite element program with a subset of linear and nonlinear static and dynamic analysis capabilities with an integrated version of MSC/PATRAN for pre- and post-processing; and (4) MSC/DYTRAN - nonlinear explicit transient dynamics finite element program.
Computational Cognition and Robust Decision Making
2013-03-06
Report excerpts (fragmentary): "…much more powerful neuromorphic chips than current state of the art" (L. Chua); Cognition Program, DARPA (Gill Pratt); Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) Program, IARPA (Brad Minnery); 2012: four projects at SNU and KAIST co-funded with AOARD; DARPA SyNAPSE Program: design, fabrication, and demonstration of neuromorphic…
A complexity-scalable software-based MPEG-2 video encoder.
Chen, Guo-bin; Lu, Xin-ning; Wang, Xing-guo; Liu, Ji-lin
2004-05-01
With the development of general-purpose processors (GPPs) and video signal processing algorithms, it has become possible to implement a software-based real-time video encoder on a GPP; the low cost and easy upgradability of this approach have attracted developers' interest in moving video encoding from specialized hardware to more flexible software. In this paper, the encoding structure is first set up to support complexity scalability; then high-performance algorithms are applied to the key time-consuming modules in the coding process; finally, at the programming level, processor characteristics are exploited to improve data access efficiency and processing parallelism. Other programming methods, such as lookup tables, are adopted to reduce the computational complexity. Simulation results showed that these ideas not only improve the global performance of video coding, but also provide great flexibility in complexity regulation.
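As a hypothetical illustration of the lookup-table technique the abstract mentions (the paper's encoder is not shown, and this sketch is in Python rather than an optimized systems language), a clip table precomputed once can replace the per-pixel branching otherwise needed when reconstructing pixel blocks:

```python
# Hypothetical sketch (not from the paper): a precomputed clip table
# replaces the two comparisons otherwise needed per reconstructed pixel.

# Offset the table so negative sums can index it directly.
OFFSET = 512
CLIP_TABLE = [min(max(v - OFFSET, 0), 255) for v in range(2 * OFFSET + 256)]

def reconstruct_block(predicted, residual):
    """Add a residual block to a prediction, clamping via table lookup."""
    return [CLIP_TABLE[p + r + OFFSET] for p, r in zip(predicted, residual)]
```

The table trades a small amount of memory for the removal of data-dependent branches from the innermost loop, which is where such encoders spend most of their time.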
Leverage hadoop framework for large scale clinical informatics applications.
Dong, Xiao; Bahroos, Neil; Sadhu, Eugene; Jackson, Tommie; Chukhman, Morris; Johnson, Robert; Boyd, Andrew; Hynes, Denise
2013-01-01
In this manuscript, we present our experiences using the Apache Hadoop framework for high data volume and computationally intensive applications, and discuss some best practice guidelines in a clinical informatics setting. There are three main aspects in our approach: (a) process and integrate diverse, heterogeneous data sources using standard Hadoop programming tools and customized MapReduce programs; (b) after fine-grained aggregate results are obtained, perform data analysis using the Mahout data mining library; (c) leverage the column oriented features in HBase for patient centric modeling and complex temporal reasoning. This framework provides a scalable solution to meet the rapidly increasing, imperative "Big Data" needs of clinical and translational research. The intrinsic advantage of fault tolerance, high availability and scalability of Hadoop platform makes these applications readily deployable at the enterprise level cluster environment.
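The MapReduce pattern underlying the Hadoop programs described above can be sketched in plain Python; the record fields and diagnosis codes below are invented for illustration and are not the paper's actual schema:

```python
from collections import defaultdict

# Minimal sketch of the MapReduce pattern: the mapper emits key/value
# pairs, and the reducer aggregates all values sharing a key.

def mapper(record):
    """Emit ((patient, diagnosis), 1) pairs, one per record."""
    yield (record["patient"], record["diagnosis"]), 1

def reducer(pairs):
    """Aggregate the mapped pairs, as a Hadoop reducer would per key."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

records = [
    {"patient": "p1", "diagnosis": "E11"},
    {"patient": "p1", "diagnosis": "E11"},
    {"patient": "p2", "diagnosis": "I10"},
]
aggregated = reducer(pair for r in records for pair in mapper(r))
```

In an actual Hadoop deployment the mapper and reducer run on many nodes in parallel, with the framework handling the shuffle of keys between them; this toy version only shows the programming model.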
Pak, JuGeon; Park, KeeHyun
2012-01-01
We propose a smart medication dispenser having a high degree of scalability and remote manageability. We construct the dispenser with an extensible hardware architecture to achieve scalability, and we install an agent program in it to achieve remote manageability. The dispenser operates as follows: when the real-time clock reaches the predetermined medication time and the user presses the dispense button at that time, the predetermined medication is dispensed from the medication dispensing tray (MDT). In the proposed dispenser, the medication for each patient is stored in an MDT. One smart medication dispenser normally contains a single MDT; however, it can be extended to include more MDTs in order to support multiple users with one dispenser. For remote management, the proposed dispenser transmits the medication status and the system configurations to the monitoring server. In the case of a specific event such as a shortage of medication, memory overload, software error, or non-adherence, the event is transmitted immediately. All these operations are performed automatically, without the intervention of patients, through the agent program installed in the dispenser. Results of implementation and verification show that the proposed dispenser operates normally and suitably performs the management operations requested by the medication monitoring server.
Static Analysis of Programming Exercises: Fairness, Usefulness and a Method for Application
ERIC Educational Resources Information Center
Nutbrown, Stephen; Higgins, Colin
2016-01-01
This article explores the suitability of static analysis techniques based on the abstract syntax tree (AST) for the automated assessment of early/mid degree level programming. Focus is on fairness, timeliness and consistency of grades and feedback. Following investigation into manual marking practices, including a survey of markers, the assessment…
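A minimal sketch of what AST-based static analysis of a submission might look like, using Python's standard `ast` module; the rubric items counted here are hypothetical, not those of the article's marking scheme:

```python
import ast

# Walk a submission's AST and count constructs a rubric might reward.

def analyse_submission(source):
    tree = ast.parse(source)
    report = {"functions": 0, "loops": 0, "docstrings": 0}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            report["functions"] += 1
            if ast.get_docstring(node):
                report["docstrings"] += 1
        elif isinstance(node, (ast.For, ast.While)):
            report["loops"] += 1
    return report

submission = '''
def mean(xs):
    "Return the arithmetic mean."
    total = 0
    for x in xs:
        total += x
    return total / len(xs)
'''
```

Because the analysis inspects structure rather than output, it can grade style and design criteria consistently across submissions without executing untrusted student code.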
ERIC Educational Resources Information Center
Mahoney, Joyce; And Others
1988-01-01
Evaluates 10 courseware packages covering topics for introductory physics. Discusses the price; sub-topics; program type; interaction; possible hardware; time; calculus required; graphics; and comments on each program. Recommends two packages in projectile and circular motion, and three packages in statics and rotational dynamics. (YP)
Experimental characterization of composites. [load test methods
NASA Technical Reports Server (NTRS)
Bert, C. W.
1975-01-01
The experimental characterization for composite materials is generally more complicated than for ordinary homogeneous, isotropic materials because composites behave in a much more complex fashion, due to macroscopic anisotropic effects and lamination effects. Problems concerning the static uniaxial tension test for composite materials are considered along with approaches for conducting static uniaxial compression tests and static uniaxial bending tests. Studies of static shear properties are discussed, taking into account in-plane shear, twisting shear, and thickness shear. Attention is given to static multiaxial loading, systematized experimental programs for the complete characterization of static properties, and dynamic properties.
Nowomiejska, Katarzyna; Oleszczuk, Agnieszka; Zubilewicz, Anna; Krukowski, Jacek; Mańkowska, Anna; Rejdak, Robert; Zagórski, Zbigniew
2007-01-01
To compare the visual field results obtained by static perimetry, microperimetry and rarebit perimetry in patients suffering from dry age-related macular degeneration (AMD). Fifteen eyes with dry AMD (hard or soft macula drusen and RPE disorders) were enrolled into the study. Static perimetry was performed using the M2 macula program included in the Octopus 101 instrument. Microperimetry was performed using the macula program (14-2 threshold, 10 dB) within 10 degrees of the central visual field. The fovea program within 4 degrees was used while performing rarebit perimetry. The mean sensitivity was significantly lower (p<0.001) in microperimetry (13.5 dB) compared with static perimetry (26.7 dB). The mean deviation was significantly higher (p<0.001) in microperimetry (-6.32 dB) compared with static perimetry (-3.11 dB). The fixation was unstable in 47% and eccentric in 40% while performing microperimetry. The median of the "mean hit rate" in rarebit perimetry was 90% (range 40-100%). The mean examination duration was 6.5 min in static perimetry, 10.6 min in microperimetry and 5.5 min in rarebit perimetry (p<0.001). Sensitivity was 30%, 53% and 93%, respectively. The visual field defects obtained by microperimetry were more pronounced than those obtained by static perimetry. Microperimetry was the most sensitive procedure, although the most time-consuming. Microperimetry enables control of the fixation position and stability, which is not possible with the remaining methods. Rarebit perimetry revealed slight reduction of the integrity of the neural architecture of the retina. Microperimetry and rarebit perimetry provide more information regarding visual function than static perimetry, and are thus valuable methods in the diagnosis of dry AMD.
Integration of an intelligent systems behavior simulator and a scalable soldier-machine interface
NASA Astrophysics Data System (ADS)
Johnson, Tony; Manteuffel, Chris; Brewster, Benjamin; Tierney, Terry
2007-04-01
As the Army's Future Combat Systems (FCS) introduce emerging technologies and new force structures to the battlefield, soldiers will increasingly face new challenges in workload management. The next generation warfighter will be responsible for effectively managing robotic assets in addition to performing other missions. Studies of future battlefield operational scenarios involving the use of automation, including the specification of existing and proposed technologies, will provide significant insight into potential problem areas regarding soldier workload. The US Army Tank Automotive Research, Development, and Engineering Center (TARDEC) is currently executing an Army technology objective program to analyze and evaluate the effect of automated technologies and their associated control devices with respect to soldier workload. The Human-Robotic Interface (HRI) Intelligent Systems Behavior Simulator (ISBS) is a human performance measurement simulation system that allows modelers to develop constructive simulations of military scenarios with various deployments of interface technologies in order to evaluate operator effectiveness. One such interface is TARDEC's Scalable Soldier-Machine Interface (SMI). The scalable SMI provides a configurable machine interface application that is capable of adapting to several hardware platforms by recognizing the physical space limitations of the display device. This paper describes the integration of the ISBS and Scalable SMI applications, which will ultimately benefit both systems. The ISBS will be able to use the Scalable SMI to visualize the behaviors of virtual soldiers performing HRI tasks, such as route planning, and the scalable SMI will benefit from stimuli provided by the ISBS simulation environment. The paper describes the background of each system and details of the system integration approach.
Build IT: Scaling and Sustaining an Afterschool Computer Science Program for Girls
ERIC Educational Resources Information Center
Koch, Melissa; Gorges, Torie; Penuel, William R.
2012-01-01
"Co-design"--including youth development staff along with curriculum designers--is the key to developing an effective program that is both scalable and sustainable. This article describes Build IT, a two-year afterschool and summer curriculum designed to help middle school girls develop fluency in information technology (IT), interest in…
Mapping Uncharted Territory: Launching an Online Embedded Librarian Program
ERIC Educational Resources Information Center
Allen, Seth
2017-01-01
Developing a strategy for embedding librarians in online courses can be challenging, but it is essential to demonstrate to accrediting agencies how libraries serve online students. A well-thought-out plan can be scalable and sustainable for rapidly growing online programs and can satisfy accreditation standards. This article examines how one…
Gatica-Rojas, Valeska; Cartes-Velásquez, Ricardo; Méndez-Rebolledo, Guillermo; Guzman-Muñoz, Eduardo; Lizama, L Eduardo Cofré
2017-08-01
This study sought to evaluate the effects of a Nintendo Wii Balance Board (NWBB) intervention on ankle spasticity and static standing balance in young people with spastic cerebral palsy (SCP). Ten children and adolescents (aged 72-204 months) with SCP participated in an exercise program with the NWBB. The intervention lasted 6 weeks, with 3 sessions per week and 25 minutes per session. Ankle spasticity was assessed using the Modified Modified Ashworth Scale (MMAS), and static standing balance was quantified using posturographic measures (center-of-pressure [CoP] measures). Pre- and post-intervention measures were compared. There were significant decreases of spasticity in the ankle plantar flexor muscles (p < 0.01). There was also a significant reduction in the CoP sway area (p = 0.04), CoP mediolateral velocity (p = 0.03), and CoP anterior-posterior velocity (p = 0.03). A 6-week NWBB program reduces spasticity at the ankle plantar flexors and improves static standing balance in young people with SCP.
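One of the posturographic measures reported above, mean CoP velocity, can be sketched as the CoP path length divided by trial duration; the sample coordinates and sampling rate below are illustrative, not the study's data:

```python
import math

# Mean centre-of-pressure (CoP) velocity: total path length travelled by
# the CoP divided by the trial duration. Lower values indicate steadier
# static standing balance.

def mean_cop_velocity(xs, ys, sample_rate_hz):
    """Mean CoP velocity (units/s) from mediolateral and AP coordinates."""
    path = sum(math.hypot(x1 - x0, y1 - y0)
               for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])))
    duration = (len(xs) - 1) / sample_rate_hz
    return path / duration
```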
Security and Cloud Outsourcing Framework for Economic Dispatch
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarker, Mushfiqur R.; Wang, Jianhui; Li, Zuyi
The computational complexity and problem sizes of power grid applications have increased significantly with the advent of renewable resources and smart grid technologies. The current paradigm for solving these issues consists of in-house high-performance computing infrastructures, which have the drawbacks of high capital expenditures, maintenance, and limited scalability. Cloud computing is an ideal alternative due to its powerful computational capacity, rapid scalability, and high cost-effectiveness. A major challenge, however, remains in that the highly confidential grid data is susceptible to potential cyberattacks when outsourced to the cloud. In this work, a security and cloud outsourcing framework is developed for the Economic Dispatch (ED) linear programming application. The security framework transforms the ED linear program into a confidentiality-preserving linear program that masks both the data and the problem structure, thus enabling secure outsourcing to the cloud. Results show that for large grid test cases the performance gains and costs outperform the in-house infrastructure.
Security and Cloud Outsourcing Framework for Economic Dispatch
Sarker, Mushfiqur R.; Wang, Jianhui; Li, Zuyi; ...
2017-04-24
Scalable DB+IR Technology: Processing Probabilistic Datalog with HySpirit.
Frommholz, Ingo; Roelleke, Thomas
2016-01-01
Probabilistic Datalog (PDatalog, proposed in 1995) is a probabilistic variant of Datalog and an elegant conceptual framework for modelling Information Retrieval in a logical, rule-based programming paradigm. Making PDatalog work in real-world applications requires more than probabilistic facts and rules and the semantics associated with the evaluation of programs. We report in this paper some of the key features of the HySpirit system required to scale the execution of PDatalog programs. Firstly, there is the requirement to express probability estimation in PDatalog. Secondly, fuzzy-like predicates are required to model vague predicates (e.g. a vague match of attributes such as age or price). Thirdly, to handle large data sets there are scalability issues to be addressed, and therefore HySpirit provides probabilistic relational indexes and parallel and distributed processing. The main contribution of this paper is a consolidated view of the methods of the HySpirit system that make PDatalog applicable in real-scale applications involving a wide range of requirements typical for data (information) management and analysis.
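A toy sketch of the Probabilistic Datalog idea: facts carry probabilities, and a rule's conclusion combines them under an independence assumption. The predicates below are invented for illustration; HySpirit's actual evaluation machinery (probabilistic relational indexes, distributed processing) is far more involved:

```python
# Toy PDatalog-style evaluation: probabilistic facts plus one rule.
# Fact probabilities and predicate names are hypothetical.

facts = {
    ("about", "doc1", "retrieval"): 0.8,
    ("about", "doc1", "databases"): 0.5,
}

def prob_and(p, q):
    """P(A and B) for independent events."""
    return p * q

def relevant(doc, term1, term2):
    """Rule: relevant(D) :- about(D, T1) & about(D, T2)."""
    p1 = facts.get(("about", doc, term1), 0.0)
    p2 = facts.get(("about", doc, term2), 0.0)
    return prob_and(p1, p2)
```

Ranking documents by the probability of a derived goal is what makes the paradigm a natural fit for Information Retrieval, where results are ordered by estimated relevance rather than filtered by exact match.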
Cardiovascular responses to static exercise in distance runners and weight lifters
NASA Technical Reports Server (NTRS)
Longhurst, J. C.; Kelly, A. R.; Gonyea, W. J.; Mitchell, J. H.
1980-01-01
Three groups of athletes including long-distance runners, competitive and amateur weight lifters, and age- and sex-matched control subjects have been studied by hemodynamic and echocardiographic methods in order to determine the effect of the training programs on the cardiovascular response to static exercise. Blood pressure, heart rate, and double product data at rest and at fatigue suggest that competitive endurance (dynamic exercise) training alters the cardiovascular response to static exercise. In contrast to endurance exercise, weight lifting (static exercise) training does not alter the cardiovascular response to static exercise: weight lifters responded to static exercise in a manner very similar to that of the control subjects.
Scalability of dark current in silicon PIN photodiode
NASA Astrophysics Data System (ADS)
Feng, Ya-Jie; Li, Chong; Liu, Qiao-Li; Wang, Hua-Qiang; Hu, An-Qi; He, Xiao-Ying; Guo, Xia
2018-04-01
Abstract not available. Project supported by the National Key Research and Development Program of China (Grant No. 2017YFF0104801) and the National Natural Science Foundation of China (Grant Nos. 61335004, 61675046, and 61505003).
Electrical control of a solid-state flying qubit.
Yamamoto, Michihisa; Takada, Shintaro; Bäuerle, Christopher; Watanabe, Kenta; Wieck, Andreas D; Tarucha, Seigo
2012-03-18
Solid-state approaches to quantum information technology are attractive because they are scalable. The coherent transport of quantum information over large distances is a requirement for any practical quantum computer and has been demonstrated by coupling superconducting qubits to photons. Single electrons have also been transferred between distant quantum dots in times shorter than their spin coherence time. However, until now, there have been no demonstrations of scalable 'flying qubit' architectures in solid-state systems: architectures in which it is possible to perform quantum operations on qubits while they are being coherently transferred. These architectures allow for control over qubit separation and for non-local entanglement, which makes them more amenable to integration and scaling than static qubit approaches. Here, we report the transport and manipulation of qubits over distances of 6 µm within 40 ps, in an Aharonov-Bohm ring connected to two-channel wires that have a tunable tunnel coupling between channels. The flying qubit state is defined by the presence of a travelling electron in either channel of the wire, and can be controlled without a magnetic field. Our device has shorter quantum gates (<1 µm), longer coherence lengths (∼86 µm at 70 mK) and higher operating frequencies (∼100 GHz) than other solid-state implementations of flying qubits.
Nemati, Shiva; Abbasalizadeh, Saeed; Baharvand, Hossein
2016-01-01
Recent advances in neural differentiation technology have paved the way to generate clinical-grade neural progenitor populations from human pluripotent stem cells. These cells are an excellent source for the production of neural cell-based therapeutic products to treat incurable central nervous system disorders such as Parkinson's disease and spinal cord injuries. This progress can be complemented by the development of robust bioprocessing technologies for large-scale expansion of clinical-grade neural progenitors under GMP conditions for promising clinical use and drug discovery applications. Here, we describe a protocol for robust, scalable expansion of human neural progenitor cells from pluripotent stem cells as 3D aggregates in a stirred suspension bioreactor. The use of this platform has enabled easy expansion of neural progenitor cells for several passages, with a fold increase of up to 4.2 over a period of 5 days, compared to a maximum 1.5-2-fold increase in adherent static culture over a one-week period. In the bioreactor culture, these cells maintained self-renewal, karyotype stability, and cloning efficiency. This approach can also be used for human neural progenitor cells derived from other sources, such as the human fetal brain.
Using Pot-Magnets to Enable Stable and Scalable Electromagnetic Tactile Displays.
Zarate, Juan Jose; Shea, Herbert
2017-01-01
We present the design, fabrication, characterization, and psychophysical testing of a scalable haptic display based on electromagnetic (EM) actuators. The display consists of a 4 × 4 array of taxels, each of which can be in a raised or a lowered position, thus generating different static configurations. One of the most challenging aspects when designing densely-packed arrays of EM actuators is obtaining large actuation forces while simultaneously generating only weak interactions between neighboring taxels. In this work, we introduce a lightweight and effective magnetic shielding architecture. The moving part of each taxel is a cylindrical permanent magnet embedded in a ferromagnetic pot, forming a pot-magnet. An array of planar microcoils attracts or repels each pot-magnet. This configuration reduces the interaction between neighboring magnets by more than one order of magnitude, while the coil/magnet interaction is only reduced by 10 percent. For 4 mm diameter pins on an 8 mm pitch, we obtained displacements of 0.55 mm and forces of 40 mN using 1.7 W. We measured the accuracy of human perception under two actuation configurations which differed in the force versus displacement curve. We obtained 91 percent of correct answers in pulling configuration and 100 percent in pushing configuration.
The Effects of Exercise on the Firing Patterns of Single Motor Units.
ERIC Educational Resources Information Center
Cracraft, Joe D.
In this study, the training effects of static and dynamic exercise programs on the firing patterns of 450 single motor units (SMU) in the human tibialis anterior muscle were investigated. In a six week program, the static group (N=5) participated in daily high intensity, short duration, isometric exercises while the dynamic group (N=5)…
A Categorization of Dynamic Analyzers
NASA Technical Reports Server (NTRS)
Lujan, Michelle R.
1997-01-01
Program analysis techniques and tools are essential to the development process because of the support they provide in detecting errors and deficiencies at different phases of development. The types of information rendered through analysis include the following: statistical measurements of code, type checks, dataflow analysis, consistency checks, test data, verification of code, and debugging information. Analyzers can be broken into two major categories: dynamic and static. Static analyzers examine programs with respect to syntax errors and structural properties. This includes gathering statistical information on program content, such as the number of lines of executable code, source lines, and cyclomatic complexity. In addition, static analyzers provide the ability to check for the consistency of programs with respect to variables. Dynamic analyzers, in contrast, are dependent on input and the execution of a program, providing the ability to find errors that cannot be detected through the use of static analysis alone. Dynamic analysis provides information on the behavior of a program rather than on its syntax. Both types of analysis detect errors in a program, but dynamic analyzers accomplish this through run-time behavior. This paper focuses on the following broad classification of dynamic analyzers: 1) Metrics; 2) Models; and 3) Monitors. Metrics are those analyzers that provide measurement. The next category, models, captures those analyzers that present the state of the program to the user at specified points in time. The last category, monitors, checks specified code based on some criteria. The paper discusses each classification and the techniques that are included under them. In addition, the role of each technique in the software life cycle is discussed.
Familiarization with the tools that measure, model and monitor programs provides a framework for understanding a program's dynamic behavior from different perspectives through analysis of the input/output data.
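A minimal sketch of the "monitor" category in Python: a run-time tracer that records which functions actually execute, information that static analysis of the source alone cannot provide. The traced program is a hypothetical example:

```python
import sys

# A tiny dynamic analyzer: install a global trace hook, run the target
# function, and record every Python-level function call that occurs.

def trace_calls(func, *args):
    called = []
    def tracer(frame, event, arg):
        if event == "call":
            called.append(frame.f_code.co_name)
        return None  # no per-line tracing needed
    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)  # always remove the hook
    return result, called

# Hypothetical program under analysis:
def helper(x):
    return x * 2

def main(x):
    return helper(x) + 1
```

Running `trace_calls(main, 5)` returns both the program's result and the list of functions that executed, illustrating how monitors depend on input and execution where static analyzers do not.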
An Element-Based Concurrent Partitioner for Unstructured Finite Element Meshes
NASA Technical Reports Server (NTRS)
Ding, Hong Q.; Ferraro, Robert D.
1996-01-01
A concurrent partitioner for partitioning unstructured finite element meshes on distributed memory architectures is developed. The partitioner uses an element-based partitioning strategy. Its main advantage over the more conventional node-based partitioning strategy is its modular programming approach to the development of parallel applications. The partitioner first partitions element centroids using a recursive inertial bisection algorithm. Elements and nodes then migrate according to the partitioned centroids, using a data request communication template for unpredictable incoming messages. Our scalable implementation is contrasted to a non-scalable implementation which is a straightforward parallelization of a sequential partitioner.
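A simplified, sequential sketch of recursive bisection of element centroids: for brevity it splits along the coordinate axis of greatest spread at the median, rather than computing the true principal inertial axis, and it omits the element/node migration step:

```python
# Simplified recursive bisection (not the paper's full algorithm): at
# each level, sort centroids along the widest coordinate axis and split
# at the median, yielding 2**depth balanced groups.

def recursive_bisection(centroids, depth):
    """Return a list of centroid groups after `depth` levels of bisection."""
    if depth == 0 or len(centroids) <= 1:
        return [centroids]
    spans = [max(c[d] for c in centroids) - min(c[d] for c in centroids)
             for d in range(len(centroids[0]))]
    axis = spans.index(max(spans))
    ordered = sorted(centroids, key=lambda c: c[axis])
    mid = len(ordered) // 2
    return (recursive_bisection(ordered[:mid], depth - 1)
            + recursive_bisection(ordered[mid:], depth - 1))

parts = recursive_bisection([(0, 0), (1, 0), (8, 0), (9, 1)], depth=1)
```

Using element centroids as the partitioning objects is what lets the real partitioner work element-wise: once centroids are assigned, elements and their nodes migrate to the owning processor.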
Static Memory Deduplication for Performance Optimization in Cloud Computing.
Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan
2017-04-27
In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand of memory capacity and subsequent increase in the energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of the response time is negligible.
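The offline page-comparison idea behind SMD can be sketched as hashing fixed-size pages of a code segment and grouping identical ones; the page size and data below are illustrative (real systems use 4 KiB pages), and the actual deduplication of VM memory is of course done by the hypervisor, not in Python:

```python
import hashlib
from collections import defaultdict

# Offline page-sharing sketch: hash each fixed-size page of a code
# segment and group offsets whose page contents are identical; every
# group with more than one member could be backed by a single shared page.

PAGE_SIZE = 16  # bytes, for illustration; real pages are 4 KiB

def deduplicate(code_segment):
    """Map page-content hashes to the offsets holding identical pages."""
    groups = defaultdict(list)
    for offset in range(0, len(code_segment), PAGE_SIZE):
        page = code_segment[offset:offset + PAGE_SIZE]
        groups[hashlib.sha256(page).hexdigest()].append(offset)
    return groups

segment = b"A" * 16 + b"B" * 16 + b"A" * 16
groups = deduplicate(segment)
shared = [offs for offs in groups.values() if len(offs) > 1]
```

Restricting comparisons to the code segment, as SMD does, keeps this scan small and safe: code pages are read-only in practice, so shared copies never need to be broken by writes.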
Static Memory Deduplication for Performance Optimization in Cloud Computing
Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan
2017-01-01
PMID: 28448434
Trade Studies of Space Launch Architectures using Modular Probabilistic Risk Analysis
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Go, Susie
2006-01-01
A top-down risk assessment in the early phases of space exploration architecture development can provide understanding and intuition of the potential risks associated with new designs and technologies. In this approach, risk analysts draw from their past experience and the heritage of similar existing systems as a source for reliability data. This top-down approach captures the complex interactions of the risk driving parts of the integrated system without requiring detailed knowledge of the parts themselves, which is often unavailable in the early design stages. Traditional probabilistic risk analysis (PRA) technologies, however, suffer several drawbacks that limit their timely application to complex technology development programs. The most restrictive of these is a dependence on static planning scenarios, expressed through fault and event trees. Fault trees incorporating comprehensive mission scenarios are routinely constructed for complex space systems, and several commercial software products are available for evaluating fault statistics. These static representations cannot capture the dynamic behavior of system failures without substantial modification of the initial tree. Consequently, the development of dynamic models using fault tree analysis has been an active area of research in recent years. This paper discusses the implementation and demonstration of dynamic, modular scenario modeling for integration of subsystem fault evaluation modules using the Space Architecture Failure Evaluation (SAFE) tool. SAFE is a C++ code that was originally developed to support NASA's Space Launch Initiative. It provides a flexible framework for system architecture definition and trade studies. SAFE supports extensible modeling of dynamic, time-dependent risk drivers of the system and functions at the level of fidelity for which design and failure data exists. The approach is scalable, allowing inclusion of additional information as detailed data becomes available.
The tool performs a Monte Carlo analysis to provide statistical estimates. Example results of an architecture system reliability study are summarized for an exploration system concept using heritage data from liquid-fueled expendable Saturn V/Apollo launch vehicles.
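The kind of Monte Carlo estimate SAFE produces can be sketched for the simplest possible case, a serial chain of subsystems; the subsystem reliabilities below are invented for illustration and have no connection to the study's heritage data:

```python
import random

# Toy Monte Carlo reliability estimate: a mission succeeds only if every
# subsystem in a serial chain works on that trial.

def simulate_missions(reliabilities, trials, seed=1):
    """Estimate mission success probability over many simulated trials."""
    rng = random.Random(seed)
    successes = sum(
        all(rng.random() < r for r in reliabilities)
        for _ in range(trials)
    )
    return successes / trials

estimate = simulate_missions([0.99, 0.98, 0.995], trials=20000)
```

For a serial chain the analytic answer is simply the product of the reliabilities; the Monte Carlo approach earns its keep when, as in SAFE, failures are dynamic and time-dependent and no closed form exists.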
Extending substructure based iterative solvers to multiple load and repeated analyses
NASA Technical Reports Server (NTRS)
Farhat, Charbel
1993-01-01
Direct solvers currently dominate commercial finite element structural software, but do not scale well in the fine granularity regime targeted by emerging parallel processors. Substructure based iterative solvers--also often called domain decomposition algorithms--lend themselves better to parallel processing, but must overcome several obstacles before earning their place in general purpose structural analysis programs. One such obstacle is the solution of systems with many or repeated right hand sides. Such systems arise, for example, in multiple load static analyses and in implicit linear dynamics computations. Direct solvers are well-suited for these problems because after the system matrix has been factored, the multiple or repeated solutions can be obtained through relatively inexpensive forward and backward substitutions. On the other hand, iterative solvers in general are ill-suited for these problems because they often must restart from scratch for every different right hand side. In this paper, we present a methodology for extending the range of applications of domain decomposition methods to problems with multiple or repeated right hand sides. Basically, we formulate the overall problem as a series of minimization problems over K-orthogonal and supplementary subspaces, and tailor the preconditioned conjugate gradient algorithm to solve them efficiently. The resulting solution method is scalable, whereas direct factorization schemes and forward and backward substitution algorithms are not. We illustrate the proposed methodology with the solution of static and dynamic structural problems, and highlight its potential to outperform forward and backward substitutions on parallel computers.
As an example, we show that for a linear structural dynamics problem with 11640 degrees of freedom, every time-step beyond time-step 15 is solved in a single iteration and consumes 1.0 second on a 32 processor iPSC-860 system; for the same problem and the same parallel processor, a pair of forward/backward substitutions at each step consumes 15.0 seconds.
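The subspace-reuse idea can be illustrated on a toy dense system. The sketch below is not the paper's domain-decomposition formulation; it only shows how K-conjugate search directions saved from one conjugate gradient solve can seed the solution of the next right hand side (matrix and vectors invented):

```python
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def matvec(A, v): return [dot(row, v) for row in A]
def axpy(a, x, y): return [a * xi + yi for xi, yi in zip(x, y)]

def cg(A, b, x0, tol=1e-10, directions=None):
    """Plain conjugate gradient; optionally records the K-conjugate search
    directions so later solves can be seeded from them."""
    x = list(x0)
    r = [bi - ci for bi, ci in zip(b, matvec(A, x))]
    p = list(r)
    iters = 0
    while dot(r, r) > tol ** 2:
        Ap = matvec(A, p)
        alpha = dot(r, r) / dot(p, Ap)
        x = axpy(alpha, p, x)
        r_new = axpy(-alpha, Ap, r)
        beta = dot(r_new, r_new) / dot(r, r)
        if directions is not None:
            directions.append((p, dot(p, Ap)))
        p = axpy(beta, p, r_new)
        r = r_new
        iters += 1
    return x, iters

def seeded_guess(A, b, directions):
    """Project the new solution onto the span of the stored K-conjugate
    directions: since p_i^T A x* = p_i^T b, the coefficients come directly
    from the new right hand side."""
    x0 = [0.0] * len(b)
    for p, pAp in directions:
        x0 = axpy(dot(p, b) / pAp, p, x0)
    return x0

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
dirs = []
x1, it1 = cg(A, [1.0, 2.0, 3.0], [0.0, 0.0, 0.0], directions=dirs)
# Second, nearby right hand side: seed from the stored directions.
x2, it2 = cg(A, [1.0, 2.0, 3.1], seeded_guess(A, [1.0, 2.0, 3.1], dirs))
```

On this tiny example the stored directions span the whole space, so the seeded solve converges in fewer iterations than the cold start, mirroring (in miniature) the single-iteration time steps reported above.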
ERIC Educational Resources Information Center
Anthony, Jason L.; Williams, Jeffrey M.; Zhang, Zhoe; Landry, Susan H.; Dunkelberger, Martha J.
2014-01-01
Research Findings: In an effort toward developing a comprehensive, effective, scalable, and sustainable early childhood education program for at-risk populations, we conducted an experimental evaluation of the value added by 2 family involvement programs to the Texas Early Education Model (TEEM). A total of 91 preschool classrooms that served…
Patents, Innovation, and the Welfare Effects of Medicare Part D*
Gailey, Adam; Lakdawalla, Darius; Sood, Neeraj
2013-01-01
Purpose: To evaluate the efficiency consequences of the Medicare Part D program. Methods: We develop and empirically calibrate a simple theoretical model to examine the static and dynamic welfare effects of Medicare Part D. Findings: We show that Medicare Part D can simultaneously reduce static deadweight loss from monopoly pricing of drugs and improve incentives for innovation. We estimate that even after excluding the insurance value of the program, the welfare gain of Medicare Part D roughly equals its social costs. The program generates $5.11 billion of annual static deadweight loss reduction, and at least $3.0 billion of annual value from extra innovation. Implications: Medicare Part D and other public prescription drug programs can be welfare-improving, even for risk-neutral and purely self-interested consumers. Furthermore, negotiation for lower branded drug prices may further increase the social return to the program. Originality: This study demonstrates that pure efficiency motives, which do not even surface in the policy debate over Medicare Part D, can nearly justify the program on their own merits. PMID:20575239
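The static deadweight-loss mechanism the authors exploit is standard monopoly theory; a sketch with invented numbers (not the paper's calibration):

```python
def monopoly_deadweight_loss(a, b, mc):
    """Static deadweight loss from monopoly pricing under linear demand
    Q = a - b*P with constant marginal cost mc (textbook model)."""
    p_m = (a / b + mc) / 2          # monopoly price (where MR = MC)
    q_m = a - b * p_m               # monopoly quantity
    q_c = a - b * mc                # competitive quantity (P = MC)
    # Area of the Harberger triangle between the two outcomes.
    return 0.5 * (p_m - mc) * (q_c - q_m)

# Hypothetical demand and cost parameters:
dwl = monopoly_deadweight_loss(a=100.0, b=1.0, mc=20.0)
```

Insurance that moves the consumer's effective price toward marginal cost shrinks this triangle, which is the static welfare gain the paper quantifies at the market level.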
Estabrooks, Paul A; Wilson, Kathryn E; McGuire, Todd J; Harden, Samantha M; Ramalingam, NithyaPriya; Schoepke, Lia; Almeida, Fabio A; Bayer, Amy L
2017-04-01
Primary care addresses obesity through physician oversight of intensive lifestyle interventions or referral to external programs with demonstrated efficacy. However, limited information exists on community program reach, effectiveness, and costs across different groups of participants. To evaluate a scalable, community weight loss program using reach, effectiveness, and cost metrics. Longitudinal pre-post quasi-experiment without control. Enrolled participants in Weigh and Win (WAW), a community-based weight loss program. A 12-month program with daily social cognitive theory-based email and/or text support, online access to health coaches, objective weight assessment through 83 community-based kiosks, and modest financial incentives to increase program reach. Number of participants, representativeness, weight loss achievement (3%, 5% of initial weight lost), and cost of implementation. A total of 40,308 adults (79% women; 73% white; BMI = 32.3 ± 7.44, age = 43.9 ± 13.1 years) enrolled in WAW. Women were more likely than men to enroll in the program and continue engagement beyond an initial weigh-in (57% vs. 53%). Based on census data, African Americans were over-represented in the sample. Among participants who engaged in the program beyond an initial weigh-in (n = 19,029), 47% and 34% of participants lost 3% and 5% of their initial body weight, respectively. The average duration for those who achieved 5% weight loss was 1.7 ± 1.3 years. African American participants were more likely to achieve 5% weight loss and remain enrolled in the program longer compared to non-African American participants (2.0 ± 1.3 vs. 1.6 ± 1.2 years). Implementation costs were $2,822,698. Cost per clinically meaningful weight loss for African Americans ($257.97/3% loss; $335.96/5% loss) was lower than that for Hispanics ($318.62; $431.10) and Caucasians ($313.65; $441.87), due to the higher success rate of that subgroup of participants. 
Weigh and Win is a scalable technology-supported and community-based weight loss program that reaches a large number of participants and may contribute to reducing health disparities.
Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis
NASA Technical Reports Server (NTRS)
Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.
2017-01-01
This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
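PRECiSA produces formally verified certificates; nothing so rigorous is attempted below. This is only an informal illustration of measuring round-off against exact rational arithmetic, together with the textbook first-order forward bound for recursive summation:

```python
from fractions import Fraction

U = 2.0 ** -53  # unit roundoff for IEEE-754 binary64

def sum_error(xs):
    """Actual round-off error of a left-to-right float summation, measured
    against exact rational arithmetic (Fraction(float) is exact)."""
    approx = 0.0
    exact = Fraction(0)
    for x in xs:
        approx += x
        exact += Fraction(x)
    return abs(Fraction(approx) - exact)

def sum_error_bound(xs):
    """First-order forward bound for recursive summation:
    |err| <= (n - 1) * u * sum(|x_i|), ignoring O(u^2) terms."""
    return (len(xs) - 1) * U * sum(abs(x) for x in xs)

# Catastrophic cancellation makes the error visible at unit scale:
xs = [0.1, 0.2, 0.3, 1e16, -1e16]
err = float(sum_error(xs))
```

Here the computed sum is 0.0 while the exact sum is 0.6, and the measured error indeed falls under the a-priori bound; a verified analysis additionally proves such bounds for all inputs rather than measuring one.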
Integrating multiple data sources for malware classification
Anderson, Blake Harrell; Storlie, Curtis B; Lane, Terran
2015-04-28
Disclosed herein are representative embodiments of tools and techniques for classifying programs. According to one exemplary technique, at least one graph representation of at least one dynamic data source of at least one program is generated. Also, at least one graph representation of at least one static data source of the at least one program is generated. Additionally, at least using the at least one graph representation of the at least one dynamic data source and the at least one graph representation of the at least one static data source, the at least one program is classified.
The Effects of Two Different Stretching Programs on Balance Control and Motor Neuron Excitability
ERIC Educational Resources Information Center
Kaya, Fatih; Biçer, Bilal; Yüktasir, Bekir; Willems, Mark E. T.; Yildiz, Nebil
2018-01-01
We examined the effects of training (4d/wk for 6 wks) with static stretching (SS) or contract-relax proprioceptive neuromuscular facilitation (PNF) on static balance time and motor neuron excitability. Static balance time, H[subscript max]/M[subscript max] ratios and H-reflex recovery curves (HRRC) were measured in 28 healthy subjects (SS: n = 10,…
Performance and scalability evaluation of "Big Memory" on Blue Gene Linux.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoshii, K.; Iskra, K.; Naik, H.
2011-05-01
We address memory performance issues observed in Blue Gene Linux and discuss the design and implementation of 'Big Memory' - an alternative, transparent memory space introduced to eliminate the memory performance issues. We evaluate the performance of Big Memory using custom memory benchmarks, NAS Parallel Benchmarks, and the Parallel Ocean Program, at a scale of up to 4,096 nodes. We find that Big Memory successfully resolves the performance issues normally encountered in Blue Gene Linux. For the ocean simulation program, we even find that Linux with Big Memory provides better scalability than does the lightweight compute node kernel designed solely for high-performance applications. Originally intended exclusively for compute node tasks, our new memory subsystem dramatically improves the performance of certain I/O node applications as well. We demonstrate this performance using the central processor of the LOw Frequency ARray radio telescope as an example.
Links, Amanda E.; Draper, David; Lee, Elizabeth; Guzman, Jessica; Valivullah, Zaheer; Maduro, Valerie; Lebedev, Vlad; Didenko, Maxim; Tomlin, Garrick; Brudno, Michael; Girdea, Marta; Dumitriu, Sergiu; Haendel, Melissa A.; Mungall, Christopher J.; Smedley, Damian; Hochheiser, Harry; Arnold, Andrew M.; Coessens, Bert; Verhoeven, Steven; Bone, William; Adams, David; Boerkoel, Cornelius F.; Gahl, William A.; Sincan, Murat
2016-01-01
The National Institutes of Health Undiagnosed Diseases Program (NIH UDP) applies translational research systematically to diagnose patients with undiagnosed diseases. The challenge is to implement an information system enabling scalable translational research. The authors hypothesized that similar complex problems are resolvable through process management and the distributed cognition of communities. The team, therefore, built the NIH UDP integrated collaboration system (UDPICS) to form virtual collaborative multidisciplinary research networks or communities. UDPICS supports these communities through integrated process management, ontology-based phenotyping, biospecimen management, cloud-based genomic analysis, and an electronic laboratory notebook. UDPICS provided a mechanism for efficient, transparent, and scalable translational research and thereby addressed many of the complex and diverse research and logistical problems of the NIH UDP. Full definition of the strengths and deficiencies of UDPICS will require formal qualitative and quantitative usability and process improvement measurement. PMID:27785453
Diskless supercomputers: Scalable, reliable I/O for the Tera-Op technology base
NASA Technical Reports Server (NTRS)
Katz, Randy H.; Ousterhout, John K.; Patterson, David A.
1993-01-01
Computing is seeing an unprecedented improvement in performance; over the last five years there has been an order-of-magnitude improvement in the speeds of workstation CPUs. At least another order of magnitude seems likely in the next five years, to machines with 500 MIPS or more. The goal of the ARPA Teraop program is to realize even larger, more powerful machines, executing as many as a trillion operations per second. Unfortunately, we have seen no comparable breakthroughs in I/O performance; the speeds of I/O devices and the hardware and software architectures for managing them have not changed substantially in many years. We have completed a program of research to demonstrate hardware and software I/O architectures capable of supporting the kinds of internetworked 'visualization' workstations and supercomputers that will appear in the mid-1990s. The project had three overall goals: high performance, high reliability, and a scalable, multipurpose system.
Proceedings of the DICE THROW Symposium 21-23 June 1977. Volume 1
1977-07-01
different scaled ANFO events to insure yield scalability. Phase 1 of the program consisted of a series of one-pound events to examine cratering and...characterization of a 500-ton-equivalent event. A large number of agencies were involved in different facets of the development program. Probably most...charge geometry observed in the 1000-pound series, supported the observations from the Phase 1 program. Differences were observed in the fireball
User document for computer programs for ring-stiffened shells of revolution
NASA Technical Reports Server (NTRS)
Cohen, G. A.
1973-01-01
A user manual and related program documentation are presented for six compatible computer programs for structural analysis of axisymmetric shell structures. The programs apply to a common structural model but analyze different modes of structural response. In particular, they are: (1) Linear static response under asymmetric loads; (2) Buckling of linear states under asymmetric loads; (3) Nonlinear static response under axisymmetric loads; (4) Buckling of nonlinear states under axisymmetric loads; (5) Imperfection sensitivity of buckling modes under axisymmetric loads; and (6) Vibrations about nonlinear states under axisymmetric loads. These programs treat branched shells of revolution with an arbitrary arrangement of a large number of open branches but with at most one closed branch.
EvoGraph: On-The-Fly Efficient Mining of Evolving Graphs on GPU
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Dipanjan; Song, Shuaiwen
With the prevalence of the World Wide Web and social networks, there has been a growing interest in high-performance analytics for constantly-evolving dynamic graphs. Modern GPUs provide a massive amount of parallelism for efficient graph processing, but challenges remain due to their lack of support for the near real-time streaming nature of dynamic graphs. Specifically, due to the current high volume and velocity of graph data combined with the complexity of user queries, traditional processing methods that first store the updates and then repeatedly run static graph analytics on a sequence of versions or snapshots are deemed undesirable and computationally infeasible on GPU. We present EvoGraph, a highly efficient and scalable GPU-based dynamic graph analytics framework.
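EvoGraph's GPU kernels are not described in this record; the incremental principle it rests on (update the analytic per edge rather than re-running a static pass per snapshot) can be sketched on the CPU with triangle counting:

```python
class StreamingGraph:
    """Maintain a graph metric incrementally under edge insertions,
    instead of recomputing it on every snapshot (illustrative only)."""
    def __init__(self):
        self.adj = {}
        self.triangles = 0

    def add_edge(self, u, v):
        nu = self.adj.setdefault(u, set())
        nv = self.adj.setdefault(v, set())
        # Each common neighbor of u and v closes exactly one new triangle,
        # so the count is updated in O(min degree) per edge.
        self.triangles += len(nu & nv)
        nu.add(v)
        nv.add(u)

g = StreamingGraph()
for e in [(1, 2), (2, 3), (1, 3), (1, 4), (2, 4)]:
    g.add_edge(*e)
```

After the stream above the graph contains the triangles {1,2,3} and {1,2,4}; a snapshot-based approach would have re-scanned the whole graph five times to learn the same thing.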
Effects of Ordering Strategies and Programming Paradigms on Sparse Matrix Computations
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Li, Xiaoye; Husbands, Parry; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2002-01-01
The Conjugate Gradient (CG) algorithm is perhaps the best-known iterative technique for solving sparse linear systems that are symmetric and positive definite. For systems that are ill-conditioned, it is often necessary to use a preconditioning technique. In this paper, we investigate the effects of various ordering and partitioning strategies on the performance of parallel CG and ILU(0)-preconditioned CG (PCG) using different programming paradigms and architectures. Results show that for this class of applications: ordering significantly improves overall performance on both distributed and distributed shared-memory systems; cache reuse may be more important than reducing communication; it is possible to achieve message-passing performance using shared-memory constructs through careful data ordering and distribution; and a hybrid MPI+OpenMP paradigm increases programming complexity with little performance gain. An implementation of CG on the Cray MTA does not require special ordering or partitioning to obtain high efficiency and scalability, giving it a distinct advantage for adaptive applications; however, it shows limited scalability for PCG due to a lack of thread-level parallelism.
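One classic ordering strategy of the kind studied here is Cuthill-McKee. A toy sketch (invented graph and labels, and omitting the usual "reverse" step) showing how reordering shrinks matrix bandwidth, which is one way ordering improves cache reuse:

```python
from collections import deque

def bandwidth(adj, order):
    """Matrix bandwidth induced by a vertex ordering: the maximum
    |position(u) - position(v)| over all edges (u, v)."""
    pos = {v: k for k, v in enumerate(order)}
    return max(abs(pos[u] - pos[v]) for u in adj for v in adj[u])

def cuthill_mckee(adj, start):
    """Simple Cuthill-McKee: breadth-first traversal from a peripheral
    vertex, visiting neighbors in order of increasing degree."""
    order, seen = [start], {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in sorted(adj[u], key=lambda w: len(adj[w])):
            if v not in seen:
                seen.add(v)
                order.append(v)
                queue.append(v)
    return order

# A path graph with scrambled labels: natural order has bandwidth 3.
adj = {0: {3}, 3: {0, 1}, 1: {3, 4}, 4: {1, 2}, 2: {4, 5}, 5: {2}}
order = cuthill_mckee(adj, 0)
```

Relabeling along the traversal recovers the path structure and drops the bandwidth to 1, so the matrix nonzeros cluster near the diagonal.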
Geospatial considerations for a multiorganizational, landscape-scale program
O'Donnell, Michael S.; Assal, Timothy J.; Anderson, Patrick J.; Bowen, Zachary H.
2013-01-01
Geospatial data play an increasingly important role in natural resources management, conservation, and science-based projects. The management and effective use of spatial data becomes significantly more complex when the efforts involve a myriad of landscape-scale projects combined with a multiorganizational collaboration. There is sparse literature to guide users on this daunting subject; therefore, we present a framework of considerations for working with geospatial data that will provide direction to data stewards, scientists, collaborators, and managers for developing geospatial management plans. The concepts we present apply to a variety of geospatial programs or projects, which we describe as a “scalable framework” of processes for integrating geospatial efforts with management, science, and conservation initiatives. Our framework includes five tenets of geospatial data management: (1) the importance of investing in data management and standardization, (2) the scalability of content/efforts addressed in geospatial management plans, (3) the lifecycle of a geospatial effort, (4) a framework for the integration of geographic information systems (GIS) in a landscape-scale conservation or management program, and (5) the major geospatial considerations prior to data acquisition. We conclude with a discussion of future considerations and challenges.
Vacuum Deployment and Testing of a 4-Quadrant Scalable Inflatable Solar Sail System
NASA Technical Reports Server (NTRS)
Lichodziejewski, David; Derbes, Billy; Galena, Daisy; Friese, Dave
2005-01-01
Solar sails reflect photons streaming from the sun and transfer momentum to the sail. The thrust, though small, is continuous and acts for the life of the mission without the need for propellant. Recent advances in materials and ultra-low mass gossamer structures have enabled a host of useful missions utilizing solar sail propulsion. The team of L'Garde, the Jet Propulsion Laboratory, Ball Aerospace, and Langley Research Center, under the direction of the NASA In-Space Propulsion office, has been developing a scalable solar sail configuration to address NASA's future space propulsion needs. The baseline design currently in development and testing was optimized around the 1 AU solar sentinel mission. Featuring inflatably deployed, sub-T(sub g) rigidized beam components, the 10,000 sq m sail and support structure weighs only 47.5 kg, including margin, yielding an areal density of 4.8 g/sq m. Striped sail architecture, net/membrane sail design, and L'Garde's conical boom deployment technique allow scalability without high mass penalties. This same structural concept can be scaled to meet and exceed the requirements of a number of other useful NASA missions. This paper discusses the interim accomplishments of phase 3 of a 3-phase NASA program to advance the technology readiness level (TRL) of the solar sail system from 3 toward 6 in 2005. Under earlier phases of the program many test articles were fabricated and tested successfully. Most notably, an unprecedented 4-quadrant 10 m solar sail ground test article was fabricated, subjected to launch environment tests, and successfully deployed under simulated space conditions at NASA Plum Brook's 30 m vacuum facility. Phase 2 of the program saw much development and testing of this design, validating assumptions, mass estimates, and predicted mission scalability. Under Phase 3 a much larger 20 m square test article including a subscale vane has been fabricated and tested.
A 20 m system ambient deployment has been successfully conducted after enduring Delta-2 launch environment testing. The program will culminate in a vacuum deployment of a 20 m subscale test article at NASA Glenn's Plum Brook 30 m vacuum test facility to bring the TRL as close to 6 as possible in 1 g. This focused program will pave the way for a flight experiment of this highly efficient space propulsion technology.
Method for Statically Checking an Object-oriented Computer Program Module
NASA Technical Reports Server (NTRS)
Bierhoff, Kevin M. (Inventor); Aldrich, Jonathan (Inventor)
2012-01-01
A method for statically checking an object-oriented computer program module includes the step of identifying objects within a computer program module, at least one of the objects having a plurality of references thereto, possibly from multiple clients. A discipline of permissions is imposed on the objects identified within the computer program module. The permissions enable tracking, from among a discrete set of changeable states, a subset of states each object might be in. A determination is made regarding whether the imposed permissions are violated by a potential reference to any of the identified objects. The results of the determination are output to a user.
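The patent describes a static discipline of permissions; as a rough runtime analogue (hypothetical states and operations, not the inventors' formalism), one can track the subset of states an object might be in and flag any operation the current state set does not permit:

```python
class TrackedFile:
    """Illustrative typestate tracking: the object's possible states are
    kept as a set drawn from a discrete state space, and each operation
    declares which states permit it."""
    def __init__(self):
        self.states = {"closed"}   # states the object might currently be in

    def _require(self, allowed, op):
        # A violation occurs if any possible state is outside the
        # permitted set - the analogue of the static check failing.
        if not self.states <= allowed:
            raise RuntimeError(f"{op} not permitted in states {self.states - allowed}")

    def open(self):
        self._require({"closed"}, "open")
        self.states = {"open"}

    def read(self):
        self._require({"open"}, "read")
        self.states = {"open", "eof"}  # a read may or may not hit end-of-file

    def close(self):
        self._require({"open", "eof"}, "close")
        self.states = {"closed"}
```

A static checker performs this bookkeeping over all program paths at compile time, including aliasing through multiple references, rather than on one execution as here.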
Efficient and Scalable Graph Similarity Joins in MapReduce
Chen, Yifan; Zhang, Weiming; Tang, Jiuyang
2014-01-01
Along with the emergence of massive graph-modeled data, it is of great importance to investigate graph similarity joins due to their wide applications for multiple purposes, including data cleaning, and near duplicate detection. This paper considers graph similarity joins with edit distance constraints, which return pairs of graphs such that their edit distances are no larger than a given threshold. Leveraging the MapReduce programming model, we propose MGSJoin, a scalable algorithm following the filtering-verification framework for efficient graph similarity joins. It relies on counting overlapping graph signatures for filtering out nonpromising candidates. With the potential issue of too many key-value pairs in the filtering phase, spectral Bloom filters are introduced to reduce the number of key-value pairs. Furthermore, we integrate the multiway join strategy to boost the verification, where a MapReduce-based method is proposed for GED calculation. The superior efficiency and scalability of the proposed algorithms are demonstrated by extensive experimental results. PMID:25121135
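MGSJoin uses spectral Bloom filters; the sketch below is only a plain Bloom filter (invented parameters) showing the basic filtering contract that makes it safe for candidate pruning: no false negatives, occasional false positives:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash probes into an m-bit array. Membership
    tests can yield false positives but never false negatives, so a join
    can discard pairs whose signatures are definitely absent."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8 + 1)

    def _probes(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._probes(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._probes(item))

bf = BloomFilter()
for sig in ["sig-a", "sig-b", "sig-c"]:   # hypothetical graph signatures
    bf.add(sig)
```

In the MapReduce setting the win is that a compact bit array replaces many raw signature key-value pairs, shrinking shuffle traffic during the filtering phase.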
Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R; Bock, Davi D; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R Clay; Smith, Stephen J; Szalay, Alexander S; Vogelstein, Joshua T; Vogelstein, R Jacob
2013-01-01
We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems (reads to parallel disk arrays, writes to solid-state storage) to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization.
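The abstract says data are distributed by partitioning a spatial index. One common realization, assumed here purely for illustration (the record does not specify their scheme), is a Z-order/Morton key whose range is split across cluster nodes so spatially nearby blocks land on the same node:

```python
def morton3(x, y, z, bits=10):
    """Interleave the bits of three voxel coordinates into one Z-order
    (Morton) key, so spatially nearby blocks get nearby keys."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

def node_for(x, y, z, n_nodes, bits=10):
    """Assign a block to a cluster node by partitioning the key range
    evenly (a real system would balance by data volume)."""
    return morton3(x, y, z, bits) * n_nodes // (1 << (3 * bits))
```

Because a 3-d neighborhood maps to a few contiguous key intervals, a spatial cutout query touches few nodes, which is the point of partitioning by a spatial index rather than by hash.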
Scalable real space pseudopotential density functional codes for materials in the exascale regime
NASA Astrophysics Data System (ADS)
Lena, Charles; Chelikowsky, James; Schofield, Grady; Biller, Ariel; Kronik, Leeor; Saad, Yousef; Deslippe, Jack
Real-space pseudopotential density functional theory has proven to be an efficient method for computing the properties of matter in many different states and geometries, including liquids, wires, slabs, and clusters with and without spin polarization. Fully self-consistent solutions using this approach have been routinely obtained for systems with thousands of atoms. Yet, there are many systems of notably larger size where quantum mechanical accuracy is desired, but scalability proves to be a hindrance. Such systems include large biological molecules, complex nanostructures, or mismatched interfaces. We will present an overview of our new massively parallel algorithms, which offer improved scalability in preparation for exascale supercomputing. We will illustrate these algorithms by considering the electronic structure of a Si nanocrystal exceeding 10^4 atoms. Support provided by the SciDAC program, Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences. Grant Numbers DE-SC0008877 (Austin) and DE-FG02-12ER4 (Berkeley).
SP-100 - The national space reactor power system program in response to future needs
NASA Astrophysics Data System (ADS)
Armijo, J. S.; Josloff, A. T.; Bailey, H. S.; Matteo, D. N.
The SP-100 system has been designed to meet comprehensive and demanding NASA/DOD/DOE requirements. The key requirements include: nuclear safety for all mission phases, scalability from 10's to 100's of kWe, reliable performance at full power for seven years or partial power for ten years, survivability in civil or military threat environments, capability to operate autonomously for up to six months, capability to protect payloads from excessive radiation, and compatibility with shuttle and expendable launch vehicles. The authors address major progress in terms of design, flexibility/scalability, survivability, and development. These areas, with the exception of survivability, are discussed in detail. There has been significant improvement in the generic flight system design, with substantial mass savings and simplification that enhance performance and reliability. Design activity has confirmed the scalability and flexibility of the system and the ability to efficiently meet NASA, AF, and SDIO needs. SP-100 development continues to make significant progress in all key technology areas.
Sustainability and scalability of the hospital elder life program at a community hospital.
Rubin, Fred H; Neal, Kelly; Fenlon, Kerry; Hassan, Shuja; Inouye, Sharon K
2011-02-01
The Hospital Elder Life Program (HELP), an effective intervention to prevent delirium in older hospitalized adults, has been successfully replicated in a community teaching hospital as a quality improvement project. This article reports on successfully sustaining the program over 7 years and expanding its scale from one to six inpatient units at the same hospital. The program currently serves more than 7,000 older patients annually and is accepted as the standard of care throughout the hospital. Innovations that enhanced scalability and widespread implementation included ensuring dedicated staffing for the program, local adaptations to streamline protocols, continuous recruitment of volunteers, and more-efficient data collection. Outcomes include a lower rate of incident delirium; shorter length of stay (LOS); greater satisfaction of patients, families, and nursing staff; and significantly lower costs for the hospital. The financial return of the program, estimated at more than $7.3 million per year during 2008, comprises cost savings from delirium prevention and revenue generated from freeing up hospital beds (shorter LOS of HELP patients with and without delirium). Delirium poses a major challenge for hospital quality of care, patient safety, Medicare no-pay conditions, and costs of hospital care for older persons. Faced with rising numbers of elderly patients, hospitals can use HELP to improve the quality and cost-effectiveness of care. © 2011, Copyright the Authors. Journal compilation © 2011, The American Geriatrics Society.
NASA Technical Reports Server (NTRS)
Svalbonas, V.
1973-01-01
The theoretical analysis background for the STARS-2 (shell theory automated for rotational structures) program is presented. The theory involved in the axisymmetric nonlinear and unsymmetric linear static analyses, and in the stability and vibration (including critical rotation speed) analyses involving axisymmetric prestress, is discussed. The theory for nonlinear static, stability, and vibration analyses involving shells with unsymmetric loadings is also included.
User's Manual for Aerofcn: a FORTRAN Program to Compute Aerodynamic Parameters
NASA Technical Reports Server (NTRS)
Conley, Joseph L.
1992-01-01
The computer program AeroFcn is discussed. AeroFcn is a utility program that computes the following aerodynamic parameters: geopotential altitude, Mach number, true velocity, dynamic pressure, calibrated airspeed, equivalent airspeed, impact pressure, total pressure, total temperature, Reynolds number, speed of sound, static density, static pressure, static temperature, coefficient of dynamic viscosity, kinematic viscosity, geometric altitude, and specific energy for a standard- or a modified standard-day atmosphere using compressible flow and normal shock relations. Any two parameters that define a unique flight condition are selected, and their values are entered interactively. The remaining parameters are computed, and the solutions are stored in an output file. Multiple cases can be run, and the multiple case solutions can be stored in another output file for plotting. Parameter units, the output format, and primary constants in the atmospheric and aerodynamic equations can also be changed.
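A few of AeroFcn's outputs can be reproduced from standard-atmosphere and compressible-flow relations. A sketch using ISA troposphere constants, assuming geopotential altitude and true airspeed as the two defining parameters (the function and constant names are mine, not AeroFcn's):

```python
import math

# ISA sea-level constants, valid in the troposphere (below 11 km):
T0, P0, L = 288.15, 101325.0, 0.0065   # K, Pa, K/m (temperature lapse rate)
G, R, GAMMA = 9.80665, 287.053, 1.4    # m/s^2, J/(kg K), ratio of specific heats

def flight_params(altitude_m, true_airspeed_ms):
    """Recover several of the reported parameters from one valid pair of
    defining parameters, for a standard-day atmosphere."""
    T = T0 - L * altitude_m                      # static temperature
    p = P0 * (T / T0) ** (G / (L * R))           # static pressure
    rho = p / (R * T)                            # static density (ideal gas)
    a = math.sqrt(GAMMA * R * T)                 # speed of sound
    mach = true_airspeed_ms / a
    q = 0.5 * rho * true_airspeed_ms ** 2        # dynamic pressure
    return {"T": T, "p": p, "rho": rho, "a": a, "mach": mach, "q": q}

params = flight_params(10000.0, 250.0)
```

At 10 km and 250 m/s this gives roughly T = 223 K, p = 26.4 kPa, and Mach 0.83, matching standard-atmosphere tables; the real program also handles modified-day atmospheres and normal-shock quantities such as impact and total pressure.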
NASA Astrophysics Data System (ADS)
Sastry, Kumara Narasimha
2007-03-01
Effective and efficient multiscale modeling is essential to advance both the science and synthesis in a wide array of fields such as physics, chemistry, materials science, biology, biotechnology and pharmacology. This study investigates the efficacy and potential of using genetic algorithms for multiscale materials modeling and addresses some of the challenges involved in designing competent algorithms that solve hard problems quickly, reliably and accurately. In particular, this thesis demonstrates the use of genetic algorithms (GAs) and genetic programming (GP) in multiscale modeling with the help of two non-trivial case studies in materials science and chemistry. The first case study explores the utility of genetic programming (GP) in multi-timescaling alloy kinetics simulations. In essence, GP is used to bridge molecular dynamics and kinetic Monte Carlo methods to span orders-of-magnitude in simulation time. Specifically, GP is used to symbolically regress an in-line barrier function from a limited set of molecular dynamics simulations to enable kinetic Monte Carlo simulations that reach seconds of real time. Results on a non-trivial example of vacancy-assisted migration on a surface of a face-centered cubic (fcc) copper-cobalt (CuxCo1-x) alloy show that GP predicts all barriers within 0.1% error from calculations for less than 3% of active configurations, independent of the type of potentials used to obtain the learning set of barriers via molecular dynamics. The resulting method enables a 2--9 orders-of-magnitude increase in real-time dynamics simulations while taking 4--7 orders-of-magnitude less CPU time. The second case study presents the application of multiobjective genetic algorithms (MOGAs) in multiscaling quantum chemistry simulations. Specifically, MOGAs are used to bridge high-level quantum chemistry and semiempirical methods to provide accurate representation of complex molecular excited-state and ground-state behavior. 
Results on ethylene and benzene---two common building blocks in organic chemistry---indicate that MOGAs produce high-quality semiempirical methods that (1) are stable to small perturbations, (2) yield accurate configuration energies on untested and critical excited states, and (3) yield ab initio quality excited-state dynamics. The proposed method enables simulations of more complex systems to realistic, multi-picosecond timescales, well beyond previous attempts or the expectations of human experts, with a 2--3 orders-of-magnitude reduction in computational cost. While the two applications use simple evolutionary operators, in order to tackle more complex systems, their scalability and limitations have to be investigated. The second part of the thesis addresses some of the challenges involved in a successful design of genetic algorithms and genetic programming for multiscale modeling. The first issue addressed is the scalability of genetic programming, where facetwise models are built to assess the population size required by GP to ensure an adequate supply of raw building blocks and accurate decision-making between competing building blocks. This study also presents a design of competent genetic programming, where traditional fixed recombination operators are replaced by building and sampling probabilistic models of promising candidate programs. The proposed scalable GP, called extended compact GP (eCGP), combines ideas from the extended compact genetic algorithm (eCGA) and probabilistic incremental program evolution (PIPE) and adaptively identifies, propagates and exchanges important subsolutions of a search problem. Results show that eCGP scales cubically with problem size on both GP-easy and GP-hard problems. Finally, facetwise models are developed to explore the limits of scalability of MOGAs, where the scalability of multiobjective algorithms in reliably maintaining Pareto-optimal solutions is addressed. 
The results show that even when the building blocks are accurately identified, massive multimodality of the search problems can easily overwhelm the nicher (the diversity-preserving operator) and lead to exponential scale-up. Facetwise models are developed, which incorporate the combined effects of model accuracy, decision making, and sub-structure supply, as well as the effect of niching on the population sizing, to predict a limit on the maximum number of sub-structures that can compete in the two objectives without the niching method failing. The results show that if the number of competing building blocks between the multiple objectives is less than the proposed limit, multiobjective GAs scale up polynomially with the problem size on boundedly difficult problems.
ERIC Educational Resources Information Center
Orey, Michael; Koenecke, Lynne; Snider, Richard C.; Perkins, Ross A.; Holmes, Glen A.; Lockee, Barbara B.; Moller, Leslie A.; Harvey, Douglas; Downs, Margaret; Godshalk, Veronica M.
2003-01-01
Contains four articles covering trends and issues on distance learning including: the experience of two learners learning via the Internet; a systematic approach to determining the scalability of a distance education program; identifying factors that affect learning community development and performance in asynchronous distance education; and…
GoFFish: A Sub-Graph Centric Framework for Large-Scale Graph Analytics1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simmhan, Yogesh; Kumbhare, Alok; Wickramaarachchi, Charith
2014-08-25
Large scale graph processing is a major research area for Big Data exploration. Vertex centric programming models like Pregel are gaining traction due to their simple abstraction that naturally allows for scalable execution on distributed systems. However, there are limitations to this approach that cause vertex centric algorithms to under-perform, due to a poor compute-to-communication ratio and the slow convergence of iterative supersteps. In this paper we introduce GoFFish, a scalable sub-graph centric framework co-designed with a distributed persistent graph storage for large scale graph analytics on commodity clusters. We introduce a sub-graph centric programming abstraction that combines the scalability of a vertex centric approach with the flexibility of shared memory sub-graph computation. We map Connected Components, SSSP and PageRank algorithms to this model to illustrate its flexibility. Further, we empirically analyze GoFFish using several real world graphs and demonstrate its significant performance improvement, orders of magnitude in some cases, compared to Apache Giraph, the leading open source vertex centric implementation.
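A minimal single-process sketch of the sub-graph centric idea: each partition resolves its local connected components wholesale (a full traversal per superstep, rather than one vertex-hop per superstep), and only cross-partition edges drive the merge phase. The function names and two-phase structure here are illustrative, not GoFFish's API:

```python
from collections import defaultdict

def local_components(vertices, edges):
    """Traverse a partition's subgraph once; label every vertex with the
    minimum vertex id reachable inside the partition."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    label, seen = {}, set()
    for s in vertices:
        if s in seen:
            continue
        stack, comp = [s], [s]
        seen.add(s)
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
                    comp.append(y)
        m = min(comp)
        for x in comp:
            label[x] = m
    return label

def subgraph_cc(partitions, cross_edges):
    # Phase 1: each partition computes components locally (parallelizable).
    label = {}
    for vertices, edges in partitions:
        label.update(local_components(vertices, edges))
    # Phase 2: merge local labels across cross-partition edges (union-find).
    parent = {l: l for l in set(label.values())}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in cross_edges:
        ru, rv = find(label[u]), find(label[v])
        if ru != rv:
            parent[max(ru, rv)] = min(ru, rv)
    return {v: find(l) for v, l in label.items()}
```

Only the (typically few) cut edges participate in the communication-bearing phase, which is the source of the improved compute-to-communication ratio the paper describes.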
Scalable Rapidly Deployable Convex Optimization for Data Analytics
Over the period of the contract we have developed the full stack for wide use of convex optimization, in machine learning and many other areas. This includes SOCPs, SDPs, exponential cone programs, and power cone programs. CVXPY supports basic methods for distributed optimization on multiple heterogeneous platforms. We have also done basic research in various application areas, using CVXPY, to demonstrate its usefulness. See attached report for publication information.
NASA Technical Reports Server (NTRS)
Laird, Philip
1992-01-01
We distinguish static and dynamic optimization of programs: whereas static optimization modifies a program before runtime and is based only on its syntactical structure, dynamic optimization is based on the statistical properties of the input source and examples of program execution. Explanation-based generalization is a commonly used dynamic optimization method, but its effectiveness as a speedup-learning method is limited, in part because it fails to separate the learning process from the program transformation process. This paper describes a dynamic optimization technique called a learn-optimize cycle that first uses a learning element to uncover predictable patterns in the program execution and then uses an optimization algorithm to map these patterns into beneficial transformations. The technique has been used successfully for dynamic optimization of pure Prolog.
Regression Verification Using Impact Summaries
NASA Technical Reports Server (NTRS)
Backes, John; Person, Suzette J.; Rungta, Neha; Thachuk, Oksana
2013-01-01
Regression verification techniques are used to prove equivalence of syntactically similar programs. Checking equivalence of large programs, however, can be computationally expensive. Existing regression verification techniques rely on abstraction and decomposition techniques to reduce the computational effort of checking equivalence of the entire program. These techniques are sound but not complete. In this work, we propose a novel approach to improve scalability of regression verification by classifying the program behaviors generated during symbolic execution as either impacted or unimpacted. Our technique uses a combination of static analysis and symbolic execution to generate summaries of impacted program behaviors. The impact summaries are then checked for equivalence using an off-the-shelf decision procedure. We prove that our approach is both sound and complete for sequential programs, with respect to the depth bound of symbolic execution. Our evaluation on a set of sequential C artifacts shows that reducing the size of the summaries can help reduce the cost of software equivalence checking. Various reduction, abstraction, and compositional techniques have been developed to help scale software verification techniques to industrial-sized systems. Although such techniques have greatly increased the size and complexity of systems that can be checked, analysis of large software systems remains costly. Regression analysis techniques, e.g., regression testing [16], regression model checking [22], and regression verification [19], restrict the scope of the analysis by leveraging the differences between program versions. These techniques are based on the idea that if code is checked early in development, then subsequent versions can be checked against a prior (checked) version, leveraging the results of the previous analysis to reduce the analysis cost of the current version. 
Regression verification addresses the problem of proving equivalence of closely related program versions [19]. These techniques compare two programs with a large degree of syntactic similarity to prove that portions of one program version are equivalent to the other. Regression verification can be used for guaranteeing backward compatibility, and for showing behavioral equivalence in programs with syntactic differences, e.g., when a program is refactored to improve its performance, maintainability, or readability. Existing regression verification techniques leverage similarities between program versions by using abstraction and decomposition techniques to improve scalability of the analysis [10, 12, 19]. The abstractions and decomposition in these techniques, e.g., summaries of unchanged code [12] or semantically equivalent methods [19], compute an over-approximation of the program behaviors. The equivalence checking results of these techniques are sound but not complete; they may characterize programs as not functionally equivalent when, in fact, they are equivalent. In this work we describe a novel approach that leverages the impact of the differences between two programs for scaling regression verification. We partition the program behaviors of each version into (a) behaviors impacted by the changes and (b) behaviors not impacted (unimpacted) by the changes. Only the impacted program behaviors are used during equivalence checking. We then prove that checking equivalence of the impacted program behaviors is equivalent to checking equivalence of all program behaviors for a given depth bound. In this work we use symbolic execution to generate the program behaviors and leverage control- and data-dependence information to facilitate the partitioning of program behaviors. The impacted program behaviors are termed impact summaries. 
The dependence analyses that facilitate the generation of the impact summaries could, we believe, be used in conjunction with other abstraction- and decomposition-based approaches [10, 12] as a complementary reduction technique. An evaluation of our regression verification technique shows that our approach is capable of leveraging similarities between program versions to reduce the size of the queries and the time required to check for logical equivalence. The main contributions of this work are: - A regression verification technique to generate impact summaries that can be checked for functional equivalence using an off-the-shelf decision procedure. - A proof that our approach is sound and complete with respect to the depth bound of symbolic execution. - An implementation of our technique using the LLVM compiler infrastructure, the klee Symbolic Virtual Machine [4], and a variety of Satisfiability Modulo Theory (SMT) solvers, e.g., STP [7] and Z3 [6]. - An empirical evaluation on a set of C artifacts which shows that the use of impact summaries can reduce the cost of regression verification.
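The partition into impacted and unimpacted behaviors can be illustrated with a deliberately tiny stand-in, in which brute-force input enumeration replaces symbolic execution and the impact classifier is hard-coded rather than derived from dependence analysis; everything here is a toy, not the paper's implementation:

```python
def v1(x):
    if x > 10:
        return x * 2       # the region later edited in v2
    return x + 1           # unchanged region

def v2(x):
    if x > 10:
        return 2 * x       # a semantics-preserving refactoring
    return x + 1           # syntactically identical to v1

def impacted(x):
    """Stand-in for the static change-impact analysis: an input is
    impacted iff its execution reaches the edited branch."""
    return x > 10

def equivalent_on_impacted(inputs):
    # Equivalence need only be checked on impacted behaviors; unimpacted
    # paths are syntactically identical in both versions by construction.
    return all(v1(x) == v2(x) for x in inputs if impacted(x))
```

For inputs 0..20, only the ten inputs reaching the edited branch need checking, yet the verdict (equivalent) holds for the whole input set, mirroring the paper's soundness-and-completeness claim in miniature.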
Vazini Taher, Amir; Parnow, Abdolhossein
2017-05-01
Different methods of warm-up may have implications for improving various aspects of soccer performance. The present study aimed to investigate the acute effects of soccer-specific warm-up protocols on functional performance tests. This study, using a randomized within-subject design, investigated the performance of 22 collegiate elite soccer players following soccer-specific warm-ups using dynamic stretching, static stretching, and the FIFA 11+ program. Post-warm-up examinations consisted of: 1) Illinois Agility Test; 2) vertical jump; 3) 30-meter sprint; 4) consecutive turns; 5) knee flexibility. Vertical jump performance was significantly lower following static stretching, as compared to dynamic stretching (P=0.005). Sprint performance declined significantly following static stretching as compared to FIFA 11+ (P=0.023). Agility time was significantly faster following dynamic stretching as compared to FIFA 11+ (P=0.001) and static stretching (P=0.001). Knee flexibility scores were significantly improved following static stretching as compared to dynamic stretching (P=0.016). No significant difference was observed for consecutive turns among the three warm-up protocols. The present findings showed that a soccer-specific warm-up protocol based on dynamic stretching is preferable for enhancing performance as compared to protocols relying on static stretches and the FIFA 11+ program. The investigators suggest that while different soccer-specific warm-up protocols have varied effects on performance, the acute benefits of dynamic stretching on performance in elite soccer players are assured; the application of static stretching for reducing muscle stiffness is, however, also demonstrated.
A Dynamic Framework for Water Security
NASA Astrophysics Data System (ADS)
Srinivasan, Veena; Konar, Megan; Sivapalan, Murugesu
2017-04-01
Water security is a multi-faceted problem, going beyond mere balancing of supply and demand. Conventional attempts to quantify water security rely on static indices defined at a particular place and point in time. While these are simple and scalable, they lack predictive or explanatory power. 1) Most static indices focus on specific spatial scales and largely ignore cross-scale feedbacks between human and water systems. 2) They fail to account for the increasing spatial specialization in the modern world - some regions are cities, others are agricultural breadbaskets; so water security means different things in different places. Human adaptation to environmental change necessitates a dynamic view of water security. We present a framework that defines water security as an emergent outcome of a coupled socio-hydrologic system. Over the medium term (5-25 years), water security models might hold governance, culture and infrastructure constant, but allow humans to respond to changes and thus predict how water security would evolve. But over very long time-frames (25-100 years), a society's values, norms and beliefs may themselves evolve; these in turn may prompt changes in policy, governance and infrastructure. Predictions of water security in the long term involve accounting for such regime shifts in the cultural and political context of a watershed by allowing the governing equations of the models to change.
Yang, Jiaheng; He, Xiaodong; Guo, Ruijun; Xu, Peng; Wang, Kunpeng; Sheng, Cheng; Liu, Min; Wang, Jin; Derevianko, Andrei; Zhan, Mingsheng
2016-09-16
We demonstrate that the coherence of a single mobile atomic qubit can be well preserved during a transfer process among different optical dipole traps (ODTs). This is a prerequisite step in realizing a large-scale neutral atom quantum information processing platform. A qubit encoded in the hyperfine manifold of an 87Rb atom is dynamically extracted from the static quantum register by an auxiliary moving ODT and reinserted into the static ODT. Previous experiments were limited by decoherence induced by the differential light shifts of the qubit states. Here, we apply a magic-intensity trapping technique which mitigates the detrimental effects of light shifts and substantially enhances the coherence time to 225±21 ms. The experimentally demonstrated magic trapping technique relies on the previously neglected hyperpolarizability contribution to the light shifts, which makes the light-shift dependence on the trapping laser intensity parabolic. Because of the parabolic dependence, at a certain "magic" intensity, the first-order sensitivity to trapping light-intensity variations over the ODT volume is eliminated. We experimentally demonstrate the utility of this approach and measure the hyperpolarizability for the first time. Our results pave the way for constructing scalable quantum-computing architectures with single atoms trapped in an array of magic ODTs.
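The role of the hyperpolarizability can be sketched schematically; this quadratic model and the symbols alpha and beta are illustrative placeholders, not the authors' exact expressions:

```latex
\delta(I) \;\approx\; \alpha I + \beta I^{2},
\qquad
\left.\frac{\partial \delta}{\partial I}\right|_{I = I_{m}} = 0
\;\Longrightarrow\;
I_{m} = -\frac{\alpha}{2\beta},
```

where the linear term comes from the differential polarizability, the quadratic term from the hyperpolarizability, and at the "magic" intensity I_m the first-order sensitivity of the differential light shift to intensity variations across the trap volume vanishes.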
Multi-static networked 3D ladar for surveillance and access control
NASA Astrophysics Data System (ADS)
Wang, Y.; Ogirala, S. S. R.; Hu, B.; Le, Han Q.
2007-04-01
A theoretical design and simulation of a 3D ladar system concept for surveillance, intrusion detection, and access control is described. It is a non-conventional system architecture that consists of: i) a multi-static configuration with an arbitrarily scalable number of transmitters (Tx's) and receivers (Rx's) that form an optical wireless code-division-multiple-access (CDMA) network, and ii) a flexible system architecture with modular plug-and-play components that can be deployed for any facility with arbitrary topology. Affordability is a driving consideration, and a key feature for low cost is an asymmetric use of many inexpensive Rx's in conjunction with fewer Tx's, which are generally more expensive. The Rx's are spatially distributed close to the surveyed area for large coverage, and are capable of receiving signals from multiple Tx's with moderate laser power. The system produces sensing information that scales as NxM, where N and M are the numbers of Tx's and Rx's, as opposed to the linear scaling ~N of a non-networked system. Also, for target positioning, besides laser pointing direction and time-of-flight, the algorithm includes multiple point-of-view image fusion and triangulation for enhanced accuracy, which is not applicable to non-networked monostatic ladars. Simulation and scaled model experiments on some aspects of this concept are discussed.
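Triangulation from multiple points of view can be illustrated, in the simplest 2D two-receiver case, by intersecting range circles; this sketch ignores the bistatic Tx-target-Rx geometry and the CDMA aspects of the actual system, and the function is purely illustrative:

```python
import math

def triangulate_2d(p1, r1, p2, r2):
    """Locate a target from two sensor positions and their measured
    ranges (e.g., from time-of-flight) by intersecting the two range
    circles; returns both candidate intersection points."""
    (x1, y1), (x2, y2) = p1, p2
    d = math.hypot(x2 - x1, y2 - y1)
    # Distance from p1 along the baseline to the chord of intersection.
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)
    h = math.sqrt(max(r1 ** 2 - a ** 2, 0.0))   # half-chord length
    mx = x1 + a * (x2 - x1) / d
    my = y1 + a * (y2 - y1) / d
    ox, oy = -(y2 - y1) / d, (x2 - x1) / d       # unit normal to baseline
    return (mx + h * ox, my + h * oy), (mx - h * ox, my - h * oy)
```

With more receivers, the extra range measurements disambiguate between the two candidates and over-determine the solution, which is the accuracy gain the networked NxM geometry provides.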
Detection Of Malware Collusion With Static Dependence Analysis On Inter-App Communication
2016-12-08
Final technical report, Virginia Tech, December 2016. Contract number: FA8750-15-2-0076. Program element number: 61101E. Subject terms: malware collusion; inter-app communication; static dependence analysis.
NASA Technical Reports Server (NTRS)
Mclain, A. G.; Rao, C. S. R.
1976-01-01
A hybrid chemical kinetic computer program was assembled which provides a rapid solution to problems involving flowing or static, chemically reacting, gas mixtures. The computer program uses existing subroutines for problem setup, initialization, and preliminary calculations and incorporates a stiff ordinary differential equation solution technique. A number of check cases were recomputed with the hybrid program and the results were almost identical to those previously obtained. The computational time saving was demonstrated with a propane-oxygen-argon shock tube combustion problem involving 31 chemical species and 64 reactions. Information is presented to enable potential users to prepare an input data deck for the calculation of a problem.
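The need for a stiff-equation solution technique can be illustrated on a scalar analogue; this toy backward-Euler scheme only hints at the class of implicit methods the hybrid program incorporates, and the equation and step sizes are illustrative:

```python
def forward_euler(f, y, t, h, steps):
    """Explicit Euler: unstable on stiff problems unless h is tiny."""
    for _ in range(steps):
        y = y + h * f(t, y)
        t += h
    return y

def backward_euler_linear(lam, c, y, h, steps):
    """Backward Euler specialized to the stiff linear ODE y' = lam*(y - c).
    Solving the implicit update y1 = y0 + h*lam*(y1 - c) gives
    y1 = (y0 - h*lam*c) / (1 - h*lam), stable for any h when lam < 0."""
    for _ in range(steps):
        y = (y - h * lam * c) / (1 - h * lam)
    return y
```

With lam = -1000 (a fast chemical timescale) and h = 0.01, the implicit update converges smoothly to the equilibrium value c, while explicit Euler's error is multiplied by |1 + h*lam| = 9 every step and blows up, which is exactly why stiff kinetics with dozens of species demands an implicit integrator.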
BEST3D user's manual: Boundary Element Solution Technology, 3-Dimensional Version 3.0
NASA Technical Reports Server (NTRS)
1991-01-01
The theoretical basis and programming strategy utilized in the construction of the computer program BEST3D (boundary element solution technology - three dimensional) and detailed input instructions are provided for the use of the program. An extensive set of test cases and sample problems is included in the manual and is also available for distribution with the program. The BEST3D program was developed under the 3-D Inelastic Analysis Methods for Hot Section Components contract (NAS3-23697). The overall objective of this program was the development of new computer programs allowing more accurate and efficient three-dimensional thermal and stress analysis of hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The BEST3D program allows both linear and nonlinear analysis of static and quasi-static elastic problems and transient dynamic analysis for elastic problems. Calculation of elastic natural frequencies and mode shapes is also provided.
Asynchronous Object Storage with QoS for Scientific and Commercial Big Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brim, Michael J; Dillow, David A; Oral, H Sarp
2013-01-01
This paper presents our design for an asynchronous object storage system intended for use in scientific and commercial big data workloads. Use cases from the target workload domains are used to motivate the key abstractions used in the application programming interface (API). The architecture of the Scalable Object Store (SOS), a prototype object storage system that supports the API's facilities, is presented. The SOS serves as a vehicle for future research into scalable and resilient big data object storage. We briefly review our research into providing efficient storage servers capable of providing quality of service (QoS) contracts relevant for big data use cases.
Djukic, Maja; Fulmer, Terry; Adams, Jennifer G; Lee, Sabrina; Triola, Marc M
2012-09-01
Interprofessional education is a critical precursor to effective teamwork and the collaboration of health care professionals in clinical settings. Numerous barriers have been identified that preclude scalable and sustainable interprofessional education (IPE) efforts. This article describes NYU3T: Teaching, Technology, Teamwork, a model that uses novel technologies such as Web-based learning, virtual patients, and high-fidelity simulation to overcome some of the common barriers and drive implementation of evidence-based teamwork curricula. It outlines the program's curricular components, implementation strategy, evaluation methods, and lessons learned from the first year of delivery and describes implications for future large-scale IPE initiatives. Copyright © 2012 Elsevier Inc. All rights reserved.
Quicksilver: Middleware for Scalable Self-Regenerative Systems
2006-04-01
Applications can be coded in any of about 25 programming languages, ranging from the obvious ones to some very obscure languages such as OCaml. Like Tempest, Quicksilver can support applications written in any of a wide range of programming languages supported by .NET. Developers can work in standard languages and with standard tools and still exploit these solutions.
Systematic and Scalable Testing of Concurrent Programs
2013-12-16
The evaluation of CHESS [107] checked eight different programs, ranging from process management libraries to a distributed execution engine. Our tool (§3.1) targets systematic testing of scheduling nondeterminism in multi-threaded components of the Omega cluster management system [129]; in particular, §3.1.1 defines a model for systematic testing of these multithreaded components.
Selecting the Right Courseware for Your Online Learning Program.
ERIC Educational Resources Information Center
O'Mara, Heather
2000-01-01
Presents criteria for selecting courseware for online classes. Highlights include ease of use, including navigation; assessment tools; advantages of Java-enabled courseware; advantages of Oracle databases, including scalability; future possibilities for multimedia technology; and open architecture that will integrate with other systems. (LRW)
ERIC Educational Resources Information Center
Rutledge, Lorelei; LeMire, Sarah
2017-01-01
This article proposes that libraries reimagine their information literacy instructional programs using a broader conceptualization and implementation of information literacy that promotes collaborative and personalized learning experiences for students, faculty, and staff, while embracing scalable instruction and reference strategies to maximize…
Automatic Parallelization of Numerical Python Applications using the Global Arrays Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daily, Jeffrey A.; Lewis, Robert R.
2011-11-30
Global Arrays is a software system from Pacific Northwest National Laboratory that enables an efficient, portable, and parallel shared-memory programming interface to manipulate distributed dense arrays. The NumPy module is the de facto standard for numerical calculation in the Python programming language, a language whose use is growing rapidly in the scientific and engineering communities. NumPy provides a powerful N-dimensional array class as well as other scientific computing capabilities. However, like the majority of the core Python modules, NumPy is inherently serial. Using a combination of Global Arrays and NumPy, we have reimplemented NumPy as a distributed drop-in replacement called Global Arrays in NumPy (GAiN). Serial NumPy applications can become parallel, scalable GAiN applications with only minor source code changes. Scalability studies of several different GAiN applications will be presented showing the utility of developing serial NumPy codes which can later run on more capable clusters or supercomputers.
Quasi-Static Tensile Stress-Strain Curves. 1, 2024-T3510 Aluminum Alloy
1976-02-01
The tests described herein were conducted as part of the Core Materials Program of the Solid Mechanics Branch of the Terminal Ballistics Laboratory. This report, describing the results of the Core Materials Program, covers quasi-static tensile tests of 2024-T3510 aluminum alloy. The results include Young's modulus and related material properties (Table II: material properties of 2024-T3510 aluminum alloy, with results of tensile, compression, and sonic testing).
Dynamic and galvanic stability of stretchable supercapacitors.
Li, Xin; Gu, Taoli; Wei, Bingqing
2012-12-12
Stretchable electronics are emerging as a new technological advancement, since they can be reversibly stretched while maintaining functionality. To power stretchable electronics, rechargeable and stretchable energy storage devices become a necessity. Here, we demonstrate a facile and scalable fabrication of full stretchable supercapacitor, using buckled single-walled carbon nanotube macrofilms as the electrodes, an electrospun membrane of elastomeric polyurethane as the separator, and an organic electrolyte. We examine the electrochemical performance of the fully stretchable supercapacitors under dynamic stretching/releasing modes in different stretching strain rates, which reveal the true performance of the stretchable cells, compared to the conventional method of testing the cells under a statically stretched state. In addition, the self-discharge of the supercapacitor and the electrochemical behavior under bending mode are also examined. The stretchable supercapacitors show excellent cyclic stability under electrochemical charge/discharge during in situ dynamic stretching/releasing.
Massively Scalable Near Duplicate Detection in Streams of Documents using MDSH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogen, Paul Logasa; Symons, Christopher T; McKenzie, Amber T
2013-01-01
In a world where large-scale text collections are not only becoming ubiquitous but also are growing at increasing rates, near duplicate documents are becoming a growing concern that has the potential to hinder many different information filtering tasks. While others have tried to address this problem, prior techniques have only been used on limited collection sizes and static cases. We will briefly describe the problem in the context of Open Source Intelligence (OSINT) along with our additional constraints for performance. In this work we propose two variations on Multi-dimensional Spectral Hash (MDSH) tailored for working on extremely large, growing sets of text documents. We analyze the memory and runtime characteristics of our techniques and provide an informal analysis of the quality of the near-duplicate clusters produced by our techniques.
Vocal activity as a low cost and scalable index of seabird colony size
Borker, Abraham L.; McKown, Matthew W.; Ackerman, Joshua T.; Eagles-Smith, Collin A.; Tershy, Bernie R.; Croll, Donald A.
2014-01-01
Although wildlife conservation actions have increased globally in number and complexity, the lack of scalable, cost-effective monitoring methods limits adaptive management and the evaluation of conservation efficacy. Automated sensors and computer-aided analyses provide a scalable and increasingly cost-effective tool for conservation monitoring. A key assumption of automated acoustic monitoring of birds is that measures of acoustic activity at colony sites are correlated with the relative abundance of nesting birds. We tested this assumption for nesting Forster's terns (Sterna forsteri) in San Francisco Bay for 2 breeding seasons. Sensors recorded ambient sound at 7 colonies that had 15–111 nests in 2009 and 2010. Colonies were spaced at least 250 m apart and ranged from 36 to 2,571 m2. We used spectrogram cross-correlation to automate the detection of tern calls from recordings. We calculated mean seasonal call rate and compared it with mean active nest count at each colony. Acoustic activity explained 71% of the variation in nest abundance between breeding sites and 88% of the change in colony size between years. These results validate a primary assumption of acoustic indices; that is, for terns, acoustic activity is correlated to relative abundance, a fundamental step toward designing rigorous and scalable acoustic monitoring programs to measure the effectiveness of conservation actions for colonial birds and other acoustically active wildlife.
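Spectrogram cross-correlation reduces, in one dimension, to sliding a template across a signal and scoring normalized correlation at each offset; this toy detector illustrates the approach and is not the study's actual pipeline (the threshold and signals are made up):

```python
import math

def detect_template(signal, template, threshold=0.9):
    """Return offsets where the normalized cross-correlation between the
    template and a window of the signal exceeds the threshold -- the 1-D
    analogue of matching a call's spectrogram against a recording."""
    n = len(template)
    tm = sum(template) / n
    tdev = [t - tm for t in template]
    tnorm = math.sqrt(sum(d * d for d in tdev))
    hits = []
    for i in range(len(signal) - n + 1):
        win = signal[i:i + n]
        wm = sum(win) / n
        wdev = [w - wm for w in win]
        wnorm = math.sqrt(sum(d * d for d in wdev))
        if wnorm == 0 or tnorm == 0:
            continue  # flat window or flat template: correlation undefined
        score = sum(a * b for a, b in zip(tdev, wdev)) / (tnorm * wnorm)
        if score >= threshold:
            hits.append(i)
    return hits
```

Counting detections per unit time over a season then yields a call rate, the acoustic index the study correlates with nest counts.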
DISP: Optimizations towards Scalable MPI Startup
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Huansong; Pophale, Swaroop S; Gorentla Venkata, Manjunath
2016-01-01
Despite the popularity of MPI for high performance computing, the startup of MPI programs faces a scalability challenge as both the execution time and memory consumption increase drastically at scale. We have examined this problem using the collective modules of Cheetah and Tuned in Open MPI as representative implementations. Previous improvements for collectives have focused on algorithmic advances and hardware off-load. In this paper, we examine the startup cost of the collective module within a communicator and explore various techniques to improve its efficiency and scalability. Accordingly, we have developed a new scalable startup scheme with three internal techniques, namely Delayed Initialization, Module Sharing and Prediction-based Topology Setup (DISP). Our DISP scheme greatly benefits the collective initialization of the Cheetah module. At the same time, it helps boost the performance of non-collective initialization in the Tuned module. We evaluate the performance of our implementation on the Titan supercomputer at ORNL with up to 4096 processes. The results show that our delayed initialization can speed up the startup of Tuned and Cheetah by an average of 32.0% and 29.2%, respectively, our module sharing can reduce the memory consumption of Tuned and Cheetah by up to 24.1% and 83.5%, respectively, and our prediction-based topology setup can speed up the startup of Cheetah by up to 80%.
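The Delayed Initialization technique named above defers per-communicator setup until a collective operation is actually invoked, so unused communicators never pay the cost. The Python sketch below illustrates only this generic lazy-initialization pattern; it is not Open MPI code, and all names are hypothetical.

```python
class LazyCollective:
    """Defer expensive setup (e.g., topology tables) until a collective
    operation is first invoked -- a sketch of delayed initialization."""

    def __init__(self, name):
        self.name = name
        self._tables = None      # not built at communicator creation time
        self.setup_count = 0

    def _ensure_initialized(self):
        if self._tables is None:  # build once, on first use
            self.setup_count += 1
            self._tables = {"topology": f"tables-for-{self.name}"}

    def allreduce(self, value):
        self._ensure_initialized()
        return value              # placeholder for the real collective

comms = [LazyCollective(f"comm{i}") for i in range(100)]
used = comms[0].allreduce(42)     # only this communicator pays startup cost
total_setups = sum(c.setup_count for c in comms)
print(used, total_setups)  # 42 1
```

Module sharing and prediction-based topology setup attack the remaining cost by reusing the structures that delayed initialization eventually builds.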
GASPRNG: GPU accelerated scalable parallel random number generator library
NASA Astrophysics Data System (ADS)
Gao, Shuang; Peterson, Gregory D.
2013-04-01
Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications. Catalogue identifier: AEOI_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: UTK license. No. of lines in distributed program, including test data, etc.: 167900 No. of bytes in distributed program, including test data, etc.: 1422058 Distribution format: tar.gz Programming language: C and CUDA.
Computer: Any PC or workstation with NVIDIA GPU (Tested on Fermi GTX480, Tesla C1060, Tesla M2070). Operating system: Linux with CUDA version 4.0 or later. Should also run on MacOS, Windows, or UNIX. Has the code been vectorized or parallelized?: Yes. Parallelized using MPI directives. RAM: 512 MB–732 MB main memory on the host CPU (depending on the data type of random numbers) / 512 MB GPU global memory. Classification: 4.13, 6.5. Nature of problem: Many computational science applications consume large numbers of random numbers. For example, Monte Carlo simulations can consume limitless random numbers as long as computing resources are available. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generators of independent streams of random numbers using graphical processing units (GPUs). Solution method: Multiple copies of random number generators in GPUs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generator library that allows a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software code executing on microprocessors and/or GPUs. Running time: The tests provided take a few minutes to run.
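GASPRNG itself is C/CUDA code, but the core contract SPRNG-style libraries provide — independent, reproducible per-process streams spawned from one root seed — can be illustrated with NumPy's SeedSequence, which offers the same style of stream spawning. The seed values below are arbitrary.

```python
import numpy as np

def make_streams(root_seed, n_streams):
    """Spawn statistically independent child streams from one root seed,
    as SPRNG-style libraries do for parallel Monte Carlo workers."""
    children = np.random.SeedSequence(root_seed).spawn(n_streams)
    return [np.random.default_rng(c) for c in children]

# Each "process" (here, a loop iteration) gets its own stream.
streams = make_streams(root_seed=12345, n_streams=4)
draws = [rng.random(3).tolist() for rng in streams]

# Re-creating the streams reproduces the same draws -- essential for
# debugging parallel Monte Carlo runs.
streams2 = make_streams(root_seed=12345, n_streams=4)
draws2 = [rng.random(3).tolist() for rng in streams2]
print(draws == draws2)  # identical streams regenerate identical draws
```

Reproducibility across re-runs is the property that lets the abstract claim GASPRNG generates streams identical to SPRNG's, so results can be validated against the CPU library.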
Noise characteristics of upper surface blown configurations. Experimental program and results
NASA Technical Reports Server (NTRS)
Brown, W. H.; Searle, N.; Blakney, D. F.; Pennock, A. P.; Gibson, J. S.
1977-01-01
An experimental data base was developed from the model upper surface blowing (USB) propulsive lift system hardware. While the emphasis was on far field noise data, a considerable amount of relevant flow field data were also obtained. The data were derived from experiments in four different facilities resulting in: (1) small scale static flow field data; (2) small scale static noise data; (3) small scale simulated forward speed noise and load data; and (4) limited larger-scale static noise flow field and load data. All of the small scale tests used the same USB flap parts. Operational and geometrical variables covered in the test program included jet velocity, nozzle shape, nozzle area, nozzle impingement angle, nozzle vertical and horizontal location, flap length, flap deflection angle, and flap radius of curvature.
Think 500, not 50! A scalable approach to student success in STEM.
LaCourse, William R; Sutphin, Kathy Lee; Ott, Laura E; Maton, Kenneth I; McDermott, Patrice; Bieberich, Charles; Farabaugh, Philip; Rous, Philip
2017-01-01
UMBC, a diverse public research university, "builds" upon its reputation in producing highly capable undergraduate scholars to create a comprehensive new model, STEM BUILD at UMBC. This program is designed to help more students develop the skills, experience and motivation to excel in science, technology, engineering, and mathematics (STEM). This article provides an in-depth description of STEM BUILD at UMBC and provides the context of this initiative within UMBC's vision and mission. The STEM BUILD model targets promising STEM students who enter as freshmen or transfer students and do not qualify for significant university or other scholarship support. Of primary importance to this initiative are capacity, scalability, and institutional sustainability, as we distill the advantages and opportunities of UMBC's successful scholars programs and expand their application to more students. The general approach is to infuse the mentoring and training process into the fabric of the undergraduate experience while fostering community, scientific identity, and resilience. At the heart of STEM BUILD at UMBC is the development of BUILD Group Research (BGR), a sequence of experiences designed to overcome the challenges that undergraduates without programmatic support often encounter (e.g., limited internship opportunities, mentorships, and research positions for which top STEM students are favored). BUILD Training Program (BTP) Trainees serve as pioneers in this initiative, which is potentially a national model for universities as they address the call to retain and graduate more students in STEM disciplines - especially those from underrepresented groups. As such, BTP is a research study using random assignment trial methodology that focuses on the scalability and eventual incorporation of successful measures into the traditional format of the academy. 
Critical measures to transform institutional culture include establishing an extensive STEM Living and Learning Community to increase undergraduate retention, expanding the adoption of "active learning" pedagogies to increase the efficiency of learning, and developing programs to train researchers to effectively mentor a greater portion of the student population. The overarching goal of STEM BUILD at UMBC is to retain students in STEM majors and better prepare them for post baccalaureate, graduate, or professional programs as well as careers in biomedical and behavioral research.
Processing Diabetes Mellitus Composite Events in MAGPIE.
Brugués, Albert; Bromuri, Stefano; Barry, Michael; Del Toro, Óscar Jiménez; Mazurkiewicz, Maciej R; Kardas, Przemyslaw; Pegueroles, Josep; Schumacher, Michael
2016-02-01
The focus of this research is the definition of programmable expert Personal Health Systems (PHS) to monitor patients affected by chronic diseases, using agent-oriented programming and mobile computing to represent the interactions among the components of the system. The paper also discusses issues of knowledge representation within the medical domain when dealing with temporal patterns concerning the physiological values of the patient. In the presented agent-based PHS, doctors can personalize monitoring rules for each patient, which can be defined in a graphical way. Furthermore, to achieve better scalability, the computations for monitoring the patients are distributed among their devices rather than being performed in a centralized server. The system is evaluated using data from 21 diabetic patients to detect temporal patterns according to a defined set of monitoring rules. The system's scalability is evaluated by comparing it with a centralized approach. The evaluation concerning the detection of temporal patterns highlights the system's ability to monitor chronic patients affected by diabetes. Regarding scalability, the results show that an approach exploiting mobile computing is more scalable than a centralized approach, and therefore more likely to satisfy the needs of next-generation PHSs. PHSs are becoming an adopted technology to deal with the surge of patients affected by chronic illnesses. This paper discusses architectural choices to make an agent-based PHS more scalable by using a distributed mobile computing approach. It also discusses how to model the medical knowledge in the PHS in such a way that it is modifiable at run time. The evaluation highlights the necessity of distributing the reasoning to the mobile part of the system and shows that modifiable rules are able to deal with changes in the lifestyle of patients affected by chronic illnesses.
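A doctor-defined temporal monitoring rule of the kind described — detecting a pattern over a patient's timestamped physiological values — can be sketched as a simple run detector. The threshold, units, and rule below are illustrative assumptions, not MAGPIE's actual rule language.

```python
from datetime import datetime, timedelta

def hyperglycemia_events(readings, threshold=180, min_consecutive=3):
    """Flag runs of >= min_consecutive glucose readings above threshold
    (mg/dL) -- a toy stand-in for a doctor-defined monitoring rule."""
    events, run = [], []
    for ts, value in readings:
        if value > threshold:
            run.append((ts, value))
            if len(run) == min_consecutive:
                events.append(run[0][0])  # time the pattern began
        else:
            run = []
    return events

t0 = datetime(2016, 2, 1, 8, 0)
readings = [(t0 + timedelta(hours=h), v)
            for h, v in enumerate([120, 190, 200, 210, 150, 185])]
print(hyperglycemia_events(readings))  # one event starting at 09:00
```

Running such a detector on the patient's own device, rather than shipping every reading to a central server, is the distribution choice the paper credits for the improved scalability.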
COMPREHENSIVE PBPK MODELING APPROACH USING THE EXPOSURE RELATED DOSE ESTIMATING MODEL (ERDEM)
ERDEM, a complex PBPK modeling system, is the result of the implementation of a comprehensive PBPK modeling approach. ERDEM provides a scalable and user-friendly environment that enables researchers to focus on data input values rather than writing program code. It efficiently ...
Sustainable Implementation of Interprofessional Education Using an Adoption Model Framework
ERIC Educational Resources Information Center
Grymonpre, Ruby E.; Ateah, Christine A.; Dean, Heather J.; Heinonen, Tuula I.; Holmqvist, Maxine E.; MacDonald, Laura L.; Ready, A. Elizabeth; Wener, Pamela F.
2016-01-01
Interprofessional education (IPE) is a growing focus for educators in health professional academic programs. Recommendations to successfully implement IPE are emerging in the literature, but there remains a dearth of evidence informing the bigger challenges of sustainability and scalability. Transformation to interprofessional education for…
ERIC Educational Resources Information Center
Gordon, Dan
2011-01-01
When it comes to implementing innovative classroom technology programs, urban school districts face significant challenges stemming from their big-city status. These range from large bureaucracies, to scalability, to how to meet the needs of a more diverse group of students. Because of their size, urban districts tend to have greater distance…
Interfaith Leaders as Social Entrepreneurs
ERIC Educational Resources Information Center
Patel, Eboo; Meyer, Cassie
2012-01-01
Social entrepreneurs work to find concrete solutions to large-scale problems that are scalable and sustainable. In this article, the authors explore what the framework of social entrepreneurship might offer those seeking to positively engage religious diversity on college campuses, and highlight two programs that offer examples of what such…
The FORTRAN static source code analyzer program (SAP) user's guide, revision 1
NASA Technical Reports Server (NTRS)
Decker, W.; Taylor, W.; Eslinger, S.
1982-01-01
The FORTRAN Static Source Code Analyzer Program (SAP) User's Guide (Revision 1) is presented. SAP is a software tool designed to assist Software Engineering Laboratory (SEL) personnel in conducting studies of FORTRAN programs. SAP scans FORTRAN source code and produces reports that present statistics and measures of statements and structures that make up a module. This document is a revision of the previous SAP user's guide, Computer Sciences Corporation document CSC/TM-78/6045. SAP Revision 1 is the result of program modifications to provide several new reports, additional complexity analysis, and recognition of all statements described in the FORTRAN 77 standard. This document provides instructions for operating SAP and contains information useful in interpreting SAP output.
Mammalian carnivores are increasingly the focus of reintroduction attempts in areas from which
they have been extirpated by historic persecution. We used static and dynamic spatial models to evaluate whether a proposed wolf reintroduction to the southern Rocky Mountain region ...
1967-10-01
Workmen at the Marshall Space Flight Center's (MSFC's) dock on the Tennessee River unload S-IB-211, the flight version of the Saturn IB launch vehicle's first stage, from the NASA barge Palaemon. Between December 1967 and April 1968, the stage would undergo seven static test firings in MSFC's S-IB static test stand.
Scalable and expressive medical terminologies.
Mays, E; Weida, R; Dionne, R; Laker, M; White, B; Liang, C; Oles, F J
1996-01-01
The K-Rep system, based on description logic, is used to represent and reason with large and expressive controlled medical terminologies. Expressive concept descriptions incorporate semantically precise definitions composed using logical operators, together with important non-semantic information such as synonyms and codes. Examples are drawn from our experience with K-Rep in modeling the InterMed laboratory terminology and also developing a large clinical terminology now in production use at Kaiser-Permanente. System-level scalability of performance is achieved through an object-oriented database system which efficiently maps persistent memory to virtual memory. Equally important is conceptual scalability: the ability to support collaborative development, organization, and visualization of a substantial terminology as it evolves over time. K-Rep addresses this need by logically completing concept definitions and automatically classifying concepts in a taxonomy via subsumption inferences. The K-Rep system includes a general-purpose GUI environment for terminology development and browsing, a custom interface for formulary term maintenance, a C++ application program interface, and a distributed client-server mode which provides lightweight clients with efficient run-time access to K-Rep by means of a scripting language.
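The subsumption inference that drives automatic classification can be illustrated in a toy form where a concept definition is a set of necessary properties: one concept subsumes another when its properties are a subset of the other's. The concepts below are hypothetical and far simpler than K-Rep's description logic.

```python
def subsumes(general, specific):
    """A concept defined by a set of necessary properties subsumes another
    if its properties are a subset of the other's -- a toy fragment of the
    classification inference a description-logic system performs."""
    return general <= specific

# Hypothetical terminology fragment: concepts as property sets.
concepts = {
    "Infection":          {"is_disease"},
    "BacterialInfection": {"is_disease", "caused_by_bacteria"},
    "Pneumonia":          {"is_disease", "caused_by_bacteria", "site_lung"},
}

def classify(concepts):
    """Place each concept under its most specific subsumers (its parents)."""
    parents = {}
    for name, props in concepts.items():
        subsumers = [o for o, op in concepts.items()
                     if o != name and subsumes(op, props)]
        # keep only the most specific subsumers as direct parents
        parents[name] = [s for s in subsumers
                         if not any(s != t and subsumes(concepts[s], concepts[t])
                                    for t in subsumers)]
    return parents

print(classify(concepts))
```

Automating this placement is what lets a large terminology stay correctly organized as collaborators add concepts, the "conceptual scalability" the abstract emphasizes.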
Programming time-multiplexed reconfigurable hardware using a scalable neuromorphic compiler.
Minkovich, Kirill; Srinivasa, Narayan; Cruz-Albrecht, Jose M; Cho, Youngkwan; Nogin, Aleksey
2012-06-01
Scalability and connectivity are two key challenges in designing neuromorphic hardware that can match biological levels. In this paper, we describe a neuromorphic system architecture design that addresses an approach to meet these challenges using traditional complementary metal-oxide-semiconductor (CMOS) hardware. A key requirement in realizing such neural architectures in hardware is the ability to automatically configure the hardware to emulate any neural architecture or model. The focus for this paper is to describe the details of such a programmable front-end. This programmable front-end is composed of a neuromorphic compiler and a digital memory, and is designed based on the concept of synaptic time-multiplexing (STM). The neuromorphic compiler automatically translates any given neural architecture to hardware switch states and these states are stored in digital memory to enable desired neural architectures. STM enables our proposed architecture to address scalability and connectivity using traditional CMOS hardware. We describe the details of the proposed design and the programmable front-end, and provide examples to illustrate its capabilities. We also provide perspectives for future extensions and potential applications.
Advanced imaging programs: maximizing a multislice CT investment.
Falk, Robert
2008-01-01
Advanced image processing has moved from a luxury to a necessity in the practice of medicine. A hospital's adoption of sophisticated 3D imaging entails several important steps with many factors to consider in order to be successful. Like any new hospital program, 3D post-processing should be introduced through a strategic planning process that includes administrators, physicians, and technologists to design, implement, and market a program that is scalable: one that minimizes up-front costs while providing top-level service. This article outlines the steps for planning, implementation, and growth of an advanced imaging program.
Kibar, Sibel; Yardimci, Fatma Ö; Evcik, Deniz; Ay, Saime; Alhan, Aslıhan; Manço, Miray; Ergin, Emine S
2016-10-01
This randomized controlled study aims to determine the effect of pilates mat exercises on dynamic and static balance, hamstring flexibility, and abdominal muscle activity and endurance in healthy adults. Healthy female volunteer university students were randomly assigned to two groups. Group 1 followed a pilates program for an hour twice a week; Group 2 continued daily activities as the control group. Dynamic and static balance were evaluated with the Sport Kinesthetic Ability Trainer (KAT) 4000 device. Hamstring flexibility and abdominal endurance were determined by the sit-and-reach and curl-up tests, respectively. A pressure biofeedback unit (PBU) was used to measure transversus abdominis and lumbar muscle activity. Participants' physical activity was followed with the International Physical Activity Questionnaire-Short Form. Twenty-three subjects in the pilates group and 24 control subjects completed the study. In the pilates group, statistically significant improvements were observed in curl-up, sit-and-reach, and PBU scores at the sixth week (P<0.001), and in KAT static and dynamic balance scores (P<0.001) and waist circumference (P=0.007) at the eighth week. In the comparison between the two groups, there were significant improvements in the pilates group for the sit-and-reach test (P=0.01) and PBU scores (P<0.001) at the sixth week; curl-up and static KAT scores additionally improved at the eighth week (P<0.001). No correlation was found between flexibility, endurance, trunk muscle activity, and balance parameters. An eight-week pilates training program was found to have a beneficial effect on static balance, flexibility, abdominal muscle endurance, and abdominal and lumbar muscle activity. These parameters had no effect on balance.
Polak, Rani; Pober, David M; Budd, Maggi A; Silver, Julie K; Phillips, Edward M; Abrahamson, Martin J
2017-08-01
This case series describes and examines the outcomes of a remote culinary coaching program aimed at improving nutrition through home cooking. Participants (n = 4) improved attitudes about the perceived ease of home cooking (p < 0.01) and self-efficacy to perform various culinary skills (p = 0.02); and also improved in confidence to continue online learning of culinary skills and consume healthier food. We believe this program might be a viable response to the need for effective and scalable health-related culinary interventions.
Modern Gemini-Approach to Technology Development for Human Space Exploration
NASA Technical Reports Server (NTRS)
White, Harold
2010-01-01
In NASA's plan to put men on the moon, there were three sequential programs: Mercury, Gemini, and Apollo. The Gemini program was used to develop and integrate the technologies that would be necessary for the Apollo program to successfully put men on the moon. We would like to present an analogous modern approach that leverages legacy ISS hardware designs and integrates developing new technologies into a flexible architecture. This new architecture is scalable, sustainable, and can be used to establish human exploration infrastructure beyond low Earth orbit and into deep space.
The implementation of a comprehensive PBPK modeling approach resulted in ERDEM, a complex PBPK modeling system. ERDEM provides a scalable and user-friendly environment that enables researchers to focus on data input values rather than writing program code. ERDEM efficiently m...
Polymorphous Computing Architectures
2007-12-12
provide a multiprocessor implementation. In this work, we introduce the Atomos transactional programming language, the first to include implicit transactions, strong atomicity, and a scalable multiprocessor implementation [47]. Atomos is derived from Java, but replaces its synchronization and conditional waiting constructs with transactional alternatives. The Atomos conditional waiting proposal is tailored to allow efficient
Rapid Prototyping of High Performance Signal Processing Applications
NASA Astrophysics Data System (ADS)
Sane, Nimish
Advances in embedded systems for digital signal processing (DSP) are enabling many scientific projects and commercial applications. At the same time, these applications are key to driving advances in many important kinds of computing platforms. In this domain of high-performance DSP, rapid prototyping is critical for faster time-to-market (e.g., in the wireless communications industry) or time-to-science (e.g., in radio astronomy). DSP system architectures have evolved from being based on application specific integrated circuits (ASICs) to incorporate reconfigurable off-the-shelf field programmable gate arrays (FPGAs), the latest multiprocessors such as graphics processing units (GPUs), or heterogeneous combinations of such devices. We thus have a vast design space to explore, based on performance trade-offs and expanded by the multitude of possibilities for target platforms. In order to allow systematic design space exploration and develop scalable and portable prototypes, model-based design tools are increasingly used in the design and implementation of embedded systems. These tools allow scalable high-level representations, model-based semantics for analysis and optimization, and portable implementations that can be verified at higher levels of abstraction and targeted toward multiple platforms for implementation. The designer can experiment using such tools at an early stage in the design cycle, and employ the latest hardware at later stages. In this thesis, we have focused on dataflow-based approaches for rapid DSP system prototyping. This thesis contributes to various aspects of dataflow-based design flows and tools as follows: 1. We have introduced the concept of topological patterns, which exploits commonly found repetitive patterns in DSP algorithms to allow scalable, concise, and parameterizable representations of large scale dataflow graphs in high-level languages.
We have shown how an underlying design tool can systematically exploit a high-level application specification consisting of topological patterns in various aspects of the design flow. 2. We have formulated the core functional dataflow (CFDF) model of computation, which can be used to model a wide variety of deterministic dynamic dataflow behaviors. We have also presented key features of the CFDF model and tools based on these features. These tools provide support for heterogeneous dataflow behaviors, an intuitive and common framework for functional specification, support for functional simulation, portability from several existing dataflow models to CFDF, integrated emphasis on minimally-restricted specification of actor functionality, and support for efficient static, quasi-static, and dynamic scheduling techniques. 3. We have developed a generalized scheduling technique for CFDF graphs based on decomposition of a CFDF graph into static graphs that interact at run-time. Furthermore, we have refined this generalized scheduling technique using a new notion of "mode grouping," which better exposes the underlying static behavior. We have also developed a scheduling technique for a class of dynamic applications that generates parameterized looped schedules (PLSs), which can handle dynamic dataflow behavior without major limitations on compile-time predictability. 4. We have demonstrated the use of dataflow-based approaches for design and implementation of radio astronomy DSP systems using an application example of a tunable digital downconverter (TDD) for spectrometers. Design and implementation of this module has been an integral part of this thesis work. This thesis demonstrates a design flow that consists of a high-level software prototype, analysis, and simulation using the dataflow interchange format (DIF) tool, and integration of this design with the existing tool flow for the target implementation on an FPGA platform, called interconnect break-out board (IBOB). 
We have also explored the trade-off between low hardware cost for fixed configurations of digital downconverters and flexibility offered by TDD designs. 5. This thesis has contributed significantly to the development and release of the latest version of a graph package oriented toward models of computation (MoCGraph). Our enhancements to this package include support for tree data structures, and generalized schedule trees (GSTs), which provide a useful data structure for a wide variety of schedule representations. Our extensions to the MoCGraph package provided key support for the CFDF model, and functional simulation capabilities in the DIF package.
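Compile-time scheduling of a dataflow graph, the simplest piece of the tool flow described above, amounts to ordering actors so that every producer fires before its consumers. The sketch below is a plain topological sort over a hypothetical downconverter-like actor chain; it does not implement CFDF's mode grouping or parameterized looped schedules.

```python
from collections import defaultdict, deque

def static_schedule(edges):
    """Topologically order the actors of an acyclic dataflow graph -- the
    simplest form of the compile-time scheduling a dataflow tool performs."""
    succs, indeg = defaultdict(list), defaultdict(int)
    actors = set()
    for src, dst in edges:
        succs[src].append(dst)
        indeg[dst] += 1
        actors |= {src, dst}
    ready = deque(sorted(a for a in actors if indeg[a] == 0))
    order = []
    while ready:
        a = ready.popleft()
        order.append(a)
        for b in succs[a]:
            indeg[b] -= 1
            if indeg[b] == 0:
                ready.append(b)
    return order

# Hypothetical downconverter-like chain: ADC and tuning oscillator feed a
# mixer, which feeds a filter and then a sink.
edges = [("adc", "mixer"), ("nco", "mixer"), ("mixer", "fir"), ("fir", "sink")]
print(static_schedule(edges))
```

Dynamic dataflow models like CFDF generalize this by decomposing the graph into static regions scheduled this way at compile time, with the regions interacting at run time.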
1992-11-01
1992 International Aerospace and Ground Conference on Lightning and Static Electricity - Addendum III, October 6-8, 1992, sponsored by the Federal Aviation Administration Technical Center (ACD-230). The program runs well on an IBM PC or compatible 386 with a math co-processor 387 chip and a VGA monitor. For this study, streamers were added
Model-Driven Engineering of Machine Executable Code
NASA Astrophysics Data System (ADS)
Eichberg, Michael; Monperrus, Martin; Kloppenburg, Sven; Mezini, Mira
Implementing static analyses of machine-level executable code is labor intensive and complex. We show how to leverage model-driven engineering to facilitate the design and implementation of programs that perform static analyses. Further, we report important lessons learned on the benefits and drawbacks of using the following technologies: the Scala programming language as the target of code generation, XML Schema to express a metamodel, and XSLT to implement (a) transformations and (b) a lint-like tool. Finally, we report on the use of Prolog for writing model transformations.
NASA Technical Reports Server (NTRS)
Fertis, D. G.; Simon, A. L.
1981-01-01
The requisite methodology is developed to solve linear and nonlinear problems associated with the static and dynamic analysis of rotating machinery, their static and dynamic behavior, and the interaction between the rotating and nonrotating parts of an engine. Linear and nonlinear structural engine problems are investigated by developing solution strategies and interactive computational methods whereby the engineer and computer can communicate directly in making analysis decisions. Representative examples include modifying structural models, changing material parameters, selecting analysis options, and coupling with an interactive graphical display for pre- and post-processing capability.
Scalable persistent identifier systems for dynamic datasets
NASA Astrophysics Data System (ADS)
Golodoniuc, P.; Cox, S. J. D.; Klump, J. F.
2016-12-01
Reliable and persistent identification of objects, whether tangible or not, is essential in information management. Many Internet-based systems have been developed to identify digital data objects, e.g., PURL, LSID, Handle, ARK. These were largely designed for identification of static digital objects. The amount of data made available online has grown exponentially over the last two decades and fine-grained identification of dynamically generated data objects within large datasets using conventional systems (e.g., PURL) has become impractical. We have compared capabilities of various technological solutions to enable resolvability of data objects in dynamic datasets, and developed a dataset-centric approach to resolution of identifiers. This is particularly important in Semantic Linked Data environments where dynamic frequently changing data is delivered live via web services, so registration of individual data objects to obtain identifiers is impractical. We use identifier patterns and pattern hierarchies for identification of data objects, which allows relationships between identifiers to be expressed, and also provides means for resolving a single identifier into multiple forms (i.e. views or representations of an object). The latter can be implemented through (a) HTTP content negotiation, or (b) use of URI querystring parameters. The pattern and hierarchy approach has been implemented in the Linked Data API supporting the United Nations Spatial Data Infrastructure (UNSDI) initiative and later in the implementation of geoscientific data delivery for the Capricorn Distal Footprints project using International Geo Sample Numbers (IGSN). This enables flexible resolution of multi-view persistent identifiers and provides a scalable solution for large heterogeneous datasets.
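The pattern-based resolution described above — identifier patterns ordered from specific to general, with a single identifier resolving to multiple views of one object — can be sketched with ordered regular expressions. The IGSN-style identifiers and view names below are illustrative assumptions, not the actual UNSDI or Capricorn deployment.

```python
import re

# Hypothetical pattern hierarchy: most specific patterns first.
PATTERNS = [
    (r"^igsn/(?P<prefix>[A-Z]+)/(?P<sample>\w+)\.(?P<view>json|html)$",
     "render sample {sample} of {prefix} as {view}"),
    (r"^igsn/(?P<prefix>[A-Z]+)/(?P<sample>\w+)$",
     "render sample {sample} of {prefix} as html"),   # default view
    (r"^igsn/(?P<prefix>[A-Z]+)$",
     "render landing page for {prefix}"),
]

def resolve(identifier):
    """Resolve one identifier against the pattern hierarchy; no per-object
    registration is needed -- each pattern covers every object it matches."""
    for pattern, template in PATTERNS:
        m = re.match(pattern, identifier)
        if m:
            return template.format(**m.groupdict())
    return None

print(resolve("igsn/CSRWA/S001.json"))  # JSON view of one sample
print(resolve("igsn/CSRWA"))            # landing page for the whole prefix
```

Because a single pattern stands in for arbitrarily many dynamically generated objects, this avoids registering each object individually, which is exactly what makes conventional per-object registries impractical for large dynamic datasets.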
Grindon, Christina; Harris, Sarah; Evans, Tom; Novik, Keir; Coveney, Peter; Laughton, Charles
2004-07-15
Molecular modelling played a central role in the discovery of the structure of DNA by Watson and Crick. Today, such modelling is done on computers: the more powerful these computers are, the more detailed and extensive can be the study of the dynamics of such biological macromolecules. To fully harness the power of modern massively parallel computers, however, we need to develop and deploy algorithms which can exploit the structure of such hardware. The Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a scalable molecular dynamics code including long-range Coulomb interactions, which has been specifically designed to function efficiently on parallel platforms. Here we describe the implementation of the AMBER98 force field in LAMMPS and its validation for molecular dynamics investigations of DNA structure and flexibility against the benchmark of results obtained with the long-established code AMBER6 (Assisted Model Building with Energy Refinement, version 6). Extended molecular dynamics simulations on the hydrated DNA dodecamer d(CTTTTGCAAAAG)(2), which has previously been the subject of extensive dynamical analysis using AMBER6, show that it is possible to obtain excellent agreement in terms of static, dynamic and thermodynamic parameters between AMBER6 and LAMMPS. In comparison with AMBER6, LAMMPS shows greatly improved scalability in massively parallel environments, opening up the possibility of efficient simulations of order-of-magnitude larger systems and/or for order-of-magnitude greater simulation times.
Perfusion directed 3D mineral formation within cell-laden hydrogels.
Sawyer, Stephen William; Shridhar, Shivkumar Vishnempet; Zhang, Kairui; Albrecht, Lucas; Filip, Alex; Horton, Jason; Soman, Pranav
2018-06-08
Despite the promise of stem cell engineering and the new advances in bioprinting technologies, one of the major challenges in the manufacturing of large scale bone tissue scaffolds is the inability to perfuse nutrients throughout thick constructs. Here, we report a scalable method to create thick, perfusable bone constructs using a combination of cell-laden hydrogels and a 3D printed sacrificial polymer. Osteoblast-like Saos-2 cells were encapsulated within a gelatin methacrylate (GelMA) hydrogel and 3D printed polyvinyl alcohol (PVA) pipes were used to create perfusable channels. A custom-built bioreactor was used to perfuse osteogenic media directly through the channels in order to induce mineral deposition which was subsequently quantified via microCT. Histological staining was used to verify mineral deposition around the perfused channels, while COMSOL modeling was used to simulate oxygen diffusion between adjacent channels. This information was used to design a scaled-up construct containing a 3D array of perfusable channels within cell-laden GelMA. Progressive matrix mineralization was observed by cells surrounding perfused channels as opposed to random mineral deposition in static constructs. MicroCT confirmed that there was a direct relationship between channel mineralization within perfused constructs and time within the bioreactor. Furthermore, the scalable method presented in this work serves as a model on how large-scale bone tissue replacement constructs could be made using commonly available 3D printers, sacrificial materials, and hydrogels. © 2018 IOP Publishing Ltd.
Dynamic full-scalability conversion in scalable video coding
NASA Astrophysics Data System (ADS)
Lee, Dong Su; Bae, Tae Meon; Thang, Truong Cong; Ro, Yong Man
2007-02-01
For outstanding coding efficiency with scalability functions, SVC (Scalable Video Coding) is being standardized. SVC can support spatial, temporal, and SNR scalability, and these scalabilities are useful for providing a smooth video streaming service even in a time-varying network such as a mobile environment. But current SVC is insufficient to support dynamic video conversion with scalability, so bitrate adaptation to a fluctuating network condition is limited. In this paper, we propose dynamic full-scalability conversion methods for QoS-adaptive video streaming in SVC. To accomplish dynamic full-scalability conversion, we develop corresponding bitstream extraction, encoding, and decoding schemes. At the encoder, we insert IDR NAL units periodically to solve the problems of spatial scalability conversion. At the extractor, we analyze the SVC bitstream to obtain the information that enables dynamic extraction; real-time extraction is achieved by using this information. Finally, we develop the decoder so that it can manage the changing scalability. Experimental results verified dynamic full-scalability conversion and showed that it is necessary for time-varying network conditions.
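The extractor's job of pulling a sub-bitstream at a target scalability point can be sketched roughly as below. The NAL-unit representation and layer ids are simplified stand-ins for the real SVC syntax, invented for illustration.

```python
# Toy sketch of SVC sub-bitstream extraction: keep only NAL units whose
# (dependency, temporal, quality) ids do not exceed the target operating
# point. Real SVC extraction parses NAL headers; here each unit is a dict.

def extract(nal_units, max_dep, max_temp, max_qual):
    """Return the sub-bitstream for the requested scalability point."""
    return [u for u in nal_units
            if u["dep"] <= max_dep and u["temp"] <= max_temp and u["qual"] <= max_qual]

bitstream = [
    {"dep": 0, "temp": 0, "qual": 0, "bytes": 1200},  # base layer
    {"dep": 0, "temp": 1, "qual": 0, "bytes": 400},   # temporal enhancement
    {"dep": 1, "temp": 0, "qual": 0, "bytes": 2500},  # spatial enhancement
    {"dep": 0, "temp": 0, "qual": 1, "bytes": 600},   # SNR enhancement
]

# Network degrades: drop to the base spatial layer but keep the
# temporal enhancement, shrinking the bitrate on the fly.
sub = extract(bitstream, max_dep=0, max_temp=1, max_qual=0)
print(sum(u["bytes"] for u in sub))  # 1600
```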
Impact evaluation of composite floor sections
NASA Technical Reports Server (NTRS)
Boitnott, Richard L.; Fasanella, Edwin L.
1989-01-01
Graphite-epoxy floor sections representative of aircraft fuselage construction were statically and dynamically tested to evaluate their response to crash loadings. These floor sections were fabricated using a frame-stringer design typical of present aluminum aircraft without features to enhance crashworthiness. The floor sections were tested as part of a systematic research program developed to study the impact response of composite components of increasing complexity. The ultimate goal of the research program is to develop crashworthy design features for future composite aircraft. Initially, individual frames of six-foot diameter were tested both statically and dynamically. The frames were then used to construct built-up floor sections for dynamic tests at impact velocities of approximately 20 feet/sec to simulate survivable crash velocities. In addition, static tests were conducted to gain a better understanding of the failure mechanisms seen in the dynamic tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atzeni, Simone; Ahn, Dong; Gopalakrishnan, Ganesh
2017-01-12
Archer is built on top of the LLVM/Clang compilers that support OpenMP. It applies static and dynamic analysis techniques to detect data races in OpenMP programs while incurring very low runtime and memory overhead. Static analyses identify data-race-free OpenMP regions and exclude them from runtime analysis, which is performed by the ThreadSanitizer included in LLVM/Clang.
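As a toy stand-in for the static reasoning such a tool performs: one sufficient condition for a parallel loop to be race-free is that no two iterations write the same location. The check below simulates that condition for simple index expressions; it is an illustration of the idea, not Archer's actual analysis.

```python
# Toy check: a parallel-for loop whose iterations write disjoint array
# locations cannot race on that array, so it can be excluded from runtime
# checking. We evaluate the write-index expression for every iteration
# and look for collisions.

def writes_are_disjoint(index_fn, n_iters):
    """True if index_fn(i) is distinct for every iteration i."""
    seen = set()
    for i in range(n_iters):
        idx = index_fn(i)
        if idx in seen:
            return False  # two iterations hit the same slot -> possible race
        seen.add(idx)
    return True

# a[i] = ... : each iteration owns its own slot -> provably race-free region
print(writes_are_disjoint(lambda i: i, 100))       # True
# a[i % 4] = ... : iterations collide -> left to the runtime checker
print(writes_are_disjoint(lambda i: i % 4, 100))   # False
```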
30 CFR 784.16 - Reclamation plan: Siltation structures, impoundments, and refuse piles.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Resources Conservation Service's Web site at http://www.info.usda.gov/scripts/lpsiis.dll/TR/TR_210_60.htm... State program approval process engineering design standards that ensure stability comparable to a 1.3 minimum static safety factor in lieu of engineering tests to establish compliance with the minimum static...
NASA Technical Reports Server (NTRS)
Pennock, A. P.; Swift, G.; Marbert, J. A.
1975-01-01
Externally blown flap models were tested for noise and performance at one-fifth scale in a static facility and at one-tenth scale in a large acoustically-treated wind tunnel. The static tests covered two flap designs, conical and ejector nozzles, third-flap noise-reduction treatments, internal blowing, and flap/nozzle geometry variations. The wind tunnel variables were triple-slotted or single-slotted flaps, sweep angle, and solid or perforated third flap. The static test program showed the following noise reductions at takeoff: 1.5 PNdB due to treating the third flap; 0.5 PNdB due to blowing from the third flap; 6 PNdB at flyover and 4.5 PNdB in the critical sideline plane (30 deg elevation) due to installation of the ejector nozzle. The wind tunnel program showed a reduction of 2 PNdB in the sideline plane due to a forward speed of 43.8 m/s (85 kn). The best combination of noise reduction concepts reduced the sideline noise of the reference aircraft at constant field length by 4 PNdB.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gartling, D.K.
The theoretical and numerical background for the finite element computer program, TORO II, is presented in detail. TORO II is designed for the multi-dimensional analysis of nonlinear, electromagnetic field problems described by the quasi-static form of Maxwell's equations. A general description of the boundary value problems treated by the program is presented. The finite element formulation and the associated numerical methods used in TORO II are also outlined. Instructions for the use of the code are documented in SAND96-0903; examples of problems analyzed with the code are also provided in the user's manual. 24 refs., 8 figs.
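The assemble-and-solve structure of a finite element field code can be illustrated in one dimension with the model problem -u'' = f, u(0) = u(1) = 0, solved with linear elements and a tridiagonal (Thomas) solve. This is a generic textbook sketch, not TORO II itself or its quasi-static Maxwell formulation.

```python
# 1D linear-element FEM sketch for -u'' = f on [0,1] with u(0)=u(1)=0.
# The assembled stiffness matrix is tridiagonal; we solve it with the
# Thomas algorithm. For constant f, nodal values are exact.

def fem_poisson(f_const, n_elems):
    h = 1.0 / n_elems
    m = n_elems - 1                  # interior nodes
    a = [-1.0 / h] * m               # sub-diagonal of stiffness matrix
    b = [2.0 / h] * m                # diagonal
    c = [-1.0 / h] * m               # super-diagonal
    d = [f_const * h] * m            # load vector for constant f
    # Forward elimination (Thomas algorithm)
    for i in range(1, m):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    # Back substitution
    u = [0.0] * m
    u[-1] = d[-1] / b[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u

# -u'' = 2 has exact solution u(x) = x(1 - x); check the interior nodes.
u = fem_poisson(2.0, 4)
print([round(v, 4) for v in u])  # [0.1875, 0.25, 0.1875]
```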
NASTRAN/FLEXSTAB procedure for static aeroelastic analysis
NASA Technical Reports Server (NTRS)
Schuster, L. S.
1984-01-01
Presented is a procedure for using the FLEXSTAB External Structural Influence Coefficients (ESIC) computer program to produce the structural data necessary for the FLEXSTAB Stability Derivatives and Static Stability (SD&SS) program. The SD&SS program computes trim state, stability derivatives, and pressure and deflection data for a flexible airplane having a plane of symmetry. The procedure used a NASTRAN finite-element structural model as the source of structural data in the form of flexibility matrices. Selection of a set of degrees of freedom, definition of structural nodes and panels, reordering and reformatting of the flexibility matrix, and redistribution of existing point mass data are among the topics discussed. Also discussed are boundary conditions and the NASTRAN substructuring technique.
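The reordering step can be illustrated with a small example: after a set of degrees of freedom is selected, the flexibility matrix's rows and columns are permuted symmetrically into the order the downstream program expects. This is a generic sketch of that operation, not the actual ESIC or SD&SS data format.

```python
# Sketch: symmetric reordering of a flexibility matrix after selecting
# and reordering degrees of freedom. Pure Python, no external libraries.

def reorder(matrix, order):
    """Return matrix with rows and columns permuted by `order`."""
    return [[matrix[r][c] for c in order] for r in order]

# A 3-DOF flexibility matrix (symmetric), DOFs originally numbered 0,1,2.
F = [
    [2.0, 0.5, 0.1],
    [0.5, 3.0, 0.2],
    [0.1, 0.2, 4.0],
]

# Suppose the target program wants DOF order 2, 0, 1:
F2 = reorder(F, [2, 0, 1])
print(F2[0][0])  # 4.0 -- old DOF 2 now leads
```

Permuting rows and columns together preserves symmetry, so the reordered matrix is still a valid flexibility matrix.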
Campbell, Karen J; Hesketh, Kylie D; McNaughton, Sarah A; Ball, Kylie; McCallum, Zoë; Lynch, John; Crawford, David A
2016-02-18
Understanding how we can prevent childhood obesity in scalable and sustainable ways is imperative. Early RCT interventions focused on the first two years of life have shown promise; however, differences in Body Mass Index between intervention and control groups diminish once the interventions cease. Innovative and cost-effective strategies seeking to continue to support parents to engender appropriate energy balance behaviours in young children need to be explored. The Infant Feeding Activity and Nutrition Trial (InFANT) Extend Program builds on the early outcomes of the Melbourne InFANT Program. This cluster randomized controlled trial will test the efficacy of an extended (33- versus 15-month) and enhanced (use of web-based materials and Facebook® engagement) version of the original Melbourne InFANT Program intervention in a new cohort. Outcomes at 36 months of age will be compared against the control group. This trial will provide important information regarding capacity and opportunities to maximize early childhood intervention effectiveness over the first three years of life. This study continues to build the evidence base regarding the design of cost-effective, scalable interventions to promote protective energy balance behaviors in early childhood, and in turn, promote improved child weight and health across the life course. ACTRN12611000386932. Registered 13 April 2011.
Online Program Capacity: Limited, Static, Elastic, or Infinite?
ERIC Educational Resources Information Center
Meyer, Katrina A.
2008-01-01
What is the capacity of online programs? Can these types of programs enroll more students than their face-to-face counterparts or not? This article looks at research on achieving cost-efficiencies through online learning, identifies the parts of an online program that can be changed to increase enrollments, and discusses whether a program's…
NASA Astrophysics Data System (ADS)
De Luca, A.; Iazzolino, A.; Salmon, J.-B.; Leng, J.; Ravaine, S.; Grigorenko, A. N.; Strangi, G.
2014-09-01
The interplay between plasmons and excitons in bulk metamaterials is investigated by performing spectroscopic studies, including variable angle pump-probe ellipsometry. Gain-functionalized gold nanoparticles have been densely packed through a microfluidic chip, representing a scalable process towards bulk metamaterials based on a self-assembly approach. Chromophores placed at the heart of plasmonic subunits ensure exciton-plasmon coupling to convey excitation energy to the quasi-static electric field of the plasmon states. The overall complex polarizability of the system, probed by variable angle spectroscopic ellipsometry, shows a significant modification under optical excitation, as demonstrated by the behavior of the ellipsometric angles Ψ and Δ as a function of suitable excitation fields. The plasmon resonances observed in densely packed gain-functionalized core-shell gold nanoparticles represent a promising step to enable a wide range of electromagnetic properties and fascinating applications of plasmonic bulk systems for advanced optical materials.
Load Balancing in Distributed Web Caching: A Novel Clustering Approach
NASA Astrophysics Data System (ADS)
Tiwari, R.; Kumar, K.; Khan, G.
2010-11-01
The World Wide Web suffers from scaling and reliability problems due to overloaded and congested proxy servers. Caching at local proxy servers helps, but cannot satisfy more than a third to half of requests; the remaining requests must still be sent to the remote origin servers. In this paper we develop an algorithm for a distributed web cache that incorporates cooperation among the proxy servers of one cluster. The algorithm combines distributed web-cache concepts with a static hierarchy of geographically based clusters of level-one proxy servers, plus a dynamic mechanism for redirecting requests among proxy servers when one cluster becomes congested. Congestion and scalability problems are dealt with through the clustering concept used in our approach. This results in a higher cache hit ratio, with lower latency for requested pages. The algorithm also guarantees data consistency between the original server objects and the proxy cache objects.
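A minimal sketch of a clustered lookup with a congestion fallback, assuming simple hash placement within each geographic cluster; the cluster layout, names, and congestion flags are invented for illustration and are not the paper's exact algorithm.

```python
# Toy model of clustered distributed web caching: a request hashes to a
# proxy within its geographic cluster; if that cluster is congested, the
# request is dynamically redirected to a neighbouring cluster.

import hashlib

CLUSTERS = {
    "eu": ["eu-proxy-1", "eu-proxy-2", "eu-proxy-3"],
    "us": ["us-proxy-1", "us-proxy-2"],
}
NEIGHBOUR = {"eu": "us", "us": "eu"}
CONGESTED = {"eu": False, "us": False}

def pick_proxy(url, cluster):
    if CONGESTED[cluster]:                 # dynamic mechanism: spill over
        cluster = NEIGHBOUR[cluster]
    proxies = CLUSTERS[cluster]
    h = int(hashlib.md5(url.encode()).hexdigest(), 16)
    return proxies[h % len(proxies)]       # static placement inside cluster

print(pick_proxy("http://example.org/page", "eu"))  # served within "eu"
CONGESTED["eu"] = True
print(pick_proxy("http://example.org/page", "eu"))  # now served from "us"
```

Hashing the URL keeps placement deterministic (the same page always maps to the same proxy while the cluster is healthy), which is what raises the hit ratio.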
Polarization-independent actively tunable colour generation on imprinted plasmonic surfaces
Franklin, Daniel; Chen, Yuan; Vazquez-Guardado, Abraham; Modak, Sushrut; Boroumand, Javaneh; Xu, Daming; Wu, Shin-Tson; Chanda, Debashis
2015-01-01
Structural colour arising from nanostructured metallic surfaces offers many benefits compared to conventional pigmentation based display technologies, such as increased resolution and scalability of their optical response with structure dimensions. However, once these structures are fabricated their optical characteristics remain static, limiting their potential application. Here, by using a specially designed nanostructured plasmonic surface in conjunction with high birefringence liquid crystals, we demonstrate a tunable polarization-independent reflective surface where the colour of the surface is changed as a function of applied voltage. A larger range of colour tunability than in previous reports is achieved by utilizing an engineered surface which allows full liquid crystal reorientation while maximizing the overlap between plasmonic fields and liquid crystal. In combination with imprinted structures of varying periods, a full range of colours spanning the entire visible spectrum is achieved, paving the way towards dynamic pixels for reflective displays. PMID:26066375
Dibai-Filho, Almir Vieira; de Oliveira, Alessandra Kelly; Girasol, Carlos Eduardo; Dias, Fabiana Rodrigues Cancio; Guirro, Rinaldo Roberto de Jesus
2017-04-01
To assess the additional effect of static ultrasound and diadynamic currents on myofascial trigger points in a manual therapy program to treat individuals with chronic neck pain. A single-blind randomized trial was conducted. Both men and women, between ages 18 and 45, with chronic neck pain and active myofascial trigger points in the upper trapezius were included in the study. Subjects were assigned to 3 different groups: group 1 (n = 20) was treated with manual therapy; group 2 (n = 20) was treated with manual therapy and static ultrasound; group 3 (n = 20) was treated with manual therapy and diadynamic currents. Individuals were assessed before the first treatment session, 48 hours after the first treatment session, 48 hours after the tenth treatment session, and 4 weeks after the last session. There was no group-versus-time interaction for Numeric Rating Scale, Neck Disability Index, Pain-Related Self-Statement Scale, pressure pain threshold, cervical range of motion, and skin temperature (F-value range, 0.089-1.961; P-value range, 0.106-0.977). Moreover, we found no differences between groups regarding electromyographic activity (P > 0.05). The use of static ultrasound or diadynamic currents on myofascial trigger points in upper trapezius associated with a manual therapy program did not generate greater benefits than manual therapy alone.
A scalable neuroinformatics data flow for electrophysiological signals using MapReduce.
Jayapandian, Catherine; Wei, Annan; Ramesh, Priya; Zonjy, Bilal; Lhatoo, Samden D; Loparo, Kenneth; Zhang, Guo-Qiang; Sahoo, Satya S
2015-01-01
Data-driven neuroscience research is providing new insights into the progression of neurological disorders and supporting the development of improved treatment approaches. However, the volume, velocity, and variety of neuroscience data generated from sophisticated recording instruments and acquisition methods have exacerbated the limited scalability of existing neuroinformatics tools. This makes it difficult for neuroscience researchers to effectively leverage the growing multi-modal neuroscience data to advance research in serious neurological disorders, such as epilepsy. We describe the development of the Cloudwave data flow, which uses new data partitioning techniques to store and analyze electrophysiological signals in distributed computing infrastructure. The Cloudwave data flow uses the MapReduce parallel programming model to implement an integrated signal data processing pipeline that scales with the large volumes of data generated at high velocity. Using an epilepsy domain ontology together with an epilepsy-focused extensible data representation format called Cloudwave Signal Format (CSF), the data flow addresses the challenge of data heterogeneity and is interoperable with existing neuroinformatics data representation formats, such as HDF5. The scalability of the Cloudwave data flow was evaluated using a 30-node cluster installed with the open-source Hadoop software stack. The results demonstrate that the Cloudwave data flow can process increasing volumes of signal data by leveraging Hadoop Data Nodes to reduce the total data processing time. The Cloudwave data flow is a template for developing highly scalable neuroscience data processing pipelines using MapReduce algorithms to support a variety of user applications.
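The MapReduce pattern at the core of such a data flow can be sketched without Hadoop: mappers emit (channel, partial-statistic) pairs for their signal partition and reducers combine them per channel. The channel names and sample values below are invented; in Cloudwave the same pattern runs on Hadoop over CSF partitions.

```python
# Minimal MapReduce sketch for a per-channel signal statistic. The
# "shuffle" phase that Hadoop performs is emulated here with a dict.

from collections import defaultdict

def mapper(partition):
    # Emit (channel, (sum, count)) for one partition of the recording.
    for channel, samples in partition.items():
        yield channel, (sum(samples), len(samples))

def reducer(channel, values):
    total = sum(s for s, _ in values)
    count = sum(c for _, c in values)
    return channel, total / count      # per-channel mean amplitude

partitions = [
    {"EEG-C3": [1.0, 2.0], "EEG-C4": [4.0]},
    {"EEG-C3": [3.0], "EEG-C4": [6.0, 8.0]},
]

shuffle = defaultdict(list)
for part in partitions:                # map phase (parallel in Hadoop)
    for key, val in mapper(part):
        shuffle[key].append(val)

means = dict(reducer(k, v) for k, v in shuffle.items())   # reduce phase
print(means)   # {'EEG-C3': 2.0, 'EEG-C4': 6.0}
```

Because each partition is processed independently in the map phase, adding data nodes shortens total processing time, which is the scaling behaviour the evaluation measures.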
A Rotating Bioreactor for Scalable Culture and Differentiation of Respiratory Epithelium
Raredon, Micha Sam Brickman; Ghaedi, Mahboobe; Calle, Elizabeth A.; Niklason, Laura E.
2015-01-01
Respiratory epithelium is difficult to grow in vitro, as it requires a well-maintained polarizing air–liquid interface (ALI) to maintain differentiation. Traditional methods rely on permeable membrane culture inserts, which are difficult to work with and are ill-suited for the production of large numbers of cells, such as the quantities required for cell-based clinical therapies. Herein, we investigate an alternative form of culture in which the cells are placed on a porous substrate that is continuously rolled, such that the monolayer of cells is alternately submerged in media or apically exposed to air. Our prototype bioreactor is reliable for up to 21 days of continuous culture and is designed for scale-up for large-scale cell culture with continuous medium and gas exchange. Normal human bronchial epithelial (NHBE) cells were cultured on an absorbent substrate in the reactor for periods of 7, 14, and 21 days and were compared to static controls that were submerged in media. Quantification by immunohistochemistry and quantitative PCR of markers specific to differentiated respiratory epithelium indicated increased cilia, mucous production, and tight junction formation in the rolled cultures, compared to static. Together with scanning electron microscopy and paraffin histology, the data indicate that the intermittent ALI provided by the rolling bioreactor promotes a polarized epithelial phenotype over a period of 21 days. PMID:26858899
NASA Astrophysics Data System (ADS)
Marker, Dan K.; Wilkes, James M.; Ruggiero, Eric J.; Inman, Daniel J.
2005-08-01
An innovative adaptive optic is discussed that provides a range of capabilities unavailable with either existing, or newly reported, research devices. It is believed that this device will be inexpensive and uncomplicated to construct and operate, with a large correction range that should dramatically relax the static and dynamic structural tolerances of a telescope. As the areal density of a telescope primary is reduced, the optimal optical figure and the structural stiffness are inherently compromised and this phenomenon will require a responsive, range-enhanced wavefront corrector. In addition to correcting for the aberrations in such innovative primary mirrors, sufficient throw remains to provide non-mechanical steering to dramatically improve the field of regard. Time dependent changes such as thermal disturbances can also be accommodated. The proposed adaptive optic will overcome some of the issues facing conventional deformable mirrors, as well as current and proposed MEMS-based deformable mirrors and liquid crystal based adaptive optics. Such a device is scalable to meter diameter apertures, eliminates high actuation voltages with minimal power consumption, provides long throw optical path correction, provides polychromatic dispersion free operation, dramatically reduces the effects of adjacent actuator influence, and provides a nearly 100% useful aperture. This article will reveal top-level details of the proposed construction and include portions of a static, dynamic, and residual aberration analysis. This device will enable certain designs previously conceived by visionaries in the optical community.
Singh, Kunwar; Tiwari, Satish Chandra; Gupta, Maneesha
2014-01-01
The paper introduces novel architectures for implementation of fully static master-slave flip-flops for low power, high performance, and high density. Based on the proposed structure, traditional C(2)MOS latch (tristate inverter/clocked inverter) based flip-flop is implemented with fewer transistors. The modified C(2)MOS based flip-flop designs mC(2)MOSff1 and mC(2)MOSff2 are realized using only sixteen transistors each while the number of clocked transistors is also reduced in case of mC(2)MOSff1. Postlayout simulations indicate that mC(2)MOSff1 flip-flop shows 12.4% improvement in PDAP (power-delay-area product) when compared with transmission gate flip-flop (TGFF) at 16X capacitive load which is considered to be the best design alternative among the conventional master-slave flip-flops. To validate the correct behaviour of the proposed design, an eight bit asynchronous counter is designed to layout level. LVS and parasitic extraction were carried out on Calibre, whereas layouts were implemented using IC station (Mentor Graphics). HSPICE simulations were used to characterize the transient response of the flip-flop designs in a 180 nm/1.8 V CMOS technology. Simulations were also performed at 130 nm, 90 nm, and 65 nm to reveal the scalability of both the designs at modern process nodes.
1965-04-01
S-IB-1, the first flight version of the Saturn IB launch vehicle's first stage (S-IB stage), undergoes a full-duration static firing in Saturn IB static test stand at the Marshall Space Flight Center (MSFC) on April 13, 1965. Developed by the MSFC and built by the Chrysler Corporation at the Michoud Assembly Facility (MAF) in New Orleans, Louisiana, the 90,000-pound booster utilized eight H-1 engines to produce a combined thrust of 1,600,000 pounds. Between April 1965 and July 1968, MSFC performed thirty-two static tests on twelve different S-IB stages.
NASA Technical Reports Server (NTRS)
Dewitt, R. L.; Mcintire, T. O.
1974-01-01
Pressurized expulsion tests were conducted to determine the effect of various physical parameters on the pressurant gas (methane, helium, hydrogen, and nitrogen) requirements during the expulsion of liquid methane from a 1.52-meter-(5-ft-) diameter spherical tank and to compare results with those predicted by an analytical program. Also studied were the effects on methane, helium, and hydrogen pressurant requirements of various slosh excitation frequencies and amplitudes, both with and without slosh suppressing baffles in the tank. The experimental results when using gaseous methane, helium, and hydrogen show that the predictions of the analytical program agreed well with the actual pressurant requirements for static tank expulsions. The analytical program could not be used for gaseous nitrogen expulsions because of the large quantities of nitrogen which can dissolve in liquid methane. Under slosh conditions, a pronounced increase in gaseous methane requirements was observed relative to results obtained for the static tank expulsions. Slight decreases in the helium and hydrogen requirements were noted under similar test conditions.
Scalable and Precise Abstraction of Programs for Trustworthy Software
2017-01-01
calculus for core Java.
• 14 months: A systematic abstraction of core Java.
• 18 months: A security auditor for core Java.
• 24 months: A contract auditor for full Java.
• 42 months: A web-deployed service for security auditing.
Scalability in Production System Programs
1994-01-01
met this tall, stooping, mostly bald man who glared at me through large glasses. I guess I must have liked being glared at because within a few days, he... [garbled OPS5 production-rule fragment: a rule (p stage-computation-0 ...) matching (cell ^id <id> ^modified ...) elements, modifying one cell and removing the other]
NREL Announces Third Round of Start-Ups to Participate in the Wells Fargo
Start-ups in the program develop innovative commercial building technologies that provide scalable solutions to reduce the energy impact of commercial buildings.
Teaching and Student Success: ACUE Makes the Link
ERIC Educational Resources Information Center
Mangum, Elmira
2017-01-01
In late 2014, higher education leaders and experts in pedagogy were convened by the Association of College and University Educators (ACUE) to develop a scalable and comprehensive program on the essentials of college teaching. The result is ACUE's Course in Effective Teaching Practices in which faculty learn about and implement evidence-based…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbara Chapman
OpenMP was not well recognized at the beginning of the project, around 2003, because of its limited use in DoE production applications and the immature hardware support for an efficient implementation. Yet in recent years it has been gradually adopted both in HPC applications, mostly in the form of MPI+OpenMP hybrid code, and in mid-scale desktop applications for scientific and experimental studies. We observed this trend and worked diligently to improve our OpenMP compiler and runtimes, as well as with the OpenMP standards organization, to make sure OpenMP evolves in a direction close to DoE missions. In the Center for Programming Models for Scalable Parallel Computing project, the HPCTools team at the University of Houston (UH), directed by Dr. Barbara Chapman, has been working with project partners, external collaborators, and hardware vendors to increase the scalability and applicability of OpenMP for multi-core (and future many-core) platforms and for distributed-memory systems by exploring different programming models, language extensions, compiler optimizations, and runtime library support.
Lee, Chaewoo
2014-01-01
Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862
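The flavour of such an optimization can be shown with a tiny brute-force version: assign one MCS per SVC layer so total utility is maximized subject to a shared airtime budget. The rates, utilities, and costs below are invented numbers, and the paper's real ILP carries many more variables and constraints.

```python
# Brute-force stand-in for the ILP: choose an MCS for each SVC layer to
# maximize utility subject to a shared airtime budget. Numbers invented.

from itertools import product

MCS = {            # mcs -> airtime cost per layer (robust MCS costs more)
    "QPSK":  3,
    "16QAM": 2,
    "64QAM": 1,
}
LAYERS = ["base", "enh1", "enh2"]
UTILITY = {        # (layer, mcs) -> utility: a robust base layer reaches everyone
    ("base", "QPSK"): 10, ("base", "16QAM"): 6, ("base", "64QAM"): 3,
    ("enh1", "QPSK"): 5,  ("enh1", "16QAM"): 4, ("enh1", "64QAM"): 2,
    ("enh2", "QPSK"): 3,  ("enh2", "16QAM"): 2, ("enh2", "64QAM"): 1,
}
BUDGET = 6         # total airtime units available

best = max(
    (assignment for assignment in product(MCS, repeat=len(LAYERS))
     if sum(MCS[m] for m in assignment) <= BUDGET),
    key=lambda a: sum(UTILITY[(l, m)] for l, m in zip(LAYERS, a)),
)
print(dict(zip(LAYERS, best)))
# Robust modulation goes to the base layer, aggressive to the enhancements.
```

An ILP solver reaches the same optimum without enumerating all 3^n assignments, which is what makes the formulation practical at scale.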
ILP-based maximum likelihood genome scaffolding
2014-01-01
Background Interest in de novo genome assembly has been renewed in the past decade due to rapid advances in high-throughput sequencing (HTS) technologies which generate relatively short reads resulting in highly fragmented assemblies consisting of contigs. Additional long-range linkage information is typically used to orient, order, and link contigs into larger structures referred to as scaffolds. Due to library preparation artifacts and erroneous mapping of reads originating from repeats, scaffolding remains a challenging problem. In this paper, we provide a scalable scaffolding algorithm (SILP2) employing a maximum likelihood model capturing read mapping uncertainty and/or non-uniformity of contig coverage which is solved using integer linear programming. A Non-Serial Dynamic Programming (NSDP) paradigm is applied to render our algorithm useful in the processing of larger mammalian genomes. To compare scaffolding tools, we employ novel quantitative metrics in addition to the extant metrics in the field. We have also expanded the set of experiments to include scaffolding of low-complexity metagenomic samples. Results SILP2 achieves better scalability through a more efficient NSDP algorithm than the previous release of SILP. The results show that SILP2 compares favorably to the previous methods OPERA and MIP in both scalability and accuracy for scaffolding single genomes of up to human size, and significantly outperforms them on scaffolding low-complexity metagenomic samples. Conclusions Equipped with NSDP, SILP2 is able to scaffold large mammalian genomes, resulting in the longest and most accurate scaffolds. The ILP formulation for the maximum likelihood model is shown to be flexible enough to handle metagenomic samples. PMID:25253180
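The orientation part of such a formulation can be illustrated with a brute-force miniature: pick an orientation for each contig so that the number of satisfied mate-pair links is maximal. The contig names and links below are invented, and SILP2's actual model also handles ordering, distances, and mapping uncertainty.

```python
# Miniature of the scaffolding orientation problem: each mate-pair link
# prefers its two contigs to be in the same or opposite orientation;
# choose orientations maximizing the number of satisfied links.
# Brute force here; SILP2 solves the full model with ILP + NSDP.

from itertools import product

CONTIGS = ["c1", "c2", "c3"]
# (contig_a, contig_b, same_orientation_expected)
LINKS = [
    ("c1", "c2", True),
    ("c2", "c3", False),
    ("c1", "c3", False),
    ("c1", "c2", True),
]

def satisfied(orient, link):
    a, b, same = link
    return (orient[a] == orient[b]) == same

best = max(
    (dict(zip(CONTIGS, o)) for o in product([+1, -1], repeat=len(CONTIGS))),
    key=lambda orient: sum(satisfied(orient, l) for l in LINKS),
)
print(sum(satisfied(best, l) for l in LINKS))  # all 4 links satisfied
```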
Comparison of Measures of Risk for Recidivism in Sexual Offenders
ERIC Educational Resources Information Center
Looman, Jan; Abracen, Jeffrey
2010-01-01
Data for both sexual and violent recidivism for the Static-99, Risk Matrix 2000 (RM 2000), Rapid Risk Assessment for Sex Offense Recidivism (RRASOR), and Static-2002 are reported for 419 released sexual offenders assessed at the Regional Treatment Centre Sexual Offender Treatment Program. Data are analyzed by offender type as well as the group as…
Computer program determines gas flow rates in piping systems
NASA Technical Reports Server (NTRS)
Franke, R.
1966-01-01
Computer program calculates the steady-state flow characteristics of an ideal compressible gas in a complex piping system. The program calculates the static and total temperature, static and total pressure, loss factor, and forces on each element in the piping system.
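The static and total quantities such a program reports follow from the standard isentropic relations for an ideal gas. A minimal sketch of those relations (not the NASA code itself, which handles full piping networks with losses):

```python
def total_over_static(mach, gamma=1.4):
    """Isentropic relations for an ideal compressible gas: ratios of
    total (stagnation) to static temperature and pressure at a given
    Mach number. gamma defaults to 1.4 (air)."""
    t_ratio = 1.0 + 0.5 * (gamma - 1.0) * mach ** 2
    p_ratio = t_ratio ** (gamma / (gamma - 1.0))
    return t_ratio, p_ratio
```

At Mach 1 in air this gives T0/T = 1.2 and p0/p of roughly 1.89, the familiar sonic-condition values.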
1967-08-01
This photograph is a view of the Saturn V S-IC-5 (first) flight stage static test firing at the S-IC-B1 test stand at the Mississippi Test Facility (MTF), Bay St. Louis, Mississippi. Beginning operations in 1966, the MTF has two test stands, a dual-position structure for running the S-IC stage at full throttle, and two separate stands for the S-II (Saturn V second) stage. It became the focus of the static test firing program. The completed S-IC stage was shipped from Michoud Assembly Facility (MAF) to the MTF. The stage was then installed into the 407-foot-high test stand for the static firing tests before shipment to the Kennedy Space Center for final assembly of the Saturn V vehicle. The MTF was renamed to the National Space Technology Laboratory (NSTL) in 1974 and later to the Stennis Space Center (SSC) in May 1988.
Analysis and testing of high entrainment single nozzle jet pumps with variable mixing tubes
NASA Technical Reports Server (NTRS)
Hickman, K. E.; Hill, P. G.; Gilbert, G. B.
1972-01-01
An analytical model was developed to predict the performance characteristics of axisymmetric single-nozzle jet pumps with variable area mixing tubes. The primary flow may be subsonic or supersonic. The computer program uses integral techniques to calculate the velocity profiles and the wall static pressures that result from the mixing of the supersonic primary jet and the subsonic secondary flow. An experimental program was conducted to measure mixing tube wall static pressure variations, velocity profiles, and temperature profiles in a variable area mixing tube with a supersonic primary jet. Static pressure variations were measured at four different secondary flow rates. These test results were used to evaluate the analytical model. The analytical results compared well to the experimental data. Therefore, the analysis is believed to be ready for use to relate jet pump performance characteristics to mixing tube design.
Toward Scalable Benchmarks for Mass Storage Systems
NASA Technical Reports Server (NTRS)
Miller, Ethan L.
1996-01-01
This paper presents guidelines for the design of a mass storage system benchmark suite, along with preliminary suggestions for programs to be included. The benchmarks will measure both peak and sustained performance of the system as well as predicting both short- and long-term behavior. These benchmarks should be both portable and scalable so they may be used on storage systems from tens of gigabytes to petabytes or more. By developing a standard set of benchmarks that reflect real user workload, we hope to encourage system designers and users to publish performance figures that can be compared with those of other systems. This will allow users to choose the system that best meets their needs and give designers a tool with which they can measure the performance effects of improvements to their systems.
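A toy sequential-write microbenchmark in the spirit of the proposed suite. The parameters are illustrative; a real mass-storage benchmark would scale transfer sizes to the system under test (gigabytes to petabytes), measure sustained as well as peak rates, and defeat caching more aggressively than a single fsync:

```python
import os
import tempfile
import time

def write_bandwidth(total_mb=8, block_kb=256):
    """Time a sequential write of total_mb megabytes in block_kb-KB
    blocks and return the achieved bandwidth in MB/s."""
    block = b"\0" * (block_kb * 1024)
    n_blocks = (total_mb * 1024) // block_kb
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(n_blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())   # force data to the device, not the cache
        elapsed = time.perf_counter() - start
    finally:
        os.remove(path)
    return total_mb / elapsed
```

Publishing such figures across block sizes and total volumes is exactly the kind of comparable, workload-reflecting measurement the paper advocates.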
A reconfigurable continuous-flow fluidic routing fabric using a modular, scalable primitive.
Silva, Ryan; Bhatia, Swapnil; Densmore, Douglas
2016-07-05
Microfluidic devices, by definition, are required to move liquids from one physical location to another. Given a finite and frequently fixed set of physical channels to route fluids, a primitive design element that allows reconfigurable routing of that fluid from any of n input ports to any n output ports will dramatically change the paradigms by which these chips are designed and applied. Furthermore, if these elements are "regular" regarding their design, the programming and fabrication of these elements becomes scalable. This paper presents such a design element called a transposer. We illustrate the design, fabrication and operation of a single transposer. We then scale this design to create a programmable fabric towards a general-purpose, reconfigurable microfluidic platform analogous to the Field Programmable Gate Array (FPGA) found in digital electronics.
A Generic Ground Framework for Image Expertise Centres and Small-Sized Production Centres
NASA Astrophysics Data System (ADS)
Sellé, A.
2009-05-01
Initiated by the Pleiades Earth Observation Program, CNES (the French Space Agency) has developed a generic collaborative framework for its image quality centre, highly customisable for any upcoming expertise centre. This collaborative framework has been designed to be used by a group of experts or scientists who want to share data and processing tools and manage interfaces with external entities. Its flexible and scalable architecture complies with the core requirements: defining a user data model with no impact on the software (generic data access), integrating user processing chains with a GUI builder and built-in APIs, and offering a scalable architecture to fit any performance requirement and accompany growing projects. CNES has granted licenses to two software companies that will be able to redistribute this framework to any customer.
Integrated Avionics System (IAS)
NASA Technical Reports Server (NTRS)
Hunter, D. J.
2001-01-01
As spacecraft designs converge toward miniaturization, and with the volumetric and mass constraints placed on avionics, programs will continue to advance the state of the art in spacecraft systems development with new challenges to reduce power, mass, and volume. Although new technologies have improved packaging densities, a total system packaging architecture is required that not only reduces spacecraft volume and mass budgets but also increases integration efficiency and provides modularity and scalability to accommodate multiple missions. With these challenges in mind, a novel packaging approach incorporates solutions that provide broader environmental applications, more flexible system interconnectivity, scalability, and simplified assembly, test, and integration schemes. This paper will describe the fundamental elements of the Integrated Avionics System (IAS), the Horizontally Mounted Cube (HMC) hardware design, and system and environmental test results. Additional information is contained in the original extended abstract.
Accelerating k-NN Algorithm with Hybrid MPI and OpenSHMEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Jian; Hamidouche, Khaled; Zheng, Jie
2015-08-05
Machine learning algorithms are benefiting from the continuous improvement of programming models, including MPI, MapReduce, and PGAS. The k-Nearest Neighbors (k-NN) algorithm is a widely used machine learning algorithm, applied to supervised learning tasks such as classification. Several parallel implementations of k-NN have been proposed in the literature and in practice. However, on high-performance computing systems with high-speed interconnects, it is important to further accelerate existing designs of the k-NN algorithm by taking advantage of scalable programming models. To improve the performance of k-NN in a large-scale environment with an InfiniBand network, this paper proposes several alternative hybrid MPI+OpenSHMEM designs and performs a systematic evaluation and analysis on typical workloads. The hybrid designs leverage one-sided memory access to better overlap communication with computation than the existing pure MPI design, and propose better schemes for efficient buffer management. The implementation based on the k-NN program from MaTEx with MVAPICH2-X (Unified MPI+PGAS Communication Runtime over InfiniBand) shows up to 9.0% time reduction for training the KDD Cup 2010 workload over 512 cores, and 27.6% time reduction for a small workload with balanced communication and computation. Experiments with varied numbers of cores show that our design maintains good scalability.
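The serial classification kernel that such hybrid designs distribute can be sketched in a few lines. This is a generic k-NN classifier, not the MaTEx code; in the parallel setting, the distance computation over the training set is what gets partitioned across ranks:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify query by majority vote among its k nearest training
    points. train is a list of (vector, label) pairs."""
    neighbors = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```

In an MPI+OpenSHMEM version, each rank would score its shard of `train`, and a k-way merge of per-rank candidate lists would replace the global `sorted` call.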
NASA Technical Reports Server (NTRS)
Ball, R. E.
1972-01-01
A digital computer program known as SATANS (static and transient analysis, nonlinear, shells) for the geometrically nonlinear static and dynamic response of arbitrarily loaded shells of revolution is presented. Instructions for the preparation of the input data cards and other information necessary for the operation of the program are described in detail and two sample problems are included. The governing partial differential equations are based upon Sanders' nonlinear thin shell theory for the conditions of small strains and moderately small rotations. The governing equations are reduced to uncoupled sets of four linear, second order, partial differential equations in the meridional and time coordinates by expanding the dependent variables in a Fourier sine or cosine series in the circumferential coordinate and treating the nonlinear modal coupling terms as pseudo loads. The derivatives with respect to the meridional coordinate are approximated by central finite differences, and the displacement accelerations are approximated by the implicit Houbolt backward difference scheme with a constant time interval. The boundaries of the shell may be closed, free, fixed, or elastically restrained. The program is coded in the FORTRAN 4 language and is dimensioned to allow a maximum of 10 arbitrary Fourier harmonics and a maximum product of the total number of meridional stations and the total number of Fourier harmonics of 200. The program requires 155,000 bytes of core storage.
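The Houbolt backward-difference scheme mentioned above approximates the acceleration from the current and three previous displacement samples at a constant time step. A minimal sketch of that one formula (the full SATANS time integration is of course much more involved):

```python
def houbolt_accel(u_n, u_nm1, u_nm2, u_nm3, dt):
    """Houbolt implicit backward-difference approximation of the
    acceleration at step n from displacements at steps n, n-1, n-2, n-3:
    u''_n ~ (2*u_n - 5*u_{n-1} + 4*u_{n-2} - u_{n-3}) / dt^2."""
    return (2.0 * u_n - 5.0 * u_nm1 + 4.0 * u_nm2 - u_nm3) / dt ** 2
```

For a quadratic displacement history u(t) = t^2 the formula recovers the exact constant acceleration of 2, a quick sanity check on the coefficients.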
2011-06-01
4. Conclusion: The Web-based AGeS system described in this paper is a computationally-efficient and scalable system for high-throughput genome… [Fragmentary excerpts from the proceedings of the 2011 DoD High Performance Computing Modernization Program Users Group Conference (HPCMP UGC 2011), June 20-23, 2011; the remaining snippets concern making web services more resilient to attack using autonomic computing techniques.]
Static and dynamic stability analysis of the space shuttle vehicle-orbiter
NASA Technical Reports Server (NTRS)
Chyu, W. J.; Cavin, R. K.; Erickson, L. L.
1978-01-01
The longitudinal static and dynamic stability of a Space Shuttle Vehicle-Orbiter (SSV Orbiter) model is analyzed using the FLEXSTAB computer program. Nonlinear effects are accounted for by application of a correction technique in the FLEXSTAB system; the technique incorporates experimental force and pressure data into the linear aerodynamic theory. A flexible Orbiter model is treated in the static stability analysis for the flight conditions of Mach number 0.9 for rectilinear flight (1 g) and for a pull-up maneuver (2.5 g) at an altitude of 15.24 km. Static stability parameters and structural deformations of the Orbiter are calculated at trim conditions for the dynamic stability analysis, and the characteristics of damping in pitch are investigated for a Mach number range of 0.3 to 1.2. The calculated results for both the static and dynamic stabilities are compared with the available experimental data.
Space Shuttle Flight Support Motor no. 1 (FSM-1)
NASA Technical Reports Server (NTRS)
Hughes, Phil D.
1990-01-01
Space Shuttle Flight Support Motor No. 1 (FSM-1) was static test fired on 15 Aug. 1990 at the Thiokol Corporation Static Test Bay T-24. FSM-1 was a full-scale, full-duration static test fire of a redesigned solid rocket motor. FSM-1 was the first of seven flight support motors which will be static test fired. The Flight Support Motor program validates components, materials, and manufacturing processes. In addition, FSM-1 was the full-scale motor for qualification of Western Electrochemical Corporation ammonium perchlorate. This motor was subjected to all controls and documentation requirements of CTP-0171, Revision A. Inspection and instrumentation data indicate that the FSM-1 static test firing was successful. The ambient temperature during the test was 87 F and the propellant mean bulk temperature was 82 F. Ballistics performance values were within the specified requirements. The overall performance of the FSM-1 components and test equipment was nominal.
Recommended Experimental Procedures for Evaluation of Abrupt Wing Stall Characteristics
NASA Technical Reports Server (NTRS)
Capone, F. J.; Hall, R. M.; Owens, D. B.; Lamar, J. E.; McMillin, S. N.
2003-01-01
This paper presents a review of the experimental program under the Abrupt Wing Stall (AWS) Program. Candidate figures of merit from conventional static tunnel tests are summarized and correlated with data obtained in unique free-to-roll tests. Where possible, free-to-roll results are also correlated with flight data. Based on extensive studies of static experimental figures of merit in the Abrupt Wing Stall Program for four different aircraft configurations, no one specific figure of merit consistently flagged a warning of potential lateral activity when actual activity was seen to occur in the free-to-roll experiments. However, these studies pointed out the importance of measuring and recording the root mean square signals of the force balance.
NASA Technical Reports Server (NTRS)
McGuire, Tim
1998-01-01
In this paper, we report the results of our recent research on the application of a multiprocessor Cray T916 supercomputer in modeling super-thermal electron transport in the earth's magnetic field. In general, this mathematical model requires numerical solution of a system of partial differential equations. The code we use for this model is moderately vectorized. By using Amdahl's Law for vector processors, it can be verified that the code is about 60% vectorized on a Cray computer. Speedup factors on the order of 2.5 were obtained compared to the unvectorized code. In the following sections, we discuss the methodology of improving the code. In addition to our goal of optimizing the code for solution on the Cray computer, we had the goal of scalability in mind. Scalability combines the concepts of portability with near-linear speedup. Specifically, a scalable program is one whose performance is portable across many different architectures with differing numbers of processors for many different problem sizes. Though we have access to a Cray at this time, the goal was to also have code which would run well on a variety of architectures.
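The 60%-vectorized and roughly 2.5x speedup figures quoted above are consistent with Amdahl's Law in the limit of a very fast vector unit, since 1/(1 - 0.6) = 2.5. A one-function sketch:

```python
def amdahl_speedup(frac_accelerated, s):
    """Amdahl's Law: overall speedup when a fraction frac_accelerated
    of the runtime is sped up by factor s and the rest is unchanged."""
    return 1.0 / ((1.0 - frac_accelerated) + frac_accelerated / s)
```

With frac_accelerated = 0.6, the speedup approaches 2.5 as s grows, and is already about 2.17 for a 10x vector unit, which is why the serial fraction, not the vector hardware, bounds the achievable gain.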
GENESUS: a two-step sequence design program for DNA nanostructure self-assembly.
Tsutsumi, Takanobu; Asakawa, Takeshi; Kanegami, Akemi; Okada, Takao; Tahira, Tomoko; Hayashi, Kenshi
2014-01-01
DNA has been recognized as an ideal material for bottom-up construction of nanometer scale structures by self-assembly. The generation of sequences optimized for unique self-assembly (GENESUS) program reported here is a straightforward method for generating sets of strand sequences optimized for self-assembly of arbitrarily designed DNA nanostructures by a generate-candidates-and-choose-the-best strategy. A scalable procedure to prepare single-stranded DNA having arbitrary sequences is also presented. Strands for the assembly of various structures were designed and successfully constructed, validating both the program and the procedure.
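The generate-candidates-and-choose-the-best strategy can be sketched as follows. This is a simplified stand-in for GENESUS: the real program optimizes sequence sets for unique self-assembly of a designed nanostructure, while this toy merely proposes random strands and keeps the one with the least unintended complementarity to an existing set (all function names and scoring choices here are illustrative):

```python
import random

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(s):
    """Reverse complement of a DNA strand."""
    return s.translate(COMP)[::-1]

def longest_common_substring(a, b):
    """Length of the longest contiguous run shared by a and b (simple DP)."""
    best = 0
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0] * (len(b) + 1)
        for j, cb in enumerate(b, 1):
            if ca == cb:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

def pick_strand(existing, length=20, n_candidates=200, seed=1):
    """Generate random candidate strands and choose the one whose worst
    unintended hybridization (longest stretch complementary to any
    existing strand) is smallest."""
    rng = random.Random(seed)

    def crosstalk(cand):
        return max((longest_common_substring(cand, revcomp(e))
                    for e in existing), default=0)

    cands = ["".join(rng.choice("ACGT") for _ in range(length))
             for _ in range(n_candidates)]
    return min(cands, key=crosstalk)
```

Repeating `pick_strand` while growing `existing` yields a set of mutually low-crosstalk strands, the flavor of optimization the two-step design program automates.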
DOE Office of Scientific and Technical Information (OSTI.GOV)
March-Leuba, S.; Jansen, J.F.; Kress, R.L.
A new program package, Symbolic Manipulator Laboratory (SML), for the automatic generation of both kinematic and static manipulator models in symbolic form is presented. Critical design parameters may be identified and optimized using symbolic models, as shown in the sample application presented for the Future Armor Rearm System (FARS) arm. The computer-aided development of the symbolic models yields equations with reduced numerical complexity. Important consideration has been given to the simplification of closed-form solutions and to user-friendly operation. The main emphasis of this research is the development of a methodology, implemented in a computer program, capable of generating symbolic kinematic and static-force models of manipulators. The fact that the models are obtained in trigonometrically reduced form is among the most significant results of this work and the most difficult to implement. Mathematica, a commercial program that allows symbolic manipulation, is used to implement the program package. SML is written such that the user can change any of the subroutines or create new ones easily. To assist the user, on-line help has been written to make SML a user-friendly package. Some sample applications are presented. The design and optimization of the 5-degrees-of-freedom (DOF) FARS manipulator using SML is discussed. Finally, the kinematic and static models of two different 7-DOF manipulators are calculated symbolically.
ERIC Educational Resources Information Center
Macek, Victor C.
The nine Reactor Statics Modules are designed to introduce students to the use of numerical methods and digital computers for calculation of neutron flux distributions in space and energy which are needed to calculate criticality, power distribution, and fuel burnup for both slow neutron and fast neutron fission reactors. The last module, RS-9,…
Parachute Aerodynamics From Video Data
NASA Technical Reports Server (NTRS)
Schoenenberger, Mark; Queen, Eric M.; Cruz, Juan R.
2005-01-01
A new data analysis technique for the identification of static and dynamic aerodynamic stability coefficients from wind tunnel test video data is presented. This new technique was applied to video data obtained during a parachute wind tunnel test program conducted in support of the Mars Exploration Rover Mission. Total angle-of-attack data obtained from video images were used to determine the static pitching moment curve of the parachute. During the original wind tunnel test program the static pitching moment curve had been determined by forcing the parachute to a specific total angle-of-attack and measuring the forces generated. It is shown with the new technique that this parachute, when free to rotate, trims at an angle-of-attack two degrees lower than was measured during the forced-angle tests. An attempt was also made to extract pitch damping information from the video data. Results suggest that the parachute is dynamically unstable at the static trim point and tends to become dynamically stable away from the trim point. These trends are in agreement with limit-cycle-like behavior observed in the video. However, the chaotic motion of the parachute produced results with large uncertainty bands.
Bayraktar, Deniz; Guclu-Gunduz, Arzu; Lambeck, Johan; Yazici, Gokhan; Aykol, Sukru; Demirci, Harun
2016-01-01
To determine and compare the effects of core stability exercise programs performed in two different environments in lumbar disc herniation (LDH) patients. Thirty-one patients who were diagnosed with LDH and had been experiencing pain or functional disability for at least 3 months were randomly divided into two groups: land-based exercise or water-specific therapy. In addition, 15 age- and sex-matched healthy individuals were recruited as healthy controls. Both groups underwent an 8-week (3 times/week) core stabilization exercise program. Primary outcomes were pain, trunk muscle static endurance, and perceived disability level. The secondary outcome was health-related quality of life. Static endurance of trunk muscles was found to be lower in the patients compared to the controls at baseline (p < 0.05). Both treatment groups showed significant improvements in all outcomes (p < 0.05) after the 8-week intervention. When the two treatment groups were compared, no differences were found in the amount of change after the intervention (p > 0.05). After the treatment, static endurance of trunk muscles of the LDH patients became similar to controls (p > 0.05). According to these results, core stabilization exercise training performed on land or in water could both be beneficial in LDH patients, and there is no difference between the environments. An 8-week core stabilization program performed in water or on land decreases pain level and improves functional status in LDH patients. Both programs seem beneficial for increasing health-related quality of life and static endurance of trunk muscles. Core stability exercises can be performed in water as well; no differences were found between the methods due to environment.
1964-12-01
At the Marshall Space Flight Center (MSFC), the fuel tank assembly for the Saturn V S-IC-T (static test stage) fuel tank assembly is mated to the liquid oxygen (LOX) tank in building 4705. This stage underwent numerous static firings at the newly-built S-IC Static Test Stand at the MSFC west test area. The S-IC (first) stage used five F-1 engines that produced a total thrust of 7,500,000 pounds as each engine produced 1,500,000 pounds of thrust. The S-IC stage lifted the Saturn V vehicle and Apollo spacecraft from the launch pad.
Effects of static tensile load on the thermal expansion of Gr/PI composite material
NASA Technical Reports Server (NTRS)
Farley, G. L.
1981-01-01
The effect of static tensile load on the thermal expansion of Gr/PI composite material was measured for seven different laminate configurations. A computer program was developed which implements laminate theory in a piecewise linear fashion to predict the coupled nonlinear thermomechanical behavior. Static tensile load significantly affected the thermal expansion characteristics of the laminates tested. This effect is attributed to a fiber instability micromechanical behavior of the constituent materials. Analytical results correlated reasonably well with free thermal expansion tests (no load applied to the specimen). However, correlation was poor for tests with an applied load.
Tuin, Stephen A; Pourdeyhimi, Behnam; Loboa, Elizabeth G
2016-05-01
The fabrication and characterization of novel high surface area hollow gilled fiber tissue engineering scaffolds via industrially relevant, scalable, repeatable, high speed, and economical nonwoven carding technology is described. Scaffolds were validated as tissue engineering scaffolds using human adipose derived stem cells (hASC) exposed to pulsatile fluid flow (PFF). The effects of fiber morphology on the proliferation and viability of hASC, as well as effects of varied magnitudes of shear stress applied via PFF on the expression of the early osteogenic gene marker runt related transcription factor 2 (RUNX2), were evaluated. Gilled fiber scaffolds led to a significant increase in proliferation of hASC after seven days in static culture, and exhibited fewer dead cells compared to pure PLA round fiber controls. Further, hASC-seeded scaffolds exposed to 3 and 6 dyn/cm(2) resulted in significantly increased mRNA expression of RUNX2 after one hour of PFF in the absence of soluble osteogenic induction factors. This is the first study to describe a method for the fabrication of high surface area gilled fibers and scaffolds. The scalable manufacturing process and potential fabrication across multiple nonwoven and woven platforms makes them promising candidates for a variety of applications that require high surface area fibrous materials. We report here for the first time the successful fabrication of novel high surface area gilled fiber scaffolds for tissue engineering applications. Gilled fibers led to a significant increase in proliferation of human adipose derived stem cells after one week in culture, and a greater number of viable cells compared to round fiber controls. Further, in the absence of osteogenic induction factors, gilled fibers led to significantly increased mRNA expression of an early marker for osteogenesis after exposure to pulsatile fluid flow.
This is the first study to describe gilled fiber fabrication and their potential for tissue engineering applications. The repeatable, industrially scalable, and versatile fabrication process makes them promising candidates for a variety of scaffold-based tissue engineering applications. Copyright © 2016 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Implementation of a Quality Assurance Review System for the Scalable Development of Online Courses
ERIC Educational Resources Information Center
Ozdemir, Devrim; Loose, Rich
2014-01-01
With the growing demand for quality online education in the US, developing quality online courses and online programs, and more importantly maintaining this quality, have been an inevitable concern for higher education institutes. Current literature on quality assurance in online education mostly focuses on the development of review models and…
Scalable Emergency Response System for Oceangoing Assets Report on Defining Proposed Program
2008-10-17
Mobile Phones, Civic Engagement, and School Performance in Pakistan. CEPA Working Paper No. 16-17
ERIC Educational Resources Information Center
Asim, Minahil; Dee, Thomas
2016-01-01
The effective governance of local public services depends critically on the civic engagement of local citizens. However, recent efforts to promote effective citizen oversight of the public-sector services in developing countries have had mixed results. This study discusses and evaluates a uniquely designed, low-cost, scalable program designed to…
Applications Development for a Parallel COTS Spaceborne Computer
NASA Technical Reports Server (NTRS)
Katz, Daniel S.; Springer, Paul L.; Granat, Robert; Turmon, Michael
2000-01-01
This presentation reviews the Remote Exploration and Experimentation Project (REE) program for utilization of scalable supercomputing technology in space. The implementation of REE will be the use of COTS hardware and software to the maximum extent possible, keeping overhead low. Since COTS systems will be used, with little or no special modification, there will be significant cost reduction.
Poirier, Josée; Bennett, Wendy L; Jerome, Gerald J; Shah, Nina G; Lazo, Mariana; Yeh, Hsin-Chieh; Clark, Jeanne M; Cobb, Nathan K
2016-02-09
The benefits of physical activity are well documented, but scalable programs to promote activity are needed. Interventions that assign tailored and dynamically adjusting goals could effect significant increases in physical activity but have not yet been implemented at scale. Our aim was to examine the effectiveness of an open access, Internet-based walking program that assigns daily step goals tailored to each participant. A two-arm, pragmatic randomized controlled trial compared the intervention to no treatment. Participants were recruited from a workplace setting and randomized to a no-treatment control (n=133) or to treatment (n=132). Treatment participants received a free wireless activity tracker and enrolled in the walking program, Walkadoo. Assessments were fully automated: the activity tracker recorded primary outcomes (steps) without intervention by the participant or investigators. The two arms were compared on change in steps per day from baseline to follow-up (after 6 weeks of treatment) using a two-tailed independent samples t test. Participants (N=265) were 66.0% (175/265) female with an average age of 39.9 years. Over half of the participants (142/265, 53.6%) were sedentary (<5000 steps/day) and 44.9% (119/265) were low to somewhat active (5000-9999 steps/day). The intervention group significantly increased their steps by 970 steps/day over control (P<.001), with treatment effects observed in sedentary (P=.04) and low-to-somewhat active (P=.004) participants alike. The program is effective in increasing daily steps. Participants benefited from the program regardless of their initial activity level. A tailored, adaptive approach using wireless activity trackers is realistically implementable and scalable. Clinicaltrials.gov NCT02229409, https://clinicaltrials.gov/ct2/show/NCT02229409 (Archived by WebCite at http://www.webcitation.org/6eiWCvBYe).
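The primary comparison above, a two-tailed independent-samples t test on change in steps per day, can be sketched as follows. The Welch form is shown as one common choice; the abstract does not state the trial's exact variance assumptions:

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Independent-samples t statistic (Welch form, unequal variances):
    difference in means divided by the combined standard error."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se = math.sqrt(va / na + vb / nb)
    return (mean(sample_a) - mean(sample_b)) / se
```

Applied to per-participant step-change vectors for the treatment and control arms, the resulting statistic (with Welch-Satterthwaite degrees of freedom) yields the reported two-tailed P value.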
Tezaur, I. K.; Perego, M.; Salinger, A. G.; ...
2015-04-27
This paper describes a new parallel, scalable and robust finite element based solver for the first-order Stokes momentum balance equations for ice flow. The solver, known as Albany/FELIX, is constructed using the component-based approach to building application codes, in which mature, modular libraries developed as a part of the Trilinos project are combined using abstract interfaces and template-based generic programming, resulting in a final code with access to dozens of algorithmic and advanced analysis capabilities. Following an overview of the relevant partial differential equations and boundary conditions, the numerical methods chosen to discretize the ice flow equations are described, along with their implementation. The results of several verification studies of the model accuracy are presented using (1) new test cases for simplified two-dimensional (2-D) versions of the governing equations derived using the method of manufactured solutions, and (2) canonical ice sheet modeling benchmarks. Model accuracy and convergence with respect to mesh resolution are then studied on problems involving a realistic Greenland ice sheet geometry discretized using hexahedral and tetrahedral meshes. Also explored as a part of this study is the effect of vertical mesh resolution on the solution accuracy and solver performance. The robustness and scalability of our solver on these problems are demonstrated. Lastly, we show that good scalability can be achieved by preconditioning the iterative linear solver using a new algebraic multilevel preconditioner, constructed based on the idea of semi-coarsening.
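The method-of-manufactured-solutions verification idea can be illustrated on a much simpler problem than first-order Stokes: pick an exact solution, derive the forcing it implies, solve numerically, and check that the error shrinks at the scheme's theoretical rate. A sketch for 1-D Poisson with a second-order central-difference scheme (illustrative only, unrelated to the Albany/FELIX code):

```python
import math

def solve_poisson(n):
    """Solve -u'' = f on [0,1], u(0)=u(1)=0, on n intervals with central
    differences, using the manufactured solution u = sin(pi x), which
    forces f = pi^2 sin(pi x). Returns the max-norm error."""
    h = 1.0 / n
    x = [i * h for i in range(n + 1)]
    f = [math.pi ** 2 * math.sin(math.pi * xi) for xi in x]
    # Thomas algorithm for the tridiagonal system with stencil (-1, 2, -1).
    a = [-1.0] * (n - 1)                    # sub-diagonal
    b = [2.0] * (n - 1)                     # diagonal
    c = [-1.0] * (n - 1)                    # super-diagonal
    d = [h * h * f[i] for i in range(1, n)]
    for i in range(1, n - 1):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u = [0.0] * (n - 1)
    u[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return max(abs(ui - math.sin(math.pi * x[i + 1])) for i, ui in enumerate(u))

def observed_order():
    """Observed convergence order from halving h; should approach 2."""
    e1, e2 = solve_poisson(32), solve_poisson(64)
    return math.log(e1 / e2, 2)
```

Recovering the expected order (here, 2) on manufactured problems is the same evidence of correct implementation the paper's 2-D MMS test cases provide for the ice flow solver.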
NASA Technical Reports Server (NTRS)
Giles, G. L.; Wallas, M.
1981-01-01
User documentation is presented for a computer program which considers the nonlinear properties of the strain isolator pad (SIP) in the static stress analysis of the shuttle thermal protection system. This program is generalized to handle an arbitrary SIP footprint including cutouts for instrumentation and filler bar. Multiple SIP surfaces are defined to model tiles in unique locations such as leading edges, intersections, and penetrations. The nonlinearity of the SIP is characterized by experimental stress displacement data for both normal and shear behavior. Stresses in the SIP are calculated using a Newton iteration procedure to determine the six rigid body displacements of the tile which develop reaction forces in the SIP to equilibrate the externally applied loads. This user documentation gives an overview of the analysis capabilities, a detailed description of required input data and an example to illustrate use of the program.
Large-scale parallel lattice Boltzmann-cellular automaton model of two-dimensional dendritic growth
NASA Astrophysics Data System (ADS)
Jelinek, Bohumir; Eshraghi, Mohsen; Felicelli, Sergio; Peters, John F.
2014-03-01
An extremely scalable lattice Boltzmann (LB)-cellular automaton (CA) model for simulations of two-dimensional (2D) dendritic solidification under forced convection is presented. The model incorporates effects of phase change, solute diffusion, melt convection, and heat transport. The LB model represents the diffusion, convection, and heat transfer phenomena. The dendrite growth is driven by a difference between actual and equilibrium liquid composition at the solid-liquid interface. The CA technique is deployed to track the new interface cells. The computer program was parallelized using the Message Passing Interface (MPI) technique. Parallel scaling of the algorithm was studied and major scalability bottlenecks were identified. Efficiency loss attributable to the high memory bandwidth requirement of the algorithm was observed when using multiple cores per processor. Parallel writing of the output variables of interest was implemented in the binary Hierarchical Data Format 5 (HDF5) to improve the output performance, and to simplify visualization. Calculations were carried out in single precision arithmetic without significant loss in accuracy, resulting in 50% reduction of memory and computational time requirements. The presented solidification model shows a very good scalability up to centimeter size domains, including more than ten million of dendrites. Catalogue identifier: AEQZ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEQZ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, UK Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 29,767 No. of bytes in distributed program, including test data, etc.: 3131,367 Distribution format: tar.gz Programming language: Fortran 90. Computer: Linux PC and clusters. Operating system: Linux. Has the code been vectorized or parallelized?: Yes. Program is parallelized using MPI. 
Number of processors used: 1-50,000 RAM: Memory requirements depend on the grid size Classification: 6.5, 7.7. External routines: MPI (http://www.mcs.anl.gov/research/projects/mpi/), HDF5 (http://www.hdfgroup.org/HDF5/) Nature of problem: Dendritic growth in undercooled Al-3 wt% Cu alloy melt under forced convection. Solution method: The lattice Boltzmann model solves the diffusion, convection, and heat transfer phenomena. The cellular automaton technique is deployed to track the solid/liquid interface. Restrictions: Heat transfer is calculated uncoupled from the fluid flow. Thermal diffusivity is constant. Unusual features: A novel technique, utilizing periodic duplication of a pre-grown “incubation” domain, is applied for the scale-up test. Running time: Running time varies from minutes to days depending on the domain size and number of computational cores.
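As a hedged illustration of the kind of collide-and-stream kernel such an LB solver parallelizes, here is a minimal pure-Python sketch of a D2Q5 lattice Boltzmann step for solute diffusion. This is not the distributed Fortran/MPI program described above; the grid size, relaxation time, and pure-diffusion equilibrium are illustrative assumptions.

```python
# Minimal D2Q5 lattice Boltzmann diffusion step (illustrative sketch only,
# not the paper's Fortran/MPI code).
N = 8                                    # small periodic grid (assumption)
W = [1/3, 1/6, 1/6, 1/6, 1/6]            # D2Q5 lattice weights
C_VEL = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
TAU = 1.0                                # BGK relaxation time (assumption)

def lb_step(f):
    """One collide-and-stream step; f[i][x][y] holds distributions."""
    conc = [[sum(f[i][x][y] for i in range(5)) for y in range(N)]
            for x in range(N)]
    g = [[[0.0] * N for _ in range(N)] for _ in range(5)]
    for i, (cx, cy) in enumerate(C_VEL):
        for x in range(N):
            for y in range(N):
                feq = W[i] * conc[x][y]        # equilibrium for pure diffusion
                post = f[i][x][y] - (f[i][x][y] - feq) / TAU   # BGK collision
                g[i][(x + cx) % N][(y + cy) % N] = post        # periodic streaming
    return g

# point source of solute in the middle of the grid
f = [[[W[i] if (x, y) == (N // 2, N // 2) else 0.0 for y in range(N)]
      for x in range(N)] for i in range(5)]
for _ in range(10):
    f = lb_step(f)
# total solute is conserved by collision and streaming
total = sum(f[i][x][y] for i in range(5) for x in range(N) for y in range(N))
```

The collision step conserves the zeroth moment (the concentration), so the total solute stays constant while the point source spreads; this conservation is what makes domain-decomposed parallel runs straightforward to validate.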
Li, Yang; Jiang, Xulin; Li, Ling; Chen, Zhi-Nan; Gao, Ge; Yao, Rui; Sun, Wei
2018-06-28
Human induced pluripotent stem cells (hiPSCs) are more likely to successfully avoid the immunological rejection and ethical problems that are often encountered by human embryonic stem cells in various stem cell studies and applications. To transfer hiPSCs from the laboratory to clinical applications, researchers must obtain sufficient cell numbers. In this study, 3D cell printing was used as a novel method for iPSC scalable expansion. Hydroxypropyl chitin (HPCH), utilized as a new type of bioink, and a set of optimized printing parameters were shown to achieve high cell survival (> 90%) after the printing process and high proliferation efficiency (~32.3-fold) during subsequent 10-day culture. After the culture, high levels of pluripotency maintenance were confirmed by both qualitative and quantitative assays. Compared with static suspension (SS) culture, hiPSC aggregates formed in 3D printed constructs showed a higher uniformity in size. Using a novel dual-fluorescent labelling method, hiPSC aggregates in the constructs were found to form mainly by <i>in situ</i> proliferation rather than by multicellular aggregation. This study revealed unique advantages of the non-ionic crosslinking bioink material HPCH, including high gel strength and rapid temperature response in hiPSC printing, and achieved primed-state hiPSC printing for the first time. Features achieved in this study, such as high cell yield, high pluripotency maintenance and uniform aggregation, provide a good foundation for further hiPSC studies on 3D micro-tissue differentiation and drug screening. © 2018 IOP Publishing Ltd.
Heidenreich, Elvio A; Ferrero, José M; Doblaré, Manuel; Rodríguez, José F
2010-07-01
Many problems in biology and engineering are governed by anisotropic reaction-diffusion equations with a very rapidly varying reaction term. This usually implies the use of very fine meshes and small time steps in order to accurately capture the propagating wave while avoiding the appearance of spurious oscillations in the wave front. This work develops a family of macro finite elements amenable for solving anisotropic reaction-diffusion equations with stiff reactive terms. The developed elements are incorporated on a semi-implicit algorithm based on operator splitting that includes adaptive time stepping for handling the stiff reactive term. A linear system is solved on each time step to update the transmembrane potential, whereas the remaining ordinary differential equations are solved uncoupled. The method allows solving the linear system on a coarser mesh thanks to the static condensation of the internal degrees of freedom (DOF) of the macroelements while maintaining the accuracy of the finer mesh. The method and algorithm have been implemented in parallel. The accuracy of the method has been tested on two- and three-dimensional examples demonstrating excellent behavior when compared to standard linear elements. The better performance and scalability of different macro finite elements against standard finite elements have been demonstrated in the simulation of a human heart and a heterogeneous two-dimensional problem with reentrant activity. Results have shown a reduction of up to four times in computational cost for the macro finite elements with respect to equivalent (same number of DOF) standard linear finite elements as well as good scalability properties.
An Interactive Computer-Based Training Program for Beginner Personal Computer Maintenance.
ERIC Educational Resources Information Center
Summers, Valerie Brooke
A computer-assisted instructional program, which was developed for teaching beginning computer maintenance to employees of Unisys, covered external hardware maintenance, proper diskette care, making software backups, and electrostatic discharge prevention. The procedure used in developing the program was based upon the Dick and Carey (1985) model…
Support of Multidimensional Parallelism in the OpenMP Programming Model
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Jost, Gabriele
2003-01-01
OpenMP is the current standard for shared-memory programming. While providing ease of parallel programming, the OpenMP programming model also has limitations which often affect the scalability of applications. Examples of these limitations are work distribution and point-to-point synchronization among threads. We propose extensions to the OpenMP programming model which allow the user to easily distribute the work in multiple dimensions and synchronize the workflow among the threads. The proposed extensions include four new constructs and the associated runtime library. They do not require changes to the source code and can be implemented based on the existing OpenMP standard. We illustrate the concept in a prototype translator and test it with benchmark codes and a cloud modeling code.
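The work-distribution limitation described above is essentially the problem of mapping a multidimensional iteration space onto a logical grid of threads. A minimal sketch of that mapping follows; the function names and the block-partition rule are illustrative assumptions, not the proposed OpenMP constructs.

```python
# Illustrative sketch of multidimensional work distribution: each thread
# on a ti x tj logical grid owns a contiguous 2D tile of the loop nest.
def block_range(n, nparts, part):
    """Half-open range [lo, hi) for block `part` of n items in nparts blocks."""
    base, rem = divmod(n, nparts)
    lo = part * base + min(part, rem)
    hi = lo + base + (1 if part < rem else 0)
    return lo, hi

def tile_for_thread(ni, nj, ti, tj, thread_id):
    """2D tile owned by `thread_id` on a ti x tj logical thread grid."""
    pi, pj = divmod(thread_id, tj)       # thread's position on the grid
    return block_range(ni, ti, pi), block_range(nj, tj, pj)

# a 100x60 loop nest on a 4x2 thread grid: thread 5 sits at grid slot (2, 1)
(ilo, ihi), (jlo, jhi) = tile_for_thread(100, 60, 4, 2, 5)
```

Distributing both dimensions this way, rather than only the outer loop, is what keeps per-thread tiles compact as thread counts grow.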
Report of the workshop on evidence-based design of national wildlife health programs
Nguyen, Natalie T.; Duff, J. Paul; Gavier-Widén, Dolores; Grillo, Tiggy; He, Hongxuan; Lee, Hang; Ratanakorn, Parntep; Rijks, Jolianne M.; Ryser-Degiorgis, Marie-Pierre; Sleeman, Jonathan M.; Stephen, Craig; Tana, Toni; Uhart, Marcela; Zimmer , Patrick
2017-05-08
SummaryThis report summarizes a Wildlife Disease Association sponsored workshop held in 2016. The overall objective of the workshop was to use available evidence and selected subject matter expertise to define the essential functions of a National Wildlife Health Program and the resources needed to deliver a robust and reliable program, including the basic infrastructure, workforce, data and information systems, governance, organizational capacity, and essential features, such as wildlife disease surveillance, diagnostic services, and epidemiological investigation. This workshop also provided the means to begin the process of defining the essential attributes of a national wildlife health program that could be scalable and adaptable to each nation’s needs.
MPI Runtime Error Detection with MUST: Advances in Deadlock Detection
Hilbrich, Tobias; Protze, Joachim; Schulz, Martin; ...
2013-01-01
The widely used Message Passing Interface (MPI) is complex and rich. As a result, application developers require automated tools to avoid and to detect MPI programming errors. We present the Marmot Umpire Scalable Tool (MUST) that detects such errors with significantly increased scalability. We present improvements to our graph-based deadlock detection approach for MPI, which cover future MPI extensions. Our enhancements also check complex MPI constructs that no previous graph-based detection approach handled correctly. Finally, we present optimizations for the processing of MPI operations that reduce runtime deadlock detection overheads. Existing approaches often require O(p) analysis time per MPI operation, for p processes. We empirically observe that our improvements lead to sub-linear or better analysis time per operation for a wide range of real-world applications.
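A much-simplified sketch of the graph-based idea: blocked processes form a wait-for graph, and a cycle in that graph signals a deadlock. This is an assumed caricature, not MUST's actual algorithm, which must also handle AND/OR wait semantics for constructs such as wildcard receives.

```python
# Toy wait-for-graph deadlock check (illustrative; real MPI deadlock
# detection handles AND/OR semantics, wildcards, and collectives).
def has_deadlock(waits_for):
    """Cycle detection in a wait-for graph {proc: set of procs it waits on}."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {p: WHITE for p in waits_for}
    def dfs(p):
        color[p] = GRAY
        for q in waits_for[p]:
            if color[q] == GRAY:         # back edge closes a cycle: deadlock
                return True
            if color[q] == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False
    return any(color[p] == WHITE and dfs(p) for p in waits_for)

# rank 0 blocks on a Recv from 1 while 1 blocks on a Recv from 0: deadlock
assert has_deadlock({0: {1}, 1: {0}})
# a chain 0 -> 1 -> 2 with rank 2 runnable: no cycle, no deadlock
assert not has_deadlock({0: {1}, 1: {2}, 2: set()})
```

The O(p) cost mentioned in the abstract comes from re-examining such graphs per operation; the paper's contribution is keeping that per-operation work sub-linear.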
Simplified Parallel Domain Traversal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson III, David J
2011-01-01
Many data-intensive scientific analysis techniques require global domain traversal, which over the years has been a bottleneck for efficient parallelization across distributed-memory architectures. Inspired by MapReduce and other simplified parallel programming approaches, we have designed DStep, a flexible system that greatly simplifies efficient parallelization of domain traversal techniques at scale. In order to deliver both simplicity to users as well as scalability on HPC platforms, we introduce a novel two-tiered communication architecture for managing and exploiting asynchronous communication loads. We also integrate our design with advanced parallel I/O techniques that operate directly on native simulation output. We demonstrate DStep by performing teleconnection analysis across ensemble runs of terascale atmospheric CO2 and climate data, and we show scalability results on up to 65,536 IBM BlueGene/P cores.
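The MapReduce-inspired decomposition can be caricatured as follows. This is a serial, illustrative sketch under assumed names; the real DStep system runs the map phase in parallel and overlaps it with asynchronous communication.

```python
# Serial caricature of a MapReduce-style domain traversal (names and
# structure assumed, not DStep's API): a map runs independently on each
# domain block, results are grouped by key, then a reduce combines groups.
from collections import defaultdict

def traverse(blocks, map_fn, reduce_fn):
    grouped = defaultdict(list)
    for block in blocks:                 # in DStep these run in parallel
        for key, value in map_fn(block):
            grouped[key].append(value)
    return {k: reduce_fn(vs) for k, vs in grouped.items()}

# toy teleconnection flavour: per-block (region, value) pairs reduced to means
blocks = [[("tropics", 2.0), ("poles", -1.0)], [("tropics", 4.0)]]
means = traverse(blocks, lambda b: b, lambda vs: sum(vs) / len(vs))
```

The point of the pattern is that the user writes only `map_fn` and `reduce_fn`; all traversal, grouping, and communication stay inside the runtime.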
Embedded parallel processing based ground control systems for small satellite telemetry
NASA Technical Reports Server (NTRS)
Forman, Michael L.; Hazra, Tushar K.; Troendly, Gregory M.; Nickum, William G.
1994-01-01
The use of networked terminals which utilize embedded processing techniques results in totally integrated, flexible, high-speed, reliable, and scalable systems suitable for telemetry and data processing applications such as mission operations centers (MOC). Synergies of these terminals, coupled with the capability of each terminal to receive incoming data, allow any defined display to be viewed on any terminal from the start of data acquisition. There is no single point of failure (other than with network input) such as exists with configurations where all input data go through a single front-end processor and then to a serial string of workstations. Missions dedicated to NASA's ozone measurements program utilize the methodologies which are discussed, and result in a multimission configuration of low-cost, scalable hardware and software which can be run by one flight operations team with low risk.
Scalable Nonparametric Low-Rank Kernel Learning Using Block Coordinate Descent.
Hu, En-Liang; Kwok, James T
2015-09-01
Nonparametric kernel learning (NPKL) is a flexible approach to learn the kernel matrix directly without assuming any parametric form. It can be naturally formulated as a semidefinite program (SDP), which, however, is not very scalable. To address this problem, we propose the combined use of low-rank approximation and block coordinate descent (BCD). Low-rank approximation avoids the expensive positive semidefinite constraint in the SDP by replacing the kernel matrix variable with V^TV, where V is a low-rank matrix. The resultant nonlinear optimization problem is then solved by BCD, which optimizes each column of V sequentially. It can be shown that the proposed algorithm has nice convergence properties and low computational complexity. Experiments on a number of real-world data sets show that the proposed algorithm outperforms state-of-the-art NPKL solvers.
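The key trick, replacing the kernel matrix with V^TV, removes the semidefinite constraint because any such product is positive semidefinite by construction: x^T(V^TV)x = ||Vx||^2 >= 0 for every x. A small illustrative check in pure Python (not the paper's solver; the factor V below is arbitrary):

```python
# Why K = V^T V needs no explicit PSD constraint: the quadratic form
# x^T K x equals ||V x||^2, which is non-negative for any choice of V.
def matvec(M, x):
    return [sum(mij * xj for mij, xj in zip(row, x)) for row in M]

def quad_form_vtv(V, x):
    """x^T (V^T V) x computed as ||V x||^2."""
    Vx = matvec(V, x)
    return sum(v * v for v in Vx)

V = [[1.0, -2.0, 0.5],
     [0.0,  3.0, 1.0]]                  # arbitrary rank-2 factor of a 3x3 kernel
for x in ([1, 0, 0], [0.3, -1.2, 2.0], [-1, -1, -1]):
    assert quad_form_vtv(V, x) >= 0.0   # PSD holds for free
```

BCD then optimizes the unconstrained columns of V one at a time, which is what makes the reformulated problem scalable.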
Design integration and noise studies for jet STOL aircraft. Volume 1: Program summary
NASA Technical Reports Server (NTRS)
Okeefe, V. O.; Kelley, G. S.
1972-01-01
This program was undertaken to develop, through analysis, design, experimental static testing, wind tunnel testing, and design integration studies, an augmentor wing jet flap configuration for a jet STOL transport aircraft having maximum propulsion and aerodynamic performance with minimum noise generation. The program had three basic elements: (1) static testing of a scale wing section to demonstrate augmentor performance and noise characteristics; (2) two-dimensional wind tunnel testing to determine flight speed effects on performance; and (3) system design and evaluation which integrated the augmentor information obtained into a complete system and ensured that the design was compatible with the requirements for a large STOL transport having a 500-ft sideline noise of 95 PNdB or less. This objective has been achieved.
Static tests of the propulsion system. [Propfan Test Assessment program
NASA Technical Reports Server (NTRS)
Withers, C. C.; Bartel, H. W.; Turnberg, J. E.; Graber, E. J.
1987-01-01
Advanced, highly-loaded, high-speed propellers, called propfans, are promising to revolutionize the transport aircraft industry by offering a 15- to 30-percent fuel savings over the most advanced turbofans without sacrificing passenger comfort or violating community noise standards. NASA Lewis Research Center and industry have been working jointly to develop the needed propfan technology. The NASA-funded Propfan Test Assessment (PTA) Program represents a key element of this joint program. In PTA, Lockheed-Georgia, working in concert with Hamilton Standard, Rohr Industries, Gulfstream Aerospace, and Allison, is developing a propfan propulsion system which will be mounted on the left wing of a modified Gulfstream GII aircraft and flight tested to verify the in-flight characteristics of a 9-foot diameter, single-rotation propfan. The propfan, called SR-7L, was designed and fabricated by Hamilton Standard under a separate NASA contract. Prior to flight testing, the PTA propulsion system was static tested at the Rohr Brown Field facility. In this test, propulsion system operational capability was verified and data was obtained on propfan structural response, system acoustic characteristics, and system performance. This paper reports on the results of the static tests.
Multiple Changes to Reusable Solid Rocket Motors, Identifying Hidden Risks
NASA Technical Reports Server (NTRS)
Greenhalgh, Phillip O.; McCann, Bradley Q.
2003-01-01
The Space Shuttle Reusable Solid Rocket Motor (RSRM) baseline is subject to various changes. Changes are necessary due to safety and quality improvements, environmental considerations, vendor changes, obsolescence issues, etc. The RSRM program has a goal to test changes on full-scale static test motors prior to flight due to the unique RSRM operating environment. Each static test motor incorporates several significant changes and numerous minor changes. Flight motors often implement multiple changes simultaneously. While each change is individually verified and assessed, the potential for changes to interact constitutes additional hidden risk. Mitigating this risk depends upon identification of potential interactions. Therefore, the ATK Thiokol Propulsion System Safety organization initiated the use of a risk interaction matrix to identify potential interactions that compound risk. Identifying risk interactions supports flight and test motor decisions. Uncovering hidden risks of a full-scale static test motor gives a broader perspective of the changes being tested. This broader perspective compels the program to focus on solutions for implementing RSRM changes with minimal/mitigated risk. This paper discusses use of a change risk interaction matrix to identify test challenges and uncover hidden risks to the RSRM program.
Static Schedulers for Embedded Real-Time Systems
1989-12-01
Because of the need for efficient scheduling algorithms in large-scale real-time systems, software engineers put a lot of effort into developing...provide static schedulers for the Embedded Real Time Systems with a single processor using the Ada programming language. The independent nonpreemptable...support the Computer Aided Rapid Prototyping for Embedded Real Time Systems so that we can determine whether the system, as designed, meets the required
NASA Technical Reports Server (NTRS)
Jackson, A. C.; Dorwald, F.
1982-01-01
The ground tests conducted on the advanced composite vertical fin (ACVF) program are described. The design and fabrication of the test fixture and the transition structure, static test of Ground Test Article (GTA) No. 1, rework of GTA No. 2, and static, damage tolerance, fail-safe and residual strength tests of GTA No. 2 are described.
A Hartree-Fock Application Using UPC++ and the New DArray Library
Ozog, David; Kamil, Amir; Zheng, Yili; ...
2016-07-21
The Hartree-Fock (HF) method is the fundamental first step for incorporating quantum mechanics into many-electron simulations of atoms and molecules, and it is an important component of computational chemistry toolkits like NWChem. The GTFock code is an HF implementation that, while it does not have all the features in NWChem, represents crucial algorithmic advances that reduce communication and improve load balance by doing an up-front static partitioning of tasks, followed by work stealing whenever necessary. To enable innovations in algorithms and exploit next generation exascale systems, it is crucial to support quantum chemistry codes using expressive and convenient programming models and runtime systems that are also efficient and scalable. Here, this paper presents an HF implementation similar to GTFock using UPC++, a partitioned global address space model that includes flexible communication, asynchronous remote computation, and a powerful multidimensional array library. UPC++ offers runtime features that are useful for HF such as active messages, a rich calculus for array operations, hardware-supported fetch-and-add, and functions for ensuring asynchronous runtime progress. We present a new distributed array abstraction, DArray, that is convenient for the kinds of random-access array updates and linear algebra operations on block-distributed arrays with irregular data ownership. Finally, we analyze the performance of atomic fetch-and-add operations (relevant for load balancing) and runtime attentiveness, then compare various techniques and optimizations for each. Our optimized implementation of HF using UPC++ and the DArrays library shows up to 20% improvement over GTFock with Global Arrays at scales up to 24,000 cores.
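The fetch-and-add pattern analyzed above for load balancing can be sketched as a shared task counter: each worker atomically claims the next global task id until the pool is exhausted. This is illustrative only; UPC++ uses hardware-supported remote atomics across nodes, emulated here with a lock and local threads.

```python
# Shared-counter dynamic task distribution via fetch-and-add (illustrative
# sketch; not UPC++'s remote atomics, which operate across nodes).
import threading

class FetchAdd:
    def __init__(self):
        self._v = 0
        self._lock = threading.Lock()
    def fetch_add(self, n=1):
        with self._lock:               # stands in for a hardware atomic
            old = self._v
            self._v += n
            return old

NTASKS = 1000
counter = FetchAdd()
done = [0] * NTASKS

def worker():
    while True:
        t = counter.fetch_add()        # atomically claim the next task id
        if t >= NTASKS:
            return                     # pool exhausted
        done[t] += 1                   # "execute" task t exactly once

threads = [threading.Thread(target=worker) for _ in range(4)]
for th in threads: th.start()
for th in threads: th.join()
```

Because `fetch_add` returns a unique old value to each caller, every task is executed exactly once regardless of how unevenly the workers progress, which is the load-balancing property the paper measures.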
[Effects of training on static and dynamic balance in elderly subjects who have had a fall or not].
Toulotte, C; Thévenon, A; Fabre, C
2004-11-01
To evaluate the effects of a physical training program on static and dynamic balance during single and dual task conditions in elderly subjects who have had a fall or not. Two groups, comprising a total of 33 elderly subjects, were trained: 16 who had had a fall were 69.2 +/- 5.0 years old and 17 who had not were 67.3 +/- 3.8 years old. All subjects underwent a unipedal test with eyes open and eyes closed, followed by gait assessment during single and dual motor task conditions, before and after a physical training program. After the training program, all subjects showed a significant decrease in the number of touch-downs in the unipedal test with eyes open (P < 0.05), six-fold for subjects who had fallen and four-fold for those who had not, and a decrease of 2.5-fold and 2-fold, respectively, with eyes closed (P < 0.05). All subjects showed a significant increase in speed (P < 0.05), cadence (P < 0.05) and stride length (P < 0.05) and a significant decrease in the single support time (P < 0.05) and stride time (P < 0.05) in gait assessment during single and dual task conditions after the training program. During the training program, no subjects fell. The physical training program improved static balance and quality of gait in elderly subjects who had had a fall and those who had not, which could contribute to minimizing and/or retarding the effects of aging and maintaining physical independence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
March-Leuba, S.; Jansen, J.F.; Kress, R.L.
1992-08-01
A new program package, Symbolic Manipulator Laboratory (SML), for the automatic generation of both kinematic and static manipulator models in symbolic form is presented. Critical design parameters may be identified and optimized using symbolic models as shown in the sample application presented for the Future Armor Rearm System (FARS) arm. The computer-aided development of the symbolic models yields equations with reduced numerical complexity. Important considerations have been placed on the simplification of closed-form solutions and on user-friendly operation. The main emphasis of this research is the development of a methodology, implemented in a computer program, capable of generating symbolic kinematic and static force models of manipulators. The fact that the models are generated in trigonometrically reduced form is among the most significant results of this work and was the most difficult to implement. Mathematica, a commercial program that allows symbolic manipulation, is used to implement the program package. SML is written such that the user can change any of the subroutines or create new ones easily. To assist the user, an on-line help system has been written to make SML a user-friendly package. Some sample applications are presented. The design and optimization of the 5-degrees-of-freedom (DOF) FARS manipulator using SML is discussed. Finally, the kinematic and static models of two different 7-DOF manipulators are calculated symbolically.
Matrix management in hospitals: testing theories of matrix structure and development.
Burns, L R
1989-09-01
A study of 315 hospitals with matrix management programs was used to test several hypotheses concerning matrix management advanced by earlier theorists. The study verifies that matrix management involves several distinctive elements that can be scaled to form increasingly complex types of lateral coordinative devices. The scalability of these elements is evident only cross-sectionally. The results show that matrix complexity is not an outcome of program age, nor does matrix complexity at the time of implementation appear to influence program survival. Matrix complexity, finally, is not determined by the organization's task diversity and uncertainty. The results suggest several modifications in prevailing theories of matrix organization.
Automated Verification of Design Patterns with LePUS3
NASA Technical Reports Server (NTRS)
Nicholson, Jonathan; Gasparis, Epameinondas; Eden, Ammon H.; Kazman, Rick
2009-01-01
Specification and [visual] modelling languages are expected to combine strong abstraction mechanisms with rigour, scalability, and parsimony. LePUS3 is a visual, object-oriented design description language axiomatized in a decidable subset of the first-order predicate logic. We demonstrate how LePUS3 is used to formally specify a structural design pattern and prove (verify) whether any Java™ 1.4 program satisfies that specification. We also show how LePUS3 specifications (charts) are composed and how they are verified fully automatically in the Two-Tier Programming Toolkit.
Mitchell, Jonathan I.; Nicklin, Wendy; Macdonald, Bernadette
2014-01-01
Across Canada and internationally, the public and governments at all levels have increasing expectations for quality of care, value for healthcare dollars and accountability. Within this reality, there is increasing recognition of the value of accreditation as a barometer of quality and as a tool to assess and improve accountability and efficiency in healthcare delivery. In this commentary, we show how three key attributes of the Accreditation Canada Qmentum accreditation program – measurement, scalability and currency – promote accountability in healthcare. PMID:25305398
Nelson, Russell T
2006-05-01
A pre-event static stretching program is often used to prepare an athlete for competition. Recent studies have suggested that static stretching may not be an effective method for stretching the muscle prior to competition. The intent of this study was to compare the immediate effect of static stretching, eccentric training, and no stretching/training on hamstring flexibility in high school and college athletes. Seventy-five athletes, with a mean age of 17.22 (+/- 1.30), were randomly assigned to one of three groups: a thirty-second static stretch performed once, an eccentric training protocol through a full range of motion, and a control group. All athletes had limited hamstring flexibility, defined as a 20° loss of knee extension measured with the femur held at 90° of hip flexion. A significant difference was indicated by follow-up analysis between the control group (gain = -1.08°) and both the static stretch (gain = 5.05°) and the eccentric training group (gain = 9.48°). In addition, the gains in the eccentric training group were significantly greater than those of the static stretch group. The findings of this study reveal that one session of eccentric training through a full range of motion improved hamstring flexibility better than the gains made by a static stretch group or a control group.
XMDS2: Fast, scalable simulation of coupled stochastic partial differential equations
NASA Astrophysics Data System (ADS)
Dennis, Graham R.; Hope, Joseph J.; Johnsson, Mattias T.
2013-01-01
XMDS2 is a cross-platform, GPL-licensed, open source package for numerically integrating initial value problems that range from a single ordinary differential equation up to systems of coupled stochastic partial differential equations. The equations are described in a high-level XML-based script, and the package generates low-level optionally parallelised C++ code for the efficient solution of those equations. It combines the advantages of high-level simulations, namely fast and low-error development, with the speed, portability and scalability of hand-written code. XMDS2 is a complete redesign of the XMDS package, and features support for a much wider problem space while also producing faster code. Program summary Program title: XMDS2 Catalogue identifier: AENK_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENK_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 2 No. of lines in distributed program, including test data, etc.: 872,490 No. of bytes in distributed program, including test data, etc.: 45,522,370 Distribution format: tar.gz Programming language: Python and C++. Computer: Any computer with a Unix-like system, a C++ compiler and Python. Operating system: Any Unix-like system; developed under Mac OS X and GNU/Linux. RAM: Problem dependent (roughly 50 bytes per grid point) Classification: 4.3, 6.5. External routines: The external libraries required are problem-dependent. Uses FFTW3 Fourier transforms (used only for FFT-based spectral methods), dSFMT random number generation (used only for stochastic problems), MPI message-passing interface (used only for distributed problems), HDF5, GNU Scientific Library (used only for Bessel-based spectral methods) and a BLAS implementation (used only for non-FFT-based spectral methods). Nature of problem: General coupled initial-value stochastic partial differential equations.
Solution method: Spectral method with method-of-lines integration Running time: Determined by the size of the problem
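The method-of-lines solution approach named above can be illustrated with a toy example: discretize space, then step the resulting ODE system in time. This is an assumed minimal form, far simpler than XMDS2's generated spectral code, shown here for 1D diffusion with explicit Euler steps.

```python
# Method-of-lines sketch for 1D periodic diffusion (illustrative only,
# not XMDS2-generated code): spatial discretization turns the PDE into
# an ODE system du/dt = rhs(u), integrated with explicit Euler.
N, L, D = 32, 1.0, 0.01
dx = L / N
dt = 0.2 * dx * dx / D          # stable explicit step (r = D*dt/dx^2 = 0.2)

def rhs(u):
    """Semi-discrete du/dt from a second-order periodic Laplacian."""
    return [D * (u[(i + 1) % N] - 2 * u[i] + u[(i - 1) % N]) / dx**2
            for i in range(N)]

u = [1.0 if i == N // 2 else 0.0 for i in range(N)]   # point initial condition
mass0 = sum(u)
for _ in range(200):
    du = rhs(u)
    u = [ui + dt * dui for ui, dui in zip(u, du)]     # explicit Euler step
```

XMDS2 swaps the finite-difference Laplacian for spectral derivatives and the Euler step for higher-order integrators, but the split between spatial discretization and time integration is the same.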
Scalable Quantum Information Processing and Applications
2008-01-19
qubit logic gates, and finally emitting an entangled photon from the single-photon emitter. For the program, we proposed to demonstrate the...coherent, single-photon transmitter/receiver system. These requirements included careful tailoring of the g factor for conduction-band electrons in...physics required for the realization of a spin-coherent, single-photon transmitter/receiver system.
Ultra-Dense Quantum Communication Using Integrated Photonic Architecture: First Annual Report
2011-08-24
The goal of this program is to establish a fundamental information-theoretic understanding of quantum secure communication and to devise a practical, scalable implementation of quantum key distribution protocols in an integrated photonic architecture. We report our progress on experimental and
NASA Technical Reports Server (NTRS)
Bielawa, Richard L.; Hefner, Rachel E.; Castagna, Andre
1991-01-01
The results are presented of an analytic and experimental research program involving a Sikorsky S-55 helicopter tail cone directed ultimately to the improved structural analysis of airframe substructures typical of moderate sized helicopters of metal semimonocoque construction. Experimental static strain and dynamic shake-testing measurements are presented. Correlation studies of each of these tests with a PC-based finite element analysis (COSMOS/M) are described. The tests included static loadings at the end of the tail cone supported in the cantilever configuration as well as vibrational shake-testing in both the cantilever and free-free configurations.
1967-09-09
This photograph depicts the F-1 engine firing in the Marshall Space Flight Center’s F-1 Engine Static Test Stand. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. It is a vertical engine firing test stand, 239 feet in elevation and 4,600 square feet in area at the base, designed to assist in the development of the F-1 Engine. Capability is provided for static firing of 1.5 million pounds of thrust using liquid oxygen and kerosene. The foundation of the stand is keyed into the bedrock approximately 40 feet below grade.
Scalable PGAS Metadata Management on Extreme Scale Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Agarwal, Khushbu; Straatsma, TP
Programming models intended to run on exascale systems have a number of challenges to overcome, especially the sheer size of the system as measured by the number of concurrent software entities created and managed by the underlying runtime. It is clear from the size of these systems that any state maintained by the programming model has to be strictly sub-linear in size, in order not to overwhelm memory usage with pure overhead. A principal feature of Partitioned Global Address Space (PGAS) models is providing easy access to global-view distributed data structures. In order to provide efficient access to these distributed data structures, PGAS models must keep track of metadata such as where array sections are located with respect to processes/threads running on the HPC system. As PGAS models and applications become ubiquitous on very large trans-petascale systems, a key component of their performance and scalability will be efficient and judicious use of memory for model overhead (metadata) compared to application data. We present an evaluation of several strategies to manage PGAS metadata that exhibit different space/time tradeoffs. We use two real-world PGAS applications to capture metadata usage patterns and gain insight into their communication behavior.
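The space/time trade-off in PGAS metadata can be made concrete with a hypothetical Python sketch (not from the paper; all names and values are illustrative): for a regular block-cyclic distribution the owner of any element is computed from two stored parameters, so metadata is O(1) per array, while an irregular layout needs a directory entry per array section.

```python
# Hypothetical sketch of PGAS metadata strategies with different
# space/time trade-offs; names and numbers are illustrative.
def owner_block_cyclic(global_index, block_size, nprocs):
    # O(1) metadata: only (block_size, nprocs) is stored per array
    return (global_index // block_size) % nprocs

# Irregular layout: (start, end) -> owning rank, one entry per section,
# so metadata grows with the number of distributed sections
section_directory = {(0, 256): 3, (256, 512): 0, (512, 1024): 2}

owner = owner_block_cyclic(1030, block_size=256, nprocs=8)
```

The computed-ownership form trades a little arithmetic per access for strictly sub-linear metadata, which is the property the abstract argues exascale runtimes need.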
Scalable Replay with Partial-Order Dependencies for Message-Logging Fault Tolerance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lifflander, Jonathan; Meneses, Esteban; Menon, Harshita
2014-09-22
Deterministic replay of a parallel application is commonly used for discovering bugs or to recover from a hard fault with message-logging fault tolerance. For message passing programs, a major source of overhead during forward execution is recording the order in which messages are sent and received. During replay, this ordering must be used to deterministically reproduce the execution. Previous work in replay algorithms often makes minimal assumptions about the programming model and application in order to maintain generality. However, in many cases, only a partial order must be recorded due to determinism intrinsic in the code, ordering constraints imposed by the execution model, and events that are commutative (their relative execution order during replay does not need to be reproduced exactly). In this paper, we present a novel algebraic framework for reasoning about the minimum dependencies required to represent the partial order for different concurrent orderings and interleavings. By exploiting this theory, we improve on an existing scalable message-logging fault tolerance scheme. The improved scheme scales to 131,072 cores on an IBM BlueGene/P with up to 2x lower overhead than one that records a total order.
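The idea of recording a partial rather than a total order can be sketched as follows. This hypothetical Python fragment is not the paper's algorithm; it simply skips logging the relative order of commutative deliveries, and for clarity it enumerates all ordered pairs rather than a reduced dependency set.

```python
# Hypothetical sketch: record only the ordering constraints that replay
# must preserve, treating commutative message deliveries as unordered.
def record_dependencies(events, commutative):
    """Return the ordered pairs replay must reproduce.

    events      -- message ids in the delivery order observed at runtime
    commutative -- ids whose relative order does not affect the outcome
    """
    deps = []
    for i, a in enumerate(events):
        for b in events[i + 1:]:
            if a in commutative and b in commutative:
                continue  # commutative pair: order is free at replay
            deps.append((a, b))
    return deps

total = record_dependencies(["m1", "m2", "m3"], commutative=set())
partial = record_dependencies(["m1", "m2", "m3"], commutative={"m2", "m3"})
# Fewer recorded pairs means lower forward-execution logging overhead.
```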
Programmers manual for static and dynamic reusable surface insulation stresses (resist)
NASA Technical Reports Server (NTRS)
Ogilvie, P. L.; Levy, A.; Austin, F.; Ojalvo, I. U.
1974-01-01
Programming information for the RESIST program for the dynamic and thermal stress analysis of the space shuttle surface insulation is presented. The overall flow chart of the program, overlay chart, data set allocation, and subprogram calling sequence are given along with a brief description of the individual subprograms and typical subprogram output.
Computer program user's manual for advanced general aviation propeller study
NASA Technical Reports Server (NTRS)
Worobel, R.
1972-01-01
A user's manual is presented for a computer program for predicting the performance (static, flight, and reverse), noise, weight and cost of propellers for advanced general aviation aircraft of the 1980 time period. Complete listings of this computer program with detailed instructions and samples of input and output are included.
Ability-Training-Oriented Automated Assessment in Introductory Programming Course
ERIC Educational Resources Information Center
Wang, Tiantian; Su, Xiaohong; Ma, Peijun; Wang, Yuying; Wang, Kuanquan
2011-01-01
Learning to program is a difficult process for novice programmers. We developed AutoLEP, an automated learning and assessment system, to help novice programmers acquire programming skills. AutoLEP is ability-training-oriented. It adopts a novel assessment mechanism, which combines static analysis with dynamic testing to analyze student…
Hakim, Renée M; Ross, Michael D; Runco, Wendy; Kane, Michael T
2017-02-01
The purpose of this study was to investigate the impact of a community-based aquatic exercise program on physical performance among adults with mild to moderate intellectual disability (ID). Twenty-two community-dwelling adults with mild to moderate ID volunteered to participate in this study. Participants completed an 8-week aquatic exercise program (2 days/wk, 1 hr/session). Measures of physical performance, which were assessed prior to and following the completion of the aquatic exercise program, included the timed-up-and-go test, 6-min walk test, 30-sec chair stand test, 10-m timed walk test, hand grip strength, and the static plank test. When comparing participants' measures of physical performance prior to and following the 8-week aquatic exercise program, improvements were seen in all measures, but the change in scores for the 6-min walk test, 30-sec chair stand test, and the static plank test achieved statistical significance (P < 0.05). An 8-week group aquatic exercise program for adults with ID may promote improvements in endurance and balance/mobility.
Computer program to predict noise of general aviation aircraft: User's guide
NASA Technical Reports Server (NTRS)
Mitchell, J. A.; Barton, C. K.; Kisner, L. S.; Lyon, C. A.
1982-01-01
Program NOISE predicts General Aviation Aircraft far-field noise levels at FAA FAR Part 36 certification conditions. It will also predict near-field and cabin noise levels for turboprop aircraft and static engine component far-field noise levels.
Large-scale parallel genome assembler over cloud computing environment.
Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong
2017-06-01
The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure over a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of a traditional HPC cluster.
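The de Bruijn graph at the heart of assemblers like GiGA can be sketched in a few lines. This toy, single-machine Python version (not the distributed Hadoop/Giraph implementation) maps each k-mer of a read to an edge from its (k-1)-mer prefix to its (k-1)-mer suffix; unbranched paths in the resulting graph spell contigs.

```python
# Toy sketch of de Bruijn graph construction; real inputs are terabyte
# scale and the graph is partitioned across compute nodes.
def de_bruijn_edges(reads, k):
    edges = {}
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            edges.setdefault(kmer[:-1], []).append(kmer[1:])
    return edges

reads = ["ACGTAC"]          # toy input
graph = de_bruijn_edges(reads, k=3)
# 3-mers ACG, CGT, GTA, TAC yield edges AC->CG, CG->GT, GT->TA, TA->AC
```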
MOLAR: Modular Linux and Adaptive Runtime Support for HEC OS/R Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frank Mueller
2009-02-05
MOLAR is a multi-institution research effort that concentrates on adaptive, reliable, and efficient operating and runtime system solutions for ultra-scale high-end scientific computing on the next generation of supercomputers. This research addresses the challenges outlined by the FAST-OS (forum to address scalable technology for runtime and operating systems) and HECRTF (high-end computing revitalization task force) activities by providing a modular Linux and adaptable runtime support for high-end computing operating and runtime systems. The MOLAR research has the following goals to address these issues. (1) Create a modular and configurable Linux system that allows customized changes based on the requirements of the applications, runtime systems, and cluster management software. (2) Build runtime systems that leverage the OS modularity and configurability to improve efficiency, reliability, scalability, ease-of-use, and provide support to legacy and promising programming models. (3) Advance computer reliability, availability and serviceability (RAS) management systems to work cooperatively with the OS/R to identify and preemptively resolve system issues. (4) Explore the use of advanced monitoring and adaptation to improve application performance and predictability of system interruptions. The overall goal of the research conducted at NCSU is to develop scalable algorithms for high-availability without single points of failure and without single points of control.
ESD prevention, combating ESD problem — Solutions
NASA Astrophysics Data System (ADS)
Duban, M.
2002-12-01
In today's electronic equipment manufacturing, managing an ESD (electrostatic discharge) plan is an integral part of a complete quality program. Everybody has encountered static electricity at one time or another, but a discharge through the human body is only felt when the potential before discharge exceeds 3000 volts, whereas components can be damaged at sensitivities of less than 20 volts!
Watson, Dennis P; Ray, Bradley; Robison, Lisa; Xu, Huiping; Edwards, Rhiannon; Salyers, Michelle P; Hill, James; Shue, Sarah
2017-01-01
There is a lack of evidence-based substance use disorder treatment and services targeting returning inmates. Substance Use Programming for Person-Oriented Recovery and Treatment (SUPPORT) is a community-driven, recovery-oriented approach to substance abuse care which has the potential to address this service gap. SUPPORT is modeled after Indiana's Access to Recovery program, which was closed due to lack of federal support despite positive improvements in clients' recovery outcomes. SUPPORT builds on noted limitations of Indiana's Access to Recovery program. The ultimate goal of this project is to establish SUPPORT as an effective and scalable recovery-oriented system of care. A necessary step we must take before launching a large clinical trial is pilot testing the SUPPORT intervention. The pilot will take place at Public Advocates in Community Re-Entry (PACE), a nonprofit serving individuals with felony convictions who are located in Marion County, Indiana (Indianapolis). The pilot will follow a basic parallel randomized design to compare clients receiving SUPPORT with clients receiving standard services. A total of 80 clients within 3 months of prison release will be recruited to participate and randomly assigned to one of the two intervention arms. Quantitative measures will be collected at multiple time points to understand SUPPORT's impact on recovery capital and outcomes. We will also collect qualitative data from SUPPORT clients to better understand their program and post-discharge experiences. Successful completion of this pilot will prepare us to conduct a multi-site clinical trial. The ultimate goal of this future work is to develop an evidence-based and scalable approach to treating substance use disorder among persons returning to society after incarceration. ClinicalTrials.gov (Clinical Trials ID: NCT03132753 and Protocol Number: 1511731907). Registered 28 April 2017.
Thermally efficient and highly scalable In2Se3 nanowire phase change memory
NASA Astrophysics Data System (ADS)
Jin, Bo; Kang, Daegun; Kim, Jungsik; Meyyappan, M.; Lee, Jeong-Soo
2013-04-01
The electrical characteristics of nonvolatile In2Se3 nanowire phase change memory are reported. Size-dependent memory switching behavior was observed in nanowires of varying diameters and the reduction in set/reset threshold voltage was as low as 3.45 V/6.25 V for a 60 nm nanowire, which is promising for highly scalable nanowire memory applications. Also, size-dependent thermal resistance of In2Se3 nanowire memory cells was estimated with values as high as 5.86×10^13 and 1.04×10^6 K/W for a 60 nm nanowire memory cell in amorphous and crystalline phases, respectively. Such high thermal resistances are beneficial for improvement of thermal efficiency and thus reduction in programming power consumption based on Fourier's law. The evaluation of thermal resistance provides an avenue to develop thermally efficient memory cell architecture.
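The link between thermal resistance and programming power follows from a lumped Fourier's-law model: the steady-state power needed to sustain a temperature rise dT across thermal resistance R_th is P = dT / R_th. The sketch below uses the thermal resistances reported in the abstract; the target temperature rise is an assumed illustrative number, not measured data.

```python
# Back-of-the-envelope sketch of why high thermal resistance lowers
# programming power (lumped Fourier's-law model; dT is an assumption).
def programming_power(delta_T, R_th):
    return delta_T / R_th  # watts

dT = 600.0                                      # assumed rise to melting (K)
P_amorphous = programming_power(dT, 5.86e13)    # 60 nm cell, amorphous phase
P_crystalline = programming_power(dT, 1.04e6)   # same cell, crystalline phase
# Higher thermal resistance -> lower required programming power.
```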
Zhang, Mingyuan; Velasco, Ferdinand T.; Musser, R. Clayton; Kawamoto, Kensaku
2013-01-01
Enabling clinical decision support (CDS) across multiple electronic health record (EHR) systems has been a desired but largely unattained aim of clinical informatics, especially in commercial EHR systems. A potential opportunity for enabling such scalable CDS is to leverage vendor-supported, Web-based CDS development platforms along with vendor-supported application programming interfaces (APIs). Here, we propose a potential staged approach for enabling such scalable CDS, starting with the use of custom EHR APIs and moving towards standardized EHR APIs to facilitate interoperability. We analyzed three commercial EHR systems for their capabilities to support the proposed approach, and we implemented prototypes in all three systems. Based on these analyses and prototype implementations, we conclude that the approach proposed is feasible, already supported by several major commercial EHR vendors, and potentially capable of enabling cross-platform CDS at scale. PMID:24551426
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2016-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.
Status Report on NEAMS PROTEUS/ORIGEN Integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wieselquist, William A
2016-02-18
The US Department of Energy's Nuclear Energy Advanced Modeling and Simulation (NEAMS) Program has contributed significantly to the development of the PROTEUS neutron transport code at Argonne National Laboratory and to the Oak Ridge Isotope Generation and Depletion Code (ORIGEN) depletion/decay code at Oak Ridge National Laboratory. PROTEUS's key capability is the efficient and scalable (up to hundreds of thousands of cores) neutron transport solver on general, unstructured, three-dimensional finite-element-type meshes. The scalability and mesh generality enable the transfer of neutron and power distributions to other codes in the NEAMS toolkit for advanced multiphysics analysis. Recently, ORIGEN has received considerable modernization to provide the high-performance depletion/decay capability within the NEAMS toolkit. This work presents a description of the initial integration of ORIGEN in PROTEUS, mainly performed during FY 2015, with minor updates in FY 2016.
Seventh NASTRAN User's Colloquium
NASA Technical Reports Server (NTRS)
1978-01-01
The general application of finite element methodology and the specific application of NASTRAN to a wide variety of static and dynamic structural problems are described. Topics include: fluids and thermal applications, NASTRAN programming, substructuring methods, unique new applications, general auxiliary programs, specific applications, and new capabilities.
NASA Astrophysics Data System (ADS)
Milojević, Slavka; Stojanovic, Vojislav
2017-04-01
Due to the continuous development of seismic acquisition and processing methods, increasing the signal-to-noise ratio is a constant objective. The correct application of the latest software solutions improves processing results and justifies their development. Correct computation and application of static corrections is one of the most important tasks in pre-processing, and this phase is of great importance for further processing steps. Static corrections are applied to seismic data to compensate for the effects of irregular topography, differences between the elevations of source and receiver points relative to the datum, the near-surface low-velocity layer (weathering correction), or any other factor that influences the spatial and temporal position of seismic traces. The refraction statics method is the most common method for computing static corrections. It successfully resolves long-period statics problems and determines statics differences caused by abrupt lateral velocity changes in the near-surface layer. XtremeGeo Flatirons™ is a program whose main purpose is the computation of static corrections by the refraction statics method, and it supports the following procedures: picking of first arrivals, checking of geometry, multiple methods for the analysis and modelling of statics, analysis of refractor anisotropy, and tomography (Eikonal tomography). The exploration area is located on the southern edge of the Pannonian Plain, in flat terrain with altitudes of 50 to 195 meters. The largest part of the exploration area covers Deliblato Sands, where the geological structure of the terrain and the large differences in altitude significantly affect the calculation of static corrections.
The XtremeGeo Flatirons™ software provides powerful visualization and statistical-analysis tools, which contribute to a significantly more accurate assessment of near-surface geometry and therefore more accurately computed static corrections.
The TeraShake Computational Platform for Large-Scale Earthquake Simulations
NASA Astrophysics Data System (ADS)
Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas
Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for ever larger problems. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.
ERIC Educational Resources Information Center
Hasson, H.; Brown, C.; Hasson, D.
2010-01-01
In web-based health promotion programs, large variations in participant engagement are common. The aim was to investigate determinants of high use of a worksite self-help web-based program for stress management. Two versions of the program were offered to randomly selected departments in IT and media companies. A static version of the program…
Lean and Efficient Software: Whole-Program Optimization of Executables
2015-09-30
libraries. Many levels of library interfaces, where some libraries are dynamically linked and some are provided in binary form only, significantly limit ... software at build time. The opportunity: our objective in this project is to substantially improve the performance, size, and robustness of binary executables by using static and dynamic binary program analysis techniques to perform whole-program optimization directly on compiled programs.
Evaluation of verification and testing tools for FORTRAN programs
NASA Technical Reports Server (NTRS)
Smith, K. A.
1980-01-01
Two automated software verification and testing systems were developed for use in the analysis of computer programs. An evaluation of the static analyzer DAVE and the dynamic analyzer PET, which are used in the analysis of FORTRAN programs on Control Data (CDC) computers, are described. Both systems were found to be effective and complementary, and are recommended for use in testing FORTRAN programs.
A Novel Platform for Evaluating the Environmental Impacts on Bacterial Cellulose Production.
Basu, Anindya; Vadanan, Sundaravadanam Vishnu; Lim, Sierin
2018-04-10
Bacterial cellulose (BC) is a biocompatible material with versatile applications. However, its large-scale production is challenged by the limited biological knowledge of the bacteria. The advent of synthetic biology has led the way to the development of BC-producing microbes as a novel chassis. Hence, investigation of optimal growth conditions for BC production and understanding of the fundamental biological processes are imperative. In this study, we report a novel analytical platform that can be used for studying the biology and optimizing growth conditions of cellulose-producing bacteria. The platform is based on the surface growth pattern of the organism and allows us to confirm that cellulose fibrils produced by the bacteria play a pivotal role in their chemotaxis. The platform efficiently determines the impacts of different growth conditions on cellulose production and is translatable to static culture conditions. The analytical platform provides a means for fundamental biological studies of bacterial chemotaxis as well as a systematic approach towards rational design and development of scalable bioprocessing strategies for industrial production of bacterial cellulose.
Recovering time-varying networks of dependencies in social and biological studies.
Ahmed, Amr; Xing, Eric P
2009-07-21
A plausible representation of the relational information among entities in dynamic systems such as a living cell or a social community is a stochastic network that is topologically rewiring and semantically evolving over time. Although there is a rich literature in modeling static or temporally invariant networks, little has been done toward recovering the network structure when the networks are not observable in a dynamic context. In this article, we present a machine learning method called TESLA, which builds on a temporally smoothed l1-regularized logistic regression formalism that can be cast as a standard convex-optimization problem and solved efficiently by using generic solvers scalable to large networks. We report promising results on recovering simulated time-varying networks and on reverse engineering the latent sequence of temporally rewiring political and academic social networks from longitudinal data, and the evolving gene networks over >4,000 genes during the life cycle of Drosophila melanogaster from a microarray time course at a resolution limited only by sample frequency.
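The kind of objective TESLA minimizes can be sketched as follows. This hypothetical Python fragment (not TESLA's code; the exact penalty form is an assumption based on the description above) evaluates a per-time-step logistic loss plus an l1 sparsity penalty on network weights and an l1 fusion penalty that smooths the network across adjacent time steps; a real implementation hands this convex objective to a generic solver.

```python
import numpy as np

# Hypothetical sketch of a temporally smoothed l1-regularized logistic
# regression objective; solver omitted, evaluation only.
def tesla_objective(thetas, X_list, y_list, lam1, lam2):
    total = 0.0
    for t, (X, y, th) in enumerate(zip(X_list, y_list, thetas)):
        z = X @ th
        total += np.sum(np.log1p(np.exp(-y * z)))  # logistic loss, y in {-1,+1}
        total += lam1 * np.sum(np.abs(th))         # sparsity in each network
        if t > 0:  # temporal smoothness between consecutive networks
            total += lam2 * np.sum(np.abs(th - thetas[t - 1]))
    return total

# Two time steps, two samples each, all-zero weights: loss is 4*log(2)
val0 = tesla_objective([np.zeros(2)] * 2, [np.eye(2)] * 2,
                       [np.ones(2)] * 2, lam1=0.1, lam2=0.1)
```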
Chun-Hai Fung, Isaac; Fitter, David L.; Borse, Rebekah H.; Meltzer, Martin I.; Tappero, Jordan W.
2013-01-01
In 2010, toxigenic Vibrio cholerae was newly introduced to Haiti. Because resources are limited, decision-makers need to understand the effect of different preventive interventions. We built a static model to estimate the potential number of cholera cases averted through improvements in coverage in water, sanitation and hygiene (WASH) (i.e., latrines, point-of-use chlorination, and piped water), oral cholera vaccine (OCV), or a combination of both. We allowed indirect effects and non-linear relationships between effect and population coverage. Because there are limited incidence data for endemic cholera in Haiti, we estimated the incidence of cholera over 20 years in Haiti by using data from Malawi. Over the next two decades, scalable WASH interventions could avert 57,949–78,567 cholera cases, OCV could avert 38,569–77,636 cases, and interventions that combined WASH and OCV could avert 71,586–88,974 cases. Rate of implementation is the most influential variable, and combined approaches maximized the effect. PMID:24106189
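A static cases-averted calculation in the spirit of the model can be sketched as follows. The saturating coverage-effect curve (standing in for indirect, non-linear protection) and every number below are illustrative assumptions, not the paper's fitted parameters.

```python
# Hypothetical sketch of a static intervention model: baseline cases
# scaled by a non-linear, saturating effectiveness curve in coverage.
def cases_averted(baseline_cases, coverage, max_effect, shape=2.0):
    # Effect rises non-linearly with coverage and saturates at max_effect
    effect = max_effect * (1.0 - (1.0 - coverage) ** shape)
    return baseline_cases * effect

baseline = 100_000   # illustrative 20-year endemic case count
low = cases_averted(baseline, coverage=0.3, max_effect=0.8)
high = cases_averted(baseline, coverage=0.6, max_effect=0.8)
# Marginal benefit diminishes at high coverage under this saturating curve.
```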
DISCO: A 3D Moving-mesh Magnetohydrodynamics Code Designed for the Study of Astrophysical Disks
NASA Astrophysics Data System (ADS)
Duffell, Paul C.
2016-09-01
This work presents the publicly available moving-mesh magnetohydrodynamics (MHD) code DISCO. DISCO is efficient and accurate at evolving orbital fluid motion in two and three dimensions, especially at high Mach numbers. DISCO employs a moving-mesh approach utilizing a dynamic cylindrical mesh that can shear azimuthally to follow the orbital motion of the gas. The moving mesh removes diffusive advection errors and allows for longer time-steps than a static grid. MHD is implemented in DISCO using an HLLD Riemann solver and a novel constrained transport (CT) scheme that is compatible with the mesh motion. DISCO is tested against a wide variety of problems, which are designed to test its stability, accuracy, and scalability. In addition, several MHD tests are performed which demonstrate the accuracy and stability of the new CT approach, including two tests of the magneto-rotational instability, one testing the linear growth rate and the other following the instability into the fully turbulent regime.
Li, Wei; Liu, Hongxia; Wang, Shulong; Chen, Shupeng; Wang, Qianqiong
2018-03-05
The DRAM based on the dual-gate tunneling FET (DGTFET) has the advantages of capacitor-less structure and high retention time. In this paper, the optimization of spacer engineering for DGTFET DRAM is systematically investigated by Silvaco-Atlas tool to further improve its performance, including the reduction of reading "0" current and extension of retention time. The simulation results show that spacers at the source and drain sides should apply the low-k and high-k dielectrics, respectively, which can enhance the reading "1" current and reduce reading "0" current. Applying this optimized spacer engineering, the DGTFET DRAM obtains the optimum performance: extremely low reading "0" current (10^-14 A/μm) and large retention time (10 s), which decreases its static power consumption and dynamic refresh rate. And the low reading "0" current also enhances its current ratio (10^7) of reading "1" to reading "0". Furthermore, the analysis about scalability reveals its inherent shortcoming, which offers the further investigation direction for DGTFET DRAM.
DISCO: A 3D MOVING-MESH MAGNETOHYDRODYNAMICS CODE DESIGNED FOR THE STUDY OF ASTROPHYSICAL DISKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duffell, Paul C., E-mail: duffell@berkeley.edu
2016-09-01
This work presents the publicly available moving-mesh magnetohydrodynamics (MHD) code DISCO. DISCO is efficient and accurate at evolving orbital fluid motion in two and three dimensions, especially at high Mach numbers. DISCO employs a moving-mesh approach utilizing a dynamic cylindrical mesh that can shear azimuthally to follow the orbital motion of the gas. The moving mesh removes diffusive advection errors and allows for longer time-steps than a static grid. MHD is implemented in DISCO using an HLLD Riemann solver and a novel constrained transport (CT) scheme that is compatible with the mesh motion. DISCO is tested against a wide variety of problems, which are designed to test its stability, accuracy, and scalability. In addition, several MHD tests are performed which demonstrate the accuracy and stability of the new CT approach, including two tests of the magneto-rotational instability, one testing the linear growth rate and the other following the instability into the fully turbulent regime.
Detection of single ion channel activity with carbon nanotubes
NASA Astrophysics Data System (ADS)
Zhou, Weiwei; Wang, Yung Yu; Lim, Tae-Sun; Pham, Ted; Jain, Dheeraj; Burke, Peter J.
2015-03-01
Many processes in life are based on ion currents and membrane voltages controlled by a sophisticated and diverse family of membrane proteins (ion channels), which are comparable in size to the most advanced nanoelectronic components currently under development. Here we demonstrate an electrical assay of individual ion channel activity by measuring the dynamic opening and closing of the ion channel nanopores using single-walled carbon nanotubes (SWNTs). Two canonical dynamic ion channels (gramicidin A (gA) and alamethicin) and one static biological nanopore (α-hemolysin (α-HL)) were successfully incorporated into supported lipid bilayers (SLBs, an artificial cell membrane), which in turn were interfaced to the carbon nanotubes through a variety of polymer-cushion surface functionalization schemes. The ion channel current directly charges the quantum capacitance of a single nanotube in a network of purified semiconducting nanotubes. This work forms the foundation for a scalable, massively parallel architecture of 1d nanoelectronic devices interrogating electrophysiology at the single ion channel level.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chamana, Manohar; Mather, Barry A
A library of load variability classes is created to produce scalable synthetic data sets using historical high-speed raw data. These data are collected from distribution monitoring units connected at the secondary side of a distribution transformer. Because of the irregular patterns and large volume of historical high-speed data sets, the utilization of current load characterization and modeling techniques is challenging. Multi-resolution analysis techniques are applied to extract the necessary components and eliminate the unnecessary components from the historical high-speed raw data to create the library of classes, which are then utilized to create new synthetic load data sets. A validation is performed to ensure that the synthesized data sets contain the same variability characteristics as the training data sets. The synthesized data sets are intended to be utilized in quasi-static time-series studies for distribution system planning studies on a granular scale, such as detailed PV interconnection studies.
NASA Astrophysics Data System (ADS)
Li, Wei; Liu, Hongxia; Wang, Shulong; Chen, Shupeng; Wang, Qianqiong
2018-03-01
The DRAM based on the dual-gate tunneling FET (DGTFET) has the advantages of a capacitor-less structure and a high retention time. In this paper, the optimization of spacer engineering for the DGTFET DRAM is systematically investigated with the Silvaco Atlas tool to further improve its performance, including the reduction of the reading "0" current and the extension of the retention time. The simulation results show that the spacers at the source and drain sides should use low-k and high-k dielectrics, respectively, which enhances the reading "1" current and reduces the reading "0" current. With this optimized spacer engineering, the DGTFET DRAM achieves its optimum performance: an extremely low reading "0" current (10^-14 A/μm) and a long retention time (10 s), which decrease its static power consumption and dynamic refresh rate. The low reading "0" current also enhances the current ratio (10^7) of reading "1" to reading "0". Furthermore, an analysis of scalability reveals an inherent shortcoming, which indicates a direction for further investigation of the DGTFET DRAM.
Survivability characteristics of composite compression structure
NASA Technical Reports Server (NTRS)
Avery, John G.; Allen, M. R.; Sawdy, D.; Avery, S.
1990-01-01
Test and evaluation was performed to determine the compression residual capability of graphite-reinforced composite panels following perforation by high-velocity fragments representative of combat threats. Assessments were made of the size of the ballistic damage, the effect of applied compression load at impact, damage growth during cyclic loading, and residual static strength. Several fiber/matrix systems were investigated, including high-strain fibers, tough epoxies, and APC-2 thermoplastic. Additionally, several laminate configurations were evaluated, including hard and soft laminates and the incorporation of buffer strips and stitching for improved damage resistance and tolerance. Both panels (12 x 20 inches) and full-scale box-beam components were tested to assure scalability of results. The evaluation generally showed small differences in the responses of the material systems tested. The soft laminate configurations with concentrated reinforcement exhibited the highest residual strength. Ballistic damage did not grow or increase in severity as a result of cyclic loading, and the effects of applied load at impact were not significant under the conditions tested.
Li, Congcong; Zhang, Xi; Wang, Haiping; Li, Dongfeng
2018-01-11
Vehicular sensor networks have been widely applied in intelligent traffic systems in recent years. Because of the specificity of vehicular sensor networks, they require an enhanced, secure and efficient authentication scheme. Existing authentication protocols are vulnerable to several problems, such as a high computational overhead for certificate distribution and revocation, strong reliance on tamper-proof devices, limited scalability when building many secure channels, and an inability to detect hardware tampering attacks. In this paper, an improved authentication scheme using certificateless public key cryptography is proposed to address these problems. A security analysis shows that our protocol provides enhanced secure anonymous authentication and is resilient against major security threats. Furthermore, the proposed scheme reduces the incidence of node compromise and replication attacks. The scheme also provides a malicious-node detection and warning mechanism, which can quickly identify compromised static nodes and immediately alert the administrative department. Performance evaluations show that the scheme obtains better trade-offs between security and efficiency than the well-known available schemes.
2017-03-01
FINAL REPORT: Demonstration of Energy Savings in Commercial Buildings for Tiered Trim and Respond Method in Resetting Static Pressure for VAV... Approved for public release. This report was prepared under contract to the Department of Defense Environmental Security Technology Certification Program (ESTCP). The publication of this report does not indicate endorsement by the Department of Defense, nor should the contents be
Reference Manual for the Ada Programming Language
1983-01-01
Conversions 4-21; 4.7 Qualified Expressions 4-24; 4.8 Allocators 4-24; 4.9 Static Expressions and Static Subtypes 4-26; 4.10 Universal Expressions 4-27 ... record type... Access types allow the construction of linked data structures created by the evaluation of allocators. They allow several... the following: an assignment (in assignment statements and initializations), an allocator, a membership test, or a short-circuit control form
1967-10-01
S-IB-211, the flight version of the Saturn IB launch vehicle's first (S-IB) stage, arrives at Marshall Space Flight Center's (MSFC's) S-IB static test stand. Between December 1967 and April 1968, the stage would undergo seven static test firings. The S-IB, developed by the MSFC and built by the Chrysler Corporation at the Michoud Assembly Facility near New Orleans, Louisiana, utilized eight H-1 engines, each producing 200,000 pounds of thrust.
1964-10-01
Test firing of the Saturn I S-I Stage (S-1-10) at the Marshall Space Flight Center. This test stand was originally constructed in 1951 and sometimes called the Redstone or T tower. In 1961, the test stand was modified to permit static firing of the S-I/S-IB stages, which produced a total thrust of 1,600,000 pounds. The name of the stand was then changed to the S-IB Static Test Stand.
1967-10-01
S-IB-211, the flight version of the Saturn IB launch vehicle's first (S-IB) stage, after installation at the Marshall Space Flight Center's (MSFC's) S-IB static test stand. Between December 1967 and April 1968, the stage would undergo seven static test firings. The S-IB, developed by the MSFC and built by the Chrysler Corporation at the Michoud Assembly Facility near New Orleans, Louisiana, utilized eight H-1 engines, each producing 200,000 pounds of thrust.
Perovskite Technology is Scalable, But Questions Remain about the Best
News Release: The NREL researchers examined potential scalable deposition methods that could be used on a larger surface.
Randomized, Controlled Trial of CBT Training for PTSD Providers
2015-10-01
design, implement and evaluate a cost-effective, web-based, self-paced training program to provide skills-oriented continuing education for mental... but has received little systematic evaluation to date. Noting the urgency and high priority of this issue, Fairburn and Cooper (2011) have... evaluate scalable and cost-effective new methods for training mental health clinicians providing treatment services to veterans with PTSD. The
Department of Defense High Performance Computing Modernization Program. 2006 Annual Report
2007-03-01
Department. We successfully completed several software development projects that introduced parallel, scalable production software now in use across the...imagined. They are developing and deploying weather and ocean models that allow our soldiers, sailors, marines and airmen to plan missions more effectively...and to navigate adverse environments safely. They are modeling molecular interactions leading to the development of higher energy fuels, munitions
Empirical Knowledge Transfer and Collaboration with Self-Regenerative Systems
2007-06-01
Raytheon Company, sponsored by the Defense Advanced Research Projects Agency (DARPA Order No. T120; contract FA8750-04-C-0286). Approved for public release. ... Self-Regenerative Systems program to develop new technologies supporting granular scalable redundancy. The key focus of Raytheon's effort was to
Quality Scalability Aware Watermarking for Visual Content.
Bhowmik, Deepayan; Abhayaratne, Charith
2016-11-01
Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against the quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality-scalable content adaptation. The proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality-scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows a major improvement for video watermarking.
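The bit-plane discarding quantization model referenced above can be made concrete in a few lines. This is a hedged sketch of the general mechanism, not the paper's embedder.

```python
def discard_bitplanes(coeff, n):
    # dropping the n least-significant bit-planes of an integer coefficient
    # is equivalent to uniform quantization with step 2**n (sign preserved);
    # quality-scalable codecs such as JPEG2000 coarsen coefficients this way
    step = 1 << n
    sign = -1 if coeff < 0 else 1
    return sign * ((abs(coeff) // step) * step)

# a watermark bit survives this adaptation only if it is embedded with a
# quantization cell no finer than the coarsest expected truncation step
```

For example, discarding 3 bit-planes maps 23 to 16 and -23 to -16, so any embedding that relies on distinctions within a cell of width 8 is lost.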
Three dimensional modeling of rigid pavement : executive summary, February 1995.
DOT National Transportation Integrated Search
1995-02-17
A finite-element program has been developed to model the response of rigid pavement to both static loads and temperature changes. The program is fully three-dimensional and incorporates not only the common twenty-node brick element but also a thin in...
Three-dimensional modeling of rigid pavement : final report, September 1995.
DOT National Transportation Integrated Search
1995-02-17
A finite-element program has been developed to model the response of rigid pavement to both static loads and temperature changes. The program is fully three-dimensional and incorporates not only the common twenty-node brick element but also a thin in...
NASA Technical Reports Server (NTRS)
Bedard, A. J., Jr.; Nishiyama, R. T.
1993-01-01
Instruments developed for making meteorological observations under adverse conditions on Earth can be applied to systems designed for other planetary atmospheres. Specifically, a wind sensor developed for making measurements within tornados is capable of detecting induced pressure differences proportional to wind speed. Adding strain gauges to the sensor would provide wind direction. The device can be constructed in a rugged form for measuring high wind speeds in the presence of blowing dust that would clog bearings and plug passages of conventional wind speed sensors. Sensing static pressure in the lower boundary layer required development of an omnidirectional, tilt-insensitive static pressure probe. The probe provides pressure inputs to a sensor with minimum error and is inherently weather-protected. The wind sensor and static pressure probes have been used in a variety of field programs and can be adapted for use in different planetary atmospheres.
Taechasubamorn, Panada; Nopkesorn, Tawesak; Pannarunothai, Supasit
2010-12-01
To compare physical fitness between rice farmers with chronic low back pain (CLBP) and a healthy control group. Sixty-eight rice farmers with CLBP were matched according to age and sex with healthy farmers. All subjects underwent nine physical fitness tests for body composition, lifting capacity, static back extensor endurance, leg strength, static abdominal endurance, handgrip strength, hamstring flexibility, posterior leg and back muscle flexibility, and abdominal flexibility. There was no significant difference between the CLBP and healthy groups for all tests except static back extensor endurance. The back extensor endurance time of the CLBP group was significantly lower than that of the control group (p = 0.002). Static back extensor endurance is the deficient component of physical fitness in rice farmers with CLBP. Back extensor endurance training should be emphasized in both prevention and rehabilitation programs.
NASA Technical Reports Server (NTRS)
Larson, T. J.
1984-01-01
The measurement performance of a hemispherical flow-angularity probe and a fuselage-mounted pitot-static probe was evaluated at high flow angles as part of a test program on an F-14 airplane. These evaluations were performed using a calibrated pitot-static noseboom equipped with vanes for reference flow direction measurements, and another probe incorporating vanes but mounted on a pod under the fuselage nose. Data are presented for angles of attack up to 63 deg, angles of sideslip from -22 deg to 22 deg, and Mach numbers from approximately 0.3 to 1.3. During maneuvering flight, the hemispherical flow-angularity probe exhibited flow angle errors that exceeded 2 deg. Pressure measurements with the pitot-static probe resulted in very inaccurate data above a Mach number of 0.87 and exhibited large sensitivities to flow angle.
McHugh, Stuart
1976-01-01
The material in this report is concerned with the effects of a vertically oriented rectangular dislocation loop on the tilts observed at the free surface of an elastic half-space. Part I examines the effect of a spatially variable static strike-slip distribution across the slip surface. The tilt components as a function of distance parallel, or perpendicular, to the strike of the slip surface are displayed for different slip-versus-distance profiles. Part II examines the effect of spatially and temporally variable slip distributions across the dislocation loop on the quasi-static tilts at the free surface of an elastic half space. The model discussed in part II may be used to generate theoretical tilt versus time curves produced by creep events.
A Comparison of Three Programming Models for Adaptive Applications
NASA Technical Reports Server (NTRS)
Shan, Hong-Zhang; Singh, Jaswinder Pal; Oliker, Leonid; Biswa, Rupak; Kwak, Dochan (Technical Monitor)
2000-01-01
We study the performance and programming effort for two major classes of adaptive applications under three leading parallel programming models. We find that all three models can achieve scalable performance on state-of-the-art multiprocessor machines. The basic parallel algorithms needed for different programming models to deliver their best performance are similar, but the implementations differ greatly, far beyond the fact of using explicit messages versus implicit loads/stores. Compared with MPI and SHMEM, CC-SAS (cache-coherent shared address space) provides substantial ease of programming at the conceptual and program orchestration level, which often leads to performance gains. However, it may also suffer from the poor spatial locality of physically distributed shared data on large numbers of processors. Our CC-SAS implementation of the PARMETIS partitioner itself runs faster than in the other two programming models, and generates more balanced results for our application.
1963-12-05
The test laboratory of the Marshall Space Flight Center (MSFC) tested the F-1 engine, the most powerful rocket engine ever fired at MSFC. The engine was tested on the newly modified Saturn IB Static Test Stand, which had been used for three years to test the Saturn I eight-engine booster, S-I (first) stage. In 1961 the test stand was modified to permit static firing of the S-I/S-IB stage and the name of the stand was then changed to the S-IB Static Test Stand. Producing a combined thrust of 7,500,000 pounds, five F-1 engines powered the S-IC (first) stage of the Saturn V vehicle for the manned lunar mission.
1963-12-01
The test laboratory of the Marshall Space Flight Center (MSFC) tested the F-1 engine, the most powerful rocket engine ever fired at MSFC. The engine was tested on the newly modified Saturn IB static test stand that had been used for three years to test the Saturn I eight-engine booster, S-I (first) stage. In 1961, the test stand was modified to permit static firing of the S-I/S-IB stage and the name of the stand was then changed to the S-IB Static Test Stand. Producing a combined thrust of 7,500,000 pounds, five F-1 engines powered the S-IC (first) stage of the Saturn V vehicle for the manned lunar mission.
Kibar, Sibel; Yıldız, Hatice Ecem; Ay, Saime; Evcik, Deniz; Ergin, Emine Süreyya
2015-09-01
To determine the effectiveness of balance exercises on the functional level and quality of life (QOL) of patients with fibromyalgia syndrome (FMS) and to investigate the circumstances associated with balance disorders in FMS. Randomized controlled trial. Physical medicine and rehabilitation clinic. Patients (N=57) (age range, 18-65y) with FMS were randomly assigned into 2 groups. Group 1 was given flexibility and balance exercises for 6 weeks, whereas group 2 received only a flexibility program as the control group. Functional balance was measured by the Berg Balance Scale (BBS), and dynamic and static balance were evaluated by a kinesthetic ability trainer (KAT) device. Fall risk was assessed with the Hendrich II fall risk model. The Nottingham Health Profile, Fibromyalgia Impact Questionnaire (FIQ), and Beck Depression Inventory (BDI) were used to determine QOL and functional and depression levels, respectively. Assessments were performed at baseline and after the 6-week program. In group 1, statistically significant improvements were observed in all parameters (P<.05), but no improvement was seen in group 2 (P>.05). When comparing the 2 groups, there were significant differences in group 1 concerning the KAT static balance test (P=.017) and FIQ measurements (P=.005). In the correlation analysis, the BDI was correlated with the BBS (r=-.434) and Hendrich II results (r=.357), whereas body mass index (BMI) was correlated with the KAT static balance measurements (r=.433), BBS (r=-.285), and fall frequency (r=.328). A 6-week balance training program had a beneficial effect on the static balance and functional levels of patients with FMS. We also observed that depression deterioration was related to balance deficit and fall risk. Higher BMI was associated with balance deficit and fall frequency. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Sahoo, Satya S; Wei, Annan; Valdez, Joshua; Wang, Li; Zonjy, Bilal; Tatsuoka, Curtis; Loparo, Kenneth A; Lhatoo, Samden D
2016-01-01
The recent advances in neurological imaging and sensing technologies have led to rapid increase in the volume, rate of data generation, and variety of neuroscience data. This "neuroscience Big data" represents a significant opportunity for the biomedical research community to design experiments using data with greater timescale, large number of attributes, and statistically significant data size. The results from these new data-driven research techniques can advance our understanding of complex neurological disorders, help model long-term effects of brain injuries, and provide new insights into dynamics of brain networks. However, many existing neuroinformatics data processing and analysis tools were not built to manage large volume of data, which makes it difficult for researchers to effectively leverage this available data to advance their research. We introduce a new toolkit called NeuroPigPen that was developed using Apache Hadoop and Pig data flow language to address the challenges posed by large-scale electrophysiological signal data. NeuroPigPen is a modular toolkit that can process large volumes of electrophysiological signal data, such as Electroencephalogram (EEG), Electrocardiogram (ECG), and blood oxygen levels (SpO2), using a new distributed storage model called Cloudwave Signal Format (CSF) that supports easy partitioning and storage of signal data on commodity hardware. NeuroPigPen was developed with three design principles: (a) Scalability-the ability to efficiently process increasing volumes of data; (b) Adaptability-the toolkit can be deployed across different computing configurations; and (c) Ease of programming-the toolkit can be easily used to compose multi-step data processing pipelines using high-level programming constructs. The NeuroPigPen toolkit was evaluated using 750 GB of electrophysiological signal data over a variety of Hadoop cluster configurations ranging from 3 to 30 Data nodes. 
The evaluation results demonstrate that the toolkit is highly scalable and adaptable, which makes it suitable for use in neuroscience applications as a scalable data processing toolkit. As part of the ongoing extension of NeuroPigPen, we are developing new modules to support statistical functions to analyze signal data for brain connectivity research. In addition, the toolkit is being extended to allow integration with scientific workflow systems. NeuroPigPen is released under BSD license at: https://sites.google.com/a/case.edu/neuropigpen/.
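The partition-then-process idea behind CSF can be sketched as follows; the field names and epoch scheme here are illustrative assumptions, not the actual CSF schema.

```python
def partition_signal(samples, rate_hz, epoch_s):
    # split a 1-D signal into fixed-length epochs carrying minimal metadata,
    # so each chunk can be stored and processed independently; loosely
    # inspired by the epoch-based layout of the Cloudwave Signal Format
    n = int(rate_hz * epoch_s)
    return [
        {"start_s": i / rate_hz, "rate_hz": rate_hz, "data": samples[i:i + n]}
        for i in range(0, len(samples), n)
    ]

# 10 s of 256 Hz EEG split into 2 s epochs -> 5 independent chunks that a
# Hadoop/Pig job can map over in parallel
epochs = partition_signal(list(range(2560)), rate_hz=256, epoch_s=2)
```

Because each epoch carries its own start time and sampling rate, chunks can be distributed across data nodes and reassembled or analyzed in any order.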
COMP Superscalar, an interoperable programming framework
NASA Astrophysics Data System (ADS)
Badia, Rosa M.; Conejero, Javier; Diaz, Carlos; Ejarque, Jorge; Lezzi, Daniele; Lordan, Francesc; Ramon-Cortes, Cristian; Sirvent, Raul
2015-12-01
COMPSs is a programming framework that aims to facilitate the parallelization of existing applications written in Java, C/C++ and Python scripts. For that purpose, it offers a simple programming model based on sequential development in which the user is mainly responsible for (i) identifying the functions to be executed as asynchronous parallel tasks and (ii) marking them with annotations or standard Python decorators. A runtime system is in charge of exploiting the inherent concurrency of the code, automatically detecting and enforcing the data dependencies between tasks and spawning these tasks to the available resources, which can be nodes in a cluster, clouds or grids. In cloud environments, COMPSs provides scalability and elasticity features allowing the dynamic provision of resources.
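The task-based model can be imitated in ordinary Python. The decorator below is a hypothetical stand-in for illustration (it is not the PyCOMPSs API) and uses a local thread pool in place of a distributed runtime.

```python
from concurrent.futures import ThreadPoolExecutor, Future

_pool = ThreadPoolExecutor(max_workers=4)  # stand-in for cluster resources

def task(fn):
    # mark a function as an asynchronous task: a call returns a future at
    # once, and arguments that are themselves futures are awaited inside
    # the worker, a crude form of automatic data-dependency enforcement
    def submit(*args):
        def run():
            ready = [a.result() if isinstance(a, Future) else a for a in args]
            return fn(*ready)
        return _pool.submit(run)
    return submit

@task
def inc(x):
    return x + 1

@task
def add(x, y):
    return x + y

a = inc(1)       # runs asynchronously
b = inc(2)
c = add(a, b)    # implicitly depends on a and b finishing first
```

Calling `c.result()` yields 5; the caller never wrote any explicit synchronization, which is the essence of the sequential-development model the abstract describes.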
Lindborg, Beth A; Brekke, John H; Vegoe, Amanda L; Ulrich, Connor B; Haider, Kerri T; Subramaniam, Sandhya; Venhuizen, Scott L; Eide, Cindy R; Orchard, Paul J; Chen, Weili; Wang, Qi; Pelaez, Francisco; Scott, Carolyn M; Kokkoli, Efrosini; Keirstead, Susan A; Dutton, James R; Tolar, Jakub; O'Brien, Timothy D
2016-07-01
Tissue organoids are a promising technology that may accelerate development of the societal and NIH mandate for precision medicine. Here we describe a robust and simple method for generating cerebral organoids (cOrgs) from human pluripotent stem cells by using a chemically defined hydrogel material and chemically defined culture medium. By using no additional neural induction components, cOrgs appeared on the hydrogel surface within 10-14 days, and under static culture conditions, they attained sizes up to 3 mm in greatest dimension by day 28. Histologically, the organoids showed neural rosette and neural tube-like structures and evidence of early corticogenesis. Immunostaining and quantitative reverse-transcription polymerase chain reaction demonstrated protein and gene expression representative of forebrain, midbrain, and hindbrain development. Physiologic studies showed responses to glutamate and depolarization in many cells, consistent with neural behavior. The method of cerebral organoid generation described here facilitates access to this technology, enables scalable applications, and provides a potential pathway to translational applications where defined components are desirable. Tissue organoids are a promising technology with many potential applications, such as pharmaceutical screens and development of in vitro disease models, particularly for human polygenic conditions where animal models are insufficient. This work describes a robust and simple method for generating cerebral organoids from human induced pluripotent stem cells by using a chemically defined hydrogel material and chemically defined culture medium. This method, by virtue of its simplicity and use of defined materials, greatly facilitates access to cerebral organoid technology, enables scalable applications, and provides a potential pathway to translational applications where defined components are desirable. ©AlphaMed Press.
Gerlach, Jörg C; Lübberstedt, Marc; Edsbagge, Josefina; Ring, Alexander; Hout, Mariah; Baun, Matt; Rossberg, Ingrid; Knöspel, Fanny; Peters, Grant; Eckert, Klaus; Wulf-Goldenberg, Annika; Björquist, Petter; Stachelscheid, Harald; Urbaniak, Thomas; Schatten, Gerald; Miki, Toshio; Schmelzer, Eva; Zeilinger, Katrin
2010-01-01
We describe hollow fiber-based three-dimensional (3D) dynamic perfusion bioreactor technology for embryonic stem cells (ESC) which is scalable for laboratory and potentially clinical translation applications. We added 2 more compartments to the typical 2-compartment devices, namely an additional media capillary compartment for countercurrent 'arteriovenous' flow and an oxygenation capillary compartment. Each capillary membrane compartment can be perfused independently. Interweaving the 3 capillary systems to form repetitive units allows bioreactor scalability by multiplying the capillary units and provides decentralized media perfusion while enhancing mass exchange and reducing gradient distances from decimeters to more physiologic lengths of <1 mm. The exterior of the resulting membrane network, the cell compartment, is used as a physically active scaffold for cell aggregation; adjusting intercapillary distances enables control of the size of cell aggregates. To demonstrate the technology, mouse ESC (mESC) were cultured in 8- or 800-ml cell compartment bioreactors. We were able to confirm the hypothesis that this bioreactor enables mESC expansion qualitatively comparable to that obtained with Petri dishes, but on a larger scale. To test this, we compared the growth of 129/SVEV mESC in static two-dimensional Petri dishes with that in 3D perfusion bioreactors. We then tested the feasibility of scaling up the culture. In an 800-ml prototype, we cultured approximately 5 × 10^9 cells, replacing up to 800 conventional 100-mm Petri dishes. Teratoma formation studies in mice confirmed protein expression and gene expression results with regard to maintaining 'stemness' markers during cell expansion. Copyright 2010 S. Karger AG, Basel.
Banerjee, Arindam; Ghosh, Joydeep
2004-05-01
Competitive learning mechanisms for clustering, in general, suffer from poor performance for very high-dimensional (>1000) data because of "curse of dimensionality" effects. In applications such as document clustering, it is customary to normalize the high-dimensional input vectors to unit length, and it is sometimes also desirable to obtain balanced clusters, i.e., clusters of comparable sizes. The spherical kmeans (spkmeans) algorithm, which normalizes the cluster centers as well as the inputs, has been successfully used to cluster normalized text documents in 2000+ dimensional space. Unfortunately, like regular kmeans and its soft expectation-maximization-based version, spkmeans tends to generate extremely imbalanced clusters in high-dimensional spaces when the desired number of clusters is large (tens or more). This paper first shows that the spkmeans algorithm can be derived from a certain maximum likelihood formulation using a mixture of von Mises-Fisher distributions as the generative model, and in fact, it can be considered as a batch-mode version of (normalized) competitive learning. The proposed generative model is then adapted in a principled way to yield three frequency-sensitive competitive learning variants that are applicable to static data and produce high-quality, well-balanced clusters for high-dimensional data. Like kmeans, each iteration is linear in the number of data points and in the number of clusters for all three algorithms. A frequency-sensitive algorithm to cluster streaming data is also proposed. Experimental results on clustering of high-dimensional text data sets are provided to show the effectiveness and applicability of the proposed techniques. Index Terms: Balanced clustering, expectation maximization (EM), frequency-sensitive competitive learning (FSCL), high-dimensional clustering, kmeans, normalized data, scalable clustering, streaming data, text clustering.
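The spkmeans iteration described above fits in a few lines of NumPy. This is a bare sketch of the base algorithm, without the frequency-sensitive weighting that the paper's variants add.

```python
import numpy as np

def spkmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)   # project inputs to the unit sphere
    C = X[rng.choice(len(X), size=k, replace=False)]   # init centers from the data
    for _ in range(iters):
        labels = np.argmax(X @ C.T, axis=1)            # assign by cosine similarity
        for j in range(k):
            members = X[labels == j]
            if len(members):
                m = members.sum(axis=0)
                C[j] = m / np.linalg.norm(m)           # centers stay unit-norm
    return labels, C

# two tight groups of directions on the unit circle
X = np.array([[1.0, 0.05], [1.0, -0.05], [0.05, 1.0], [-0.05, 1.0]])
labels, centers = spkmeans(X, k=2)
```

As the abstract notes for the full variants, each iteration is linear in the number of points and clusters; the only difference from kmeans is that distances are replaced by dot products and centers are re-normalized.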
Lin, Youshan Melissa; Lim, Jessica Fang Yan; Lee, Jialing; Choolani, Mahesh; Chan, Jerry Kok Yen; Reuveny, Shaul; Oh, Steve Kah Weng
2016-06-01
Cartilage tissue engineering with human mesenchymal stromal cells (hMSC) is promising for allogeneic cell therapy. To achieve large-scale hMSC propagation, scalable microcarrier-based cultures are preferred over conventional static cultures on tissue culture plastic. Yet it remains unclear how microcarrier cultures affect hMSC chondrogenic potential, and how this potential is distinguished from that of tissue culture plastic. Hence, our study aims to compare the chondrogenic potential of human early MSC (heMSC) between microcarrier-spinner and tissue culture plastic cultures. heMSC expanded on either collagen-coated Cytodex 3 microcarriers in spinner cultures or tissue culture plastic were harvested for chondrogenic pellet differentiation with empirically determined chondrogenic inducer bone morphogenetic protein 2 (BMP2). Pellet diameter, DNA content, glycosaminoglycan (GAG) and collagen II production, histological staining and gene expression of chondrogenic markers including SOX9, S100β, MMP13 and ALPL, were investigated and compared in both conditions. BMP2 was the most effective chondrogenic inducer for heMSC. Chondrogenic pellets generated from microcarrier cultures developed larger pellet diameters, and produced more DNA, GAG and collagen II per pellet with greater GAG/DNA and collagen II/DNA ratios compared with that of tissue culture plastic. Moreover, they induced higher expression of chondrogenic genes (e.g., S100β) but not of hypertrophic genes (e.g., MMP13 and ALPL). A similar trend showing enhanced chondrogenic potential was achieved with another microcarrier type, suggesting that the mechanism is due to the agitated nature of microcarrier cultures. This is the first study demonstrating that scalable microcarrier-spinner cultures enhance the chondrogenic potential of heMSC, supporting their use for large-scale cell expansion in cartilage cell therapy. Copyright © 2016 International Society for Cellular Therapy. Published by Elsevier Inc. 
All rights reserved.
Tools for Large-Scale Mobile Malware Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bierma, Michael
Analyzing mobile applications for malicious behavior is an important area of research, and is made difficult, in part, by the increasingly large number of applications available for the major operating systems. There are currently over 1.2 million apps available in both the Google Play and Apple App stores (the respective official marketplaces for the Android and iOS operating systems) [1, 2]. Our research provides two large-scale analysis tools to aid in the detection and analysis of mobile malware. The first tool we present, Andlantis, is a scalable dynamic analysis system capable of processing over 3000 Android applications per hour. Traditionally, Android dynamic analysis techniques have been relatively limited in scale due to the computational resources required to emulate the full Android system to achieve accurate execution. Andlantis is the most scalable Android dynamic analysis framework to date, and is able to collect valuable forensic data, which helps reverse-engineers and malware researchers identify and understand anomalous application behavior. We discuss the results of running 1261 malware samples through the system, and provide examples of malware analysis performed with the resulting data. While techniques exist to perform static analysis on a large number of applications, large-scale analysis of iOS applications has been relatively small scale due to the closed nature of the iOS ecosystem, and the difficulty of acquiring applications for analysis. The second tool we present, iClone, addresses the challenges associated with iOS research in order to detect application clones within a dataset of over 20,000 iOS applications.
Bubble gate for in-plane flow control.
Oskooei, Ali; Abolhasani, Milad; Günther, Axel
2013-07-07
We introduce a miniature gate valve as a readily implementable strategy for actively controlling the flow of liquids on-chip, within a footprint of less than one square millimetre. Bubble gates provide for simple, consistent and scalable control of liquid flow in microchannel networks, are compatible with different bulk microfabrication processes and substrate materials, and require neither electrodes nor moving parts. A bubble gate consists of two microchannel sections: a liquid-filled channel and a gas channel that intercepts the liquid channel to form a T-junction. The open or closed state of a bubble gate is determined by selecting between two distinct gas pressure levels: the lower level corresponds to the "open" state while the higher level corresponds to the "closed" state. During closure, a gas bubble penetrates from the gas channel into the liquid, flanked by a column of equidistantly spaced micropillars on each side, until the flow of liquid is completely obstructed. We fabricated bubble gates using single-layer soft lithographic and bulk silicon micromachining procedures and evaluated their performance with a combination of theory and experimentation. We assessed the dynamic behaviour during more than 300 open-and-close cycles and report the operating pressure envelope for different bubble gate configurations and for the working fluids: de-ionized water, ethanol and a biological buffer. We obtained excellent agreement between the experimentally determined bubble gate operational envelope and a theoretical prediction based on static wetting behaviour. We report case studies that serve to illustrate the utility of bubble gates for liquid sampling in single and multi-layer microfluidic devices. Scalability of our strategy was demonstrated by simultaneously addressing 128 bubble gates.
Biricocchi, Charlanne; Drake, JaimeLynn; Svien, Lana
2014-01-01
This case report describes the effects of a 6-week progressive tap dance program on static and dynamic balance for a child with type 1 congenital myotonic muscular dystrophy (congenital MMD1). A 6-year-old girl with congenital MMD1 participated in a 1-hour progressive tap dance program. Classes were held once a week for 6 consecutive weeks and included 3 children with adaptive needs and 1 peer with typical development. The Bruininks-Oseretsky Test of Motor Proficiency, second edition (BOT-2) balance subsection and the Pediatric Balance Scale were completed at the beginning of the first class and the sixth class. The participant's BOT-2 score improved from 3 to 14. Her Pediatric Balance Scale score did not change. Participation in a progressive tap dance class by a child with congenital MMD1 may facilitate improvements in static and dynamic balance.
High temperature static strain gage alloy development program
NASA Technical Reports Server (NTRS)
Hulse, C. O.; Bailey, R. S.; Lemkey, F. D.
1985-01-01
The literature, applicable theory, and finally an experimental program were used to identify new candidate alloy systems for use as the electrical resistance elements in static strain gages up to 1250 K. The program goals were 50 hours of use in the environment of a test-stand gas turbine engine with measurement accuracies equal to or better than 10 percent of full scale for strains up to ±2000 microstrain. As part of this effort, a computerized electrical resistance measurement system was constructed for use at temperatures between 300 K and 1250 K and heating and cooling rates of 250 K/min and 10 K/min. The two best alloys were an iron-chromium-aluminum alloy and a palladium-base alloy. Although significant progress was made, it was concluded that a considerable additional effort would be needed to fully optimize and evaluate these candidate systems.
Azarpaikan, Atefeh; Taheri Torbati, Hamidreza
2017-10-23
The aim of this study was to assess the effectiveness of balance training with somatosensory and neurofeedback training on dynamic and static balance in healthy, elderly adults. The sample group consisted of 45 healthy adults randomly assigned to one of three test groups: somatosensory, neurofeedback, and control. Individualization of the balance program started with pre-tests for static and dynamic balance. Each group had 15- and 30-min training sessions. All groups were tested for static balance (postural stability) and dynamic balance (Berg Balance Scale) in acquisition and transfer tests (fall risk of stability and timed up-and-go). Improvements in static and dynamic balance in the somatosensory and neurofeedback groups were assessed and then compared with the control group. Results indicated significant improvements in static and dynamic balance in both test groups in the acquisition test. Results revealed a significant improvement in the transfer test in the neurofeedback and somatosensory groups, in static and dynamic conditions, respectively. The findings suggest that these methods of balance training had a significant influence on balance. Both methods are appropriate to prevent falling in adults. Neurofeedback training helped the participants to learn static balance, while somatosensory training was effective for dynamic balance learning. Further research is needed to assess the effects of longer and discontinuous stimulation with somatosensory and neurofeedback training on balance in elderly adults.
NASA Technical Reports Server (NTRS)
1996-01-01
Solving for the displacements of free-free coupled systems acted upon by static loads is a common task in the aerospace industry. Often, these problems are solved by static analysis with inertia relief. This technique allows for a free-free static analysis by balancing the applied loads with the inertia loads generated by the applied loads. For some engineering applications, the displacements of the free-free coupled system induce additional static loads. Hence, the applied loads are equal to the original loads plus the displacement-dependent loads. A launch vehicle being acted upon by an aerodynamic loading can have such applied loads. The final displacements of such systems are commonly determined with iterative solution techniques. Unfortunately, these techniques can be time consuming and labor intensive. Because the coupled system equations for free-free systems with displacement-dependent loads can be written in closed form, it is advantageous to solve for the displacements in this manner. Implementing closed-form equations in static analysis with inertia relief is analogous to implementing transfer functions in dynamic analysis. An MSC/NASTRAN (MacNeal-Schwendler Corporation/NASA Structural Analysis) DMAP (Direct Matrix Abstraction Program) Alter was used to include displacement-dependent loads in static analysis with inertia relief. It efficiently solved a common aerospace problem that typically has been solved with an iterative technique.
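The closed-form alternative to iteration can be illustrated on a toy two-degree-of-freedom system (the matrices below are hypothetical, not a NASTRAN model; the actual work uses an MSC/NASTRAN DMAP Alter with inertia relief on free-free models): with applied loads P plus displacement-dependent loads A·u, the iterative scheme u ← K⁻¹(P + A·u) and the single closed-form solve (K − A)u = P reach the same displacements.

```python
# Toy 2x2 illustration (hypothetical stiffness K, load-displacement coupling A,
# applied load P) of iterative vs. closed-form displacement-dependent loading.

def solve2(m, b):
    """Solve a 2x2 linear system m x = b by Cramer's rule."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(b[0] * m[1][1] - b[1] * m[0][1]) / det,
            (m[0][0] * b[1] - m[1][0] * b[0]) / det]

K = [[4.0, -1.0], [-1.0, 3.0]]
A = [[0.2, 0.0], [0.0, 0.1]]
P = [1.0, 2.0]

# Iterative technique: re-solve with loads updated from the latest displacements.
u = [0.0, 0.0]
for _ in range(100):
    rhs = [P[i] + A[i][0] * u[0] + A[i][1] * u[1] for i in range(2)]
    u = solve2(K, rhs)

# Closed form: move the displacement-dependent term to the left-hand side.
KmA = [[K[i][j] - A[i][j] for j in range(2)] for i in range(2)]
u_closed = solve2(KmA, P)

print(all(abs(u[i] - u_closed[i]) < 1e-9 for i in range(2)))  # -> True
```

The closed-form solve replaces an open-ended iteration count with one factorization, which is the efficiency argument the abstract makes.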
Program Instrumentation and Trace Analysis
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Goldberg, Allen; Filman, Robert; Rosu, Grigore; Koga, Dennis (Technical Monitor)
2002-01-01
Several attempts have been made recently to apply techniques such as model checking and theorem proving to the analysis of programs. This can be seen as part of a current trend to analyze real software systems instead of just their designs. This includes our own effort to develop a model checker for Java, Java PathFinder 1, one of the very first of its kind in 1998. However, model checking cannot handle very large programs without some kind of abstraction of the program. This paper describes a complementary, scalable technique to handle such large programs. Our interest is in the observation part of the equation: how much information can be extracted about a program from observing a single execution trace? It is our intention to develop a technology that can be applied automatically and to large full-size applications, with minimal modification to the code. We present a tool, Java PathExplorer (JPaX), for exploring execution traces of Java programs. The tool prioritizes scalability over completeness, and is directed towards detecting errors in programs, not towards proving correctness. One core element in JPaX is an instrumentation package that makes it possible to instrument Java byte code files to log various events when executed. The instrumentation is driven by a user-provided script that specifies what information to log. Examples of instructions that such a script can contain are: 'report name and arguments of all called methods defined in class C, together with a timestamp'; 'report all updates to all variables'; and 'report all acquisitions and releases of locks'. In more complex instructions one can specify that certain expressions should be evaluated and even that certain code should be executed under various conditions.
The instrumentation package can hence be seen as implementing aspect-oriented programming for Java, in the sense that one can add functionality to a Java program without explicitly changing the code of the original program: one rather writes an aspect and compiles it into the original program using the instrumentation. Another core element of JPaX is an observation package that supports the analysis of the generated event stream. Two kinds of analysis are currently supported. In temporal analysis, the execution trace is evaluated against formulae written in temporal logic. We have implemented a temporal logic evaluator on finite traces using the Maude rewriting system from SRI International, USA. Temporal logic is defined in Maude by giving its syntax as a signature and its semantics as rewrite equations. The resulting semantics is extremely efficient and can handle event streams of hundreds of millions of events in a few minutes. Furthermore, the implementation is very succinct. The second form of event stream analysis supported is error pattern analysis, where an execution trace is analyzed using various error detection algorithms that can identify error-prone programming practices that may potentially lead to errors in other executions. Two such algorithms focusing on concurrency errors have been implemented in JPaX, one for deadlocks and the other for data races. It is important to note that a deadlock or data race potential does not need to occur in order to be detected by these algorithms; this is what makes them very scalable in practice. The data race algorithm implemented is the Eraser algorithm from Compaq, adapted to Java. The tool is currently being applied to a code base for controlling a spacecraft by the developers of that software in order to evaluate its applicability.
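The lockset idea behind the Eraser algorithm mentioned above can be sketched in a few lines (a simplified illustration: Eraser's initialization and read-shared states are omitted here): each shared variable keeps the intersection of the lock sets held on every access, and an empty intersection flags a potential race, whether or not a race actually occurred in that run.

```python
# Minimal lockset sketch (simplified Eraser-style race detection).
locksets = {}     # variable -> candidate set of locks protecting it
races = set()     # variables flagged as potential data races

def on_access(var, held_locks):
    """Refine var's candidate lockset; an empty set means a possible race."""
    if var not in locksets:
        locksets[var] = set(held_locks)
    else:
        locksets[var] &= held_locks
    if not locksets[var]:
        races.add(var)

on_access("x", {"L"})       # thread 1 writes x holding lock L
on_access("x", {"L", "M"})  # thread 2 also holds L: still consistent
on_access("x", set())       # thread 3 holds nothing: candidate set empties
print(sorted(races))        # -> ['x']
```

Because the check is over locksets rather than actual interleavings, the unlucky scheduling that would trigger the race never has to occur, which is why the abstract calls the approach scalable.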
NASTRAN computer system level 12.1
NASA Technical Reports Server (NTRS)
Butler, T. G.
1971-01-01
Program uses the finite element displacement method for solving the linear response of large, three-dimensional structures subject to static, dynamic, thermal, and random loadings. Program adapts to computers of different manufacture, permits updating and extension, allows interchange of output and input information between users, and is extensively documented.
High Temperature Metallic Seal Development For Aero Propulsion and Gas Turbine Applications
NASA Technical Reports Server (NTRS)
More, Greg; Datta, Amit
2006-01-01
A viewgraph presentation on metallic high temperature static seal development at NASA for gas turbine applications is shown. The topics include: 1) High Temperature Static Seal Development; 2) Program Review; 3) Phase IV: Innovative Seal with Blade Alloy Spring; 4) Spring Design; 5) Phase IV: Innovative Seal with Blade Alloy Spring; 6) Phase IV: Testing Results; 7) Seal Seating Load; 8) Spring Seal Manufacturing; and 9) Other Applications for High Temperature Spring Design.
Sofianidis, George; Dimitriou, Anna-Maria; Hatzitaki, Vassilia
2017-07-01
The present study was designed to compare the effectiveness of exercise programs with Pilates and Latin dance on older adults' static and dynamic balance. Thirty-two older adults were divided into three groups: Pilates group, Dance group, and Control group. Static and dynamic balance was assessed with the following tasks: (a) tandem stance, (b) one-leg stance, and (c) periodic sway with and without metronome guidance. Analysis revealed a significant reduction of the trunk sway amplitude during the tandem stance with eyes closed, a reduction in the center of pressure (CoP) displacement during one-leg stance, and an increase in the amplitude of trunk oscillation during the sway task for both intervention groups, and a reduction in the standard deviation of the CoP displacement during the metronome-paced task only for the dance group. The differences in specific balance indices between the two programs suggest some specific adaptations that may provide useful knowledge for the selection of exercises that are better tailored to the needs of older adults.
Adaptive format conversion for scalable video coding
NASA Astrophysics Data System (ADS)
Wan, Wade K.; Lim, Jae S.
2001-12-01
The enhancement layer in many scalable coding algorithms is composed of residual coding information. There is another type of information that can be transmitted instead of (or in addition to) residual coding. Since the encoder has access to the original sequence, it can utilize adaptive format conversion (AFC) to generate the enhancement layer and transmit the different format conversion methods as enhancement data. This paper investigates the use of adaptive format conversion information as enhancement data in scalable video coding. Experimental results are shown for a wide range of base layer qualities and enhancement bitrates to determine when AFC can improve video scalability. Since the parameters needed for AFC are small compared to residual coding, AFC can provide video scalability at low enhancement layer bitrates that are not possible with residual coding. In addition, AFC can also be used in addition to residual coding to improve video scalability at higher enhancement layer bitrates. Adaptive format conversion has not been studied in detail, but many scalable applications may benefit from it. An example of an application that AFC is well-suited for is the migration path for digital television where AFC can provide immediate video scalability as well as assist future migrations.
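The core AFC decision can be sketched on a toy one-dimensional signal (a hedged illustration; the method set, block size, and 2x conversion here are invented for the example, not taken from the paper): per block, the encoder tries each format-conversion method against the original it has access to, and transmits only the index of the best one as enhancement data.

```python
# Hedged AFC sketch: per-block selection among format-conversion methods.

def upsample(block, method):
    """2x upsampling by two hypothetical conversion methods."""
    out = []
    for i, v in enumerate(block):
        out.append(v)
        if method == 0:                       # sample-and-hold
            out.append(v)
        else:                                 # average with the next sample
            nxt = block[i + 1] if i + 1 < len(block) else v
            out.append((v + nxt) / 2)
    return out

original = [0, 1, 2, 3, 4, 5, 6, 7]           # "full-format" frame
base = original[::2]                          # base layer: [0, 2, 4, 6]

block_size = 2                                # base-layer samples per block
indices = []                                  # the AFC enhancement payload
for s in range(0, len(base), block_size):
    blk = base[s:s + block_size]
    ref = original[2 * s: 2 * (s + block_size)]
    errs = [sum((a - b) ** 2 for a, b in zip(upsample(blk, m), ref))
            for m in (0, 1)]
    indices.append(min((e, m) for m, e in enumerate(errs))[1])

print(indices)  # -> [1, 1]: averaging beats sample-and-hold on a ramp
```

The payload is one small integer per block, which is why AFC can deliver scalability at enhancement bitrates far below what residual coding needs.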
The Use of Audio and Animation in Computer Based Instruction.
ERIC Educational Resources Information Center
Koroghlanian, Carol; Klein, James D.
This study investigated the effects of audio, animation, and spatial ability in a computer-based instructional program for biology. The program presented instructional material via text or audio with lean text and included eight instructional sequences presented either via static illustrations or animations. High school students enrolled in a…
Trajectories of Moving Charges in Static Electric Fields.
ERIC Educational Resources Information Center
Kirkup, L.
1986-01-01
Describes the implementation of a trajectory-plotting program for a microcomputer; shows how it may be used to demonstrate the focusing effect of a simple electrostatic lens. The computer program is listed and diagrams are included that show comparisons of trajectories of negative charges in the vicinity of positive charges. (TW)
The TOTEM DAQ based on the Scalable Readout System (SRS)
NASA Astrophysics Data System (ADS)
Quinto, Michele; Cafagna, Francesco S.; Fiergolski, Adrian; Radicioni, Emilio
2018-02-01
The TOTEM (TOTal cross section, Elastic scattering and diffraction dissociation Measurement at the LHC) experiment at the LHC has been designed to measure the total proton-proton cross-section and study elastic and diffractive scattering at LHC energies. In order to cope with the increased machine luminosity and the higher statistics required by the extension of the TOTEM physics program, approved for the LHC's Run Two phase, the previous VME-based data acquisition system has been replaced with a new one based on the Scalable Readout System. The system features an aggregated data throughput of 2 GB/s towards the online storage system. This makes it possible to sustain a maximum trigger rate of ~24 kHz, to be compared with the 1 kHz rate of the previous system. The trigger rate is further improved by implementing zero-suppression and second-level hardware algorithms in the Scalable Readout System. The new system fulfils the requirements for increased efficiency, providing higher bandwidth and increasing the purity of the recorded data. Moreover, full compatibility has been guaranteed with the legacy front-end hardware, as well as with the DAQ interface of the CMS experiment and with the LHC's Timing, Trigger and Control distribution system. In this contribution we describe in detail the architecture of the full system and its performance as measured during the commissioning phase at the LHC Interaction Point.
Marathe, Aniruddha P.; Harris, Rachel A.; Lowenthal, David K.; ...
2015-12-17
The use of clouds to execute high-performance computing (HPC) applications has greatly increased recently. Clouds provide several potential advantages over traditional supercomputers and in-house clusters. The most popular cloud is currently Amazon EC2, which provides fixed-cost and variable-cost, auction-based options. The auction market trades lower cost for potential interruptions that necessitate checkpointing; if the market price exceeds the bid price, a node is taken away from the user without warning. We explore techniques to maximize performance per dollar given a time constraint within which an application must complete. Specifically, we design and implement multiple techniques to reduce expected cost by exploiting redundancy in the EC2 auction market. We then design an adaptive algorithm that selects a scheduling algorithm and determines the bid price. We show that our adaptive algorithm executes programs up to seven times cheaper than using the on-demand market and up to 44 percent cheaper than the best non-redundant, auction-market algorithm. We extend our adaptive algorithm to incorporate application scalability characteristics for further cost savings. In conclusion, we show that the adaptive algorithm informed with scalability characteristics of applications achieves up to 56 percent cost savings compared to the expected cost for the base adaptive algorithm run at a fixed, user-defined scale.
Parallelizing Data-Centric Programs
2013-09-25
results than current techniques, such as ImageWebs [HGO+10], given the same budget of matches performed. 4.2 Scalable Parallel Similarity Search The work...algorithms. 5 Data-Driven Applications in the Cloud In this project, we investigated what happens when data-centric software is moved from expensive custom ...returns appropriate answer tuples. Figure 9 (b) shows the mutual constraint satisfaction that takes place in answering for 122. The intent is that
Computing Gröbner and Involutive Bases for Linear Systems of Difference Equations
NASA Astrophysics Data System (ADS)
Yanovich, Denis
2018-02-01
The computation of involutive bases and Gröbner bases for linear systems of difference equations is addressed, and its importance for physical and mathematical problems is discussed. The algorithm and issues concerning its implementation in C are presented, and calculation times are compared with those of competing programs. The paper ends with considerations on the parallel version of this implementation and its scalability.
Improving the Accuracy and Scalability of Discriminative Learning Methods for Markov Logic Networks
2011-05-01
9 2.2 Inductive Logic Programming and Aleph . . . . . . . . . . . . 10 2.3 MLNs and Alchemy ...positive examples. Aleph allows users to customize each of 10 these steps, and thereby supports a variety of specific algorithms. 2.3 MLNs and Alchemy An...tural motifs. By limiting the search to each unique motif, LSM is able to find good clauses in an efficient manner. Alchemy (Kok, Singla, Richardson
Compact Buried Ducts in a Hot-Humid Climate House
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mallay, D.
2016-01-01
A system of compact, buried ducts provides a high-performance and cost-effective solution for delivering conditioned air throughout the building. This report outlines research activities that are expected to facilitate adoption of compact buried duct systems by builders. The results of this research would be scalable to many new house designs in most climates and markets, leading to wider industry acceptance and building code and energy program approval.
NASA Astrophysics Data System (ADS)
Liu, Shuangyi; Huang, Limin; Li, Wanlu; Liu, Xiaohua; Jing, Shui; Li, Jackie; O'Brien, Stephen
2015-07-01
Colloidal perovskite oxide nanocrystals have attracted a great deal of interest owing to the ability to tune physical properties by virtue of the nanoscale, and to generate thin film structures under mild chemical conditions, relying on self-assembly or heterogeneous mixing. This is particularly true for ferroelectric/dielectric perovskite oxide materials, for which device applications cover piezoelectrics, MEMs, memory, gate dielectrics and energy storage. The synthesis of complex oxide nanocrystals, however, continues to present issues pertaining to quality, yield, % crystallinity, and purity, and may also suffer from tedious separation and purification processes, which are disadvantageous to scaling production. We report a simple, green and scalable "self-collection" growth method that produces uniform and aggregate-free colloidal perovskite oxide nanocrystals including BaTiO3 (BT), BaxSr1-xTiO3 (BST) and the quaternary oxide BaSrTiHfO3 (BSTH) in high crystallinity and high purity. The synthesis approach is solution processed, based on the sol-gel transformation of metal alkoxides in alcohol solvents with controlled or stoichiometric amounts of water and in the stark absence of surfactants and stabilizers, providing pure colloidal nanocrystals in a remarkably low temperature range (15 °C-55 °C). Under a static condition, the nanoscale hydrolysis of the metal alkoxides accomplishes a complete transformation to fully crystallized single-domain perovskite nanocrystals with a passivated surface layer of hydroxyl/alkyl groups, such that the as-synthesized nanocrystals can exist in the form of a super-stable and transparent sol, or self-accumulate to form a highly crystalline solid gel monolith of nearly 100% yield for easy separation/purification. The process produces high-purity, ligand-free nanocrystals with excellent dispersibility in polar solvents, with no impurity remaining in the mother solution other than trace alcohol byproducts (such as isopropanol).
The afforded stable and transparent suspension/solution can be treated as inks, suitable for printing or spin/spray coating, demonstrating great capabilities of this process for fabrication of high performance dielectric thin films. The simple "self-collection" strategy can be described as green and scalable due to the simplified procedure from synthesis to separation/purification, minimum waste generation, and near room temperature crystallization of nanocrystal products with tunable sizes in extremely high yield and high purity. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr02351c
Judge, Lawrence W; Craig, Bruce; Baudendistal, Steve; Bodey, Kimberly J
2009-07-01
Research supports the use of preactivity warm-up and stretching, and the purpose of this study was to determine whether college football programs follow these guidelines. Questionnaires designed to gather demographic, professional, and educational information, as well as specific pre- and postactivity practices, were distributed via e-mail to midwestern collegiate programs from NCAA Division I and III conferences. Twenty-three male coaches (12 from Division IA schools and 11 from Division III schools) participated in the study. Division I schools employed certified strength coaches (CSCS; 100%), whereas Division III schools used mainly strength coordinators (73%), with only 25% CSCS. All programs used preactivity warm-up, with the majority employing 2-5 minutes of sport-specific jogging/running drills. Preactivity stretching (5-10 minutes) was performed in 19 programs (91%), with 2 (9%) performing no preactivity stretching. Thirteen respondents used a combination of static/proprioceptive neuromuscular facilitation/ballistic and dynamic flexibility, 5 used only dynamic flexibility, and 1 used only static stretching. All 12 Division I coaches used stretching, whereas only 9 of the 11 Division III coaches did (p = 0.22). The results indicate that younger coaches did not use preactivity stretching (p = 0.30). The majority of the coaches indicated that they did use postactivity stretching: 11 of the 12 Division I coaches used stretching, whereas only 5 of the 11 Division III coaches used stretching postactivity (p = 0.027). Divisional results show that the majority of Division I coaches use static-style stretching (p = 0.049). The results of this study indicate that divisional status, age, and certification may influence how well research guidelines are followed. Further research is needed to delineate how these factors affect coaching decisions.
Characterization of Friction Joints Subjected to High Levels of Random Vibration
NASA Technical Reports Server (NTRS)
deSantos, Omar; MacNeal, Paul
2012-01-01
This paper describes the test program in detail, including test sample description, test procedures, and vibration test results of multiple test samples. The material pairs used in the experiment were Aluminum-Aluminum, Aluminum-Dicronite-coated Aluminum, and Aluminum-Plasmadize-coated Aluminum. Levels of vibration for each set of twelve samples of each material pairing were gradually increased until all samples experienced substantial displacement. Data were collected on 1) acceleration in all three axes, 2) relative static displacement between vibration runs utilizing photogrammetry techniques, and 3) surface galling and contaminant generation. These data were used to estimate the values of static friction during random vibratory motion when "stick-slip" occurs and to compare these to static friction coefficients measured before and after vibration testing.
FaCSI: A block parallel preconditioner for fluid-structure interaction in hemodynamics
NASA Astrophysics Data System (ADS)
Deparis, Simone; Forti, Davide; Grandperrin, Gwenol; Quarteroni, Alfio
2016-12-01
Modeling Fluid-Structure Interaction (FSI) in the vascular system is mandatory to reliably compute mechanical indicators in vessels undergoing large deformations. In order to cope with the computational complexity of the coupled 3D FSI problem after discretization in space and time, a parallel solution is often essential. In this paper we propose a new block parallel preconditioner for the coupled linearized FSI system obtained after space and time discretization. We name it FaCSI to indicate that it exploits the Factorized form of the linearized FSI matrix, the use of static Condensation to formally eliminate the interface degrees of freedom of the fluid equations, and the use of a SIMPLE preconditioner for saddle-point problems. FaCSI is built upon a block Gauss-Seidel factorization of the FSI Jacobian matrix and it uses ad-hoc preconditioners for each physical component of the coupled problem, namely the fluid, the structure and the geometry. In the fluid subproblem, after operating static condensation of the interface fluid variables, we use a SIMPLE preconditioner on the reduced fluid matrix. Moreover, to efficiently deal with a large number of processes, FaCSI exploits efficient single-field preconditioners, e.g., based on domain decomposition or the multigrid method. We measure the parallel performance of FaCSI on a benchmark cylindrical geometry and on a problem of physiological interest, namely the blood flow through a patient-specific femoropopliteal bypass. We analyze the dependence of the number of linear solver iterations on the core count (scalability of the preconditioner) and on the mesh size (optimality).
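The block Gauss-Seidel structure underlying this kind of preconditioner can be sketched with scalar stand-ins (a hedged illustration only: FaCSI's actual blocks are the large fluid, structure, and geometry operators, each with its own inner preconditioner): precondition a 2x2 block system with its block lower-triangular part, applied by forward substitution.

```python
# Scalar sketch of a block Gauss-Seidel preconditioner inside a
# preconditioned Richardson iteration x <- x + M^{-1} (rhs - J x).

# Jacobian-like 2x2 system J = [[a, b], [c, d]] with a dominant diagonal.
a, b, c, d = 4.0, 0.3, 0.2, 5.0
rhs = (1.0, 2.0)

def apply_prec(r1, r2):
    """Apply M^{-1} with M = [[a, 0], [c, d]]: forward block substitution."""
    y1 = r1 / a              # solve the first block
    y2 = (r2 - c * y1) / d   # propagate the coupling, solve the second
    return y1, y2

x1 = x2 = 0.0
for _ in range(50):
    r1 = rhs[0] - (a * x1 + b * x2)   # residual, first block
    r2 = rhs[1] - (c * x1 + d * x2)   # residual, second block
    dx1, dx2 = apply_prec(r1, r2)
    x1, x2 = x1 + dx1, x2 + dx2

# Converged to the exact solution of J x = rhs.
print(abs(a * x1 + b * x2 - rhs[0]) < 1e-9 and
      abs(c * x1 + d * x2 - rhs[1]) < 1e-9)  # -> True
```

The iteration converges quickly because the preconditioner captures the dominant (block-triangular) part of J, leaving only the weak off-diagonal coupling b to be iterated out; the same reasoning motivates per-field preconditioners in the full FSI setting.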
Compiler-assisted static checkpoint insertion
NASA Technical Reports Server (NTRS)
Long, Junsheng; Fuchs, W. K.; Abraham, Jacob A.
1992-01-01
This paper describes a compiler-assisted approach for static checkpoint insertion. Instead of fixing the checkpoint location before program execution, a compiler-enhanced polling mechanism is utilized to maintain both the desired checkpoint intervals and reproducible checkpoint locations. The technique has been implemented in a GNU CC compiler for Sun 3 and Sun 4 (Sparc) processors. Experiments demonstrate that the approach provides stable checkpoint intervals and reproducible checkpoint placements with performance overhead comparable to a previously presented compiler-assisted dynamic scheme (CATCH) utilizing the system clock.
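The polling idea can be sketched as follows (names and the interval value are illustrative, not taken from the paper's GNU CC implementation): the compiler would insert a poll call at fixed program points such as loop back-edges, so checkpoints can only fire at those reproducible locations, while a clock test inside the poll keeps the intervals close to the desired length.

```python
import pickle
import time

# Hedged sketch of compiler-enhanced polling checkpoints.
CHECKPOINT_INTERVAL = 0.001   # seconds (hypothetical)
_last = time.monotonic()
checkpoints = []

def take_poll(state):
    """Would be inserted by the compiler at loop back-edges: checkpoint
    only here (reproducible location) and only when the interval elapsed
    (stable interval)."""
    global _last
    if time.monotonic() - _last >= CHECKPOINT_INTERVAL:
        checkpoints.append(pickle.dumps(state))   # snapshot at a known point
        _last = time.monotonic()

total = 0
for i in range(200_000):
    total += i
    take_poll({"i": i, "total": total})           # the inserted poll

print(len(checkpoints) >= 1)  # -> True
```

Because the snapshot is always taken at the same program point, the saved state has a well-defined meaning, unlike an asynchronous signal-driven checkpoint that may interrupt the program anywhere.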
Some studies on the use of NASTRAN for nuclear power plant structural analysis and design
NASA Technical Reports Server (NTRS)
Setlur, A. V.; Valathur, M.
1973-01-01
Studies made on the use of NASTRAN for nuclear power plant analysis and design are presented. These studies indicate that NASTRAN could be effectively used for static, dynamic and special purpose problems encountered in the design of such plants. Normal mode capability of NASTRAN is extended through a post-processor program to handle seismic analysis. Static and dynamic substructuring is discussed. Extension of NASTRAN to include the needs in the civil engineering industry is discussed.
Thermal stress analysis of reusable surface insulation for shuttle
NASA Technical Reports Server (NTRS)
Ojalvo, I. U.; Levy, A.; Austin, F.
1974-01-01
An iterative procedure for accurately determining tile stresses associated with static mechanical and thermally induced internal loads is presented. The necessary conditions for convergence of the method are derived. An user-oriented computer program based upon the present method of analysis was developed. The program is capable of analyzing multi-tiled panels and determining the associated stresses. Typical numerical results from this computer program are presented.
Methods, media, and systems for detecting attack on a digital processing device
Stolfo, Salvatore J.; Li, Wei-Jen; Keromytis, Angelos D.; Androulaki, Elli
2014-07-22
Methods, media, and systems for detecting attack are provided. In some embodiments, the methods include: comparing at least part of a document to a static detection model; determining whether attacking code is included in the document based on the comparison of the document to the static detection model; executing at least part of the document; determining whether attacking code is included in the document based on the execution of the at least part of the document; and if attacking code is determined to be included in the document based on at least one of the comparison of the document to the static detection model and the execution of the at least part of the document, reporting the presence of an attack. In some embodiments, the methods include: selecting a data segment in at least one portion of an electronic document; determining whether the arbitrarily selected data segment can be altered without causing the electronic document to result in an error when processed by a corresponding program; in response to determining that the arbitrarily selected data segment can be altered, arbitrarily altering the data segment in the at least one portion of the electronic document to produce an altered electronic document; and determining whether the corresponding program produces an error state when the altered electronic document is processed by the corresponding program.
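The second embodiment can be illustrated with a toy sketch (the document format and parser below are entirely hypothetical, invented for illustration): arbitrarily alter a data segment, re-process the document, and record which segments can be altered without producing an error; such "don't-care" segments are exactly where attack code could hide undetected.

```python
def parse_toy_document(data: bytes):
    """Toy 'corresponding program': accepts documents with a fixed
    4-byte magic header and a length byte; raises on violations.
    Payload bytes are deliberately not validated."""
    if data[:4] != b"TOYF":
        raise ValueError("bad magic")
    if len(data) < 5 or data[4] != len(data) - 5:
        raise ValueError("bad length")
    return data[5:]

def segment_is_alterable(data, start, end):
    """Alter one segment (bit-flip guarantees a change) and see whether
    the consuming program still processes the document without error."""
    altered = bytearray(data)
    for i in range(start, end):
        altered[i] ^= 0xFF
    try:
        parse_toy_document(bytes(altered))
        return True
    except ValueError:
        return False

def alterable_segments(data, seg_len=2):
    """Return segments that can be changed without causing an error --
    candidate hiding places for attacking code."""
    return [
        (s, s + seg_len)
        for s in range(0, len(data) - seg_len + 1, seg_len)
        if segment_is_alterable(data, s, s + seg_len)
    ]
```

For the toy format, only payload segments survive alteration; header and length segments trigger an error, matching the intuition that validated fields cannot conceal foreign bytes.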
Scalable Metadata Management for a Large Multi-Source Seismic Data Repository
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaylord, J. M.; Dodge, D. A.; Magana-Zook, S. A.
In this work, we implemented the key metadata management components of a scalable seismic data ingestion framework to address limitations in our existing system, and to position it for anticipated growth in volume and complexity. We began the effort with an assessment of open source data flow tools from the Hadoop ecosystem. We then began the construction of a layered architecture that is specifically designed to address many of the scalability and data quality issues we experience with our current pipeline. This included implementing basic functionality in each of the layers, such as establishing a data lake, designing a unified metadata schema, tracking provenance, and calculating data quality metrics. Our original intent was to test and validate the new ingestion framework with data from a large-scale field deployment in a temporary network. This delivered somewhat unsatisfying results, since the new system immediately identified fatal flaws in the data relatively early in the pipeline. Although this is a correct result, it did not allow us to sufficiently exercise the whole framework. We then widened our scope to process all available metadata from over a dozen online seismic data sources to further test the implementation and validate the design. This experiment also uncovered a higher than expected frequency of certain types of metadata issues that challenged us to further tune our data management strategy to handle them. Our result from this project is a greatly improved understanding of real world data issues, a validated design, and prototype implementations of major components of an eventual production framework. This successfully forms the basis of future development for the Geophysical Monitoring Program data pipeline, which is a critical asset supporting multiple programs.
It also positions us very well to deliver valuable metadata management expertise to our sponsors, and has already resulted in an NNSA Office of Defense Nuclear Nonproliferation commitment to a multi-year project for follow-on work.
A Cross-Platform Infrastructure for Scalable Runtime Application Performance Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dongarra, Jack; Moore, Shirley; Miller, Bart; Hollingsworth, Jeffrey
2005-03-15
The purpose of this project was to build an extensible cross-platform infrastructure to facilitate the development of accurate and portable performance analysis tools for current and future high performance computing (HPC) architectures. Major accomplishments include tools and techniques for multidimensional performance analysis, as well as improved support for dynamic performance monitoring of multithreaded and multiprocess applications. Previous performance tool development has been limited by the burden of having to re-write a platform-dependent low-level substrate for each architecture/operating system pair in order to obtain the necessary performance data from the system. Manual interpretation of performance data is not scalable for large-scale, long-running applications. The infrastructure developed by this project provides a foundation for building portable and scalable performance analysis tools, with the end goal being to provide application developers with the information they need to analyze, understand, and tune the performance of terascale applications on HPC architectures. The backend portion of the infrastructure provides runtime instrumentation capability and access to hardware performance counters, with thread-safety for shared memory environments and a communication substrate to support instrumentation of multiprocess and distributed programs. Front-end interfaces provide tool developers with a well-defined, platform-independent set of calls for requesting performance data. End-user tools have been developed that demonstrate runtime data collection, on-line and off-line analysis of performance data, and multidimensional performance analysis. The infrastructure is based on two underlying performance instrumentation technologies. These technologies are the PAPI cross-platform library interface to hardware performance counters and the cross-platform Dyninst library interface for runtime modification of executable images.
The Paradyn and KOJAK projects have made use of this infrastructure to build performance measurement and analysis tools that scale to long-running programs on large parallel and distributed systems and that automate much of the search for performance bottlenecks.
QUADrATiC: scalable gene expression connectivity mapping for repurposing FDA-approved therapeutics.
O'Reilly, Paul G; Wen, Qing; Bankhead, Peter; Dunne, Philip D; McArt, Darragh G; McPherson, Suzanne; Hamilton, Peter W; Mills, Ken I; Zhang, Shu-Dong
2016-05-04
Gene expression connectivity mapping has proven to be a powerful and flexible tool for research. Its application has been shown in a broad range of research topics, most commonly as a means of identifying potential small molecule compounds, which may be further investigated as candidates for repurposing to treat diseases. The public release of voluminous data from the Library of Integrated Cellular Signatures (LINCS) programme further enhanced the utility and potential of gene expression connectivity mapping in biomedicine. We describe QUADrATiC (http://go.qub.ac.uk/QUADrATiC), a user-friendly tool for the exploration of gene expression connectivity on the subset of the LINCS data set corresponding to FDA-approved small molecule compounds. It enables the identification of compounds with potential for therapeutic repurposing. The software is designed to cope with the increased volume of data over existing tools by taking advantage of multicore computing architectures to provide a scalable solution, which may be installed and operated on a range of computers, from laptops to servers. This scalability is provided by the use of the modern concurrent programming paradigm provided by the Akka framework. The QUADrATiC Graphical User Interface (GUI) has been developed using advanced Javascript frameworks, providing novel visualization capabilities for further analysis of connections. There is also a web services interface, allowing integration with other programs or scripts. QUADrATiC has been shown to provide an improvement over existing connectivity map software, in terms of scope (based on the LINCS data set), applicability (using FDA-approved compounds), usability and speed. It offers biological researchers the potential to analyze transcriptional data and generate candidate therapeutics for focussed study in the lab.
QUADrATiC represents a step change in the process of investigating gene expression connectivity and provides more biologically relevant results than previous alternative solutions.
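The core of connectivity mapping is a connection score between a query gene signature and a ranked reference expression profile. The following is a minimal signed-rank scoring sketch for illustration only; it is not QUADrATiC's actual scoring algorithm, and all names are hypothetical.

```python
def connection_score(ranked_genes, up_genes, down_genes):
    """Minimal signed-rank connection score, normalized to [-1, 1].

    ranked_genes: reference signature, ordered from most up-regulated
    to most down-regulated by the compound.  Query genes expected UP
    score positively when near the top of the reference ranking; genes
    expected DOWN score positively when near the bottom.
    """
    n = len(ranked_genes)
    # Centered rank: top of the list -> +(n-1)/2, bottom -> -(n-1)/2.
    centered = {g: (n - 1) / 2 - i for i, g in enumerate(ranked_genes)}
    raw = (sum(centered[g] for g in up_genes)
           - sum(centered[g] for g in down_genes))
    # Best achievable magnitude for a query of this size, for scaling.
    top = sorted((abs(v) for v in centered.values()), reverse=True)
    max_raw = sum(top[: len(up_genes) + len(down_genes)])
    return raw / max_raw if max_raw else 0.0
```

A strongly positive score suggests the compound mimics the query signature; a strongly negative score suggests it reverses it, which is the pattern of interest when searching for candidate therapeutics.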
A scalable healthcare information system based on a service-oriented architecture.
Yang, Tzu-Hsiang; Sun, Yeali S; Lai, Feipei
2011-06-01
Many existing healthcare information systems are composed of a number of heterogeneous systems and face the important issue of system scalability. This paper first describes the comprehensive healthcare information systems used in National Taiwan University Hospital (NTUH) and then presents a service-oriented architecture (SOA)-based healthcare information system (HIS) based on the service standard HL7. The proposed architecture focuses on system scalability, in terms of both hardware and software. Moreover, we describe how scalability is implemented in rightsizing, service groups, databases, and hardware scalability. Although SOA-based systems sometimes display poor performance, a performance evaluation of our SOA-based HIS shows that the average response times for the outpatient, inpatient, and emergency HL7Central systems are 0.035, 0.04, and 0.036 s, respectively. The outpatient, inpatient, and emergency WebUI average response times are 0.79, 1.25, and 0.82 s. The scalability of the rightsizing project and our evaluation results provide evidence that SOA can deliver system scalability and sustainability in a highly demanding healthcare information system.
Parallel Programming Strategies for Irregular Adaptive Applications
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2001-01-01
Achieving scalable performance for dynamic irregular applications is eminently challenging. Traditional message-passing approaches have been making steady progress towards this goal; however, they suffer from complex implementation requirements. The use of a global address space greatly simplifies the programming task, but can degrade the performance for such computations. In this work, we examine two typical irregular adaptive applications, Dynamic Remeshing and N-Body, under competing programming methodologies and across various parallel architectures. The Dynamic Remeshing application simulates flow over an airfoil, and refines localized regions of the underlying unstructured mesh. The N-Body experiment models two neighboring Plummer galaxies that are about to undergo a merger. Both problems demonstrate dramatic changes in processor workloads and interprocessor communication with time; thus, dynamic load balancing is a required component.
Morris, Amanda Sheffield; Robinson, Lara R; Hays-Grudo, Jennifer; Claussen, Angelika H; Hartwig, Sophie A; Treat, Amy E
2017-03-01
In this article, the authors posit that programs promoting nurturing parent-child relationships influence outcomes of parents and young children living in poverty through two primary mechanisms: (a) strengthening parents' social support and (b) increasing positive parent-child interactions. The authors discuss evidence for these mechanisms as catalysts for change and provide examples from selected parenting programs that support the influence of nurturing relationships on child and parenting outcomes. The article focuses on prevention programs targeted at children and families living in poverty and closes with a discussion of the potential for widespread implementation and scalability for public health impact. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.
A lower-limb training program to improve balance in healthy elderly women using the T-bow device.
Chulvi-Medrano, Iván; Colado, Juan C; Pablos, Carlos; Naclerio, Fernando; García-Massó, Xavier
2009-06-01
Ageing impairs balance, which increases the risk of falls. Fall-related injuries are a serious health problem associated with dependency and disability in the elderly and result in high costs to public health systems. This study aims to determine the effects of a training program to develop balance using a new device called the T-Bow. A total of 28 women > 65 years were randomly assigned to an experimental group (EG) (n = 18; 69.50 [0.99] years) or a control group (CG) (n = 10; 70.70 [2.18] years). A program for lower limbs was applied for 8 weeks using 5 exercises on the T-Bow: squat, lateral and frontal swings, lunges, and plantarflexions. The intensity of the exercises was controlled by time of exposure, support base, and ratings of perceived exertion. Clinical tests were used to evaluate variables of balance. Static balance was measured by a 1-leg balance test (unipedal stance test), dynamic balance was measured by the 8-foot-up-and-go test, and overall balance was measured using the Tinetti test. Results for the EG showed an increase of 35.2% in static balance (P < 0.005), 12.7% in dynamic balance (P < 0.005), and 5.9% in overall balance (P > 0.05). Results for the CG showed a decline of 5.79% in static balance (P > 0.05) but no change in the other balance variables. Thus the data suggest that implementing a training program using the T-Bow could improve balance in healthy older women.
NASA Astrophysics Data System (ADS)
Jing, Changfeng; Liang, Song; Ruan, Yong; Huang, Jie
2008-10-01
During the urbanization process, when facing complex requirements of city development, ever-growing urban data, rapid development of planning business and increasing planning complexity, a scalable, extensible urban planning management information system is urgently needed. PM2006 is such a system, designed to address these problems. In response to the status and problems in urban planning, the scalability and extensibility of PM2006 are introduced, including business-oriented workflow extensibility, the scalability of its DLL-based architecture, flexibility with respect to GIS platforms and databases, and scalable data updating and maintenance. It is verified that the PM2006 system has good extensibility and scalability, which can meet the requirements of all levels of administrative divisions and can adapt to ever-growing changes in urban planning business. At the end of this paper, the application of PM2006 in the Urban Planning Bureau of Suzhou city is described.
The Effect of Audio and Animation in Multimedia Instruction
ERIC Educational Resources Information Center
Koroghlanian, Carol; Klein, James D.
2004-01-01
This study investigated the effects of audio, animation, and spatial ability in a multimedia computer program for high school biology. Participants completed a multimedia program that presented content by way of text or audio with lean text. In addition, several instructional sequences were presented either with static illustrations or animations.…
Orbit attitude processor. STS-1 bench program verification test plan
NASA Technical Reports Server (NTRS)
Mcclain, C. R.
1980-01-01
A plan for the static verification of the STS-1 ATT PROC ORBIT software requirements is presented. The orbit version of the SAPIENS bench program is used to generate the verification data. A brief discussion of the simulation software and flight software modules is presented along with a description of the test cases.
NASA Technical Reports Server (NTRS)
Whetstone, W. D.
1976-01-01
The functions and operating rules of the SPAR system, which is a group of computer programs used primarily to perform stress, buckling, and vibrational analyses of linear finite element systems, were given. The following subject areas were discussed: basic information, structure definition, format system matrix processors, utility programs, static solutions, stresses, sparse matrix eigensolver, dynamic response, graphics, and substructure processors.
Relationship between antigravity control and postural control in young children.
Sellers, J S
1988-04-01
The purposes of this study were 1) to determine the relationship between antigravity control (supine flexion and prone extension) and postural control (static and dynamic balance), 2) to determine the quality of antigravity and postural control, and 3) to determine whether sex and ethnic group differences correlate with differences in antigravity control and postural control in young children. I tested 107 black, Hispanic, and Caucasian children in a Head Start program, with a mean age of 61 months. The study results showed significant relationships between antigravity control and postural control. Subjects' supine flexion performance was significantly related to the quantity and quality of their static and dynamic balance performance, whereas prone extension performance was related only to the quality of dynamic balance performance. Quality scale measurements (r = .90) indicated that the children in this study had not yet developed full antigravity or postural control. The study results revealed differences between sexes in the quality of static balance and prone extension performance and ethnic differences in static balance, dynamic balance, and prone extension performance.
Experimental Results from the Active Aeroelastic Wing Wind Tunnel Test Program
NASA Technical Reports Server (NTRS)
Heeg, Jennifer; Spain, Charles V.; Florance, James R.; Wieseman, Carol D.; Ivanco, Thomas G.; DeMoss, Joshua; Silva, Walter A.; Panetta, Andrew; Lively, Peter; Tumwa, Vic
2005-01-01
The Active Aeroelastic Wing (AAW) program is a cooperative effort among NASA, the Air Force Research Laboratory and the Boeing Company, encompassing flight testing, wind tunnel testing and analyses. The objective of the AAW program is to investigate the improvements that can be realized by exploiting aeroelastic characteristics, rather than viewing them as a detriment to vehicle performance and stability. To meet this objective, a wind tunnel model was crafted to duplicate the static aeroelastic behavior of the AAW flight vehicle. The model was tested in the NASA Langley Transonic Dynamics Tunnel in July and August 2004. The wind tunnel investigation served the program goal in three ways. First, the wind tunnel provided a benchmark for comparison with the flight vehicle and various levels of theoretical analyses. Second, it provided detailed insight highlighting the effects of individual parameters upon the aeroelastic response of the AAW vehicle. This parameter identification can then be used for future aeroelastic vehicle design guidance. Third, it provided data to validate scaling laws and their applicability with respect to statically scaled aeroelastic models.
Transputer parallel processing at NASA Lewis Research Center
NASA Technical Reports Server (NTRS)
Ellis, Graham K.
1989-01-01
The transputer parallel processing lab at NASA Lewis Research Center (LeRC) consists of 69 processors (transputers) that can be connected into various networks for use in general purpose concurrent processing applications. The main goal of the lab is to develop concurrent scientific and engineering application programs that will take advantage of the computational speed increases available on a parallel processor over the traditional sequential processor. Current research involves the development of basic programming tools. These tools will help standardize program interfaces to specific hardware by providing a set of common libraries for applications programmers. The thrust of the current effort is in developing a set of tools for graphics rendering/animation. The applications programmer currently has two options for on-screen plotting. One option can be used for static graphics displays and the other can be used for animated motion. The option for static display involves the use of 2-D graphics primitives that can be called from within an application program. These routines perform the standard 2-D geometric graphics operations in real-coordinate space as well as allowing multiple windows on a single screen.
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Adelman, H. M.
1984-01-01
Orbiting spacecraft such as large space antennas have to maintain a highly accurate shape to operate satisfactorily. Such structures require active and passive controls to maintain an accurate shape under a variety of disturbances. Methods for the optimum placement of control actuators for correcting static deformations are described. In particular, attention is focused on the case where control locations have to be selected from a large set of available sites, so that integer programming methods are called for. The effectiveness of three heuristic techniques for obtaining a near-optimal site selection is compared. In addition, efficient reanalysis techniques for the rapid assessment of control effectiveness are presented. Two examples are used to demonstrate the methods: a simple beam structure and a 55-m space-truss parabolic antenna.
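One plausible heuristic for this kind of site selection (a matching-pursuit-style greedy sketch for illustration; not claimed to be any of the paper's three techniques, and all names are hypothetical) repeatedly picks the candidate actuator whose scaled influence best reduces the remaining deformation:

```python
def greedy_actuator_sites(influence, target, k):
    """Greedy site selection for static shape correction.

    influence: {site_id: influence vector}, the deformation produced by
    a unit input at that site.  target: deformation to cancel.  Picks k
    sites; each round chooses the site whose scaled influence removes
    the most residual norm, then deflates the residual.
    """
    residual = list(target)
    chosen = []
    for _ in range(k):
        best, best_gain = None, 0.0
        for site, a in influence.items():
            if site in chosen:
                continue
            na2 = sum(x * x for x in a)
            if na2 == 0:
                continue
            proj = sum(r * x for r, x in zip(residual, a))
            gain = proj * proj / na2     # squared-norm reduction
            if gain > best_gain:
                best, best_gain = site, gain
        if best is None:
            break                        # nothing left helps
        a = influence[best]
        c = sum(r * x for r, x in zip(residual, a)) / sum(x * x for x in a)
        residual = [r - c * x for r, x in zip(residual, a)]
        chosen.append(best)
    return chosen, residual
```

This runs in O(k * sites * dof) time, which is the appeal of heuristics over exact integer programming when the set of available sites is large.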
Dynamic Analysis of Spur Gear Transmissions (DANST). PC Version 3.00 User Manual
NASA Technical Reports Server (NTRS)
Oswald, Fred B.; Lin, Hsiang Hsi; Delgado, Irebert R.
1996-01-01
DANST is a FORTRAN computer program for static and dynamic analysis of spur gear systems. The program can be used for parametric studies to predict the static transmission error, dynamic load, tooth bending stress and other properties of spur gears as they are influenced by operating speed, torque, stiffness, damping, inertia, and tooth profile. DANST performs geometric modeling and dynamic analysis for low- or high-contact-ratio spur gears. DANST can simulate gear systems with contact ratios ranging from one to three. It was designed to be easy to use and it is extensively documented in several previous reports and by comments in the source code. This report describes installing and using a new PC version of DANST, covers input data requirements and presents examples.
NASA Technical Reports Server (NTRS)
Hagedorn, N. H.; Prokopius, P. R.
1977-01-01
A test program was conducted to evaluate the design of a heat and product-water removal system to be used with a fuel cell having static water removal and evaporative cooling. The program, which was conducted on a breadboard version of the system, provided a general assessment of the design in terms of operational integrity and transient stability. This assessment showed that, on the whole, the concept appears to be inherently sound but that in refining this design, several facets will require additional study. These involve interactions between pressure regulators in the pumping loop that occur when they are not correctly matched and the question of whether an ejector is necessary in the system.
NASA Technical Reports Server (NTRS)
Baumeister, K. J.; Horowitz, S. J.
1982-01-01
An iterative finite element integral technique is used to predict the sound field radiated from the JT15D turbofan inlet. The sound field is divided into two regions: the sound field within and near the inlet which is computed using the finite element method and the radiation field beyond the inlet which is calculated using an integral solution technique. The velocity potential formulation of the acoustic wave equation was employed in the program. For some single mode JT15D data, the theory and experiment are in good agreement for the far field radiation pattern as well as suppressor attenuation. Also, the computer program is used to simulate flight effects that cannot be performed on a ground static test stand.
Optimizing Interactive Development of Data-Intensive Applications
Interlandi, Matteo; Tetali, Sai Deep; Gulzar, Muhammad Ali; Noor, Joseph; Condie, Tyson; Kim, Miryung; Millstein, Todd
2017-01-01
Modern Data-Intensive Scalable Computing (DISC) systems are designed to process data through batch jobs that execute programs (e.g., queries) compiled from a high-level language. These programs are often developed interactively by posing ad-hoc queries over the base data until a desired result is generated. We observe that there can be significant overlap in the structure of these queries used to derive the final program. Yet, each successive execution of a slightly modified query is performed anew, which can significantly increase the development cycle. Vega is an Apache Spark framework that we have implemented for optimizing a series of similar Spark programs, likely originating from a development or exploratory data analysis session. Spark developers (e.g., data scientists) can leverage Vega to significantly reduce the amount of time it takes to re-execute a modified Spark program, reducing the overall time to market for their Big Data applications. PMID:28405637
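The reuse idea behind Vega can be illustrated with a small sketch (hypothetical names; this is not Vega's actual mechanism, which operates on Spark plans): cache intermediate results keyed by the pipeline prefix, so that a re-submitted query sharing a prefix with an earlier run recomputes only from the first changed stage.

```python
class IncrementalPipeline:
    """Cache intermediate results keyed by the (name, params) prefix.

    When a re-submitted pipeline shares a prefix of stages with an
    earlier run, execution resumes from the longest cached prefix
    instead of starting over from the base data.
    """

    def __init__(self, base_data):
        self.base_data = base_data
        self.cache = {}        # prefix key -> materialized result
        self.stages_run = 0    # instrumentation for the example

    def run(self, stages):
        # stages: list of (name, fn, params); the cache key ignores fn
        # identity, so params must capture the stage's behavior.
        result, start = self.base_data, 0
        for i in range(len(stages), 0, -1):        # longest prefix first
            key = tuple((n, p) for n, _, p in stages[:i])
            if key in self.cache:
                result, start = self.cache[key], i
                break
        for i in range(start, len(stages)):        # recompute the rest
            name, fn, params = stages[i]
            result = fn(result, params)
            self.stages_run += 1
            self.cache[tuple((n, p) for n, _, p in stages[: i + 1])] = result
        return result
```

Editing only the last stage of a two-stage query then re-executes one stage instead of two, mirroring the shortened development cycle the paper targets.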
Walters, Glenn D; Deming, Adam; Casbon, Todd
2015-04-01
The purpose of this study was to determine whether the Psychological Inventory of Criminal Thinking Styles (PICTS) was capable of predicting recidivism in 322 male sex offenders released from prison-based sex offender programs in a Midwestern state. The Static-99R and PICTS General Criminal Thinking (GCT), Reactive (R), and Entitlement (En) scores all correlated significantly with general recidivism, the Static-99R correlated significantly with violent recidivism, and the Static-99R score and PICTS GCT, Proactive (P), and En scores correlated significantly with failure to register as a sex offender (FTR) recidivism. Area under the curve effect size estimates varied from small to large, and Cox regression analyses revealed that the PICTS En score achieved incremental validity relative to the Static-99R in predicting general recidivism and the PICTS GCT, P, and En scores achieved incremental validity relative to the Static-99R in predicting FTR recidivism. It is speculated that the PICTS in general and the En scale in particular may have utility in risk management and treatment planning for sex offenders by virtue of their focus on antisocial thinking. © The Author(s) 2014.
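The area-under-the-curve effect size reported in the study is equivalent to the rank-based (Mann-Whitney) probability that a randomly chosen recidivist scores higher on the instrument than a randomly chosen non-recidivist. A minimal sketch with hypothetical data:

```python
def auc(scores_pos, scores_neg):
    """Mann-Whitney AUC: P(score of a recidivist > score of a
    non-recidivist), counting ties as 0.5.  0.5 = chance, 1.0 = perfect
    discrimination."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

On real data one would compute this for, e.g., Static-99R totals against each recidivism outcome; the toy version just makes the "small to large" effect size scale concrete.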
Highly scalable parallel processing of extracellular recordings of Multielectrode Arrays.
Gehring, Tiago V; Vasilaki, Eleni; Giugliano, Michele
2015-01-01
Technological advances in Multielectrode Arrays (MEAs) used for multisite, parallel electrophysiological recordings lead to an ever-increasing amount of raw data being generated. Arrays with hundreds up to a few thousand electrodes are slowly seeing widespread use, and the expectation is that more sophisticated arrays will become available in the near future. In order to process the large data volumes resulting from MEA recordings there is a pressing need for new software tools able to process many data channels in parallel. Here we present a new tool for processing MEA data recordings that makes use of new programming paradigms and recent technology developments to unleash the power of modern highly parallel hardware, such as multi-core CPUs with vector instruction sets or GPGPUs. Our tool builds on and complements existing MEA data analysis packages. It shows high scalability and can be used to speed up some performance-critical pre-processing steps such as data filtering and spike detection, helping to make the analysis of larger data sets tractable.
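Channel-level parallelism of the kind described can be sketched as follows (a stdlib-only illustration, not the tool's implementation; real pipelines would use vectorized or GPU kernels): each channel is filtered and thresholded independently, so channels can be farmed out to a worker pool.

```python
from concurrent.futures import ThreadPoolExecutor
from statistics import median

def detect_spikes(samples, k=5.0):
    """Negative-going threshold crossings at k times a robust noise
    estimate.  Noise is estimated as median(|x|)/0.6745, a common
    robust estimator for extracellular recordings.  Returns the sample
    index where each crossing begins."""
    noise = median(abs(x) for x in samples) / 0.6745
    thr = -k * noise
    spikes, below = [], False
    for i, x in enumerate(samples):
        if x < thr and not below:
            spikes.append(i)           # first sample below threshold
            below = True
        elif x >= thr:
            below = False
    return spikes

def detect_all(channels, workers=4):
    """Run spike detection on every channel in parallel.

    channels: {channel_id: list of samples} -> {channel_id: indices}.
    Channels are independent, so this parallelizes embarrassingly.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {cid: pool.submit(detect_spikes, data)
                   for cid, data in channels.items()}
        return {cid: f.result() for cid, f in futures.items()}
```

Because each channel's work is independent, throughput scales with worker count until memory bandwidth dominates, which is the regime the paper's tool targets.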
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2017-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments. PMID:28190948
Extreme-Scale Stochastic Particle Tracing for Uncertain Unsteady Flow Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Hanqi; He, Wenbin; Seo, Sangmin
2016-11-13
We present an efficient and scalable solution to estimate uncertain transport behaviors using stochastic flow maps (SFMs) for visualizing and analyzing uncertain unsteady flows. SFM computation is extremely expensive because it requires many Monte Carlo runs to trace densely seeded particles in the flow. We alleviate the computational cost by decoupling the time dependencies in SFMs so that we can process adjacent time steps independently and then compose them together for longer time periods. Adaptive refinement is also used to reduce the number of runs for each location. We then parallelize over tasks (packets of particles in our design) to achieve high efficiency in MPI/thread hybrid programming. Such a task model also enables CPU/GPU coprocessing. We demonstrate scalability on two supercomputers, Mira (up to 1M Blue Gene/Q cores) and Titan (up to 128K Opteron cores and 8K GPUs), tracing billions of particles in seconds.
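The decoupling idea can be shown with a deterministic 1-D toy (illustration only; the paper's SFMs are stochastic and higher-dimensional, and all names here are hypothetical): flow maps for adjacent time windows are computed independently on a seed grid, then composed by interpolating each map's output into the next.

```python
def interval_flow_map(velocity, t0, t1, seeds, dt=0.01):
    """Advect seed particles through [t0, t1]; return {seed: end pos}.

    Each interval map depends only on its own time window, so maps for
    adjacent windows can be computed in parallel and composed later.
    """
    fmap = {}
    for s in seeds:
        x, t = s, t0
        while t < t1 - 1e-12:
            x += velocity(x, t) * dt   # forward Euler step
            t += dt
        fmap[s] = x
    return fmap

def interp(fmap, x):
    """Piecewise-linear lookup of an end position for an off-seed start."""
    seeds = sorted(fmap)
    if x <= seeds[0]:
        return fmap[seeds[0]]
    for a, b in zip(seeds, seeds[1:]):
        if x <= b:
            w = (x - a) / (b - a)
            return (1 - w) * fmap[a] + w * fmap[b]
    return fmap[seeds[-1]]

def compose(maps, x0):
    """Chain interval maps: feed each end position into the next map."""
    x = x0
    for m in maps:
        x = interp(m, x)
    return x
```

Composition trades a small interpolation error for the ability to reuse each interval map across many long-time queries, which is what makes the decoupled formulation scale.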
Halligan, Brian D.; Geiger, Joey F.; Vallejos, Andrew K.; Greene, Andrew S.; Twigger, Simon N.
2009-01-01
One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step by step instructions on how to implement the virtual proteomics analysis clusters as well as a list of current available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center website (http://proteomics.mcw.edu/vipdac). PMID:19358578
Request redirection paradigm in medical image archive implementation.
Dragan, Dinu; Ivetić, Dragan
2012-08-01
It is widely recognized that JPEG2000 facilitates issues in medical imaging: storage, communication, sharing, remote access, interoperability, and presentation scalability. Therefore, JPEG2000 support was added to the DICOM standard in Supplement 61. Two approaches to supporting JPEG2000 medical images are explicitly defined by the DICOM standard: replacing the DICOM image format with the corresponding JPEG2000 codestream, or using the Pixel Data Provider service, DICOM Supplement 106. The latter assumes a two-step retrieval of the medical image: a DICOM request and response from a DICOM server, followed by a JPIP request and response from a JPEG2000 server. We propose a novel strategy for transmission of scalable JPEG2000 images extracted from a single codestream over a DICOM network using the DICOM Private Data Element without sacrificing system interoperability. It employs the request redirection paradigm: the DICOM request and response pass to the JPEG2000 server through the DICOM server. The paper presents a programming solution for implementing the request redirection paradigm in a DICOM-transparent manner. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
A Simple, Scalable, Script-based Science Processor
NASA Technical Reports Server (NTRS)
Lynnes, Christopher
2004-01-01
The production of Earth Science data from orbiting spacecraft is an activity that takes place 24 hours a day, 7 days a week. At the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC), this results in as many as 16,000 program executions each day, far too many to be run by human operators. In fact, when the Moderate Resolution Imaging Spectroradiometer (MODIS) was launched aboard the Terra spacecraft in 1999, the automated commercial system for running science processing was able to manage no more than 4,000 executions per day. Consequently, the GES DAAC developed a lightweight system based on the popular Perl scripting language, named the Simple, Scalable, Script-based Science Processor (S4P). S4P automates science processing, allowing operators to focus on the rare problems occurring from anomalies in data or algorithms. S4P has been reused in several systems ranging from routine processing of MODIS data to data mining and is publicly available from NASA.
A Massively Parallel Computational Method of Reading Index Files for SOAPsnv.
Zhu, Xiaoqian; Peng, Shaoliang; Liu, Shaojie; Cui, Yingbo; Gu, Xiang; Gao, Ming; Fang, Lin; Fang, Xiaodong
2015-12-01
SOAPsnv is the software used for identifying single nucleotide variation in cancer genes. However, its performance is yet to match the massive amount of data to be processed. Experiments reveal that the main performance bottleneck of the SOAPsnv software is the pileup algorithm. The original pileup algorithm's I/O process is time-consuming and reads input files inefficiently. Moreover, the scalability of the pileup algorithm is also poor. Therefore, we designed a new algorithm, named BamPileup, aiming to improve sequential read performance; the new pileup algorithm implements an index-based parallel read mode. Using this method, each thread can directly read data starting from a specific position. The results of experiments on the Tianhe-2 supercomputer show that, when reading data in a multi-threaded parallel I/O way, the algorithm's processing time is reduced to 3.9 s and the application program can achieve a speedup of up to 100×. Moreover, the scalability of the new algorithm is also satisfactory.
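The index-based parallel read pattern can be sketched as follows: an index maps each record to a byte offset, so each worker seeks straight to its record instead of scanning sequentially. This is a toy sketch with fixed-size records; BamPileup itself indexes BAM-style input, which is far more involved.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

RECORD_SIZE = 8  # fixed-size records for this toy example

def build_index(path):
    """Byte offset of every record: the index lets a reader seek straight
    to a record instead of scanning the file sequentially."""
    n_records = os.path.getsize(path) // RECORD_SIZE
    return [i * RECORD_SIZE for i in range(n_records)]

def read_indexed(path, offsets):
    """Each worker opens its own handle, seeks to its offset, and reads one
    record: the index-based parallel read pattern."""
    def read_one(offset):
        with open(path, "rb") as f:
            f.seek(offset)
            return f.read(RECORD_SIZE)
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(read_one, offsets))

# demo: eight 8-byte records
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    for i in range(8):
        tmp.write(f"rec{i:05d}".encode())
    data_path = tmp.name

records = read_indexed(data_path, build_index(data_path))
```

Because each worker holds its own file handle and offset, the reads are independent and can proceed concurrently, which is what removes the sequential I/O bottleneck.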
MPEG-4-based 2D facial animation for mobile devices
NASA Astrophysics Data System (ADS)
Riegel, Thomas B.
2005-03-01
The enormous spread of mobile computing devices (e.g. PDA, cellular phone, palmtop, etc.) emphasizes scalable applications, since users like to run their favorite programs on the terminal they operate at that moment. Therefore appliances are of interest which can be adapted to the hardware realities without losing a lot of their functionality. A good example of this is "Facial Animation," which offers an interesting way to achieve such "scalability." By employing MPEG-4, which provides its own profile for facial animation, a solution for low-power terminals including mobile phones is demonstrated. From the generic 3D MPEG-4 face a specific 2D head model is derived, which consists primarily of a portrait image superposed with a suitable warping mesh and adapted 2D animation rules. Thus the animation process of MPEG-4 need not be changed, and standard-compliant facial animation parameters can be used to displace the vertices of the mesh and warp the underlying image accordingly.
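The core of the 2D approach is displacing mesh vertices by offsets derived from facial animation parameters and then resampling the portrait image under the deformed mesh. A minimal sketch of the vertex-displacement step (with hypothetical offset values; image resampling omitted):

```python
def warp_vertices(vertices, offsets):
    """Displace 2D warping-mesh vertices by per-vertex offsets derived from
    animation parameters. (Illustrative values; the underlying image would
    then be resampled under the deformed mesh, which is omitted here.)"""
    return [(x + dx, y + dy) for (x, y), (dx, dy) in zip(vertices, offsets)]

# a jaw-opening-like displacement on a tiny 3-vertex mesh (illustrative)
mesh = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
moved = warp_vertices(mesh, [(0.0, 0.0), (0.0, 0.0), (0.0, 0.2)])
```

Keeping the animation logic as pure vertex displacement is what lets the standard MPEG-4 facial animation parameters drive the 2D model unchanged.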
Scalable Integrated Multi-Mission Support System (SIMSS) Simulator Release 2.0 for GMSEC
NASA Technical Reports Server (NTRS)
Kim, John; Velamuri, Sarma; Casey, Taylor; Bemann, Travis
2012-01-01
Scalable Integrated Multi-Mission Support System (SIMSS) Simulator Release 2.0 software is designed to perform a variety of test activities related to spacecraft simulations and ground segment checks. This innovation uses the existing SIMSS framework, which interfaces with the GMSEC (Goddard Mission Services Evolution Center) Application Programming Interface (API) Version 3.0 message middleware, and allows SIMSS to accept GMSEC standard messages via the GMSEC message bus service. SIMSS is a distributed, component-based, plug-and-play client-server system that is useful for performing real-time monitoring and communications testing. SIMSS runs on one or more workstations, and is designed to be user-configurable, or to use predefined configurations for routine operations. SIMSS consists of more than 100 modules that can be configured to create, receive, process, and/or transmit data. The SIMSS/GMSEC innovation is intended to provide missions with a low-cost solution for implementing their ground systems, as well as to significantly reduce a mission's integration time and risk.
Halligan, Brian D; Geiger, Joey F; Vallejos, Andrew K; Greene, Andrew S; Twigger, Simon N
2009-06-01
One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step-by-step instructions on how to implement the virtual proteomics analysis clusters as well as a list of current available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center Web site ( http://proteomics.mcw.edu/vipdac ).
AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 2, Issue 1
2010-01-01
Researchers in AHPCRC Technical Area 4 focus on improving processes for developing scalable, accurate parallel programs that are easily ported from one ... Virtual levels in Sequoia represent an abstract memory hierarchy without specifying data transfer mechanisms, giving the ...
Scalable Unix commands for parallel processors : a high-performance implementation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ong, E.; Lusk, E.; Gropp, W.
2001-06-22
We describe a family of MPI applications we call the Parallel Unix Commands. These commands are natural parallel versions of common Unix user commands such as ls, ps, and find, together with a few similar commands particular to the parallel environment. We describe the design and implementation of these programs and present some performance results on a 256-node Linux cluster. The Parallel Unix Commands are open source and freely available.
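The shape of such a parallel command is scatter/gather: each rank runs the local operation on its own node and the results are gathered at the root. The sketch below mimics a parallel ls with a thread pool over local directories for illustration; the actual tools use MPI ranks across cluster nodes.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def list_one(directory):
    """Per-'node' worker: list one directory (a stand-in for an MPI rank
    running ls on its own host)."""
    return directory, sorted(os.listdir(directory))

def parallel_ls(directories):
    """Fan the listings out to a worker pool and gather the results: the
    same scatter/gather structure as an MPI parallel command."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        return dict(pool.map(list_one, directories))

# demo on two temporary directories
d1, d2 = tempfile.mkdtemp(), tempfile.mkdtemp()
open(os.path.join(d1, "a.txt"), "w").close()
open(os.path.join(d2, "b.txt"), "w").close()
listing = parallel_ls([d1, d2])
```

In the MPI version, the gather step aggregates per-node output at rank 0, which is what makes the commands scale to hundreds of nodes.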
Information Security Considerations for Applications Using Apache Accumulo
2014-09-01
... Distributed File System; INSCOM: United States Army Intelligence and Security Command; JPA: Java Persistence API; JSON: JavaScript Object Notation; MAC: Mandatory ... MySQL [13]. BigTable can process 20 petabytes per day [14]. High degree of scalability on commodity hardware. NoSQL databases do not rely on highly ... manipulation in relational databases. NoSQL databases each have a unique programming interface that uses a lower-level procedural language (e.g., Java) ...
Scalable Multiplexed Ion Trap (SMIT) Program
2010-12-08
... an integrated micromirror. The symmetric cross and the mirror trap had a number of complex design features. Both traps shaped the electrodes in ... genetic algorithm. 6. Integrated micromirror. The Gen II linear trap (as well as the linear sections of the mirror and the cross) had a number of new ... conventional imaging system constructed from off-the-shelf optical components and a micromirror located very close to the ion. A large fraction of photons ...
KOJAK: Scalable Semantic Link Discovery Via Integrated Knowledge-Based and Statistical Reasoning
2006-11-01
... program can find interesting connections in a network without having to learn the patterns of interestingness beforehand. The key advantage of our ... Interesting Instances in Semantic Graphs: Below we describe how the UNICORN framework can discover interesting instances in a multi-relational dataset. ... We can now describe how UNICORN solves the first problem of finding the top interesting nodes in a semantic net by ranking them according to ...
2013-10-22
Jayakumar, Paramsothy; Melanz, Daniel; MacLennan, Jamie; Gorsich, David (U.S. Army TARDEC, Warren, MI, USA); Senatore, Carmine; Iagnemma, Karl
[Report documentation page residue; only the title fragment "... Modeling and Uncertainty Propagation" and citation fragments are recoverable.]
Lam, Sean Shao Wei; Zhang, Ji; Zhang, Zhong Cheng; Oh, Hong Choon; Overton, Jerry; Ng, Yih Yng; Ong, Marcus Eng Hock
2015-02-01
Dynamically reassigning ambulance deployment locations throughout a day to balance ambulance availability and demand can be effective in reducing response times. The objectives of this study were to model dynamic ambulance allocation plans in Singapore based on the system status management (SSM) strategy and to evaluate the dynamic deployment plans using a discrete event simulation (DES) model. Geographical information system (GIS)-based analysis and mathematical programming were used to develop the dynamic ambulance deployment plans for SSM based on ambulance call data from January 1, 2011, to June 30, 2011. A DES model that incorporated these plans was used to compare the performance of the dynamic SSM strategy against static reallocation policies under various demand and travel time uncertainties. When the deployment plans based on the SSM strategy were followed strictly, the DES model showed that the GIS-based plans resulted in approximately a 13-second reduction in median response times compared to the static reallocation policy, whereas the mathematical programming-based plans resulted in approximately a 44-second reduction. The response time and coverage performances were still better than under the static policy when reallocations happened for only 60% of all the recommended moves. Dynamically reassigning ambulance deployment locations based on the SSM strategy can result in superior response times and coverage performance compared to static reallocation policies even when the dynamic plans are not followed strictly. Copyright © 2014 Elsevier Inc. All rights reserved.
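The DES approach described above processes calls as timestamped events and assigns the next free unit to each. A minimal toy model of that event loop (not the paper's Singapore simulation, which also models geography and travel times):

```python
import heapq

def simulate(calls, n_ambulances, service_time):
    """Minimal discrete event simulation: calls arrive at given times and
    the earliest-available ambulance serves each one; returns per-call
    waiting times. (Toy sketch; real models add travel and location.)"""
    free_at = [0.0] * n_ambulances        # when each ambulance is next free
    heapq.heapify(free_at)
    waits = []
    for t in sorted(calls):
        ready = heapq.heappop(free_at)    # earliest-available ambulance
        start = max(t, ready)
        waits.append(start - t)
        heapq.heappush(free_at, start + service_time)
    return waits

waits = simulate(calls=[0, 1, 2, 3], n_ambulances=2, service_time=5)
```

Dynamic reallocation policies would change where the freed ambulance is posted between calls; in this toy version, only availability is tracked.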
Augmenting Traditional Static Analysis With Commonly Available Metadata
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, Devin
Developers and security analysts have long used static analysis to analyze programs for defects and vulnerabilities, with some success. Generally a static analysis tool is run on the source code for a given program, flagging areas of code that need to be further inspected by a human analyst. These areas may be obvious bugs like potential buffer overflows, information leakage flaws, or the use of uninitialized variables. These tools tend to work fairly well - every year they find many important bugs. The tools are all the more impressive considering that they only examine the source code, which may be very complex. Now consider the amount of data available that these tools do not analyze. There are many pieces of information that would prove invaluable for finding bugs in code, such as a history of bug reports, a history of all changes to the code, information about committers, etc. By leveraging all this additional data, it is possible to find more bugs with less user interaction, as well as track useful metrics such as the number and type of defects injected by each committer. This dissertation provides a method for leveraging development metadata to find bugs that would otherwise be difficult to find using standard static analysis tools. We showcase two case studies that demonstrate the ability to find 0-day vulnerabilities in large and small software projects by finding new vulnerabilities in the cpython and Roundup open source projects.
CoMD Implementation Suite in Emerging Programming Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haque, Riyaz; Reeve, Sam; Juallmes, Luc
CoMD-Em is a software implementation suite of the CoMD [4] proxy app using different emerging programming models. It is intended to analyze the features and capabilities of novel programming models that could help ensure code and performance portability and scalability across heterogeneous platforms while improving programmer productivity. Another goal is to provide the authors and vendors with meaningful feedback regarding the capabilities and limitations of their models. The actual application is a classical molecular dynamics (MD) simulation using either the Lennard-Jones (LJ) method or the embedded atom method (EAM) for primary particle interaction. The code can be extended to support alternate interaction models. The code is expected to run on a wide class of heterogeneous hardware configurations, including shared/distributed/hybrid memory, GPUs, and any other platform supported by the underlying programming model.
The theory of interface slicing
NASA Technical Reports Server (NTRS)
Beck, Jon
1993-01-01
Interface slicing is a new tool which was developed to facilitate reuse-based software engineering, by addressing the following problems, needs, and issues: (1) size of systems incorporating reused modules; (2) knowledge requirements for program modification; (3) program understanding for reverse engineering; (4) module granularity and domain management; and (5) time and space complexity of conventional slicing. The definition of a form of static program analysis called interface slicing is addressed.
The FORTRAN static source code analyzer program (SAP) system description
NASA Technical Reports Server (NTRS)
Decker, W.; Taylor, W.; Merwarth, P.; Oneill, M.; Goorevich, C.; Waligora, S.
1982-01-01
A source code analyzer program (SAP) designed to assist personnel in conducting studies of FORTRAN programs is described. SAP scans FORTRAN source code and produces reports that present statistics and measures of the statements and structures that make up a module. The processing performed by SAP, and the routines, COMMON blocks, and files that SAP uses, are described. The system generation procedure for SAP is also presented.
The NASTRAN theoretical manual
NASA Technical Reports Server (NTRS)
1981-01-01
Designed to accommodate additions and modifications, this commentary on NASTRAN describes the problem-solving capabilities of the program in a narrative fashion and presents developments of the analytical and numerical procedures that underlie the program. Seventeen major sections and numerous subsections cover: the organizational aspects of the program, utility matrix routines, static structural analysis, heat transfer, dynamic structural analysis, computer graphics, special structural modeling techniques, error analysis, interaction between structures and fluids, and aeroelastic analysis.
Automated predesign of aircraft
NASA Technical Reports Server (NTRS)
Poe, C. C., Jr.; Kruse, G. S.; Tanner, C. J.; Wilson, P. J.
1978-01-01
Program uses multistation structural-synthesis to size and design box-beam structures for transport aircraft. Program optimizes static strength and scales up to satisfy fatigue and fracture criteria. It has multimaterial capability and library of materials properties, including advanced composites. Program can be used to evaluate impact on weight of variables such as materials, types of construction, structural configurations, minimum gage limits, applied loads, fatigue lives, crack-growth lives, initial crack sizes, and residual strengths.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gartling, D.K.
User instructions are given for the finite element, electromagnetics program, TORO II. The theoretical background and numerical methods used in the program are documented in SAND95-2472. The present document also describes a number of example problems that have been analyzed with the code and provides sample input files for typical simulations. 20 refs., 34 figs., 3 tabs.
SAP- FORTRAN STATIC SOURCE CODE ANALYZER PROGRAM (IBM VERSION)
NASA Technical Reports Server (NTRS)
Manteufel, R.
1994-01-01
The FORTRAN Static Source Code Analyzer program, SAP, was developed to automatically gather statistics on the occurrences of statements and structures within a FORTRAN program and to provide for the reporting of those statistics. Provisions have been made for weighting each statistic and to provide an overall figure of complexity. Statistics, as well as figures of complexity, are gathered on a module by module basis. Overall summed statistics are also accumulated for the complete input source file. SAP accepts as input syntactically correct FORTRAN source code written in the FORTRAN 77 standard language. In addition, code written using features in the following languages is also accepted: VAX-11 FORTRAN, IBM S/360 FORTRAN IV Level H Extended; and Structured FORTRAN. The SAP program utilizes two external files in its analysis procedure. A keyword file allows flexibility in classifying statements and in marking a statement as either executable or non-executable. A statistical weight file allows the user to assign weights to all output statistics, thus allowing the user flexibility in defining the figure of complexity. The SAP program is written in FORTRAN IV for batch execution and has been implemented on a DEC VAX series computer under VMS and on an IBM 370 series computer under MVS. The SAP program was developed in 1978 and last updated in 1985.
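The weighted figure of complexity described above is a weighted sum of per-module statement statistics. A minimal sketch of that computation (the statistic names and weight values here are illustrative; SAP reads its weights from a user-supplied weight file):

```python
def complexity(stats, weights):
    """Weighted figure of complexity in the spirit of SAP: each statement
    statistic is multiplied by a user-assigned weight and the products are
    summed. (Hypothetical weights; SAP takes them from its weight file.)"""
    return sum(stats.get(name, 0) * w for name, w in weights.items())

# per-module statement counts (illustrative)
module_stats = {"GOTO": 4, "IF": 10, "DO": 3, "COMMENT": 25}
# user-chosen weights: penalize GOTOs, reward comments slightly
weights = {"GOTO": 5.0, "IF": 1.5, "DO": 1.0, "COMMENT": -0.1}

figure = complexity(module_stats, weights)
```

Letting users supply the weights, as SAP does, means the same raw statistics can yield different complexity figures tuned to a project's coding standards.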
SAP- FORTRAN STATIC SOURCE CODE ANALYZER PROGRAM (DEC VAX VERSION)
NASA Technical Reports Server (NTRS)
Merwarth, P. D.
1994-01-01
The FORTRAN Static Source Code Analyzer program, SAP, was developed to automatically gather statistics on the occurrences of statements and structures within a FORTRAN program and to provide for the reporting of those statistics. Provisions have been made for weighting each statistic and to provide an overall figure of complexity. Statistics, as well as figures of complexity, are gathered on a module by module basis. Overall summed statistics are also accumulated for the complete input source file. SAP accepts as input syntactically correct FORTRAN source code written in the FORTRAN 77 standard language. In addition, code written using features in the following languages is also accepted: VAX-11 FORTRAN, IBM S/360 FORTRAN IV Level H Extended; and Structured FORTRAN. The SAP program utilizes two external files in its analysis procedure. A keyword file allows flexibility in classifying statements and in marking a statement as either executable or non-executable. A statistical weight file allows the user to assign weights to all output statistics, thus allowing the user flexibility in defining the figure of complexity. The SAP program is written in FORTRAN IV for batch execution and has been implemented on a DEC VAX series computer under VMS and on an IBM 370 series computer under MVS. The SAP program was developed in 1978 and last updated in 1985.
Analytical modeling of transport aircraft crash scenarios to obtain floor pulses
NASA Technical Reports Server (NTRS)
Wittlin, G.; Lackey, D.
1983-01-01
The KRAS program was used to analyze transport aircraft candidate crash scenarios. Aircraft floor pulses and seat/occupant responses are presented. Results show that: (1) longitudinal-only pulses can be represented by equivalent step inputs and/or static requirements; (2) the L1649 crash test floor longitudinal pulse for the aft direction (forward inertia) is less than a 9g static or an equivalent 5g pulse; aft inertia accelerations are extremely small (< 3g) for representative crash scenarios; (3) a viable procedure to relate crash scenario floor pulses to standard laboratory dynamic and static test data using state-of-the-art analysis and test procedures was demonstrated; and (4) floor pulse magnitudes are expected to be lower for wide-body aircraft than for smaller narrow-body aircraft.
Static analysis of the hull plate using the finite element method
NASA Astrophysics Data System (ADS)
Ion, A.
2015-11-01
This paper aims at presenting the static analysis for two levels of a container ship's construction as follows: the first level is at the girder / hull plate and the second level is conducted at the entire strength hull of the vessel. This article will describe the work for the static analysis of a hull plate. We shall use the software package ANSYS Mechanical 14.5. The program is run on a computer with four Intel Xeon X5260 CPU processors at 3.33 GHz, 32 GB memory installed. In terms of software, the shared memory parallel version of ANSYS refers to running ANSYS across multiple cores on a SMP system. The distributed memory parallel version of ANSYS (Distributed ANSYS) refers to running ANSYS across multiple processors on SMP systems or DMP systems.
a Virtual Trip to the Schwarzschild-De Sitter Black Hole
NASA Astrophysics Data System (ADS)
Bakala, Pavel; Hledík, Stanislav; Stuchlík, Zdenĕk; Truparová, Kamila; Čermák, Petr
2008-09-01
We developed a realistic, fully general relativistic computer code for simulation of optical projection in a strong, spherically symmetric gravitational field. The standard theoretical analysis of optical projection for an observer in the vicinity of a Schwarzschild black hole is extended to black hole spacetimes with a repulsive cosmological constant, i.e., Schwarzschild-de Sitter (SdS) spacetimes. The influence of the cosmological constant is investigated for static observers and observers radially free-falling from the static radius. The simulation includes effects of gravitational lensing, multiple images, Doppler and gravitational frequency shift, as well as the amplification of intensity. The code generates images of the static observer's sky and movie simulations for radially free-falling observers. Techniques of parallel programming are applied to achieve high performance and fast runs of the simulation code.
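For reference, the gravitational frequency shift between two static observers in the SdS geometry follows directly from the metric's lapse function (a standard textbook relation, not taken from the paper's code):

```latex
% Schwarzschild-de Sitter lapse function (units G = c = 1)
f(r) = 1 - \frac{2M}{r} - \frac{\Lambda}{3}\, r^{2},
\qquad
\frac{\nu_{\mathrm{obs}}}{\nu_{\mathrm{em}}}
  = \sqrt{\frac{f(r_{\mathrm{em}})}{f(r_{\mathrm{obs}})}}
```

The Λ term is what distinguishes the SdS projection from the pure Schwarzschild case and introduces the static radius beyond which static observers cannot exist.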
Technical Evaluation Motor no. 5 (TEM-5)
NASA Technical Reports Server (NTRS)
Cook, M.
1990-01-01
Technical Evaluation Motor No. 5 (TEM-5) was static test fired at the Thiokol Corporation Static Test Bay T-97. TEM-5 was a full scale, full duration static test fire of a high performance motor (HPM) configuration solid rocket motor (SRM). The primary purpose of TEM static tests is to recover SRM case and nozzle hardware for use in the redesigned solid rocket motor (RSRM) flight program. Inspection and instrumentation data indicate that the TEM-5 static test firing was successful. The ambient temperature during the test was 41 F and the propellant mean bulk temperature (PMBT) was 72 F. Ballistics performance values were within the specified requirements. The overall performance of the TEM-5 components and test equipment was nominal. Disassembly inspection revealed that joint putty was in contact with the inner groove of the inner primary seal of the ignitor adapter-to-forward dome (inner) joint gasket; this condition had not occurred on any previous static test motor or flight RSRM. While no qualification issues were addressed on TEM-5, two significant component changes were evaluated. Those changes were a new vented assembly process for the case-to-nozzle joint and the installation of two redesigned field joint protection systems. Performance of the vented case-to-nozzle joint assembly was successful, and the assembly/performance differences between the two field joint protection system (FJPS) configurations were compared.
Jorrakate, Chaiyong; Kongsuk, Jutaluk; Pongduang, Chiraprapa; Sadsee, Boontiwa; Chanthorn, Phatchari
2015-01-01
[Purpose] The aim of the present study was to investigate the effect of yoga training on static and dynamic standing balance in obese individuals with poor standing balance. [Subjects and Methods] Sixteen obese volunteers were randomly assigned into yoga and control groups. The yoga training program was performed for 45 minutes per day, 3 times per week, for 4 weeks. Static and dynamic balance were assessed in volunteers with one leg standing and functional reach tests. Outcome measures were tested before training and after a single week of training. Two-way repeated measure analysis of variance with Tukey’s honestly significant difference post hoc statistics was used to analyze the data. [Results] Obese individuals showed significantly increased static standing balance in the yoga training group, but there was no significant improvement of static or dynamic standing balance in the control group after 4 weeks. In the yoga group, significant increases in static standing balance was found after the 2nd, 3rd, and 4th weeks. Compared with the control group, static standing balance in the yoga group was significantly different after the 2nd week, and dynamic standing balance was significantly different after the 4th week. [Conclusion] Yoga training would be beneficial for improving standing balance in obese individuals with poor standing balance. PMID:25642038
FASOR - A second generation shell of revolution code
NASA Technical Reports Server (NTRS)
Cohen, G. A.
1978-01-01
An integrated computer program entitled Field Analysis of Shells of Revolution (FASOR) currently under development for NASA is described. When completed, this code will treat prebuckling, buckling, initial postbuckling and vibrations under axisymmetric static loads as well as linear response and bifurcation under asymmetric static loads. Although these modes of response are treated by existing programs, FASOR extends the class of problems treated to include general anisotropy and transverse shear deformations of stiffened laminated shells. At the same time, a primary goal is to develop a program which is free of the usual problems of modeling, numerical convergence and ill-conditioning, laborious problem setup, limitations on problem size and interpretation of output. The field method is briefly described, the shell differential equations are cast in a suitable form for solution by this method and essential aspects of the input format are presented. Numerical results are given for both unstiffened and stiffened anisotropic cylindrical shells and compared with previously published analytical solutions.
NASA Technical Reports Server (NTRS)
Forbes, R. E.; Smith, M. R.; Farrell, R. R.
1972-01-01
An experimental program was conducted during the static firing of the S-1C stage 13, 14, and 15 rocket engines and the S-2 stage 13, 14, and 15 rocket engines. The data compiled during the experimental program consisted of photographic recordings of the time-dependent growth and diffusion of the exhaust clouds, the collection of meteorological data in the ambient atmosphere, and the acquisition of data on the physical structure of the exhaust clouds which were obtained by flying instrumented aircraft through the clouds. A new technique was developed to verify the previous measurements of evaporation and entrainment of blast deflector cooling water into the cloud. The results of the experimental program indicate that at the lower altitudes the rocket exhaust cloud or plume closely resembles a free-jet type of flow. At the upper altitudes, where the cloud is approaching an equilibrium condition, structure is very similar to a natural cumulus cloud.
Gamell, Marc; Teranishi, Keita; Kolla, Hemanth; ...
2017-10-26
In order to achieve exascale systems, application resilience needs to be addressed. Some programming models, such as task-DAG (directed acyclic graph) architectures, currently embed resilience features, whereas traditional SPMD (single program, multiple data) and message-passing models do not. Since a large part of the community's code base follows the latter models, it is still necessary to take advantage of application characteristics to minimize the overheads of fault tolerance. To that end, this paper explores how recovering from hard process/node failures in a local manner is a natural approach for certain applications to obtain resilience at lower cost in faulty environments. In particular, this paper targets enabling online, semitransparent local recovery for stencil computations on current leadership-class systems, and presents programming support and scalable runtime mechanisms. Also described and demonstrated in this paper is the effect of failure masking, which allows the effective reduction of the impact on total time to solution due to multiple failures. Furthermore, we discuss, implement, and evaluate ghost region expansion and cell-to-rank remapping to increase the probability of failure masking. To conclude, this paper shows the integration of all the aforementioned mechanisms with the S3D combustion simulation through an experimental demonstration (using the Titan system) of the ability to tolerate high failure rates (i.e., node failures every five seconds) with low overhead while sustaining performance at large scales. This demonstration also displays the increase in failure masking probability resulting from the combination of ghost region expansion and cell-to-rank remapping.
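The ghost-region idea underlying the expansion technique can be shown in one dimension: with a ghost width of g cells on each side, a rank can take g stencil steps on its local block before it needs a fresh halo exchange, since stale boundary values propagate inward only one cell per step. This is a toy 1-D model of the idea, not the paper's code.

```python
def step(cells):
    """One 3-point averaging stencil step; endpoints are simply copied."""
    return ([cells[0]]
            + [(cells[i - 1] + cells[i] + cells[i + 1]) / 3.0
               for i in range(1, len(cells) - 1)]
            + [cells[-1]])

def advance_with_ghosts(local, left, right, ghost_width, steps):
    """Advance a local block padded with ghost cells from its neighbors.
    With ghost_width g and steps <= g, the returned interior block is
    exact despite taking no halo exchanges in between."""
    assert steps <= ghost_width
    padded = left[-ghost_width:] + local + right[:ghost_width]
    for _ in range(steps):
        padded = step(padded)
    return padded[ghost_width:-ghost_width]  # interior block, still exact

# verify: the locally advanced block matches the globally advanced array
global_arr = [float(i) for i in range(1, 13)]
reference = step(step(global_arr))[4:8]
local_out = advance_with_ghosts([5.0, 6.0, 7.0, 8.0],
                                [1.0, 2.0, 3.0, 4.0],
                                [9.0, 10.0, 11.0, 12.0],
                                ghost_width=2, steps=2)
```

Expanding the ghost region buys extra exchange-free steps, which is what gives a recovering rank time to rebuild its state while its neighbors keep computing.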
Myria: Scalable Analytics as a Service
NASA Astrophysics Data System (ADS)
Howe, B.; Halperin, D.; Whitaker, A.
2014-12-01
At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike in databases, iteration is a first-class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL that supports both imperative and declarative styles and a particular execution engine called MyriaX that uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from the irrelevant technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine cluster sizes. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.
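Myria's "relational algebra plus iteration" model can be illustrated with a toy fixpoint computation. The sketch below is a minimal stand-in, assuming nothing about MyriaL's actual syntax or MyriaX's execution engine: it derives graph reachability by iterating a join-project-union until no new tuples appear, the same pattern a data-parallel engine would distribute across workers.

```python
# Hedged sketch: semi-naive fixpoint iteration over a binary relation,
# illustrating the "relational algebra + iteration" model. Names and
# structure are illustrative, not Myria's API.

def transitive_closure(edges):
    """Compute reachability by iterating join-project-union to a fixpoint."""
    total = set(edges)          # all derived facts so far
    delta = set(edges)          # facts new in the last round
    while delta:
        # join delta with the base relation: (a,b) joined with (b,c) -> (a,c)
        new = {(a, c) for (a, b) in delta for (b2, c) in edges if b == b2}
        delta = new - total     # keep only genuinely new tuples
        total |= delta
    return total

paths = transitive_closure({(1, 2), (2, 3), (3, 4)})
```

Because only the newly derived tuples (`delta`) are re-joined each round, the iteration does no redundant work, which is the property that makes such loops practical at scale.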
Network selection, Information filtering and Scalable computation
NASA Astrophysics Data System (ADS)
Ye, Changqing
This dissertation explores two application scenarios of sparsity pursuit methods on large-scale data sets. The first scenario is classification and regression in analyzing high-dimensional structured data, where predictors correspond to nodes of a given directed graph. This arises in, for instance, identification of disease genes for Parkinson's disease from a network of candidate genes. In such a situation, the directed graph describes dependencies among the genes, where the directions of edges represent certain causal effects. Key to high-dimensional structured classification and regression is how to utilize dependencies among predictors as specified by the directions of the graph. In this dissertation, we develop a novel method that fully takes into account such dependencies formulated through certain nonlinear constraints. We apply the proposed method to two applications: feature selection in large-margin binary classification and in linear regression. We implement the proposed method through difference convex programming for the cost function and constraints. Finally, theoretical and numerical analyses suggest that the proposed method achieves the desired objectives. An application to disease gene identification is presented. The second application scenario is personalized information filtering, which extracts the information specifically relevant to a user, predicting his/her preference over a large number of items based on the opinions of users who think alike or on the items' content. This problem is cast into the framework of regression and classification, where we introduce novel partial latent models to integrate additional user-specific and content-specific predictors for higher predictive accuracy. In particular, we factorize a user-over-item preference matrix into a product of two matrices, each representing a user's preference and an item preference by users.
Then we propose a likelihood method to seek a sparsest latent factorization, from a class of over-complete factorizations, possibly with a high percentage of missing values. This promotes additional sparsity beyond rank reduction. Computationally, we design methods based on a "decomposition and combination" strategy, to break large-scale optimization into many small subproblems to solve in a recursive and parallel manner. On this basis, we implement the proposed methods through multi-platform shared-memory parallel programming, and through Mahout, a library for scalable machine learning and data mining, for MapReduce computation. For example, our methods are scalable to a dataset consisting of three billion observations on a single machine with sufficient memory, with good run times. Both theoretical and numerical investigations show that the proposed methods exhibit significant improvement in accuracy over state-of-the-art scalable methods.
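The "decomposition and combination" strategy operates on far larger problems than can be shown here; the toy sketch below only illustrates the underlying factorization task. It fits a user-by-item preference matrix with missing entries by stochastic gradient descent over the observed entries. The function name, rank, and learning rate are illustrative assumptions, not the dissertation's likelihood method.

```python
# Hedged toy: factor a user-by-item matrix with missing entries into two
# low-rank factors by SGD. A stand-in illustration, not the proposed method.
import random

def factorize(ratings, n_users, n_items, rank=2, steps=2000, lr=0.05, reg=0.01):
    random.seed(0)
    U = [[random.uniform(-0.1, 0.1) for _ in range(rank)] for _ in range(n_users)]
    V = [[random.uniform(-0.1, 0.1) for _ in range(rank)] for _ in range(n_items)]
    for _ in range(steps):
        for (u, i), r in ratings.items():      # only observed entries
            pred = sum(U[u][k] * V[i][k] for k in range(rank))
            err = r - pred
            for k in range(rank):
                U[u][k] += lr * (err * V[i][k] - reg * U[u][k])
                V[i][k] += lr * (err * U[u][k] - reg * V[i][k])
    return U, V

ratings = {(0, 0): 5.0, (0, 1): 3.0, (1, 0): 4.0, (1, 2): 1.0, (2, 1): 2.0}
U, V = factorize(ratings, n_users=3, n_items=3)
pred = sum(U[0][k] * V[0][k] for k in range(2))   # reconstruct observed (0, 0)
```

In the decomposition-and-combination setting, each such subproblem would cover only a block of the matrix, with the blocks solved in parallel and their factors combined.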
Scalability problems of simple genetic algorithms.
Thierens, D
1999-01-01
Scalable evolutionary computation has become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in getting a clear insight into the scalability problems of simple genetic algorithms. In particular, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we look at how straightforward extensions of the simple genetic algorithm (namely elitism, niching, and restricted mating) do not significantly improve its scalability.
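A minimal simple GA of the kind analyzed above can be sketched as follows (tournament selection, uniform crossover, bit-flip mutation, on the easy OneMax problem); all parameter values are arbitrary choices for illustration, not taken from the paper.

```python
# Hedged illustration of a "simple GA" baseline. On OneMax the building
# blocks are single bits, so mixing is trivial; on deceptive problems with
# tight multi-bit building blocks, this same algorithm scales poorly.
import random

def simple_ga(n_bits=20, pop_size=40, generations=60, p_mut=0.02, seed=1):
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)                      # OneMax: count ones
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            child = [rng.choice(pair) for pair in zip(p1, p2)]   # uniform crossover
            child = [b ^ (rng.random() < p_mut) for b in child]  # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(sum(ind) for ind in pop)

best = simple_ga()
```

Uniform crossover mixes bits freely, which is exactly the disruptive behaviour the paper's mixing analysis quantifies when building blocks span several loci.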
Low-complexity transcoding algorithm from H.264/AVC to SVC using data mining
NASA Astrophysics Data System (ADS)
Garrido-Cantos, Rosario; De Cock, Jan; Martínez, Jose Luis; Van Leuven, Sebastian; Cuenca, Pedro; Garrido, Antonio
2013-12-01
Nowadays, networks and terminals with diverse characteristics of bandwidth and capabilities coexist. To ensure a good quality of experience, this diverse environment demands adaptability of the video stream. In general, video contents are compressed to save storage capacity and to reduce the bandwidth required for their transmission. If these video streams had been compressed using scalable video coding schemes, they would be able to adapt to those heterogeneous networks and a wide range of terminals. Since the majority of multimedia contents are compressed using H.264/AVC, they cannot benefit from that scalability. This paper proposes a low-complexity algorithm to convert an H.264/AVC bitstream without scalability to scalable bitstreams with temporal scalability in baseline and main profiles by accelerating the mode decision task of the scalable video coding encoding stage using machine learning tools. The results show that when our technique is applied, the complexity is reduced by 87% while maintaining coding efficiency.
Random harmonic analysis program, L221 (TEV156). Volume 1: Engineering and usage
NASA Technical Reports Server (NTRS)
Miller, R. D.; Graham, M. L.
1979-01-01
A digital computer program capable of calculating steady state solutions for linear second order differential equations due to sinusoidal forcing functions is described. The field of application of the program, the analysis of airplane response and loads due to continuous random air turbulence, is discussed. Optional capabilities including frequency dependent input matrices, feedback damping, gradual gust penetration, multiple excitation forcing functions, and a static elastic solution are described. Program usage and a description of the analysis used are presented.
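The core computation such a program performs, the steady-state (particular) solution of a damped second-order equation under sinusoidal forcing, can be sketched directly; the symbols and values below are illustrative and not taken from L221 itself.

```python
# Hedged sketch: steady-state amplitude and phase of
#   m*x'' + c*x' + k*x = F0*sin(w*t).
# Illustrative only; program L221 handles coupled systems with
# frequency-dependent matrices, not this scalar case.
import math

def steady_state(m, c, k, F0, w):
    """Return (amplitude, phase lag) of the particular solution."""
    denom = math.hypot(k - m * w * w, c * w)
    amplitude = F0 / denom
    phase = math.atan2(c * w, k - m * w * w)   # response lags the forcing
    return amplitude, phase

# The w -> 0 limit recovers the static elastic solution x = F0 / k.
amp_static, _ = steady_state(m=1.0, c=0.5, k=4.0, F0=2.0, w=1e-9)
amp_res, _ = steady_state(m=1.0, c=0.5, k=4.0, F0=2.0, w=2.0)  # at resonance
```

The static limit above corresponds to the program's optional "static elastic solution" capability, and sweeping `w` over a range of frequencies is the frequency-response analysis used for random-turbulence loads.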
NASA Astrophysics Data System (ADS)
Matsakis, Nicholas D.; Gross, Thomas R.
Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.
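The interval idea of constructing the program schedule explicitly can be emulated in miniature: tasks plus happens-before edges form a DAG that is checked for cycles (a would-be deadlock) before anything runs. This is a hedged toy in Python, not the Java/Scala intervals library's API; the barrier pattern mentioned above appears as two intervals both ordered before a third.

```python
# Hedged toy: a schedule is a DAG of tasks plus happens-before edges.
# A cycle means the schedule can never complete (deadlock), detectable
# statically before execution -- the property the intervals work exploits.

def run_schedule(tasks, happens_before):
    """tasks: {name: fn}; happens_before: set of (earlier, later) edges."""
    done, order = set(), []
    def visit(t, stack):
        if t in done:
            return
        if t in stack:
            raise RuntimeError("cycle in schedule: would deadlock")
        for p in [a for (a, b) in happens_before if b == t]:
            visit(p, stack | {t})     # run all predecessors first
        tasks[t]()
        done.add(t)
        order.append(t)
    for t in tasks:
        visit(t, frozenset())
    return order

log = []
tasks = {name: (lambda n=name: log.append(n)) for name in ("a1", "a2", "b")}
# barrier-like pattern: both phase-one intervals must precede b
order = run_schedule(tasks, {("a1", "b"), ("a2", "b")})
```

A real intervals runtime executes independent intervals in parallel; the sequential emulation here only demonstrates the ordering and deadlock-freedom guarantees.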
Parallel grid library for rapid and flexible simulation development
NASA Astrophysics Data System (ADS)
Honkonen, I.; von Alfthan, S.; Sandroos, A.; Janhunen, P.; Palmroth, M.
2013-04-01
We present an easy to use and flexible grid library for developing highly scalable parallel simulations. The distributed cartesian cell-refinable grid (dccrg) supports adaptive mesh refinement and allows an arbitrary C++ class to be used as cell data. The amount of data in grid cells can vary both in space and time allowing dccrg to be used in very different types of simulations, for example in fluid and particle codes. Dccrg transfers the data between neighboring cells on different processes transparently and asynchronously allowing one to overlap computation and communication. This enables excellent scalability at least up to 32 k cores in magnetohydrodynamic tests depending on the problem and hardware. In the version of dccrg presented here part of the mesh metadata is replicated between MPI processes reducing the scalability of adaptive mesh refinement (AMR) to between 200 and 600 processes. Dccrg is free software that anyone can use, study and modify and is available at https://gitorious.org/dccrg. Users are also kindly requested to cite this work when publishing results obtained with dccrg. Catalogue identifier: AEOM_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOM_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU Lesser General Public License version 3 No. of lines in distributed program, including test data, etc.: 54975 No. of bytes in distributed program, including test data, etc.: 974015 Distribution format: tar.gz Programming language: C++. Computer: PC, cluster, supercomputer. Operating system: POSIX. The code has been parallelized using MPI and tested with 1-32768 processes RAM: 10 MB-10 GB per process Classification: 4.12, 4.14, 6.5, 19.3, 19.10, 20. 
External routines: MPI-2 [1], boost [2], Zoltan [3], sfc++ [4] Nature of problem: Grid library supporting arbitrary data in grid cells, parallel adaptive mesh refinement, transparent remote neighbor data updates and load balancing. Solution method: The simulation grid is represented by an adjacency list (graph) with vertices stored into a hash table and edges into contiguous arrays. Message Passing Interface standard is used for parallelization. Cell data is given as a template parameter when instantiating the grid. Restrictions: Logically cartesian grid. Running time: Running time depends on the hardware, problem and the solution method. Small problems can be solved in under a minute and very large problems can take weeks. The examples and tests provided with the package take less than about one minute using default options. In the version of dccrg presented here the speed of adaptive mesh refinement is at most of the order of 10^6 total created cells per second. http://www.mpi-forum.org/. http://www.boost.org/. K. Devine, E. Boman, R. Heaphy, B. Hendrickson, C. Vaughan, Zoltan data management services for parallel dynamic applications, Comput. Sci. Eng. 4 (2002) 90-97. http://dx.doi.org/10.1109/5992.988653. https://gitorious.org/sfc++.
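The transparent neighbor-data update that dccrg performs between processes can be illustrated serially: each "process" owns a slab of cells, receives ghost copies of its neighbors' boundary cells, and then computes. A toy 1-D sketch of the pattern, not dccrg's C++/MPI interface:

```python
# Hedged sketch of the ghost-cell (remote neighbor data) update pattern.
# In dccrg the copies travel asynchronously over MPI; here the "processes"
# are just list slabs and the copies are plain reads.

def exchange_and_average(slabs):
    """slabs: list of per-'process' cell lists; returns 3-point-smoothed slabs."""
    new = []
    for rank, cells in enumerate(slabs):
        left = slabs[rank - 1][-1] if rank > 0 else cells[0]
        right = slabs[rank + 1][0] if rank < len(slabs) - 1 else cells[-1]
        padded = [left] + cells + [right]      # ghost copies of remote cells
        new.append([(padded[i - 1] + padded[i] + padded[i + 1]) / 3.0
                    for i in range(1, len(padded) - 1)])
    return new

slabs = [[0.0, 0.0], [9.0, 0.0], [0.0, 0.0]]   # 3 "processes", 2 cells each
smoothed = exchange_and_average(slabs)
```

Because each slab only needs its neighbors' boundary values, the exchange can overlap with computation on interior cells, which is how dccrg achieves its communication hiding.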
Many-core graph analytics using accelerated sparse linear algebra routines
NASA Astrophysics Data System (ADS)
Kozacik, Stephen; Paolini, Aaron L.; Fox, Paul; Kelmelis, Eric
2016-05-01
Graph analytics is a key component in identifying emerging trends and threats in many real-world applications. Large-scale graph analytics frameworks provide a convenient and highly scalable platform for developing algorithms to analyze large datasets. Although conceptually scalable, these techniques exhibit poor performance on modern computational hardware. Another model of graph computation has emerged that promises improved performance and scalability by using abstract linear algebra operations as the basis for graph analysis, as laid out by the GraphBLAS standard. By using sparse linear algebra as the basis, existing highly efficient algorithms can be adapted to perform computations on the graph. This approach, however, is often less intuitive to graph analytics experts, who are accustomed to vertex-centric APIs such as Giraph, GraphX, and TinkerPop. We are developing an implementation of the high-level operations supported by these APIs in terms of linear algebra operations. This implementation is backed by many-core implementations of the fundamental GraphBLAS operations required, and offers the advantages of both the intuitive programming model of a vertex-centric API and the performance of a sparse linear algebra implementation. This technology can reduce the number of nodes required, as well as the run time for a graph analysis problem, enabling customers to perform more complex analysis with less hardware at lower cost. All of this can be accomplished without requiring the customer to make any changes to their analytics code, thanks to the compatibility with existing graph APIs.
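The linear-algebra view standardized by GraphBLAS can be sketched with breadth-first search, where each step is a sparse matrix-vector product over a boolean semiring (OR over the edges leaving the current frontier). The plain-Python sketch below assumes an adjacency-set representation and is not the GraphBLAS API:

```python
# Hedged sketch: BFS levels via repeated "mat-vec" steps over a boolean
# semiring. In GraphBLAS the frontier is a sparse vector and the step is
# vxm with a mask; here both are emulated with Python sets.

def bfs_levels(adj, source):
    """adj[v] = set of neighbors of v; returns level per vertex (-1 if unreached)."""
    level = [-1] * len(adj)
    frontier = {source}            # the sparse 'vector' in the product
    depth = 0
    while frontier:
        for v in frontier:
            level[v] = depth
        # one semiring step: OR over edges out of the frontier, masked by
        # the already-visited vertices
        frontier = {w for v in frontier for w in adj[v] if level[w] == -1}
        depth += 1
    return level

adj = {0: {1, 2}, 1: {3}, 2: {3}, 3: {4}, 4: set()}
levels = bfs_levels(adj, 0)
```

A vertex-centric API would express the same traversal as per-vertex message handlers; the point of the GraphBLAS formulation is that the whole frontier expansion becomes one bulk operation a sparse-linear-algebra kernel can optimize.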
ERIC Educational Resources Information Center
Kesselman, Jonathan Rhys
Static and dynamic incentive effects of the following fiscal transfer forms are examined: income subsidy (negative income tax), wage subsidy, categorical income subsidy (work requirement), and overtime wage subsidy. Budgetary costs, aggregate labor-market impacts, and welfare effects are analyzed. A program for categorically combining wage and…
ERIC Educational Resources Information Center
Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Alberti, Gloria; Perilli, Viviana; Zimbaro, Carmen; Boccasini, Adele; Mazzola, Carlo; Russo, Roberto
2018-01-01
This study assessed a technology-aided program (monitoring responding, and ensuring preferred stimulation and encouragements) for promoting physical activity with 11 participants with severe/profound intellectual and multiple disabilities. Each participant was provided with an exercise device (e.g. a static bicycle and a stepper) and exposed to…
Deformation and failure mechanisms of graphite/epoxy composites under static loading
NASA Technical Reports Server (NTRS)
Clements, L. L.
1981-01-01
The mechanisms of deformation and failure of graphite/epoxy composites under static loading were clarified. The influence of moisture and temperature upon these mechanisms was also investigated. Because the longitudinal tensile properties are the most critical to the performance of the composite, these properties were investigated in detail. Both ultimate and elastic mechanical properties were investigated, but the study of mechanisms emphasized those leading to failure of the composite. The graphite/epoxy composite selected for study was the system being used in several NASA-sponsored flight test programs.
Noise-Source Separation Using Internal and Far-Field Sensors for a Full-Scale Turbofan Engine
NASA Technical Reports Server (NTRS)
Hultgren, Lennart S.; Miles, Jeffrey H.
2009-01-01
Noise-source separation techniques for the extraction of the sub-dominant combustion noise from the total noise signatures obtained in static-engine tests are described. Three methods are applied to data from a static, full-scale engine test. Both 1/3-octave and narrow-band results are discussed. The results are used to assess the combustion-noise prediction capability of the Aircraft Noise Prediction Program (ANOPP). A new additional phase-angle-based discriminator for the three-signal method is also introduced.
View looking north west showing the boom, top of the ...
View looking north west showing the boom, top of the center mast and boom angle reeving of the 175-ton derrick. Note in the background of the view, just above the center mast is the F-1 Static-Test Stand used for test firing the Saturn V engines and subsequent programs' engine testing. Also in the background center is the Redstone Static Test Stand (center right) and its cold calibration tower (center left). - Marshall Space Flight Center, Saturn V Dynamic Test Facility, East Test Area, Huntsville, Madison County, AL
Jeon, Mi Yang; Jeong, HyeonCheol; Petrofsky, Jerrold; Lee, Haneul; Yim, JongEun
2014-11-14
Falling can lead to severe health issues in the elderly and importantly contributes to morbidity, death, immobility, hospitalization, and early entry to long-term care facilities. The aim of this study was to devise a recurrent fall prevention program for elderly women in rural areas. This study adopted an assessor-blinded, randomized, controlled trial methodology. Subjects were enrolled in a 12-week recurrent fall prevention program, which comprised strength training, balance training, and patient education. Muscle strength and endurance of the ankles and the lower extremities, static balance, dynamic balance, depression, compliance with preventive behavior related to falls, fear of falling, and fall self-efficacy at baseline and immediately after the program were assessed. Sixty-two subjects (mean age 69.2±4.3 years old) completed the program--31 subjects in the experimental group and 31 subjects in the control group. When the results of the program in the 2 groups were compared, significant differences were found in ankle heel rise test, lower extremity heel rise test, dynamic balance, depression, compliance with fall preventative behavior, fear of falling, and fall self-efficacy (p<0.05), but no significant difference was found in static balance. This study shows that the fall prevention program described effectively improves muscle strength and endurance, balance, and psychological aspects in elderly women with a fall history.
NASA Technical Reports Server (NTRS)
Svalbonas, V.; Ogilvie, P.
1973-01-01
The engineering programming information for the digital computer program for analyzing shell structures is presented. The program is designed to permit small changes such as altering the geometry or a table size to fit the specific requirements. Each major subroutine is discussed and the following subjects are included: (1) subroutine description, (2) pertinent engineering symbols and the FORTRAN coded counterparts, (3) subroutine flow chart, and (4) subroutine FORTRAN listing.
A wireless sensor network based personnel positioning scheme in coal mines with blind areas.
Liu, Zhigao; Li, Chunwen; Wu, Danchen; Dai, Wenhan; Geng, Shaobo; Ding, Qingqing
2010-01-01
This paper proposes a novel personnel positioning scheme for a tunnel network with blind areas, which, compared with most existing schemes, offers both low cost and high precision. Based on the data models of tunnel networks, measurement networks and mobile miners, the global positioning method is divided into four steps: (1) calculate the real-time personnel location in local areas using a location engine, and send it to the upper computer through the gateway; (2) correct any localization errors resulting from the underground tunnel environmental interference; (3) determine the global three-dimensional position by coordinate transformation; (4) estimate the personnel locations in the blind areas. A prototype system constructed to verify the positioning performance shows that the proposed positioning system has good reliability, scalability, and positioning performance. In particular, the static localization error of the positioning system is less than 2.4 m in the underground tunnel environment and the moving estimation error is below 4.5 m in the corridor environment. The system was operated continuously over three months without any failures.
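Step (3), the coordinate transformation from a tunnel-segment frame to global coordinates, is the most mechanical of the four steps and can be sketched directly; the rotation-plus-translation form, the heading angle, and the sample values below are illustrative assumptions, not the paper's exact parameterization.

```python
# Hedged sketch of a local-to-global coordinate transformation: a point
# measured in a tunnel-segment frame is rotated by the segment's heading
# about the vertical axis and translated by the segment origin.
import math

def local_to_global(p_local, origin, heading_deg):
    """Rotate (x, y) by heading, keep depth z, then translate by origin."""
    x, y, z = p_local
    th = math.radians(heading_deg)
    gx = origin[0] + x * math.cos(th) - y * math.sin(th)
    gy = origin[1] + x * math.sin(th) + y * math.cos(th)
    gz = origin[2] + z
    return gx, gy, gz

# a miner 10 m along a segment heading 90 degrees from origin (100, 50, -120)
g = local_to_global((10.0, 0.0, 0.0), origin=(100.0, 50.0, -120.0), heading_deg=90.0)
```

Chaining such transforms segment by segment is what turns per-segment location-engine output into the single global three-dimensional position the scheme reports.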
Li, Congcong; Zhang, Xi; Wang, Haiping; Li, Dongfeng
2018-01-01
Vehicular sensor networks have been widely applied in intelligent traffic systems in recent years. Because of the specificity of vehicular sensor networks, they require an enhanced, secure and efficient authentication scheme. Existing authentication protocols are vulnerable to several problems, such as a high computational overhead with certificate distribution and revocation, strong reliance on tamper-proof devices, limited scalability when building many secure channels, and an inability to detect hardware tampering attacks. In this paper, an improved authentication scheme using certificateless public key cryptography is proposed to address these problems. A security analysis of our scheme shows that our protocol provides an enhanced secure anonymous authentication, which is resilient against major security threats. Furthermore, the proposed scheme reduces the incidence of node compromise and replication attacks. The scheme also provides a malicious-node detection and warning mechanism, which can quickly identify compromised static nodes and immediately alert the administrative department. Performance evaluations show that the scheme obtains better trade-offs between security and efficiency than the well-known available schemes. PMID:29324719
Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes
NASA Astrophysics Data System (ADS)
Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; Stuehn, Torsten
2017-11-01
Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which combines a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical modeling and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also show the new approach's capabilities by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work. These two systems comprise an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.
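The a priori rearrangement of subdomain walls can be illustrated in one dimension: instead of cutting the box into equal lengths, walls are placed so each subdomain carries roughly equal estimated force-computation cost. A toy sketch under that assumption, not the paper's algorithm:

```python
# Hedged sketch: 1-D cost-aware wall placement. Each cell has an estimated
# per-cell force-computation cost (e.g. higher in the high-resolution
# region of an adaptive-resolution simulation); walls are placed at the
# cost quantiles rather than at equal distances.

def place_walls(cell_costs, n_domains):
    """Return start indices of each subdomain, balancing summed cost."""
    total = sum(cell_costs)
    target = total / n_domains
    walls, acc = [0], 0.0
    for i, c in enumerate(cell_costs):
        acc += c
        if acc >= target * len(walls) and len(walls) < n_domains:
            walls.append(i + 1)
    return walls

# costly high-resolution region in the middle of the box
costs = [1, 1, 1, 8, 8, 8, 1, 1, 1]
walls = place_walls(costs, 3)
```

An equal-length split would give the middle domain eight times the work of its neighbors; the cost-quantile placement narrows the expensive region's domain, which is the load-balancing intuition behind the heterogeneity-sensitive decomposition.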
Global Static Indexing for Real-Time Exploration of Very Large Regular Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pascucci, V; Frank, R
2001-07-23
In this paper we introduce a new indexing scheme for progressive traversal and visualization of large regular grids. We demonstrate the potential of our approach by providing a tool that displays at interactive rates planar slices of scalar field data with very modest computing resources. We obtain unprecedented results both in terms of absolute performance and, more importantly, in terms of scalability. On a laptop computer we provide real-time interaction with a 2048^3 grid (8 giga-nodes) using only 20 MB of memory. On an SGI Onyx we slice interactively an 8192^3 grid (1/2 tera-nodes) using only 60 MB of memory. The scheme relies simply on the determination of an appropriate reordering of the rectilinear grid data and a progressive construction of the output slice. The reordering minimizes the amount of I/O performed during the out-of-core computation. The progressive and asynchronous computation of the output provides flexible quality/speed tradeoffs and a time-critical and interruptible user interface.
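One concrete family of reorderings with this I/O-friendly property is the Morton (Z-order) curve, which interleaves coordinate bits so spatially adjacent nodes tend to be adjacent on disk. The sketch below shows only the bit-interleaving core, as a hedged illustration; the paper's actual hierarchical indexing scheme differs.

```python
# Hedged sketch: 3-D Morton (Z-order) index by bit interleaving. Nodes
# that are close in (x, y, z) map to close linear indices, so reading a
# planar slice touches few disk blocks during out-of-core traversal.

def morton3(x, y, z, bits=10):
    """Interleave the low `bits` bits of (x, y, z) into one index."""
    idx = 0
    for b in range(bits):
        idx |= ((x >> b) & 1) << (3 * b)
        idx |= ((y >> b) & 1) << (3 * b + 1)
        idx |= ((z >> b) & 1) << (3 * b + 2)
    return idx

# the eight corners of a 2x2x2 block land in one contiguous run of indices
block = sorted(morton3(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1))
```

Storing the grid in this order, every aligned 2x2x2 (and, recursively, 2^k-cubed) block occupies one contiguous range, which is what keeps the memory footprint of slicing so small.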
Park, Y; Subramanian, K; Verfaillie, C M; Hu, W S
2010-10-01
Many potential applications of stem cells require large quantities of cells, especially those involving large organs such as the liver. For such applications, a scalable reactor system is desirable to ensure a reliable supply of sufficient quantities of differentiation-competent or differentiated cells. We employed a microcarrier culture system for the expansion of undifferentiated rat multipotent adult progenitor cells (rMAPC) as well as for directed differentiation of these cells to hepatocyte-like cells. During the 4-day expansion culture, cell concentration increased by 85-fold while expression levels of pluripotency markers were maintained, as was the MAPC differentiation potential. Directed differentiation into hepatocyte-like cells on the microcarriers themselves gave results comparable to those observed with cells in static cultures. The cells expressed several mature hepatocyte-lineage genes and asialoglycoprotein receptor-1 (ASGPR-1) surface protein, and secreted albumin and urea. Microcarrier culture thus offers the potential of large-scale expansion and differentiation of stem cells in a more controlled bioreactor environment. Copyright © 2010 Elsevier B.V. All rights reserved.
Design-based modeling of magnetically actuated soft diaphragm materials
NASA Astrophysics Data System (ADS)
Jayaneththi, V. R.; Aw, K. C.; McDaid, A. J.
2018-04-01
Magnetic polymer composites (MPC) have shown promise for emerging biomedical applications such as lab-on-a-chip and implantable drug delivery. These soft material actuators are capable of fast response, large deformation and wireless actuation. Existing MPC modeling approaches are computationally expensive and unsuitable for rapid design prototyping and real-time control applications. This paper proposes a macro-scale 1-DOF model capable of predicting force and displacement of an MPC diaphragm actuator. Model validation confirmed both blocked force and displacement can be accurately predicted in a variety of working conditions i.e. different magnetic field strengths, static/dynamic fields, and gap distances. The contribution of this work includes a comprehensive experimental investigation of a macro-scale diaphragm actuator; the derivation and validation of a new phenomenological model to describe MPC actuation; and insights into the proposed model’s design-based functionality i.e. scalability and generalizability in terms of magnetic filler concentration and diaphragm diameter. Due to the lumped element modeling approach, the proposed model can also be adapted to alternative actuator configurations, and thus presents a useful tool for design, control and simulation of novel MPC applications.
Multi-Resolution Playback of Network Trace Files
2015-06-01
a complete MySQL database, C++ developer tools and the libraries utilized in the development of the system (Boost and Libcrafter), and Wireshark...XE suite has a limit to the allowed size of each database. In order to be scalable, the project had to switch to the MySQL database suite. The...programs that access the database use the MySQL C++ connector, provided by Oracle, and the supplied methods and libraries. 4.4 Flow Generator Chapter 3
Embedding Fonts in MetaPost Output
2016-04-19
by John Hobby) based on Donald Knuth’s METAFONT [4] with high quality PostScript output. An outstanding feature of MetaPost is that typeset fonts in...output, the graphics are perfectly scalable to any arbitrary resolution. John Hobby, its author, writes: “[MetaPost] is really a programming language...for generating graphics, especially figures for TEX [5] and troff documents.” This quote by Hobby indicates that MetaPost figures are not only
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maunz, Peter Lukas Wilhelm
2016-01-26
The High Optical Access (HOA) trap was designed in collaboration with the Modular Universal Scalable Ion-trap Quantum Computer (MUSIQC) team, funded along with Sandia National Laboratories through IARPA's Multi Qubit Coherent Operations (MQCO) program. The design of version 1 of the HOA trap was completed in September 2012 and initial devices were completed and packaged in February 2013. The second version of the High Optical Access Trap (HOA-2) was completed in September 2014 and is available at IARPA's disposal.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel; ...
2017-03-08
Coupled-cluster methods provide highly accurate models of molecular structure through explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix–matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts tomore » extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the optimized shared memory implementation of Libtensor. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, and IBM Blue Gene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from being compute-bound DGEMM's to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load-imbalance, tasking and bulk synchronous models. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.« less
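The contraction-to-DGEMM lowering that such tensor libraries rely on can be sketched in a few lines. The sizes and index labels below are invented for illustration; the only assumption is that a four-index contraction over two shared indices reduces to a single matrix–matrix multiply after grouping indices:

```python
import numpy as np

# Hypothetical dimensions: virtual (a,b,c,d) and occupied (i,j) indices.
nv, no = 4, 3

rng = np.random.default_rng(0)
T = rng.standard_normal((nv, nv, no, no))   # amplitudes t[a,b,i,j]
V = rng.standard_normal((no, no, nv, nv))   # integrals  v[i,j,c,d]

# The contraction R[a,b,c,d] = sum_ij T[a,b,i,j] * V[i,j,c,d] is lowered
# to one DGEMM by flattening the uncontracted and contracted index groups.
R_gemm = (T.reshape(nv * nv, no * no) @ V.reshape(no * no, nv * nv)) \
             .reshape(nv, nv, nv, nv)

# Reference result via an explicit einsum over the same indices.
R_ref = np.einsum('abij,ijcd->abcd', T, V)
assert np.allclose(R_gemm, R_ref)
```

Symmetry-exploiting libraries such as Libtensor store and contract only unique tensor blocks, but each block contraction ultimately reduces to a multiply of this shape.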
Developing a scalable modeling architecture for studying survivability technologies
NASA Astrophysics Data System (ADS)
Mohammad, Syed; Bounker, Paul; Mason, James; Brister, Jason; Shady, Dan; Tucker, David
2006-05-01
To facilitate interoperability of models in a scalable environment, and provide a relevant virtual environment in which Survivability technologies can be evaluated, the US Army Research Development and Engineering Command (RDECOM) Modeling Architecture for Technology Research and Experimentation (MATREX) Science and Technology Objective (STO) program has initiated the Survivability Thread, which will seek to address some of the many technical and programmatic challenges associated with the effort. In coordination with different Thread customers, such as the Survivability branches of various Army labs, a collaborative group has been formed to define the requirements for the simulation environment that would in turn provide them a value-added tool for assessing models and gauging system-level performance relevant to Future Combat Systems (FCS) and the Survivability requirements of other burgeoning programs. An initial set of customer requirements has been generated in coordination with the RDECOM Survivability IPT lead, through the Survivability Technology Area at the RDECOM Tank-automotive Research Development and Engineering Center (TARDEC, Warren, MI). The results of this project are aimed at a culminating experiment and demonstration scheduled for September 2006, which will include a multitude of components from within RDECOM and provide the framework for future experiments to support Survivability research. This paper details the components with which the MATREX Survivability Thread was created and executed, and provides insight into the capabilities currently demanded by the Survivability community within RDECOM.
Simms, Andrew M; Toofanny, Rudesh D; Kehl, Catherine; Benson, Noah C; Daggett, Valerie
2008-06-01
Dynameomics is a project to investigate and catalog the native-state dynamics and thermal unfolding pathways of representatives of all protein folds using solvated molecular dynamics simulations, as described in the preceding paper. Here we introduce the design of the molecular dynamics data warehouse, a scalable, reliable repository that houses simulation data that vastly simplifies management and access. In the succeeding paper, we describe the development of a complementary multidimensional database. A single protein unfolding or native-state simulation can take weeks to months to complete, and produces gigabytes of coordinate and analysis data. Mining information from over 3000 completed simulations is complicated and time-consuming. Even the simplest queries involve writing intricate programs that must be built from low-level file system access primitives and include significant logic to correctly locate and parse data of interest. As a result, programs to answer questions that require data from hundreds of simulations are very difficult to write. Thus, organization and access to simulation data have been major obstacles to the discovery of new knowledge in the Dynameomics project. This repository is used internally and is the foundation of the Dynameomics portal site http://www.dynameomics.org. By organizing simulation data into a scalable, manageable and accessible form, we can begin to address substantial questions that move us closer to solving biomedical and bioengineering problems.
Teaching citizen science skills online: Implications for invasive species training programs
Newman, G.; Crall, A.; Laituri, M.; Graham, J.; Stohlgren, T.; Moore, J.C.; Kodrich, K.; Holfelder, K.A.
2010-01-01
Citizen science programs are emerging as an efficient way to increase data collection and help monitor invasive species. Effective invasive species monitoring requires rigid data quality assurances if expensive control efforts are to be guided by volunteer data. To achieve data quality, effective online training is needed to improve field skills and reach large numbers of remote sentinel volunteers critical to early detection and rapid response. The authors evaluated the effectiveness of online static and multimedia tutorials to teach citizen science volunteers (n = 54) how to identify invasive plants; establish monitoring plots; measure percent cover; and use Global Positioning System (GPS) units. Participants trained using static and multimedia tutorials provided fewer (p < .001) correct species identifications (63% and 67%) than did professionals (83%) across all species, but they did not differ (p = .125) from each other. However, their ability to identify conspicuous species was comparable to that of professionals. The variability in percent plant cover estimates of static (±10%) and multimedia (±13%) participants did not differ (p = .86 and .08, respectively) from that of professionals (±9%). Trained volunteers struggled with plot setup and GPS skills. Overall, the online approach used did not influence conferred field skills and abilities. Traditional or multimedia online training augmented with more rigorous, repeated, and hands-on, in-person training in specialized skills required for more difficult tasks will likely improve volunteer abilities, data quality, and overall program effectiveness. © Taylor & Francis Group, LLC.
IN2 Profile: Go Electric Provides Grid Stabilizing Energy Service Solutions to Utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pless, Shanti
Through the Wells Fargo Innovation Incubator (IN²) program, Go Electric will validate their Link DR technology, an advanced uninterruptible power supply that provides secure power, lowers facility energy costs, integrates renewables, and generates income from utility demand response programs. The IN² program launched in October 2014 and is part of Wells Fargo's 2020 Environmental Commitment to provide $100 million to environmentally focused nonprofits and universities. The goal is to create an ecosystem that fosters and accelerates the commercialization of promising commercial buildings technologies that can provide scalable solutions to reduce the energy impact of buildings. According to the Department of Energy, nearly 40 percent of energy consumption in the U.S. today comes from buildings, at an estimated cost of $413 billion.
Scalable Integrated Region-Based Image Retrieval Using IRM and Statistical Clustering.
ERIC Educational Resources Information Center
Wang, James Z.; Du, Yanping
Statistical clustering is critical in designing scalable image retrieval systems. This paper presents a scalable algorithm for indexing and retrieving images based on region segmentation. The method uses statistical clustering on region features and IRM (Integrated Region Matching), a measure developed to evaluate overall similarity between images…
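The region-feature clustering step described above can be illustrated with a generic Lloyd's k-means sketch (the paper's actual IRM pipeline differs; the feature vectors and deterministic initialization below are made up for the example):

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain Lloyd's algorithm: cluster n region feature vectors X (n x d)."""
    # Deterministic initialization: k points spread across the dataset.
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each region to its nearest cluster center.
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster empties.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated blobs of "region features" should split cleanly.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 4)), rng.normal(5, 0.1, (20, 4))])
labels, _ = kmeans(X, 2)
assert len(set(labels[:20])) == 1 and len(set(labels[20:])) == 1
```

In a retrieval system, each cluster's centroid then serves as an index key so a query is matched against cluster representatives instead of every image.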
Simulations to study the static polarization limit for RHIC lattice
NASA Astrophysics Data System (ADS)
Duan, Zhe; Qin, Qing
2016-01-01
A study of spin dynamics based on simulations with the Polymorphic Tracking Code (PTC) is reported, exploring the dependence of the static polarization limit on various beam parameters and lattice settings for a practical RHIC lattice. It is shown that the behavior of the static polarization limit is dominantly affected by the vertical motion, while the effect of beam-beam interaction is small. In addition, the “nonresonant beam polarization” observed and studied in the lattice-independent model is also observed in this lattice-dependent model. This simulation study therefore gives insight into polarization evolution at fixed beam energies that is not available from simple spin tracking. Supported by the U.S. Department of Energy (DE-AC02-98CH10886), the Hundred-Talent Program (Chinese Academy of Sciences), and the National Natural Science Foundation of China (11105164).
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Bennett, Robert M.
1990-01-01
The CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code, developed at the NASA Langley Research Center, is applied to the Active Flexible Wing (AFW) wind tunnel model for prediction of the model's transonic aeroelastic behavior. Static aeroelastic solutions using CAP-TSD are computed. Dynamic (flutter) analyses are then performed as perturbations about the static aeroelastic deformations of the AFW. The accuracy of the static aeroelastic procedure is investigated by comparing analytical results to those from previous AFW wind tunnel experiments. Dynamic results are presented in the form of root loci at different Mach numbers for a heavy gas and air. The resultant flutter boundaries for both gases are also presented. The effects of viscous damping and angle of attack on the flutter boundary in air are presented as well.
Protection against static electricity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shteiner, A.L.; Minaev, G.S.; Shatkov, O.P.
1978-01-01
Coke industry shops process electrifiable, highly inflammable and explosive substances (benzene, toluene, xylenes, sulfur, coal dust, and coke-oven gas). The electrification of these materials creates a danger of buildup of static electricity charges in them and on the surface of objects interacting with them, followed by an electrical discharge which may cause explosion, fire, or disruption of the technological process. Some of the regulations for protection against static electricity do not reflect modern methods of static electricity control, and the regulations are not always observed by workers in the plant services. The main means of protection used to remove static electricity charges is grounding. In many cases it completely drains the charge from the surface of the electrifiable bodies. However, in the processing of compounds with a high specific volumetric electrical resistance, grounding is insufficient, since it does not drain the charge from the interior of the substance.
Graham, David F; Carty, Christopher P; Lloyd, David G; Barrett, Rod S
2017-01-01
The purpose of this study was to determine the muscular contributions to the acceleration of the whole body centre of mass (COM) of older compared to younger adults that were able to recover from forward loss of balance with a single step. Forward loss of balance was achieved by releasing participants (14 older adults and 6 younger adults) from a static whole-body forward lean angle of approximately 18 degrees. 10 older adults and 6 younger adults were able to recover with a single step and included in subsequent analysis. A scalable anatomical model consisting of 36 degrees-of-freedom was used to compute kinematics and joint moments from motion capture and force plate data. Forces for 92 muscle actuators were computed using Static Optimisation and Induced Acceleration Analysis was used to compute individual muscle contributions to the three-dimensional acceleration of the whole body COM. There were no significant differences between older and younger adults in step length, step time, 3D COM accelerations or muscle contributions to 3D COM accelerations. The stance and stepping leg Gastrocnemius and Soleus muscles were primarily responsible for the vertical acceleration experienced by the COM. The Gastrocnemius and Soleus from the stance side leg together with bilateral Hamstrings accelerated the COM forwards throughout balance recovery while the Vasti and Soleus of the stepping side leg provided the majority of braking accelerations following foot contact. The Hip Abductor muscles provided the greatest contribution to medial-lateral accelerations of the COM. Deficits in the neuromuscular control of the Gastrocnemius, Soleus, Vasti and Hip Abductors in particular could adversely influence balance recovery and may be important targets in interventions to improve balance recovery performance.
PMID: 29069097
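The Static Optimisation step described above distributes required joint moments across redundant muscles. A minimal numpy sketch of the idea follows, with invented moment arms and joint moments, and without the non-negativity and strength bounds a real musculoskeletal model (here, 92 actuators) enforces:

```python
import numpy as np

# Muscle forces f must reproduce the joint moments: R @ f = m,
# where R holds moment arms (joints x muscles). Values are made up.
R = np.array([[0.05, 0.03, 0.00, 0.02],    # joint 1 moment arms (m)
              [0.00, 0.04, 0.05, 0.01]])   # joint 2 moment arms (m)
m = np.array([30.0, 25.0])                 # required joint moments (N*m)

# Minimum-norm (sum-of-squares) resolution of the redundancy; a full
# static optimization would also enforce 0 <= f <= F_max per muscle.
f = np.linalg.pinv(R) @ m

# The chosen forces reproduce the demanded joint moments exactly.
assert np.allclose(R @ f, m)
```

Induced Acceleration Analysis then takes each muscle's force in turn and propagates it through the model's equations of motion to obtain that muscle's contribution to the COM acceleration.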
Configuration Management at NASA
NASA Technical Reports Server (NTRS)
Doreswamy, Rajiv
2013-01-01
NASA programs are characterized by complexity, harsh environments and the fact that we usually have one chance to get it right. Programs last decades and need to accept new hardware and technology as it is developed. We have multiple suppliers and international partners; our challenges are many, our costs are high and our failures are highly visible. CM systems need to be scalable, adaptable to new technology and span the life cycle of the program (30+ years). Multiple systems, contractors and countries added major levels of complexity to the ISS program and its CM/DM and requirements management systems. • CM systems need to be designed for a long design life • Space Station design started in 1984 • Assembly was completed in 2012 • Systems were developed on a task basis without an overall system perspective • Technology moves faster than a large project office, so try to make sure you have a system that can adapt
Antonova, A A; Absatova, K A; Korneev, A A; Kurgansky, A V
2015-01-01
The production of drawing movements was studied in 29 right-handed children of 9 to 11 years old. The movements were sequences of horizontal and vertical linear strokes conjoined at right angles (open polygonal chains), referred to throughout the paper as trajectories. The length of a trajectory varied from 4 to 6 strokes. The trajectories were presented visually to a subject in static (line drawing) and dynamic (moving cursor that leaves no trace) modes. The subjects were asked to draw (copy) a trajectory in response to a delayed go-signal (short click) as fast as possible without lifting the pen. The production latency time, the average movement duration along a trajectory segment, and the overall number of errors committed by a subject during trajectory production were analyzed. A comparison of children's data with similar data in adults (16 subjects) shows the following. First, a substantial reduction in error rate is observed in the age range between 9 and 11 years old for both static and dynamic modes of trajectory presentation, with children of 11 still committing more errors than adults. Second, the averaged movement duration shortens with age while the latency time tends to increase. Third, unlike the adults, the children of 9-11 do not show any difference in latency time between static and dynamic modes of visual presentation of trajectories. The difference in trajectory production between adults and children is attributed to the predominant involvement of on-line programming in children and pre-programming in adults.
1965-02-01
This photograph shows a fuel tank lower half for the Saturn V S-IC-T stage (the S-IC stage for static testing) on a C-frame transporter inside the vertical assembly building at the Marshall Space Flight Center.
Development of an advanced pitch active control system for a wide body jet aircraft
NASA Technical Reports Server (NTRS)
Guinn, Wiley A.; Rising, Jerry J.; Davis, Walt J.
1984-01-01
An advanced PACS control law was developed for a commercial wide-body transport (Lockheed L-1011) by using modern control theory. Validity of the control law was demonstrated by piloted flight simulation tests on the NASA Langley visual motion simulator. The PACS design objective was to develop a PACS that would provide good flying qualities to negative 10 percent static stability margins that were equivalent to those of the baseline aircraft at a 15 percent static stability margin which is normal for the L-1011. Also, the PACS was to compensate for high-Mach/high-g instabilities that degrade flying qualities during upset recoveries and maneuvers. The piloted flight simulation tests showed that the PACS met the design objectives. The simulation demonstrated good flying qualities to negative 20 percent static stability margins for hold, cruise and high-speed flight conditions. Analysis and wind tunnel tests performed on other Lockheed programs indicate that the PACS could be used on an advanced transport configuration to provide a 4 percent fuel savings which results from reduced trim drag by flying at negative static stability margins.
NASA Technical Reports Server (NTRS)
Sutter, Thomas R.; Wu, K. Chauncey; Riutort, Kevin T.; Laufer, Joseph B.; Phelps, James E.
1992-01-01
A first-generation space crane articulated-truss joint was statically and dynamically characterized in a configuration that approximated an operational environment. The articulated-truss joint was integrated into a test-bed for structural characterization. Static characterization was performed by applying known loads and measuring the corresponding deflections to obtain load-deflection curves. Dynamic characterization was performed using modal testing to experimentally determine the first six mode shapes, frequencies, and modal damping values. Static and dynamic characteristics were also determined for a reference truss that served as a characterization baseline. Load-deflection curves and experimental frequency response functions are presented for the reference truss and the articulated-truss joint mounted in the test-bed. The static and dynamic experimental results are compared with analytical predictions obtained from finite element analyses. Load-deflection response is also presented for one of the linear actuators used in the articulated-truss joint. Finally, an assessment is presented for the predictability of the truss hardware used in the reference truss and articulated-truss joint based upon hardware stiffness properties that were previously obtained during the Precision Segmented Reflector (PSR) Technology Development Program.
Certified In-lined Reference Monitoring on .NET
2006-06-01
Introduction: Language-based approaches to computer security have employed two major strategies for enforcing security policies over untrusted programs. … automatically verify IRMs using a static type-checker. Mobile (MOnitorable BIL with Effects) is an extension of BIL (Baby Intermediate Language) [15], a … Approved for public release; distribution unlimited. Proceedings of the 2006 Programming Languages and …
NASA Technical Reports Server (NTRS)
1989-01-01
NBOD2, a program developed at Goddard Space Flight Center to solve the equations of motion of coupled N-body systems, is used by E.I. DuPont de Nemours & Co. to model potential drugs as a series of elements. The program analyzes the vibrational and static motions of independent components in drugs. Information generated from this process is used to design specific drugs to interact with enzymes in designated ways.
A Switching-Mode Power Supply Design Tool to Improve Learning in a Power Electronics Course
ERIC Educational Resources Information Center
Miaja, P. F.; Lamar, D. G.; de Azpeitia, M.; Rodriguez, A.; Rodriguez, M.; Hernando, M. M.
2011-01-01
The static design of ac/dc and dc/dc switching-mode power supplies (SMPS) relies on a simple but repetitive process. Although specific spreadsheets, available in various computer-aided design (CAD) programs, are widely used, they are difficult to use in educational applications. In this paper, a graphic tool programmed in MATLAB is presented,…
ERIC Educational Resources Information Center
Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Oliva, Doretta; Campodonico, Francesca; Lang, Russell
2012-01-01
The present three single-case studies assessed the effectiveness of technology-based programs to help three persons with multiple disabilities exercise adaptive response schemes independently. The response schemes included (a) left and right head movements for a man who kept his head increasingly static on his wheelchair's headrest (Study I), (b)…
VALIDATION OF ANSYS FINITE ELEMENT ANALYSIS SOFTWARE
DOE Office of Scientific and Technical Information (OSTI.GOV)
HAMM, E.R.
2003-06-27
This document provides a record of the verification and validation of the ANSYS Version 7.0 software that is installed on selected CH2M HILL computers. The issues addressed include: software verification, installation, validation, configuration management and error reporting. The ANSYS® computer program is a large-scale multi-purpose finite element program which may be used for solving several classes of engineering analysis. The analysis capabilities of ANSYS Full Mechanical Version 7.0 installed on selected CH2M Hill Hanford Group (CH2M HILL) Intel processor based computers include the ability to solve static and dynamic structural analyses, steady-state and transient heat transfer problems, mode-frequency and buckling eigenvalue problems, static or time-varying magnetic analyses and various types of field and coupled-field applications. The program contains many special features which allow nonlinearities or secondary effects to be included in the solution, such as plasticity, large strain, hyperelasticity, creep, swelling, large deflections, contact, stress stiffening, temperature dependency, material anisotropy, and thermal radiation. The ANSYS program has been in commercial use since 1970, and has been used extensively in the aerospace, automotive, construction, electronic, energy services, manufacturing, nuclear, plastics, oil and steel industries.
OpenARC: Extensible OpenACC Compiler Framework for Directive-Based Accelerator Programming Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Seyong; Vetter, Jeffrey S
2014-01-01
Directive-based accelerator programming models such as OpenACC have arisen as an alternative solution for programming emerging Scalable Heterogeneous Computing (SHC) platforms. However, the increased complexity in the SHC systems incurs several challenges in terms of portability and productivity. This paper presents an open-source OpenACC compiler, called OpenARC, which serves as an extensible research framework to address those issues in directive-based accelerator programming. This paper explains important design strategies and key compiler transformation techniques needed to implement the reference OpenACC compiler. Moreover, this paper demonstrates the efficacy of OpenARC as a research framework for directive-based programming study, by proposing and implementing OpenACC extensions in the OpenARC framework to 1) support hybrid programming of the unified memory and separate memory and 2) exploit architecture-specific features in an abstract manner. Porting thirteen standard OpenACC programs and three extended OpenACC programs to CUDA GPUs shows that OpenARC performs similarly to a commercial OpenACC compiler, while it serves as a high-level research framework.
Servoss, Jonathan; Chang, Connie; Fay, Jonathan; Lota, Kanchan Sehgal; Mashour, George A; Ward, Kevin R
2017-10-01
The Institute of Medicine recommended advancing innovation and entrepreneurship training programs within the Clinical & Translational Science Award (CTSA) program; however, there remains a gap in adoption among CTSA institutes. The University of Michigan's Michigan Institute for Clinical & Health Research and Fast Forward Medical Innovation (FFMI) partnered to develop a pilot program designed to teach CTSA hubs how to implement innovation and entrepreneurship programs at their home institutions. The program provided a 2-day onsite training experience combined with observation of an ongoing course focused on providing biomedical innovation, commercialization and entrepreneurial training to a medical academician audience (FFMI fast PACE). All 9 participating CTSA institutes reported a greater connection to biomedical research commercialization resources. Six launched their own version of the FFMI fast PACE course or modified existing programs. Two reported greater collaboration with their technology transfer offices. The FFMI fast PACE course and training program may be suitable for CTSA hubs looking to enhance innovation and entrepreneurship within their institutions and across their innovation ecosystems.
Scalable and portable visualization of large atomistic datasets
NASA Astrophysics Data System (ADS)
Sharma, Ashish; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya
2004-10-01
A scalable and portable code named Atomsviewer has been developed to interactively visualize a large atomistic dataset consisting of up to a billion atoms. The code uses a hierarchical view frustum-culling algorithm based on the octree data structure to efficiently remove atoms outside of the user's field-of-view. Probabilistic and depth-based occlusion-culling algorithms then select atoms, which have a high probability of being visible. Finally a multiresolution algorithm is used to render the selected subset of visible atoms at varying levels of detail. Atomsviewer is written in C++ and OpenGL, and it has been tested on a number of architectures including Windows, Macintosh, and SGI. Atomsviewer has been used to visualize tens of millions of atoms on a standard desktop computer and, in its parallel version, up to a billion atoms.
Program summary
Title of program: Atomsviewer
Catalogue identifier: ADUM
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUM
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer for which the program is designed and others on which it has been tested: 2.4 GHz Pentium 4/Xeon processor, professional graphics card; Apple G4 (867 MHz)/G5, professional graphics card
Operating systems under which the program has been tested: Windows 2000/XP, Mac OS 10.2/10.3, SGI IRIX 6.5
Programming languages used: C++, C and OpenGL
Memory required to execute with typical data: 1 gigabyte of RAM
High speed storage required: 60 gigabytes
No. of lines in the distributed program including test data, etc.: 550 241
No. of bytes in the distributed program including test data, etc.: 6 258 245
Number of bits in a word: Arbitrary
Number of processors used: 1
Has the code been vectorized or parallelized: No
Distribution format: tar gzip file
Nature of physical problem: Scientific visualization of atomic systems
Method of solution: Rendering of atoms using computer graphic techniques, culling algorithms for data minimization, and levels-of-detail for minimal rendering
Restrictions on the complexity of the problem: None
Typical running time: The program is interactive in its execution
Unusual features of the program: None
References: The conceptual foundation and subsequent implementation of the algorithms are found in [A. Sharma, A. Nakano, R.K. Kalia, P. Vashishta, S. Kodiyalam, P. Miller, W. Zhao, X.L. Liu, T.J. Campbell, A. Haas, Presence—Teleoperators and Virtual Environments 12 (1) (2003)].
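The hierarchical view-frustum culling that Atomsviewer performs over its octree can be sketched as follows. The frustum is simplified here to an axis-aligned view box and the octree to hand-built nodes, so this only illustrates the cull-whole-subtree logic, not the probabilistic occlusion or level-of-detail stages:

```python
# Reject an octree subtree as soon as its bounding box misses the view volume.
def overlaps(lo_a, hi_a, lo_b, hi_b):
    """Axis-aligned box intersection test."""
    return all(la <= hb and lb <= ha
               for la, ha, lb, hb in zip(lo_a, hi_a, lo_b, hi_b))

class Node:
    def __init__(self, lo, hi, atoms=(), children=()):
        self.lo, self.hi = lo, hi          # axis-aligned bounding box
        self.atoms, self.children = list(atoms), list(children)

def visible_atoms(node, view_lo, view_hi):
    # Whole subtree culled in one test if its box misses the view volume.
    if not overlaps(node.lo, node.hi, view_lo, view_hi):
        return []
    out = [a for a in node.atoms
           if all(l <= c <= h for c, l, h in zip(a, view_lo, view_hi))]
    for child in node.children:
        out += visible_atoms(child, view_lo, view_hi)
    return out

# Two leaf octants; only the first lies in the view volume.
left = Node((0, 0, 0), (1, 1, 1), atoms=[(0.5, 0.5, 0.5)])
right = Node((2, 0, 0), (3, 1, 1), atoms=[(2.5, 0.5, 0.5)])
root = Node((0, 0, 0), (3, 1, 1), children=[left, right])
assert visible_atoms(root, (0, 0, 0), (1.5, 1.5, 1.5)) == [(0.5, 0.5, 0.5)]
```

The payoff is that for a deep octree over a billion atoms, most of the dataset is rejected near the root without ever touching individual atoms.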
An Approach to Self-Assembling Swarm Robots Using Multitree Genetic Programming
Lee, Jong-Hyun; Ahn, Chang Wook; An, Jinung
2013-01-01
In recent days, self-assembling swarm robots have been studied by a number of researchers due to their advantages such as high efficiency, stability, and scalability. However, there are still critical issues in applying them to practical problems in the real world. The main objective of this study is to develop a novel self-assembling swarm robot algorithm that overcomes the limitations of existing approaches. To this end, multitree genetic programming is newly designed to efficiently discover a set of patterns necessary to carry out the mission of the self-assembling swarm robots. The obtained patterns are then incorporated into their corresponding robot modules. The computational experiments prove the effectiveness of the proposed approach. PMID: 23861655
1960-06-15
The Saturn Project was approved on January 18, 1960 as a program of the highest national priority. The formal test program to prove out the clustered-booster concept was well underway. A series of static tests of the Saturn I booster (S-I stage) began June 3, 1960 at the Marshall Space Flight Center (MSFC). This photograph depicts the Saturn I S-I stage equipped with eight H-1 engines, being successfully test-fired for the duration of 121 seconds on June 15, 1960.
Hardware-Independent Proofs of Numerical Programs
NASA Technical Reports Server (NTRS)
Boldo, Sylvie; Nguyen, Thi Minh Tuyen
2010-01-01
On recent architectures, a numerical program may give different answers depending on the execution hardware and the compilation. Our goal is to formally prove properties about numerical programs that hold across multiple architectures and compilers. We propose an approach that bounds the rounding error of each floating-point computation regardless of the environment. This approach is implemented in the Frama-C platform for static analysis of C code. Small case studies using this approach are entirely and automatically proved.
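The per-operation rounding-error model behind such proofs can be sketched outside any prover: in IEEE-754 binary64 with round-to-nearest, each arithmetic operation returns the exact result scaled by (1 + d) with |d| <= u = 2^-53. A minimal Python check of that bound for addition, using exact rational arithmetic (an illustration of the error model only, not of the Frama-C implementation):

```python
from fractions import Fraction

# Unit roundoff for IEEE-754 binary64 with round-to-nearest: u = 2^-53.
U = Fraction(2) ** -53

def check_standard_model(a: float, b: float) -> bool:
    """Verify fl(a + b) = (a + b) * (1 + d) with |d| <= u for one addition."""
    exact = Fraction(a) + Fraction(b)   # exact sum of the two stored doubles
    computed = Fraction(a + b)          # the correctly rounded double result
    if exact == 0:
        return computed == 0
    return abs(computed - exact) <= U * abs(exact)

print(check_standard_model(0.1, 0.2))   # True: the error of 0.1 + 0.2 obeys the bound
```

Because the bound is a property of correct rounding itself, it holds on any hardware that performs each operation in binary64 without keeping double-extended intermediates, which is what makes such per-operation statements environment-independent.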
Tools for automated acoustic monitoring within the R package monitoR
Katz, Jonathan; Hafner, Sasha D.; Donovan, Therese
2016-01-01
The R package monitoR contains tools for managing an acoustic-monitoring program including survey metadata, template creation and manipulation, automated detection and results management. These tools are scalable for use with small projects as well as larger long-term projects and those with expansive spatial extents. Here, we describe typical workflow when using the tools in monitoR. Typical workflow utilizes a generic sequence of functions, with the option for either binary point matching or spectrogram cross-correlation detectors.
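The spectrogram cross-correlation detector mentioned above can be sketched language-agnostically: slide a template over the data and score each offset by normalized cross-correlation, reporting the best match. A 1-D pure-Python stand-in (monitoR itself is an R package; the function name and signature here are invented for illustration):

```python
import math

def norm_xcorr(signal, template):
    """Return the peak normalized cross-correlation score and its offset
    when sliding `template` over `signal` (1-D stand-in for the
    spectrogram cross-correlation detector)."""
    n, m = len(signal), len(template)
    t_mean = sum(template) / m
    t_dev = [t - t_mean for t in template]
    t_norm = math.sqrt(sum(d * d for d in t_dev))
    best_score, best_off = -1.0, 0
    for off in range(n - m + 1):
        window = signal[off:off + m]
        w_mean = sum(window) / m
        w_dev = [w - w_mean for w in window]
        w_norm = math.sqrt(sum(d * d for d in w_dev))
        if w_norm == 0 or t_norm == 0:
            continue  # a flat window carries no correlation information
        score = sum(a * b for a, b in zip(w_dev, t_dev)) / (w_norm * t_norm)
        if score > best_score:
            best_score, best_off = score, off
    return best_score, best_off

# A template embedded in silent padding is located exactly: off == 3.
score, off = norm_xcorr([0, 0, 0, 1, 3, 1, 0, 0], [1, 3, 1])
```

In practice a detection is declared wherever the score exceeds a user-chosen cutoff, which is the role of the score thresholds attached to templates in monitoR.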
Yang, L. H.; Brooks III, E. D.; Belak, J.
1992-01-01
A molecular dynamics algorithm for performing large-scale simulations using the Parallel C Preprocessor (PCP) programming paradigm on the BBN TC2000, a massively parallel computer, is discussed. The algorithm uses a linked-cell data structure to obtain the near neighbors of each atom as time evolves. Each processor is assigned to a geometric domain containing many subcells, and the storage for that domain is private to the processor. Within this scheme, the interdomain (i.e., interprocessor) communication is minimized.
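The linked-cell idea can be sketched as follows: atoms are binned into cells whose side is at least the interaction cutoff, so each atom's near neighbors are confined to its own cell and the 26 adjacent ones. A serial Python sketch under that assumption (the parallel PCP code additionally assigns contiguous blocks of cells, i.e. domains, to processors):

```python
from collections import defaultdict
from itertools import product

def linked_cell_neighbors(positions, box, cutoff):
    """All pairs closer than `cutoff` in a cubic periodic box of side `box`,
    found via a linked-cell grid instead of an O(N^2) scan."""
    ncell = max(1, int(box // cutoff))
    side = box / ncell
    cells = defaultdict(list)
    for i, (x, y, z) in enumerate(positions):
        key = (int(x / side) % ncell, int(y / side) % ncell, int(z / side) % ncell)
        cells[key].append(i)
    cut2 = cutoff * cutoff
    pairs = set()
    for (cx, cy, cz), members in cells.items():
        # Only the cell itself and its 26 neighbors can hold atoms in range.
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            other = cells.get(((cx + dx) % ncell, (cy + dy) % ncell, (cz + dz) % ncell), ())
            for i in members:
                for j in other:
                    if i < j:
                        # minimum-image squared distance in the periodic box
                        d2 = sum(min(abs(a - b), box - abs(a - b)) ** 2
                                 for a, b in zip(positions[i], positions[j]))
                        if d2 <= cut2:
                            pairs.add((i, j))
    return pairs
```

Because distant cell pairs are never compared, the cost per step is linear in the number of atoms for roughly uniform densities, which is what makes the structure attractive at scale.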
Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)
2002-01-01
A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics while retaining the efficiency of the individual disciplines. Computational domain independence of individual disciplines is maintained using a meta-programming approach. The process integrates disciplines without degrading the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The scalability and portability of the approach are demonstrated on several parallel computers.
Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)
2001-01-01
A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics while retaining the efficiency of the individual disciplines. Computational domain independence of individual disciplines is maintained using a meta-programming approach. The process integrates disciplines without degrading the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The scalability and portability of the approach are demonstrated on several parallel computers.
Young Investigator Program: Modular Paradigm for Scalable Quantum Information
2016-03-04
For comparison, we plot the time required with direct driving (green lines) with bare Rabi frequencies of 20 and 100 kHz, when the electronic spin is in state … from the NV center. Note that virtual transitions of the electronic spin in the ms = 0 manifold result in a decrease of the effective Rabi frequency … strength [17–19]. This nuclear Rabi enhancement depends on the state of the electronic spin. The effective Rabi frequency Ω for an isolated nuclear spin …
MADNESS: A Multiresolution, Adaptive Numerical Environment for Scientific Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, Robert J.; Beylkin, Gregory; Bischoff, Florian A.
2016-01-01
MADNESS (multiresolution adaptive numerical environment for scientific simulation) is a high-level software environment for solving integral and differential equations in many dimensions that uses adaptive and fast harmonic analysis methods with guaranteed precision based on multiresolution analysis and separated representations. Underpinning the numerical capabilities is a powerful petascale parallel programming environment that aims to increase both programmer productivity and code scalability. This paper describes the features and capabilities of MADNESS and briefly discusses some current applications in chemistry and several areas of physics.
The roofline model: A pedagogical tool for program analysis and optimization
Williams, Samuel; Patterson, David; Oliker, Leonid; ...
2008-08-01
This article consists of a collection of slides from the authors' conference presentation. The Roofline model is a visually intuitive figure for kernel analysis and optimization. We believe undergraduates will find it useful in assessing performance and scalability limitations. It is easily extended to other architectural paradigms, and to other metrics: performance (sort, graphics, crypto, ...) and bandwidth (L2, PCIe, ...). Furthermore, performance counters could be used to generate a runtime-specific roofline that would greatly aid optimization.
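The model itself is one line of arithmetic: attainable performance is the minimum of the compute ceiling and the memory ceiling at the kernel's arithmetic intensity. A sketch with hypothetical machine numbers (100 GFLOP/s peak, 25 GB/s bandwidth; not figures from the article):

```python
def roofline(peak_gflops, bw_gbs, intensity):
    """Attainable GFLOP/s = min(compute ceiling, bandwidth * arithmetic intensity)."""
    return min(peak_gflops, bw_gbs * intensity)

peak, bw = 100.0, 25.0        # hypothetical machine
ridge = peak / bw             # 4 flops/byte: intensity where the two ceilings meet
for ai in (0.5, 4.0, 16.0):
    perf = roofline(peak, bw, ai)
    bound = "memory-bound" if ai < ridge else "compute-bound"
    print(f"AI={ai:4.1f} flop/byte -> {perf:5.1f} GFLOP/s ({bound})")
```

Kernels left of the ridge point are limited by memory traffic, those right of it by raw compute, which is exactly the diagnosis the roofline figure makes visually.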
Design and evaluation of a wireless sensor network based aircraft strength testing system.
Wu, Jian; Yuan, Shenfang; Zhou, Genyuan; Ji, Sai; Wang, Zilong; Wang, Yang
2009-01-01
The verification of aerospace structures, including full-scale fatigue and static test programs, is essential for structure strength design and evaluation. However, the current overall ground strength testing systems employ a large number of wires for communication among sensors and data acquisition facilities. The centralized data processing makes test programs lack efficiency and intelligence. Wireless sensor network (WSN) technology might be expected to address the limitations of cable-based aeronautical ground testing systems. This paper presents a wireless sensor network based aircraft strength testing (AST) system design and its evaluation on a real aircraft specimen. In this paper, a miniature, high-precision, and shock-proof wireless sensor node is designed for multi-channel strain gauge signal conditioning and monitoring. A cluster-star network topology protocol and application layer interface are designed in detail. To verify the functionality of the designed wireless sensor network for strength testing capability, a multi-point WSN based AST system is developed for static testing of a real aircraft undercarriage. Based on the designed wireless sensor nodes, the wireless sensor network is deployed to gather, process, and transmit strain gauge signals and monitor results under different static test loads. This paper shows the efficiency of the wireless sensor network based AST system, compared to a conventional AST system.
NARC Rayon Replacement Program for the RSRM Nozzle, Phase IV Qualification and Implementation Status
NASA Technical Reports Server (NTRS)
Haddock, M. Reed; Wendel, Gary M.; Cook, Roger V.
2005-01-01
The Space Shuttle NARC Rayon Replacement Program has down-selected Enka rayon as a replacement for the obsolete NARC rayon in the nozzle carbon cloth phenolic (CCP) ablative insulators. Full qualification testing of the Enka rayon-based carbon cloth phenolic is underway, including processing, thermal/structural properties, and hot-fire subscale tests. Required thermal-structural capabilities, together with confidence in erosion/char performance in simulated and subscale hot-fire tests such as Wright-Patterson Air Force Base Laser Hardened Materials Evaluation Laboratory testing, NASA-MSFC 24-inch motor tests, NASA-MSFC Solid Fuel Torch - Super Sonic Blast Tube, NASA-MSFC Plasma Torch Test Bed, ATK Thiokol Forty Pound Charge, and NASA-MSFC MNASA, justified the testing of the new Enka-rayon candidate on full-scale static test motors. The first RSRM full-scale static test motor nozzle, fabricated using the new Enka rayon-based CCP, was successfully demonstrated in June 2004. Two additional static test motors are planned with the new Enka rayon in the next two years, along with additional A-basis property characterization. Process variation or "corner-of-the-box" testing, together with cured and uncured aging studies, is also planned among the pre-flight implementation activities, with 5-year cured aging studies overlapping flight hardware fabrication.
Disparity: scalable anomaly detection for clusters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desai, N.; Bradshaw, R.; Lusk, E.
2008-01-01
In this paper, we describe disparity, a tool that does parallel, scalable anomaly detection for clusters. Disparity uses basic statistical methods and scalable reduction operations to perform data reduction on client nodes and uses these results to locate node anomalies. We discuss the implementation of disparity and present results of its use on a SiCortex SC5832 system.
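The statistical data-reduction step can be sketched as follows: reduce per-node metrics to a cluster mean and standard deviation, then flag nodes that deviate by more than k sigma. A serial Python stand-in for the scalable reduction (the function name and threshold are illustrative, not disparity's actual interface):

```python
import statistics

def find_anomalies(node_metrics, k=2.0):
    """Flag nodes whose metric deviates more than k standard deviations
    from the cluster mean (serial stand-in for a tree reduction that
    would compute the same two statistics across client nodes)."""
    values = list(node_metrics.values())
    mean = statistics.fmean(values)
    sd = statistics.pstdev(values)
    if sd == 0:
        return []  # perfectly uniform cluster: nothing to flag
    return [node for node, v in node_metrics.items() if abs(v - mean) > k * sd]

# Hypothetical per-node boot times (seconds); node7 is the outlier.
metrics = {f"node{i}": 10.0 + 0.1 * i for i in range(7)}
metrics["node7"] = 30.0
print(find_anomalies(metrics))   # ['node7']
```

Because mean and variance are built from sums, they can be computed with a single scalable reduction over the nodes, which is what keeps this style of detection cheap on large clusters.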
Track Geometry Measurement System
DOT National Transportation Integrated Search
1980-09-01
This report contains a summary of the results of the test program that was conducted to validate the TGMS under various static and dynamic conditions. The TGMS has the capability to measure or derive gage, crosslevel (superelevation), warp (twist), c...
Static characteristics design of hydrostatic guide-ways based on fluid-structure interactions
NASA Astrophysics Data System (ADS)
Lin, Shuo; Yin, YueHong
2016-10-01
With rising requirements in micro optical systems, available machines struggle to achieve the required process dynamics and accuracy in all respects. This makes compact design based on fluid-structure interaction (FSI) important. However, studying FSI with an oil film as the fluid domain is difficult. This paper addresses the static characteristic design of a hydrostatic guide-way with capillary restrictors based on FSI. The pressure distribution over the oil film land is calculated by solving the Reynolds equation with the Galerkin technique. The deformation of the structure is calculated with the commercial FEM software MSC Nastran. A MATLAB program realizes the coupling by modifying the load boundary in the submitted file and reading back the deformation result. The stiffness of the hydrostatic bearing decreases as the bearing structure is weakened. This program is proposed to make a more precise prediction of bearing stiffness.
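The coupling loop described here can be sketched as a fixed-point iteration: solve for the film pressure, apply it as the load on the structure, let the deflection change the film thickness, and repeat until convergence. A minimal 1-D Python stand-in, with the Reynolds and FEM solvers replaced by algebraic models and all constants invented for illustration:

```python
def pocket_pressure(h, ps=1.0e6, rc=1.25e11, c=1.0e-3):
    """Capillary-compensated pocket: outflow resistance over the film land
    scales as 1/h^3, so the pocket pressure falls as the film opens.
    Constants are illustrative, not values from the paper."""
    return ps / (1.0 + rc * h ** 3 / c)

def coupled_film(h_rigid, compliance, area=1.0e-2, tol=1e-15, max_iter=200):
    """Fixed-point FSI loop: the film pressure loads the structure, the
    structural deflection reopens the film, and the exchange repeats
    until the film thickness converges (stand-in for the iterative
    Reynolds/Nastran data exchange)."""
    h = h_rigid
    for _ in range(max_iter):
        p = pocket_pressure(h)
        h_new = h_rigid + compliance * p * area   # deflection adds to the gap
        if abs(h_new - h) < tol:
            break
        h = h_new
    return h

# A compliant structure opens the film and lowers the pocket pressure,
# illustrating why bearing stiffness drops as the structure is weakened.
h_rigid = 20e-6                                   # 20 um nominal film
h_soft = coupled_film(h_rigid, compliance=1e-11)  # m per N, hypothetical
print(h_soft > h_rigid, pocket_pressure(h_soft) < pocket_pressure(h_rigid))
```

Here the iteration converges quickly because the structural feedback is weak; a real land/restrictor geometry would replace `pocket_pressure` with the Galerkin Reynolds solve and the compliance constant with the FEM deformation field.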