Sample records for highly concurrent computational

  1. Finite Element Analysis in Concurrent Processing: Computational Issues

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Watson, Brian; Vanderplaats, Garrett

    2004-01-01

    The purpose of this research is to investigate the potential application of new methods for solving large-scale static structural problems on concurrent computers. It is well known that traditional single-processor computational speed will be limited by inherent physical limits. The only path to achieve higher computational speeds lies through concurrent processing. Traditional factorization solution methods for sparse matrices are ill suited for concurrent processing because the null entries get filled, leading to high communication and memory requirements. The research reported herein investigates alternatives to factorization that promise a greater potential to achieve high concurrent computing efficiency. Two methods, and their variants, based on direct energy minimization are studied: a) minimization of the strain energy using the displacement method formulation; b) constrained minimization of the complementary strain energy using the force method formulation. Initial results indicated that in the context of the direct energy minimization the displacement formulation experienced convergence and accuracy difficulties while the force formulation showed promising potential.
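
    The record above describes minimizing the strain energy directly instead of factorizing the stiffness matrix. As an illustration only (no code accompanies the record), the following Python sketch minimizes the quadratic potential energy Pi(u) = 1/2 u^T K u - f^T u by steepest descent; it needs nothing but matrix-vector products, which is the property that makes such schemes attractive for concurrent processing. The function name and the toy stiffness matrix are invented for the example.

    import numpy as np

    def minimize_strain_energy(K, f, tol=1e-8, max_iter=10_000):
        """Steepest-descent minimization of Pi(u) = 0.5*u.K.u - f.u (K symmetric positive definite)."""
        u = np.zeros_like(f)
        for _ in range(max_iter):
            r = f - K @ u                      # negative gradient of Pi at u
            if np.linalg.norm(r) < tol:
                break
            alpha = (r @ r) / (r @ (K @ r))    # exact line search along r
            u = u + alpha * r
        return u

    # Toy 3-DOF problem: the minimizer coincides with the solution of K u = f
    K = np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  2.0]])
    f = np.array([0.0, 0.0, 1.0])
    print(minimize_strain_energy(K, f))        # ~[0.25, 0.5, 0.75], same as np.linalg.solve(K, f)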

  2. The Caltech Concurrent Computation Program - Project description

    NASA Technical Reports Server (NTRS)

    Fox, G.; Otto, S.; Lyzenga, G.; Rogstad, D.

    1985-01-01

    The Caltech Concurrent Computation Program, which studies basic issues in computational science, is described. The research builds on initial work where novel concurrent hardware, the necessary systems software to use it, and twenty significant scientific implementations running on the initial 32, 64, and 128 node hypercube machines have been constructed. A major goal of the program will be to extend this work into new disciplines and more complex algorithms, including general packages that decompose arbitrary problems in major application areas. New high-performance concurrent processors with up to 1024 nodes, over a gigabyte of memory, and multigigaflop performance are being constructed. The implementations cover a wide range of problems in areas such as high energy physics and astrophysics, condensed matter, chemical reactions, plasma physics, applied mathematics, geophysics, simulation, CAD for VLSI, graphics and image processing. The products of the research program include the concurrent algorithms, hardware, systems software, and complete program implementations.

  3. Functional language and data flow architectures

    NASA Technical Reports Server (NTRS)

    Ercegovac, M. D.; Patel, D. R.; Lang, T.

    1983-01-01

    This is a tutorial article about language and architecture approaches for highly concurrent computer systems based on the functional style of programming. The discussion concentrates on the basic aspects of functional languages, and sequencing models such as data-flow, demand-driven and reduction which are essential at the machine organization level. Several examples of highly concurrent machines are described.

  4. VLSI neuroprocessors

    NASA Technical Reports Server (NTRS)

    Kemeny, Sabrina E.

    1994-01-01

    Electronic and optoelectronic hardware implementations of highly parallel computing architectures address several ill-defined and/or computation-intensive problems not easily solved by conventional computing techniques. The concurrent processing architectures developed are derived from a variety of advanced computing paradigms including neural network models, fuzzy logic, and cellular automata. Hardware implementation technologies range from state-of-the-art digital/analog custom-VLSI to advanced optoelectronic devices such as computer-generated holograms and e-beam fabricated Dammann gratings. JPL's concurrent processing devices group has developed a broad technology base in hardware-implementable parallel algorithms, low-power and high-speed VLSI designs and building block VLSI chips, leading to application-specific high-performance embeddable processors. Application areas include high throughput map-data classification using feedforward neural networks, a terrain-based tactical movement planner using cellular automata, resource optimization (weapon-target assignment) using a multidimensional feedback network with lateral inhibition, and classification of rocks using an inner-product scheme on thematic mapper data. In addition to addressing specific functional needs of DOD and NASA, the JPL-developed concurrent processing device technology is also being customized for a variety of commercial applications (in collaboration with industrial partners), and is being transferred to U.S. industries. This viewgraph presentation focuses on two application-specific processors which solve the computation-intensive tasks of resource allocation (weapon-target assignment) and terrain-based tactical movement planning using two extremely different topologies. Resource allocation is implemented as an asynchronous analog competitive assignment architecture inspired by the Hopfield network. Hardware realization leads to a two to four order of magnitude speed-up over conventional techniques and enables multiple assignments (many to many) not achievable with standard statistical approaches. Tactical movement planning (finding the best path from A to B) is accomplished with a digital two-dimensional concurrent processor array. By exploiting the natural parallel decomposition of the problem in silicon, a four order of magnitude speed-up over optimized software approaches has been demonstrated.

  5. Heterogeneous concurrent computing with exportable services

    NASA Technical Reports Server (NTRS)

    Sunderam, Vaidy

    1995-01-01

    Heterogeneous concurrent computing, based on the traditional process-oriented model, is approaching its functionality and performance limits. An alternative paradigm, based on the concept of services, supporting data driven computation, and built on a lightweight process infrastructure, is proposed to enhance the functional capabilities and the operational efficiency of heterogeneous network-based concurrent computing. TPVM is an experimental prototype system supporting exportable services, thread-based computation, and remote memory operations that is built as an extension of and an enhancement to the PVM concurrent computing system. TPVM offers a significantly different computing paradigm for network-based computing, while maintaining a close resemblance to the conventional PVM model in the interest of compatibility and ease of transition. Preliminary experiences have demonstrated that the TPVM framework presents a natural yet powerful concurrent programming interface, while being capable of delivering performance improvements of up to thirty percent.

  6. Image-Processing Software For A Hypercube Computer

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

    1992-01-01

    Concurrent Image Processing Executive (CIPE) is software system intended to develop and use image-processing application programs on concurrent computing environment. Designed to shield programmer from complexities of concurrent-system architecture, it provides interactive image-processing environment for end user. CIPE utilizes architectural characteristics of particular concurrent system to maximize efficiency while preserving architectural independence from user and programmer. CIPE runs on Mark-IIIfp 8-node hypercube computer and associated SUN-4 host computer.

  7. For Whom Is a Picture Worth a Thousand Words? Extensions of a Dual-Coding Theory of Multimedia Learning.

    ERIC Educational Resources Information Center

    Mayer, Richard E.; Sims, Valerie K.

    1994-01-01

    In 2 experiments, 162 high- and low-spatial ability students viewed a computer-generated animation and heard a concurrent or successive explanation. The concurrent group generated more creative solutions to transfer problems and demonstrated a contiguity effect consistent with dual-coding theory. (SLD)

  8. Design of testbed and emulation tools

    NASA Technical Reports Server (NTRS)

    Lundstrom, S. F.; Flynn, M. J.

    1986-01-01

    The research summarized was concerned with the design of testbed and emulation tools suitable to assist in projecting, with reasonable accuracy, the expected performance of highly concurrent computing systems on large, complete applications. Such testbed and emulation tools are intended for the eventual use of those exploring new concurrent system architectures and organizations, either as users or as designers of such systems. While a range of alternatives was considered, a software based set of hierarchical tools was chosen to provide maximum flexibility, to ease in moving to new computers as technology improves and to take advantage of the inherent reliability and availability of commercially available computing systems.

  9. The NASA computer science research program plan

    NASA Technical Reports Server (NTRS)

    1983-01-01

    A taxonomy of computer science is included, and the state of the art of each of the major computer science categories is summarized. A functional breakdown of NASA programs under Aeronautics R and D, space R and T, and institutional support is also included. These areas were assessed against the computer science categories. Concurrent processing, highly reliable computing, and information management are identified.

  10. Computational simulation of concurrent engineering for aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Singhal, S. N.

    1992-01-01

    Results are summarized of an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulations methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties - fundamental in developing such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering for propulsion systems and systems in general. Benefits and facets needing early attention in the development are outlined.

  11. Computational simulation for concurrent engineering of aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Singhal, S. N.

    1993-01-01

    Results are summarized for an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties--fundamental to develop such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering of propulsion systems and systems in general. Benefits and issues needing early attention in the development are outlined.

  12. Computational simulation for concurrent engineering of aerospace propulsion systems

    NASA Astrophysics Data System (ADS)

    Chamis, C. C.; Singhal, S. N.

    1993-02-01

    Results are summarized for an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties--fundamental to develop such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering of propulsion systems and systems in general. Benefits and issues needing early attention in the development are outlined.

  13. The Design of a High Performance Earth Imagery and Raster Data Management and Processing Platform

    NASA Astrophysics Data System (ADS)

    Xie, Qingyun

    2016-06-01

    This paper summarizes the general requirements and specific characteristics of both geospatial raster database management system and raster data processing platform from a domain-specific perspective as well as from a computing point of view. It also discusses the need of tight integration between the database system and the processing system. These requirements resulted in Oracle Spatial GeoRaster, a global scale and high performance earth imagery and raster data management and processing platform. The rationale, design, implementation, and benefits of Oracle Spatial GeoRaster are described. Basically, as a database management system, GeoRaster defines an integrated raster data model, supports image compression, data manipulation, general and spatial indices, content and context based queries and updates, versioning, concurrency, security, replication, standby, backup and recovery, multitenancy, and ETL. It provides high scalability using computer and storage clustering. As a raster data processing platform, GeoRaster provides basic operations, image processing, raster analytics, and data distribution featuring high performance computing (HPC). Specifically, HPC features include locality computing, concurrent processing, parallel processing, and in-memory computing. In addition, the APIs and the plug-in architecture are discussed.

  14. Advanced Collaborative Environments Supporting Systems Integration and Design

    DTIC Science & Technology

    2003-03-01

    These environments allow multiple individuals to concurrently view a virtual system or product model while simultaneously maintaining natural, human communication. These virtual systems operate within a computer-generated... As a result, TARDEC researchers and system developers are using this advanced high-end visualization technology to develop future...

  15. Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing.

    PubMed

    Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng

    2014-10-01

    Push-based database management system (DBMS) is a new type of data processing software that streams large volume of data to concurrent query operators. The high data rate of such systems requires large computing power provided by the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query processing operations and enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a premise in the design of G-SDMS. With NVIDIA's CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA stream. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize the main kernel scheduling disciplines in it. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels.

  16. Methodologies and systems for heterogeneous concurrent computing

    NASA Technical Reports Server (NTRS)

    Sunderam, V. S.

    1994-01-01

    Heterogeneous concurrent computing is gaining increasing acceptance as an alternative or complementary paradigm to multiprocessor-based parallel processing as well as to conventional supercomputing. While algorithmic and programming aspects of heterogeneous concurrent computing are similar to their parallel processing counterparts, system issues, partitioning and scheduling, and performance aspects are significantly different. In this paper, we discuss critical design and implementation issues in heterogeneous concurrent computing, and describe techniques for enhancing its effectiveness. In particular, we highlight the system level infrastructures that are required, aspects of parallel algorithm development that most affect performance, system capabilities and limitations, and tools and methodologies for effective computing in heterogeneous networked environments. We also present recent developments and experiences in the context of the PVM system and comment on ongoing and future work.

  17. From Desktop to Teraflop: Exploiting the U.S. Lead in High Performance Computing. NSF Blue Ribbon Panel on High Performance Computing.

    ERIC Educational Resources Information Center

    National Science Foundation, Washington, DC.

    This report addresses an opportunity to accelerate progress in virtually every branch of science and engineering concurrently, while also boosting the American economy as business firms also learn to exploit these new capabilities. The successful rapid advancement in both science and technology creates its own challenges, four of which are…

  18. Parallel Algorithm Solves Coupled Differential Equations

    NASA Technical Reports Server (NTRS)

    Hayashi, A.

    1987-01-01

    Numerical methods adapted to concurrent processing. Algorithm solves set of coupled partial differential equations by numerical integration. Adapted to run on hypercube computer, algorithm separates problem into smaller problems solved concurrently. Increase in computing speed with concurrent processing over that achievable with conventional sequential processing appreciable, especially for large problems.

  19. Ada Compiler Validation Summary Report: Certificate Number: 900121S1. 10251 Computer Sciences Corporation MC Ada V1.2.Beta/Concurrent Computer Corporation Concurrent/Masscomp 5600 Host To Concurrent/Masscomp 5600 (Dual 68020 Processor Configuration) Target

    DTIC Science & Technology

    1990-04-23

    developed Ada Real-Time Operating System (ARTOS) for bare machine environments (Target), ACVC 1.10. ... SUBJECT TERMS: Ada programming language, Ada...configuration) Operating System: CSC developed Ada Real-Time Operating System (ARTOS) for bare machine environments. Memory Size: 4MB 2.2...Test Method: Testing of the MC Ada V1.2.beta/Concurrent Computer Corporation compiler and the CSC developed Ada Real-Time Operating System (ARTOS) for

  20. Generalized concurrence in boson sampling.

    PubMed

    Chin, Seungbeom; Huh, Joonsuk

    2018-04-17

    A fundamental question in linear optical quantum computing is to understand the origin of the quantum supremacy in the physical system. It is found that the multimode linear optical transition amplitudes are calculated through the permanents of transition operator matrices, which is a hard problem for classical simulations (boson sampling problem). We can understand this problem by considering a quantum measure that directly determines the runtime for computing the transition amplitudes. In this paper, we suggest a quantum measure named "Fock state concurrence sum" C_S, which is the summation over all the members of "the generalized Fock state concurrence" (a measure analogous to the generalized concurrences of entanglement and coherence). By introducing generalized algorithms for computing the transition amplitudes of the Fock state boson sampling with an arbitrary number of photons per mode, we show that the minimal classical runtime for all the known algorithms directly depends on C_S. Therefore, we can state that the Fock state concurrence sum C_S behaves as a collective measure that controls the computational complexity of Fock state boson sampling. We expect that our observation on the role of the Fock state concurrence in the generalized algorithm for permanents would provide a unified viewpoint to interpret the quantum computing power of linear optics.
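
    The complexity claim in this abstract rests on the fact that each transition amplitude is a matrix permanent. As a point of reference only (the concurrence-sum measure itself is not implemented here), the sketch below evaluates a permanent with Ryser's inclusion-exclusion formula, whose exponential cost illustrates why classical simulation of boson sampling is hard; the example matrix is arbitrary.

    from itertools import combinations

    def permanent_ryser(A):
        """Permanent of an n x n matrix via Ryser's inclusion-exclusion formula (roughly O(2^n * n^2) here)."""
        n = len(A)
        total = 0.0
        for k in range(1, n + 1):
            for cols in combinations(range(n), k):   # every non-empty column subset
                prod = 1.0
                for row in A:
                    prod *= sum(row[c] for c in cols)
                total += (-1) ** (n - k) * prod
        return total

    A = [[1.0, 2.0],
         [3.0, 4.0]]
    print(permanent_ryser(A))   # 1*4 + 2*3 = 10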

  1. Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing

    PubMed Central

    Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng

    2015-01-01

    Push-based database management system (DBMS) is a new type of data processing software that streams large volume of data to concurrent query operators. The high data rate of such systems requires large computing power provided by the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query processing operations and enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a premise in the design of G-SDMS. With NVIDIA’s CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA stream. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize the main kernel scheduling disciplines in it. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels. PMID:26566545

  2. SOI layout decomposition for double patterning lithography on high-performance computer platforms

    NASA Astrophysics Data System (ADS)

    Verstov, Vladimir; Zinchenko, Lyudmila; Makarchuk, Vladimir

    2014-12-01

    In this paper, silicon-on-insulator layout decomposition algorithms for double patterning lithography on high performance computing platforms are discussed. Our approach is based on the use of a contradiction graph and a modified concurrent breadth-first search algorithm. We evaluate our technique on the 45 nm Nangate Open Cell Library, including non-Manhattan geometry. Experimental results show that our soft computing algorithms decompose the layout successfully and that the minimal distance between polygons in the layout is increased.
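
    The "contradiction graph" plus breadth-first search approach in this abstract amounts, at its core, to 2-coloring a conflict graph so that polygons closer than the same-mask spacing limit land on different masks. The sketch below shows that core idea with a plain serial BFS (the paper's concurrent, SOI-specific variant is not reproduced); the adjacency list is a made-up example.

    from collections import deque

    def decompose_two_masks(conflicts, num_polygons):
        """2-color a conflict graph: conflicts maps a polygon id to the ids it conflicts with."""
        mask = [None] * num_polygons
        for start in range(num_polygons):
            if mask[start] is not None:
                continue
            mask[start] = 0
            queue = deque([start])
            while queue:
                p = queue.popleft()
                for q in conflicts.get(p, ()):
                    if mask[q] is None:
                        mask[q] = 1 - mask[p]    # opposite mask of its neighbor
                        queue.append(q)
                    elif mask[q] == mask[p]:
                        raise ValueError(f"odd conflict cycle at polygons {p} and {q}")
        return mask

    # Four polygons in a conflict chain 0-1-2-3 alternate between the two masks
    print(decompose_two_masks({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}, 4))   # [0, 1, 0, 1]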

  3. Probabilistic simulation of concurrent engineering of propulsion systems

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Singhal, S. N.

    1993-01-01

    Technology readiness and the available infrastructure are assessed for timely computational simulation of concurrent engineering for propulsion systems. Results for initial coupled multidisciplinary, fabrication-process, and system simulators are presented, including uncertainties inherent in various facets of engineering processes. An approach is outlined for computationally formalizing the concurrent engineering process from cradle-to-grave via discipline-dedicated workstations linked with a common database.

  4. Multi-objective optimization of GENIE Earth system models.

    PubMed

    Price, Andrew R; Myerscough, Richard J; Voutchkov, Ivan I; Marsh, Robert; Cox, Simon J

    2009-07-13

    The tuning of parameters in climate models is essential to provide reliable long-term forecasts of Earth system behaviour. We apply a multi-objective optimization algorithm to the problem of parameter estimation in climate models. This optimization process involves the iterative evaluation of response surface models (RSMs), followed by the execution of multiple Earth system simulations. These computations require an infrastructure that provides high-performance computing for building and searching the RSMs and high-throughput computing for the concurrent evaluation of a large number of models. Grid computing technology is therefore essential to make this algorithm practical for members of the GENIE project.

  5. Enabling Large-Scale Biomedical Analysis in the Cloud

    PubMed Central

    Lin, Ying-Chih; Yu, Chin-Sheng; Lin, Yen-Jen

    2013-01-01

    Recent progress in high-throughput instrumentations has led to an astonishing growth in both volume and complexity of biomedical data collected from various sources. The planet-size data brings serious challenges to the storage and computing technologies. Cloud computing is an alternative to crack the nut because it gives concurrent consideration to enable storage and high-performance computing on large-scale data. This work briefly introduces the data intensive computing system and summarizes existing cloud-based resources in bioinformatics. These developments and applications would facilitate biomedical research to make the vast amount of diversification data meaningful and usable. PMID:24288665

  6. Explicit time integration of finite element models on a vectorized, concurrent computer with shared memory

    NASA Technical Reports Server (NTRS)

    Gilbertsen, Noreen D.; Belytschko, Ted

    1990-01-01

    The implementation of a nonlinear explicit program on a vectorized, concurrent computer with shared memory is described and studied. The conflict between vectorization and concurrency is described and some guidelines are given for optimal block sizes. Several example problems are summarized to illustrate the types of speed-ups which can be achieved by reprogramming as compared to compiler optimization.
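
    As a structural illustration of the vectorization-versus-concurrency trade-off this abstract analyzes (not the authors' code or element formulation), the sketch below advances a 1-D spring chain with a simple explicit update while splitting the element force loop into fixed-size blocks: work inside a block is vectorized, and different blocks are the natural units one would hand to separate processors. The model, block size, and time step are all invented for the example.

    import numpy as np

    n_elem, block = 1000, 250               # elements per vectorized block
    k, m, dt, steps = 1.0, 1.0, 0.01, 500   # stiffness, nodal mass, time step, step count
    n_node = n_elem + 1
    u, v = np.zeros(n_node), np.zeros(n_node)
    f_ext = np.zeros(n_node); f_ext[-1] = 1.0          # end load on the last node

    for _ in range(steps):
        f_int = np.zeros(n_node)
        for start in range(0, n_elem, block):          # each block could go to its own processor
            e = np.arange(start, min(start + block, n_elem))
            stretch = u[e + 1] - u[e]                  # vectorized within the block
            np.add.at(f_int, e, -k * stretch)          # element force on its left node
            np.add.at(f_int, e + 1, k * stretch)       # element force on its right node
        v += dt * (f_ext - f_int) / m                  # explicit (symplectic Euler) update
        u += dt * v
        u[0] = v[0] = 0.0                              # clamp the first node
    print(u[-1])                                       # free-end displacement after the run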

  7. Low Proficiency Learners in Synchronous Computer-Assisted and Face-to-Face Interactions

    ERIC Educational Resources Information Center

    Tam, Shu Sim; Kan, Ngat Har; Ng, Lee Luan

    2010-01-01

    This experimental study offers empirical evidence of the effect of the computer-mediated environment on the linguistic output of low proficiency learners. The subjects were 32 female undergraduates with high and low proficiency in ESL. A within-subject repeated measures concurrent nested QUAN-qual (Creswell, 2003) mixed methods approach was used.…

  8. Reliability models for dataflow computer systems

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.; Buckles, B. P.

    1985-01-01

    The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.

  9. Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds

    NASA Astrophysics Data System (ADS)

    Seinstra, Frank J.; Maassen, Jason; van Nieuwpoort, Rob V.; Drost, Niels; van Kessel, Timo; van Werkhoven, Ben; Urbani, Jacopo; Jacobs, Ceriel; Kielmann, Thilo; Bal, Henri E.

    In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the most widely available platforms to scientists are clusters, grids, and cloud systems. Such infrastructures currently are undergoing revolutionary change due to the integration of many-core technologies, providing orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexities arise because the urgent desire for scalability and issues including data distribution, software heterogeneity, and ad hoc hardware availability commonly force scientists into simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently). A true computing jungle.

  10. Rater reliability and concurrent validity of the Keyboard Personal Computer Style instrument (K-PeCS).

    PubMed

    Baker, Nancy A; Cook, James R; Redfern, Mark S

    2009-01-01

    This paper describes the inter-rater and intra-rater reliability, and the concurrent validity of an observational instrument, the Keyboard Personal Computer Style instrument (K-PeCS), which assesses stereotypical postures and movements associated with computer keyboard use. Three trained raters independently rated the video clips of 45 computer keyboard users to ascertain inter-rater reliability, and then re-rated a sub-sample of 15 video clips to ascertain intra-rater reliability. Concurrent validity was assessed by comparing the ratings obtained using the K-PeCS to scores developed from a 3D motion analysis system. The overall K-PeCS had excellent reliability [inter-rater: intra-class correlation coefficients (ICC)=.90; intra-rater: ICC=.92]. Most individual items on the K-PeCS had from good to excellent reliability, although six items fell below ICC=.75. Those K-PeCS items that were assessed for concurrent validity compared favorably to the motion analysis data for all but two items. These results suggest that most items on the K-PeCS can be used to reliably document computer keyboarding style.

  11. Nonrecursive formulations of multibody dynamics and concurrent multiprocessing

    NASA Technical Reports Server (NTRS)

    Kurdila, Andrew J.; Menon, Ramesh

    1993-01-01

    Since the late 1980's, research in recursive formulations of multibody dynamics has flourished. Historically, much of this research can be traced to applications of low dimensionality in mechanism and vehicle dynamics. Indeed, there is little doubt that recursive order N methods are the method of choice for this class of systems. This approach has the advantage that a minimal number of coordinates are utilized, parallelism can be induced for certain system topologies, and the method is of order N computational cost for systems of N rigid bodies. Despite the fact that many authors have dismissed redundant coordinate formulations as being of order N^3, and hence less attractive than recursive formulations, we present recent research that demonstrates that at least three distinct classes of redundant, nonrecursive multibody formulations consistently achieve order N computational cost for systems of rigid and/or flexible bodies. These formulations are as follows: (1) the preconditioned range space formulation; (2) penalty methods; and (3) augmented Lagrangian methods for nonlinear multibody dynamics. The first method can be traced to its foundation in equality constrained quadratic optimization, while the last two methods have been studied extensively in the context of coercive variational boundary value problems in computational mechanics. Until recently, however, they have not been investigated in the context of multibody simulation, and present theoretical questions unique to nonlinear dynamics. All of these nonrecursive methods have additional advantages with respect to recursive order N methods: (1) the formalisms retain the highly desirable order N computational cost; (2) the techniques are amenable to concurrent simulation strategies; (3) the approaches do not depend upon system topology to induce concurrency; and (4) the methods can be derived to balance the computational load automatically on concurrent multiprocessors. In addition to the presentation of the fundamental formulations, this paper presents new theoretical results regarding the rate of convergence of order N constraint stabilization schemes associated with the newly introduced class of methods.
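
    For readers unfamiliar with the terms, the penalty and augmented Lagrangian alternatives mentioned in this abstract can be stated compactly; the forms below are the standard textbook ones (sign conventions and update rules vary), not necessarily the exact formulations in the paper. With redundant coordinates q, constraints Phi(q) = 0, and multipliers lambda, in LaTeX notation:

    M(q)\,\ddot{q} + \Phi_q^{T}(q)\,\lambda = Q(q,\dot{q},t), \qquad \Phi(q) = 0
    \text{penalty:}\qquad \lambda \approx \alpha\,\Phi(q)
    \text{augmented Lagrangian:}\qquad \lambda^{(i+1)} = \lambda^{(i)} + \alpha\,\Phi\bigl(q^{(i+1)}\bigr)

    Both variants replace an exact solve for the multipliers with terms built from the constraint violation, as is standard for these methods.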

  12. Group implicit concurrent algorithms in nonlinear structural dynamics

    NASA Technical Reports Server (NTRS)

    Ortiz, M.; Sotelino, E. D.

    1989-01-01

    During the 70's and 80's, considerable effort was devoted to developing efficient and reliable time stepping procedures for transient structural analysis. Mathematically, the equations governing this type of problem are generally stiff, i.e., they exhibit a wide spectrum in the linear range. The algorithms best suited to this type of application are those which accurately integrate the low frequency content of the response without necessitating the resolution of the high frequency modes. This means that the algorithms must be unconditionally stable, which in turn rules out explicit integration. The most exciting possibility in the algorithms development area in recent years has been the advent of parallel computers with multiprocessing capabilities. So, this work is mainly concerned with the development of parallel algorithms in the area of structural dynamics. A primary objective is to devise unconditionally stable and accurate time stepping procedures which lend themselves to an efficient implementation in concurrent machines. Some features of the new computer architecture are summarized. A brief survey of current efforts in the area is presented. A new class of concurrent procedures, or Group Implicit algorithms, is introduced and analyzed. The numerical simulation shows that GI algorithms hold considerable promise for application in coarse grain as well as medium grain parallel computers.

  13. Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach

    NASA Technical Reports Server (NTRS)

    Mak, Victor W. K.

    1986-01-01

    Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.
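
    To make "series-parallel-reducible task systems" concrete, the toy recursion below computes a deterministic completion time for a task graph built only from series and parallel composition (sum for series, max for parallel). It is meant purely as an illustration of the structure the queueing-network procedure exploits; the richer stochastic performance measures of the paper are not reproduced, and the workload shown is invented.

    def completion_time(node):
        """node is ('task', duration) or ('series'|'parallel', [child nodes])."""
        kind, payload = node
        if kind == 'task':
            return payload
        times = [completion_time(child) for child in payload]
        return sum(times) if kind == 'series' else max(times)

    workload = ('series', [('task', 2.0),
                           ('parallel', [('task', 3.0), ('task', 5.0)]),
                           ('task', 1.0)])
    print(completion_time(workload))   # 2 + max(3, 5) + 1 = 8.0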

  14. Actors: A Model of Concurrent Computation in Distributed Systems.

    DTIC Science & Technology

    1985-06-01

    Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is... AD-A157 917 ACTORS: A MODEL OF CONCURRENT COMPUTATION IN DISTRIBUTED SYSTEMS (U) MASSACHUSETTS INST OF TECH CAMBRIDGE ARTIFICIAL INTELLIGENCE ...Computation in Distributed Systems, Gul A. Agha, MIT Artificial Intelligence Laboratory. This document has been approved for public release and sale; its...

  15. Concurrent measurement of "real-world" stress and arousal in individuals with psychosis: assessing the feasibility and validity of a novel methodology.

    PubMed

    Kimhy, David; Delespaul, Philippe; Ahn, Hongshik; Cai, Shengnan; Shikhman, Marina; Lieberman, Jeffrey A; Malaspina, Dolores; Sloan, Richard P

    2010-11-01

    Psychosis has been repeatedly suggested to be affected by increases in stress and arousal. However, there is a dearth of evidence supporting the temporal link between stress, arousal, and psychosis during "real-world" functioning. This paucity of evidence may stem from limitations of current research methodologies. Our aim is to test the feasibility and validity of a novel methodology designed to measure concurrent stress and arousal in individuals with psychosis during "real-world" daily functioning. Twenty patients with psychosis completed a 36-hour ambulatory assessment of stress and arousal. We used the experience sampling method with palm computers to assess stress (10 times per day, 10 AM → 10 PM) along with concurrent ambulatory measurement of cardiac autonomic regulation using a Holter monitor. The clocks of the palm computer and Holter monitor were synchronized, allowing the temporal linking of the stress and arousal data. We used power spectral analysis to determine the parasympathetic contributions to autonomic regulation and sympathovagal balance during 5 minutes before and after each experience sample. Patients completed 79% of the experience samples (75% with valid concurrent arousal data). Momentary increases in stress had an inverse correlation with concurrent parasympathetic activity (ρ = -.27, P < .0001) and a positive correlation with sympathovagal balance (ρ = .19, P = .0008). Stress and heart rate were not significantly related (ρ = -.05, P = .3875). The findings support the feasibility and validity of our methodology in individuals with psychosis. The methodology offers a novel way to study in high time resolution the concurrent, "real-world" interactions between stress, arousal, and psychosis. The authors discuss the methodology's potential applications and future research directions.

  16. (Re)engineering Earth System Models to Expose Greater Concurrency for Ultrascale Computing: Practice, Experience, and Musings

    NASA Astrophysics Data System (ADS)

    Mills, R. T.

    2014-12-01

    As the high performance computing (HPC) community pushes towards the exascale horizon, the importance and prevalence of fine-grained parallelism in new computer architectures is increasing. This is perhaps most apparent in the proliferation of so-called "accelerators" such as the Intel Xeon Phi or NVIDIA GPGPUs, but the trend also holds for CPUs, where serial performance has grown slowly and effective use of hardware threads and vector units are becoming increasingly important to realizing high performance. This has significant implications for weather, climate, and Earth system modeling codes, many of which display impressive scalability across MPI ranks but take relatively little advantage of threading and vector processing. In addition to increasing parallelism, next generation codes will also need to address increasingly deep hierarchies for data movement: NUMA/cache levels, on node vs. off node, local vs. wide neighborhoods on the interconnect, and even in the I/O system. We will discuss some approaches (grounded in experiences with the Intel Xeon Phi architecture) for restructuring Earth science codes to maximize concurrency across multiple levels (vectors, threads, MPI ranks), and also discuss some novel approaches for minimizing expensive data movement/communication.

  17. The implementation and use of Ada on distributed systems with high reliability requirements

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1986-01-01

    The general inadequacy of Ada for programming systems that must survive processor loss was shown. A solution to the problem was proposed in which there are no syntactic changes to Ada. The approach was evaluated using a full-scale, realistic application. The application used was the Advanced Transport Operating System (ATOPS), an experimental computer control system developed for a modified Boeing 737 aircraft. The ATOPS system is a full authority, real-time avionics system providing a large variety of advanced features. Methods of building fault tolerance into concurrent systems were explored. A set of criteria by which the proposed method will be judged was examined. Extensive interaction with personnel from Computer Sciences Corporation and NASA Langley occurred to determine the requirements of the ATOPS software. Backward error recovery in concurrent systems was assessed.

  18. Size and emotion averaging: costs of dividing attention after all.

    PubMed

    Brand, John; Oriet, Chris; Tottenham, Laurie Sykes

    2012-03-01

    Perceptual averaging is a process by which sets of similar items are represented by summary statistics such as their average size, luminance, or orientation. Researchers have argued that this process is automatic, able to be carried out without interference from concurrent processing. Here, we challenge this conclusion and demonstrate a reliable cost of computing the mean size of circles distinguished by colour (Experiments 1 and 2) and the mean emotionality of faces distinguished by sex (Experiment 3). We also test the viability of two strategies that could have allowed observers to guess the correct response without computing the average size or emotionality of both sets concurrently. We conclude that although two means can be computed concurrently, doing so incurs a cost of dividing attention.

  19. An assessment of future computer system needs for large-scale computation

    NASA Technical Reports Server (NTRS)

    Lykos, P.; White, J.

    1980-01-01

    Data ranging from specific computer capability requirements to opinions about the desirability of a national computer facility are summarized. It is concluded that considerable attention should be given to improving the user-machine interface. Otherwise, increased computer power may not improve the overall effectiveness of the machine user. Significant improvement in throughput requires highly concurrent systems plus the willingness of the user community to develop problem solutions for that kind of architecture. An unanticipated result was the expression of need for an on-going cross-disciplinary users group/forum in order to share experiences and to more effectively communicate needs to the manufacturers.

  20. Concurrent multiscale modeling of microstructural effects on localization behavior in finite deformation solid mechanics

    DOE PAGES

    Alleman, Coleman N.; Foulk, James W.; Mota, Alejandro; ...

    2017-11-06

    The heterogeneity in mechanical fields introduced by microstructure plays a critical role in the localization of deformation. In order to resolve this incipient stage of failure, it is therefore necessary to incorporate microstructure with sufficient resolution. On the other hand, computational limitations make it infeasible to represent the microstructure in the entire domain at the component scale. Here, the authors demonstrate the use of concurrent multiscale modeling to incorporate explicit, finely resolved microstructure in a critical region while resolving the smoother mechanical fields outside this region with a coarser discretization to limit computational cost. The microstructural physics is modeled with a high-fidelity model that incorporates anisotropic crystal elasticity and rate-dependent crystal plasticity to simulate the behavior of a stainless steel alloy. The component-scale material behavior is treated with a lower fidelity model incorporating isotropic linear elasticity and rate-independent J2 plasticity. The microstructural and component scale subdomains are modeled concurrently, with coupling via the Schwarz alternating method, which solves boundary-value problems in each subdomain separately and transfers solution information between subdomains via Dirichlet boundary conditions. In this study, the framework is applied to model incipient localization in tensile specimens during necking.
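
    The Schwarz alternating method named in this record can be illustrated on a much smaller problem. The Python sketch below couples two overlapping 1-D subdomains of a Poisson problem, solving each in turn with a Dirichlet boundary value interpolated from the other's latest solution; it is a toy analogue of the microstructure/component-scale coupling, with the model problem and grids invented for the example.

    import numpy as np

    def solve_dirichlet(x, f, u_left, u_right):
        """Finite-difference solve of -u'' = f on grid x with Dirichlet end values."""
        n, h = len(x), x[1] - x[0]
        A = np.zeros((n, n)); b = f.astype(float).copy()
        A[0, 0] = A[-1, -1] = 1.0
        b[0], b[-1] = u_left, u_right
        for i in range(1, n - 1):
            A[i, i - 1], A[i, i], A[i, i + 1] = -1 / h**2, 2 / h**2, -1 / h**2
        return np.linalg.solve(A, b)

    # -u'' = 1 on (0, 1) with u(0) = u(1) = 0; exact solution u(x) = x(1 - x)/2
    x1 = np.linspace(0.0, 0.6, 61)    # finely resolved ("microstructural") subdomain
    x2 = np.linspace(0.4, 1.0, 31)    # coarser ("component-scale") subdomain, overlap on [0.4, 0.6]
    u1, u2 = np.zeros_like(x1), np.zeros_like(x2)
    for _ in range(20):               # alternate until the interface values settle
        u1 = solve_dirichlet(x1, np.ones_like(x1), 0.0, np.interp(0.6, x2, u2))
        u2 = solve_dirichlet(x2, np.ones_like(x2), np.interp(0.4, x1, u1), 0.0)
    print(np.interp(0.5, x1, u1), 0.5 * (1 - 0.5) / 2)   # both ~0.125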

  1. Concurrent multiscale modeling of microstructural effects on localization behavior in finite deformation solid mechanics

    NASA Astrophysics Data System (ADS)

    Alleman, Coleman N.; Foulk, James W.; Mota, Alejandro; Lim, Hojun; Littlewood, David J.

    2018-02-01

    The heterogeneity in mechanical fields introduced by microstructure plays a critical role in the localization of deformation. To resolve this incipient stage of failure, it is therefore necessary to incorporate microstructure with sufficient resolution. On the other hand, computational limitations make it infeasible to represent the microstructure in the entire domain at the component scale. In this study, the authors demonstrate the use of concurrent multiscale modeling to incorporate explicit, finely resolved microstructure in a critical region while resolving the smoother mechanical fields outside this region with a coarser discretization to limit computational cost. The microstructural physics is modeled with a high-fidelity model that incorporates anisotropic crystal elasticity and rate-dependent crystal plasticity to simulate the behavior of a stainless steel alloy. The component-scale material behavior is treated with a lower fidelity model incorporating isotropic linear elasticity and rate-independent J2 plasticity. The microstructural and component scale subdomains are modeled concurrently, with coupling via the Schwarz alternating method, which solves boundary-value problems in each subdomain separately and transfers solution information between subdomains via Dirichlet boundary conditions. In this study, the framework is applied to model incipient localization in tensile specimens during necking.

  2. Concurrent multiscale modeling of microstructural effects on localization behavior in finite deformation solid mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alleman, Coleman N.; Foulk, James W.; Mota, Alejandro

    The heterogeneity in mechanical fields introduced by microstructure plays a critical role in the localization of deformation. In order to resolve this incipient stage of failure, it is therefore necessary to incorporate microstructure with sufficient resolution. On the other hand, computational limitations make it infeasible to represent the microstructure in the entire domain at the component scale. Here, the authors demonstrate the use of concurrent multiscale modeling to incorporate explicit, finely resolved microstructure in a critical region while resolving the smoother mechanical fields outside this region with a coarser discretization to limit computational cost. The microstructural physics is modeled with a high-fidelity model that incorporates anisotropic crystal elasticity and rate-dependent crystal plasticity to simulate the behavior of a stainless steel alloy. The component-scale material behavior is treated with a lower fidelity model incorporating isotropic linear elasticity and rate-independent J2 plasticity. The microstructural and component scale subdomains are modeled concurrently, with coupling via the Schwarz alternating method, which solves boundary-value problems in each subdomain separately and transfers solution information between subdomains via Dirichlet boundary conditions. In this study, the framework is applied to model incipient localization in tensile specimens during necking.

  3. Heterogeneous Distributed Computing for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Sunderam, Vaidy S.

    1998-01-01

    The research supported under this award focuses on heterogeneous distributed computing for high-performance applications, with particular emphasis on computational aerosciences. The overall goal of this project was to investigate issues in, and develop solutions to, the efficient execution of computational aeroscience codes in heterogeneous concurrent computing environments. In particular, we worked in the context of the PVM[1] system and, subsequent to detailed conversion efforts and performance benchmarking, devised novel techniques to increase the efficacy of heterogeneous networked environments for computational aerosciences. Our work has been based upon the NAS Parallel Benchmark suite, but has also recently expanded in scope to include the NAS I/O benchmarks as specified in the NHT-1 document. In this report we summarize our research accomplishments under the auspices of the grant.

  4. Methods for design and evaluation of integrated hardware-software systems for concurrent computation

    NASA Technical Reports Server (NTRS)

    Pratt, T. W.

    1985-01-01

    Research activities and publications are briefly summarized. The major tasks reviewed are: (1) VAX implementation of the PISCES parallel programming environment; (2) Apollo workstation network implementation of the PISCES environment; (3) FLEX implementation of the PISCES environment; (4) sparse matrix iterative solver in PISCES Fortran; (5) image processing application of PISCES; and (6) a formal model of concurrent computation being developed.

  5. Developing Singing in Third-Grade Music Classrooms: The Effect of a Concurrent-Feedback Computer Game on Pitch-Matching Skills

    ERIC Educational Resources Information Center

    Paney, Andrew S.; Kay, Ann C.

    2015-01-01

    The purpose of this study was to measure the effect of concurrent visual feedback on pitch-matching skill development in third-grade students. Participants played a computer game, "SingingCoach," which scored the accuracy of their singing of the song "America." They followed the contour of the melody on the screen as the…

  6. Kalman approach to accuracy management for interoperable heterogeneous model abstraction within an HLA-compliant simulation

    NASA Astrophysics Data System (ADS)

    Leskiw, Donald M.; Zhau, Junmei

    2000-06-01

    This paper reports on results from an ongoing project to develop methodologies for representing and managing multiple, concurrent levels of detail and enabling high performance computing using parallel arrays within distributed object-based simulation frameworks. At this time we present the methodology for representing and managing multiple, concurrent levels of detail and modeling accuracy by using a representation based on the Kalman approach for estimation. The Kalman System Model equations are used to represent model accuracy, Kalman Measurement Model equations provide transformations between heterogeneous levels of detail, and interoperability among disparate abstractions is provided using a form of the Kalman Update equations.
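
    The abstract refers to the Kalman system model, measurement model, and update equations without stating them; for reference, the standard discrete-time forms it builds on are given below (in LaTeX notation, with conventional symbols rather than the paper's). How the covariances and observation matrices are tied to specific model abstractions is the paper's contribution and is not reproduced here.

    x_{k+1} = F_k x_k + w_k, \quad w_k \sim \mathcal{N}(0, Q_k) \qquad \text{(system model; } Q_k \text{ expresses model accuracy)}
    z_k = H_k x_k + v_k, \quad v_k \sim \mathcal{N}(0, R_k) \qquad \text{(measurement model; } H_k \text{ maps between levels of detail)}
    K_k = P_k^{-} H_k^{T} \left( H_k P_k^{-} H_k^{T} + R_k \right)^{-1}
    \hat{x}_k = \hat{x}_k^{-} + K_k \left( z_k - H_k \hat{x}_k^{-} \right), \qquad P_k = (I - K_k H_k) P_k^{-} \qquad \text{(update)}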

  7. Asymmetry of Reinforcement and Punishment in Human Choice

    ERIC Educational Resources Information Center

    Rasmussen, Erin B.; Newland, M. Christopher

    2008-01-01

    The hypothesis that a penny lost is valued more highly than a penny earned was tested in human choice. Five participants clicked a computer mouse under concurrent variable-interval schedules of monetary reinforcement. In the no-punishment condition, the schedules arranged monetary gain. In the punishment conditions, a schedule of monetary loss was…

  8. Benchmarking high performance computing architectures with CMS’ skeleton framework

    NASA Astrophysics Data System (ADS)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel’s Thread Building Block library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures; machines such as Cori Phase 1&2, Theta, Mira. Because of this, we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  9. AnRAD: A Neuromorphic Anomaly Detection Framework for Massive Concurrent Data Streams.

    PubMed

    Chen, Qiuwen; Luley, Ryan; Wu, Qing; Bishop, Morgan; Linderman, Richard W; Qiu, Qinru

    2018-05-01

    The evolution of high performance computing technologies has enabled the large-scale implementation of neuromorphic models and pushed the research in computational intelligence into a new era. Among the machine learning applications, unsupervised detection of anomalous streams is especially challenging due to the requirements of detection accuracy and real-time performance. Designing a computing framework that harnesses the growing computing power of the multicore systems while maintaining high sensitivity and specificity to the anomalies is an urgent research topic. In this paper, we propose anomaly recognition and detection (AnRAD), a bioinspired detection framework that performs probabilistic inferences. We analyze the feature dependency and develop a self-structuring method that learns an efficient confabulation network using unlabeled data. This network is capable of fast incremental learning, which continuously refines the knowledge base using streaming data. Compared with several existing anomaly detection approaches, our method provides competitive detection quality. Furthermore, we exploit the massive parallel structure of the AnRAD framework. Our implementations of the detection algorithm on the graphic processing unit and the Xeon Phi coprocessor both obtain substantial speedups over the sequential implementation on general-purpose microprocessor. The framework provides real-time service to concurrent data streams within diversified knowledge contexts, and can be applied to large problems with multiple local patterns. Experimental results demonstrate high computing performance and memory efficiency. For vehicle behavior detection, the framework is able to monitor up to 16000 vehicles (data streams) and their interactions in real time with a single commodity coprocessor, and uses less than 0.2 ms for one testing subject. Finally, the detection network is ported to our spiking neural network simulator to show the potential of adapting to the emerging neuromorphic architectures.

  10. Periodic Application of Concurrent Error Detection in Processor Array Architectures. PhD. Thesis -

    NASA Technical Reports Server (NTRS)

    Chen, Paul Peichuan

    1993-01-01

    Processor arrays can provide an attractive architecture for some applications. Featuring modularity, regular interconnection and high parallelism, such arrays are well-suited for VLSI/WSI implementations, and applications with high computational requirements, such as real-time signal processing. Preserving the integrity of results can be of paramount importance for certain applications. In these cases, fault tolerance should be used to ensure reliable delivery of a system's service. One aspect of fault tolerance is the detection of errors caused by faults. Concurrent error detection (CED) techniques offer the advantage that transient and intermittent faults may be detected with greater probability than with off-line diagnostic tests. Applying time-redundant CED techniques can reduce hardware redundancy costs. However, most time-redundant CED techniques degrade a system's performance.

  11. The Transition to a Many-core World

    NASA Astrophysics Data System (ADS)

    Mattson, T. G.

    2012-12-01

    The need to increase performance within a fixed energy budget has pushed the computer industry to many-core processors. This is grounded in the physics of computing and is not a trend that will just go away. It is hard to overestimate the profound impact of many-core processors on software developers. Virtually every facet of the software development process will need to change to adapt to these new processors. In this talk, we will look at many-core hardware and consider its evolution from a perspective grounded in the CPU. We will show that the number of cores will inevitably increase, but in addition, a quest to maximize performance per watt will push these cores to be heterogeneous. We will show that the inevitable result of these changes is a computing landscape where the distinction between the CPU and the GPU is blurred. We will then consider the much more pressing problem of software in a many-core world. Writing software for heterogeneous many-core processors is well beyond the ability of current programmers. One solution is to support a software development process where programmer teams are split into two distinct groups: a large group of domain-expert productivity programmers and a much smaller team of computer-scientist efficiency programmers. The productivity programmers work in terms of high level frameworks to express the concurrency in their problems while avoiding any details for how that concurrency is exploited. The second group, the efficiency programmers, map applications expressed in terms of these frameworks onto the target many-core system. In other words, we can solve the many-core software problem by creating a software infrastructure that only requires a small subset of programmers to become master parallel programmers. This is different from the discredited dream of automatic parallelism. Note that productivity programmers still need to define the architecture of their software in a way that exposes the concurrency inherent in their problem. We submit that domain-expert programmers understand "what is concurrent". The parallel programming problem emerges from the complexity of "how that concurrency is utilized" on real hardware. The research described in this talk was carried out in collaboration with the ParLab at UC Berkeley. We use a design pattern language to define the high level frameworks exposed to domain-expert, productivity programmers. We then use tools from the SEJITS project (Selective Embedded Just-In-Time Specializers) to build the software transformation tool chains that turn these framework-oriented designs into highly efficient code. The final ingredient is a software platform to serve as a target for these tools. One such platform is the OpenCL industry standard for programming heterogeneous systems. We will briefly describe OpenCL and show how it provides a vendor-neutral software target for current and future many-core systems, whether CPU-based, GPU-based, or heterogeneous combinations of the two.

  12. Ada Compiler Validation Summary Report: Certificate Number 890711W1. 10109 Concurrent Computer Corporation C(3) Ada, Version R02-02.00 Concurrent Computer Corporation 3280 MPS

    DTIC Science & Technology

    1989-07-11

    applicable because this implementation does not support temporary files with names. EE2401D is inapplicable because this implementation does not...buffer. No spanned records with ASCII.NUL are output. A line terminator followed by a page terminator may be represented as: ASCII.CR ASCII.FF ASCII.CR if

  13. Method for concurrent execution of primitive operations by dynamically assigning operations based upon computational marked graph and availability of data

    NASA Technical Reports Server (NTRS)

    Mielke, Roland V. (Inventor); Stoughton, John W. (Inventor)

    1990-01-01

    Computationally complex primitive operations of an algorithm are executed concurrently in a plurality of functional units under the control of an assignment manager. The algorithm is preferably defined as a computational marked graph containing data status edges (paths) corresponding to each of the data flow edges. The assignment manager assigns primitive operations to the functional units and monitors completion of the primitive operations to determine data availability using the computational marked graph of the algorithm. All data accessing of the primitive operations is performed by the functional units independently of the assignment manager.
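
    A hypothetical sketch of the idea, with invented names and a toy graph rather than the patented implementation: an assignment manager dispatches a primitive operation as soon as tokens are present on all of its input edges, and the functional units place result tokens on the output edges.

        # Toy dataflow "assignment manager" (assumed structure, illustrative only).
        from collections import deque

        # Each operation: (input edges, output edges, function over input values).
        operations = {
            "A": ([],           ["e1", "e2"], lambda inputs: 3),
            "B": (["e1"],       ["e3"],       lambda inputs: inputs[0] * 2),
            "C": (["e2"],       ["e4"],       lambda inputs: inputs[0] + 5),
            "D": (["e3", "e4"], ["e5"],       lambda inputs: sum(inputs)),
        }

        tokens = {}    # edge name -> data value; presence of the key means "token available"
        done = set()
        ready = deque(op for op, (ins, _, _) in operations.items() if not ins)

        while ready:
            op = ready.popleft()
            ins, outs, func = operations[op]
            result = func([tokens[e] for e in ins])     # a functional unit does the work
            for e in outs:                              # place output tokens
                tokens[e] = result
            done.add(op)
            # Assignment manager: dispatch any operation whose inputs are now available.
            for other, (o_ins, _, _) in operations.items():
                if other not in done and other not in ready and all(e in tokens for e in o_ins):
                    ready.append(other)

        print(tokens["e5"])   # 14: D fired only after both of its input tokens arrived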

  14. Assessment of a new web-based sexual concurrency measurement tool for men who have sex with men.

    PubMed

    Rosenberg, Eli S; Rothenberg, Richard B; Kleinbaum, David G; Stephenson, Rob B; Sullivan, Patrick S

    2014-11-10

    Men who have sex with men (MSM) are the most affected risk group in the United States' human immunodeficiency virus (HIV) epidemic. Sexual concurrency, the overlapping of partnerships in time, accelerates HIV transmission in populations and has been documented at high levels among MSM. However, concurrency is challenging to measure empirically, and variations in the assessment techniques used (primarily the date overlap and direct question approaches) and the outcomes derived from them have led to heterogeneity and questionable validity of estimates among MSM and other populations. The aim was to evaluate a novel Web-based and interactive partnership-timing module designed for measuring concurrency among MSM, and to compare outcomes measured by the partnership-timing module to those of typical approaches in an online study of MSM. In an online study of MSM aged ≥18 years, we assessed concurrency by using the direct question method and by gathering the dates of first and last sex, with enhanced programming logic, for each reported partner in the previous 6 months. From these methods, we computed multiple concurrency cumulative prevalence outcomes: direct question, day resolution / date overlap, and month resolution / date overlap both including and excluding 1-month ties. We additionally computed variants of the UNAIDS point prevalence outcome. The partnership-timing module was also administered. It uses an interactive month-resolution calendar to improve recall and follow-up questions to resolve temporal ambiguities, combining elements of the direct question and date overlap approaches. The agreement between the partnership-timing module and other concurrency outcomes was assessed with percent agreement, kappa statistic (κ), and matched odds ratios at the individual, dyad, and triad levels of analysis. Among 2737 MSM who completed the partnership section of the partnership-timing module, 41.07% (1124/2737) of individuals had concurrent partners in the previous 6 months. The partnership-timing module had the highest degree of agreement with the direct question. Agreement was lower with date overlap outcomes (agreement range 79%-81%, κ range .55-.59) and lowest with the UNAIDS outcome at 5 months before interview (65% agreement, κ=.14, 95% CI .12-.16). All agreements declined after excluding individuals with 1 sex partner (always classified as not engaging in concurrency), although the highest agreement was still observed with the direct question technique (81% agreement, κ=.59, 95% CI .55-.63). Similar patterns in agreement were observed with dyad- and triad-level outcomes. The partnership-timing module showed strong concurrency detection ability and agreement with previous measures. These levels of agreement were greater than others have reported among previous measures. The partnership-timing module may be well suited to quantifying concurrency among MSM at multiple levels of analysis.
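
    For readers unfamiliar with the date-overlap approach referenced above, the following illustrative Python fragment (an assumption, not the study's instrument) classifies a respondent as having concurrent partnerships whenever any two reported partnership intervals overlap.

        # Hedged sketch of the date-overlap classification (illustrative only).
        from datetime import date
        from itertools import combinations

        def has_concurrency(partnerships):
            """partnerships: list of (first_sex_date, last_sex_date) tuples."""
            for (s1, e1), (s2, e2) in combinations(partnerships, 2):
                if s1 <= e2 and s2 <= e1:      # intervals overlap (ties count as overlap)
                    return True
            return False

        reported = [(date(2014, 1, 5), date(2014, 3, 20)),
                    (date(2014, 3, 1), date(2014, 6, 10))]
        print(has_concurrency(reported))        # True: the partnerships overlap in March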

  15. Mapping Trade-Offs in Teachers' Integration of Technology-Supported Inquiry in High School Science Classes

    ERIC Educational Resources Information Center

    Sandoval, William A.; Daniszewski, Kenneth

    2004-01-01

    This paper explores how two teachers concurrently enacting the same technology-based inquiry unit on evolution structured activity and discourse in their classrooms to connect students' computer-based investigations to formal domain theories. Our analyses show that the teachers' interactions with their students during inquiry were quite similar,…

  16. Parallel computing works

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    An account of the Caltech Concurrent Computation Program (C³P), a five year project that focused on answering the question: "Can parallel computers be used to do large-scale scientific computations?" As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  17. Parallel computation using boundary elements in solid mechanics

    NASA Technical Reports Server (NTRS)

    Chien, L. S.; Sun, C. T.

    1990-01-01

    The inherent parallelism of the boundary element method is shown. The boundary element is formulated by assuming linear variation of displacements and tractions within a line element. Moreover, the MACSYMA symbolic program is employed to obtain analytical results for the influence coefficients. Three computational components are parallelized in this method to show the speedup and efficiency in computation. The global coefficient matrix is first formed concurrently. Then, a parallel Gaussian elimination solution scheme is applied to solve the resulting system of equations. Finally, and more importantly, the domain solutions of a given boundary value problem are calculated simultaneously. Linear speedups and high efficiencies are demonstrated for a demonstration problem on the Sequent Symmetry S81 parallel computing system.

  18. BESIII Physical Analysis on Hadoop Platform

    NASA Astrophysics Data System (ADS)

    Huo, Jing; Zang, Dongsong; Lei, Xiaofeng; Li, Qiang; Sun, Gongxing

    2014-06-01

    In the past 20 years, computing clusters have been widely used for High Energy Physics data processing. The jobs running on a traditional cluster with a Data-to-Computing structure have to read large volumes of data via the network to the computing nodes for analysis, making I/O latency a bottleneck of the whole system. The new distributed computing technology based on the MapReduce programming model has many advantages, such as high concurrency, high scalability and high fault tolerance, and it can benefit us in dealing with Big Data. This paper introduces the idea of using the MapReduce model to do BESIII physical analysis, and presents a new data analysis system structure based on the Hadoop platform, which not only greatly improves the efficiency of data analysis but also reduces the cost of system building. Moreover, this paper establishes an event pre-selection system based on the event-level metadata (TAGs) database to optimize the data analysis procedure.
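
    The sketch below is an assumed, self-contained illustration of the MapReduce-style pre-selection described above, not the actual BESIII/Hadoop code: the map step keeps only events whose TAG-level quantities pass a cut, and the reduce step aggregates a per-run count.

        # Minimal MapReduce-style event pre-selection (assumed TAG fields, toy data).
        from collections import defaultdict

        events = [
            {"run": 101, "n_charged": 4, "total_energy": 3.08},
            {"run": 101, "n_charged": 2, "total_energy": 1.10},
            {"run": 102, "n_charged": 4, "total_energy": 3.05},
        ]

        def map_phase(event):
            # Emit (key, value) pairs only for events passing the TAG-level cut.
            if event["n_charged"] >= 4 and event["total_energy"] > 3.0:
                yield (event["run"], 1)

        def reduce_phase(pairs):
            counts = defaultdict(int)
            for run, value in pairs:
                counts[run] += value
            return dict(counts)

        selected = [pair for event in events for pair in map_phase(event)]
        print(reduce_phase(selected))      # {101: 1, 102: 1}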

  19. Benchmarking high performance computing architectures with CMS’ skeleton framework

    DOE PAGES

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-11-23

    Here, in 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel's Threading Building Blocks library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures; machines such as Cori Phase 1&2, Theta, and Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  20. Benchmarking high performance computing architectures with CMS’ skeleton framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    Here, in 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel's Threading Building Blocks library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures; machines such as Cori Phase 1&2, Theta, and Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  1. Achieving High Performance on the i860 Microprocessor

    NASA Technical Reports Server (NTRS)

    Lee, King; Kutler, Paul (Technical Monitor)

    1998-01-01

    The i860 is a high performance microprocessor used in the Intel Touchstone project. This paper proposes a paradigm for programming the i860 that is modelled on the vector instructions of the Cray computers. Fortran callable assembler subroutines were written that mimic the concurrent vector instructions of the Cray. Cache takes the place of vector registers. Using this paradigm we have achieved twice the performance of compiled code on a traditional solve.

  2. Finite elements and the method of conjugate gradients on a concurrent processor

    NASA Technical Reports Server (NTRS)

    Lyzenga, G. A.; Raefsky, A.; Hager, G. H.

    1985-01-01

    An algorithm for the iterative solution of finite element problems on a concurrent processor is presented. The method of conjugate gradients is used to solve the system of matrix equations, which is distributed among the processors of a MIMD computer according to an element-based spatial decomposition. This algorithm is implemented in a two-dimensional elastostatics program on the Caltech Hypercube concurrent processor. The results of tests on up to 32 processors show nearly linear concurrent speedup, with efficiencies over 90 percent for sufficiently large problems.
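
    For reference, a serial NumPy sketch of the conjugate gradient iteration is given below; the element-based distribution of the matrix-vector product across hypercube processors, which is the concurrent part of the cited work, is deliberately omitted.

        # Serial conjugate gradient for a symmetric positive definite system A x = b.
        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
            x = np.zeros_like(b)
            r = b - A @ x
            p = r.copy()
            rs_old = r @ r
            for _ in range(max_iter):
                Ap = A @ p                  # on a concurrent processor, this product is the distributed step
                alpha = rs_old / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD test matrix
        b = np.array([1.0, 2.0])
        print(conjugate_gradient(A, b))           # ~[0.0909, 0.6364]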

  3. A Concurrent Distributed System for Aircraft Tactical Decision Generation

    NASA Technical Reports Server (NTRS)

    McManus, John W.

    1990-01-01

    A research program investigating the use of artificial intelligence (AI) techniques to aid in the development of a Tactical Decision Generator (TDG) for Within Visual Range (WVR) air combat engagements is discussed. The application of AI programming and problem solving methods in the development and implementation of a concurrent version of the Computerized Logic For Air-to-Air Warfare Simulations (CLAWS) program, a second generation TDG, is presented. Concurrent computing environments and programming approaches are discussed and the design and performance of a prototype concurrent TDG system are presented.

  4. Finite elements and the method of conjugate gradients on a concurrent processor

    NASA Technical Reports Server (NTRS)

    Lyzenga, G. A.; Raefsky, A.; Hager, B. H.

    1984-01-01

    An algorithm for the iterative solution of finite element problems on a concurrent processor is presented. The method of conjugate gradients is used to solve the system of matrix equations, which is distributed among the processors of a MIMD computer according to an element-based spatial decomposition. This algorithm is implemented in a two-dimensional elastostatics program on the Caltech Hypercube concurrent processor. The results of tests on up to 32 processors show nearly linear concurrent speedup, with efficiencies over 90% for sufficiently large problems.

  5. The Enhancement of Concurrent Processing through Functional Programming Languages.

    DTIC Science & Technology

    1984-06-01

    functional programming languages allow us to harness the processing power of computers with hundreds or even thousands of...that it might be the best way to make imperative "library" programs into functional ones which are well suited to concurrent processing...statements in their code. We assert that functional programming languages allow us to harness the processing power of computers with hundreds or even

  6. Multiresolution molecular mechanics: Implementation and efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biyikli, Emre; To, Albert C., E-mail: albertto@pitt.edu

    Atomistic/continuum coupling methods combine accurate atomistic methods and efficient continuum methods to simulate the behavior of highly ordered crystalline systems. Coupled methods utilize the advantages of both approaches to simulate systems at a lower computational cost, while retaining the accuracy associated with atomistic methods. Many concurrent atomistic/continuum coupling methods have been proposed in the past; however, their true computational efficiency has not been demonstrated. The present work presents an efficient implementation of a concurrent coupling method called the Multiresolution Molecular Mechanics (MMM) for serial, parallel, and adaptive analysis. First, we present the features of the software implemented along with the associated technologies. The scalability of the software implementation is demonstrated, and the competing effects of multiscale modeling and parallelization are discussed. Then, the algorithms contributing to the efficiency of the software are presented. These include algorithms for eliminating latent ghost atoms from calculations and measurement-based dynamic balancing of parallel workload. The efficiency improvements made by these algorithms are demonstrated by benchmark tests. The efficiency of the software is found to be on par with LAMMPS, a state-of-the-art Molecular Dynamics (MD) simulation code, when performing full atomistic simulations. Speed-up of the MMM method is shown to be directly proportional to the reduction of the number of the atoms visited in force computation. Finally, an adaptive MMM analysis on a nanoindentation problem, containing over a million atoms, is performed, yielding an improvement of 6.3–8.5 times in efficiency, over the full atomistic MD method. For the first time, the efficiency of a concurrent atomistic/continuum coupling method is comprehensively investigated and demonstrated.

  7. Multiresolution molecular mechanics: Implementation and efficiency

    NASA Astrophysics Data System (ADS)

    Biyikli, Emre; To, Albert C.

    2017-01-01

    Atomistic/continuum coupling methods combine accurate atomistic methods and efficient continuum methods to simulate the behavior of highly ordered crystalline systems. Coupled methods utilize the advantages of both approaches to simulate systems at a lower computational cost, while retaining the accuracy associated with atomistic methods. Many concurrent atomistic/continuum coupling methods have been proposed in the past; however, their true computational efficiency has not been demonstrated. The present work presents an efficient implementation of a concurrent coupling method called the Multiresolution Molecular Mechanics (MMM) for serial, parallel, and adaptive analysis. First, we present the features of the software implemented along with the associated technologies. The scalability of the software implementation is demonstrated, and the competing effects of multiscale modeling and parallelization are discussed. Then, the algorithms contributing to the efficiency of the software are presented. These include algorithms for eliminating latent ghost atoms from calculations and measurement-based dynamic balancing of parallel workload. The efficiency improvements made by these algorithms are demonstrated by benchmark tests. The efficiency of the software is found to be on par with LAMMPS, a state-of-the-art Molecular Dynamics (MD) simulation code, when performing full atomistic simulations. Speed-up of the MMM method is shown to be directly proportional to the reduction of the number of the atoms visited in force computation. Finally, an adaptive MMM analysis on a nanoindentation problem, containing over a million atoms, is performed, yielding an improvement of 6.3-8.5 times in efficiency, over the full atomistic MD method. For the first time, the efficiency of a concurrent atomistic/continuum coupling method is comprehensively investigated and demonstrated.

  8. Concurrent and discriminant validity of the Star Excursion Balance Test for military personnel with lateral ankle sprain.

    PubMed

    Bastien, Maude; Moffet, Hélène; Bouyer, Laurent; Perron, Marc; Hébert, Luc J; Leblond, Jean

    2014-02-01

    The Star Excursion Balance Test (SEBT) has frequently been used to measure motor control and residual functional deficits at different stages of recovery from lateral ankle sprain (LAS) in various populations. However, the validity of the measure used to characterize performance--the maximal reach distance (MRD) measured by visual estimation--is still unknown. To evaluate the concurrent validity of the MRD in the SEBT estimated visually vs the MRD measured with a 3D motion-capture system and evaluate and compare the discriminant validity of 2 MRD-normalization methods (by height or by lower-limb length) in participants with or without LAS (n = 10 per group). There is a high concurrent validity and a good degree of accuracy between the visual estimation measurement and the MRD gold-standard measurement for both groups and under all conditions. The Cohen d ratios between groups and MANOVA products were higher when computed from MRD data normalized by height. The results support the concurrent validity of visual estimation of the MRD and the use of the SEBT to evaluate motor control. Moreover, normalization of MRD data by height appears to increase the discriminant validity of this test.

  9. Visualization of unsteady computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Haimes, Robert

    1994-11-01

    A brief summary of the computer environment used for calculating three dimensional unsteady Computational Fluid Dynamic (CFD) results is presented. This environment requires a supercomputer as well as massively parallel processors (MPPs); clusters of workstations acting as a single MPP (by concurrently working on the same task) provide the required computational bandwidth for CFD calculations of transient problems. The cluster of reduced instruction set computers (RISC) is a recent development, based on the low cost and high performance that workstation vendors provide. The cluster, with the proper software, can act as a multiple instruction/multiple data (MIMD) machine. A new set of software tools is being designed specifically to address visualizing 3D unsteady CFD results in these environments. Three user's manuals for the parallel version of Visual3, pV3, revision 1.00 make up the bulk of this report.

  10. Visualization of unsteady computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1994-01-01

    A brief summary of the computer environment used for calculating three dimensional unsteady Computational Fluid Dynamic (CFD) results is presented. This environment requires a supercomputer as well as massively parallel processors (MPPs); clusters of workstations acting as a single MPP (by concurrently working on the same task) provide the required computational bandwidth for CFD calculations of transient problems. The cluster of reduced instruction set computers (RISC) is a recent development, based on the low cost and high performance that workstation vendors provide. The cluster, with the proper software, can act as a multiple instruction/multiple data (MIMD) machine. A new set of software tools is being designed specifically to address visualizing 3D unsteady CFD results in these environments. Three user's manuals for the parallel version of Visual3, pV3, revision 1.00 make up the bulk of this report.

  11. Lower bounds of concurrence for N-qubit systems and the detection of k-nonseparability of multipartite quantum systems

    NASA Astrophysics Data System (ADS)

    Qi, Xianfei; Gao, Ting; Yan, Fengli

    2017-01-01

    Concurrence, as one of the entanglement measures, is a useful tool to characterize quantum entanglement in various quantum systems. However, the computation of the concurrence involves difficult optimizations, and an exact formula has been found only for the case of two qubits. We investigate the concurrence of four-qubit quantum states and derive an analytical lower bound of concurrence using the multiqubit monogamy inequality. It is shown that this lower bound improves the existing bounds. This approach can be generalized to arbitrary qubit systems. We present an exact formula of concurrence for some mixed quantum states. For even-qubit states, we derive an improved lower bound of concurrence using a monogamy equality for qubit systems. At the same time, we show that a multipartite state is k-nonseparable if the multipartite concurrence is larger than a constant related to the value of k, the qudit number and the dimension of the subsystems. Our results can be applied to detect multipartite k-nonseparable states.
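
    As a worked illustration of the one case where an exact formula is known, the following NumPy sketch evaluates the two-qubit (Wootters) concurrence; the multiqubit lower bounds discussed in the abstract are not implemented here.

        # Two-qubit concurrence: C(rho) = max(0, l1 - l2 - l3 - l4), where the l_i are
        # the square roots of the eigenvalues of rho * (sy x sy) * conj(rho) * (sy x sy),
        # sorted in decreasing order.
        import numpy as np

        def concurrence_two_qubit(rho):
            sy = np.array([[0, -1j], [1j, 0]])
            yy = np.kron(sy, sy)
            rho_tilde = yy @ rho.conj() @ yy
            eigvals = np.linalg.eigvals(rho @ rho_tilde)
            lams = np.sort(np.sqrt(np.clip(eigvals.real, 0, None)))[::-1]
            return max(0.0, lams[0] - lams[1] - lams[2] - lams[3])

        # Bell state |Phi+> = (|00> + |11>)/sqrt(2) is maximally entangled: C = 1.
        psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
        rho_bell = np.outer(psi, psi.conj())
        print(round(concurrence_two_qubit(rho_bell), 6))    # 1.0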

  12. A validation study of the Keyboard Personal Computer Style instrument (K-PeCS) for use with children.

    PubMed

    Green, Dido; Meroz, Anat; Margalit, Adi Edit; Ratzon, Navah Z

    2012-11-01

    This study examines a potential instrument for measurement of typing postures of children. This paper describes the inter-rater reliability, test-retest reliability and concurrent validity of the Keyboard Personal Computer Style instrument (K-PeCS), an observational measurement of postures and movements during keyboarding, for use with children. Two trained raters independently rated videos of 24 children (aged 7-10 years). Six children returned one week later to assess test-retest reliability. Concurrent validity was assessed by comparing ratings obtained using the K-PeCS to scores from a 3D motion analysis system. Inter-rater reliability was moderate to high for 12 out of 16 items (Kappa: 0.46 to 1.00; correlation coefficients: 0.77-0.95) and test-retest reliability varied across items (Kappa: 0.25 to 0.67; correlation coefficients: r = 0.20 to r = 0.95). Concurrent validity compared favourably across arm pathlength, wrist extension and ulnar deviation. In light of the limitations of other tools, the K-PeCS offers a fairly affordable, reliable and valid instrument to address the gap in measurement of typing styles of children, despite the shortcomings of some items. However, further research is required to refine the instrument for use in evaluating typing among children. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  13. Developing strong concurrent multiphysics multiscale coupling to understand the impact of microstructural mechanisms on the structural scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foulk, James W.; Alleman, Coleman N.; Mota, Alejandro

    The heterogeneity in mechanical fields introduced by microstructure plays a critical role in the localization of deformation. To resolve this incipient stage of failure, it is therefore necessary to incorporate microstructure with sufficient resolution. On the other hand, computational limitations make it infeasible to represent the microstructure in the entire domain at the component scale. In this study, the authors demonstrate the use of concurrent multiscale modeling to incorporate explicit, finely resolved microstructure in a critical region while resolving the smoother mechanical fields outside this region with a coarser discretization to limit computational cost. The microstructural physics is modeled with a high-fidelity model that incorporates anisotropic crystal elasticity and rate-dependent crystal plasticity to simulate the behavior of a stainless steel alloy. The component-scale material behavior is treated with a lower fidelity model incorporating isotropic linear elasticity and rate-independent J2 plasticity. The microstructural and component scale subdomains are modeled concurrently, with coupling via the Schwarz alternating method, which solves boundary-value problems in each subdomain separately and transfers solution information between subdomains via Dirichlet boundary conditions. Beyond case studies in concurrent multiscale, we explore progress in crystal plasticity through modular designs, solution methodologies, model verification, and extensions to Sierra/SM and manycore applications. Advances in conformal microstructures having both hexahedral and tetrahedral workflows in Sculpt and Cubit are highlighted. A structure-property case study in two-phase metallic composites applies the Materials Knowledge System to local metrics for void evolution. Discussion includes lessons learned, future work, and a summary of funded efforts and proposed work. Finally, an appendix illustrates the need for two-way coupling through a single degree of freedom.

  14. Fault-Tolerant, Real-Time, Multi-Core Computer System

    NASA Technical Reports Server (NTRS)

    Gostelow, Kim P.

    2012-01-01

    A document discusses a fault-tolerant, self-aware, low-power, multi-core computer for space missions with thousands of simple cores, achieving speed through concurrency. The proposed machine decides how to achieve concurrency in real time, rather than depending on programmers. The driving features of the system are simple hardware that is modular in the extreme, with no shared memory, and software with significant runtime reorganizing capability. The document describes a mechanism for moving ongoing computations and data that is based on a functional model of execution. Because there is no shared memory, the processor connects to its neighbors through a high-speed data link. Messages are sent to a neighbor switch, which in turn forwards that message on to its neighbor until reaching the intended destination. Except for the neighbor connections, processors are isolated and independent of each other. The processors on the periphery also connect chip-to-chip, thus building up a large processor net. There is no particular topology to the larger net, as a function at each processor allows it to forward a message in the correct direction. Some chip-to-chip connections are not necessarily nearest neighbors, providing short cuts for some of the longer physical distances. The peripheral processors also provide the connections to sensors, actuators, radios, science instruments, and other devices with which the computer system interacts.

  15. Modeling and optimum time performance for concurrent processing

    NASA Technical Reports Server (NTRS)

    Mielke, Roland R.; Stoughton, John W.; Som, Sukhamoy

    1988-01-01

    The development of a new graph theoretic model for describing the relation between a decomposed algorithm and its execution in a data flow environment is presented. Called ATAMM, the model consists of a set of Petri net marked graphs useful for representing decision-free algorithms having large-grained, computationally complex primitive operations. Performance time measures which determine computing speed and throughput capacity are defined, and the ATAMM model is used to develop lower bounds for these times. A concurrent processing operating strategy for achieving optimum time performance is presented and illustrated by example.

  16. Parallel Algorithms and Patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robey, Robert W.

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
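
    A minimal serial sketch of the two patterns named above, reduction and inclusive prefix scan, is shown below; on a parallel machine both admit logarithmic-depth tree implementations, but that mapping is not shown here.

        # Serial reference versions of the reduction and inclusive prefix scan patterns.
        def reduction(values, op=lambda a, b: a + b):
            acc = values[0]
            for v in values[1:]:
                acc = op(acc, v)
            return acc

        def inclusive_scan(values, op=lambda a, b: a + b):
            out, acc = [], None
            for v in values:
                acc = v if acc is None else op(acc, v)
                out.append(acc)
            return out

        data = [3, 1, 4, 1, 5]
        print(reduction(data))        # 14
        print(inclusive_scan(data))   # [3, 4, 8, 9, 14]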

  17. Parallel scheduling of recursively defined arrays

    NASA Technical Reports Server (NTRS)

    Myers, T. J.; Gokhale, M. B.

    1986-01-01

    A new method of automatic generation of concurrent programs which constructs arrays defined by sets of recursive equations is described. It is assumed that the time of computation of an array element is a linear combination of its indices, and integer programming is used to seek a succession of hyperplanes along which array elements can be computed concurrently. The method can be used to schedule equations involving variable length dependency vectors and mutually recursive arrays. Portions of the work reported here have been implemented in the PS automatic program generation system.
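
    The following toy example, an assumption rather than the paper's formulation, shows the hyperplane idea for the recurrence A[i][j] = A[i-1][j] + A[i][j-1]: all elements on a hyperplane i + j = k are mutually independent, so each hyperplane can be scheduled as one concurrent step.

        # Wavefront (hyperplane) scheduling of a recursively defined 2D array.
        N = 5
        A = [[0] * N for _ in range(N)]
        A[0] = [1] * N                           # boundary values on the first row
        for i in range(N):
            A[i][0] = 1                          # boundary values on the first column

        for k in range(2, 2 * N - 1):            # sweep hyperplanes i + j = k
            wavefront = [(i, k - i) for i in range(1, N) if 1 <= k - i < N]
            # Every element of 'wavefront' could be computed in parallel.
            for i, j in wavefront:
                A[i][j] = A[i - 1][j] + A[i][j - 1]

        print(A[N - 1][N - 1])                    # binomial-style count, 70 for N = 5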

  18. Ada (Tradename) Compiler Validation Summary Report. Concurrent Computer Corporation C3 Ada, Version R00-00.00. Concurrent Computer Corporation Series 3200.

    DTIC Science & Technology

    1986-06-11

    been specified, then the amount specified is returned. Otherwise the current amount allocated is returned. T’STORAGESIZE for task types or objects is...hrs DURATION’LAST 131071.99993896484375 36 hrs F.A Address Clauses Address clauses are implemented for objects. No storage is allocated for objects...it is ignored. at Allocation . An integer in the range 1..2,147,483,647. For CONTIGUOUS files, it specifies the number of 256 byte sectors. For ITAM

  19. The path toward HEP High Performance Computing

    NASA Astrophysics Data System (ADS)

    Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-06-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of the peak. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a "High Performance" implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the ROOT and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in terms of memory consumption compared to the single-threaded version, together with sub-optimal handling of event-processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grain parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit best from the recent technology evolution in computing.

  20. Petri nets as a modeling tool for discrete concurrent tasks of the human operator. [describing sequential and parallel demands on human operators

    NASA Technical Reports Server (NTRS)

    Schumacher, W.; Geiser, G.

    1978-01-01

    The basic concepts of Petri nets are reviewed, as well as their application as the fundamental model of technical systems with concurrent discrete events, such as hardware systems and software models of computers. The use of Petri nets is proposed for modeling the human operator dealing with concurrent discrete tasks. Their properties useful in modeling the human operator are discussed and practical examples are given. By means of an experimental investigation of binary concurrent tasks which are presented in a serial manner, the representation of human behavior by Petri nets is demonstrated.
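
    A toy Petri net interpreter is sketched below with an assumed representation, purely to illustrate the modeling idea: a transition fires when every input place holds a token, and the marking evolves as the operator starts and finishes two concurrent tasks.

        # Minimal Petri net firing rule (assumed encoding, illustrative only).
        marking = {"task_A_pending": 1, "task_B_pending": 1, "operator_free": 1}

        transitions = {
            "start_A":  ({"task_A_pending": 1, "operator_free": 1}, {"A_in_progress": 1}),
            "finish_A": ({"A_in_progress": 1}, {"operator_free": 1, "A_done": 1}),
            "start_B":  ({"task_B_pending": 1, "operator_free": 1}, {"B_in_progress": 1}),
            "finish_B": ({"B_in_progress": 1}, {"operator_free": 1, "B_done": 1}),
        }

        def enabled(name):
            inputs, _ = transitions[name]
            return all(marking.get(place, 0) >= n for place, n in inputs.items())

        def fire(name):
            inputs, outputs = transitions[name]
            for place, n in inputs.items():
                marking[place] -= n
            for place, n in outputs.items():
                marking[place] = marking.get(place, 0) + n

        # One possible serialization of the concurrent task structure.
        for t in ["start_A", "finish_A", "start_B", "finish_B"]:
            if enabled(t):
                fire(t)

        print({p: n for p, n in marking.items() if n > 0})   # operator free, A and B done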

  1. 25 CFR 542.10 - What are the minimum internal control standards for keno?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... keno? (a) Computer applications. For any computer applications utilized, alternate documentation and/or... restricted transaction log or computer storage media concurrently with the generation of the ticket. (3) Keno personnel shall be precluded from having access to the restricted transaction log or computer storage media...

  2. 25 CFR 542.10 - What are the minimum internal control standards for keno?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... keno? (a) Computer applications. For any computer applications utilized, alternate documentation and/or... restricted transaction log or computer storage media concurrently with the generation of the ticket. (3) Keno personnel shall be precluded from having access to the restricted transaction log or computer storage media...

  3. Wireless Computing Architecture III

    DTIC Science & Technology

    2013-09-01

    MIMO Multiple-Input and Multiple-Output; MIMO/CON MIMO with concurrent channel access and estimation; MU-MIMO Multiuser MIMO; OFDM Orthogonal...compressive sensing; a design for concurrent channel estimation in scalable multiuser MIMO networking; and novel networking protocols based on machine...Network, Antenna Arrays, UAV networking, Angle of Arrival, Localization, MIMO, Access Point, Channel State Information, Compressive Sensing

  4. A Counterexample Guided Abstraction Refinement Framework for Verifying Concurrent C Programs

    DTIC Science & Technology

    2005-05-24

    source code are routinely executed. The source code is written in languages ranging from C/C++/Java to ML/OCaml. These languages differ not only in...from the difficulty to model computer programs—due to the complexity of programming languages as compared to hardware description languages—to...intermediate specification language lying between high-level Statechart-like formalisms and transition systems. Actions are encoded as changes in

  5. Analysis and design of algorithm-based fault-tolerant systems

    NASA Technical Reports Server (NTRS)

    Nair, V. S. Sukumaran

    1990-01-01

    An important consideration in the design of high performance multiprocessor systems is to ensure the correctness of the results computed in the presence of transient and intermittent failures. Concurrent error detection and correction have been applied to such systems in order to achieve reliability. Algorithm Based Fault Tolerance (ABFT) was suggested as a cost-effective concurrent error detection scheme. The research was motivated by the complexity involved in the analysis and design of ABFT systems. To that end, a matrix-based model was developed and, based on that, algorithms for both the design and analysis of ABFT systems are formulated. These algorithms are less complex than the existing ones. In order to reduce the complexity further, a hierarchical approach is developed for the analysis of large systems.
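
    To illustrate the general ABFT concept, though not the specific matrix-based model developed in the cited work, the sketch below uses the classic row/column checksum encoding for matrix multiplication and checks the result's checksums as a concurrent error detection step.

        # Classic algorithm-based fault tolerance: checksum-encoded matrix multiplication.
        import numpy as np

        def checksum_multiply(A, B):
            # Augment A with a row of column sums and B with a column of row sums.
            A_c = np.vstack([A, A.sum(axis=0)])
            B_r = np.hstack([B, B.sum(axis=1, keepdims=True)])
            C_full = A_c @ B_r                       # full checksum product
            C = C_full[:-1, :-1]
            # Concurrent error detection: recompute the checksums from C and compare.
            row_ok = np.allclose(C_full[-1, :-1], C.sum(axis=0))
            col_ok = np.allclose(C_full[:-1, -1], C.sum(axis=1))
            return C, row_ok and col_ok

        A = np.arange(9.0).reshape(3, 3)
        B = np.eye(3)
        C, consistent = checksum_multiply(A, B)
        print(consistent)                            # True when no fault corrupted C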

  6. Software Development Technologies for Reactive, Real-Time, and Hybrid Systems: Summary of Research

    NASA Technical Reports Server (NTRS)

    Manna, Zohar

    1998-01-01

    This research is directed towards the implementation of a comprehensive deductive-algorithmic environment (toolkit) for the development and verification of high assurance reactive systems, especially concurrent, real-time, and hybrid systems. For this, we have designed and implemented the STeP (Stanford Temporal Prover) verification system. Reactive systems have an ongoing interaction with their environment, and their computations are infinite sequences of states. A large number of systems can be seen as reactive systems, including hardware, concurrent programs, network protocols, and embedded systems. Temporal logic provides a convenient language for expressing properties of reactive systems. A temporal verification methodology provides procedures for proving that a given system satisfies a given temporal property. The research covered necessary theoretical foundations as well as implementation and application issues.

  7. Towards a general object-oriented software development methodology

    NASA Technical Reports Server (NTRS)

    Seidewitz, ED; Stark, Mike

    1986-01-01

    Object diagrams were used to design a 5000 statement team training exercise and to design the entire dynamics simulator. The object diagrams are also being used to design another 50,000 statement Ada system and a personal computer based system that will be written in Modula-2. The design methodology evolves out of these experiences as well as the limitations of other methods that were studied. Object diagrams, abstraction analysis, and associated principles provide a unified framework which encompasses concepts from Yourdon, Booch, and Cherry. This general object-oriented approach handles high-level system design, possibly with concurrency, through object-oriented decomposition down to a completely functional level. How object-oriented concepts can be used in other phases of the software life-cycle, such as specification and testing, is being studied concurrently.

  8. CT colonography: investigation of the optimum reader paradigm by using computer-aided detection software.

    PubMed

    Taylor, Stuart A; Charman, Susan C; Lefere, Philippe; McFarland, Elizabeth G; Paulson, Erik K; Yee, Judy; Aslam, Rizwan; Barlow, John M; Gupta, Arun; Kim, David H; Miller, Chad M; Halligan, Steve

    2008-02-01

    To prospectively compare the diagnostic performance and time efficiency of both second and concurrent computer-aided detection (CAD) reading paradigms for retrospectively obtained computed tomographic (CT) colonography data sets by using consensus reading (three radiologists) of colonoscopic findings as a reference standard. Ethical permission, HIPAA compliance (for U.S. institutions), and patient consent were obtained from all institutions for use of CT colonography data sets in this study. Ten radiologists each read 25 CT colonography data sets (12 men, 13 women; mean age, 61 years) containing 69 polyps (28 were 1-5 mm, 41 were ≥6 mm) by using workstations integrated with CAD software. Reading was randomized to either "second read" CAD (applied only after initial unassisted assessment) or "concurrent read" CAD (applied at the start of assessment). Data sets were reread 6 weeks later by using the opposing paradigm. Polyp sensitivity and reading times were compared by using multilevel logistic and linear regression, respectively. Receiver operating characteristic (ROC) curves were generated. Compared with the unassisted read, odds of improved polyp (≥6 mm) detection were 1.5 (95% confidence interval [CI]: 1.0, 2.2) and 1.3 (95% CI: 0.9, 1.9) by using CAD as second and concurrent reader, respectively. Detection odds by using CAD concurrently were 0.87 (95% CI: 0.59, 1.3) and 0.76 (95% CI: 0.57, 1.01) those of second read CAD, excluding and including polyps 1-5 mm, respectively. The concurrent read took 2.9 minutes (95% CI: -3.8, -1.9) less than did second read. The mean areas under the ROC curve (95% CI) for the unassisted read, second read CAD, and concurrent read CAD were 0.83 (95% CI: 0.78, 0.87), 0.86 (95% CI: 0.82, 0.90), and 0.88 (95% CI: 0.83, 0.92), respectively. CAD is more time efficient when used concurrently than when used as a second reader, with similar sensitivity for polyps 6 mm or larger. However, use of second read CAD maximizes sensitivity, particularly for smaller lesions. (c) RSNA, 2007.

  9. Parallel processors and nonlinear structural dynamics algorithms and software

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted

    1990-01-01

    Techniques are discussed for the implementation and improvement of vectorization and concurrency in nonlinear explicit structural finite element codes. In explicit integration methods, the computation of the element internal force vector consumes the bulk of the computer time. The program can be efficiently vectorized by subdividing the elements into blocks and executing all computations in vector mode. The structuring of elements into blocks also provides a convenient way to implement concurrency by creating tasks which can be assigned to available processors for evaluation. The techniques were implemented in a 3-D nonlinear program with one-point quadrature shell elements. Concurrency and vectorization were first implemented in a single time step version of the program. Techniques were developed to minimize processor idle time and to select the optimal vector length. A comparison of run times between the program executed in scalar, serial mode and the fully vectorized code executed concurrently using eight processors shows speed-ups of over 25. Conjugate gradient methods for solving nonlinear algebraic equations are also readily adapted to a parallel environment. A new technique for improving convergence properties of conjugate gradients in nonlinear problems is developed in conjunction with other techniques such as diagonal scaling. A significant reduction in the number of iterations required for convergence is shown for a statically loaded rigid bar suspended by three equally spaced springs.

  10. Parallel Scaling Characteristics of Selected NERSC User ProjectCodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skinner, David; Verdier, Francesca; Anand, Harsh

    This report documents parallel scaling characteristics of NERSC user project codes between Fiscal Year 2003 and the first half of Fiscal Year 2004 (Oct 2002-March 2004). The codes analyzed cover 60% of all the CPU hours delivered during that time frame on seaborg, a 6080 CPU IBM SP and the largest parallel computer at NERSC. The scale in terms of concurrency and problem size of the workload is analyzed. Drawing on batch queue logs, performance data and feedback from researchers we detail the motivations, benefits, and challenges of implementing highly parallel scientific codes on current NERSC High Performance Computing systems. An evaluation and outlook of the NERSC workload for Allocation Year 2005 is presented.

  11. Concurrent Image Processing Executive (CIPE). Volume 1: Design overview

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

    1990-01-01

    The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation are described. The target machine for this software is a JPL/Caltech Mark 3fp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3, Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: user interface, host-resident executive, hypercube-resident executive, and application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented. The data management also allows data sharing among application programs. The CIPE software architecture provides a flexible environment for scientific analysis of complex remote sensing image data, such as planetary data and imaging spectrometry, utilizing state-of-the-art concurrent computation capabilities.

  12. Using Histories to Implement Atomic Objects

    NASA Technical Reports Server (NTRS)

    Ng, Pui

    1987-01-01

    In this paper we describe an approach of implementing atomicity. Atomicity requires that computations appear to be all-or-nothing and executed in a serialization order. The approach we describe has three characteristics. First, it utilizes the semantics of an application to improve concurrency. Second, it reduces the complexity of application-dependent synchronization code by analyzing the process of writing it. In fact, the process can be automated with logic programming. Third, our approach hides the protocol used to arrive at a serialization order from the applications. As a result, different protocols can be used without affecting the applications. Our approach uses a history tree abstraction. The history tree captures the ordering relationship among concurrent computations. By determining what types of computations exist in the history tree and their parameters, a computation can determine whether it can proceed.

  13. Adult Literacy Learning and Computer Technology: Features of Effective Computer-Assisted Learning Systems.

    ERIC Educational Resources Information Center

    Fahy, Patrick J.

    Computer-assisted learning (CAL) can be used for adults functioning at any academic or grade level. In adult basic education (ABE), CAL can promote greater learning effectiveness and faster progress, concurrent learning and experience with computer literacy skills, privacy, and motivation. Adults who face barriers (financial, geographic, personal,…

  14. DREAMS and IMAGE: A Model and Computer Implementation for Concurrent, Life-Cycle Design of Complex Systems

    NASA Technical Reports Server (NTRS)

    Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.

    1995-01-01

    Computing architectures are being assembled that extend concurrent engineering practices by providing more efficient execution and collaboration on distributed, heterogeneous computing networks. Built on the successes of initial architectures, requirements for a next-generation design computing infrastructure can be developed. These requirements concentrate on those needed by a designer in decision-making processes from product conception to recycling and can be categorized in two areas: design process and design information management. A designer both designs and executes design processes throughout design time to achieve better product and process capabilities while expending fewer resources. In order to accomplish this, information, or more appropriately design knowledge, needs to be adequately managed during product and process decomposition as well as recomposition. A foundation has been laid that captures these requirements in a design architecture called DREAMS (Developing Robust Engineering Analysis Models and Specifications). In addition, a computing infrastructure, called IMAGE (Intelligent Multidisciplinary Aircraft Generation Environment), is being developed that satisfies design requirements defined in DREAMS and incorporates enabling computational technologies.

  15. Phase 2 study of high-dose proton therapy with concurrent chemotherapy for unresectable stage III nonsmall cell lung cancer.

    PubMed

    Chang, Joe Y; Komaki, Ritsuko; Lu, Charles; Wen, Hong Y; Allen, Pamela K; Tsao, Anne; Gillin, Michael; Mohan, Radhe; Cox, James D

    2011-10-15

    The authors sought to improve the toxicity of conventional concurrent chemoradiation therapy for stage III nonsmall cell lung cancer (NSCLC) by using proton-beam therapy to escalate the radiation dose to the tumor. They report early results of a phase 2 study of high-dose proton therapy and concurrent chemotherapy in terms of toxicity, failure patterns, and survival. Forty-four patients with stage III NSCLC were treated with 74 grays (radiobiologic equivalent) proton therapy with weekly carboplatin (area under the curve, 2 U) and paclitaxel (50 mg/m(2)). Disease was staged with positron emission tomography/computed tomography (CT), and treatments were simulated with 4-dimensional (4D) CT to account for tumor motion. Protons were delivered as passively scattered beams, and treatment simulation was repeated during the treatment process to determine the need for adaptive replanning. Median follow-up time was 19.7 months (range, 6.1-44.4 months), and median overall survival time was 29.4 months. No patient experienced grade 4 or 5 proton-related adverse events. The most common nonhematologic grade 3 toxicities were dermatitis (n = 5), esophagitis (n = 5), and pneumonitis (n = 1). Nine (20.5%) patients experienced local disease recurrence, but only 4 (9.1%) had isolated local failure. Four (9.1%) patients had regional lymph node recurrence, but only 1 (2.3%) had isolated regional recurrence. Nineteen (43.2%) patients developed distant metastasis. The overall survival and progression-free survival rates were 86% and 63% at 1 year. Concurrent high-dose proton therapy and chemotherapy are well tolerated, and the median survival time of 29.4 months is encouraging for unresectable stage III NSCLC. Copyright © 2011 American Cancer Society.

  16. Characterizing output bottlenecks in a supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Bing; Chase, Jeffrey; Dillow, David A

    2012-01-01

    Supercomputer I/O loads are often dominated by writes. HPC (High Performance Computing) file systems are designed to absorb these bursty outputs at high bandwidth through massive parallelism. However, the delivered write bandwidth often falls well below the peak. This paper characterizes the data absorption behavior of a center-wide shared Lustre parallel file system on the Jaguar supercomputer. We use a statistical methodology to address the challenges of accurately measuring a shared machine under production load and to obtain the distribution of bandwidth across samples of compute nodes, storage targets, and time intervals. We observe and quantify limitations from competing traffic, contention on storage servers and I/O routers, concurrency limitations in the client compute node operating systems, and the impact of variance (stragglers) on coupled output such as striping. We then examine the implications of our results for application performance and the design of I/O middleware systems on shared supercomputers.

  17. Concurrent negotiation and coordination for grid resource coallocation.

    PubMed

    Sim, Kwang Mong; Shi, Benyun

    2010-06-01

    Bolstering resource coallocation is essential for realizing the Grid vision, because computationally intensive applications often require multiple computing resources from different administrative domains. Given that resource providers and consumers may have different requirements, successfully obtaining commitments through concurrent negotiations with multiple resource providers to simultaneously access several resources is a very challenging task for consumers. The impetus of this paper is that it is one of the earliest works that consider a concurrent negotiation mechanism for Grid resource coallocation. The concurrent negotiation mechanism is designed for 1) managing (de)commitment of contracts through one-to-many negotiations and 2) coordination of multiple concurrent one-to-many negotiations between a consumer and multiple resource providers. The novel contributions of this paper are devising 1) a utility-oriented coordination (UOC) strategy, 2) three classes of commitment management strategies (CMSs) for concurrent negotiation, and 3) the negotiation protocols of consumers and providers. Implementing these ideas in a testbed, three series of experiments were carried out in a variety of settings to compare the following: 1) the CMSs in this paper with the work of others in a single one-to-many negotiation environment for one resource where decommitment is allowed for both provider and consumer agents; 2) the performance of the three classes of CMSs in different resource market types; and 3) the UOC strategy with the work of others [e.g., the patient coordination strategy (PCS )] for coordinating multiple concurrent negotiations. Empirical results show the following: 1) the UOC strategy achieved higher utility, faster negotiation speed, and higher success rates than PCS for different resource market types; and 2) the CMS in this paper achieved higher final utility than the CMS in other works. Additionally, the properties of the three classes of CMSs in different kinds of resource markets are also verified.

  18. Comparing Computer-Adaptive and Curriculum-Based Measurement Methods of Assessment

    ERIC Educational Resources Information Center

    Shapiro, Edward S.; Gebhardt, Sarah N.

    2012-01-01

    This article reported the concurrent, predictive, and diagnostic accuracy of a computer-adaptive test (CAT) and curriculum-based measurements (CBM; both computation and concepts/application measures) for universal screening in mathematics among students in first through fourth grade. Correlational analyses indicated moderate to strong…

  19. NASA Workshop on Computational Structural Mechanics 1987, part 3

    NASA Technical Reports Server (NTRS)

    Sykes, Nancy P. (Editor)

    1989-01-01

    Computational Structural Mechanics (CSM) topics are explored. Algorithms and software for nonlinear structural dynamics, concurrent algorithms for transient finite element analysis, computational methods and software systems for dynamics and control of large space structures, and the use of multi-grid for structural analysis are discussed.

  20. Implementation of a Three-Semester Concurrent Engineering Design Sequence for Lower-Division Engineering Students

    ERIC Educational Resources Information Center

    Bertozzi, N.; Hebert, C.; Rought, J.; Staniunas, C.

    2007-01-01

    Over the past decade the software products available for solid modeling, dynamic, stress, thermal, and flow analysis, and computer-aided manufacturing (CAM) have become more powerful, affordable, and easier to use. At the same time it has become increasingly important for students to gain concurrent engineering design and systems integration…

  1. How to Quickly Import CAD Geometry into Thermal Desktop

    NASA Technical Reports Server (NTRS)

    Wright, Shonte; Beltran, Emilio

    2002-01-01

    There are several groups at JPL (Jet Propulsion Laboratory) that are committed to concurrent design efforts; two are featured here. The Center for Space Mission Architecture and Design (CSMAD) enables the practical application of advanced process technologies in JPL's mission architecture process. Team I functions as an incubator for projects that are in the Discovery, and even pre-Discovery, proposal stages. JPL's concurrent design environment is to a large extent centered on the CAD (Computer Aided Design) file. During concurrent design sessions, CAD geometry is ported to other, more specialized engineering design packages.

  2. Perspectives on an education in computational biology and medicine.

    PubMed

    Rubinstein, Jill C

    2012-09-01

    The mainstream application of massively parallel, high-throughput assays in biomedical research has created a demand for scientists educated in Computational Biology and Bioinformatics (CBB). In response, formalized graduate programs have rapidly evolved over the past decade. Concurrently, there is increasing need for clinicians trained to oversee the responsible translation of CBB research into clinical tools. Physician-scientists with dedicated CBB training can facilitate such translation, positioning themselves at the intersection between computational biomedical research and medicine. This perspective explores key elements of the educational path to such a position, specifically addressing: 1) evolving perceptions of the role of the computational biologist and the impact on training and career opportunities; 2) challenges in and strategies for obtaining the core skill set required of a biomedical researcher in a computational world; and 3) how the combination of CBB with medical training provides a logical foundation for a career in academic medicine and/or biomedical research.

  3. CHARACTERISTICS OF MULTIPLE AND CONCURRENT PARTNERSHIPS AMONG WOMEN AT HIGH RISK FOR HIV INFECTION

    PubMed Central

    Adimora, Adaora A.; Hughes, James P.; Wang, Jing; Haley, Danielle F.; Golin, Carol E.; Magnus, Manya; Rompalo, Anne; Justman, Jessica; del Rio, Carlos; El-Sadr, Wafaa; Mannheimer, Sharon; Soto-Torres, Lydia; Hodder, Sally L.

    2014-01-01

    Objectives We examined parameters of sexual partnerships, including respondents’ participation in concurrency, belief that their partner had concurrent partnerships (partners’ concurrency), and partnership intervals, among the 2,099 women in HIV Prevention Trials Network 064, a study of women at high risk for HIV infection, in ten US communities. Methods We analyzed baseline survey responses about partnership dates to determine prevalence of participants’ and partners’ concurrency, intervals between partnerships, knowledge of whether recent partner(s) had undergone HIV testing, and intercourse frequency during the preceding 6 months. Results Prevalence of participants’ and partners’ concurrency was 40% and 36% respectively; 24% of respondents had both concurrent partnerships and non-monogamous partners. Among women with >1 partner and no concurrent partnerships themselves, the median gap between partners was one month. Multiple episodes of unprotected vaginal intercourse with >2 of their most recent partners was reported by 60% of women who had both concurrent partnerships and non-monogamous partners, 50% with only concurrent partners and no partners’ concurrency, and 33% with only partners’ concurrency versus 14% of women with neither type of concurrency (p<.0001). Women who had any involvement with concurrency were also more likely than women with no concurrency involvement to report lack of awareness of whether recent partners had undergone HIV testing (participants’ concurrency 41%, partners’ concurrency 40%, both participants’ and partners’ concurrency 48%, neither 17%; p<.0001). Conclusions These network patterns and short gaps between partnerships may create substantial opportunities for HIV transmission in this sample of women at high risk for HIV infection. PMID:24056163

  4. Acquisition of gamma camera and physiological data by computer.

    PubMed

    Hack, S N; Chang, M; Line, B R; Cooper, J A; Robeson, G H

    1986-11-01

    We have designed, implemented, and tested a new Research Data Acquisition System (RDAS) that permits a general purpose digital computer to acquire signals from both gamma camera sources and physiological signal sources concurrently. This system overcomes the limited multi-source, high speed data acquisition capabilities found in most clinically oriented nuclear medicine computers. The RDAS can simultaneously input signals from up to four gamma camera sources with a throughput of 200 kHz per source and from up to eight physiological signal sources with an aggregate throughput of 50 kHz. Rigorous testing has found the RDAS to exhibit acceptable linearity and timing characteristics. In addition, flood images obtained by this system were compared with flood images acquired by a commercial nuclear medicine computer system. National Electrical Manufacturers Association performance standards of the flood images were found to be comparable.

  5. Overview of Computer Simulation Modeling Approaches and Methods

    Treesearch

    Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett

    2005-01-01

    The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...

  6. Document Concurrence System

    NASA Technical Reports Server (NTRS)

    Muhsin, Mansour; Walters, Ian

    2004-01-01

    The Document Concurrence System is a combination of software modules for routing users' expressions of concurrence with documents. This system enables determination of the current status of concurrences and eliminates the need for the prior practice of manually delivering paper documents to all persons whose approvals were required. This system runs on a server, and participants gain access via personal computers equipped with Web-browser and electronic-mail software. A user can begin a concurrence routing process by logging onto an administration module, naming the approvers and stating the sequence for routing among them, and attaching documents. The server then sends a message to the first person on the list. Upon concurrence by the first person, the system sends a message to the second person, and so forth. A person on the list indicates approval, places the documents on hold, or indicates disapproval, via a Web-based module. When the last person on the list has concurred, a message is sent to the initiator, who can then finalize the process through the administration module. A background process running on the server identifies concurrence processes that are overdue and sends reminders to the appropriate persons.
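
    A minimal sketch, with invented names, of the sequential routing logic described above: the initiator defines an ordered approver list, each approver concurs, holds, or disapproves, and the initiator is notified once the last approver has concurred.

      from dataclasses import dataclass, field

      @dataclass
      class ConcurrenceRoute:
          initiator: str
          approvers: list                      # ordered routing list
          documents: list
          decisions: dict = field(default_factory=dict)
          index: int = 0                       # position of the current approver

          def current_approver(self):
              return self.approvers[self.index] if self.index < len(self.approvers) else None

          def record(self, approver, decision):
              """decision is 'concur', 'hold', or 'disapprove' (assumed vocabulary)."""
              assert approver == self.current_approver(), "out-of-sequence response"
              self.decisions[approver] = decision
              if decision != "concur":
                  print(f"notify {self.initiator}: {approver} responded '{decision}'")
                  return
              self.index += 1
              if self.current_approver() is None:
                  print(f"notify {self.initiator}: all approvers concurred")
              else:
                  print(f"notify {self.current_approver()}: documents awaiting review")

      route = ConcurrenceRoute("initiator@example.org", ["alice", "bob"], ["design.pdf"])
      route.record("alice", "concur")
      route.record("bob", "concur")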

  7. Toward ubiquitous healthcare services with a novel efficient cloud platform.

    PubMed

    He, Chenguang; Fan, Xiaomao; Li, Ye

    2013-01-01

    Ubiquitous healthcare services are becoming more and more popular, especially under the urgent demand created by global aging. Cloud computing offers pervasive, on-demand, service-oriented capabilities that fit the characteristics of healthcare services very well. However, dealing with multimodal, heterogeneous, and nonstationary physiological signals to provide persistent personalized services, while maintaining highly concurrent online analysis for the public, is a challenge for a general-purpose cloud. In this paper, we propose a private cloud platform architecture comprising six layers according to these specific requirements. The platform utilizes a message queue as its cloud engine, so that each layer achieves relative independence through this loosely coupled means of communication with a publish/subscribe mechanism. Furthermore, a plug-in algorithm framework is presented, and massive semistructured or unstructured medical data are accessed adaptively by this cloud architecture. The testing results show that the proposed cloud platform is robust, stable, and efficient, and can satisfy highly concurrent requests from ubiquitous healthcare services.
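
    A minimal sketch, under invented topic names and with an in-process queue standing in for the platform's message-queue engine, of the loosely coupled publish/subscribe coupling between layers that the abstract describes.

      import collections
      import queue
      import threading

      class MessageBus:
          """Tiny publish/subscribe broker: every subscriber gets its own queue."""
          def __init__(self):
              self.subscribers = collections.defaultdict(list)

          def subscribe(self, topic):
              q = queue.Queue()
              self.subscribers[topic].append(q)
              return q

          def publish(self, topic, message):
              for q in self.subscribers[topic]:
                  q.put(message)

      bus = MessageBus()
      ecg_queue = bus.subscribe("ecg.signal")       # an analysis layer subscribes

      def analysis_worker():
          sample = ecg_queue.get()                  # blocks until a signal arrives
          print("analysing", sample)

      threading.Thread(target=analysis_worker).start()
      bus.publish("ecg.signal", {"patient": 42, "samples": [0.1, 0.4, 0.2]})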

  8. Linking consistency with object/thread semantics - An approach to robust computation

    NASA Technical Reports Server (NTRS)

    Chen, Raymond C.; Dasgupta, Partha

    1989-01-01

    This paper presents an object/thread-based paradigm that links data consistency with object/thread semantics. The paradigm can be used to achieve a wide range of consistency semantics, from strict atomic transactions to standard process semantics. The paradigm supports three types of data consistency. Object programmers indicate the type of consistency desired on a per-operation basis, and the system performs automatic concurrency control and recovery management to ensure that those consistency requirements are met. This allows programmers to customize consistency and recovery on a per-application basis without having to supply complicated, custom recovery management schemes. The paradigm allows robust and nonrobust computation to operate concurrently on the same data in a well-defined manner. The operating system needs to support only one vehicle of computation: the thread.

  9. Distractions, distractions: does instant messaging affect college students' performance on a concurrent reading comprehension task?

    PubMed

    Fox, Annie Beth; Rosen, Jonathan; Crawford, Mary

    2009-02-01

    Instant messaging (IM) has become one of the most popular forms of computer-mediated communication (CMC) and is especially prevalent on college campuses. Previous research suggests that IM users often multitask while conversing online. To date, no one has yet examined the cognitive effect of concurrent IM use. Participants in the present study (N = 69) completed a reading comprehension task uninterrupted or while concurrently holding an IM conversation. Participants who IMed while performing the reading task took significantly longer to complete the task, indicating that concurrent IM use negatively affects efficiency. Concurrent IM use did not affect reading comprehension scores. Additional analyses revealed that the more time participants reported spending on IM, the lower their reading comprehension scores. Finally, we found that the more time participants reported spending on IM, the lower their self-reported GPA. Implications and future directions are discussed.

  10. IMAGE: A Design Integration Framework Applied to the High Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Hale, Mark A.; Craig, James I.

    1993-01-01

    Effective design of the High Speed Civil Transport requires the systematic application of design resources throughout a product's life-cycle. Information obtained from the use of these resources is used for the decision-making processes of Concurrent Engineering. Integrated computing environments facilitate the acquisition, organization, and use of required information. State-of-the-art computing technologies provide the basis for the Intelligent Multi-disciplinary Aircraft Generation Environment (IMAGE) described in this paper. IMAGE builds upon existing agent technologies by adding a new component called a model. With the addition of a model, the agent can provide accountable resource utilization in the presence of increasing design fidelity. The development of a zeroth-order agent is used to illustrate agent fundamentals. Using a CATIA(TM)-based agent from previous work, a High Speed Civil Transport visualization system linking CATIA, FLOPS, and ASTROS will be shown. These examples illustrate the important role of the agent technologies used to implement IMAGE, and together they demonstrate that IMAGE can provide an integrated computing environment for the design of the High Speed Civil Transport.

  11. Numerical Computation of Flame Spread over a Thin Solid in Forced Concurrent Flow with Gas-phase Radiation

    NASA Technical Reports Server (NTRS)

    Jiang, Ching-Biau; T'ien, James S.

    1994-01-01

    Excerpts from a paper describing the numerical examination of concurrent-flow flame spread over a thin solid in purely forced flow with gas-phase radiation are presented. The computational model solves the two-dimensional, elliptic, steady, and laminar conservation equations for mass, momentum, energy, and chemical species. Gas-phase combustion is modeled via a one-step, second order finite rate Arrhenius reaction. Gas-phase radiation considering gray non-scattering medium is solved by a S-N discrete ordinates method. A simplified solid phase treatment assumes a zeroth order pyrolysis relation and includes radiative interaction between the surface and the gas phase.
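
    For orientation, a one-step, second-order finite-rate Arrhenius model of the kind mentioned is commonly written in the generic form below; the notation is assumed here (A pre-exponential factor, E activation energy, Y_F and Y_O fuel and oxidizer mass fractions) and is not taken from the paper.

      % Generic one-step, second-order finite-rate Arrhenius reaction rate
      \[
        \dot{\omega} \;=\; A\,\rho^{2}\, Y_F\, Y_O \,\exp\!\left(-\frac{E}{R T}\right)
      \]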

  12. Modeling of dialogue regimes of distance robot control

    NASA Astrophysics Data System (ADS)

    Larkin, E. V.; Privalov, A. N.

    2017-02-01

    The process of distance control of mobile robots is investigated. A Petri-Markov net for modeling the dialogue regime is worked out. It is shown that the sequence of operations of the following subjects (a human operator, a dialogue computer, and an onboard computer) may be simulated using the theory of semi-Markov processes. From the general-form semi-Markov process, a Markov process was obtained that includes only the states of transaction generation. It is shown that a real transaction flow is the result of «concurrency» among the states of the Markov process. An iterative procedure for evaluating transaction flow parameters, which takes the effect of «concurrency» into account, is proposed.

  13. Concurrent processing simulation of the space station

    NASA Technical Reports Server (NTRS)

    Gluck, R.; Hale, A. L.; Sunkel, John W.

    1989-01-01

    The development of a new capability for the time-domain simulation of multibody dynamic systems and its application to the study of large-angle rotational maneuvers of the Space Station is described. The effort was divided into three sequential tasks, which required significant advancements of the state of the art to accomplish. These were: (1) the development of an explicit mathematical model via symbol manipulation of a flexible, multibody dynamic system; (2) the development of a methodology for balancing the computational load of an explicit mathematical model for concurrent processing; and (3) the implementation and successful simulation of the above on a prototype Custom Architectured Parallel Processing System (CAPPS) containing eight processors. The throughput rate achieved by the CAPPS, operating at only 70 percent efficiency, was 3.9 times greater than that obtained sequentially by the IBM 3090 supercomputer simulating the same problem. More significantly, analysis of the results leads to the conclusion that the relative cost effectiveness of concurrent vs. sequential digital computation will grow substantially as the computational load is increased. This is a welcome development in an era when very complex and cumbersome mathematical models of large space vehicles must be used as substitutes for full-scale testing, which has become impractical.

  14. The Educational Value of Microcomputers: Perceptions among Parents of Young Gifted Children.

    ERIC Educational Resources Information Center

    Johnson, Lawrence J.; Lewman, Beverly S.

    1986-01-01

    Parents of 62 children enrolled in a private school for young gifted students completed a questionnaire designed to assess home use of computers, as well as parental concerns and expectations for appropriate concurrent and future computer use in educational settings. Familiarity with computers increased perceptions of their beneficial educational…

  15. Computational complexities and storage requirements of some Riccati equation solvers

    NASA Technical Reports Server (NTRS)

    Utku, Senol; Garba, John A.; Ramesh, A. V.

    1989-01-01

    The linear optimal control problem of an nth-order time-invariant dynamic system with a quadratic performance functional is usually solved by the Hamilton-Jacobi approach. This leads to the solution of the differential matrix Riccati equation with a terminal condition. The bulk of the computation for the optimal control problem is related to the solution of this equation. There are various algorithms in the literature for solving the matrix Riccati equation. However, computational complexities and storage requirements as a function of numbers of state variables, control variables, and sensors are not available for all these algorithms. In this work, the computational complexities and storage requirements for some of these algorithms are given. These expressions show the immensity of the computational requirements of the algorithms in solving the Riccati equation for large-order systems such as the control of highly flexible space structures. The expressions are also needed to compute the speedup and efficiency of any implementation of these algorithms on concurrent machines.
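
    For reference, the differential matrix Riccati equation that dominates this computation takes the standard linear-quadratic form below; the notation is assumed rather than copied from the paper (P(t) is the n x n solution matrix, Q and R weight states and controls, and S is the terminal weighting).

      % Standard LQR form of the differential matrix Riccati equation with terminal condition
      \[
        -\dot{P}(t) \;=\; A^{\mathsf{T}} P(t) + P(t) A
                      \;-\; P(t)\, B R^{-1} B^{\mathsf{T}} P(t) \;+\; Q,
        \qquad P(t_f) = S .
      \]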

  16. The force on the flex: Global parallelism and portability

    NASA Technical Reports Server (NTRS)

    Jordan, H. F.

    1986-01-01

    A parallel programming methodology, called the force, supports the construction of programs to be executed in parallel by an unspecified, but potentially large, number of processes. The methodology was originally developed on a pipelined, shared memory multiprocessor, the Denelcor HEP, and embodies the primitive operations of the force in a set of macros which expand into multiprocessor Fortran code. A small set of primitives is sufficient to write large parallel programs, and the system has been used to produce 10,000 line programs in computational fluid dynamics. The level of complexity of the force primitives is intermediate. It is high enough to mask detailed architectural differences between multiprocessors but low enough to give the user control over performance. The system is being ported to a medium scale multiprocessor, the Flex/32, which is a 20 processor system with a mixture of shared and local memory. Memory organization and the type of processor synchronization supported by the hardware on the two machines lead to some differences in efficient implementations of the force primitives, but the user interface remains the same. An initial implementation was done by retargeting the macros to Flexible Computer Corporation's ConCurrent C language. Subsequently, the macros were modified to directly produce the system calls which form the basis for ConCurrent C. The implementation of the Fortran based system is in step with Flexible Computer Corporation's implementation of a Fortran system in the parallel environment.

  17. The Concurrent Implementation of Radio Frequency Identification and Unique Item Identification at Naval Surface Warfare Center, Crane, IN as a Model for a Navy Supply Chain Application

    DTIC Science & Technology

    2007-12-01

    Only fragments of this record's abstract survive: it cites electromagnetic theory related to RFID in the works "Field measurements using active scatterers" and "Theory of loaded scatterers"; the remainder of the excerpt is an acronym glossary (BCA: Business Case Analysis; BRE: Bangor Radio Frequency Evaluation; C4ISR: Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance; EEDSKs: Early Entry Deployment Support Kits; EHF: Extremely High Frequency; EUCOM: European Command; FCC: Federal Communications Commission).

  18. Validation of the Concurrent Atomistic-Continuum Method on Screw Dislocation/Stacking Fault Interactions

    DOE PAGES

    Xu, Shuozhi; Xiong, Liming; Chen, Youping; ...

    2017-04-26

    Dislocation/stacking fault interactions play an important role in the plastic deformation of metallic nanocrystals and polycrystals. These interactions have been explored in atomistic models, which are limited in scale length by high computational cost. In contrast, multiscale material modeling approaches have the potential to simulate the same systems at a fraction of the computational cost. In this paper, we validate the concurrent atomistic-continuum (CAC) method on the interactions between a lattice screw dislocation and a stacking fault (SF) in three face-centered cubic metallic materials—Ni, Al, and Ag. Two types of SFs are considered: intrinsic SF (ISF) and extrinsic SF (ESF). For the three materials at different strain levels, two screw dislocation/ISF interaction modes (annihilation of the ISF and transmission of the dislocation across the ISF) and three screw dislocation/ESF interaction modes (transformation of the ESF into a three-layer twin, transformation of the ESF into an ISF, and transmission of the dislocation across the ESF) are identified. Here, our results show that CAC is capable of accurately predicting the dislocation/SF interaction modes with greatly reduced DOFs compared to fully-resolved atomistic simulations.

  19. Validation of the Concurrent Atomistic-Continuum Method on Screw Dislocation/Stacking Fault Interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Shuozhi; Xiong, Liming; Chen, Youping

    Dislocation/stacking fault interactions play an important role in the plastic deformation of metallic nanocrystals and polycrystals. These interactions have been explored in atomistic models, which are limited in scale length by high computational cost. In contrast, multiscale material modeling approaches have the potential to simulate the same systems at a fraction of the computational cost. In this paper, we validate the concurrent atomistic-continuum (CAC) method on the interactions between a lattice screw dislocation and a stacking fault (SF) in three face-centered cubic metallic materials—Ni, Al, and Ag. Two types of SFs are considered: intrinsic SF (ISF) and extrinsic SF (ESF). For the three materials at different strain levels, two screw dislocation/ISF interaction modes (annihilation of the ISF and transmission of the dislocation across the ISF) and three screw dislocation/ESF interaction modes (transformation of the ESF into a three-layer twin, transformation of the ESF into an ISF, and transmission of the dislocation across the ESF) are identified. Here, our results show that CAC is capable of accurately predicting the dislocation/SF interaction modes with greatly reduced DOFs compared to fully-resolved atomistic simulations.

  20. Early MIMD experience on the CRAY X-MP

    NASA Astrophysics Data System (ADS)

    Rhoades, Clifford E.; Stevens, K. G.

    1985-07-01

    This paper describes some early experience with converting four physics simulation programs to the CRAY X-MP, a current Multiple Instruction, Multiple Data (MIMD) computer consisting of two processors each with an architecture similar to that of the CRAY-1. As a multi-processor, the CRAY X-MP together with the high speed Solid-state Storage Device (SSD) is an ideal machine upon which to study MIMD algorithms for solving the equations of mathematical physics because it is fast enough to run real problems. The computer programs used in this study are all FORTRAN versions of original production codes. They range in sophistication from a one-dimensional numerical simulation of collisionless plasma to a two-dimensional hydrodynamics code with heat flow to a couple of three-dimensional fluid dynamics codes with varying degrees of viscous modeling. Early research with a dual processor configuration has shown speed-ups ranging from 1.55 to 1.98. It has been observed that a few simple extensions to FORTRAN allow a typical programmer to achieve a remarkable level of efficiency. These extensions involve the concept of memory local to a concurrent subprogram and memory common to all concurrent subprograms.

  1. Concurrent Probabilistic Simulation of High Temperature Composite Structural Response

    NASA Technical Reports Server (NTRS)

    Abdi, Frank

    1996-01-01

    A computational structural/material analysis and design tool which would meet industry's future demand for expedience and reduced cost is presented. This unique software 'GENOA' is dedicated to parallel and high speed analysis to perform probabilistic evaluation of high temperature composite response of aerospace systems. The development is based on detailed integration and modification of diverse fields of specialized analysis techniques and mathematical models to combine their latest innovative capabilities into a commercially viable software package. The technique is specifically designed to exploit the availability of processors to perform computationally intense probabilistic analysis assessing uncertainties in structural reliability analysis and composite micromechanics. The primary objectives which were achieved in performing the development were: (1) Utilization of the power of parallel processing and static/dynamic load balancing optimization to make the complex simulation of structure, material and processing of high temperature composites affordable; (2) Computational integration and synchronization of probabilistic mathematics, structural/material mechanics and parallel computing; (3) Implementation of an innovative multi-level domain decomposition technique to identify the inherent parallelism, and increasing convergence rates through high- and low-level processor assignment; (4) Creating the framework for a portable parallel architecture for machine-independent Multiple Instruction Multiple Data (MIMD), Single Instruction Multiple Data (SIMD), hybrid and distributed workstation types of computers; and (5) Market evaluation. The results of the Phase-2 effort provide a good basis for continuation and warrant a Phase-3 government and industry partnership.

  2. Formal Semanol Specification of Ada.

    DTIC Science & Technology

    1980-09-01

    concurrent task modeling involved very little change to the SEMANOL metalanguage. A primitive capable of initiating concurrent SEMANOL task processors... (i.e., #CO-COMPUTE) and two primitives corresponding to integer semaphores (i.e., #P and #V) were all that were required. In addition, these changes... synchronization techniques and choice of correct unblocking alternatives. We should note that it had been our original intention to use the Ada Translator program

  3. The science of visual analysis at extreme scale

    NASA Astrophysics Data System (ADS)

    Nowell, Lucy T.

    2011-01-01

    Driven by market forces and spanning the full spectrum of computational devices, computer architectures are changing in ways that present tremendous opportunities and challenges for data analysis and visual analytic technologies. Leadership-class high performance computing systems will have as many as a million cores by 2020 and support 10 billion-way concurrency, while laptop computers are expected to have as many as 1,000 cores by 2015. At the same time, data of all types are increasing exponentially and automated analytic methods are essential for all disciplines. Many existing analytic technologies do not scale to make full use of current platforms and fewer still are likely to scale to the systems that will be operational by the end of this decade. Furthermore, on the new architectures and for data at extreme scales, validating the accuracy and effectiveness of analytic methods, including visual analysis, will be increasingly important.

  4. Expected Reachability-Time Games

    NASA Astrophysics Data System (ADS)

    Forejt, Vojtěch; Kwiatkowska, Marta; Norman, Gethin; Trivedi, Ashutosh

    In an expected reachability-time game (ERTG) two players, Min and Max, move a token along the transitions of a probabilistic timed automaton, so as to minimise and maximise, respectively, the expected time to reach a target. These games are concurrent since at each step of the game both players choose a timed move (a time delay and action under their control), and the transition of the game is determined by the timed move of the player who proposes the shorter delay. A game is turn-based if at any step of the game, all available actions are under the control of precisely one player. We show that while concurrent ERTGs are not always determined, turn-based ERTGs are positionally determined. Using the boundary region graph abstraction, and a generalisation of Asarin and Maler's simple function, we show that the decision problems related to computing the upper/lower values of concurrent ERTGs, and computing the value of turn-based ERTGs are decidable and their complexity is in NEXPTIME ∩ co-NEXPTIME.

  5. Effects of dialysate flow configurations in continuous renal replacement therapy on solute removal: computational modeling.

    PubMed

    Kim, Jeong Chul; Cruz, Dinna; Garzotto, Francesco; Kaushik, Manish; Teixeria, Catarina; Baldwin, Marie; Baldwin, Ian; Nalesso, Federico; Kim, Ji Hyun; Kang, Eungtaek; Kim, Hee Chan; Ronco, Claudio

    2013-01-01

    Continuous renal replacement therapy (CRRT) is commonly used for critically ill patients with acute kidney injury. During treatment, a slow dialysate flow rate can be applied to enhance diffusive solute removal. However, due to the lack of the rationale of the dialysate flow configuration (countercurrent or concurrent to blood flow), in clinical practice, the connection settings of a hemodiafilter are done depending on nurse preference or at random. In this study, we investigated the effects of flow configurations in a hemodiafilter during continuous venovenous hemodialysis on solute removal and fluid transport using computational fluid dynamic modeling. We solved the momentum equation coupling solute transport to predict quantitative diffusion and convection phenomena in a simplified hemodiafilter model. Computational modeling results showed superior solute removal (clearance of urea: 67.8 vs. 45.1 ml/min) and convection (filtration volume: 29.0 vs. 25.7 ml/min) performances for the countercurrent flow configuration. Countercurrent flow configuration enhances convection and diffusion compared to concurrent flow configuration by increasing filtration volume and equilibrium concentration in the proximal part of a hemodiafilter and backfiltration of pure dialysate in the distal part. In clinical practice, the countercurrent dialysate flow configuration of a hemodiafilter could increase solute removal in CRRT. Nevertheless, while this configuration may become mandatory for high-efficiency treatments, the impact of differences in solute removal observed in slow continuous therapies may be less important. Under these circumstances, if continuous therapies are prescribed, some of the advantages of the concurrent configuration in terms of simpler circuit layout and simpler machine design may overcome the advantages in terms of solute clearance. Different dialysate flow configurations influence solute clearance and change major solute removal mechanisms in the proximal and distal parts of a hemodiafilter. Advantages of each configuration should be balanced against the overall performance of the treatment and its simplicity in terms of treatment delivery and circuit handling procedures. Copyright © 2013 S. Karger AG, Basel.

  6. Impact of high-intensity concurrent training on cardiovascular risk factors in persons with multiple sclerosis - pilot study.

    PubMed

    Keytsman, Charly; Hansen, Dominique; Wens, Inez; O Eijnde, Bert

    2017-10-27

    High-intensity concurrent training positively affects cardiovascular risk factors. Because this was never investigated in multiple sclerosis, the present pilot study explored the impact of this training on cardiovascular risk factors in this population. Before and after 12 weeks of high-intensity concurrent training (interval and strength training, 5 sessions per 2 weeks, n = 16), body composition, resting blood pressure and heart rate, 2-h oral glucose tolerance (insulin sensitivity, glycosylated hemoglobin, blood glucose and insulin concentrations), blood lipids (high- and low-density lipoprotein, total cholesterol, triglyceride levels) and C-reactive protein were analyzed. Twelve weeks of high-intensity concurrent training significantly improved resting heart rate (-6%), 2-h blood glucose concentrations (-13%) and insulin sensitivity (-24%). Blood pressure, body composition, blood lipids and C-reactive protein did not seem to be affected. Under the conditions of this pilot study, 12 weeks of concurrent high-intensity interval and strength training improved resting heart rate, 2-h glucose and insulin sensitivity in multiple sclerosis but did not affect blood C-reactive protein levels, blood pressure, body composition and blood lipid profiles. Further, larger and controlled research investigating the effects of high-intensity concurrent training on cardiovascular risk factors in multiple sclerosis is warranted. Implications for rehabilitation: High-intensity concurrent training improves cardiovascular fitness. This pilot study explores the impact of this training on cardiovascular risk factors in multiple sclerosis. Despite the lack of a control group, high-intensity concurrent training does not seem to improve cardiovascular risk factors in multiple sclerosis.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heroux, Michael; Lethin, Richard

    Programming models and environments play the essential roles in high performance computing of enabling the conception, design, implementation and execution of science and engineering application codes. Programmer productivity is strongly influenced by the effectiveness of our programming models and environments, as is software sustainability since our codes have lifespans measured in decades, so the advent of new computing architectures, increased concurrency, concerns for resilience, and the increasing demands for high-fidelity, multi-physics, multi-scale and data-intensive computations mean that we have new challenges to address as part of our fundamental R&D requirements. Fortunately, we also have new tools and environments that make design, prototyping and delivery of new programming models easier than ever. The combination of new and challenging requirements and new, powerful toolsets enables significant synergies for the next generation of programming models and environments R&D. This report presents the topics discussed and results from the 2014 DOE Office of Science Advanced Scientific Computing Research (ASCR) Programming Models & Environments Summit, and subsequent discussions among the summit participants and contributors to topics in this report.

  8. Accelerating nuclear configuration interaction calculations through a preconditioned block iterative eigensolver

    NASA Astrophysics Data System (ADS)

    Shao, Meiyue; Aktulga, H. Metin; Yang, Chao; Ng, Esmond G.; Maris, Pieter; Vary, James P.

    2018-01-01

    We describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses of the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of special structure of the nuclear configuration interaction problem which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation, and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. We also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos based algorithm for problems of moderate sizes on a Cray XC30 system.
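
    A much-simplified numpy sketch, not the authors' solver, of a preconditioned block iteration for the lowest eigenpairs of a symmetric matrix: preconditioned residuals are appended to the current block and a Rayleigh-Ritz step extracts updated approximations. The matrix, starting block, and diagonal preconditioner are illustrative.

      import numpy as np

      def block_eigensolver(A, X0, M_inv, iters=50):
          """Approximate the lowest eigenpairs of symmetric A.
          X0: starting block of guess vectors (n x k); M_inv: preconditioner apply."""
          X = np.linalg.qr(X0)[0]
          for _ in range(iters):
              AX = A @ X
              theta = X.T @ AX                           # small projected matrix (k x k)
              R = AX - X @ theta                         # block of residuals
              W = M_inv(R)                               # preconditioned residuals
              S = np.linalg.qr(np.hstack([X, W]))[0]     # orthonormal search subspace
              T = S.T @ A @ S                            # Rayleigh-Ritz projection
              evals, evecs = np.linalg.eigh(T)
              k = X.shape[1]
              X = S @ evecs[:, :k]                       # keep the k lowest Ritz vectors
          return evals[:k], X

      n, k = 200, 4
      A = np.diag(np.arange(1.0, n + 1))                 # toy symmetric matrix
      X0 = np.random.default_rng(0).standard_normal((n, k))
      M_inv = lambda R: R / np.diag(A)[:, None]          # diagonal (Jacobi) preconditioner
      vals, vecs = block_eigensolver(A, X0, M_inv)
      print(vals)                                        # approaches [1, 2, 3, 4]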

  9. A synthetic design environment for ship design

    NASA Technical Reports Server (NTRS)

    Chipman, Richard R.

    1995-01-01

    Rapid advances in computer science and information system technology have made possible the creation of synthetic design environments (SDE) which use virtual prototypes to increase the efficiency and agility of the design process. This next generation of computer-based design tools will rely heavily on simulation and advanced visualization techniques to enable integrated product and process teams to concurrently conceptualize, design, and test a product and its fabrication processes. This paper summarizes a successful demonstration of the feasibility of using a simulation based design environment in the shipbuilding industry. As computer science and information science technologies have evolved, there have been many attempts to apply and integrate the new capabilities into systems for the improvement of the process of design. We see the benefits of those efforts in the abundance of highly reliable, technologically complex products and services in the modern marketplace. Furthermore, the computer-based technologies have been so cost effective that the improvements embodied in modern products have been accompanied by lowered costs. Today the state-of-the-art in computerized design has advanced so dramatically that the focus is no longer on merely improving design methodology; rather the goal is to revolutionize the entire process by which complex products are conceived, designed, fabricated, tested, deployed, operated, maintained, refurbished and eventually decommissioned. By concurrently addressing all life-cycle issues, the basic decision making process within an enterprise will be improved dramatically, leading to new levels of quality, innovation, efficiency, and customer responsiveness. By integrating functions and people with an enterprise, such systems will change the fundamental way American industries are organized, creating companies that are more competitive, creative, and productive.

  10. Validation of learning style measures: implications for medical education practice.

    PubMed

    Chapman, Dane M; Calhoun, Judith G

    2006-06-01

    It is unclear which learners would most benefit from the more individualised, student-structured, interactive approaches characteristic of problem-based and computer-assisted learning. The validity of learning style measures is uncertain, and there is no unifying learning style construct identified to predict such learners. This study was conducted to validate learning style constructs and to identify the learners most likely to benefit from problem-based and computer-assisted curricula. Using a cross-sectional design, 3 established learning style inventories were administered to 97 post-Year 2 medical students. Cognitive personality was measured by the Group Embedded Figures Test, information processing by the Learning Styles Inventory, and instructional preference by the Learning Preference Inventory. The 11 subscales from the 3 inventories were factor-analysed to identify common learning constructs and to verify construct validity. Concurrent validity was determined by intercorrelations of the 11 subscales. A total of 94 pre-clinical medical students completed all 3 inventories. Five meaningful learning style constructs were derived from the 11 subscales: student- versus teacher-structured learning; concrete versus abstract learning; passive versus active learning; individual versus group learning, and field-dependence versus field-independence. The concurrent validity of 10 of 11 subscales was supported by correlation analysis. Medical students most likely to thrive in a problem-based or computer-assisted learning environment would be expected to score highly on abstract, active and individual learning constructs and would be more field-independent. Learning style measures were validated in a medical student population and learning constructs were established for identifying learners who would most likely benefit from a problem-based or computer-assisted curriculum.

  11. Petri net model for analysis of concurrently processed complex algorithms

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1986-01-01

    This paper presents a Petri-net model suitable for analyzing the concurrent processing of computationally complex algorithms. The decomposed operations are to be processed in a multiple processor, data driven architecture. Of particular interest is the application of the model to both the description of the data/control flow of a particular algorithm, and to the general specification of the data driven architecture. A candidate architecture is also presented.

  12. Uncovering the problem-solving process: cued retrospective reporting versus concurrent and retrospective reporting.

    PubMed

    van Gog, Tamara; Paas, Fred; van Merriënboer, Jeroen J G; Witte, Puk

    2005-12-01

    This study investigated the amounts of problem-solving process information ("action," "why," "how," and "metacognitive") elicited by means of concurrent, retrospective, and cued retrospective reporting. In a within-participants design, 26 participants completed electrical circuit troubleshooting tasks under different reporting conditions. The method of cued retrospective reporting used the original computer-based task and a superimposed record of the participant's eye fixations and mouse-keyboard operations as a cue for retrospection. Cued retrospective reporting (with the exception of why information) and concurrent reporting (with the exception of metacognitive information) resulted in a higher number of codes on the different types of information than did retrospective reporting.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haugen, Carl C.; Forget, Benoit; Smith, Kord S.

    Most high performance computing systems being deployed currently and envisioned for the future are based on making use of heavy parallelism across many computational nodes and many concurrent cores. These types of heavily parallel systems often have relatively little memory per core but large amounts of computing capability. This places a significant constraint on how data storage is handled in many Monte Carlo codes. This is made even more significant in fully coupled multiphysics simulations, which requires simulations of many physical phenomena be carried out concurrently on individual processing nodes, which further reduces the amount of memory available for storage of Monte Carlo data. As such, there has been a move towards on-the-fly nuclear data generation to reduce memory requirements associated with interpolation between pre-generated large nuclear data tables for a selection of system temperatures. Methods have been previously developed and implemented in MIT's OpenMC Monte Carlo code for both the resolved resonance regime and the unresolved resonance regime, but are currently absent for the thermal energy regime. While there are many components involved in generating a thermal neutron scattering cross section on-the-fly, this work will focus on a proposed method for determining the energy and direction of a neutron after a thermal incoherent inelastic scattering event. This work proposes a rejection sampling based method using the thermal scattering kernel to determine the correct outgoing energy and angle. The goal of this project is to be able to treat the full S(α,β) kernel for graphite, to assist in high fidelity simulations of the TREAT reactor at Idaho National Laboratory. The method is, however, sufficiently general to be applicable in other thermal scattering materials, and can be initially validated with the continuous analytic free gas model.
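
    A minimal sketch of the rejection-sampling idea described above, with an invented stand-in for the scattering kernel (it is not graphite S(α,β) data): outgoing-energy candidates are drawn from a uniform proposal and accepted with probability proportional to the target kernel, which is bounded by an envelope constant.

      import math
      import random

      def target_kernel(E_out, E_in, T_eff=0.0253):
          """Stand-in for an incoherent inelastic scattering kernel (NOT graphite
          S(alpha, beta) data): a Maxwellian-weighted energy transfer, for illustration."""
          return math.sqrt(E_out) * math.exp(-E_out / T_eff) * math.exp(-abs(E_out - E_in))

      def sample_outgoing_energy(E_in, E_max=1.0, envelope=1.0):
          """Rejection sampling: propose E_out uniformly on [0, E_max] and accept with
          probability target/envelope; envelope must bound the target on [0, E_max]."""
          while True:
              E_out = random.uniform(0.0, E_max)
              if random.random() * envelope <= target_kernel(E_out, E_in):
                  return E_out

      samples = [sample_outgoing_energy(E_in=0.1) for _ in range(5)]
      print(samples)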

  14. Taming Crowded Visual Scenes

    DTIC Science & Technology

    2014-08-12

    Nolan Warner, Mubarak Shah. Tracking in Dense Crowds Using Prominence and Neighborhood Motion Concurrence, IEEE Transactions on Pattern Analysis... of computer vision, computer graphics and evacuation dynamics by providing a common platform, and provides... areas that include Computer Vision, Computer Graphics, and Pedestrian Evacuation Dynamics. Despite the…

  15. A Comparison of PETSC Library and HPF Implementations of an Archetypal PDE Computation

    NASA Technical Reports Server (NTRS)

    Hayder, M. Ehtesham; Keyes, David E.; Mehrotra, Piyush

    1997-01-01

    Two paradigms for distributed-memory parallel computation that free the application programmer from the details of message passing are compared for an archetypal structured scientific computation: a nonlinear, structured-grid partial differential equation boundary value problem, using the same algorithm on the same hardware. Both paradigms, parallel libraries represented by Argonne's PETSC, and parallel languages represented by the Portland Group's HPF, are found to be easy to use for this problem class, and both are reasonably effective in exploiting concurrency after a short learning curve. The level of involvement required by the application programmer under either paradigm includes specification of the data partitioning (corresponding to a geometrically simple decomposition of the domain of the PDE). Programming in SPMD style for the PETSC library requires writing the routines that discretize the PDE and its Jacobian, managing subdomain-to-processor mappings (affine global-to-local index mappings), and interfacing to library solver routines. Programming for HPF requires a complete sequential implementation of the same algorithm, introducing concurrency through subdomain blocking (an effort similar to the index mapping), and modest experimentation with rewriting loops to elucidate to the compiler the latent concurrency. Correctness and scalability are cross-validated on up to 32 nodes of an IBM SP2.
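
    A small sketch of the affine global-to-local index mapping mentioned above, for a one-dimensional block decomposition of grid indices across processors; the block sizes and the absence of ghost points are simplifying assumptions, not the PETSC layout.

      def block_ranges(n_global, n_procs):
          """Split n_global indices into contiguous blocks, one per processor."""
          base, extra = divmod(n_global, n_procs)
          ranges, start = [], 0
          for p in range(n_procs):
              size = base + (1 if p < extra else 0)
              ranges.append((start, start + size))
              start += size
          return ranges

      def global_to_local(g, rank, ranges):
          """Affine mapping: local = global - start_of_rank (None if not owned)."""
          start, end = ranges[rank]
          return g - start if start <= g < end else None

      ranges = block_ranges(10, 3)            # [(0, 4), (4, 7), (7, 10)]
      print(ranges)
      print(global_to_local(5, 1, ranges))    # -> 1
      print(global_to_local(5, 0, ranges))    # -> None (index owned by rank 1)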

  16. Knowledge Based Synthesis of Efficient Structures for Concurrent Computation Using Fat-Trees and Pipelining.

    DTIC Science & Technology

    1986-12-31

    ...effective computation based on given primitives. An architecture is an abstract object-type, whose instances are computing systems. By a parallel computing... explaining the language primitives on this basis. We explain how such a basis can be "simpler" than a general-purpose manual-programming language such as…

  17. Synthesis of Efficient Structures for Concurrent Computation.

    DTIC Science & Technology

    1983-10-01

    A formal presentation of these techniques, called virtualisation and aggregation, can be found in [King-83]. The remainder of the excerpt consists of contents-list fragments: Census Functions (trees perform broadcast...); User-Assisted Aggregation; Simple Parallel Structure for Broadcasting; Internal Structure of a Prefix Computation Network.

  18. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1988-01-01

    The purpose is to document research to develop strategies for concurrent processing of complex algorithms in data driven architectures. The problem domain consists of decision-free algorithms having large-grained, computationally complex primitive operations. Such are often found in signal processing and control applications. The anticipated multiprocessor environment is a data flow architecture containing between two and twenty computing elements. Each computing element is a processor having local program memory, and which communicates with a common global data memory. A new graph theoretic model called ATAMM which establishes rules for relating a decomposed algorithm to its execution in a data flow architecture is presented. The ATAMM model is used to determine strategies to achieve optimum time performance and to develop a system diagnostic software tool. In addition, preliminary work on a new multiprocessor operating system based on the ATAMM specifications is described.

  19. Symbolic Analysis of Concurrent Programs with Polymorphism

    NASA Technical Reports Server (NTRS)

    Rungta, Neha Shyam

    2010-01-01

    The current trend of multi-core and multi-processor computing is causing a paradigm shift from inherently sequential to highly concurrent and parallel applications. Certain thread interleavings, data input values, or combinations of both often cause errors in the system. Systematic verification techniques such as explicit state model checking and symbolic execution are extensively used to detect errors in such systems [7, 9]. Explicit state model checking enumerates possible thread schedules and input data values of a program in order to check for errors [3, 9]. To partially mitigate the state space explosion from data input values, symbolic execution techniques substitute data input values with symbolic values [5, 7, 6]. Explicit state model checking and symbolic execution techniques used in conjunction with exhaustive search techniques such as depth-first search are unable to detect errors in medium to large-sized concurrent programs because the number of behaviors caused by data and thread non-determinism is extremely large. We present an overview of abstraction-guided symbolic execution for concurrent programs that detects errors manifested by a combination of thread schedules and data values [8]. The technique generates a set of key program locations relevant in testing the reachability of the target locations. The symbolic execution is then guided along these locations in an attempt to generate a feasible execution path to the error state. This allows the execution to focus in parts of the behavior space more likely to contain an error.

  20. Exploiting loop level parallelism in nonprocedural dataflow programs

    NASA Technical Reports Server (NTRS)

    Gokhale, Maya B.

    1987-01-01

    This paper discusses how loop-level parallelism is detected in a nonprocedural dataflow program, and how a procedural program with concurrent loops is scheduled. Also discussed is a program restructuring technique which may be applied to recursive equations so that concurrent loops may be generated for a seemingly iterative computation. A compiler which generates C code for the language described below has been implemented. The scheduling component of the compiler and the restructuring transformation are described.

  1. Numerical algorithms for finite element computations on concurrent processors

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1986-01-01

    The work of several graduate students related to the NASA grant is briefly summarized. One student has worked on a detailed analysis of the so-called ijk forms of Gaussian elimination and Cholesky factorization on concurrent processors. Another student has worked on the vectorization of the incomplete Cholesky conjugate gradient method on the CYBER 205. Two more students implemented various versions of Gaussian elimination and Cholesky factorization on the FLEX/32.
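
    For context, the "ijk forms" are the different orderings of the three nested loops of the same factorization; below is a minimal Python sketch of one ordering (a right-looking, column-oriented form) of Cholesky factorization. The choice of ordering and the test matrix are illustrative.

      import math
      import numpy as np

      def cholesky_kji(A):
          """Right-looking Cholesky factorization: after column k is scaled, the
          trailing submatrix is updated column by column. Works on a copy of A and
          returns the lower-triangular factor L with A = L @ L.T."""
          A = np.array(A, dtype=float)
          n = A.shape[0]
          for k in range(n):
              A[k, k] = math.sqrt(A[k, k])
              for i in range(k + 1, n):
                  A[i, k] /= A[k, k]                 # scale column k
              for j in range(k + 1, n):              # update trailing columns
                  for i in range(j, n):
                      A[i, j] -= A[i, k] * A[j, k]
          return np.tril(A)

      M = np.array([[4.0, 2.0, 2.0],
                    [2.0, 5.0, 3.0],
                    [2.0, 3.0, 6.0]])
      L = cholesky_kji(M)
      print(np.allclose(L @ L.T, M))                 # True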

  2. A performance comparison of scalar, vector, and concurrent vector computers including supercomputers for modeling transport of reactive contaminants in groundwater

    NASA Astrophysics Data System (ADS)

    Tripathi, Vijay S.; Yeh, G. T.

    1993-06-01

    Sophisticated and highly computation-intensive models of transport of reactive contaminants in groundwater have been developed in recent years. Application of such models to real-world contaminant transport problems, e.g., simulation of groundwater transport of 10-15 chemically reactive elements (e.g., toxic metals) and relevant complexes and minerals in two and three dimensions over a distance of several hundred meters, requires high-performance computers including supercomputers. Although not widely recognized as such, the computational complexity and demand of these models compare with well-known computation-intensive applications including weather forecasting and quantum chemical calculations. A survey of the performance of a variety of available hardware, as measured by the run times for a reactive transport model HYDROGEOCHEM, showed that while supercomputers provide the fastest execution times for such problems, relatively low-cost reduced instruction set computer (RISC) based scalar computers provide the best performance-to-price ratio. Because supercomputers like the Cray X-MP are inherently multiuser resources, often the RISC computers also provide much better turnaround times. Furthermore, RISC-based workstations provide the best platforms for "visualization" of groundwater flow and contaminant plumes. The most notable result, however, is that current workstations costing less than $10,000 provide performance within a factor of 5 of a Cray X-MP.

  3. Evaluating Preclinical Medical Students by Using Computer-Based Problem-Solving Examinations.

    ERIC Educational Resources Information Center

    Stevens, Ronald H.; And Others

    1989-01-01

    A study to determine the feasibility of creating and administering computer-based problem-solving examinations for evaluating second-year medical students in immunology and to determine how students would perform on these tests relative to their performances on concurrently administered objective and essay examinations is described. (Author/MLW)

  4. The Concept of Nondeterminism: Its Development and Implications for Teaching

    ERIC Educational Resources Information Center

    Armoni, Michal; Ben-Ari, Mordechai

    2009-01-01

    Nondeterminism is a fundamental concept in computer science that appears in various contexts such as automata theory, algorithms and concurrent computation. We present a taxonomy of the different ways that nondeterminism can be defined and used; the categories of the taxonomy are domain, nature, implementation, consistency, execution and…

  5. Automated extraction of natural drainage density patterns for the conterminous United States through high performance computing

    USGS Publications Warehouse

    Stanislawski, Larry V.; Falgout, Jeff T.; Buttenfield, Barbara P.

    2015-01-01

    Hydrographic networks form an important data foundation for cartographic base mapping and for hydrologic analysis. Drainage density patterns for these networks can be derived to characterize local landscape, bedrock and climate conditions, and further inform hydrologic and geomorphological analysis by indicating areas where too few headwater channels have been extracted. But natural drainage density patterns are not consistently available in existing hydrographic data for the United States because compilation and capture criteria historically varied, along with climate, during the period of data collection over the various terrain types throughout the country. This paper demonstrates an automated workflow that is being tested in a high-performance computing environment by the U.S. Geological Survey (USGS) to map natural drainage density patterns at the 1:24,000-scale (24K) for the conterminous United States. Hydrographic network drainage patterns may be extracted from elevation data to guide corrections for existing hydrographic network data. The paper describes three stages in this workflow including data pre-processing, natural channel extraction, and generation of drainage density patterns from extracted channels. The workflow is concurrently implemented by executing procedures on multiple subbasin watersheds within the U.S. National Hydrography Dataset (NHD). Pre-processing defines parameters that are needed for the extraction process. Extraction proceeds in standard fashion: filling sinks, developing flow direction and weighted flow accumulation rasters. Drainage channels with assigned Strahler stream order are extracted within a subbasin and simplified. Drainage density patterns are then estimated with 100-meter resolution and subsequently smoothed with a low-pass filter. The extraction process is found to be of better quality in higher slope terrains. Concurrent processing through the high performance computing environment is shown to facilitate and refine the choice of drainage density extraction parameters and more readily improve extraction procedures than conventional processing.
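
    A minimal sketch of the concurrent per-subbasin execution pattern described above, with an invented placeholder for the per-watershed extraction step; the actual workflow runs in a high-performance computing environment rather than a local process pool.

      from multiprocessing import Pool

      def extract_drainage_density(subbasin_id):
          """Placeholder for the per-subbasin workflow: fill sinks, compute flow
          direction and weighted flow accumulation, extract channels, and estimate
          density. Here it just returns a dummy value keyed by the subbasin."""
          return subbasin_id, 0.0    # density placeholder

      def run_concurrently(subbasin_ids, workers=4):
          with Pool(processes=workers) as pool:
              return dict(pool.map(extract_drainage_density, subbasin_ids))

      if __name__ == "__main__":
          results = run_concurrently(["0101000101", "0101000102", "0101000103"])
          print(results)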

  6. Specific interference between a cognitive task and sensory organization for stance balance control in healthy young adults: visuospatial effects.

    PubMed

    Chong, Raymond K Y; Mills, Bradley; Dailey, Leanna; Lane, Elizabeth; Smith, Sarah; Lee, Kyoung-Hyun

    2010-07-01

    We tested the hypothesis that a computational overload results when two activities, one motor and the other cognitive that draw on the same neural processing pathways, are performed concurrently. Healthy young adult subjects carried out two seemingly distinct tasks of maintaining standing balance control under conditions of low (eyes closed), normal (eyes open) or high (eyes open, sway-referenced surround) visuospatial processing load while concurrently performing a cognitive task of either subtracting backwards by seven or generating words of the same first letter. A decrease in the performance of the balance control task and a decrement in the speed and accuracy of responses were noted during the subtraction but not the word generation task. The interference in the subtraction task was isolated to the first trial of the high but not normal or low visuospatial conditions. Balance control improvements with repeated exposures were observed only in the low visuospatial conditions while performance in the other conditions remained compromised. These results suggest that sensory organization for balance control appear to draw on similar visuospatial computational resources needed for the subtraction but not the word generation task. In accordance with the theory of modularity in human performance, the contrast in results between the subtraction and word generation tasks suggests that the neural overload is related to competition for similar visuospatial processes rather than limited attentional resources. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  7. Parallelization of fine-scale computation in Agile Multiscale Modelling Methodology

    NASA Astrophysics Data System (ADS)

    Macioł, Piotr; Michalik, Kazimierz

    2016-10-01

    Nowadays, multiscale modelling of material behavior is an extensively developed area. An important obstacle to its wide application is its high computational demands. Among others, the parallelization of multiscale computations is a promising solution. Heterogeneous multiscale models are good candidates for parallelization, since communication between sub-models is limited. In this paper, the possibility of parallelization of multiscale models based on the Agile Multiscale Methodology framework is discussed. A sequential, FEM-based macroscopic model has been combined with concurrently computed fine-scale models, employing a MatCalc thermodynamic simulator. The main issues being investigated in this work are: (i) the speed-up of multiscale models, with special focus on fine-scale computations, and (ii) the decrease in the quality of computations enforced by parallel execution. Speed-up has been evaluated on the basis of Amdahl's law equations. The problem of `delay error', arising from the parallel execution of fine-scale sub-models controlled by the sequential macroscopic sub-model, is discussed. Some technical aspects of combining third-party commercial modelling software with an in-house multiscale framework and an MPI library are also discussed.
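
    The Amdahl's-law evaluation mentioned above can be illustrated with a minimal sketch; the parallel fraction p (the share of work in the fine-scale sub-models) and the processor counts below are illustrative assumptions, not values from the paper.

        # Minimal Amdahl's-law speed-up estimate (p and processor counts are assumptions).
        def amdahl_speedup(p, n_procs):
            """Upper bound on speed-up when a fraction p of the work runs on n_procs processors."""
            return 1.0 / ((1.0 - p) + p / n_procs)

        for n in (2, 4, 8, 16, 64):
            print(n, round(amdahl_speedup(0.9, n), 2))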

  8. Research on Synthesis of Concurrent Computing Systems.

    DTIC Science & Technology

    1982-09-01

    An Informal Description of the Techniques; Formal Definitions of Aggregation and Virtualisation ... sparsely interconnected networks. We have also developed techniques to create Kung's systolic array parallel structure from a specification of matrix ... results of the computation of that element. For example, if A_ij is computed using a single enumeration, then virtualisation would produce a three ...

  9. General Multimechanism Reversible-Irreversible Time-Dependent Constitutive Deformation Model Being Developed

    NASA Technical Reports Server (NTRS)

    Saleeb, A. F.; Arnold, Steven M.

    2001-01-01

    Since most advanced material systems (for example metallic-, polymer-, and ceramic-based systems) being currently researched and evaluated are for high-temperature airframe and propulsion system applications, the required constitutive models must account for both reversible and irreversible time-dependent deformations. Furthermore, since an integral part of continuum-based computational methodologies (be they microscale- or macroscale-based) is an accurate and computationally efficient constitutive model to describe the deformation behavior of the materials of interest, extensive research efforts have been made over the years on the phenomenological representations of constitutive material behavior in the inelastic analysis of structures. From a more recent and comprehensive perspective, the NASA Glenn Research Center in conjunction with the University of Akron has emphasized concurrently addressing three important and related areas: that is, 1) Mathematical formulation; 2) Algorithmic developments for updating (integrating) the external (e.g., stress) and internal state variables; 3) Parameter estimation for characterizing the model. This concurrent perspective to constitutive modeling has enabled the overcoming of the two major obstacles to fully utilizing these sophisticated time-dependent (hereditary) constitutive models in practical engineering analysis. These obstacles are: 1) Lack of efficient and robust integration algorithms; 2) Difficulties associated with characterizing the large number of required material parameters, particularly when many of these parameters lack obvious or direct physical interpretations.

  10. Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Part 2

    NASA Technical Reports Server (NTRS)

    Kodiyalam, Srinivas; Yuan, Charles; Sobieski, Jaroslaw (Technical Monitor)

    2000-01-01

    A new MDO method, BLISS, and two different variants of the method, BLISS/RS and BLISS/S, have been implemented using iSIGHT's scripting language and evaluated in this report on multidisciplinary problems. All of these methods are based on decomposing a modular system optimization task into several subtask optimizations, which may be executed concurrently, and a system-level optimization that coordinates the subtask optimizations. The BLISS method and its variants are well suited for exploiting the concurrent processing capabilities of a multiprocessor machine. Several steps, including the local sensitivity analysis, local optimization, and response surface construction and updates, are all ideally suited for concurrent processing. Needless to say, algorithms that can effectively exploit the concurrent processing capabilities of the compute servers will be a key requirement for solving large-scale industrial design problems, such as the automotive vehicle problem detailed in Section 3.4.

  11. 29 CFR 541.106 - Concurrent duties.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES..., cooking food, stocking shelves and cleaning the establishment, but performance of such nonexempt work does...

  12. Software For Drawing Design Details Concurrently

    NASA Technical Reports Server (NTRS)

    Crosby, Dewey C., III

    1990-01-01

    Software system containing five computer-aided-design programs enables more than one designer to work on same part or assembly at same time. Reduces time necessary to produce design by implementing concept of parallel or concurrent detailing, in which all detail drawings documenting three-dimensional model of part or assembly produced simultaneously, rather than sequentially. Keeps various detail drawings consistent with each other and with overall design by distributing changes in each detail to all other affected details.

  13. Real-time processing of radar return on a parallel computer

    NASA Technical Reports Server (NTRS)

    Aalfs, David D.

    1992-01-01

    NASA is working with the FAA to demonstrate the feasibility of pulse Doppler radar as a candidate airborne sensor to detect low altitude windshears. The need to provide the pilot with timely information about possible hazards has motivated a demand for real-time processing of a radar return. Investigated here is parallel processing as a means of accommodating the high data rates required. A PC based parallel computer, called the transputer, is used to investigate issues in real time concurrent processing of radar signals. A transputer network is made up of an array of single instruction stream processors that can be networked in a variety of ways. They are easily reconfigured and software development is largely independent of the particular network topology. The performance of the transputer is evaluated in light of the computational requirements. A number of algorithms have been implemented on the transputers in OCCAM, a language specially designed for parallel processing. These include signal processing algorithms such as the Fast Fourier Transform (FFT), pulse-pair, and autoregressive modelling, as well as routing software to support concurrency. The most computationally intensive task is estimating the spectrum. Two approaches have been taken on this problem, the first and most conventional of which is to use the FFT. By using table look-ups for the basis function and other optimizing techniques, an algorithm has been developed that is sufficient for real time. The other approach is to model the signal as an autoregressive process and estimate the spectrum based on the model coefficients. This technique is attractive because it does not suffer from the spectral leakage problem inherent in the FFT. Benchmark tests indicate that autoregressive modeling is feasible in real time.
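
    As a hedged sketch of the pulse-pair algorithm named above (one of the implemented spectral estimators), the code below estimates the mean Doppler velocity from the phase of the lag-one autocorrelation of complex I/Q samples. The wavelength, pulse repetition time, and synthetic return are illustrative assumptions, not parameters of the NASA/FAA transputer system.

        # Hedged sketch of a pulse-pair mean-velocity estimate from complex I/Q samples.
        import numpy as np

        def pulse_pair_velocity(iq, wavelength_m, prt_s):
            """Mean Doppler velocity from the phase of the lag-one autocorrelation."""
            r1 = np.sum(np.conj(iq[:-1]) * iq[1:])
            return -wavelength_m / (4.0 * np.pi * prt_s) * np.angle(r1)

        # Synthetic return (assumed parameters): a scatterer moving at 10 m/s plus noise.
        wavelength, prt, v_true = 0.1, 1e-3, 10.0
        rng = np.random.default_rng(1)
        n = np.arange(256)
        iq = np.exp(-1j * 2.0 * np.pi * (2.0 * v_true / wavelength) * prt * n)
        iq = iq + 0.1 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
        print(pulse_pair_velocity(iq, wavelength, prt))   # close to 10.0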

  14. TEMPEST: A three-dimensional time-dependent computer program for hydrothermal analysis: Volume 2, Assessment and verification results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eyler, L L; Trent, D S; Budden, M J

    During the course of the TEMPEST computer code development a concurrent effort was conducted to assess the code's performance and the validity of computed results. The results of this work are presented in this document. The principal objective of this effort was to assure the code's computational correctness for a wide range of hydrothermal phenomena typical of fast breeder reactor application. 47 refs., 94 figs., 6 tabs.

  15. Partitioning strategy for efficient nonlinear finite element dynamic analysis on multiprocessor computers

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Peters, Jeanne M.

    1989-01-01

    A computational procedure is presented for the nonlinear dynamic analysis of unsymmetric structures on vector multiprocessor systems. The procedure is based on a novel hierarchical partitioning strategy in which the response of the unsymmetric structure is approximated by a combination of symmetric and antisymmetric response vectors (modes), each obtained by using only a fraction of the degrees of freedom of the original finite element model. The three key elements of the procedure which result in a high degree of concurrency throughout the solution process are: (1) a mixed (or primitive variable) formulation with independent shape functions for the different fields; (2) operator splitting or restructuring of the discrete equations at each time step to delineate the symmetric and antisymmetric vectors constituting the response; and (3) a two-level iterative process for generating the response of the structure. An assessment is made of the effectiveness of the procedure on the CRAY X-MP/4 computers.

  16. U-interpreter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arvind; Gostelow, K.P.

    1982-02-01

    The author argues that by giving a unique name to every activity generated during a computation, the u-interpreter can provide greater concurrency in the interpretation of data flow graphs. 19 references.

  17. Percent Grammatical Responses as a General Outcome Measure: Initial Validity

    ERIC Educational Resources Information Center

    Eisenberg, Sarita L.; Guo, Ling-Yu

    2018-01-01

    Purpose: This report investigated the validity of using percent grammatical responses (PGR) as a measure for assessing grammaticality. To establish construct validity, we computed the correlation of PGR with another measure of grammar skills and with an unrelated skill area. To establish concurrent validity for PGR, we computed the correlation of…

  18. Quizzing and Feedback in Computer-Based and Book-Based Training for Workplace Safety and Health

    ERIC Educational Resources Information Center

    Rohlman, Diane S.; Eckerman, David A.; Ammerman, Tammara A.; Fercho, Heather L.; Lundeen, Christine A.; Blomquist, Carrie; Anger, W. Kent

    2005-01-01

    Participants received different amounts of information in either a cTRAIN computer-based instruction (CBI) program or in a booklet format, presented before or concurrently with interactive questions about the information. An interactive CBI presentation that required an overt response during training produced equivalent acquisition and retention…

  19. Accelerating nuclear configuration interaction calculations through a preconditioned block iterative eigensolver

    DOE PAGES

    Shao, Meiyue; Aktulga, H.  Metin; Yang, Chao; ...

    2017-09-14

    In this paper, we describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses of the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of special structure of the nuclear configuration interaction problem which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation, and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. Finally, we also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos based algorithm for problems of moderate sizes on a Cray XC30 system.

  20. Accelerating nuclear configuration interaction calculations through a preconditioned block iterative eigensolver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, Meiyue; Aktulga, H.  Metin; Yang, Chao

    In this paper, we describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses of the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of special structure of the nuclear configuration interaction problem which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation, and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. Finally, we also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos based algorithm for problems of moderate sizes on a Cray XC30 system.
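
    A minimal sketch of the kind of preconditioned block iteration with Rayleigh-Ritz projection described above is given below; it is not the authors' solver. The random starting block and the diagonal (Jacobi) preconditioner are illustrative assumptions, whereas the paper constructs much better starting guesses and preconditioners from the structure of the nuclear configuration interaction problem.

        # Sketch: preconditioned block iteration with Rayleigh-Ritz projection (not the paper's code).
        import numpy as np

        def block_eig_lowest(A, k=4, iters=50, seed=0):
            n = A.shape[0]
            rng = np.random.default_rng(seed)
            X, _ = np.linalg.qr(rng.standard_normal((n, k)))   # orthonormal starting block (assumed random)
            d = np.diag(A)                                      # Jacobi preconditioner (assumption)
            for _ in range(iters):
                theta = np.einsum('ij,ij->j', X, A @ X)         # current Ritz values
                R = A @ X - X * theta                           # block residual
                W = R / d[:, None]                              # preconditioned corrections
                S, _ = np.linalg.qr(np.hstack([X, W]))          # expand and re-orthonormalize the subspace
                evals, evecs = np.linalg.eigh(S.T @ A @ S)      # Rayleigh-Ritz on the subspace
                X = S @ evecs[:, :k]                            # keep the k lowest Ritz vectors
            return evals[:k], X

        # Toy symmetric test matrix with well-separated low eigenvalues.
        rng = np.random.default_rng(1)
        n = 200
        A = np.diag(np.arange(1.0, n + 1.0)) + 0.01 * rng.standard_normal((n, n))
        A = (A + A.T) / 2.0
        print(np.sort(block_eig_lowest(A, k=4)[0]))             # approximately the 4 smallest eigenvalues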

  1. Concurrent ultrasonic weld evaluation system

    DOEpatents

    Hood, Donald W.; Johnson, John A.; Smartt, Herschel B.

    1987-01-01

    A system for concurrent, non-destructive evaluation of partially completed welds for use in conjunction with an automated welder. The system utilizes real time, automated ultrasonic inspection of a welding operation as the welds are being made by providing a transducer which follows a short distance behind the welding head. Reflected ultrasonic signals are analyzed utilizing computer based digital pattern recognition techniques to discriminate between good and flawed welds on a pass by pass basis. The system also distinguishes between types of weld flaws.

  2. Constant-Round Concurrent Zero Knowledge From Falsifiable Assumptions

    DTIC Science & Technology

    2013-01-01

    assumptions (e.g., [DS98, Dam00, CGGM00, Gol02, PTV12, GJO+12]), or in alternative models (e.g., super-polynomial-time simulation [Pas03b, PV10]). In the ... T(·)-time computations, where T(·) is some "nice" (slightly) super-polynomial function (e.g., T(n) = n^(log log log n)). We refer to such proof ... put a cap on both using a (slightly) super-polynomial function, and thus to guarantee soundness of the concurrent zero-knowledge protocol, we need ...

  3. Concurrent ultrasonic weld evaluation system

    DOEpatents

    Hood, D.W.; Johnson, J.A.; Smartt, H.B.

    1985-09-04

    A system for concurrent, non-destructive evaluation of partially completed welds for use in conjunction with an automated welder. The system utilizes real time, automated ultrasonic inspection of a welding operation as the welds are being made by providing a transducer which follows a short distance behind the welding head. Reflected ultrasonic signals are analyzed utilizing computer based digital pattern recognition techniques to discriminate between good and flawed welds on a pass by pass basis. The system also distinguishes between types of weld flaws.

  4. Concurrent ultrasonic weld evaluation system

    DOEpatents

    Hood, D.W.; Johnson, J.A.; Smartt, H.B.

    1987-12-15

    A system for concurrent, non-destructive evaluation of partially completed welds for use in conjunction with an automated welder is disclosed. The system utilizes real time, automated ultrasonic inspection of a welding operation as the welds are being made by providing a transducer which follows a short distance behind the welding head. Reflected ultrasonic signals are analyzed utilizing computer based digital pattern recognition techniques to discriminate between good and flawed welds on a pass by pass basis. The system also distinguishes between types of weld flaws. 5 figs.

  5. Distributed Database Control and Allocation. Volume 2. Performance Analysis of Concurrency Control Algorithms.

    DTIC Science & Technology

    1983-10-01

    Concurrency Control Algorithms. Computer Corporation of America; Wente K. Lin, Philip A. Bernstein, Nathan Goodman and Jerry Nolte. APPROVED FOR PUBLIC ... This report has been reviewed by the RADC Public Affairs Office (PA) and is releasable to the National Technical Information Service (NTIS). At NTIS it will be releasable to the general public, including foreign nations. RADC-TR-83-226, Vol II (of three) has been reviewed and is ...

  6. Design and Analysis Tools for Concurrent Blackboard Systems

    NASA Technical Reports Server (NTRS)

    McManus, John W.

    1991-01-01

    A blackboard system consists of a set of knowledge sources, a blackboard data structure, and a control strategy used to activate the knowledge sources. The blackboard model of problem solving is best described by Dr. H. Penny Nii of the Stanford University AI Laboratory: "A Blackboard System can be viewed as a collection of intelligent agents who are gathered around a blackboard, looking at pieces of information written on it, thinking about the current state of the solution, and writing their conclusions on the blackboard as they generate them. " The blackboard is a centralized global data structure, often partitioned in a hierarchical manner, used to represent the problem domain. The blackboard is also used to allow inter-knowledge source communication and acts as a shared memory visible to all of the knowledge sources. A knowledge source is a highly specialized, highly independent process that takes inputs from the blackboard data structure, performs a computation, and places the results of the computation in the blackboard data structure. This design allows for an opportunistic control strategy. The opportunistic problem-solving technique allows a knowledge source to contribute towards the solution of the current problem without knowing which of the other knowledge sources will use the information. The use of opportunistic problem-solving allows the data transfers on the blackboard to determine which processes are active at a given time. Designing and developing blackboard systems is a difficult process. The designer is trying to balance several conflicting goals and achieve a high degree of concurrent knowledge source execution while maintaining both knowledge and semantic consistency on the blackboard. Blackboard systems have not attained their apparent potential because there are no established tools or methods to guide in their construction or analyze their performance.
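
    A minimal, hedged sketch of the blackboard model described above follows: a shared blackboard, independent knowledge sources with a trigger condition and an action, and an opportunistic control loop that fires whichever source can contribute next. The toy problem (deriving an area and a report from width and height) is an illustrative assumption, not taken from any blackboard system discussed in the report.

        # Minimal blackboard sketch: shared data structure plus opportunistic knowledge sources.
        class KnowledgeSource:
            def __init__(self, name, condition, action):
                self.name, self.condition, self.action = name, condition, action

        def run_blackboard(blackboard, sources):
            progress = True
            while progress:                          # opportunistic control loop
                progress = False
                for ks in sources:                   # any knowledge source whose condition holds may fire
                    if ks.condition(blackboard):
                        ks.action(blackboard)        # it reads from and writes to the shared blackboard
                        progress = True
            return blackboard

        # Toy knowledge sources (assumed example).
        sources = [
            KnowledgeSource("area",
                            lambda bb: "width" in bb and "height" in bb and "area" not in bb,
                            lambda bb: bb.update(area=bb["width"] * bb["height"])),
            KnowledgeSource("report",
                            lambda bb: "area" in bb and "report" not in bb,
                            lambda bb: bb.update(report="area = %d" % bb["area"])),
        ]
        print(run_blackboard({"width": 3, "height": 4}, sources))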

  7. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE PAGES

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...

    2017-09-21

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  8. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
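
    As a hedged illustration of the domain-decomposition idea behind the solvers described above, the sketch below solves a small one-dimensional Poisson system with conjugate gradients preconditioned by a one-level additive Schwarz method, i.e., independent overlapping subdomain solves that could run concurrently. The stochastic (polynomial chaos) dimension and the multi-level structure of the paper are not modeled; the problem size, subdomain count and overlap are illustrative assumptions.

        # Sketch: CG preconditioned by one-level additive Schwarz (independent subdomain solves).
        import numpy as np

        def poisson_matrix(n):
            return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

        def additive_schwarz(A, r, n_sub=4, overlap=2):
            n = A.shape[0]
            z = np.zeros_like(r)
            size = n // n_sub
            for s in range(n_sub):                            # each subdomain solve is independent
                lo = max(0, s * size - overlap)
                hi = min(n, (s + 1) * size + overlap)
                idx = np.arange(lo, hi)
                z[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
            return z

        def pcg(A, b, precond, tol=1e-8, maxit=300):
            x = np.zeros_like(b)
            r = b - A @ x
            z = precond(A, r)
            p = z.copy()
            for k in range(maxit):
                Ap = A @ p
                alpha = (r @ z) / (p @ Ap)
                x = x + alpha * p
                r_new = r - alpha * Ap
                if np.linalg.norm(r_new) < tol * np.linalg.norm(b):
                    return x, k + 1
                z_new = precond(A, r_new)
                beta = (r_new @ z_new) / (r @ z)
                p = z_new + beta * p
                r, z = r_new, z_new
            return x, maxit

        A, b = poisson_matrix(256), np.ones(256)
        x, iters = pcg(A, b, additive_schwarz)
        print(iters, np.linalg.norm(A @ x - b))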

  9. Hierarchical nonlinear behavior of hot composite structures

    NASA Technical Reports Server (NTRS)

    Murthy, P. L. N.; Chamis, C. C.; Singhal, S. N.

    1993-01-01

    Hierarchical computational procedures are described to simulate the multiple scale thermal/mechanical behavior of high temperature metal matrix composites (HT-MMC) in the following three broad areas: (1) behavior of HT-MMC's from micromechanics to laminate via METCAN (Metal Matrix Composite Analyzer), (2) tailoring of HT-MMC behavior for optimum specific performance via MMLT (Metal Matrix Laminate Tailoring), and (3) HT-MMC structural response for hot structural components via HITCAN (High Temperature Composite Analyzer). Representative results from each area are presented to illustrate the effectiveness of computational simulation procedures and accompanying computer codes. The sample case results show that METCAN can be used to simulate material behavior such as the entire creep span; MMLT can be used to concurrently tailor the fabrication process and the interphase layer for optimum performance such as minimum residual stresses; and HITCAN can be used to predict the structural behavior such as the deformed shape due to component fabrication. These codes constitute virtual portable desk-top test laboratories for characterizing HT-MMC laminates, tailoring the fabrication process, and qualifying structural components made from them.

  10. Developing a novel hierarchical approach for multiscale structural reliability predictions for ultra-high consequence applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emery, John M.; Coffin, Peter; Robbins, Brian A.

    Microstructural variabilities are among the predominant sources of uncertainty in structural performance and reliability. We seek to develop efficient algorithms for multiscale calculations for polycrystalline alloys such as aluminum alloy 6061-T6 in environments where ductile fracture is the dominant failure mode. Our approach employs concurrent multiscale methods, but does not focus on their development. They are a necessary but not sufficient ingredient to multiscale reliability predictions. We have focused on how to efficiently use concurrent models for forward propagation because practical applications cannot include fine-scale details throughout the problem domain due to exorbitant computational demand. Our approach begins with a low-fidelity prediction at the engineering scale that is subsequently refined with multiscale simulation. The results presented in this report focus on plasticity and damage at the meso-scale, efforts to expedite Monte Carlo simulation with microstructural considerations, modeling aspects regarding geometric representation of grains and second-phase particles, and contrasting algorithms for scale coupling.

  11. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    NASA Astrophysics Data System (ADS)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

  12. Concurrency-Induced Transitions in Epidemic Dynamics on Temporal Networks.

    PubMed

    Onaga, Tomokatsu; Gleeson, James P; Masuda, Naoki

    2017-09-08

    Social contact networks underlying epidemic processes in humans and animals are highly dynamic. The spreading of infections on such temporal networks can differ dramatically from spreading on static networks. We theoretically investigate the effects of concurrency, the number of neighbors that a node has at a given time point, on the epidemic threshold in the stochastic susceptible-infected-susceptible dynamics on temporal network models. We show that network dynamics can suppress epidemics (i.e., yield a higher epidemic threshold) when the node's concurrency is low, but can also enhance epidemics when the concurrency is high. We analytically determine different phases of this concurrency-induced transition, and confirm our results with numerical simulations.

  13. Concurrency-Induced Transitions in Epidemic Dynamics on Temporal Networks

    NASA Astrophysics Data System (ADS)

    Onaga, Tomokatsu; Gleeson, James P.; Masuda, Naoki

    2017-09-01

    Social contact networks underlying epidemic processes in humans and animals are highly dynamic. The spreading of infections on such temporal networks can differ dramatically from spreading on static networks. We theoretically investigate the effects of concurrency, the number of neighbors that a node has at a given time point, on the epidemic threshold in the stochastic susceptible-infected-susceptible dynamics on temporal network models. We show that network dynamics can suppress epidemics (i.e., yield a higher epidemic threshold) when the node's concurrency is low, but can also enhance epidemics when the concurrency is high. We analytically determine different phases of this concurrency-induced transition, and confirm our results with numerical simulations.
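
    A hedged Monte Carlo sketch of SIS spreading on a toy temporal network follows, with concurrency taken as the number of partners a node holds during one time window. The rewiring scheme, rates and network size are illustrative assumptions; the paper above uses a specific partnership model and derives the epidemic threshold analytically rather than by simulation.

        # Toy Monte Carlo: SIS dynamics on a temporal network with adjustable concurrency.
        import numpy as np

        def sis_temporal(n=500, concurrency=2, beta=0.05, mu=0.2, steps=2000, seed=0):
            rng = np.random.default_rng(seed)
            infected = np.zeros(n, dtype=bool)
            infected[rng.choice(n, 10, replace=False)] = True
            for _ in range(steps):
                # Rebuild contacts for this time window: each node holds `concurrency` partners.
                partners = rng.integers(0, n, size=(n, concurrency))
                newly = np.zeros(n, dtype=bool)
                for i in np.flatnonzero(infected):
                    hits = partners[i][rng.random(concurrency) < beta]   # transmissions this window
                    newly[hits] = True
                recovered = infected & (rng.random(n) < mu)
                infected = (infected | newly) & ~recovered
            return infected.mean()                                        # endemic prevalence estimate

        for c in (1, 2, 4, 8):
            print(c, sis_temporal(concurrency=c))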

  14. NASA's computer science research program

    NASA Technical Reports Server (NTRS)

    Larsen, R. L.

    1983-01-01

    Following a major assessment of NASA's computing technology needs, a new program of computer science research has been initiated by the Agency. The program includes work in concurrent processing, management of large scale scientific databases, software engineering, reliable computing, and artificial intelligence. The program is driven by applications requirements in computational fluid dynamics, image processing, sensor data management, real-time mission control and autonomous systems. It consists of university research, in-house NASA research, and NASA's Research Institute for Advanced Computer Science (RIACS) and Institute for Computer Applications in Science and Engineering (ICASE). The overall goal is to provide the technical foundation within NASA to exploit advancing computing technology in aerospace applications.

  15. High Prevalence of Concurrent Male-Male Partnerships in the Context of Low Human Immunodeficiency Virus Testing Among Men Who Have Sex With Men in Bamako, Mali.

    PubMed

    Hakim, Avi; Patnaik, Padmaja; Telly, Nouhoum; Ballo, Tako; Traore, Bouyagui; Doumbia, Seydou; Lahuerta, Maria

    2017-09-01

    Concurrent male-male sexual partnerships have been understudied in sub-Saharan Africa and are especially important because human immunodeficiency virus (HIV) prevalence and acquisition probability are higher among men who have sex with men (MSM) than among heterosexual men and women. We conducted a respondent-driven sampling survey of 552 men who have sex with men in Bamako, Mali from October 2014 to February 2015. Eligibility criteria included being 18 years or older, a history of oral or anal sex with another man in the last 6 months, residence in or around Bamako in the last 6 months, and the ability to communicate in French. HIV prevalence was 13.7%, with 86.7% of MSM with HIV unaware of their infection. Concurrent male-male sexual partnerships were common, with 60.6% of MSM having a concurrent male sexual partnership or believing their sex partner did in the last 6 months, and 27.3% having a concurrent male sexual partnership and believing their sex partner did in the last 6 months. Over half (52.5%) of MSM had sex with women, and 30.8% had concurrent male partnerships and sex with a woman in the last 6 months. Concurrency was more likely among MSM with limited education, telling only MSM of same-sex behaviors, high social cohesion, and not knowing anyone with HIV. The high proportion of HIV-infected MSM in Bamako who are unaware of their HIV infection and the high prevalence of concurrent partnerships could further the spread of HIV in Bamako. Increasing testing through peer educators conducting mobile testing could improve awareness of HIV status and limit the spread of HIV in concurrent partnerships.

  16. Comparison of sensitivity and reading time for the use of computer aided detection (CAD) of pulmonary nodules at MDCT as concurrent or second reader

    NASA Astrophysics Data System (ADS)

    Beyer, F.; Zierott, L.; Fallenberg, E. M.; Juergens, K.; Stoeckel, J.; Heindel, W.; Wormanns, D.

    2006-03-01

    Purpose: To compare sensitivity and reading time when using CAD as a second reader versus a concurrent reader. Materials and Methods: Fifty chest MDCT scans acquired for clinical indications were analysed independently by four radiologists two times: first with CAD as concurrent reader (CAD results displayed simultaneously with the radiologist's primary reading); then, after a median of 14 weeks, with CAD as second reader (CAD results shown after completion of a reading session without CAD). A prototype version of Siemens LungCAD (Siemens, Malvern, USA) was used. Sensitivities and reading times for detecting nodules >=4 mm were recorded for concurrent reading, reading without CAD and second reading. In a consensus conference false positive findings were eliminated. Student's t-test was used to compare sensitivities and reading times. Results: 108 true positive nodules were found. Mean sensitivity was .68 for reading without CAD, .68 for concurrent reading and .75 for second reading. Differences in sensitivity were significant between concurrent and second reading (p<.001) and between reading without CAD and second reading (p=.001). Mean reading time for concurrent reading was significantly shorter (274 s) compared to reading without CAD (294 s; p=.04) and second reading (337 s; p<.001). New work to be presented: To our knowledge this is the first study that compares sensitivities and reading times between the use of CAD as a concurrent versus a second reader. Conclusion: CAD can either be used as a concurrent reader to speed up reading of chest CT cases for pulmonary nodules without loss of sensitivity, or as a second reader to increase sensitivity and reading time (but not both).

  17. Prediction of miRNA targets.

    PubMed

    Oulas, Anastasis; Karathanasis, Nestoras; Louloupi, Annita; Pavlopoulos, Georgios A; Poirazi, Panayiota; Kalantidis, Kriton; Iliopoulos, Ioannis

    2015-01-01

    Computational methods for miRNA target prediction are currently undergoing extensive review and evaluation. There is still a great need for improvement of these tools and bioinformatics approaches are looking towards high-throughput experiments in order to validate predictions. The combination of large-scale techniques with computational tools will not only provide greater credence to computational predictions but also lead to the better understanding of specific biological questions. Current miRNA target prediction tools utilize probabilistic learning algorithms, machine learning methods and even empirical biologically defined rules in order to build models based on experimentally verified miRNA targets. Large-scale protein downregulation assays and next-generation sequencing (NGS) are now being used to validate methodologies and compare the performance of existing tools. Tools that exhibit greater correlation between computational predictions and protein downregulation or RNA downregulation are considered the state of the art. Moreover, efficiency in prediction of miRNA targets that are concurrently verified experimentally provides additional validity to computational predictions and further highlights the competitive advantage of specific tools and their efficacy in extracting biologically significant results. In this review paper, we discuss the computational methods for miRNA target prediction and provide a detailed comparison of methodologies and features utilized by each specific tool. Moreover, we provide an overview of current state-of-the-art high-throughput methods used in miRNA target prediction.

  18. Persons with Alzheimer's Disease Make Phone Calls Independently Using a Computer-Aided Telephone System

    ERIC Educational Resources Information Center

    Perilli, Viviana; Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Cassano, Germana; Cordiano, Noemi; Pinto, Katia; Minervini, Mauro G.; Oliva, Doretta

    2012-01-01

    This study assessed whether four patients with a diagnosis of Alzheimer's disease could make independent phone calls via a computer-aided telephone system. The study was carried out according to a non-concurrent multiple baseline design across participants. All participants started with baseline during which the telephone system was not available,…

  19. Concurrent EEG And NIRS Tomographic Imaging Based on Wearable Electro-Optodes

    DTIC Science & Technology

    2014-04-13

    Interfaces (BCIs), and other systems in the same computational framework. Figure 11 below shows ... Improving Brain-Computer Interfaces Using Independent Component Analysis. In: Towards Future BCIs, 2012.

  20. Computer Program Re-layers Engineering Drawings

    NASA Technical Reports Server (NTRS)

    Crosby, Dewey C., III

    1990-01-01

    RULCHK computer program aids in structuring layers of information pertaining to part or assembly designed with software described in article "Software for Drawing Design Details Concurrently" (MFS-28444). Checks and optionally updates structure of layers for part. Enables designer to construct model and annotate its documentation without burden of manually layering part to conform to standards at design time.

  1. Evaluating a Computer Flash-Card Sight-Word Recognition Intervention with Self-Determined Response Intervals in Elementary Students with Intellectual Disability

    ERIC Educational Resources Information Center

    Cazzell, Samantha; Skinner, Christopher H.; Ciancio, Dennis; Aspiranti, Kathleen; Watson, Tiffany; Taylor, Kala; McCurdy, Merilee; Skinner, Amy

    2017-01-01

    A concurrent multiple-baseline across-tasks design was used to evaluate the effectiveness of a computer flash-card sight-word recognition intervention with elementary-school students with intellectual disability. This intervention allowed the participants to self-determine each response interval and resulted in both participants acquiring…

  2. The Vestibular System Implements a Linear–Nonlinear Transformation In Order to Encode Self-Motion

    PubMed Central

    Massot, Corentin; Schneider, Adam D.; Chacron, Maurice J.; Cullen, Kathleen E.

    2012-01-01

    Although it is well established that the neural code representing the world changes at each stage of a sensory pathway, the transformations that mediate these changes are not well understood. Here we show that self-motion (i.e. vestibular) sensory information encoded by VIIIth nerve afferents is integrated nonlinearly by post-synaptic central vestibular neurons. This response nonlinearity was characterized by a strong (∼50%) attenuation in neuronal sensitivity to low frequency stimuli when presented concurrently with high frequency stimuli. Using computational methods, we further demonstrate that a static boosting nonlinearity in the input-output relationship of central vestibular neurons accounts for this unexpected result. Specifically, when low and high frequency stimuli are presented concurrently, this boosting nonlinearity causes an intensity-dependent bias in the output firing rate, thereby attenuating neuronal sensitivities. We suggest that nonlinear integration of afferent input extends the coding range of central vestibular neurons and enables them to better extract the high frequency features of self-motion when embedded with low frequency motion during natural movements. These findings challenge the traditional notion that the vestibular system uses a linear rate code to transmit information and have important consequences for understanding how the representation of sensory information changes across sensory pathways. PMID:22911113

  3. A framework for optimizing micro-CT in dual-modality micro-CT/XFCT small-animal imaging system

    NASA Astrophysics Data System (ADS)

    Vedantham, Srinivasan; Shrestha, Suman; Karellas, Andrew; Cho, Sang Hyun

    2017-09-01

    Dual-modality Computed Tomography (CT)/X-ray Fluorescence Computed Tomography (XFCT) can be a valuable tool for imaging and quantifying the organ and tissue distribution of small concentrations of high atomic number materials in a small-animal system. In this work, the framework for optimizing the micro-CT imaging system component of the dual-modality system is described, either when the micro-CT images are concurrently acquired with XFCT using the x-ray spectral conditions for XFCT, or when the micro-CT images are acquired sequentially and independently of XFCT. This framework utilizes cascaded systems analysis for task-specific determination of the detectability index using numerical observer models at a given radiation dose, where the radiation dose is determined using Monte Carlo simulations.

  4. Compressed sensing for ultrasound computed tomography.

    PubMed

    van Sloun, Ruud; Pandharipande, Ashish; Mischi, Massimo; Demi, Libertario

    2015-06-01

    Ultrasound computed tomography (UCT) allows the reconstruction of quantitative tissue characteristics, such as speed of sound, mass density, and attenuation. Lowering its acquisition time would be beneficial; however, this is fundamentally limited by the physical time of flight and the number of transmission events. In this letter, we propose a compressed sensing solution for UCT. The adopted measurement scheme is based on compressed acquisitions, with concurrent randomised transmissions in a circular array configuration. Reconstruction of the image is then obtained by combining the Born iterative method and total variation minimization, thereby exploiting variation sparsity in the image domain. Evaluation using simulated UCT scattering measurements shows that the proposed transmission scheme performs better than uniform undersampling, and is able to reduce acquisition time by almost one order of magnitude, while maintaining high spatial resolution.

  5. Directions in parallel programming: HPF, shared virtual memory and object parallelism in pC++

    NASA Technical Reports Server (NTRS)

    Bodin, Francois; Priol, Thierry; Mehrotra, Piyush; Gannon, Dennis

    1994-01-01

    Fortran and C++ are the dominant programming languages used in scientific computation. Consequently, extensions to these languages are the most popular for programming massively parallel computers. We discuss two such approaches to parallel Fortran and one approach to C++. The High Performance Fortran Forum has designed HPF with the intent of supporting data parallelism on Fortran 90 applications. HPF works by asking the user to help the compiler distribute and align the data structures with the distributed memory modules in the system. Fortran-S takes a different approach in which the data distribution is managed by the operating system and the user provides annotations to indicate parallel control regions. In the case of C++, we look at pC++ which is based on a concurrent aggregate parallel model.

  6. Computer Sciences and Data Systems, volume 1

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Topics addressed include: software engineering; university grants; institutes; concurrent processing; sparse distributed memory; distributed operating systems; intelligent data management processes; expert system for image analysis; fault tolerant software; and architecture research.

  7. A Response Surface Methodology for Bi-Level Integrated System Synthesis (BLISS)

    NASA Technical Reports Server (NTRS)

    Altus, Troy David; Sobieski, Jaroslaw (Technical Monitor)

    2002-01-01

    The report describes a new method for optimization of engineering systems such as aerospace vehicles whose design must harmonize a number of subsystems and various physical phenomena, each represented by a separate computer code, e.g., aerodynamics, structures, propulsion, performance, etc. To represent the system internal couplings, the codes receive output from other codes as part of their inputs. The system analysis and optimization task is decomposed into subtasks that can be executed concurrently, each subtask conducted using local state and design variables and holding constant a set of the system-level design variables. The subtask results are stored in the form of Response Surfaces (RS) fitted in the space of the system-level variables, to be used as subtask surrogates in a system-level optimization whose purpose is to optimize the system objective(s) and to reconcile the system internal couplings. By virtue of decomposition and execution concurrency, the method enables a broad work front in the organization of an engineering project involving a number of specialty groups that might be geographically dispersed, and it exploits the contemporary computing technology of massively concurrent and distributed processing. The report includes a demonstration test case of a supersonic business jet design.
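
    The response-surface surrogate idea described above can be illustrated with a minimal one-dimensional sketch: subtask results sampled at a few settings of a system-level variable are fitted with a quadratic response surface, which the system-level optimizer then minimizes instead of re-running the subtask. The subtask_objective function and sample points are illustrative assumptions, not the supersonic business jet test case.

        # Sketch: quadratic response surface as a surrogate for an expensive subtask optimization.
        import numpy as np

        def subtask_objective(z):
            # Stand-in (assumption) for an expensive subtask optimum evaluated at system variable z.
            return (z - 1.7) ** 2 + 0.3 * np.sin(3.0 * z)

        z_samples = np.linspace(0.0, 3.0, 7)                  # design points in system-variable space
        f_samples = subtask_objective(z_samples)
        rs = np.poly1d(np.polyfit(z_samples, f_samples, 2))   # fitted quadratic response surface

        # The system-level optimization queries the cheap surrogate instead of the subtask.
        z_grid = np.linspace(0.0, 3.0, 301)
        z_star = z_grid[np.argmin(rs(z_grid))]
        print(z_star, rs(z_star), subtask_objective(z_star))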

  8. EPAs Virtual Embryo: Modeling Developmental Toxicity

    EPA Science Inventory

    Embryogenesis is regulated by concurrent activities of signaling pathways organized into networks that control spatial patterning, molecular clocks, morphogenetic rearrangements and cell differentiation. Quantitative mathematical and computational models are needed to better unde...

  9. Multigrid methods with space–time concurrency

    DOE PAGES

    Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.; ...

    2017-10-06

    Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.
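
    A minimal sketch contrasting the three coarsening options named above on a space-time grid follows: full space-time coarsening, spatial semicoarsening, and temporal semicoarsening. It only reports the grid hierarchy each option produces; the multigrid cycling, smoothers and parallel decomposition studied in the paper are not modeled, and the grid sizes are illustrative assumptions.

        # Sketch: space-time grid hierarchies produced by the three coarsening options.
        def hierarchy(nx, nt, mode, min_size=4):
            levels = [(nx, nt)]
            while levels[-1][0] > min_size and levels[-1][1] > min_size:
                x, t = levels[-1]
                if mode == "space-time":
                    x, t = x // 2, t // 2        # coarsen space and time together
                elif mode == "space":
                    x = x // 2                   # spatial semicoarsening only
                elif mode == "time":
                    t = t // 2                   # temporal semicoarsening only
                levels.append((x, t))
            return levels

        for mode in ("space-time", "space", "time"):
            print(mode, hierarchy(256, 1024, mode))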

  10. A concurrent distributed system for aircraft tactical decision generation

    NASA Technical Reports Server (NTRS)

    Mcmanus, John W.

    1990-01-01

    A research program investigating the use of AI techniques to aid in the development of a tactical decision generator (TDG) for within visual range (WVR) air combat engagements is discussed. The application of AI programming and problem-solving methods in the development and implementation of a concurrent version of the computerized logic for air-to-air warfare simulations (CLAWS) program, a second-generation TDG, is presented. Concurrent computing environments and programming approaches are discussed, and the design and performance of a prototype concurrent TDG system (Cube CLAWS) are presented. It is concluded that the Cube CLAWS has provided a useful testbed to evaluate the development of a distributed blackboard system. The project has shown that the complexity of developing specialized software on a distributed, message-passing architecture such as the Hypercube is not overwhelming, and that reasonable speedups and processor efficiency can be achieved by a distributed blackboard system. The project has also highlighted some of the costs of using a distributed approach to designing a blackboard system.

  11. Multigrid methods with space–time concurrency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.

    Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.

  12. Multiple and concurrent sexual partnerships among men who have sex with men in Viet Nam: results from a National Internet-based Cross-sectional Survey.

    PubMed

    García, M C; Duong, Q L; Meyer, S B; Ward, P R

    2016-03-01

    Men who have sex with men (MSM) are one of the largest HIV risk groups in Viet Nam and have been understudied. Sexual concurrency and multiple sex partnerships may contribute to high HIV incidence among MSM in Viet Nam. Limited information is available on concurrency and multiple sexual partnerships among MSM in Viet Nam or on the extent to which this population engages in concurrent and multiple unprotected anal intercourse. Data are from a self-administered Internet-based survey of Vietnamese MSM aged 18 years or older, having sex with male partner(s) in the last 12 months and recruited from social networking MSM-specific websites in Viet Nam. Multiple partnerships and concurrency were measured using the UNAIDS-recommended sexual partner matrix, a key component in the questionnaire. Concurrent and multiple sexual partnerships were analyzed at the individual level. Logistic regression analyses were conducted to assess the demographic characteristics and behaviors associated with multiple sexual partnerships. A total of 1695 MSM reported on multiple sexual partnerships; 69.5% indicated multiple sexual partnerships in the last 6 months. A total of 257 MSM reported on concurrent sexual partnerships, with 51.0% reporting penetrative sex with concurrent partners in the last 6 months. Respondents were more likely to engage in multiple sexual partnerships if they were no longer a student, consumed alcohol before and/or during sex, used the Internet to meet casual sex partners and had never participated in a behavioral HIV intervention. Multiple sexual partnerships in the previous 6 months were common among MSM surveyed, as was sexual concurrency. High levels of multiple and concurrent sexual partnerships may be catalyzing the transmission of HIV among MSM in Viet Nam. Given the high prevalence of this high-risk sexual behavior, our findings underscore the urgent need for targeted prevention efforts, focusing on the reduction of multiple and concurrent sexual partners among this key population. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. High fold computer disk storage DATABASE for fast extended analysis of γ-rays events

    NASA Astrophysics Data System (ADS)

    Stézowski, O.; Finck, Ch.; Prévost, D.

    1999-03-01

    Recently, spectacular technical developments have been achieved to increase the resolving power of large γ-ray spectrometers. With these new eyes, physicists are able to study the intricate nature of atomic nuclei. Concurrently, more and more complex multidimensional analyses are needed to investigate very weak phenomena. In this article, we first present a software package (DATABASE) allowing high-fold coincidence γ-ray events to be stored on hard disk. Then, a non-conventional method of analysis, the anti-gating procedure, is described. Two physical examples are given to explain how it can be used, and Monte Carlo simulations have been performed to test the validity of this method.

  14. Multiple grid problems on concurrent-processing computers

    NASA Technical Reports Server (NTRS)

    Eberhardt, D. S.; Baganoff, D.

    1986-01-01

    Three computer codes were studied which make use of concurrent processing computer architectures in computational fluid dynamics (CFD). The three parallel codes were tested on a two processor multiple-instruction/multiple-data (MIMD) facility at NASA Ames Research Center, and are suggested for efficient parallel computations. The first code is a well-known program which makes use of the Beam and Warming, implicit, approximate factored algorithm. This study demonstrates the parallelism found in a well-known scheme and it achieved speedups exceeding 1.9 on the two processor MIMD test facility. The second code studied made use of an embedded grid scheme which is used to solve problems having complex geometries. The particular application for this study considered an airfoil/flap geometry in an incompressible flow. The scheme eliminates some of the inherent difficulties found in adapting approximate factorization techniques onto MIMD machines and allows the use of chaotic relaxation and asynchronous iteration techniques. The third code studied is an application of overset grids to a supersonic blunt body problem. The code addresses the difficulties encountered when using embedded grids on a compressible, and therefore nonlinear, problem. The complex numerical boundary system associated with overset grids is discussed and several boundary schemes are suggested. A boundary scheme based on the method of characteristics achieved the best results.

  15. Measuring quality indicators in the operating room: cleaning and turnover time.

    PubMed

    Jericó, Marli de Carvalho; Perroca, Márcia Galan; da Penha, Vivian Colombo

    2011-01-01

    This exploratory-descriptive study was carried out in the Surgical Center Unit of a university hospital, aiming to measure the time spent on concurrent cleaning performed by the cleaning service and the turnover time; it also investigated potential associations between cleaning time and the surgery's magnitude and specialty, the period of the day, and the room's size. The sample consisted of 101 surgeries for computing cleaning time and 60 surgeries for computing turnover time. The Kaplan-Meier method was used to analyze time, and Pearson's correlation was used to study potential associations. The time spent on concurrent cleaning was 7.1 minutes and the turnover time was 35.6 minutes. No association between cleaning time and the other variables was found. These findings can support nurses in the efficient use of resources, thereby speeding up the work process in the operating room.

  16. Locality Aware Concurrent Start for Stencil Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrestha, Sunil; Gao, Guang R.; Manzano Franco, Joseph B.

    Stencil computations are at the heart of many physical simulations used in scientific codes. Thus, there exists a plethora of optimization efforts for this family of computations. Among these techniques, tiling techniques that allow concurrent start have proven to be very efficient in providing better performance for these critical kernels. Nevertheless, with many-core designs being the norm, these optimization techniques might not be able to fully exploit locality (both spatial and temporal) on multiple levels of the memory hierarchy without compromising parallelism. It is no longer true that the machine can be seen as a homogeneous collection of nodes with caches, main memory and an interconnect network. New architectural designs exhibit complex groupings of nodes, cores, threads, caches and memory connected by an ever evolving network-on-chip design. These new designs may benefit greatly from carefully crafted schedules and groupings that encourage parallel actors (i.e. threads, cores or nodes) to be aware of the computational history of other actors in close proximity. In this paper, we provide an efficient tiling technique that allows hierarchical concurrent start for memory-hierarchy-aware tile groups. Each execution schedule and tile shape exploits the available parallelism, load balance and locality present in the given applications. We demonstrate our technique on the Intel Xeon Phi architecture with selected and representative stencil kernels. We show improvement ranging from 5.58% to 31.17% over existing state-of-the-art techniques.
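
    The hierarchical, locality-aware tiles described in the abstract go well beyond it, but the basic idea of concurrent start can be sketched with plain spatial tiling: within one time step, every tile reads only the previous time level, so all tiles may begin at once with no pipeline fill-in. The sketch below is a much simplified illustration, not the paper's scheme; Python threads are used only to show the dependence structure (a real implementation would use compiled kernels), and the stencil, tile size and names are made up.

      import numpy as np
      from concurrent.futures import ThreadPoolExecutor

      def step_tile(src, dst, lo, hi):
          """Apply a 3-point averaging stencil to dst[lo:hi], reading only src."""
          for i in range(max(lo, 1), min(hi, len(src) - 1)):
              dst[i] = (src[i - 1] + src[i] + src[i + 1]) / 3.0

      def run(u, n_steps=100, tile=256, workers=4):
          """Advance the stencil n_steps times.  Within a time step every spatial
          tile only reads the previous array, so all tiles can start concurrently."""
          src, dst = u.copy(), u.copy()
          tiles = [(lo, lo + tile) for lo in range(0, len(u), tile)]
          with ThreadPoolExecutor(max_workers=workers) as pool:
              for _ in range(n_steps):
                  list(pool.map(lambda t: step_tile(src, dst, *t), tiles))
                  src, dst = dst, src          # swap time levels after the step
          return src

      u = np.zeros(4096)
      u[2048] = 1.0                            # initial impulse
      result = run(u)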

  17. A Computer-Aided Telephone System to Enable Five Persons with Alzheimer's Disease to Make Phone Calls Independently

    ERIC Educational Resources Information Center

    Perilli, Viviana; Lancioni, Giulio E.; Laporta, Dominga; Paparella, Adele; Caffo, Alessandro O.; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Oliva, Doretta

    2013-01-01

    This study extended the assessment of a computer-aided telephone system to enable five patients with a diagnosis of Alzheimer's disease to make phone calls independently. The patients were divided into two groups and exposed to intervention according to a non-concurrent multiple baseline design across groups. All patients started with baseline in…

  18. MIT Laboratory for Computer Science Progress Report 27

    DTIC Science & Technology

    1990-06-01

    Fragments indexed from the report: ...because of the natural, yet unexploited, concurrence that characterizes contemporary and prospective applications from business to sensory computing... Advanced Network Architecture (Academic Staff: D. Clark, Group Leader; D. Tennenhouse; J. Saltzer. Research Staff: J. Davin; K. Sollins)... Murray Hill, NJ, July 1989... Clinical Decision Making (Academic Staff: R. Patil; P. Szolovits, Group Leader; G. Rennels. Collaborating Investigators: M...)

  19. Evolution of a standard microprocessor-based space computer

    NASA Technical Reports Server (NTRS)

    Fernandez, M.

    1980-01-01

    An existing, in-inventory computer hardware/software package (B-1 RFS/ECM) was repackaged and applied to multiple missile/space programs. Concurrent with the application efforts, low-risk modifications were made to the computer from program to program to take advantage of newer, advanced technology and to meet increasingly more demanding requirements (computational and memory capabilities, longer life, and fault-tolerant autonomy). It is concluded that microprocessors hold promise in a number of critical areas for future space computer applications. However, the benefits of the DoD VHSIC Program are required, and the old proliferation problem must be revisited.

  20. Proceedings of USC (University of Southern California) Workshop on VLSI (Very Large Scale Integration) & Modern Signal Processing, held at Los Angeles, California on 1-3 November 1982

    DTIC Science & Technology

    1983-11-15

    Concurrent Algorithms", A. Cremers , Dortmund University, West Germany, and T. Hibbard, JPL, Pasadena, CA 64 "An Overview of Signal Representations in...n O f\\ n O P- A -> Problem-oriented specification of concurrent algorithms Armin B. Cremers and Thomas N. Hibbard Preliminary version September...1982 s* Armin B. Cremers Computer Science Department University of Dortmund P.O. Box 50 05 00 D-4600 Dortmund 50 Fed. Rep. Germany

  1. Human choice among five alternatives when reinforcers decay.

    PubMed

    Rothstein, Jacob B; Jensen, Greg; Neuringer, Allen

    2008-06-01

    Human participants played a computer game in which choices among five alternatives were concurrently reinforced according to dependent random-ratio schedules. "Dependent" indicates that choices to any of the wedges activated the random-number generators governing reinforcers on all five alternatives. Two conditions were compared. In the hold condition, once scheduled, a reinforcer - worth a constant five points - remained available until it was collected. In the decay condition, point values decreased with intervening responses, i.e., rapid collection was differentially reinforced. Slopes of matching functions were higher in the decay than in the hold condition. However, inter-subject variability was high in both conditions.

  2. Integrated System-Level Optimization for Concurrent Engineering With Parametric Subsystem Modeling

    NASA Technical Reports Server (NTRS)

    Schuman, Todd; DeWeck, Oliver L.; Sobieski, Jaroslaw

    2005-01-01

    The introduction of concurrent design practices to the aerospace industry has greatly increased the productivity of engineers and teams during design sessions as demonstrated by JPL's Team X. Simultaneously, advances in computing power have given rise to a host of potent numerical optimization methods capable of solving complex multidisciplinary optimization problems containing hundreds of variables, constraints, and governing equations. Unfortunately, such methods are tedious to set up and require significant amounts of time and processor power to execute, thus making them unsuitable for rapid concurrent engineering use. This paper proposes a framework for Integration of System-Level Optimization with Concurrent Engineering (ISLOCE). It uses parametric neural-network approximations of the subsystem models. These approximations are then linked to a system-level optimizer that is capable of reaching a solution quickly due to the reduced complexity of the approximations. The integration structure is described in detail and applied to the multiobjective design of a simplified Space Shuttle external fuel tank model. Further, a comparison is made between the new framework and traditional concurrent engineering (without system optimization) through an experimental trial with two groups of engineers. Each method is evaluated in terms of optimizer accuracy, time to solution, and ease of use. The results suggest that system-level optimization, running as a background process during integrated concurrent engineering sessions, is potentially advantageous as long as it is judiciously implemented.

  3. Exascale computing and what it means for shock physics

    NASA Astrophysics Data System (ADS)

    Germann, Timothy

    2015-06-01

    The U.S. Department of Energy is preparing to launch an Exascale Computing Initiative, to address the myriad challenges required to deploy and effectively utilize an exascale-class supercomputer (i.e., one capable of performing 10^18 operations per second) in the 2023 timeframe. Since physical (power dissipation) requirements limit clock rates to at most a few GHz, this will necessitate the coordination of on the order of a billion concurrent operations, requiring sophisticated system and application software, and underlying mathematical algorithms, that may differ radically from traditional approaches. Even at the smaller workstation or cluster level of computation, the massive concurrency and heterogeneity within each processor will impact computational scientists. Through the multi-institutional, multi-disciplinary Exascale Co-design Center for Materials in Extreme Environments (ExMatEx), we have initiated an early and deep collaboration between domain (computational materials) scientists, applied mathematicians, computer scientists, and hardware architects, in order to establish the relationships between algorithms, software stacks, and architectures needed to enable exascale-ready materials science application codes within the next decade. In my talk, I will discuss these challenges, and what it will mean for exascale-era electronic structure, molecular dynamics, and engineering-scale simulations of shock-compressed condensed matter. In particular, we anticipate that the emerging hierarchical, heterogeneous architectures can be exploited to achieve higher physical fidelity simulations using adaptive physics refinement. This work is supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research.

  4. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search.

    PubMed

    Chang, Yuan-Jyun; Hwang, Wen-Jyi; Chen, Chih-Chang

    2016-12-07

    The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.
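
    The two-feature extraction step described above (split a spike waveform at its peak and use the area of each portion as a feature) is simple enough to prototype in software before committing it to hardware. A minimal sketch, assuming each spike arrives as a one-dimensional array of samples; the function name and synthetic spike are illustrative only.

      import numpy as np

      def peak_area_features(spike):
          """Split a spike waveform at its peak sample and return the area
          (sum of absolute amplitudes) of the portion before and after the peak."""
          spike = np.asarray(spike, dtype=float)
          peak = int(np.argmax(np.abs(spike)))          # index of the peak value
          area_before = np.abs(spike[:peak + 1]).sum()  # includes the peak sample
          area_after = np.abs(spike[peak + 1:]).sum()
          return area_before, area_after

      # Example: a noisy synthetic spike
      rng = np.random.default_rng(1)
      t = np.linspace(0, 1, 64)
      spike = np.exp(-((t - 0.3) ** 2) / 0.002) + 0.05 * rng.standard_normal(64)
      print(peak_area_features(spike))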

  5. Randomized Clinical Trial of Weekly vs. Triweekly Cisplatin-Based Chemotherapy Concurrent With Radiotherapy in the Treatment of Locally Advanced Cervical Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryu, Sang-Young, E-mail: ryu@kcch.re.kr; Lee, Won-Moo; Kim, Kidong

    Purpose: To compare compliance, toxicity, and outcome of weekly and triweekly cisplatin administration concurrent with radiotherapy in locally advanced cervical cancer. Methods and Materials: In this open-label, randomized trial, 104 patients with histologically proven Stage IIB-IVA cervical cancer were randomly assigned by a computer-generated procedure to weekly (weekly cisplatin 40 mg/m², six cycles) and triweekly (cisplatin 75 mg/m² every 3 weeks, three cycles) chemotherapy arms during concurrent radiotherapy. The difference of compliance and the toxicity profiles between the two arms were investigated, and the overall survival rate was analyzed after 5 years. Results: All patients tolerated both treatments very well, with a high completion rate of scheduled chemotherapy cycles. There was no statistically significant difference in compliance between the two arms (86.3% in the weekly arm, 92.5% in the triweekly arm, p > 0.05). Grade 3-4 neutropenia was more frequent in the weekly arm (39.2%) than in the triweekly arm (22.6%) (p = 0.03). The overall 5-year survival rate was significantly higher in the triweekly arm (88.7%) than in the weekly arm (66.5%) (hazard ratio 0.375; 95% confidence interval 0.154-0.914; p = 0.03). Conclusions: Triweekly cisplatin 75-mg/m² chemotherapy concurrent with radiotherapy is more effective and feasible than the conventional weekly cisplatin 40-mg/m² regimen and may be a strong candidate for the optimal cisplatin dose and dosing schedule in the treatment of locally advanced cervical cancer.

  6. Experiences of High-Achieving High School Students Who Have Taken Multiple Concurrent Advanced Placement Courses

    ERIC Educational Resources Information Center

    Milburn, Kristine M.

    2011-01-01

    Problem: An increasing number of high-achieving American high school students are enrolling in multiple Advanced Placement (AP) courses. As a result, high schools face a growing need to understand the impact of taking multiple AP courses concurrently on the social-emotional lives of high-achieving students. Procedures: This phenomenological…

  7. Concurrent performance in a three-alternative choice situation: response allocation in a Rock/Paper/Scissors game.

    PubMed

    Kangas, Brian D; Berry, Meredith S; Cassidy, Rachel N; Dallery, Jesse; Vaidya, Manish; Hackenberg, Timothy D

    2009-10-01

    Adult human subjects engaged in a simulated Rock/Paper/Scissors game against a computer opponent. The computer opponent's responses were determined by programmed probabilities that differed across 10 blocks of 100 trials each. Response allocation in Experiment 1 was well described by a modified version of the generalized matching equation, with undermatching observed in all subjects. To assess the effects of instructions on response allocation, accurate probability-related information on how the computer was programmed to respond was provided to subjects in Experiment 2. Five of 6 subjects played the counter response of the computer's dominant programmed response near-exclusively (e.g., subjects played paper almost exclusively if the probability of rock was high), resulting in minor overmatching, and higher reinforcement rates relative to Experiment 1. On the whole, the study shows that the generalized matching law provides a good description of complex human choice in a gaming context, and illustrates a promising set of laboratory methods and analytic techniques that capture important features of human choice outside the laboratory.
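
    Response allocation was fit with a modified generalized matching equation; in its standard log-ratio form, log(B1/B2) = a·log(R1/R2) + log(b), a slope a below 1 indicates undermatching and above 1 overmatching. A short sketch of estimating a and b by ordinary least squares follows; the ratios below are invented for illustration, not the study's data.

      import numpy as np

      # Hypothetical behavior (B) and reinforcer (R) ratios from successive blocks.
      B_ratio = np.array([0.4, 0.8, 1.3, 2.5, 4.0])   # responses on A / responses on B
      R_ratio = np.array([0.3, 0.7, 1.5, 3.0, 6.0])   # reinforcers on A / reinforcers on B

      # Generalized matching law: log(B1/B2) = a * log(R1/R2) + log(b)
      a, log_b = np.polyfit(np.log(R_ratio), np.log(B_ratio), 1)
      print(f"sensitivity a = {a:.2f}  (a < 1 indicates undermatching)")
      print(f"bias b = {np.exp(log_b):.2f}")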

  8. A projected preconditioned conjugate gradient algorithm for computing many extreme eigenpairs of a Hermitian matrix [A projected preconditioned conjugate gradient algorithm for computing a large eigenspace of a Hermitian matrix]

    DOE PAGES

    Vecharynski, Eugene; Yang, Chao; Pask, John E.

    2015-02-25

    Here, we present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
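
    The projected preconditioned conjugate gradient algorithm itself is specialized, but its closest widely available relative, LOBPCG, illustrates the block structure the abstract refers to: a block of starting vectors, a preconditioner applied block-wise, and BLAS3-friendly operations. The SciPy usage sketch below is hedged: it is not the authors' PPCG implementation, and the test matrix, block size and tolerances are made up.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import lobpcg

      n, k = 1000, 20                      # matrix size, number of wanted eigenpairs
      # Sparse symmetric test matrix: a 1-D Laplacian; we want the smallest eigenvalues.
      A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

      rng = np.random.default_rng(0)
      X = rng.standard_normal((n, k))      # block of k starting vectors

      M = sp.diags(1.0 / A.diagonal())     # Jacobi (diagonal) preconditioner

      vals, vecs = lobpcg(A, X, M=M, largest=False, tol=1e-6, maxiter=400)
      print(np.sort(vals)[:5])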

  9. High-throughput state-machine replication using software transactional memory.

    PubMed

    Zhao, Wenbing; Yang, William; Zhang, Honglei; Yang, Jack; Luo, Xiong; Zhu, Yueqin; Yang, Mary; Luo, Chaomin

    2016-11-01

    State-machine replication is a common way of constructing general purpose fault tolerance systems. To ensure replica consistency, requests must be executed sequentially according to some total order at all non-faulty replicas. Unfortunately, this could severely limit the system throughput. This issue has been partially addressed by identifying non-conflicting requests based on application semantics and executing these requests concurrently. However, identifying and tracking non-conflicting requests require intimate knowledge of application design and implementation, and a custom fault tolerance solution developed for one application cannot be easily adopted by other applications. Software transactional memory offers a new way of constructing concurrent programs. In this article, we present the mechanisms needed to retrofit existing concurrency control algorithms designed for software transactional memory for state-machine replication. The main benefit for using software transactional memory in state-machine replication is that general purpose concurrency control mechanisms can be designed without deep knowledge of application semantics. As such, new fault tolerance systems based on state-machine replications with excellent throughput can be easily designed and maintained. In this article, we introduce three different concurrency control mechanisms for state-machine replication using software transactional memory, namely, ordered strong strict two-phase locking, conventional timestamp-based multiversion concurrency control, and speculative timestamp-based multiversion concurrency control. Our experiments show that the speculative timestamp-based multiversion concurrency control mechanism has the best performance in all types of workload, while the conventional timestamp-based multiversion concurrency control offers the worst performance due to a high abort rate in the presence of even moderate contention between transactions. The ordered strong strict two-phase locking mechanism offers the simplest solution with excellent performance in low contention workload, and fairly good performance in high contention workload.
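
    Of the three mechanisms, ordered strong strict two-phase locking is the simplest to sketch: each replicated request declares the keys it touches up front and acquires per-key locks strictly in the agreed total order, so conflicting requests serialize in that order while non-conflicting requests run concurrently. The toy sketch below is a simplification of that idea, not the article's implementation; class and variable names are made up.

      import threading
      from collections import defaultdict

      class OrderedLockTable:
          """Grant per-key locks strictly in total-order (sequence number) order.

          A request may acquire a key only after every earlier-ordered request
          that also declared that key has released it, preserving the replica-wide
          total order for conflicting requests while letting independent ones run
          in parallel."""
          def __init__(self):
              self._cv = threading.Condition()
              self._pending = defaultdict(list)   # key -> seq numbers still needing it

          def register(self, seq, keys):
              with self._cv:
                  for k in keys:
                      self._pending[k].append(seq)
                      self._pending[k].sort()

          def acquire(self, seq, keys):
              with self._cv:
                  self._cv.wait_for(lambda: all(self._pending[k][0] == seq for k in keys))

          def release(self, seq, keys):
              with self._cv:
                  for k in keys:
                      self._pending[k].remove(seq)
                  self._cv.notify_all()

      # Example: two conflicting requests (both touch "x") and one independent one.
      table, store = OrderedLockTable(), {"x": 0, "y": 0}

      def request(seq, keys, fn):
          table.acquire(seq, keys)
          try:
              fn(store)
          finally:
              table.release(seq, keys)

      for seq, keys in [(1, ["x"]), (2, ["x"]), (3, ["y"])]:
          table.register(seq, keys)

      threads = [
          threading.Thread(target=request, args=(2, ["x"], lambda s: s.__setitem__("x", s["x"] * 10))),
          threading.Thread(target=request, args=(1, ["x"], lambda s: s.__setitem__("x", s["x"] + 5))),
          threading.Thread(target=request, args=(3, ["y"], lambda s: s.__setitem__("y", 7))),
      ]
      for t in threads: t.start()
      for t in threads: t.join()
      print(store)   # {'x': 50, 'y': 7} regardless of thread scheduling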

  10. High-throughput state-machine replication using software transactional memory

    PubMed Central

    Yang, William; Zhang, Honglei; Yang, Jack; Luo, Xiong; Zhu, Yueqin; Yang, Mary; Luo, Chaomin

    2017-01-01

    State-machine replication is a common way of constructing general purpose fault tolerance systems. To ensure replica consistency, requests must be executed sequentially according to some total order at all non-faulty replicas. Unfortunately, this could severely limit the system throughput. This issue has been partially addressed by identifying non-conflicting requests based on application semantics and executing these requests concurrently. However, identifying and tracking non-conflicting requests require intimate knowledge of application design and implementation, and a custom fault tolerance solution developed for one application cannot be easily adopted by other applications. Software transactional memory offers a new way of constructing concurrent programs. In this article, we present the mechanisms needed to retrofit existing concurrency control algorithms designed for software transactional memory for state-machine replication. The main benefit for using software transactional memory in state-machine replication is that general purpose concurrency control mechanisms can be designed without deep knowledge of application semantics. As such, new fault tolerance systems based on state-machine replications with excellent throughput can be easily designed and maintained. In this article, we introduce three different concurrency control mechanisms for state-machine replication using software transactional memory, namely, ordered strong strict two-phase locking, conventional timestamp-based multiversion concurrency control, and speculative timestamp-based multiversion concurrency control. Our experiments show that speculative timestamp-based multiversion concurrency control mechanism has the best performance in all types of workload, the conventional timestamp-based multiversion concurrency control offers the worst performance due to high abort rate in the presence of even moderate contention between transactions. The ordered strong strict two-phase locking mechanism offers the simplest solution with excellent performance in low contention workload, and fairly good performance in high contention workload. PMID:29075049

  11. Self-consistent clustering analysis: an efficient multiscale scheme for inelastic heterogeneous materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Z.; Bessa, M. A.; Liu, W.K.

    A predictive computational theory is presented for modeling complex, hierarchical materials ranging from metal alloys to polymer nanocomposites. The theory can capture complex mechanisms such as plasticity and failure that span across multiple length scales. This general multiscale material modeling theory relies on sound principles of mathematics and mechanics, and a cutting-edge reduced order modeling method named self-consistent clustering analysis (SCA) [Zeliang Liu, M.A. Bessa, Wing Kam Liu, “Self-consistent clustering analysis: An efficient multi-scale scheme for inelastic heterogeneous materials,” Comput. Methods Appl. Mech. Engrg. 306 (2016) 319–341]. SCA reduces by several orders of magnitude the computational cost of micromechanical and concurrent multiscale simulations, while retaining the microstructure information. This remarkable increase in efficiency is achieved with a data-driven clustering method. Computationally expensive operations are performed in the so-called offline stage, where degrees of freedom (DOFs) are agglomerated into clusters. The interaction tensor of these clusters is computed. In the online or predictive stage, the Lippmann-Schwinger integral equation is solved cluster-wise using a self-consistent scheme to ensure solution accuracy and avoid path dependence. To construct a concurrent multiscale model, this scheme is applied at each material point in a macroscale structure, replacing a conventional constitutive model with the average response computed from the microscale model using just the SCA online stage. A regularized damage theory is incorporated in the microscale that avoids the mesh and RVE size dependence that commonly plagues microscale damage calculations. The SCA method is illustrated with two cases: a carbon fiber reinforced polymer (CFRP) structure with the concurrent multiscale model and an application to fatigue prediction for additively manufactured metals. For the CFRP problem, a speed-up estimated to be about 43,000 is achieved by using the SCA method, as opposed to FE², enabling the solution of an otherwise computationally intractable problem. The second example uses a crystal plasticity constitutive law and computes the fatigue potency of extrinsic microscale features such as voids. This shows that local stress and strain are captured sufficiently well by SCA. This model has been incorporated in a process-structure-properties prediction framework for process design in additive manufacturing.
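
    In the offline stage of SCA, material points of the representative volume element are agglomerated into clusters of similar mechanical response, for example by k-means clustering of strain-concentration data; the online stage then solves the Lippmann-Schwinger equation cluster by cluster. The sketch below covers only the clustering step, with random stand-ins for the strain-concentration tensors; it is an illustration of the idea, not the authors' code, and the sizes and names are made up.

      import numpy as np
      from sklearn.cluster import KMeans

      # Hypothetical offline-stage data: one flattened elastic strain-concentration
      # tensor per voxel of the RVE (here random stand-ins for precomputed results).
      rng = np.random.default_rng(0)
      n_voxels = 10_000
      A = rng.standard_normal((n_voxels, 36))        # 6x6 tensors, flattened

      # Agglomerate voxels into a small number of clusters; every voxel in a cluster
      # shares one reduced degree of freedom in the online stage.
      k = 16
      labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(A)

      # Cluster volume fractions feed the reduced Lippmann-Schwinger system.
      volume_fraction = np.bincount(labels, minlength=k) / n_voxels
      print(volume_fraction)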

  12. Flexible Ionic-Electronic Hybrid Oxide Synaptic TFTs with Programmable Dynamic Plasticity for Brain-Inspired Neuromorphic Computing.

    PubMed

    John, Rohit Abraham; Ko, Jieun; Kulkarni, Mohit R; Tiwari, Naveen; Chien, Nguyen Anh; Ing, Ng Geok; Leong, Wei Lin; Mathews, Nripan

    2017-08-01

    Emulation of biological synapses is necessary for future brain-inspired neuromorphic computational systems that could look beyond the standard von Neumann architecture. Here, artificial synapses based on ionic-electronic hybrid oxide-based transistors on rigid and flexible substrates are demonstrated. The flexible transistors reported here depict a high field-effect mobility of ≈9 cm² V⁻¹ s⁻¹ with good mechanical performance. Comprehensive learning abilities/synaptic rules like paired-pulse facilitation, excitatory and inhibitory postsynaptic currents, spike-time-dependent plasticity, consolidation, superlinear amplification, and dynamic logic are successfully established depicting concurrent processing and memory functionalities with spatiotemporal correlation. The results present a fully solution processable approach to fabricate artificial synapses for next-generation transparent neural circuits. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Comparing the development of the multiplication of fractions in Turkish and American textbooks

    NASA Astrophysics Data System (ADS)

    Kar, Tuğrul; Güler, Gürsel; Şen, Ceylan; Özdemir, Ercan

    2018-02-01

    This study analyzed the methods used to teach the multiplication of fractions in Turkish and American textbooks. Two Turkish textbooks and two American textbooks, Everyday Mathematics (EM) and Connected Mathematics 3 (CM), were analyzed. The analyses focused on the content and the nature of the mathematical problems presented in the textbooks. The findings of the study showed that the American textbooks aimed at developing conceptual understanding first and then procedural fluency, whereas the Turkish textbooks aimed at developing both concurrently. The American textbooks provided more opportunities for different computational strategies. The solutions to most problems in all textbooks required a single computational step, a numerical answer, and procedural knowledge. Furthermore, compared with the Turkish textbooks, the American textbooks contained a greater number of problems that required high-level cognitive skills such as mathematical reasoning.

  14. A modularized pulse programmer for NMR spectroscopy

    NASA Astrophysics Data System (ADS)

    Mao, Wenping; Bao, Qingjia; Yang, Liang; Chen, Yiqun; Liu, Chaoyang; Qiu, Jianqing; Ye, Chaohui

    2011-02-01

    A modularized pulse programmer for an NMR spectrometer is described. It consists of a networked PCI-104 single-board computer and a field programmable gate array (FPGA). The PCI-104 is dedicated to translating the pulse sequence elements from the host computer into 48-bit binary words and downloading these words to the FPGA, while the FPGA functions as a sequencer to execute these binary words. High-resolution NMR spectra obtained on a home-built spectrometer with four pulse programmers working concurrently demonstrate the effectiveness of the pulse programmer. Advantages of the module include (1) once designed, it can be duplicated and used to construct a scalable NMR/MRI system with multiple transmitter and receiver channels, (2) it is a totally programmable system in which all specific applications are determined by software, and (3) it provides enough reserve for possible new pulse sequences.

  15. Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop

    NASA Astrophysics Data System (ADS)

    Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.

    2018-04-01

    The data center is a new concept of data processing and application proposed in recent years. It is a new method of processing technologies based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes cluster resource computing nodes and improves the efficiency of data-parallel applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it called many computing nodes to process image storage blocks and pyramids in the background to improve the efficiency of image reading and application, and met the need for concurrent multi-user high-speed access to remotely sensed data. The rationality, reliability and superiority of the system design were verified by testing the storage efficiency of different image data with multiple users, and by analyzing how the distributed storage architecture improves the application efficiency of remote sensing images in an actual Hadoop service system.

  16. KeyWare: an open wireless distributed computing environment

    NASA Astrophysics Data System (ADS)

    Shpantzer, Isaac; Schoenfeld, Larry; Grindahl, Merv; Kelman, Vladimir

    1995-12-01

    Deployment of distributed applications in the wireless domain lacks the equivalent tools, methodologies, architectures, and network management that exist in LAN-based applications. A wireless distributed computing environment (KeyWareTM) based on intelligent agents within a multiple-client multiple-server scheme was developed to resolve this problem. KeyWare renders concurrent application services to wireline and wireless client nodes encapsulated in multiple paradigms such as message delivery, database access, e-mail, and file transfer. These services and paradigms are optimized to cope with temporal and spatial radio coverage, high latency, limited throughput and transmission costs. A unified network management paradigm for both wireless and wireline facilitates seamless extension of LAN-based management tools to include wireless nodes. A set of object-oriented tools and methodologies enables direct asynchronous invocation of agent-based services supplemented by tool-sets matched to supported KeyWare paradigms. The open architecture embodiment of KeyWare enables a wide selection of client node computing platforms, operating systems, transport protocols, radio modems and infrastructures while maintaining application portability.

  17. SEI Report on Graduate Software Engineering Education for 1991

    DTIC Science & Technology

    1991-04-01

    Fragments indexed from the report's bibliography: ...12, 12 (Dec. 1979), 85-94. Andrews83: Andrews, Gregory R. and Schneider, Fred B., "Concepts and Notations for Concurrent Programming," ACM Computing... Barringer87: Barringer, H., "Up and Down the Temporal Way," Computer J. 30, 2 (Apr. 1987), 134-148. Bjørner78: The Vienna Development Method: The Meta-Language... Lecture Notes in Computer Science. Bruns86: Bruns, Glenn R., Technology Assessment: PAISLEY, Tech. Rep. MCC TR STP-296-86, MCC, Austin, Texas, Sept...

  18. Computer control of a scanning electron microscope for digital image processing of thermal-wave images

    NASA Technical Reports Server (NTRS)

    Gilbert, Percy; Jones, Robert E.; Kramarchuk, Ihor; Williams, Wallace D.; Pouch, John J.

    1987-01-01

    Using a recently developed technology called thermal-wave microscopy, NASA Lewis Research Center has developed a computer controlled submicron thermal-wave microscope for the purpose of investigating III-V compound semiconductor devices and materials. This paper describes the system's design and configuration and discusses the hardware and software capabilities. Knowledge of the Concurrent 3200 series computers is needed for a complete understanding of the material presented. However, concepts and procedures are of general interest.

  19. Study of Solid State Drives performance in PROOF distributed analysis system

    NASA Astrophysics Data System (ADS)

    Panitkin, S. Y.; Ernst, M.; Petkus, R.; Rind, O.; Wenaus, T.

    2010-04-01

    Solid State Drives (SSDs) are a promising storage technology for High Energy Physics parallel analysis farms. Their combination of low random access time and relatively high read speed is very well suited for situations where multiple jobs concurrently access data located on the same drive. They also have lower energy consumption and higher vibration tolerance than Hard Disk Drives (HDDs), which makes them an attractive choice in many applications ranging from personal laptops to large analysis farms. The Parallel ROOT Facility (PROOF) is a distributed analysis system which allows the inherent event-level parallelism of high energy physics data to be exploited. PROOF is especially efficient together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. In such an architecture the local disk subsystem I/O performance becomes a critical factor, especially when computing nodes use multi-core CPUs. We will discuss our experience with SSDs in the PROOF environment. We will compare the performance of HDDs with SSDs in I/O intensive analysis scenarios. In particular we will discuss PROOF system performance scaling with the number of simultaneously running analysis jobs.

  20. A hardware/software environment to support R&D in intelligent machines and mobile robotic systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mann, R.C.

    1990-01-01

    The Center for Engineering Systems Advanced Research (CESAR) serves as a focal point at the Oak Ridge National Laboratory (ORNL) for basic and applied research in intelligent machines. R&D at CESAR addresses issues related to autonomous systems, unstructured (i.e. incompletely known) operational environments, and multiple performing agents. Two mobile robot prototypes (HERMIES-IIB and HERMIES-III) are being used to test new developments in several robot component technologies. This paper briefly introduces the computing environment at CESAR, which includes three hypercube concurrent computers (two on-board the mobile robots), a graphics workstation, a VAX, and multiple VME-based systems (several on-board the mobile robots). The current software environment at CESAR is intended to satisfy several goals, e.g.: code portability, re-usability in different experimental scenarios, modularity, concurrent computer hardware transparent to the applications programmer, future support for multiple mobile robots, support for human-machine interface modules, and support for integration of software from other, geographically disparate laboratories with different hardware set-ups. 6 refs., 1 fig.

  1. Love, Lust, and the Emotional Context of Concurrent Sexual Partnerships among Young Swazi Adults

    PubMed Central

    Ruark, Allison; Dlamini, Lunga; Mazibuko, Nonhlanhla; Green, Edward C.; Kennedy, Caitlin; Nunn, Amy; Flanigan, Timothy; Surkan, Pamela J.

    2014-01-01

    Men and women in Swaziland who are engaged in multiple or concurrent sexual partnerships, or who have sexual partners with concurrent partners, face a very high risk of HIV infection. Ninety-four in-depth interviews were conducted with 28 Swazi men and women (14 of each sex) between the ages of 20 and 39 in order to explore participants’ sexual partnership histories, including motivations for sexual relationships which carried high HIV risk. Concurrency was normative, with most men and women having had at least one concurrent sexual partnership, and all women reporting having had at least one partner who had a concurrent partner. Men distinguished sexual partnerships that were just for sex from those that were considered to be “real relationships”, while women represented the majority of their relationships, even those which included significant financial support, as being based on love. Besides being motivated by love, concurrent sexual partnerships were described as motivated by a lack of sexual satisfaction, a desire for emotional support and/or as a means to exact revenge against a cheating partner. Social and structural factors were also found to play a role in creating an enabling environment for high-risk sexual partnerships, and these factors included social pressure and norms, a lack of social trust, poverty and a desire for material goods, and geographical separation of partners. PMID:25174630

  2. eXascale PRogramming Environment and System Software (XPRESS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, Barbara; Gabriel, Edgar

    Exascale systems, with a thousand times the compute capacity of today’s leading edge petascale computers, are expected to emerge during the next decade. Their software systems will need to facilitate the exploitation of exceptional amounts of concurrency in applications, and ensure that jobs continue to run despite the occurrence of system failures and other kinds of hard and soft errors. Adapting computations at runtime to cope with changes in the execution environment, as well as to improve power and performance characteristics, is likely to become the norm. As a result, considerable innovation is required to develop system support to meet the needs of future computing platforms. The XPRESS project aims to develop and prototype a revolutionary software system for extreme-scale computing for both exascale and strong-scaled problems. The XPRESS collaborative research project will advance the state-of-the-art in high performance computing and enable exascale computing for current and future DOE mission-critical applications and supporting systems. The goals of the XPRESS research project are to: A. enable exascale performance capability for DOE applications, both current and future, B. develop and deliver a practical computing system software X-stack, OpenX, for future practical DOE exascale computing systems, and C. provide programming methods and environments for effective means of expressing application and system software for portable exascale system execution.

  3. EPA'S TOXICOGENOMICS PARTNERSHIPS ACROSS GOVERNMENT, ACADEMIA AND INDUSTRY

    EPA Science Inventory

    Genomics, proteomics and metabonomics technologies are transforming the science of toxicology, and concurrent advances in computing and informatics are providing management and analysis solutions for this onslaught of toxicogenomic data. EPA has been actively developing an intra...

  4. Power module Data Management System (DMS) study

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Computer trades and analyses of selected Power Module Data Management Subsystem issues to support concurrent inhouse MSFC Power Study are provided. The charts which summarize and describe the results are presented. Software requirements and definitions are included.

  5. Methods and devices for determining quality of services of storage systems

    DOEpatents

    Seelam, Seetharami R [Yorktown Heights, NY; Teller, Patricia J [Las Cruces, NM

    2012-01-17

    Methods and systems for allowing access to computer storage systems. Multiple requests from multiple applications can be received and processed efficiently to allow traffic from multiple customers to access the storage system concurrently.

  6. Algorithms and software for nonlinear structural dynamics

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.

    1989-01-01

    The objective of this research is to develop efficient methods for explicit time integration in nonlinear structural dynamics for computers which utilize both concurrency and vectorization. As a framework for these studies, the program WHAMS, which is described in Explicit Algorithms for the Nonlinear Dynamics of Shells (T. Belytschko, J. I. Lin, and C.-S. Tsay, Computer Methods in Applied Mechanics and Engineering, Vol. 42, 1984, pp 225 to 251), is used. There are two factors which make the development of efficient concurrent explicit time integration programs a challenge in a structural dynamics program: (1) the need for a variety of element types, which complicates the scheduling-allocation problem; and (2) the need for different time steps in different parts of the mesh, which is here called mixed delta t integration, so that a few stiff elements do not reduce the time steps throughout the mesh.

  7. Application of the actor model to large scale NDE data analysis

    NASA Astrophysics Data System (ADS)

    Coughlin, Chris

    2018-03-01

    The Actor model of concurrent computation discretizes a problem into a series of independent units or actors that interact only through the exchange of messages. Without direct coupling between individual components, an Actor-based system is inherently concurrent and fault-tolerant. These traits lend themselves to so-called "Big Data" applications in which the volume of data to analyze requires a distributed multi-system design. For a practical demonstration of the Actor computational model, a system was developed to assist with the automated analysis of Nondestructive Evaluation (NDE) datasets using the open source Myriad Data Reduction Framework. A machine learning model trained to detect damage in two-dimensional slices of C-Scan data was deployed in a streaming data processing pipeline. To demonstrate the flexibility of the Actor model, the pipeline was deployed on a local system and re-deployed as a distributed system without recompiling, reconfiguring, or restarting the running application.
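
    The actor pattern described above can be sketched with nothing more than threads and queues: each actor owns a mailbox, processes one message at a time, and communicates only by sending messages, so there is no shared mutable state to protect. The sketch below is a generic illustration, not the Myriad framework; class names, the "damage" rule, and the message format are made up.

      import threading, queue

      class Actor(threading.Thread):
          """A minimal actor: a thread draining its own mailbox, one message at a time."""
          def __init__(self):
              super().__init__(daemon=True)
              self.mailbox = queue.Queue()

          def send(self, msg):
              self.mailbox.put(msg)

          def run(self):
              while True:
                  msg = self.mailbox.get()
                  if msg is None:          # poison pill shuts the actor down
                      break
                  self.receive(msg)

          def receive(self, msg):
              raise NotImplementedError

      class SliceAnalyzer(Actor):
          """Pretend to classify one 2-D slice of C-scan data, then report the result."""
          def __init__(self, collector):
              super().__init__()
              self.collector = collector

          def receive(self, msg):
              slice_id, data = msg
              damaged = max(data) > 0.8                 # stand-in for the ML model
              self.collector.send((slice_id, damaged))

      class Collector(Actor):
          def receive(self, msg):
              print("slice", msg[0], "damaged:", msg[1])

      collector = Collector(); collector.start()
      workers = [SliceAnalyzer(collector) for _ in range(4)]
      for w in workers: w.start()

      for i in range(8):                                # round-robin dispatch of slices
          workers[i % 4].send((i, [0.1 * i, 0.9 if i % 3 == 0 else 0.2]))

      for w in workers: w.send(None)
      for w in workers: w.join()
      collector.send(None); collector.join()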

  8. Sub-domain decomposition methods and computational controls for multibody dynamical systems. [of spacecraft structures

    NASA Technical Reports Server (NTRS)

    Menon, R. G.; Kurdila, A. J.

    1992-01-01

    This paper presents a concurrent methodology to simulate the dynamics of flexible multibody systems with a large number of degrees of freedom. A general class of open-loop structures is treated and a redundant coordinate formulation is adopted. A range space method is used in which the constraint forces are calculated using a preconditioned conjugate gradient method. By using a preconditioner motivated by the regular ordering of the directed graph of the structures, it is shown that the method is order N in the total number of coordinates of the system. The overall formulation has the advantage that it permits fine parallelization and does not rely on system topology to induce concurrency. It can be efficiently implemented on the present generation of parallel computers with a large number of processors. Validation of the method is presented via numerical simulations of space structures incorporating a large number of flexible degrees of freedom.
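
    In the range-space step, the constraint forces are obtained from a symmetric positive-definite system solved by preconditioned conjugate gradients. A generic PCG sketch follows, with a Jacobi preconditioner and a random SPD system standing in for the multibody constraint equations; the paper's graph-ordering-based preconditioner is not reproduced here, and all names and sizes are illustrative.

      import numpy as np

      def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
          """Preconditioned conjugate gradients for A x = b with A symmetric
          positive definite.  M_inv(r) applies the preconditioner to a residual."""
          x = np.zeros_like(b)
          r = b - A @ x
          z = M_inv(r)
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  break
              z = M_inv(r)
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      # Illustrative SPD system standing in for the constraint-force equations.
      rng = np.random.default_rng(0)
      G = rng.standard_normal((50, 50))
      A = G @ G.T + 50 * np.eye(50)
      b = rng.standard_normal(50)
      x = pcg(A, b, M_inv=lambda r: r / np.diag(A))   # Jacobi preconditioning
      print(np.linalg.norm(A @ x - b))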

  9. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1987-01-01

    The results of ongoing research directed at developing a graph-theoretical model for describing the data and control flow associated with the execution of large-grained algorithms in a spatially distributed computing environment are presented. This model is identified by the acronym ATAMM (Algorithm/Architecture Mapping Model). The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM based architecture is to optimize computational concurrency in the multiprocessor environment and to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.

  10. Accelerating Wright–Fisher Forward Simulations on the Graphics Processing Unit

    PubMed Central

    Lawrie, David S.

    2017-01-01

    Forward Wright–Fisher simulations are powerful in their ability to model complex demography and selection scenarios, but suffer from slow execution on the Central Processor Unit (CPU), thus limiting their usefulness. However, the single-locus Wright–Fisher forward algorithm is exceedingly parallelizable, with many steps that are so-called “embarrassingly parallel,” consisting of a vast number of individual computations that are all independent of each other and thus capable of being performed concurrently. The rise of modern Graphics Processing Units (GPUs) and programming languages designed to leverage the inherent parallel nature of these processors have allowed researchers to dramatically speed up many programs that have such high arithmetic intensity and intrinsic concurrency. The presented GPU Optimized Wright–Fisher simulation, or “GO Fish” for short, can be used to simulate arbitrary selection and demographic scenarios while running over 250-fold faster than its serial counterpart on the CPU. Even modest GPU hardware can achieve an impressive speedup of over two orders of magnitude. With simulations so accelerated, one can not only do quick parametric bootstrapping of previously estimated parameters, but also use simulated results to calculate the likelihoods and summary statistics of demographic and selection models against real polymorphism data, all without restricting the demographic and selection scenarios that can be modeled or requiring approximations to the single-locus forward algorithm for efficiency. Further, as many of the parallel programming techniques used in this simulation can be applied to other computationally intensive algorithms important in population genetics, GO Fish serves as an exciting template for future research into accelerating computation in evolution. GO Fish is part of the Parallel PopGen Package available at: http://dl42.github.io/ParallelPopGen/. PMID:28768689
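
    The single-locus Wright-Fisher forward recurrence underlying the simulator reduces to repeated binomial sampling of allele frequencies, which is what makes it embarrassingly parallel across independent sites. The hedged serial NumPy sketch below shows only that recurrence (GO Fish implements it with CUDA on the GPU); the selection model, parameter values and function name are illustrative.

      import numpy as np

      def wright_fisher(n_sites=100_000, N=1_000, s=0.0, generations=500, seed=0):
          """Forward-simulate allele frequencies at many independent sites.

          Each generation: deterministic selection shifts the expected frequency,
          then binomial sampling of 2N gametes adds genetic drift.  Every site is
          independent, so the loop body is trivially parallel across sites."""
          rng = np.random.default_rng(seed)
          freq = np.full(n_sites, 1.0 / (2 * N))          # new mutations start at 1/2N
          for _ in range(generations):
              w_mean = 1.0 + s * freq                     # mean fitness (genic selection)
              expected = freq * (1.0 + s) / w_mean        # frequency after selection
              freq = rng.binomial(2 * N, expected, size=n_sites) / (2.0 * N)
          return freq

      freq = wright_fisher(s=0.01)
      print("fixed:", np.mean(freq == 1.0), "lost:", np.mean(freq == 0.0))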

  11. De novo self-assembling collagen heterotrimers using explicit positive and negative design.

    PubMed

    Xu, Fei; Zhang, Lei; Koder, Ronald L; Nanda, Vikas

    2010-03-23

    We sought to computationally design model collagen peptides that specifically associate as heterotrimers. Computational design has been successfully applied to the creation of new protein folds and functions. Despite the high abundance of collagen and its key role in numerous biological processes, fibrous proteins have received little attention as computational design targets. Collagens are composed of three polypeptide chains that wind into triple helices. We developed a discrete computational model to design heterotrimer-forming collagen-like peptides. Stability and specificity of oligomerization were concurrently targeted using a combined positive and negative design approach. The sequences of three 30-residue peptides, A, B, and C, were optimized to favor charge-pair interactions in an ABC heterotrimer, while disfavoring the 26 competing oligomers (e.g., AAA, ABB, BCA). Peptides were synthesized and characterized for thermal stability and triple-helical structure by circular dichroism and NMR. A unique A:B:C-type species was not achieved. Negative design was partially successful, with only A + B and B + C competing mixtures formed. Analysis of computed versus experimental stabilities helps to clarify the role of electrostatics and secondary-structure propensities determining collagen stability and to provide important insight into how subsequent designs can be improved.

  12. Rosetta:MSF: a modular framework for multi-state computational protein design.

    PubMed

    Löffler, Patrick; Schmitz, Samuel; Hupfeld, Enrico; Sterner, Reinhard; Merkl, Rainer

    2017-06-01

    Computational protein design (CPD) is a powerful technique to engineer existing proteins or to design novel ones that display desired properties. Rosetta is a software suite including algorithms for computational modeling and analysis of protein structures and offers many elaborate protocols created to solve highly specific tasks of protein engineering. Most of Rosetta's protocols optimize sequences based on a single conformation (i.e., design state). However, challenging CPD objectives like multi-specificity design or the concurrent consideration of positive and negative design goals demand the simultaneous assessment of multiple states. This is why we have developed the multi-state framework MSF that facilitates the implementation of Rosetta's single-state protocols in a multi-state environment and made available two frequently used protocols. Utilizing MSF, we demonstrated for one of these protocols that multi-state design yields a 15% higher performance than single-state design on a ligand-binding benchmark consisting of structural conformations. With this protocol, we designed de novo nine retro-aldolases on a conformational ensemble deduced from a (βα)₈-barrel protein. All variants displayed measurable catalytic activity, testifying to a high success rate for this concept of multi-state enzyme design.

  13. Rosetta:MSF: a modular framework for multi-state computational protein design

    PubMed Central

    Hupfeld, Enrico; Sterner, Reinhard

    2017-01-01

    Computational protein design (CPD) is a powerful technique to engineer existing proteins or to design novel ones that display desired properties. Rosetta is a software suite including algorithms for computational modeling and analysis of protein structures and offers many elaborate protocols created to solve highly specific tasks of protein engineering. Most of Rosetta’s protocols optimize sequences based on a single conformation (i.e., design state). However, challenging CPD objectives like multi-specificity design or the concurrent consideration of positive and negative design goals demand the simultaneous assessment of multiple states. This is why we have developed the multi-state framework MSF that facilitates the implementation of Rosetta’s single-state protocols in a multi-state environment and made available two frequently used protocols. Utilizing MSF, we demonstrated for one of these protocols that multi-state design yields a 15% higher performance than single-state design on a ligand-binding benchmark consisting of structural conformations. With this protocol, we designed de novo nine retro-aldolases on a conformational ensemble deduced from a (βα)₈-barrel protein. All variants displayed measurable catalytic activity, testifying to a high success rate for this concept of multi-state enzyme design. PMID:28604768

  14. Simulations & Measurements of Airframe Noise: A BANC Workshops Perspective

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan; Lockard, David

    2016-01-01

    Airframe noise corresponds to the acoustic radiation due to turbulent flow in the vicinity of airframe components such as high-lift devices and landing gears. Since 2010, the American Institute of Aeronautics and Astronautics has organized an ongoing series of workshops devoted to Benchmark Problems for Airframe Noise Computations (BANC). The BANC workshops are aimed at enabling systematic progress in the understanding and high-fidelity prediction of airframe noise via collaborative investigations that integrate computational fluid dynamics, computational aeroacoustics, and in-depth measurements targeting a selected set of canonical yet realistic configurations that advance the current state-of-the-art in multiple respects. Unique features of the BANC Workshops include: intrinsically multi-disciplinary focus involving both fluid dynamics and aeroacoustics, holistic rather than predictive emphasis, concurrent, long term evolution of experiments and simulations with a powerful interplay between the two, and strongly integrative nature by virtue of multi-team, multi-facility, multiple-entry measurements. This paper illustrates these features in the context of the BANC problem categories and outlines some of the challenges involved and how they were addressed. A brief summary of the BANC effort, including its technical objectives, strategy, and selective outcomes thus far is also included.

  15. Socioeconomic Inequality in Concurrent Tobacco and Alcohol Consumption

    PubMed Central

    Intarut, Nirun; Pukdeesamai, Piyalak

    2017-01-01

    Background: Whilst several studies have examined inequity of tobacco use and inequity of alcohol drinking individually, comparatively little is known about concurrent tobacco and alcohol consumption. The present study therefore investigated inequity of concurrent tobacco and alcohol consumption in Thailand. Methods: The 2015 Health and Welfare Survey was obtained from Thailand’s National Statistical Office and used as a source of national representative data. Concurrent tobacco and alcohol consumption was defined as current and concurrent use of both tobacco and alcohol. The wealth assets index was used as an indicator of socioeconomic inequity. Socioeconomic status included 5 groups ranging from poorest (Q1) to richest (Q5). A total of 55,920 households and 113,705 participants aged 15 years or over were included and analyzed. A weighted multiple logistic regression was performed. Results: The prevalence of concurrent tobacco and alcohol consumption, tobacco consumption only, and alcohol consumption only were 15.2% (95% CI: 14.9, 15.4), 4.7% (95% CI: 4.5, 4.8), and 18.9% (95% CI: 18.7, 19.1), respectively. Weighted multiple logistic regression showed that concurrent tobacco and alcohol consumption was high in the poorest socioeconomic group (P for trend <0.001), and tobacco consumption only was also high in the poorest group (P for trend <0.001). A high prevalence of alcohol consumption was observed in the richest group (P for trend <0.001). Conclusions: These findings suggest that tobacco and alcohol consumption prevention programs would be more effective if they considered socioeconomic inequities in concurrent tobacco and alcohol consumption rather than focusing on single drug use. PMID:28749620

  16. Validity of Computer Adaptive Tests of Daily Routines for Youth with Spinal Cord Injury

    PubMed Central

    Haley, Stephen M.

    2013-01-01

    Objective: To evaluate the accuracy of computer adaptive tests (CATs) of daily routines for child- and parent-reported outcomes following pediatric spinal cord injury (SCI) and to evaluate the validity of the scales. Methods: One hundred ninety-six daily routine items were administered to 381 youths and 322 parents. Pearson correlations, intraclass correlation coefficients (ICC), and 95% confidence intervals (CI) were calculated to evaluate the accuracy of simulated 5-item, 10-item, and 15-item CATs against the full-item banks and to evaluate concurrent validity. Independent samples t tests and analysis of variance were used to evaluate the ability of the daily routine scales to discriminate between children with tetraplegia and paraplegia and among 5 motor groups. Results: ICC and 95% CI demonstrated that simulated 5-, 10-, and 15-item CATs accurately represented the full-item banks for both child- and parent-report scales. The daily routine scales demonstrated discriminative validity, except between 2 motor groups of children with paraplegia. Concurrent validity of the daily routine scales was demonstrated through significant relationships with the FIM scores. Conclusion: Child- and parent-reported outcomes of daily routines can be obtained using CATs with the same relative precision of a full-item bank. Five-item, 10-item, and 15-item CATs have discriminative and concurrent validity. PMID:23671380

  17. Informational and linguistic analysis of large genomic sequence collections via efficient Hadoop cluster algorithms.

    PubMed

    Ferraro Petrillo, Umberto; Roscigno, Gianluca; Cattaneo, Giuseppe; Giancarlo, Raffaele

    2018-06-01

    Information theoretic and compositional/linguistic analysis of genomes have a central role in bioinformatics, even more so since the associated methodologies are becoming very valuable also for epigenomic and meta-genomic studies. The kernel of those methods is based on the collection of k-mer statistics, i.e. how many times each k-mer in {A,C,G,T}^k occurs in a DNA sequence. Although this problem is computationally very simple and efficiently solvable on a conventional computer, the sheer amount of data available now in applications demands to resort to parallel and distributed computing. Indeed, those types of algorithms have been developed to collect k-mer statistics in the realm of genome assembly. However, they are so specialized to this domain that they do not extend easily to the computation of informational and linguistic indices, concurrently on sets of genomes. Following the well-established approach in many disciplines, and with a growing success also in bioinformatics, to resort to MapReduce and Hadoop to deal with 'Big Data' problems, we present KCH, the first set of MapReduce algorithms able to perform concurrently informational and linguistic analysis of large collections of genomic sequences on a Hadoop cluster. The benchmarking of KCH that we provide indicates that it is quite effective and versatile. It is also competitive with respect to the parallel and distributed algorithms highly specialized to k-mer statistics collection for genome assembly problems. In conclusion, KCH is a much needed addition to the growing number of algorithms and tools that use MapReduce for bioinformatics core applications. The software, including instructions for running it over Amazon AWS, as well as the datasets are available at http://www.di-srv.unisa.it/KCH. umberto.ferraro@uniroma1.it. Supplementary data are available at Bioinformatics online.
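
    The kernel computation that KCH distributes over a Hadoop cluster is k-mer statistics collection: the map phase emits every k-mer of each input sequence and the reduce phase sums the counts, from which informational indices such as the empirical k-mer entropy follow directly. The single-machine sketch below shows only that map/reduce logic, not the KCH code itself; the example sequences and k are made up.

      import math
      from collections import Counter

      def mapper(sequence, k):
          """Map phase: emit (k-mer, 1) for every k-mer over {A,C,G,T}^k in the sequence."""
          for i in range(len(sequence) - k + 1):
              kmer = sequence[i:i + k]
              if set(kmer) <= set("ACGT"):
                  yield kmer, 1

      def reducer(pairs):
          """Reduce phase: sum the counts per k-mer."""
          counts = Counter()
          for kmer, n in pairs:
              counts[kmer] += n
          return counts

      sequences = ["ACGTACGTGGCA", "TTACGTACGTAA"]        # stand-ins for genomic reads
      k = 3
      counts = reducer(pair for seq in sequences for pair in mapper(seq, k))

      total = sum(counts.values())
      entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
      print(counts.most_common(3), f"empirical {k}-mer entropy = {entropy:.3f} bits")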

  18. Aerospace Applications Conference, Steamboat Springs, CO, Feb. 1-8, 1986, Digest

    NASA Astrophysics Data System (ADS)

    The present conference considers topics concerning the projected NASA Space Station's systems, digital signal and data processing applications, and space science and microwave applications. Attention is given to Space Station video and audio subsystems design, clock error, jitter, phase error and differential time-of-arrival in satellite communications, automation and robotics in space applications, target insertion into synthetic background scenes, and a novel scheme for the computation of the discrete Fourier transform on a systolic processor. Also discussed are a novel signal parameter measurement system employing digital signal processing, EEPROMS for spacecraft applications, a unique concurrent processor architecture for high speed simulation of dynamic systems, a dual polarization flat plate antenna, Fresnel diffraction, and ultralinear TWTs for high efficiency satellite communications.

  19. Concurrent Validity of the Defense and Veterans Pain Rating Scale in VA Outpatients.

    PubMed

    Nassif, Thomas H; Hull, Amanda; Holliday, Stephanie Brooks; Sullivan, Patrick; Sandbrink, Friedhelm

    2015-11-01

    The purpose of this report is to investigate the concurrent validity of the Defense and Veterans Pain Rating Scale (DVPRS) with other validated self-report measures in U.S. veterans. This correlational study was conducted using two samples of outpatients at the Washington, DC Veterans Affairs Medical Center who completed self-report measures relevant to pain conditions, including pain disability, quality of life, and mental health. Study 1 and Study 2 consisted of n = 204 and n = 13 participants, respectively. Bivariate Spearman correlations were calculated to examine the correlations among total scores and subscale scores for each scale of interest. Multiple linear regressions were also computed in Study 1. In Study 1, the DVPRS interference scale (DVPRS-II) was significantly correlated with the Pain Disability Questionnaire (PDQ) (ρ = 0.69, P < 0.001) and the Veterans RAND 36-item Health Survey physical and mental component scales (ρ = -0.37, P < 0.001; ρ = -0.46, P < 0.001, respectively). When controlling for sex, age, and other self-report measures, the relationship between the DVPRS-II and PDQ remained significant. In Study 2, pain interference on the DVPRS and Brief Pain Inventory were highly correlated (ρ = 0.90, P < 0.001); however, the intensity scale of each measure was also highly associated with the interference summary scores. These findings provide preliminary evidence for the concurrent validity of the DVPRS as a brief, multidimensional measure of pain interference, making it a practical tool for use in primary care settings to assess the impact of pain on daily functioning and to monitor chronic pain over time. Wiley Periodicals, Inc.
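
    The correlational analysis reported here uses standard nonparametric machinery; as a hedged illustration (with fabricated score vectors, not study data), SciPy's spearmanr returns the rank correlation and p-value directly.

      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(0)
      # Fabricated paired scores for illustration only (not study data)
      dvprs_interference = rng.integers(0, 11, size=50)            # 0-10 interference ratings
      pdq_total = dvprs_interference * 9 + rng.normal(0, 8, 50)    # loosely related disability scores

      rho, p = spearmanr(dvprs_interference, pdq_total)
      print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")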

  20. Analyzing and designing object-oriented missile simulations with concurrency

    NASA Astrophysics Data System (ADS)

    Randorf, Jeffrey Allen

    2000-11-01

    A software object model for the six degree-of-freedom missile modeling domain is presented. As a precursor, a domain analysis of the missile modeling domain was started, based on the Feature-Oriented Domain Analysis (FODA) technique described by the Software Engineering Institute (SEI). It was subsequently determined that the FODA methodology is functionally equivalent to the Object Modeling Technique (OMT). The analysis used legacy software documentation and code from the ENDOSIM, KDEC, and TFrames 6-DOF modeling tools, as well as other technical literature. The SEI Object Connection Architecture (OCA) was the template for designing the object model. Three variants of the OCA were considered---a reference structure, a recursive structure, and a reference structure with augmentation for flight vehicle modeling. The reference OCA design option was chosen for maintaining simplicity while not compromising the expressive power of the OMT model. The missile architecture was then analyzed for potential areas of concurrent computing. It was shown how protected objects could be used for data passing between OCA object managers, allowing concurrent access without changing the OCA reference design intent or structure. The implementation language was the 1995 release of Ada. It was shown how OCA software components can be expressed as Ada child packages. While acceleration of several low-level and higher-level operations is possible on suitable hardware, there was a 33% degradation in the performance of a 4th-order Runge-Kutta integrator solving two simultaneous ordinary differential equations using Ada tasking on a single-processor machine. The Defense Department's High Level Architecture (HLA) was introduced and explained in context with the OCA. It was shown that the HLA and OCA are not mutually exclusive architectures, but complementary. HLA was shown as an interoperability solution, with the OCA as an architectural vehicle for software reuse. Further directions for implementing a 6-DOF missile modeling environment are discussed.
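
    The integrator mentioned in the timing result is the classical 4th-order Runge-Kutta scheme. A minimal sketch of that scheme applied to two simultaneous ordinary differential equations is given below in Python rather than the Ada 95 used in the study; the harmonic-oscillator system is purely illustrative.

      import numpy as np

      def rk4_step(f, t, y, h):
          """One classical 4th-order Runge-Kutta step for the system y' = f(t, y)."""
          k1 = f(t, y)
          k2 = f(t + h / 2, y + h / 2 * k1)
          k3 = f(t + h / 2, y + h / 2 * k2)
          k4 = f(t + h, y + h * k3)
          return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

      # Two simultaneous ODEs: a simple harmonic oscillator, y0' = y1, y1' = -y0
      f = lambda t, y: np.array([y[1], -y[0]])
      y, t, h = np.array([1.0, 0.0]), 0.0, 0.01
      for _ in range(1000):
          y = rk4_step(f, t, y, h)
          t += h
      print(t, y)    # y[0] should be close to cos(10.0)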

  1. Universal quantum computation with little entanglement.

    PubMed

    Van den Nest, Maarten

    2013-02-08

    We show that universal quantum computation can be achieved in the standard pure-state circuit model while the entanglement entropy of every bipartition is small in each step of the computation. The entanglement entropy required for large-scale quantum computation even tends to zero. Moreover we show that the same conclusion applies to many entanglement measures commonly used in the literature. This includes e.g., the geometric measure, localizable entanglement, multipartite concurrence, squashed entanglement, witness-based measures, and more generally any entanglement measure which is continuous in a certain natural sense. These results demonstrate that many entanglement measures are unsuitable tools to assess the power of quantum computers.

  2. Update on Integrated Optical Design Analyzer

    NASA Technical Reports Server (NTRS)

    Moore, James D., Jr.; Troy, Ed

    2003-01-01

    Updated information on the Integrated Optical Design Analyzer (IODA) computer program has become available. IODA was described in Software for Multidisciplinary Concurrent Optical Design (MFS-31452), NASA Tech Briefs, Vol. 25, No. 10 (October 2001), page 8a. To recapitulate: IODA facilitates multidisciplinary concurrent engineering of highly precise optical instruments. The architecture of IODA was developed by reviewing design processes and software in an effort to automate design procedures. IODA significantly reduces design iteration cycle time and eliminates many potential sources of error. IODA integrates the modeling efforts of a team of experts in different disciplines (e.g., optics, structural analysis, and heat transfer) working at different locations and provides seamless fusion of data among thermal, structural, and optical models used to design an instrument. IODA is compatible with data files generated by the NASTRAN structural-analysis program and the Code V (Registered Trademark) optical-analysis program, and can be used to couple analyses performed by these two programs. IODA supports multiple-load-case analysis for quickly accomplishing trade studies. IODA can also model the transient response of an instrument under the influence of dynamic loads and disturbances.

  3. An Adaptive Tradeoff Algorithm for Multi-issue SLA Negotiation

    NASA Astrophysics Data System (ADS)

    Son, Seokho; Sim, Kwang Mong

    Since participants in a Cloud may be independent bodies, mechanisms are necessary for resolving different preferences in leasing Cloud services. Whereas there are currently mechanisms that support service-level agreement negotiation, there is little or no negotiation support for concurrent price and timeslot negotiation for Cloud service reservations. For concurrent price and timeslot negotiation, a tradeoff algorithm is necessary to generate and evaluate proposals that consist of both a price and a timeslot. The contribution of this work is thus to design an adaptive tradeoff algorithm for a multi-issue negotiation mechanism. The tradeoff algorithm, referred to as "adaptive burst mode," is especially designed to increase negotiation speed and total utility and to reduce computational load by adaptively generating a concurrent set of proposals. The empirical results obtained from simulations carried out using a testbed suggest that, with the concurrent price and timeslot negotiation mechanism and the adaptive tradeoff algorithm: 1) both agents achieve the best performance in terms of negotiation speed and utility; and 2) the number of evaluations of each proposal is comparatively lower than in the previous scheme (burst-N).
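
    One way such a burst of concurrent proposals could be generated is sketched below. This is an illustrative reconstruction, not the authors' algorithm: the utility weights, preference values, and tolerance are hypothetical, and the sketch simply samples several (price, timeslot) pairs whose aggregate utility lies near a target level so that they can be proposed concurrently.

      import random

      def utility(price, slot, pref_price, pref_slot, w_price=0.6, w_slot=0.4):
          """Aggregate utility of a (price, timeslot) pair on a 0-1 scale (weights are assumptions)."""
          u_price = max(1 - abs(price - pref_price) / pref_price, 0)
          u_slot = max(1 - abs(slot - pref_slot) / 24, 0)
          return w_price * u_price + w_slot * u_slot

      def burst_proposals(target_u, pref_price, pref_slot, n=8, tol=0.02):
          """Sample up to n concurrent proposals whose utility lies within tol of target_u."""
          candidates = [(round(pref_price * x / 100, 2), s)
                        for x in range(50, 151) for s in range(24)]
          near = [c for c in candidates
                  if abs(utility(c[0], c[1], pref_price, pref_slot) - target_u) < tol]
          return random.sample(near, min(n, len(near)))

      print(burst_proposals(0.7, pref_price=10.0, pref_slot=9))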

  4. The construction of an idealised urban masculinity among men with concurrent sexual partners in a South African township

    PubMed Central

    Ragnarsson, Anders; Townsend, Loraine; Ekström, Anna Mia; Chopra, Mickey; Thorson, Anna

    2010-01-01

    Background: The perspectives of heterosexual males who have large sexual networks comprising concurrent sexual partners and who engage in high-risk sexual behaviours are scarcely documented. Yet these perspectives are crucial to understanding the high HIV prevalence in South Africa, where domestic violence, sexual assault and rape are alarmingly high, suggesting problematic gender dynamics. Objective: To explore the construction of masculinities and men's perceptions of women and their sexual relationships, among men with large sexual networks and concurrent partners. Design: This qualitative study was conducted in conjunction with a larger quantitative survey among men at high risk of HIV, using respondent-driven sampling to recruit participants, where long referral chains allowed us to reach far into social networks. Twenty in-depth, open-ended interviews with South African men who had multiple and concurrent sexual partners were conducted. A latent content analysis was used to explore the characteristics and dynamics of social and sexual relationships. Results: We found dominant masculine ideals characterised by overt economic power and multiple sexual partners. Reasons given for large concurrent sexual networks included the perceptions that women were too empowered and could not be trusted, and a lack of control over women. Existing masculine norms encourage concurrent sexual networks, ignoring the high risk of HIV transmission. Biological explanations and determinism further reinforced strong and negative perceptions of women and female sexuality, which helped polarise men's interpretation of gender constructions. Conclusions: Our results highlight the need to address sexuality and gender dynamics among men in growing, informal urban areas where HIV prevalence is strikingly high. Traditional structures that could work as focal entry points should be explored for effective HIV prevention aimed at normative change among hard-to-reach men in high-risk urban and largely informal contexts. PMID:20644656

  5. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, R.E.; Gustafson, J.L.; Montry, G.R.

    1999-08-10

    A parallel computing system and method are disclosed having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system. 15 figs.

  6. Fault tolerant computing: A preamble for assuring viability of large computer systems

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1977-01-01

    The need for fault-tolerant computing is addressed from the viewpoints of (1) why it is needed, (2) how to apply it in the current state of technology, and (3) what it means in the context of the Phoenix computer system and other related systems. To this end, the value of concurrent error detection and correction is described. User protection, program retry, and repair are among the factors considered. The technology of algebraic codes to protect memory systems and arithmetic codes to protect arithmetic operations is discussed.

  7. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

    1999-01-01

    A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.

  8. Preventing Run-Time Bugs at Compile-Time Using Advanced C++

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neswold, Richard

    When writing software, we develop algorithms that tell the computer what to do at run-time. Our solutions are easier to understand and debug when they are properly modeled using class hierarchies, enumerations, and a well-factored API. Unfortunately, even with these design tools, we end up having to debug our programs at run-time. Worse still, debugging an embedded system changes its dynamics, making it tough to find and fix concurrency issues. This paper describes techniques using C++ to detect run-time bugs *at compile time*. A concurrency library, developed at Fermilab, is used for examples in illustrating these techniques.

  9. A pilot study of combined working memory and inhibition training for children with AD/HD.

    PubMed

    Johnstone, Stuart J; Roodenrys, Steven; Phillips, Elise; Watt, Annele J; Mantz, Sharlene

    2010-03-01

    Building on recent favourable outcomes using working memory (WM) training, this study examined the behavioural and physiological effect of concurrent computer-based WM and inhibition training for children with attention-deficit hyperactivity disorder (AD/HD). Using a double-blind active-control design, 29 children with AD/HD completed a 5-week at-home training programme and pre- and post-training sessions which included the assessment of overt behaviour, resting EEG, as well as task performance, skin conductance level and event-related potentials (ERPs) during a Go/Nogo task. Results indicated that after training, children from the high-intensity training condition showed reduced frequency of inattention and hyperactivity symptoms. Although there were trends for improved Go/Nogo performance, increased arousal and specific training effects for the inhibition-related N2 ERP component, they failed to reach standard levels of statistical significance. Both the low- and high-intensity conditions showed resting EEG changes (increased delta, reduced alpha and theta activity) and improved early attention alerting to Go and Nogo stimuli, as indicated by the N1 ERP component, post-training. Despite limitations, this preliminary work indicates the potential for cognitive training that concurrently targets the interrelated processes of WM and inhibition to be used as a treatment for AD/HD.

  10. The Interactive Electronic Technical Manual: Requirements, Current Status, and Implementation. Strategy Considerations.

    DTIC Science & Technology

    1991-07-01

    authoring systems. Concurrently, great strides in computer-aided design and computer-aided maintenance have contributed to this capability. (Reference fragments from the source record: Junod, J.; Nugent, William A.; and Junod, L. John. Plan for the Navy/Air Force Test of the Interactive Electronic Technical Manual (IETM) at Cecil Field ... AFHRL Logistics and Human Factors Division, WPAFB, Aug 1990. Junod, John L. PY90 Interactive Electronic Technical Manual (IETM) Portable Delivery ...)

  11. Integrated Engineering Information Technology, FY93 accomplishments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, R.N.; Miller, D.K.; Neugebauer, G.L.

    1994-03-01

    The Integrated Engineering Information Technology (IEIT) project is providing a comprehensive, easy-to-use computer network solution for communicating with coworkers both inside and outside Sandia National Laboratories. IEIT capabilities include computer networking, electronic mail, mechanical design, and data management. These network-based tools have one fundamental purpose: to help create a concurrent engineering environment that will enable Sandia organizations to excel in today's increasingly competitive business environment.

  12. Image Processing Using a Parallel Architecture.

    DTIC Science & Technology

    1987-12-01

    ENG/87D-25 Abstract: This study developed a set of low-level image processing tools on a parallel computer that allows concurrent processing of images ... environment, the set of tools offers a significant reduction in the time required to perform some commonly used image processing operations. ... step toward developing these systems, a structured set of image processing tools was implemented using a parallel computer. More important than ...

  13. Prognostic implication of simultaneous anemia and lymphopenia during concurrent chemoradiotherapy in cervical squamous cell carcinoma.

    PubMed

    Cho, Oyeon; Chun, Mison; Oh, Young-Taek; Noh, O Kyu; Chang, Suk-Joon; Ryu, Hee-Sug; Lee, Eun Ju

    2017-10-01

    Radioresistance often leads to poor survival in concurrent chemoradiotherapy-treated cervical squamous cell carcinoma, and reliable biomarkers can improve prognosis. We compared the prognostic potential of hemoglobin, absolute neutrophil count, and absolute lymphocyte count with that of squamous cell carcinoma antigen in concurrent chemoradiotherapy-treated squamous cell carcinoma. We analyzed 152 patients with concurrent chemoradiotherapy and high-dose-rate intracavitary brachytherapy-treated cervical squamous cell carcinoma. Hemoglobin, absolute neutrophil count, absolute lymphocyte count, and squamous cell carcinoma antigen were quantitated and correlated with survival, using Cox regression, receiver operating characteristic curve analysis, and Kaplan-Meier plots. Both hemoglobin and absolute lymphocyte count in the second week of concurrent chemoradiotherapy (Hb2 and ALC2) and squamous cell carcinoma antigen in the third week of concurrent chemoradiotherapy (mid-squamous cell carcinoma antigen) correlated significantly with disease-specific survival and progression-free survival. The ratio of high-dose-rate intracavitary brachytherapy dose to total dose (high-dose-rate intracavitary brachytherapy ratio) correlated significantly with progression-free survival. Patients with both low Hb2 (≤11 g/dL) and low ALC2 (≤639 cells/µL) showed a lower 5-year disease-specific survival rate than those with high Hb2 and/or ALC2, regardless of mid-squamous cell carcinoma antigen (mid-squamous cell carcinoma antigen: ≤4.7 ng/mL; 5-year disease-specific survival rate: 85.5% vs 94.6%, p = 0.0096, and mid-squamous cell carcinoma antigen: >4.7 ng/mL; 5-year disease-specific survival rate: 43.8% vs 66.7%, p = 0.192). When both Hb2 and ALC2 were low, the subgroup with a low high-dose-rate intracavitary brachytherapy ratio (≤0.43) displayed a significantly lower 5-year disease-specific survival rate than the subgroup with a high high-dose-rate intracavitary brachytherapy ratio (>0.43) (62.5% vs 88.2%, p = 0.0067). Patients with both anemia and lymphopenia during concurrent chemoradiotherapy showed poor survival, independent of mid-squamous cell carcinoma antigen, and escalating the high-dose-rate intracavitary brachytherapy ratio might improve survival.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei Xiong; Liu, H. Helen; Tucker, Susan L.

    Purpose: To identify clinical and dosimetric factors influencing the risk of pericardial effusion (PCE) in patients with inoperable esophageal cancer treated with definitive concurrent chemotherapy and radiation therapy (RT). Methods and Materials: Data for 101 patients with inoperable esophageal cancer treated with concurrent chemotherapy and RT from 2000 to 2003 at our institution were analyzed. The PCE was confirmed from follow-up chest computed tomography scans and radiologic reports, with freedom from PCE computed from the end of RT. Log-rank tests were used to identify clinical and dosimetric factors influencing freedom from PCE. Dosimetric factors were calculated from the dose-volume histogram for the whole heart and pericardium. Results: The crude rate of PCE was 27.7% (28 of 101). Median time to onset of PCE was 5.3 months (range, 1.0-16.7 months) after RT. None of the clinical factors investigated was found to significantly influence the risk of PCE. In univariate analysis, a wide range of dose-volume histogram parameters of the pericardium and heart were associated with risk of PCE, including mean dose to the pericardium, volume of pericardium receiving a dose greater than 3 Gy (V3) to greater than 50 Gy (V50), and heart volume treated to greater than 32-38 Gy. Multivariate analysis selected V30 as the only parameter significantly associated with risk of PCE. Conclusions: High-dose radiation to the pericardium may strongly increase the risk of PCE. Such a risk may be reduced by minimizing the dose-volume of the irradiated pericardium and heart.

  15. A Self-Organizing Spatial Clustering Approach to Support Large-Scale Network RTK Systems.

    PubMed

    Shen, Lili; Guo, Jiming; Wang, Lei

    2018-06-06

    The network real-time kinematic (RTK) technique can provide centimeter-level real time positioning solutions and play a key role in geo-spatial infrastructure. With ever-increasing popularity, network RTK systems will face issues in the support of large numbers of concurrent users. In the past, high-precision positioning services were oriented towards professionals and only supported a few concurrent users. Currently, precise positioning provides a spatial foundation for artificial intelligence (AI), and countless smart devices (autonomous cars, unmanned aerial-vehicles (UAVs), robotic equipment, etc.) require precise positioning services. Therefore, the development of approaches to support large-scale network RTK systems is urgent. In this study, we proposed a self-organizing spatial clustering (SOSC) approach which automatically clusters online users to reduce the computational load on the network RTK system server side. The experimental results indicate that both the SOSC algorithm and the grid algorithm can reduce the computational load efficiently, while the SOSC algorithm gives a more elastic and adaptive clustering solution with different datasets. The SOSC algorithm determines the cluster number and the mean distance to cluster center (MDTCC) according to the data set, while the grid approaches are all predefined. The side-effects of clustering algorithms on the user side are analyzed with real global navigation satellite system (GNSS) data sets. The experimental results indicate that 10 km can be safely used as the cluster radius threshold for the SOSC algorithm without significantly reducing the positioning precision and reliability on the user side.
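
    The clustering rule itself is not given in this record; as a hedged illustration of the general idea (assigning each online user to the nearest cluster centre within a 10 km radius and opening a new cluster otherwise), the following Python sketch uses a haversine distance and a running-mean centre update. It is not the SOSC algorithm.

      import math

      def haversine_km(p, q):
          """Great-circle distance between two (lat, lon) points in kilometres."""
          lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
          a = (math.sin((lat2 - lat1) / 2) ** 2
               + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
          return 2 * 6371.0 * math.asin(math.sqrt(a))

      def cluster_users(users, radius_km=10.0):
          """Assign each user to the nearest existing centre within radius_km,
          otherwise open a new cluster; centres track the running mean position."""
          centres, members = [], []
          for u in users:
              dists = [haversine_km(u, c) for c in centres]
              if dists and min(dists) <= radius_km:
                  i = dists.index(min(dists))
                  members[i].append(u)
                  lats, lons = zip(*members[i])
                  centres[i] = (sum(lats) / len(lats), sum(lons) / len(lons))
              else:
                  centres.append(u)
                  members.append([u])
          return centres, members

      users = [(30.52, 114.35), (30.53, 114.36), (30.70, 114.60)]   # hypothetical receiver positions
      print(cluster_users(users))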

  16. To a Higher Degree: Addressing Disparities in College Access with Concurrent Enrollment

    ERIC Educational Resources Information Center

    Ulate, David Delgado

    2011-01-01

    Concurrent enrollment--defined as high school students enrolling in college coursework--is increasingly being used as a strategy to improve the college readiness levels of underrepresented students and to reduce disparities in college-going rates. States have developed policy and analyzed data to evaluate the practice of concurrent enrollment. This…

  17. Interpretive model for ''A Concurrency Method''

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, C.L.

    1987-01-01

    This paper describes an interpreter for ''A Concurrency Method,'' in which concurrency is the inherent mode of operation and not an appendage to sequentiality. This method is based on the notions of data-driven execution and single assignment while preserving a natural manner of programming. The interpreter is designed for and implemented on a network of Corvus Concept Personal Workstations, which are based on the Motorola MC68000 super-microcomputer. The interpreter utilizes the MC68000 processors in each workstation by communicating across OMNINET, the local area network designed for the workstations. The interpreter is a complete system, containing an editor, a compiler, an operating system with a load balancer, and a communication facility. The system includes the basic arithmetic and trigonometric primitive operations for mathematical computations as well as the ability to construct more complex operations from these. 9 refs., 5 figs.

  18. Threaded cognition: an integrated theory of concurrent multitasking.

    PubMed

    Salvucci, Dario D; Taatgen, Niels A

    2008-01-01

    The authors propose the idea of threaded cognition, an integrated theory of concurrent multitasking--that is, performing 2 or more tasks at once. Threaded cognition posits that streams of thought can be represented as threads of processing coordinated by a serial procedural resource and executed across other available resources (e.g., perceptual and motor resources). The theory specifies a parsimonious mechanism that allows for concurrent execution, resource acquisition, and resolution of resource conflicts, without the need for specialized executive processes. By instantiating this mechanism as a computational model, threaded cognition provides explicit predictions of how multitasking behavior can result in interference, or lack thereof, for a given set of tasks. The authors illustrate the theory in model simulations of several representative domains ranging from simple laboratory tasks such as dual-choice tasks to complex real-world domains such as driving and driver distraction. (c) 2008 APA, all rights reserved

  19. 18F-fluorodeoxyglucose imaging of primary malignant pericardial mesothelioma with concurrent pericardial and pleural effusions and bone metastasis: A case report.

    PubMed

    Li, Xiaohui; Lu, Rugang; Zhao, Youcai; Wang, Feng; Shao, Guoqiang

    2018-06-01

    Primary malignant pericardial mesothelioma (PMPM) is an aggressive tumor that originates from the mesothelial cells of the pericardium. PMPM with extensive atrial infiltration and bone metastasis is extremely rare. The diagnosis and staging of PMPM based on anatomical imaging may be difficult when concurrent pericardial and pleural effusions are present. A 28-year-old man presented with progressive chest pain. Concurrent pericardial and pleural effusions were identified on computed tomography. On echocardiography, mild thickening and adhesions of the pericardium with the right ventricle and atrium were observed. 18F-fluorodeoxyglucose (FDG) metabolism imaging revealed increased accumulation in the pericardium and adjacent right atrium. Ring-shaped radioactivity aggregation and bone destruction in the sacrum were demonstrated on 18F-FDG and 99mTc-methyl diphosphonate imaging. The diagnosis of PMPM was subsequently confirmed by pathology. The patient survived for >1.5 years with comprehensive treatment.

  20. Phenytoin intoxication during concurrent diazepam therapy

    PubMed Central

    Rogers, Howard J.; Haslam, Robert A.; Longstreth, James; Lietman, Paul S.

    1977-01-01

    Phenytoin elimination is a saturable process obeying Michaelis-Menten kinetics. Plasma phenytoin levels are not related linearly to dose, and small changes in enzyme activity produced by concurrent drug therapy could alter plasma levels. Two cases of phenytoin intoxication associated with simultaneous administration of diazepam are reported. Intravenous phenytoin infusions were given and the apparent Km and Vmax computed from the resulting plasma phenytoin levels. In one case 'Km' and 'Vmax' were 0.8 μmol/l and 1.3 μmol/l/hour respectively during concurrent diazepam administration, and 50.3 μmol/l and 4.4 μmol/l/hour after discontinuation of diazepam. In the second case phenytoin infusion with diazepam gave 'Km' and 'Vmax' values of 0.012 μmol/l and 0.95 μmol/l/hour. Without diazepam these were 28.8 μmol/l and 0.92 μmol/l/hour respectively. PMID:599366
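
    Under Michaelis-Menten elimination, the steady-state concentration follows from setting the dosing rate equal to the elimination rate, rate = Vmax*C/(Km + C), which rearranges to C = rate*Km/(Vmax - rate). The sketch below evaluates this relation with the with-diazepam parameters reported for the first case; the dosing rates are hypothetical and the point is only the sharply nonlinear rise as the rate approaches Vmax.

      def steady_state_conc(rate, km, vmax):
          """Steady-state concentration for Michaelis-Menten elimination:
          rate = vmax * C / (km + C)  =>  C = rate * km / (vmax - rate)."""
          if rate >= vmax:
              raise ValueError("dosing rate exceeds maximal elimination capacity")
          return rate * km / (vmax - rate)

      # With-diazepam parameters reported for case 1 (Km = 0.8 umol/l, Vmax = 1.3 umol/l/hour);
      # the dosing rates below are hypothetical and chosen only to show the nonlinearity.
      for rate in (0.5, 1.0, 1.2, 1.29):
          print(rate, round(steady_state_conc(rate, km=0.8, vmax=1.3), 1))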

  1. EPA SCIENCE FORUM - EPA'S TOXICOGENOMICS PARTNERSHIPS ACROSS GOVERNMENT, ACADEMIA AND INDUSTRY

    EPA Science Inventory

    Over the past decade genomics, proteomics and metabonomics technologies have transformed the science of toxicology, and concurrent advances in computing and informatics have provided management and analysis solutions for this onslaught of toxicogenomic data. EPA has been actively...

  2. Testing the Wildlink activity-detection system on wolves and white-tailed deer

    USGS Publications Warehouse

    Kunkel, K.E.; Chapman, R.C.; Mech, L.D.; Gese, E.M.

    1991-01-01

    We tested the reliability and predictive capabilities of the activity meter in the new Wildlink Data Acquisition and Recapture System by comparing activity counts with concurrent observations of captive wolf (Canis lupus) and free-ranging white-tailed deer (Odocoileus virginianus) activity. The Wildlink system stores activity data in a computer within a radio collar with which a biologist can communicate. Three levels of activity could be detected. The Wildlink system provided greater activity discrimination and was more reliable, adaptable, and efficient and was easier to use than conventional telemetry activity systems. The Wildlink system could be highly useful for determining wildlife energy budgets.

  3. Measurement of viscosity and elasticity of lubricants at high pressures

    NASA Technical Reports Server (NTRS)

    Rein, R. G., Jr.; Charng, T. T.; Sliepcevich, C. M.; Ewbank, W. J.

    1975-01-01

    The oscillating quartz crystal viscometer has been used to investigate possible viscoelastic behavior in synthetic lubricating fluids and to obtain viscosity-pressure-temperature data for these fluids at temperatures to 300 F and pressures to 40,000 psig. The effect of pressure and temperature on the density of the test fluids was measured concurrently with the viscosity measurements. Viscoelastic behavior of one fluid, di-(2-ethylhexyl) sebacate, was observed over a range of pressures. These data were used to compute the reduced shear elastic (storage) modulus and reduced loss modulus for this fluid at atmospheric pressure and 100 F as functions of reduced frequency.

  4. Three dimensional nozzle-exhaust flow field analysis by a reference plane technique.

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; Del Guidice, P. D.

    1972-01-01

    A numerical method based on reference plane characteristics has been developed for the calculation of highly complex supersonic nozzle-exhaust flow fields. The difference equations have been developed for three coordinate systems. Local reference plane orientations are employed using the three coordinate systems concurrently thus catering to a wide class of flow geometries. Discontinuities such as the underexpansion shock and contact surfaces are computed explicitly for nonuniform vehicle external flows. The nozzles considered may have irregular cross-sections with swept throats and may be stacked in modules using the vehicle undersurface for additional expansion. Results are presented for several nozzle configurations.

  5. Paradigms and strategies for scientific computing on distributed memory concurrent computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, I.T.; Walker, D.W.

    1994-06-01

    In this work we examine recent advances in parallel languages and abstractions that have the potential for improving the programmability and maintainability of large-scale, parallel, scientific applications running on high performance architectures and networks. This paper focuses on Fortran M, a set of extensions to Fortran 77 that supports the modular design of message-passing programs. We describe the Fortran M implementation of a particle-in-cell (PIC) plasma simulation application, and discuss issues in the optimization of the code. The use of two other methodologies for parallelizing the PIC application is considered. The first is based on the shared object abstraction as embodied in the Orca language. The second approach is the Split-C language. In Fortran M, Orca, and Split-C the ability of the programmer to control the granularity of communication is important in designing an efficient implementation.

  6. The OptIPuter microscopy demonstrator: enabling science through a transatlantic lightpath

    PubMed Central

    Ellisman, M.; Hutton, T.; Kirkland, A.; Lin, A.; Lin, C.; Molina, T.; Peltier, S.; Singh, R.; Tang, K.; Trefethen, A.E.; Wallom, D.C.H.; Xiong, X.

    2009-01-01

    The OptIPuter microscopy demonstrator project has been designed to enable concurrent and remote usage of world-class electron microscopes located in Oxford and San Diego. The project has constructed a network consisting of microscopes and computational and data resources that are all connected by a dedicated network infrastructure using the UK Lightpath and US Starlight systems. Key science drivers include examples from both materials and biological science. The resulting system is now a permanent link between the Oxford and San Diego microscopy centres. This will form the basis of further projects between the sites and expansion of the types of systems that can be remotely controlled, including optical, as well as electron, microscopy. Other improvements will include the updating of the Microsoft cluster software to the high performance computing (HPC) server 2008, which includes the HPC basic profile implementation that will enable the development of interoperable clients. PMID:19487201

  7. The OptIPuter microscopy demonstrator: enabling science through a transatlantic lightpath.

    PubMed

    Ellisman, M; Hutton, T; Kirkland, A; Lin, A; Lin, C; Molina, T; Peltier, S; Singh, R; Tang, K; Trefethen, A E; Wallom, D C H; Xiong, X

    2009-07-13

    The OptIPuter microscopy demonstrator project has been designed to enable concurrent and remote usage of world-class electron microscopes located in Oxford and San Diego. The project has constructed a network consisting of microscopes and computational and data resources that are all connected by a dedicated network infrastructure using the UK Lightpath and US Starlight systems. Key science drivers include examples from both materials and biological science. The resulting system is now a permanent link between the Oxford and San Diego microscopy centres. This will form the basis of further projects between the sites and expansion of the types of systems that can be remotely controlled, including optical, as well as electron, microscopy. Other improvements will include the updating of the Microsoft cluster software to the high performance computing (HPC) server 2008, which includes the HPC basic profile implementation that will enable the development of interoperable clients.

  8. A language comparison for scientific computing on MIMD architectures

    NASA Technical Reports Server (NTRS)

    Jones, Mark T.; Patrick, Merrell L.; Voigt, Robert G.

    1989-01-01

    Choleski's method for solving banded symmetric, positive definite systems is implemented on a multiprocessor computer using three FORTRAN based parallel programming languages, the Force, PISCES and Concurrent FORTRAN. The capabilities of the languages for expressing parallelism and their user-friendliness are discussed, including readability of the code, debugging assistance offered, and expressiveness of the languages. The performance of the different implementations is compared. It is argued that PISCES, using the Force for medium-grained parallelism, is the appropriate choice for programming Choleski's method on the multiprocessor computer, Flex/32.

  9. The Petascale Data Storage Institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gibson, Garth; Long, Darrell; Honeyman, Peter

    2013-07-01

    Petascale computing infrastructures for scientific discovery make petascale demands on information storage capacity, performance, concurrency, reliability, availability, and manageability. The Petascale Data Storage Institute focuses on the data storage problems found in petascale scientific computing environments, with special attention to community issues such as interoperability, community buy-in, and shared tools. The Petascale Data Storage Institute is a collaboration between researchers at Carnegie Mellon University, National Energy Research Scientific Computing Center, Pacific Northwest National Laboratory, Oak Ridge National Laboratory, Sandia National Laboratory, Los Alamos National Laboratory, University of Michigan, and the University of California at Santa Cruz.

  10. A new vertical grid nesting capability in the Weather Research and Forecasting (WRF) Model

    DOE PAGES

    Daniels, Megan H.; Lundquist, Katherine A.; Mirocha, Jeffrey D.; ...

    2016-09-16

    Mesoscale atmospheric models are increasingly used for high-resolution (<3 km) simulations to better resolve smaller-scale flow details. Increased resolution is achieved using mesh refinement via grid nesting, a procedure where multiple computational domains are integrated either concurrently or in series. A constraint in the concurrent nesting framework offered by the Weather Research and Forecasting (WRF) Model is that mesh refinement is restricted to the horizontal dimensions. This limitation prevents control of the grid aspect ratio, leading to numerical errors due to poor grid quality and preventing grid optimization. Here, a procedure permitting vertical nesting for one-way concurrent simulation is developed and validated through idealized cases. The benefits of vertical nesting are demonstrated using both mesoscale and large-eddy simulations (LES). Mesoscale simulations of the Terrain-Induced Rotor Experiment (T-REX) show that vertical grid nesting can alleviate numerical errors due to large aspect ratios on coarse grids, while allowing for higher vertical resolution on fine grids. Furthermore, the coarsening of the parent domain does not result in a significant loss of accuracy on the nested domain. LES of neutral boundary layer flow shows that, by permitting optimal grid aspect ratios on both parent and nested domains, use of vertical nesting yields improved agreement with the theoretical logarithmic velocity profile on both domains. Lastly, vertical grid nesting in WRF opens the path forward for multiscale simulations, allowing more accurate simulations spanning a wider range of scales than previously possible.

  11. A new vertical grid nesting capability in the Weather Research and Forecasting (WRF) Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniels, Megan H.; Lundquist, Katherine A.; Mirocha, Jeffrey D.

    Mesoscale atmospheric models are increasingly used for high-resolution (<3 km) simulations to better resolve smaller-scale flow details. Increased resolution is achieved using mesh refinement via grid nesting, a procedure where multiple computational domains are integrated either concurrently or in series. A constraint in the concurrent nesting framework offered by the Weather Research and Forecasting (WRF) Model is that mesh refinement is restricted to the horizontal dimensions. This limitation prevents control of the grid aspect ratio, leading to numerical errors due to poor grid quality and preventing grid optimization. Here, a procedure permitting vertical nesting for one-way concurrent simulation is developed and validated through idealized cases. The benefits of vertical nesting are demonstrated using both mesoscale and large-eddy simulations (LES). Mesoscale simulations of the Terrain-Induced Rotor Experiment (T-REX) show that vertical grid nesting can alleviate numerical errors due to large aspect ratios on coarse grids, while allowing for higher vertical resolution on fine grids. Furthermore, the coarsening of the parent domain does not result in a significant loss of accuracy on the nested domain. LES of neutral boundary layer flow shows that, by permitting optimal grid aspect ratios on both parent and nested domains, use of vertical nesting yields improved agreement with the theoretical logarithmic velocity profile on both domains. Lastly, vertical grid nesting in WRF opens the path forward for multiscale simulations, allowing more accurate simulations spanning a wider range of scales than previously possible.

  12. Is there another coincidence problem at the reionization epoch?

    NASA Astrophysics Data System (ADS)

    Lombriser, Lucas; Smer-Barreto, Vanessa

    2017-12-01

    The cosmological coincidences between the matter and radiation energy densities at recombination as well as between the densities of matter and the cosmological constant at the present time are well known. We point out that, moreover, the third intersection between the energy densities of radiation and the cosmological constant coincides with the reionization epoch. To quantify the statistical relevance of this concurrence, we compute the Bayes factor between the concordance cosmology with free Thomson scattering optical depth and a model for which this parameter is inferred from imposing a match between the time of density equality and the epoch of reionization. This is to characterize the potential explanatory gain if one were to find a parameter-free physical connection. We find a very strong preference for such a concurrence on the Jeffreys scale from current cosmological observations. We furthermore discuss the effect of the choice of priors, changes in reionization history, and free sum of neutrino masses. We also estimate the impact of adding intermediate polarization data from the Planck High Frequency Instrument and prospects for future 21 cm surveys. In the first case, the preference for the correlation remains substantial, whereas future data may give results more decisive in pro or substantial in contra of it. Finally, we provide a discussion on different interpretations of these findings. In particular, we show how a connection between the star-formation history and the cosmological background dynamics can give rise to this concurrence.
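
    The third coincidence referred to above can be checked with a one-line estimate: the radiation density scales as (1+z)^4 while the cosmological-constant density is fixed, so the two cross where (1+z)^4 = Omega_Lambda/Omega_r. With the approximate present-day values assumed below, the crossing falls near z of 8, within the commonly quoted reionization era.

      # Redshift where the radiation density rho_r0*(1+z)**4 equals the constant
      # dark-energy density: (1+z)**4 = Omega_Lambda / Omega_r.
      Omega_Lambda = 0.69    # assumed present-day dark-energy density parameter
      Omega_r = 9.0e-5       # assumed present-day radiation density parameter (photons + neutrinos)
      z_cross = (Omega_Lambda / Omega_r) ** 0.25 - 1
      print(f"radiation / Lambda equality at z ~ {z_cross:.1f}")    # about 8, i.e. the reionization era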

  13. The ARES High-level Intermediate Representation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moss, Nicholas David

    The LLVM intermediate representation (IR) lacks semantic constructs for depicting common high-performance operations such as parallel and concurrent execution, communication and synchronization. Currently, representing such semantics in LLVM requires either extending the intermediate form (a significant undertaking) or the use of ad hoc indirect means such as encoding them as intrinsics and/or the use of metadata constructs. In this paper we discuss a work in progress to explore the design and implementation of a new compilation stage and associated high-level intermediate form that is placed between the abstract syntax tree and when it is lowered to LLVM's IR. This high-level representation is a superset of LLVM IR and supports the direct representation of these common parallel computing constructs along with the infrastructure for supporting analysis and transformation passes on this representation.

  14. Anesthesia information management system-based near real-time decision support to manage intraoperative hypotension and hypertension.

    PubMed

    Nair, Bala G; Horibe, Mayumi; Newman, Shu-Fang; Wu, Wei-Ying; Peterson, Gene N; Schwid, Howard A

    2014-01-01

    Intraoperative hypotension and hypertension are associated with adverse clinical outcomes and morbidity. Clinical decision support mediated through an anesthesia information management system (AIMS) has been shown to improve quality of care. We hypothesized that an AIMS-based clinical decision support system could be used to improve management of intraoperative hypotension and hypertension. A near real-time AIMS-based decision support module, Smart Anesthesia Manager (SAM), was used to detect selected scenarios contributing to hypotension and hypertension. Specifically, hypotension (systolic blood pressure <80 mm Hg) with a concurrent high concentration (>1.25 minimum alveolar concentration [MAC]) of inhaled drug and hypertension (systolic blood pressure >160 mm Hg) with concurrent phenylephrine infusion were detected, and anesthesia providers were notified via "pop-up" computer screen messages. AIMS data were retrospectively analyzed to evaluate the effect of SAM notification messages on hypotensive and hypertensive episodes. For anesthetic cases 12 months before (N = 16913) and after (N = 17132) institution of SAM messages, the median duration of hypotensive episodes with concurrent high MAC decreased with notifications (Mann Whitney rank sum test, P = 0.031). However, the reduction in the median duration of hypertensive episodes with concurrent phenylephrine infusion was not significant (P = 0.47). The frequency of prolonged episodes that lasted >6 minutes (sampling period of SAM), represented in terms of the number of cases with episodes per 100 surgical cases (or percentage occurrence), declined with notifications for both hypotension with >1.25 MAC inhaled drug episodes (δ = -0.26% [confidence interval, -0.38% to -0.11%], P < 0.001) and hypertension with phenylephrine infusion episodes (δ = -0.92% [confidence interval, -1.79% to -0.04%], P = 0.035). For hypotensive events, the anesthesia providers reduced the inhaled drug concentrations to <1.25 MAC 81% of the time with notifications compared with 59% without notifications (P = 0.003). For hypertensive episodes, although the anesthesia providers' reduction or discontinuation of the phenylephrine infusion increased from 22% to 37% (P = 0.030) with notification messages, the overall response was less consistent than the response to hypotensive episodes. With automatic acquisition of arterial blood pressure and inhaled drug concentration variables in an AIMS, near real-time notification was effective in reducing the duration and frequency of hypotension with concurrent >1.25 MAC inhaled drug episodes. However, since phenylephrine infusion is manually documented in an AIMS, the impact of notification messages was less pronounced in reducing episodes of hypertension with concurrent phenylephrine infusion. Automated data capture and a higher frequency of data acquisition in an AIMS can improve the effectiveness of an intraoperative clinical decision support system.
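
    The detection logic described (flagging systolic blood pressure below 80 mm Hg while the inhaled-agent concentration exceeds 1.25 MAC, sampled at a fixed interval) can be sketched as a simple threshold scan. The fragment below is illustrative only; SAM's actual implementation and data interfaces are not described in this record.

      def detect_episodes(samples, sbp_limit=80, mac_limit=1.25):
          """Return indices of samples where systolic pressure is below sbp_limit while the
          inhaled-agent concentration exceeds mac_limit (thresholds quoted in the record).
          `samples` is a sequence of (systolic_bp_mmHg, agent_mac) pairs at a fixed interval."""
          flagged = []
          for i, (sbp, mac) in enumerate(samples):
              if sbp < sbp_limit and mac > mac_limit:
                  flagged.append(i)        # a real system would raise a pop-up notification here
          return flagged

      # Hypothetical 6-minute samples for illustration only
      vitals = [(105, 1.0), (78, 1.4), (76, 1.5), (92, 1.1)]
      print(detect_episodes(vitals))       # -> [1, 2]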

  15. A Metascalable Computing Framework for Large Spatiotemporal-Scale Atomistic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nomura, K; Seymour, R; Wang, W

    2009-02-17

    A metascalable (or 'design once, scale on new architectures') parallel computing framework has been developed for large spatiotemporal-scale atomistic simulations of materials based on spatiotemporal data locality principles, which is expected to scale on emerging multipetaflops architectures. The framework consists of: (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms for high complexity problems; (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics, while introducing multiple parallelization axes; and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these O(N) algorithms onto a multicore cluster based on a hybrid implementation combining message passing and critical section-free multithreading. The EDC-STEP-HCD framework exposes maximal concurrency and data locality, thereby achieving: (1) inter-node parallel efficiency well over 0.95 for 218 billion-atom molecular-dynamics and 1.68 trillion electronic-degrees-of-freedom quantum-mechanical simulations on 212,992 IBM BlueGene/L processors (superscalability); (2) high intra-node, multithreading parallel efficiency (nanoscalability); and (3) nearly perfect time/ensemble parallel efficiency (eon-scalability). The spatiotemporal scale covered by MD simulation on a sustained petaflops computer per day (i.e., petaflops·day of computing) is estimated as NT = 2.14 (e.g., N = 2.14 million atoms for T = 1 microsecond).

  16. A Mixed Methods Approach to Understanding School Counseling Program Evaluation: High School Counselors' Methods and Perceptions

    ERIC Educational Resources Information Center

    Aucoin, Jennifer Mangrum

    2013-01-01

    The purpose of this mixed methods concurrent triangulation study was to examine the program evaluation practices of high school counselors. A total of 294 high school counselors in Texas were assessed using a mixed methods concurrent triangulation design. A researcher-developed survey, the School Counseling Program Evaluation Questionnaire…

  17. Concurrent electromagnetic scattering analysis

    NASA Technical Reports Server (NTRS)

    Patterson, Jean E.; Cwik, Tom; Ferraro, Robert D.; Jacobi, Nathan; Liewer, Paulett C.; Lockhart, Thomas G.; Lyzenga, Gregory A.; Parker, Jay

    1989-01-01

    The computational power of the hypercube parallel computing architecture is applied to the solution of large-scale electromagnetic scattering and radiation problems. Three analysis codes have been implemented. A Hypercube Electromagnetic Interactive Analysis Workstation was developed to aid in the design and analysis of metallic structures such as antennas and to facilitate the use of these analysis codes. The workstation provides a general user environment for specification of the structure to be analyzed and graphical representations of the results.

  18. Computer-Access-Code Matrices

    NASA Technical Reports Server (NTRS)

    Collins, Earl R., Jr.

    1990-01-01

    Authorized users respond to changing challenges with changing passwords. Scheme for controlling access to computers defeats eavesdroppers and "hackers". Based on password system of challenge and password or sign, challenge, and countersign correlated with random alphanumeric codes in matrices of two or more dimensions. Codes stored on floppy disk or plug-in card and changed frequently. For even higher security, matrices of four or more dimensions used, just as cubes compounded into hypercubes in concurrent processing.
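
    A minimal sketch of the two-dimensional case follows: a random alphanumeric code matrix is shared in advance (for example on a card), the challenger names a cell, and the user answers with the code stored there. Matrix size, code length, and the challenge format are assumptions for illustration.

      import secrets, string

      def make_code_matrix(rows=8, cols=8, length=4):
          """Random alphanumeric code matrix, stored identically by the host and the user
          (e.g., on a floppy disk or plug-in card, as in the brief)."""
          alphabet = string.ascii_uppercase + string.digits
          return [[''.join(secrets.choice(alphabet) for _ in range(length))
                   for _ in range(cols)] for _ in range(rows)]

      shared_matrix = make_code_matrix()                      # distributed to the user in advance
      row, col = secrets.randbelow(8), secrets.randbelow(8)   # host issues a challenge, e.g. "cell (3, 5)"
      response = shared_matrix[row][col]                      # user reads the code off the card
      print("access granted" if response == shared_matrix[row][col] else "access denied")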

  19. A Multi-Time Scale Morphable Software Milieu for Polymorphous Computing Architectures (PCA) - Composable, Scalable Systems

    DTIC Science & Technology

    2004-10-01

    ... "Scalable Parallel Libraries for Large-Scale Concurrent Applications," Technical Report UCRL-JC-109251, Lawrence Livermore National Laboratory.

  20. Remote file inquiry (RFI) system

    NASA Technical Reports Server (NTRS)

    1975-01-01

    System interrogates and maintains user-definable data files from remote terminals, using English-like, free-form query language easily learned by persons not proficient in computer programming. System operates in asynchronous mode, allowing any number of inquiries within limitation of available core to be active concurrently.

  1. Concurrent Breakpoints

    DTIC Science & Technology

    2011-12-18

    Proceedings of the SIGMETRICS Symposium on Parallel and Distributed Tools, pages 48–59, 1998. [8] A. Dinning and E. Schonberg. Detecting access ... multithreaded programs. ACM Trans. Comput. Syst., 15(4):391–411, 1997. [38] E. Schonberg. On-the-fly detection of access anomalies. In Proceedings ...

  2. A digital photographic measurement method for quantifying foot posture: validity, reliability, and descriptive data.

    PubMed

    Cobb, Stephen C; James, C Roger; Hjertstedt, Matthew; Kruk, James

    2011-01-01

    Although abnormal foot posture long has been associated with lower extremity injury risk, the evidence is equivocal. Poor intertester reliability of traditional foot measures might contribute to the inconsistency. To investigate the validity and reliability of a digital photographic measurement method (DPMM) technology, the reliability of DPMM-quantified foot measures, and the concurrent validity of the DPMM with clinical-measurement methods (CMMs) and to report descriptive data for DPMM measures with moderate to high intratester and intertester reliability. Descriptive laboratory study. Biomechanics research laboratory. A total of 159 people participated in 3 groups. Twenty-eight people (11 men, 17 women; age  =  25 ± 5 years, height  =  1.71 ± 0.10 m, mass  =  77.6 ± 17.3 kg) were recruited for investigation of intratester and intertester reliability of the DPMM technology; 20 (10 men, 10 women; age  =  24 ± 2 years, height  =  1.71 ± 0.09 m, mass  =  76 ± 16 kg) for investigation of DPMM and CMM reliability and concurrent validity; and 111 (42 men, 69 women; age  =  22.8 ± 4.7 years, height  =  168.5 ± 10.4 cm, mass  =  69.8 ± 13.3 kg) for development of a descriptive data set of the DPMM foot measurements with moderate to high intratester and intertester reliabilities. The dimensions of 10 model rectangles and the 28 participants' feet were measured, and DPMM foot posture was measured in the 111 participants. Two clinicians assessed the DPMM and CMM foot measures of the 20 participants. Validity and reliability were evaluated using mean absolute and percentage errors and intraclass correlation coefficients. Descriptive data were computed from the DPMM foot posture measures. The DPMM technology intratester and intertester reliability intraclass correlation coefficients were 1.0 for each tester and variable. Mean absolute errors were equal to or less than 0.2 mm for the bottom and right-side variables and 0.1° for the calculated angle variable. Mean percentage errors between the DPMM and criterion reference values were equal to or less than 0.4%. Intratester and intertester reliabilities of DPMM-computed structural measures of arch and navicular indices were moderate to high (>0.78), and concurrent validity was moderate to strong. The DPMM is a valid and reliable clinical and research tool for quantifying foot structure. The DPMM and the descriptive data might be used to define groups in future studies in which the relationship between foot posture and function or injury risk is investigated.

  3. A novel heterogeneous algorithm to simulate multiphase flow in porous media on multicore CPU-GPU systems

    NASA Astrophysics Data System (ADS)

    McClure, J. E.; Prins, J. F.; Miller, C. T.

    2014-07-01

    Multiphase flow implementations of the lattice Boltzmann method (LBM) are widely applied to the study of porous medium systems. In this work, we construct a new variant of the popular "color" LBM for two-phase flow in which a three-dimensional, 19-velocity (D3Q19) lattice is used to compute the momentum transport solution while a three-dimensional, seven velocity (D3Q7) lattice is used to compute the mass transport solution. Based on this formulation, we implement a novel heterogeneous GPU-accelerated algorithm in which the mass transport solution is computed by multiple shared memory CPU cores programmed using OpenMP while a concurrent solution of the momentum transport is performed using a GPU. The heterogeneous solution is demonstrated to provide speedup of 2.6 × as compared to multi-core CPU solution and 1.8 × compared to GPU solution due to concurrent utilization of both CPU and GPU bandwidths. Furthermore, we verify that the proposed formulation provides an accurate physical representation of multiphase flow processes and demonstrate that the approach can be applied to perform heterogeneous simulations of two-phase flow in porous media using a typical GPU-accelerated workstation.
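
    The concurrency pattern (mass transport on CPU cores while the momentum transport proceeds on the GPU, with a synchronization point each timestep) can be sketched without GPU code. In the fragment below the two updates are placeholder array operations and a thread pool stands in for the GPU stream; it illustrates only the overlap structure, not the authors' lattice Boltzmann kernels.

      from concurrent.futures import ThreadPoolExecutor
      import numpy as np

      def momentum_step(f_q19):
          """Stand-in for the D3Q19 momentum-transport update (GPU-resident in the paper)."""
          return f_q19 * 0.99 + 0.01

      def mass_step(f_q7):
          """Stand-in for the D3Q7 mass-transport update (multi-core CPU in the paper)."""
          return f_q7 * 0.98 + 0.02

      f19 = np.ones((19, 32, 32, 32))
      f7 = np.ones((7, 32, 32, 32))
      with ThreadPoolExecutor(max_workers=1) as pool:
          for _ in range(10):                        # one timestep per iteration
              fut = pool.submit(momentum_step, f19)  # the "GPU" stream runs in a worker thread
              f7 = mass_step(f7)                     # CPU work proceeds concurrently
              f19 = fut.result()                     # synchronise before the next timestep
      print(f19.mean(), f7.mean())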

  4. Highlights of X-Stack ExM Deliverable Swift/T

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wozniak, Justin M.

    Swift/T is a key success from the ExM: System support for extreme-scale, many-task applications X-Stack project, which proposed to use concurrent dataflow as an innovative programming model to exploit extreme parallelism in exascale computers. The Swift/T component of the project reimplemented the Swift language from scratch to allow applications that compose scientific modules together to be built and run on available petascale computers (Blue Gene, Cray). Swift/T does this via a new compiler and runtime that generates and executes the application as an MPI program. We assume that mission-critical emerging exascale applications will be composed as scalable applications using existing software components, connected by data dependencies. Developers wrap native code fragments using a higher-level language, then build composite applications to form a computational experiment. This exemplifies hierarchical concurrency: lower-level messaging libraries are used for fine-grained parallelism; high-level control is used for inter-task coordination. These patterns are best expressed with dataflow, but static DAGs (i.e., other workflow languages) limit the applications that can be built; they do not provide the expressiveness of Swift, such as conditional execution, iteration, and recursive functions.
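
    The dataflow style described here can be illustrated with a short, generic sketch: tasks are launched as soon as their inputs exist, and ordinary control constructs (iteration, conditionals) drive task creation. The code below uses plain Python futures and hypothetical task names purely for illustration; it is not Swift/T syntax.

```python
# Generic dataflow-style composition with futures; hypothetical task functions
# stand in for wrapped native code fragments. This is not Swift/T itself.
from concurrent.futures import ProcessPoolExecutor, as_completed

def simulate(seed):
    return seed * seed          # placeholder for a wrapped simulation fragment

def analyze(value):
    return value + 1            # placeholder for a downstream analysis task

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        # Iteration: launch many independent simulations concurrently.
        sims = [pool.submit(simulate, seed) for seed in range(8)]
        # Data dependency: each analysis runs as soon as its input is ready.
        results = [analyze(f.result()) for f in as_completed(sims)]
        # Conditional execution on computed data decides the next task.
        if max(results) > 10:
            print("refine around", max(results))
```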

  5. Computational approaches to metabolic engineering utilizing systems biology and synthetic biology.

    PubMed

    Fong, Stephen S

    2014-08-01

    Metabolic engineering modifies cellular function to address various biochemical applications. Underlying metabolic engineering efforts are a host of tools and knowledge that are integrated to enable successful outcomes. Concurrent development of computational and experimental tools has enabled different approaches to metabolic engineering. One approach is to leverage knowledge and computational tools to prospectively predict designs to achieve the desired outcome. An alternative approach is to utilize combinatorial experimental tools to empirically explore the range of cellular function and to screen for desired traits. This mini-review focuses on computational systems biology and synthetic biology tools that can be used in combination for prospective in silico strain design.

  6. Enhanced conformational sampling using replica exchange with concurrent solute scaling and hamiltonian biasing realized in one dimension.

    PubMed

    Yang, Mingjun; Huang, Jing; MacKerell, Alexander D

    2015-06-09

    Replica exchange (REX) is a powerful computational tool for overcoming the quasi-ergodic sampling problem of complex molecular systems. Recently, several multidimensional extensions of this method have been developed to realize exchanges in both temperature and biasing potential space or the use of multiple biasing potentials to improve sampling efficiency. However, increased computational cost due to the multidimensionality of exchanges becomes challenging for use on complex systems under explicit solvent conditions. In this study, we develop a one-dimensional (1D) REX algorithm to concurrently combine the advantages of overall enhanced sampling from Hamiltonian solute scaling and the specific enhancement of collective variables using Hamiltonian biasing potentials. In the present Hamiltonian replica exchange method, termed HREST-BP, Hamiltonian solute scaling is applied to the solute subsystem, and its interactions with the environment to enhance overall conformational transitions and biasing potentials are added along selected collective variables associated with specific conformational transitions, thereby balancing the sampling of different hierarchical degrees of freedom. The two enhanced sampling approaches are implemented concurrently allowing for the use of a small number of replicas (e.g., 6 to 8) in 1D, thus greatly reducing the computational cost in complex system simulations. The present method is applied to conformational sampling of two nitrogen-linked glycans (N-glycans) found on the HIV gp120 envelope protein. Considering the general importance of the conformational sampling problem, HREST-BP represents an efficient procedure for the study of complex saccharides, and, more generally, the method is anticipated to be of general utility for the conformational sampling in a wide range of macromolecular systems.
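
    The neighbor-swap step of a one-dimensional replica exchange scheme can be sketched compactly. The code below is a generic illustration with a placeholder energy function; the actual HREST-BP Hamiltonian combines solute scaling with biasing potentials along collective variables and is evaluated by the MD engine.

```python
# Sketch of a 1D replica-exchange neighbor swap using the Metropolis criterion.
# energy(i, x) is a hypothetical stand-in for replica i's scaled/biased Hamiltonian.
import math
import random

def attempt_neighbor_swaps(states, energy, beta=1.0):
    for i in range(len(states) - 1):
        x, y = states[i], states[i + 1]
        # Energy change if replicas i and i+1 exchange configurations.
        delta = (energy(i, y) + energy(i + 1, x)) - (energy(i, x) + energy(i + 1, y))
        if delta <= 0.0 or random.random() < math.exp(-beta * delta):
            states[i], states[i + 1] = y, x
    return states

# Toy usage: 6 replicas with progressively softer harmonic potentials.
scales = [1.0 / (1.0 + 0.3 * i) for i in range(6)]
states = [random.uniform(-2.0, 2.0) for _ in range(6)]
states = attempt_neighbor_swaps(states, lambda i, x: scales[i] * x * x)
print(states)
```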

  7. Web usability evaluation with screen reader users: implementation of the partial concurrent thinking aloud technique.

    PubMed

    Federici, Stefano; Borsci, Simone; Stamerra, Gianluca

    2010-08-01

    A verbal protocol technique, adopted for a web usability evaluation, requires that users be able to perform a double task: surfing and talking. Nevertheless, when blind users surf by using a screen reader and talk about the way they interact with the computer, the evaluation is influenced by a structural interference: users are forced to think aloud and listen to the screen reader at the same time. The aim of this study is to build up a verbal protocol technique for samples of visually impaired users in order to overcome the limits of concurrent and retrospective protocols. The technique we improved, called partial concurrent thinking aloud (PCTA), integrates a modified set of concurrent verbalization and retrospective analysis. One group of 6 blind users and another group of 6 sighted users evaluated the usability of a website using PCTA. By estimating the number of necessary users by means of an asymptotic test, it was found that the two groups had an equivalent ability to identify usability problems, both over 80%. The result suggests that PCTA, while respecting the properties of classic verbal protocols, also makes it possible to overcome the structural interference and the limits of concurrent and retrospective protocols when used with screen reader users. In this way, PCTA reduces the efficiency difference of usability evaluation between blind and sighted users.

  8. Storage media pipelining: Making good use of fine-grained media

    NASA Technical Reports Server (NTRS)

    Vanmeter, Rodney

    1993-01-01

    This paper proposes a new high-performance paradigm for accessing removable media such as tapes and especially magneto-optical disks. In high-performance computing the striping of data across multiple devices is a common means of improving data transfer rates. Striping has been used very successfully for fixed magnetic disks improving overall system reliability as well as throughput. It has also been proposed as a solution for providing improved bandwidth for tape and magneto-optical subsystems. However, striping of removable media has shortcomings, particularly in the areas of latency to data and restricted system configurations, and is suitable primarily for very large I/Os. We propose that for fine-grained media, an alternative access method, media pipelining, may be used to provide high bandwidth for large requests while retaining the flexibility to support concurrent small requests and different system configurations. Its principal drawback is high buffering requirements in the host computer or file server. This paper discusses the possible organization of such a system including the hardware conditions under which it may be effective, and the flexibility of configuration. Its expected performance is discussed under varying workloads including large single I/O's and numerous smaller ones. Finally, a specific system incorporating a high-transfer-rate magneto-optical disk drive and autochanger is discussed.

  9. Production experience with the ATLAS Event Service

    NASA Astrophysics Data System (ADS)

    Benjamin, D.; Calafiura, P.; Childers, T.; De, K.; Guan, W.; Maeno, T.; Nilsson, P.; Tsulaia, V.; Van Gemmeren, P.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The ATLAS Event Service (AES) has been designed and implemented for efficient running of ATLAS production workflows on a variety of computing platforms, ranging from conventional Grid sites to opportunistic, often short-lived resources, such as spot market commercial clouds, supercomputers and volunteer computing. The Event Service architecture allows real-time delivery of fine-grained workloads to running payload applications which process dispatched events or event ranges and immediately stream the outputs to highly scalable Object Stores. Thanks to its agile and flexible architecture the AES is currently being used by grid sites for assigning low-priority workloads to otherwise idle computing resources; for harvesting HPC resources in an efficient back-fill mode; and for massively scaling out to the 50-100k concurrent core level on the Amazon spot market to efficiently utilize those transient resources for peak production needs. Platform ports in development include ATLAS@Home (BOINC) and the Google Compute Engine, and a growing number of HPC platforms. After briefly reviewing the concept and the architecture of the Event Service, we will report the status and experience gained in AES commissioning and production operations on supercomputers, and our plans for extending ES application beyond Geant4 simulation to other workflows, such as reconstruction and data analysis.

  10. Concurrent image-based visual servoing with adaptive zooming for non-cooperative rendezvous maneuvers

    NASA Astrophysics Data System (ADS)

    Pomares, Jorge; Felicetti, Leonard; Pérez, Javier; Emami, M. Reza

    2018-02-01

    An image-based servo controller for the guidance of a spacecraft during non-cooperative rendezvous is presented in this paper. The controller directly utilizes the visual features from image frames of a target spacecraft for computing both attitude and orbital maneuvers concurrently. The utilization of adaptive optics, such as zooming cameras, is also addressed through developing an invariant-image servo controller. The controller allows for performing rendezvous maneuvers independently from the adjustments of the camera focal length, improving the performance and versatility of maneuvers. The stability of the proposed control scheme is proven analytically in the invariant space, and its viability is explored through numerical simulations.

  11. Distribution of G concurrence of random pure states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cappellini, Valerio; Sommers, Hans-Juergen; Zyczkowski, Karol

    2006-12-15

    The average entanglement of random pure states of an N × N composite system is analyzed. We compute the average value of the determinant D of the reduced state, which forms an entanglement monotone. Calculating higher moments of the determinant, we characterize the probability distribution P(D). Similar results are obtained for the rescaled Nth root of the determinant, called the G concurrence. We show that in the limit N → ∞ this quantity becomes concentrated at a single point G* = 1/e. The position of the concentration point changes if one considers an arbitrary N × K bipartite system, in the joint limit N, K → ∞, with K/N fixed.

  12. Leveraging Scratch4SL and Second Life to motivate high school students' participation in introductory programming courses: findings from a case study

    NASA Astrophysics Data System (ADS)

    Pellas, Nikolaos; Peroutseas, Efstratios

    2017-01-01

    Students in secondary education often struggle to understand basic programming concepts. Despite what is known about the benefits of programming, there is little published evidence showing how high school students can learn basic programming concepts through innovative instructional formats that also help them gain or enhance computational thinking skills. This gap has contributed to a lack of motivation and interest in Computer Science courses. This case study presents the opinions of twenty-eight (n = 28) high school students who participated voluntarily in a 3D game-like environment created in Second Life. This environment was combined with the 2D programming environment of Scratch4SL for the implementation of programming concepts (i.e. sequence and concurrent programming commands) in a blended instructional format. An instructional framework based on Papert's theory of Constructionism, intended to help students coordinate and better manage the learning material in collaborative, practice-based learning activities, is also proposed. Using a mixed-methods design, students' participation in focus groups (qualitative data) and their motivation based on their experiences (quantitative data) were measured before and after several learning tasks. Findings indicated that an instructional design framework based on Constructionism is meaningful for acquiring or empowering students' social, cognitive, higher-order, and computational thinking skills. Educational implications and recommendations for future research are also discussed.

  13. Social networks and concurrent sexual relationships--a qualitative study among men in an urban South African community.

    PubMed

    Ragnarsson, Anders; Townsend, Loraine; Thorson, Anna; Chopra, Mickey; Ekstrom, Anna Mia

    2009-10-01

    The aim was to explore and describe characteristics of males' social and sexual networks in a South African peri-urban community. Twenty in-depth interviews were conducted with men participating in a larger quantitative study where the median age of the men was 28.7 years and almost 56% had some high-school education, 17.2% were unemployed and 94.7% were not married. A Thematic Question Guide with open-ended questions was used for the interviews. A thematic content analysis was conducted to explore the characteristics and dynamics of social and sexual relationships among these men. A high number of temporary and stable concurrent female sexual partners, geographic mobility and high levels of unprotected sex were common. Increased status as a man and lack of trust in women's fidelity were given as important reasons for concurrent female sexual relationships. Strong social networks within male core groups provided economic and social support for the pursuit and maintenance of this behaviour. Concurrent sexual relationships in combination with high viral loads among newly infected individuals unaware of their HIV status create an extremely high-risk environment for the spread of HIV in this population. Interventions targeting men at high risk of HIV need to challenge current societal norms of masculinity to help promote individual sexual risk reduction strategies. Such strategies should go beyond increasing condom use, to include a reduction in the number of concurrent sexual partners.

  14. Media multitasking behavior: concurrent television and computer usage.

    PubMed

    Brasel, S Adam; Gips, James

    2011-09-01

    Changes in the media landscape have made simultaneous usage of the computer and television increasingly commonplace, but little research has explored how individuals navigate this media multitasking environment. Prior work suggests that self-insight may be limited in media consumption and multitasking environments, reinforcing a rising need for direct observational research. A laboratory experiment recorded both younger and older individuals as they used a computer and television concurrently, multitasking across television and Internet content. Results show that individuals are attending primarily to the computer during media multitasking. Although gazes last longer on the computer when compared to the television, the overall distribution of gazes is strongly skewed toward very short gazes only a few seconds in duration. People switched between media at an extreme rate, averaging more than 4 switches per min and 120 switches over the 27.5-minute study exposure. Participants had little insight into their switching activity and recalled their switching behavior at an average of only 12 percent of their actual switching rate revealed in the objective data. Younger individuals switched more often than older individuals, but other individual differences such as stated multitasking preference and polychronicity had little effect on switching patterns or gaze duration. This overall pattern of results highlights the importance of exploring new media environments, such as the current drive toward media multitasking, and reinforces that self-monitoring, post hoc surveying, and lay theory may offer only limited insight into how individuals interact with media.

  15. Media Multitasking Behavior: Concurrent Television and Computer Usage

    PubMed Central

    Gips, James

    2011-01-01

    Changes in the media landscape have made simultaneous usage of the computer and television increasingly commonplace, but little research has explored how individuals navigate this media multitasking environment. Prior work suggests that self-insight may be limited in media consumption and multitasking environments, reinforcing a rising need for direct observational research. A laboratory experiment recorded both younger and older individuals as they used a computer and television concurrently, multitasking across television and Internet content. Results show that individuals are attending primarily to the computer during media multitasking. Although gazes last longer on the computer when compared to the television, the overall distribution of gazes is strongly skewed toward very short gazes only a few seconds in duration. People switched between media at an extreme rate, averaging more than 4 switches per min and 120 switches over the 27.5-minute study exposure. Participants had little insight into their switching activity and recalled their switching behavior at an average of only 12 percent of their actual switching rate revealed in the objective data. Younger individuals switched more often than older individuals, but other individual differences such as stated multitasking preference and polychronicity had little effect on switching patterns or gaze duration. This overall pattern of results highlights the importance of exploring new media environments, such as the current drive toward media multitasking, and reinforces that self-monitoring, post hoc surveying, and lay theory may offer only limited insight into how individuals interact with media. PMID:21381969

  16. Signing and pavement marking for concurrent-flow high-occupancy-vehicle lanes : summary of current practice

    DOT National Transportation Integrated Search

    1997-01-01

    Concurrent-flow lanes account for more than half of existing high-occupancy-vehicle (HOV) mileage in the United States. Traffic on this type of HOV lane operates in the same direction as the adjacent traffic, typically in the far-left lane. Limited n...

  17. Benzodiazepine Use Among Low Back Pain Patients Concurrently Prescribed Opioids in the Military Health System

    DTIC Science & Technology

    2017-08-27

    … release. Distribution is unlimited. … chronic pain, there are high rates (18-38%) of concurrent opioid and benzodiazepine prescribing. These high-risk prescribing patterns have contributed to the …

  18. Multifidelity, multidisciplinary optimization of turbomachines with shock interaction

    NASA Astrophysics Data System (ADS)

    Joly, Michael Marie

    Research on high-speed air-breathing propulsion aims at developing aircraft with antipodal range and space access. Before reaching high speed at high altitude, the flight vehicle needs to accelerate from takeoff to scramjet takeover. Air turbo rocket engines combine turbojet and rocket engine cycles to provide the necessary thrust in the so-called low-speed regime. Challenges related to turbomachinery components are multidisciplinary, since both the high compression ratio compressor and the powering high-pressure turbine operate in the transonic regime in compact environments with strong shock interactions. Moreover, light weight is vital to avoid hindering scramjet operation. Recent progress in evolutionary computing provides aerospace engineers with robust and efficient optimization algorithms to address concurrent objectives. The present work investigates Multidisciplinary Design Optimization (MDO) of innovative transonic turbomachinery components. Inter-stage aerodynamic shock interactions in turbomachines are known to generate high-cycle fatigue on the rotor blades, compromising their structural integrity. A soft-computing strategy is proposed to mitigate the vane downstream distortion, and shown to successfully attenuate the unsteady forcing on the rotor of a high-pressure turbine. Counter-rotation offers promising prospects to reduce the weight of the machine, with fewer stages and increased load per row. An integrated approach based on increasing levels of fidelity and aero-structural coupling is then presented and leads to a highly loaded, compact counter-rotating compressor design.

  19. Fast prediction of RNA-RNA interaction using heuristic algorithm.

    PubMed

    Montaseri, Soheila

    2015-01-01

    Interaction between two RNA molecules plays a crucial role in many medical and biological processes such as gene expression regulation. In this process, an RNA molecule inhibits the translation of another RNA molecule by establishing stable interactions with it. Several algorithms have been developed to predict the structure of the RNA-RNA interaction. High computational time is a common challenge in most of the presented algorithms. In this context, a heuristic method is introduced to accurately predict the interaction between two RNAs based on minimum free energy (MFE). This algorithm uses a few dot matrices for finding the secondary structure of each RNA and binding sites between two RNAs. Furthermore, a parallel version of this method is presented. We describe the algorithm's concurrency and parallelism for a multicore chip. The proposed algorithm has been evaluated on several datasets including CopA-CopT, R1inv-R2inv, Tar-Tar*, DIS-DIS, and IncRNA54-RepZ in Escherichia coli bacteria. The method has high validity and efficiency and runs in less computational time than other approaches.
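
    One natural way to exploit a multicore chip in this setting is to score candidate binding sites independently and keep the minimum-free-energy candidate. The sketch below illustrates only that coarse pattern under an assumed, placeholder energy function; it is not the authors' algorithm or energy model.

```python
# Coarse multicore pattern: score candidate binding sites in parallel and keep
# the lowest-energy one. binding_energy is a hypothetical placeholder, not MFE.
from concurrent.futures import ProcessPoolExecutor

def binding_energy(site):
    i, j, length = site
    return -(length - 0.1 * abs(i - j))   # placeholder duplex "energy"

def best_interaction(candidate_sites):
    with ProcessPoolExecutor() as pool:
        energies = list(pool.map(binding_energy, candidate_sites))
    best = min(range(len(energies)), key=energies.__getitem__)
    return candidate_sites[best], energies[best]

if __name__ == "__main__":
    sites = [(i, j, 8) for i in range(0, 40, 4) for j in range(0, 40, 4)]
    print(best_interaction(sites))
```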

  20. Distributed Parallel Processing and Dynamic Load Balancing Techniques for Multidisciplinary High Speed Aircraft Design

    NASA Technical Reports Server (NTRS)

    Krasteva, Denitza T.

    1998-01-01

    Multidisciplinary design optimization (MDO) for large-scale engineering problems poses many challenges (e.g., the design of an efficient concurrent paradigm for global optimization based on disciplinary analyses, expensive computations over vast data sets, etc.) This work focuses on the application of distributed schemes for massively parallel architectures to MDO problems, as a tool for reducing computation time and solving larger problems. The specific problem considered here is configuration optimization of a high speed civil transport (HSCT), and the efficient parallelization of the embedded paradigm for reasonable design space identification. Two distributed dynamic load balancing techniques (random polling and global round robin with message combining) and two necessary termination detection schemes (global task count and token passing) were implemented and evaluated in terms of effectiveness and scalability to large problem sizes and a thousand processors. The effect of certain parameters on execution time was also inspected. Empirical results demonstrated stable performance and effectiveness for all schemes, and the parametric study showed that the selected algorithmic parameters have a negligible effect on performance.
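
    The random-polling idea used in the study can be conveyed with a small shared-memory analogue: an idle worker selects a peer at random and takes half of that peer's remaining work. The sketch below is only an illustration of that idea; the actual implementation was message-passing based and paired with global-task-count or token-passing termination detection.

```python
# Shared-memory analogue of random-polling dynamic load balancing (illustrative
# sketch only; the study's version was distributed, with explicit termination).
import random
from collections import deque

def run(num_workers=4, num_tasks=100):
    queues = [deque() for _ in range(num_workers)]
    queues[0].extend(range(num_tasks))        # deliberately unbalanced start
    completed = [0] * num_workers
    while any(queues):
        for w in range(num_workers):
            if queues[w]:
                queues[w].popleft()            # "process" one task
                completed[w] += 1
            else:
                victim = random.randrange(num_workers)   # random polling
                for _ in range(len(queues[victim]) // 2):
                    queues[w].append(queues[victim].pop())
    return completed

print(run())
```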

  1. A Comparison of Forward and Concurrent Chaining Strategies in Teaching Laundromat Skills to Students with Severe Handicaps.

    ERIC Educational Resources Information Center

    McDonnell, John; McFarland, Susan

    1988-01-01

    In a study which taught four high school students with severe handicaps to use a commercial washing machine and laundry soap dispenser, a concurrent chaining strategy was found more efficient than forward chaining in facilitating skill acquisition. Concurrent chaining also resulted in better maintenance at four- and eight-week follow-up…

  2. Modeling Zebrafish Developmental Toxicity using a Concurrent In vitro Assay Battery (SOT)

    EPA Science Inventory

    We describe the development of computational models that predict activity in a repeat-dose zebrafish embryo developmental toxicity assay using a combination of physico-chemical parameters and in vitro (human) assay measurements. The data set covered 986 chemicals including pestic...

  3. 49 CFR 7.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... component of DOT and includes the Under Secretary for Security, the Commandant of the Coast Guard, the Inspector General, and the Director of the Bureau of Transportation Statistics. Concurrence means that the... preserved. The term also includes any such documentary material stored by computer. Responsible DOT official...

  4. 49 CFR 7.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... component of DOT and includes the Under Secretary for Security, the Commandant of the Coast Guard, the Inspector General, and the Director of the Bureau of Transportation Statistics. Concurrence means that the... preserved. The term also includes any such documentary material stored by computer. Responsible DOT official...

  5. 49 CFR 7.2 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... component of DOT and includes the Under Secretary for Security, the Commandant of the Coast Guard, the Inspector General, and the Director of the Bureau of Transportation Statistics. Concurrence means that the... preserved. The term also includes any such documentary material stored by computer. Responsible DOT official...

  6. Concurrent computation of attribute filters on shared memory parallel machines.

    PubMed

    Wilkinson, Michael H F; Gao, Hui; Hesselink, Wim H; Jonker, Jan-Eppo; Meijster, Arnold

    2008-10-01

    Morphological attribute filters have not previously been parallelized, mainly because they are both global and non-separable. We propose a parallel algorithm that achieves efficient parallelism for a large class of attribute filters, including attribute openings, closings, thinnings and thickenings, based on Salembier's Max-Trees and Min-trees. The image or volume is first partitioned in multiple slices. We then compute the Max-trees of each slice using any sequential Max-Tree algorithm. Subsequently, the Max-trees of the slices can be merged to obtain the Max-tree of the image. A C-implementation yielded good speed-ups on both a 16-processor MIPS 14000 parallel machine, and a dual-core Opteron-based machine. It is shown that the speed-up of the parallel algorithm is a direct measure of the gain with respect to the sequential algorithm used. Furthermore, the concurrent algorithm shows a speed gain of up to 72 percent on a single-core processor, due to reduced cache thrashing.
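
    The overall structure of the scheme, partitioning into slices, building a tree per slice concurrently, and then merging, can be sketched as below. Both build_max_tree and merge_max_trees are placeholders; the real merging step of Wilkinson et al. is considerably more involved.

```python
# Structural sketch of the slice-then-merge scheme; the per-slice tree builder
# and the merge are hypothetical placeholders, not the published algorithm.
from concurrent.futures import ProcessPoolExecutor

def build_max_tree(image_slice):
    return {"pixels": [p for row in image_slice for p in row]}   # placeholder

def merge_max_trees(tree_a, tree_b):
    return {"pixels": tree_a["pixels"] + tree_b["pixels"]}       # placeholder

def parallel_max_tree(image_rows, num_slices=4):
    step = max(1, len(image_rows) // num_slices)
    slices = [image_rows[i:i + step] for i in range(0, len(image_rows), step)]
    with ProcessPoolExecutor() as pool:
        trees = list(pool.map(build_max_tree, slices))
    merged = trees[0]
    for tree in trees[1:]:
        merged = merge_max_trees(merged, tree)
    return merged

if __name__ == "__main__":
    print(len(parallel_max_tree([[0, 1, 2]] * 16)["pixels"]))
```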

  7. AEROELASTIC SIMULATION TOOL FOR INFLATABLE BALLUTE AEROCAPTURE

    NASA Technical Reports Server (NTRS)

    Liever, P. A.; Sheta, E. F.; Habchi, S. D.

    2006-01-01

    A multidisciplinary analysis tool is under development for predicting the impact of aeroelastic effects on the functionality of inflatable ballute aeroassist vehicles in both the continuum and rarefied flow regimes. High-fidelity modules for continuum and rarefied aerodynamics, structural dynamics, heat transfer, and computational grid deformation are coupled in an integrated multi-physics, multi-disciplinary computing environment. This flexible and extensible approach allows the integration of state-of-the-art, stand-alone NASA and industry leading continuum and rarefied flow solvers and structural analysis codes into a computing environment in which the modules can run concurrently with synchronized data transfer. Coupled fluid-structure continuum flow demonstrations were conducted on a clamped ballute configuration. The feasibility of implementing a DSMC flow solver in the simulation framework was demonstrated, and loosely coupled rarefied flow aeroelastic demonstrations were performed. A NASA and industry technology survey identified CFD, DSMC and structural analysis codes capable of modeling non-linear shape and material response of thin-film inflated aeroshells. The simulation technology will find direct and immediate applications with NASA and industry in ongoing aerocapture technology development programs.

  8. Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array

    NASA Astrophysics Data System (ADS)

    Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul

    2008-04-01

    This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
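
    The work-farm pattern mentioned above, one input stream feeding a parallel set of workers whose results are gathered on one output stream, is easy to sketch with threads and queues. The sketch below is purely illustrative; on the MPPA the workers are objects on separate RISC processors connected by self-synchronizing channels.

```python
# Illustrative work-farm: one input queue, several workers, one output queue.
# The per-item processing is a placeholder for a real streaming kernel.
import queue
import threading

def worker(inbox, outbox):
    while True:
        item = inbox.get()
        if item is None:              # poison pill terminates the worker
            break
        outbox.put(item * item)       # placeholder per-item computation

def work_farm(items, num_workers=4):
    inbox, outbox = queue.Queue(), queue.Queue()
    workers = [threading.Thread(target=worker, args=(inbox, outbox))
               for _ in range(num_workers)]
    for t in workers:
        t.start()
    for item in items:
        inbox.put(item)
    for _ in workers:
        inbox.put(None)
    for t in workers:
        t.join()
    return [outbox.get() for _ in items]

print(sorted(work_farm(range(10))))
```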

  9. A hardware-oriented concurrent TZ search algorithm for High-Efficiency Video Coding

    NASA Astrophysics Data System (ADS)

    Doan, Nghia; Kim, Tae Sung; Rhee, Chae Eun; Lee, Hyuk-Jae

    2017-12-01

    High-Efficiency Video Coding (HEVC) is the latest video coding standard, in which the compression performance is double that of its predecessor, the H.264/AVC standard, while the video quality remains unchanged. In HEVC, the test zone (TZ) search algorithm is widely used for integer motion estimation because it effectively searches the good-quality motion vector with a relatively small amount of computation. However, the complex computation structure of the TZ search algorithm makes it difficult to implement it in the hardware. This paper proposes a new integer motion estimation algorithm which is designed for hardware execution by modifying the conventional TZ search to allow parallel motion estimations of all prediction unit (PU) partitions. The algorithm consists of the three phases of zonal, raster, and refinement searches. At the beginning of each phase, the algorithm obtains the search points required by the original TZ search for all PU partitions in a coding unit (CU). Then, all redundant search points are removed prior to the estimation of the motion costs, and the best search points are then selected for all PUs. Compared to the conventional TZ search algorithm, experimental results show that the proposed algorithm significantly decreases the Bjøntegaard Delta bitrate (BD-BR) by 0.84%, and it also reduces the computational complexity by 54.54%.
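
    The point-pooling idea at the heart of the modification can be shown in a few lines: at each phase, the candidate points of all PU partitions are pooled, duplicates are removed so each point is evaluated once, and each PU then picks its best point. The cost function below is a placeholder; a real implementation would derive each PU's cost from shared partial SAD/SATD sums.

```python
# Sketch of per-phase search-point deduplication across PU partitions.
# The cost function is a hypothetical placeholder, not an actual SAD/SATD.
def search_phase(pu_candidate_points, cost):
    pooled = {p for points in pu_candidate_points.values() for p in points}
    point_cost = {p: cost(p) for p in pooled}      # each point evaluated once
    return {pu: min(points, key=point_cost.__getitem__)
            for pu, points in pu_candidate_points.items()}

# Toy usage: two PU partitions sharing several candidate motion vectors.
candidates = {"PU0": [(0, 0), (2, 0), (0, 2)], "PU1": [(0, 0), (2, 0), (2, 2)]}
print(search_phase(candidates, cost=lambda p: abs(p[0] - 1) + abs(p[1] - 1)))
```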

  10. Teachers and Students' Conceptions of Computer-Based Models in the Context of High School Chemistry: Elicitations at the Pre-intervention Stage

    NASA Astrophysics Data System (ADS)

    Waight, Noemi; Gillmeister, Kristina

    2014-04-01

    This study examined teachers' and students' initial conceptions of computer-based models—Flash and NetLogo models—and documented how teachers and students reconciled notions of multiple representations featuring macroscopic, submicroscopic and symbolic representations prior to actual intervention in eight high school chemistry classrooms. Individual in-depth interviews were conducted with 32 students and 6 teachers. Findings revealed an interplay of complex factors that functioned as opportunities and obstacles in the implementation of technologies in science classrooms. Students revealed preferences for the Flash models as opposed to the open-ended NetLogo models. Altogether, due to lack of content and modeling background knowledge, students experienced difficulties articulating coherent and blended understandings of multiple representations. Concurrently, while the aesthetic and interactive features of the models were of great value, they did not sustain students' initial curiosity and opportunities to improve understandings about chemistry phenomena. Most teachers recognized direct alignment of the Flash model with their existing curriculum; however, the benefits were relegated to existing procedural and passive classroom practices. The findings have implications for pedagogical approaches that address the implementation of computer-based models, function of models, models as multiple representations and the role of background knowledge and cognitive load, and the role of teacher vision and classroom practices.

  11. Accelerating large-scale protein structure alignments with graphics processing units

    PubMed Central

    2012-01-01

    Background Large-scale protein structure alignment, an indispensable tool to structural bioinformatics, poses a tremendous challenge on computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings We present ppsAlign, a parallel protein structure Alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign could take many concurrent methods, such as TM-align and Fr-TM-align, into the parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card, and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues from protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using massive parallel computing power of GPU. PMID:22357132

  12. The compatibility of concurrent high intensity interval training and resistance training for muscular strength and hypertrophy: a systematic review and meta-analysis.

    PubMed

    Sabag, Angelo; Najafi, Abdolrahman; Michael, Scott; Esgin, Tuguy; Halaki, Mark; Hackett, Daniel

    2018-04-16

    The purpose of this systematic review and meta-analysis is to assess the effect of concurrent high-intensity interval training (HIIT) and resistance training (RT) on strength and hypertrophy. Five electronic databases were searched using terms related to HIIT, RT, and concurrent training. Effect sizes (ES), calculated as standardised differences in the means, were used to examine the effect of concurrent HIIT and RT compared to RT alone on muscle strength and hypertrophy. Sub-analyses were performed to assess region-specific strength and hypertrophy, HIIT modality (cycling versus running), and inter-modal rest responses. Compared to RT alone, concurrent HIIT and RT led to similar changes in muscle hypertrophy and upper body strength. Concurrent HIIT and RT resulted in a lower increase in lower body strength compared to RT alone (ES = -0.248, p = 0.049). Sub-analyses showed a trend for lower body strength to be negatively affected by cycling HIIT (ES = -0.377, p = 0.074) and not running (ES = -0.176, p = 0.261). The data suggest that concurrent HIIT and RT does not negatively impact hypertrophy or upper body strength, and that any possible negative effect on lower body strength may be ameliorated by incorporating running-based HIIT and longer inter-modal rest periods.

  13. A tool for modeling concurrent real-time computation

    NASA Technical Reports Server (NTRS)

    Sharma, D. D.; Huang, Shie-Rei; Bhatt, Rahul; Sridharan, N. S.

    1990-01-01

    Real-time computation is a significant area of research in general, and in AI in particular. The complexity of practical real-time problems demands use of knowledge-based problem solving techniques while satisfying real-time performance constraints. Since the demands of a complex real-time problem cannot be predicted (owing to the dynamic nature of the environment) powerful dynamic resource control techniques are needed to monitor and control the performance. A real-time computation model for a real-time tool, an implementation of the QP-Net simulator on a Symbolics machine, and an implementation on a Butterfly multiprocessor machine are briefly described.

  14. Method for simultaneous overlapped communications between neighboring processors in a multiple

    DOEpatents

    Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

    1991-01-01

    A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.

  15. Association Between Hospital Case Volume and the Use of Bronchoscopy and Esophagoscopy During Head and Neck Cancer Diagnostic Evaluation

    PubMed Central

    Sun, Gordon H.; Aliu, Oluseyi; Moloci, Nicholas M.; Mondschein, Joshua K.; Burke, James F.; Hayward, Rodney A.

    2013-01-01

    Background There are no clinical guidelines on best practices for the use of bronchoscopy and esophagoscopy in diagnosing head and neck cancer. This retrospective cohort study examined variation in the use of bronchoscopy and esophagoscopy across hospitals in Michigan. Patients and Methods We identified 17,828 head and neck cancer patients in the 2006–2010 Michigan State Ambulatory Surgery Databases. We used hierarchical, mixed-effect logistic regression to examine whether a hospital’s risk-adjusted rate of concurrent bronchoscopy or esophagoscopy was associated with its case volume (<100, 100–999, or ≥1000 cases/hospital) for those undergoing diagnostic laryngoscopy. Results Of 9,218 patients undergoing diagnostic laryngoscopy, 1,191 (12.9%) received concurrent bronchoscopy and 1,675 (18.2%) underwent concurrent esophagoscopy. The median hospital rate of bronchoscopy was 2.7% (range 0–61.1%), and low-volume (OR 27.1 [95% CI 1.9, 390.7]) and medium-volume (OR 28.1 [95% CI 2.0, 399.0]) hospitals were more likely to perform concurrent bronchoscopy compared to high-volume hospitals. The median hospital rate of esophagoscopy was 5.1% (range 0–47.1%), and low-volume (OR 9.8 [95% CI 1.5, 63.7]) and medium-volume (OR 8.5 [95% CI 1.3, 55.0]) hospitals were significantly more likely to perform concurrent esophagoscopy relative to high-volume hospitals. Conclusions Head and neck cancer patients undergoing diagnostic laryngoscopy are much more likely to undergo concurrent bronchoscopy and esophagoscopy at low- and medium-volume hospitals than at high-volume hospitals. Whether this represents over-use of concurrent procedures or appropriate care that leads to earlier diagnosis and better outcomes merits further investigation. PMID:24114146

  16. Concurrent partnerships in Cape Town, South Africa: race and sex differences in prevalence and duration of overlap

    PubMed Central

    Beauclair, Roxanne; Hens, Niel; Delva, Wim

    2015-01-01

    Introduction Concurrent partnerships (CPs) have been suggested as a risk factor for transmitting HIV, but their impact on the epidemic depends upon how prevalent they are in populations, the average number of CPs an individual has and the length of time they overlap. However, estimates of prevalence of CPs in Southern Africa vary widely, and the duration of overlap in these relationships is poorly documented. We aim to characterize concurrency in a more accurate and complete manner, using data from three disadvantaged communities of Cape Town, South Africa. Methods We conducted a sexual behaviour survey (n=878) from June 2011 to February 2012 in Cape Town, using Audio Computer-Assisted Self-Interviewing to collect sexual relationship histories on partners in the past year. Using the beginning and end dates for the partnerships, we calculated the point prevalence, the cumulative prevalence and the incidence rate of CPs, as well as the duration of overlap for relationships begun in the previous year. Linear and binomial regression models were used to quantify race (black vs. coloured) and sex differences in the duration of overlap and relative risk of having CPs in the past year. Results The overall point prevalence of CPs six months before the survey was 8.4%: 13.4% for black men, 1.9% for coloured men, 7.8% black women and 5.6% for coloured women. The median duration of overlap in CPs was 7.5 weeks. Women had less risk of CPs in the previous year than men (RR 0.43; 95% CI: 0.32–0.57) and black participants were more at risk than coloured participants (RR 1.86; 95% CI: 1.17–2.97). Conclusions Our results indicate that in this population the prevalence of CPs is relatively high and is characterized by overlaps of long duration, implying there may be opportunities for HIV to be transmitted to concurrent partners. PMID:25697328

  17. Slat Noise Simulations: Status and Challenges

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan M.; Lockard, David P.; Khorrami, Mehdi R.; Mineck, Raymond E.

    2011-01-01

    Noise radiation from the leading edge slat of a high-lift system is known to be an important component of aircraft noise during approach. NASA's Langley Research Center is engaged in a coordinated series of investigations combining high-fidelity numerical simulations and detailed wind tunnel measurements of a generic, unswept, 3-element, high-lift configuration. The goal of this effort is to provide a validated predictive capability that would enable identification of the dominant noise source mechanisms and, ultimately, help develop physics inspired concepts for reducing the far-field acoustic intensity. This paper provides a brief overview of the current status of the computational effort and describes new findings pertaining to the effects of the angle of attack on the aeroacoustics of the slat cove region. Finally, the interplay of the simulation campaign with the concurrently evolving development of a benchmark dataset for an international workshop on airframe noise is outlined.

  18. Parallel Application Performance on Two Generations of Intel Xeon HPC Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Christopher H.; Long, Hai; Sides, Scott

    2015-10-15

    Two next-generation node configurations hosting the Haswell microarchitecture were tested with a suite of microbenchmarks and application examples, and compared with a current Ivy Bridge production node on NREL's Peregrine high-performance computing cluster. A primary conclusion from this study is that the additional cores are of little value to individual task performance: limitations to application parallelism, or resource contention among concurrently running but independent tasks, limit effective utilization of these added cores. Hyperthreading generally impacts throughput negatively, but can improve performance in the absence of detailed attention to runtime workflow configuration. The observations offer some guidance to procurement of future HPC systems at NREL. First, raw core count must be balanced with available resources, particularly memory bandwidth. Balance-of-system will determine value more than processor capability alone. Second, hyperthreading continues to be largely irrelevant to the workloads that are commonly seen, and were tested here, at NREL. Finally, perhaps the most impactful enhancement to productivity might occur through enabling multiple concurrent jobs per node. Given the right type and size of workload, more may be achieved by doing many slow things at once than fast things in order.

  19. The usefulness of videomanometry for studying pediatric esophageal motor disease.

    PubMed

    Kawahara, Hisayoshi; Kubota, Akio; Okuyama, Hiroomi; Oue, Takaharu; Tazuke, Yuko; Okada, Akira

    2004-12-01

    Abnormalities in esophageal motor function underlie various symptoms in the pediatric population. Manometry remains an important tool for studying esophageal motor function, whereas its analyses have been conducted with considerable subjective interpretation. The usefulness of videomanometry with topographic analysis was examined in the current study. Videomanometry was conducted in 5 patients with primary gastroesophageal reflux disease (GERD), 4 with postoperative esophageal atresia (EA), 1 with congenital esophageal stenosis (CES), and 1 with diffuse esophageal spasms (DES). Digitized videofluoroscopic images were recorded synchronously with manometric digital data in a personal computer. Manometric analysis was conducted with a view of concurrent esophageal contour and bolus transit. Primary GERD patients showed esophageal flow proceeding into the stomach during peristaltic contractions recorded manometrically, whereas patients with EA/CES frequently showed impaired esophageal transit during defective esophageal peristaltic contractions. A characteristic corkscrew appearance and esophageal flow in a to-and-fro fashion were seen with high-amplitude synchronous esophageal contractions in a DES patient. The topographic analysis showed distinctive images characteristic of each pathological condition. Videomanometry is helpful in interpreting manometric data by analyzing concurrent fluoroscopic images. Topographic analyses provide characteristic images reflecting motor abnormalities in pediatric esophageal disease.

  20. Advanced technologies for scalable ATLAS conditions database access on the grid

    NASA Astrophysics Data System (ADS)

    Basset, R.; Canali, L.; Dimitrov, G.; Girone, M.; Hawkings, R.; Nevski, P.; Valassi, A.; Vaniachine, A.; Viegas, F.; Walker, R.; Wong, A.

    2010-04-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic work-flow, ATLAS database scalability tests provided feedback for Conditions Db software optimization and allowed precise determination of required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing characterized by peak loads, which can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent jobs rates. This has been achieved through coordinated database stress tests performed in series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that the server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions Db data access is limited by the disk I/O throughput. An unacceptable side-effect of the disk I/O saturation is a degradation of the WLCG 3D Services that update Conditions Db data at all ten ATLAS Tier-1 sites using the technology of Oracle Streams. To avoid such bottlenecks we prototyped and tested a novel approach for database peak load avoidance in Grid computing. Our approach is based upon the proven idea of pilot job submission on the Grid: instead of the actual query, an ATLAS utility library sends to the database server a pilot query first.
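
    The pilot-query idea can be sketched as a simple client-side protocol: a lightweight pilot request probes the server, and the expensive conditions query is issued only once the server is not overloaded, with exponential back-off in between. All names below are hypothetical placeholders used for illustration only.

```python
# Sketch of client-side peak-load avoidance with a pilot query and back-off.
# send_pilot_query and run_conditions_query are hypothetical placeholders.
import random
import time

def send_pilot_query():
    return random.choice(["ok", "busy", "busy"])   # stand-in for a cheap probe

def run_conditions_query():
    return "conditions payload"                    # stand-in for the real query

def query_with_peak_avoidance(max_wait=60.0):
    delay = 1.0
    while send_pilot_query() == "busy":
        time.sleep(min(delay, max_wait))           # back off while server is loaded
        delay = min(delay * 2.0, max_wait)
    return run_conditions_query()

print(query_with_peak_avoidance(max_wait=4.0))
```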

  1. Longitudinal in vivo evaluation of bone regeneration by combined measurement of multi-pinhole SPECT and micro-CT for tissue engineering

    NASA Astrophysics Data System (ADS)

    Lienemann, Philipp S.; Metzger, Stéphanie; Kiveliö, Anna-Sofia; Blanc, Alain; Papageorgiou, Panagiota; Astolfo, Alberto; Pinzer, Bernd R.; Cinelli, Paolo; Weber, Franz E.; Schibli, Roger; Béhé, Martin; Ehrbar, Martin

    2015-05-01

    Over the last decades, great strides have been made in the development of novel implants for the treatment of bone defects. The increasing versatility and complexity of these implant designs call for concurrent advances in means to assess, in vivo, the course of induced bone formation in preclinical models. Since its discovery, micro-computed tomography (micro-CT) has excelled as a powerful high-resolution technique for non-invasive assessment of newly formed bone tissue. However, micro-CT fails to provide spatiotemporal information on biological processes ongoing during bone regeneration. Conversely, due to its versatile applicability and cost-effectiveness, single photon emission computed tomography (SPECT) would be an ideal technique for assessing such biological processes with high sensitivity and, for nuclear imaging, comparably high resolution (<1 mm). Herein, we employ modularly designed poly(ethylene glycol)-based hydrogels that release bone morphogenetic protein to guide the healing of critical-sized calvarial bone defects. By combined in vivo longitudinal multi-pinhole SPECT and micro-CT evaluations we determine the spatiotemporal course of bone formation and remodeling within this synthetic hydrogel implant. End-point evaluations by high-resolution micro-CT and histological evaluation confirm the value of this approach to follow and optimize bone-inducing biomaterials.

  2. The Effect of Two Different Concurrent Training Programs on Strength and Power Gains in Highly-Trained Individuals

    PubMed Central

    Petré, Henrik; Löfving, Pontus; Psilander, Niklas

    2018-01-01

    The effects of concurrent strength and endurance training have been well studied in untrained and moderately-trained individuals. However, studies examining these effects in individuals with a long history of resistance training (RT) are lacking. Additionally, few studies have examined how strength and power are affected when different types of endurance training are added to an RT protocol. The purpose of the present study was to compare the effects of concurrent training incorporating either low-volume, high-intensity interval training (HIIT, 8-24 Tabata intervals at ~150% of VO2max) or high-volume, medium-intensity continuous endurance training (CT, 40-80 min at 70% of VO2max), on the strength and power of highly-trained individuals. Sixteen highly-trained ice-hockey and rugby players were divided into two groups that underwent either CT (n = 8) or HIIT (n = 8) in parallel with RT (2-6 sets of heavy parallel squats, > 80% of 1RM) during a 6-week period (3 sessions/wk). Parallel squat performance improved after both RT + CT and RT + HIIT (12 ± 8% and 14 ± 10% respectively, p < 0.01), with no difference between the groups. However, aerobic power (VO2max) only improved after RT + HIIT (4 ± 3%, p < 0.01). We conclude that strength gains can be obtained after both RT + CT and RT + HIIT in athletes with a prior history of RT. This indicates that the volume and/or intensity of the endurance training does not influence the magnitude of strength improvements during short periods of concurrent training, at least for highly-trained individuals when the endurance training is performed after RT. However, since VO2max improved only after RT + HIIT and this is a time-efficient protocol, we recommend this type of concurrent endurance training. Key points: lower-body maximal strength is improved after concurrent strength and endurance training in highly trained individuals; the magnitude of this strength improvement is not influenced by the type of endurance training (HIIT or CT); HIIT improves VO2max and is more time-efficient than CT; HIIT is recommended to athletes when concurrently training for strength and endurance. PMID:29769816

  3. Weather Research and Forecasting Model with Vertical Nesting Capability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-08-01

    The Weather Research and Forecasting (WRF) model with vertical nesting capability is an extension of the WRF model, which is available in the public domain, from www.wrf-model.org. The new code modifies the nesting procedure, which passes lateral boundary conditions between computational domains in the WRF model. Previously, the same vertical grid was required on all domains, while the new code allows different vertical grids to be used on concurrently run domains. This new functionality improves WRF's ability to produce high-resolution simulations of the atmosphere by allowing a wider range of scales to be efficiently resolved and more accurate lateral boundary conditions to be provided through the nesting procedure.

  4. Software Development Technologies for Reactive, Real-Time, and Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Manna, Zohar

    1996-01-01

    The research is directed towards the design and implementation of a comprehensive deductive environment for the development of high-assurance systems, especially reactive (concurrent, real-time, and hybrid) systems. Reactive systems maintain an ongoing interaction with their environment, and are among the most difficult to design and verify. The project aims to provide engineers with a wide variety of tools within a single, general, formal framework in which the tools will be most effective. The entire development process is considered, including the construction, transformation, validation, verification, debugging, and maintenance of computer systems. The goal is to automate the process as much as possible and reduce the errors that pervade hardware and software development.

  5. Social stress reactivity alters reward and punishment learning

    PubMed Central

    Frank, Michael J.; Allen, John J. B.

    2011-01-01

    To examine how stress affects cognitive functioning, individual differences in trait vulnerability (punishment sensitivity) and state reactivity (negative affect) to social evaluative threat were examined during concurrent reinforcement learning. Lower trait-level punishment sensitivity predicted better reward learning and poorer punishment learning; the opposite pattern was found in more punishment sensitive individuals. Increasing state-level negative affect was directly related to punishment learning accuracy in highly punishment sensitive individuals, but these measures were inversely related in less sensitive individuals. Combined electrophysiological measurement, performance accuracy and computational estimations of learning parameters suggest that trait and state vulnerability to stress alter cortico-striatal functioning during reinforcement learning, possibly mediated via medio-frontal cortical systems. PMID:20453038

  6. Social stress reactivity alters reward and punishment learning.

    PubMed

    Cavanagh, James F; Frank, Michael J; Allen, John J B

    2011-06-01

    To examine how stress affects cognitive functioning, individual differences in trait vulnerability (punishment sensitivity) and state reactivity (negative affect) to social evaluative threat were examined during concurrent reinforcement learning. Lower trait-level punishment sensitivity predicted better reward learning and poorer punishment learning; the opposite pattern was found in more punishment sensitive individuals. Increasing state-level negative affect was directly related to punishment learning accuracy in highly punishment sensitive individuals, but these measures were inversely related in less sensitive individuals. Combined electrophysiological measurement, performance accuracy and computational estimations of learning parameters suggest that trait and state vulnerability to stress alter cortico-striatal functioning during reinforcement learning, possibly mediated via medio-frontal cortical systems.

  7. A comparative study of serial and parallel aeroelastic computations of wings

    NASA Technical Reports Server (NTRS)

    Byun, Chansup; Guruswamy, Guru P.

    1994-01-01

    A procedure for computing the aeroelasticity of wings on parallel multiple-instruction, multiple-data (MIMD) computers is presented. In this procedure, fluids are modeled using Euler equations, and structures are modeled using modal or finite element equations. The procedure is designed in such a way that each discipline can be developed and maintained independently by using a domain decomposition approach. In the present parallel procedure, each computational domain is scalable. A parallel integration scheme is used to compute aeroelastic responses by solving fluid and structural equations concurrently. The computational efficiency issues of parallel integration of both fluid and structural equations are investigated in detail. This approach, which reduces the total computational time by a factor of almost 2, is demonstrated for a typical aeroelastic wing by using various numbers of processors on the Intel iPSC/860.
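
    The partitioned, concurrent time integration described above can be illustrated with a toy coupled system: each discipline advances one step on its own worker, and interface data are exchanged at step boundaries. This is a minimal sketch assuming two invented single-degree-of-freedom surrogates (advance_fluid, advance_structure), not the Euler/finite-element solvers of the paper.

    ```python
    # Sketch: two "disciplines" advance one time step concurrently, then exchange
    # interface data, mimicking a partitioned fluid/structure integration scheme.
    from concurrent.futures import ThreadPoolExecutor

    def advance_fluid(state, structural_deflection, dt):
        # Hypothetical stand-in for an Euler-equation solver step.
        return state + dt * (-0.5 * state + 0.1 * structural_deflection)

    def advance_structure(state, aero_load, dt):
        # Hypothetical stand-in for a modal/finite-element structural step.
        return state + dt * (-0.2 * state + 0.3 * aero_load)

    def coupled_step(fluid, structure, dt, pool):
        # Each domain is advanced on its own worker, as on separate MIMD nodes.
        f_future = pool.submit(advance_fluid, fluid, structure, dt)
        s_future = pool.submit(advance_structure, structure, fluid, dt)
        return f_future.result(), s_future.result()

    if __name__ == "__main__":
        fluid, structure = 1.0, 0.0
        with ThreadPoolExecutor(max_workers=2) as pool:
            for _ in range(100):
                fluid, structure = coupled_step(fluid, structure, 0.01, pool)
        print(fluid, structure)
    ```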

  8. Barista: A Framework for Concurrent Speech Processing by USC-SAIL

    PubMed Central

    Can, Doğan; Gibson, James; Vaz, Colin; Georgiou, Panayiotis G.; Narayanan, Shrikanth S.

    2016-01-01

    We present Barista, an open-source framework for concurrent speech processing based on the Kaldi speech recognition toolkit and the libcppa actor library. With Barista, we aim to provide an easy-to-use, extensible framework for constructing highly customizable concurrent (and/or distributed) networks for a variety of speech processing tasks. Each Barista network specifies a flow of data between simple actors, concurrent entities communicating by message passing, modeled after Kaldi tools. Leveraging the fast and reliable concurrency and distribution mechanisms provided by libcppa, Barista allows demanding speech processing tasks, such as real-time speech recognizers and complex training workflows, to be scheduled and executed on parallel (and/or distributed) hardware. Barista is released under the Apache License v2.0. PMID:27610047
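
    As a rough illustration of the actor idea (not Barista's libcppa/Kaldi implementation), the sketch below wires two Python threads into a pipeline in which each "actor" reads messages from its mailbox, does its work, and forwards results downstream; the actor names and work functions are invented.

    ```python
    # Minimal actor-style pipeline: threads + queues stand in for message-passing
    # actors. A sentinel message shuts the pipeline down in order.
    import queue, threading

    SENTINEL = None

    def actor(inbox, outbox, work):
        # Each actor loops over its mailbox and forwards results downstream.
        while True:
            msg = inbox.get()
            if msg is SENTINEL:
                if outbox is not None:
                    outbox.put(SENTINEL)
                break
            if outbox is not None:
                outbox.put(work(msg))

    def frame_source(text):          # pretend "audio frames"
        return text.lower()

    def recognizer(frame):           # pretend "decoder"
        return frame.split()

    if __name__ == "__main__":
        q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
        threads = [
            threading.Thread(target=actor, args=(q1, q2, frame_source)),
            threading.Thread(target=actor, args=(q2, q3, recognizer)),
        ]
        for t in threads:
            t.start()
        q1.put("Hello Concurrent World")
        q1.put(SENTINEL)
        for t in threads:
            t.join()
        while not q3.empty():
            item = q3.get()
            if item is not SENTINEL:
                print(item)
    ```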

  9. Barista: A Framework for Concurrent Speech Processing by USC-SAIL.

    PubMed

    Can, Doğan; Gibson, James; Vaz, Colin; Georgiou, Panayiotis G; Narayanan, Shrikanth S

    2014-05-01

    We present Barista, an open-source framework for concurrent speech processing based on the Kaldi speech recognition toolkit and the libcppa actor library. With Barista, we aim to provide an easy-to-use, extensible framework for constructing highly customizable concurrent (and/or distributed) networks for a variety of speech processing tasks. Each Barista network specifies a flow of data between simple actors, concurrent entities communicating by message passing, modeled after Kaldi tools. Leveraging the fast and reliable concurrency and distribution mechanisms provided by libcppa, Barista allows demanding speech processing tasks, such as real-time speech recognizers and complex training workflows, to be scheduled and executed on parallel (and/or distributed) hardware. Barista is released under the Apache License v2.0.

  10. Substantial increase in concurrent droughts and heatwaves in the United States

    PubMed Central

    Mazdiyasni, Omid; AghaKouchak, Amir

    2015-01-01

    A combination of climate events (e.g., low precipitation and high temperatures) may cause a significant impact on the ecosystem and society, although individual events involved may not be severe extremes themselves. Analyzing historical changes in concurrent climate extremes is critical to preparing for and mitigating the negative effects of climatic change and variability. This study focuses on the changes in concurrences of heatwaves and meteorological droughts from 1960 to 2010. Despite an apparent hiatus in rising temperature and no significant trend in droughts, we show a substantial increase in concurrent droughts and heatwaves across most parts of the United States, and a statistically significant shift in the distribution of concurrent extremes. Although commonly used trend analysis methods do not show any trend in concurrent droughts and heatwaves, a unique statistical approach discussed in this study exhibits a statistically significant change in the distribution of the data. PMID:26324927

  11. Substantial increase in concurrent droughts and heatwaves in the United States.

    PubMed

    Mazdiyasni, Omid; AghaKouchak, Amir

    2015-09-15

    A combination of climate events (e.g., low precipitation and high temperatures) may cause a significant impact on the ecosystem and society, although individual events involved may not be severe extremes themselves. Analyzing historical changes in concurrent climate extremes is critical to preparing for and mitigating the negative effects of climatic change and variability. This study focuses on the changes in concurrences of heatwaves and meteorological droughts from 1960 to 2010. Despite an apparent hiatus in rising temperature and no significant trend in droughts, we show a substantial increase in concurrent droughts and heatwaves across most parts of the United States, and a statistically significant shift in the distribution of concurrent extremes. Although commonly used trend analysis methods do not show any trend in concurrent droughts and heatwaves, a unique statistical approach discussed in this study exhibits a statistically significant change in the distribution of the data.

  12. Self-checking self-repairing computer nodes using the mirror processor

    NASA Technical Reports Server (NTRS)

    Tamir, Yuval

    1992-01-01

    Circuitry added to fault-tolerant systems for concurrent error detection usually reduces performance. Using a technique called micro rollback, it is possible to eliminate most of the performance penalty of concurrent error detection. Error detection is performed in parallel with intermodule communication, and erroneous state changes are later undone. The author reports on the design and implementation of a VLSI RISC microprocessor, called the Mirror Processor (MP), which is capable of micro rollback. In order to achieve concurrent error detection, two MP chips operate in lockstep, comparing external signals and a signature of internal signals every clock cycle. If a mismatch is detected, both processors roll back to the beginning of the cycle when the error occurred. In some cases the erroneous state is corrected by copying a value from the fault-free processor to the faulty processor. The architecture, microarchitecture, and VLSI implementation of the MP, emphasizing its error-detection, error-recovery, and self-diagnosis capabilities, are described.
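
    The micro-rollback idea can be mimicked in a few lines of Python: two redundant copies of a computation run in lockstep, a comparator checks their results each cycle, and a mismatch causes both to roll back one cycle and re-execute. This is only a toy model of the mechanism; the injected fault, the cycle function, and the one-cycle rollback depth are invented and do not reflect the MP microarchitecture.

    ```python
    # Toy lockstep execution with micro rollback: compare per-cycle results of two
    # redundant units; on mismatch, restore the checkpoint and re-execute the cycle.
    def cycle(state, inject_fault=False):
        result = state + 1
        if inject_fault:
            result ^= 0x4          # simulate a single-bit upset
        return result

    def run(cycles=10, fault_at=4):
        a = b = 0
        for t in range(cycles):
            checkpoint_a, checkpoint_b = a, b          # state saved for rollback
            a = cycle(checkpoint_a, inject_fault=(t == fault_at))
            b = cycle(checkpoint_b)
            if a != b:                                 # comparator mismatch
                # Roll back to the start of the faulty cycle and re-execute.
                a, b = checkpoint_a, checkpoint_b
                a = cycle(a)
                b = cycle(b)
            assert a == b
        return a

    print(run())   # 10, despite the transient fault at cycle 4
    ```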

  13. Modelling and simulating decision processes of linked lives: An approach based on concurrent processes and stochastic race.

    PubMed

    Warnke, Tom; Reinhardt, Oliver; Klabunde, Anna; Willekens, Frans; Uhrmacher, Adelinde M

    2017-10-01

    Individuals' decision processes play a central role in understanding modern migration phenomena and other demographic processes. Their integration into agent-based computational demography depends largely on suitable support by a modelling language. We are developing the Modelling Language for Linked Lives (ML3) to describe the diverse decision processes of linked lives succinctly in continuous time. The context of individuals is modelled by networks the individual is part of, such as family ties and other social networks. Central concepts, such as behaviour conditional on agent attributes, age-dependent behaviour, and stochastic waiting times, are tightly integrated in the language. Thereby, alternative decisions are modelled by concurrent processes that compete by stochastic race. Using a migration model, we demonstrate how this allows for compact description of complex decisions, here based on the Theory of Planned Behaviour. We describe the challenges for the simulation algorithm posed by stochastic race between multiple concurrent complex decisions.
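
    The "stochastic race" between concurrent decision processes can be sketched as follows: each alternative draws a waiting time from its own rate, and the alternative with the smallest draw fires. The rates and decision names below are invented; ML3 itself is a modelling language rather than Python code.

    ```python
    # Sketch of a stochastic race between alternative decision processes: each
    # candidate transition samples an exponential waiting time; the minimum wins.
    import random

    def stochastic_race(alternatives):
        # alternatives: mapping from decision name to hazard rate (per year)
        draws = {name: random.expovariate(rate) for name, rate in alternatives.items()}
        winner = min(draws, key=draws.get)
        return winner, draws[winner]

    random.seed(1)
    agent_options = {"stay": 0.8, "migrate": 0.1, "marry": 0.05}
    decision, waiting_time = stochastic_race(agent_options)
    print(f"agent chooses '{decision}' after {waiting_time:.2f} years")
    ```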

  14. The Influence of Recurrent Modes of Climate Variability on the Occurrence of Monthly Temperature Extremes Over South America

    NASA Astrophysics Data System (ADS)

    Loikith, Paul C.; Detzer, Judah; Mechoso, Carlos R.; Lee, Huikyo; Barkhordarian, Armineh

    2017-10-01

    The associations between extreme temperature months and four prominent modes of recurrent climate variability are examined over South America. Associations are computed as the percent of extreme temperature months concurrent with the upper and lower quartiles of the El Niño-Southern Oscillation (ENSO), the Atlantic Niño, the Pacific Decadal Oscillation (PDO), and the Southern Annular Mode (SAM) index distributions, stratified by season. The relationship is strongest for ENSO, with nearly every extreme temperature month concurrent with the upper or lower quartiles of its distribution in portions of northwestern South America during some seasons. The likelihood of extreme warm temperatures is enhanced over parts of northern South America when the Atlantic Niño index is in the upper quartile, while cold extremes are often associated with the lowest quartile. Concurrent precipitation anomalies may contribute to these relations. The PDO shows weak associations during December, January, and February, while in June, July, and August its relationship with extreme warm temperatures closely matches that of ENSO. This may be due to the positive relationship between the PDO and ENSO, rather than the PDO acting as an independent physical mechanism. Over Patagonia, the SAM is highly influential during spring and fall, with warm and cold extremes being associated with positive and negative phases of the SAM, respectively. Composites of sea level pressure anomalies for extreme temperature months over Patagonia suggest an important role of local synoptic scale weather variability in addition to a favorable SAM for the occurrence of these extremes.

  15. Effects of serial and concurrent training on receptive identification tasks: A Systematic replication.

    PubMed

    Wunderlich, Kara L; Vollmer, Timothy R

    2017-07-01

    The current study compared the use of serial and concurrent methods to train multiple exemplars when teaching receptive language skills, providing a systematic replication of Wunderlich, Vollmer, Donaldson, and Phillips (2014). Five preschoolers diagnosed with developmental delays or autism spectrum disorders were taught to receptively identify letters or letter sounds. Subjects learned the target stimuli slightly faster in concurrent training and a high degree of generalization was obtained following both methods of training, indicating that both the serial and concurrent methods of training are efficient and effective instructional procedures. © 2017 Society for the Experimental Analysis of Behavior.

  16. Computational estimation of errors generated by lumping of physiologically-based pharmacokinetic (PBPK) interaction models of inhaled complex chemical mixtures

    EPA Science Inventory

    Many cases of environmental contamination result in concurrent or sequential exposure to more than one chemical. However, limitations of available resources make it unlikely that experimental toxicology will provide health risk information about all the possible mixtures to which...

  17. Exploring Media Literacy and Computational Thinking: A Game Maker Curriculum Study

    ERIC Educational Resources Information Center

    Jenson, Jennifer; Droumeva, Milena

    2016-01-01

    While advances in game-based learning are already transforming educative practices globally, with tech giants like Microsoft, Apple and Google taking notice and investing in educational game initiatives, there is a concurrent and critically important development that focuses on "game construction" pedagogy as a vehicle for enhancing…

  18. Post-Positivist Research: Two Examples of Methodological Pluralism.

    ERIC Educational Resources Information Center

    Wildemuth, Barbara M.

    1993-01-01

    Discussion of positivist and interpretive approaches to research and postpositivism focuses on two studies that apply interpretive research in different ways: an exploratory study of user-developed computing applications conducted prior to a positivist study and a study of end-user searching behaviors conducted concurrently with a positivist…

  19. CHEMICAL AND PHYSICAL CHARACTERISTICS OF OUTDOOR, INDOOR, AND PERSONAL PARTICULATE AIR SAMPLES COLLECTED IN AND AROUND A RETIREMENT FACILITY

    EPA Science Inventory

    Residential, personal, indoor, and outdoor sampling of particulate matter was conducted at a retirement center in the Towson area of northern Baltimore County in 1998. Concurrent sampling was conducted at a central community site. Computer-controlled scanning electron microsco...

  20. CHEMICAL AND PHYSICAL CHARACTERIZATION OF INDOOR, OUTDOOR, AND PERSONAL SAMPLES COLLECTED IN AND AROUND A RETIREMENT FACILITY

    EPA Science Inventory

    Residential, personal, indoor, and outdoor sampling of particulate matter was conducted at a retirement center in the Towson area of northern Baltimore County in 1998. Concurrent sampling was conducted at a central community site. Computer-controlled scanning electron microsco...

  1. Superconcurrency: A Form of Distributed Heterogeneous Supercomputing

    DTIC Science & Technology

    1991-05-01

    Nathaniel J. Davis IV, An Overview of the PASM Parallel Processing System, in Computer Architecture, edited by D. D. Gajski, V. M. Milutinovic, H...

  2. Balancing Materiel Readiness Risks and Concurrency in Weapon System Acquisition: A Handbook for Program Managers

    DTIC Science & Technology

    1984-07-15

    DCP outline; request for program decision; draft DCP; AFSC review recommendations. Exhibit 4-6b. Embedded Computer Hardware vs. Software. Exhibit 4-6c. DoD Embedded Computer Market. ... the mix of stores carried by that vehicle; 6. Anticipated combat tactics employed by the carrying or launching vehicle and its maneuvering

  3. Estimating Noise Levels In An Enclosed Space

    NASA Technical Reports Server (NTRS)

    Azzi, Elias

    1995-01-01

    GEGS Acoustic Analysis Program (GAAP) developed to compute composite profile of noise in Spacelab module on basis of data on noise produced by equipment, data on locations of equipment, and equipment-operating schedules. Impetus for development of GAAP provided by noise generated in Spacelab Module during SLS-1 mission because of concurrent operation of many pieces of experimental and subsystem equipment. Although originally intended specifically to help compute noise in Spacelab, also applicable to any region with multiple sources of noise. Written in FORTRAN 77.

  4. Handheld computing in pathology

    PubMed Central

    Park, Seung; Parwani, Anil; Satyanarayanan, Mahadev; Pantanowitz, Liron

    2012-01-01

    Handheld computing has had many applications in medicine, but relatively few in pathology. Most reported uses of handhelds in pathology have been limited to experimental endeavors in telemedicine or education. With recent advances in handheld hardware and software, along with concurrent advances in whole-slide imaging (WSI), new opportunities and challenges have presented themselves. This review addresses the current state of handheld hardware and software, provides a history of handheld devices in medicine focusing on pathology, and presents future use cases for such handhelds in pathology. PMID:22616027

  5. Interprocedural Analysis and the Verification of Concurrent Programs

    DTIC Science & Technology

    2009-01-01

    The single-source path expression (SSPE) problem is to compute a regular expression that represents paths(s, v) for all vertices v in the graph. The syntax of regular expressions is as follows: r ::= ∅ | ε | e | r1 ∪ r2 | r1.r2 | r*, where e stands for an edge in G. We can use any algorithm for SSPE to compute regular expressions for ... a closed representation of loops provides an exponential speedup. Tarjan's path-expression algorithm solves the SSPE problem efficiently. It uses ...
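
    For illustration, the sketch below builds path expressions for a small graph using the classic Kleene/Warshall-style O(n^3) construction rather than Tarjan's faster path-expression algorithm; the graph and edge labels are invented.

    ```python
    # Kleene-style construction of regular expressions describing all paths
    # between vertices, using the ∪ / . / * syntax quoted in the abstract.
    def path_expressions(n, edges):
        # edges: dict mapping (i, j) -> edge label
        EMPTY, EPS = "∅", "ε"
        R = [[edges.get((i, j), EMPTY) for j in range(n)] for i in range(n)]
        for i in range(n):
            R[i][i] = EPS if R[i][i] == EMPTY else f"({R[i][i]}∪ε)"
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if EMPTY in (R[i][k], R[k][j]):
                        continue
                    via_k = f"{R[i][k]}.({R[k][k]})*.{R[k][j]}"
                    R[i][j] = via_k if R[i][j] == EMPTY else f"({R[i][j]})∪({via_k})"
        return R

    edges = {(0, 1): "a", (1, 2): "b", (2, 1): "c"}
    R = path_expressions(3, edges)
    print(R[0][2])   # regular expression describing all paths from vertex 0 to 2
    ```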

  6. Transcatheter Arterial Embolization of Concurrent Spontaneous Hematomas of the Rectus Sheath and Psoas Muscle in Patients Undergoing Anticoagulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basile, Antonio; Medina, Jose Garcia; Mundo, Elena

    We report a case of concurrent rectus sheath and psoas hematomas in a patient undergoing anticoagulant therapy, treated by transcatheter arterial embolization (TAE) of inferior epigastric and lumbar arteries. Computed tomography (CT) demonstrated signs of active bleeding in two hematomas of the anterior and posterior abdominal walls. Transfemoral arteriogram confirmed the extravasation of contrast from the right inferior epigastric artery (RIEA). Indirect signs of bleeding were also found in a right lumbar artery (RLA). We successfully performed TAE of the feeding arteries. There have been few reports in the literature of such spontaneous hemorrhages in patients undergoing anticoagulation, successfully treated by TAE.

  7. Coarse-grained component concurrency in Earth system modeling: parallelizing atmospheric radiative transfer in the GFDL AM3 model using the Flexible Modeling System coupling framework

    NASA Astrophysics Data System (ADS)

    Balaji, V.; Benson, Rusty; Wyman, Bruce; Held, Isaac

    2016-10-01

    Climate models represent a large variety of processes on a variety of timescales and space scales, a canonical example of multi-physics multi-scale modeling. Current hardware trends, such as Graphical Processing Units (GPUs) and Many Integrated Core (MIC) chips, are based on, at best, marginal increases in clock speed, coupled with vast increases in concurrency, particularly at the fine grain. Multi-physics codes face particular challenges in achieving fine-grained concurrency, as different physics and dynamics components have different computational profiles, and universal solutions are hard to come by. We propose here one approach for multi-physics codes. These codes are typically structured as components interacting via software frameworks. The component structure of a typical Earth system model consists of a hierarchical and recursive tree of components, each representing a different climate process or dynamical system. This recursive structure generally encompasses a modest level of concurrency at the highest level (e.g., atmosphere and ocean on different processor sets) with serial organization underneath. We propose to extend concurrency much further by running more and more lower- and higher-level components in parallel with each other. Each component can further be parallelized on the fine grain, potentially offering a major increase in the scalability of Earth system models. We present here first results from this approach, called coarse-grained component concurrency, or CCC. Within the Geophysical Fluid Dynamics Laboratory (GFDL) Flexible Modeling System (FMS), the atmospheric radiative transfer component has been configured to run in parallel with a composite component consisting of every other atmospheric component, including the atmospheric dynamics and all other atmospheric physics components. We will explore the algorithmic challenges involved in such an approach, and present results from such simulations. Plans to achieve even greater levels of coarse-grained concurrency by extending this approach within other components, such as the ocean, will be discussed.
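
    A minimal sketch of the coarse-grained idea, assuming two invented stand-in components (radiation and a composite of everything else): both are submitted to a worker pool each coupling interval and their tendencies are combined afterwards. This is not the FMS implementation.

    ```python
    # Coarse-grained component concurrency: the expensive radiation component runs
    # in parallel with the composite of the remaining components each interval.
    from concurrent.futures import ProcessPoolExecutor

    def radiation(state):
        # Slow component: e.g., radiative heating rate (placeholder arithmetic).
        return -0.01 * state

    def dynamics_and_physics(state):
        # Composite of all other components (placeholder arithmetic).
        return 0.02 * (1.0 - state)

    def coupling_step(state, dt, pool):
        rad = pool.submit(radiation, state)
        dyn = pool.submit(dynamics_and_physics, state)
        return state + dt * (rad.result() + dyn.result())

    if __name__ == "__main__":
        state = 0.5
        with ProcessPoolExecutor(max_workers=2) as pool:
            for _ in range(24):
                state = coupling_step(state, 1.0, pool)
        print(state)
    ```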

  8. The MEOW lunar project for education and science based on concurrent engineering approach

    NASA Astrophysics Data System (ADS)

    Roibás-Millán, E.; Sorribes-Palmer, F.; Chimeno-Manguán, M.

    2018-07-01

    The use of concurrent engineering in the design of space missions makes it possible to take into account, in an interrelated methodology, the high level of coupling and iteration of mission subsystems in the preliminary conceptual phase. This work presents the result of applying concurrent engineering in a short time lapse to design the main elements of the preliminary design for a lunar exploration mission, developed within the ESA Academy Concurrent Engineering Challenge 2017. During this program, students of the Master in Space Systems at Technical University of Madrid designed a low-cost satellite to find water at the Moon's south pole as a prospect for a future human lunar base. The resulting mission, The Moon Explorer And Observer of Water/Ice (MEOW), comprises a 262 kg spacecraft to be launched into a Geostationary Transfer Orbit as a secondary payload in the 2023/2025 time frame. A three-month Weak Stability Boundary transfer via the Sun-Earth L1 Lagrange point allows for high launch-timeframe flexibility. The different aspects of the mission (orbit analysis, spacecraft design and payload) and possibilities of concurrent engineering are described.

  9. Implicit attitudes to sexual partner concurrency vary by sexual orientation but not by gender-A cross sectional study of Belgian students.

    PubMed

    Kenyon, Chris R; Wolfs, Kenny; Osbak, Kara; van Lankveld, Jacques; Van Hal, Guido

    2018-01-01

    High rates of sexual partner concurrency have been shown to facilitate the spread of various sexually transmitted infections. Assessments of explicit attitudes to concurrency have, however, found little difference between populations. Implicit attitudes to concurrency may vary between populations and play a role in generating differences in the prevalence of concurrency. We developed a concurrency implicit associations test (C-IAT) to assess whether implicit attitudes towards concurrency vary between individuals and populations and what the correlates of these variations are. A sample of 869 Belgian students (mean age 23, SD 5.1) completed an online version of the C-IAT together with a questionnaire concerning sexual behavior and explicit attitudes to concurrency. The study participants' C-IATs demonstrated a strong preference for monogamy (-0.78, SD = 0.41). 93.2% of participants had a pro-monogamy C-IAT. There was no difference in this implicit preference for monogamy between heterosexual men and women. Men who have sex with men and women who have sex with women were more likely to exhibit implicit but not explicit preferences for concurrency compared to heterosexual men and women. Correlates of the C-IAT varied between men and women.

  10. A Fog Computing and Cloudlet Based Augmented Reality System for the Industry 4.0 Shipyard.

    PubMed

    Fernández-Caramés, Tiago M; Fraga-Lamas, Paula; Suárez-Albela, Manuel; Vilar-Montesinos, Miguel

    2018-06-02

    Augmented Reality (AR) is one of the key technologies pointed out by Industry 4.0 as a tool for enhancing the next generation of automated and computerized factories. AR can also help shipbuilding operators, since they usually need to interact with information (e.g., product datasheets, instructions, maintenance procedures, quality control forms) that could be handled easily and more efficiently through AR devices. This is the reason why Navantia, one of the 10 largest shipbuilders in the world, is studying the application of AR (among other technologies) in different shipyard environments in a project called "Shipyard 4.0". This article presents Navantia's industrial AR (IAR) architecture, which is based on cloudlets and on the fog computing paradigm. Both technologies are ideal for supporting physically-distributed, low-latency and QoS-aware applications that decrease the network traffic and the computational load of traditional cloud computing systems. The proposed IAR communications architecture is evaluated in real-world scenarios with payload sizes according to demanding Microsoft HoloLens applications and when using a cloud, a cloudlet and a fog computing system. The results show that, in terms of response delay, the fog computing system is the fastest when transferring small payloads (less than 128 KB), while for larger file sizes, the cloudlet solution is faster than the others. Moreover, under high loads (with many concurrent IAR clients), the cloudlet in some cases is more than four times faster than the fog computing system in terms of response delay.

  11. Numericware i: Identical by State Matrix Calculator

    PubMed Central

    Kim, Bongsong; Beavis, William D

    2017-01-01

    We introduce software, Numericware i, to compute identical by state (IBS) matrix based on genotypic data. Calculating an IBS matrix with a large dataset requires large computer memory and takes lengthy processing time. Numericware i addresses these challenges with 2 algorithmic methods: multithreading and forward chopping. The multithreading allows computational routines to concurrently run on multiple central processing unit (CPU) processors. The forward chopping addresses memory limitation by dividing a dataset into appropriately sized subsets. Numericware i allows calculation of the IBS matrix for a large genotypic dataset using a laptop or a desktop computer. For comparison with different software, we calculated genetic relationship matrices using Numericware i, SPAGeDi, and TASSEL with the same genotypic dataset. Numericware i calculates IBS coefficients between 0 and 2, whereas SPAGeDi and TASSEL produce different ranges of values including negative values. The Pearson correlation coefficient between the matrices from Numericware i and TASSEL was high at .9972, whereas SPAGeDi showed low correlation with Numericware i (.0505) and TASSEL (.0587). With a high-dimensional dataset of 500 entities by 10 000 000 SNPs, Numericware i spent 382 minutes using 19 CPU threads and 64 GB memory by dividing the dataset into 3 pieces, whereas SPAGeDi and TASSEL failed with the same dataset. Numericware i is freely available for Windows and Linux under CC-BY 4.0 license at https://figshare.com/s/f100f33a8857131eb2db. PMID:28469375
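
    The two ideas named above, multithreading and forward chopping, can be sketched as follows: the SNP columns are processed in chunks so the pairwise work never needs the full dataset in memory at once, and chunks are dispatched to a thread pool. Genotype coding, chunk size, and the per-locus score 2 − |g_i − g_j| are assumptions for illustration, not Numericware i's internals.

    ```python
    # Chunked ("forward chopping") and multithreaded IBS-matrix sketch. With 0/1/2
    # genotype coding, per-locus IBS is 2 - |g_i - g_j|, averaged over loci, so
    # coefficients fall between 0 and 2, as in the abstract.
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def ibs_chunk(genotypes_chunk):
        # genotypes_chunk: (n_individuals, n_snps_in_chunk) array of 0/1/2
        g = genotypes_chunk.astype(np.float64)
        diff = np.abs(g[:, None, :] - g[None, :, :])     # pairwise |g_i - g_j|
        return (2.0 - diff).sum(axis=2)                  # summed IBS for this chunk

    def ibs_matrix(genotypes, chunk_size=1000, threads=4):
        n, m = genotypes.shape
        chunks = [genotypes[:, s:s + chunk_size] for s in range(0, m, chunk_size)]
        total = np.zeros((n, n))
        with ThreadPoolExecutor(max_workers=threads) as pool:
            for partial in pool.map(ibs_chunk, chunks):
                total += partial
        return total / m                                 # average over all loci

    rng = np.random.default_rng(0)
    geno = rng.integers(0, 3, size=(50, 5000))
    print(ibs_matrix(geno).round(2))
    ```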

  12. A data distributed parallel algorithm for ray-traced volume rendering

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Painter, James S.; Hansen, Charles D.; Krogh, Michael F.

    1993-01-01

    This paper presents a divide-and-conquer ray-traced volume rendering algorithm and a parallel image compositing method, along with their implementation and performance on the Connection Machine CM-5, and networked workstations. This algorithm distributes both the data and the computations to individual processing units to achieve fast, high-quality rendering of high-resolution data. The volume data, once distributed, is left intact. The processing nodes perform local ray tracing of their subvolume concurrently. No communication between processing units is needed during this locally ray-tracing process. A subimage is generated by each processing unit and the final image is obtained by compositing subimages in the proper order, which can be determined a priori. Test results on both the CM-5 and a group of networked workstations demonstrate the practicality of our rendering algorithm and compositing method.
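
    The compositing step can be illustrated independently of the rendering: each node produces a premultiplied RGBA subimage, and the final image is obtained by applying the "over" operator in the a priori front-to-back depth order. The subimages below are random placeholders rather than ray-traced output.

    ```python
    # Ordered compositing of per-node subimages with the front-to-back "over"
    # operator (premultiplied alpha).
    import numpy as np

    def composite_over(front, back):
        # front, back: (H, W, 4) premultiplied RGBA images
        alpha = front[..., 3:4]
        return front + (1.0 - alpha) * back

    def final_image(subimages_in_depth_order):
        result = subimages_in_depth_order[0]
        for sub in subimages_in_depth_order[1:]:
            result = composite_over(result, sub)
        return result

    rng = np.random.default_rng(1)
    subimages = [rng.random((4, 4, 4)) * 0.3 for _ in range(8)]  # one per "node"
    print(final_image(subimages)[..., 3])   # accumulated opacity
    ```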

  13. Assessment of the Incremental Benefit of Computer-Aided Detection (CAD) for Interpretation of CT Colonography by Experienced and Inexperienced Readers

    PubMed Central

    Boone, Darren; Mallett, Susan; McQuillan, Justine; Taylor, Stuart A.; Altman, Douglas G.; Halligan, Steve

    2015-01-01

    Objectives To quantify the incremental benefit of computer-assisted-detection (CAD) for polyps, for inexperienced readers versus experienced readers of CT colonography. Methods 10 inexperienced and 16 experienced radiologists interpreted 102 colonography studies unassisted and with CAD utilised in a concurrent paradigm. They indicated any polyps detected on a study sheet. Readers’ interpretations were compared against a ground-truth reference standard: 46 studies were normal and 56 had at least one polyp (132 polyps in total). The primary study outcome was the difference in CAD net benefit (a combination of change in sensitivity and change in specificity with CAD, weighted towards sensitivity) for detection of patients with polyps. Results Inexperienced readers’ per-patient sensitivity rose from 39.1% to 53.2% with CAD and specificity fell from 94.1% to 88.0%, both statistically significant. Experienced readers’ sensitivity rose from 57.5% to 62.1% and specificity fell from 91.0% to 88.3%, both non-significant. Net benefit with CAD assistance was significant for inexperienced readers but not for experienced readers: 11.2% (95%CI 3.1% to 18.9%) versus 3.2% (95%CI -1.9% to 8.3%) respectively. Conclusions Concurrent CAD resulted in a significant net benefit when used by inexperienced readers to identify patients with polyps by CT colonography. The net benefit was nearly four times the magnitude of that observed for experienced readers. Experienced readers did not benefit significantly from concurrent CAD. PMID:26355745

  14. IMACS 󈨟: Proceedings of the IMACS World Congress on Computation and Applied Mathematics (13th) Held in Dublin, Ireland on July 22-26, 1991. Volume 2. Computational Fluid Dynamics and Wave Propagation, Parallel Computing, Concurrent and Supercomputing, Computational Physics/Computational Chemistry and Evolutionary Systems

    DTIC Science & Technology

    1991-01-01

    Investigations of uniform convergence Sorption gel5ster Stoffe in porisen Medien (in German). Ver- are-in progress. lag P -Lang, Frankfurt/M., 1991 (in press...ad- sorption terms. Numerical results for solute transport with instantaneous, /it + 01(p): - q(p)#= 0, X > 0, t > 0. (5) nonlinear adsorption-are...13 ] A, S113S = (PfIJl) 8Iax, S1136= m A, y H137=- pf A, v 138 -f A, IIso l r opic-visco° c ousic- Y 139 im A, z-pliole (C’ilz) y 1141 = [ PfA 𔃻 + P

  15. MaMR: High-performance MapReduce programming model for material cloud applications

    NASA Astrophysics Data System (ADS)

    Jing, Weipeng; Tong, Danyu; Wang, Yangang; Wang, Jingyuan; Liu, Yaqiu; Zhao, Peng

    2017-02-01

    With the increasing data size in materials science, existing programming models no longer satisfy the application requirements. MapReduce is a programming model that enables the easy development of scalable parallel applications to process big data on cloud computing systems. However, this model does not directly support the processing of multiple related data, and the processing performance does not reflect the advantages of cloud computing. To enhance the capability of workflow applications in material data processing, we defined a programming model for material cloud applications, called MaMR, that supports multiple different Map and Reduce functions running concurrently, based on a hybrid shared-memory BSP model. An optimized data sharing strategy to supply the shared data to the different Map and Reduce stages was also designed. We added a new merge phase to MapReduce that can efficiently merge data from the map and reduce modules. Experiments showed that the model and framework deliver effective performance improvements compared with previous work.
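
    A toy sketch of the "multiple different Map and Reduce functions running concurrently" idea, with a final merge step: two independent map/reduce pipelines share the same input data and run on a thread pool. The pipelines, data, and merge keys are invented and bear no relation to the MaMR implementation.

    ```python
    # Two map/reduce pipelines over shared data, executed concurrently, followed
    # by a merge phase that combines their outputs.
    from concurrent.futures import ThreadPoolExecutor
    from functools import reduce

    shared_data = [1.0, 2.5, 3.2, 4.8, 5.1]

    def pipeline(map_fn, reduce_fn, data):
        mapped = list(map(map_fn, data))
        return reduce(reduce_fn, mapped)

    def merge(results):
        # Final merge phase combining outputs of the independent pipelines.
        return dict(zip(("sum_of_squares", "max_scaled"), results))

    with ThreadPoolExecutor(max_workers=2) as pool:
        f1 = pool.submit(pipeline, lambda x: x * x, lambda a, b: a + b, shared_data)
        f2 = pool.submit(pipeline, lambda x: 2 * x, max, shared_data)
        print(merge([f1.result(), f2.result()]))
    ```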

  16. A study of the relationship between the performance and dependability of a fault-tolerant computer

    NASA Technical Reports Server (NTRS)

    Goswami, Kumar K.

    1994-01-01

    This thesis studies the relationship between performance and dependability by creating a tool (FTAPE) that integrates a high-stress workload generator with fault injection and by using the tool to evaluate system performance under error conditions. The workloads are composed of processes which are formed from atomic components that represent CPU, memory, and I/O activity. The fault injector is software-implemented and is capable of injecting faults into any memory-addressable location, including special registers and caches. This tool has been used to study a Tandem Integrity S2 Computer. Workloads with varying numbers of processes and varying compositions of CPU, memory, and I/O activity are first characterized in terms of performance. Then faults are injected into these workloads. The results show that as the number of concurrent processes increases, the mean fault latency initially increases due to increased contention for the CPU. However, for even higher numbers of processes (more than 3 processes), the mean latency decreases because long-latency faults are paged out before they can be activated.

  17. Study of the mapping of Navier-Stokes algorithms onto multiple-instruction/multiple-data-stream computers

    NASA Technical Reports Server (NTRS)

    Eberhardt, D. S.; Baganoff, D.; Stevens, K.

    1984-01-01

    Implicit approximate-factored algorithms have certain properties that are suitable for parallel processing. A particular computational fluid dynamics (CFD) code, using this algorithm, is mapped onto a multiple-instruction/multiple-data-stream (MIMD) computer architecture. An explanation of this mapping procedure is presented, as well as some of the difficulties encountered when trying to run the code concurrently. Timing results are given for runs on the Ames Research Center's MIMD test facility which consists of two VAX 11/780's with a common MA780 multi-ported memory. Speedups exceeding 1.9 for characteristic CFD runs were indicated by the timing results.

  18. A Framework for Load Balancing of Tensor Contraction Expressions via Dynamic Task Partitioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Pai-Wei; Stock, Kevin; Rajbhandari, Samyam

    In this paper, we introduce the Dynamic Load-balanced Tensor Contractions (DLTC), a domain-specific library for efficient task-parallel execution of tensor contraction expressions, a class of computation encountered in quantum chemistry and physics. Our framework decomposes each contraction into smaller units of work (tasks), represented by an abstraction referred to as iterators. We exploit an extra level of parallelism by having tasks across independent contractions executed concurrently through a dynamic load-balancing runtime. We demonstrate the improved performance, scalability, and flexibility for the computation of tensor contraction expressions on parallel computers using examples from coupled cluster methods.
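
    The task-decomposition idea can be sketched by tiling a simple contraction C[i,j] = Σ_k A[i,k] B[k,j] into block tasks and letting tasks from two independent contractions share one worker pool; block size, shapes, and the scheduling policy are illustrative assumptions, not DLTC's iterators.

    ```python
    # Block-task decomposition of matrix-style contractions; tasks from two
    # independent contractions are submitted to the same pool and interleave.
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def block_tasks(A, B, bs):
        # One task per (row-block, col-block) tile of the output.
        n, m = A.shape[0], B.shape[1]
        for i in range(0, n, bs):
            for j in range(0, m, bs):
                yield i, j, bs

    def tile_task(A, B, C, i, j, b):
        C[i:i + b, j:j + b] = A[i:i + b, :] @ B[:, j:j + b]

    def submit_contraction(A, B, bs, pool):
        C = np.zeros((A.shape[0], B.shape[1]))
        futures = [pool.submit(tile_task, A, B, C, i, j, b)
                   for i, j, b in block_tasks(A, B, bs)]
        return C, futures

    rng = np.random.default_rng(0)
    A1, B1 = rng.random((64, 32)), rng.random((32, 64))
    A2, B2 = rng.random((48, 16)), rng.random((16, 48))
    with ThreadPoolExecutor(max_workers=8) as pool:
        C1, f1 = submit_contraction(A1, B1, 16, pool)
        C2, f2 = submit_contraction(A2, B2, 16, pool)   # tasks interleave with C1's
        for f in f1 + f2:
            f.result()
    print(np.allclose(C1, A1 @ B1), np.allclose(C2, A2 @ B2))
    ```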

  19. Plancton: an opportunistic distributed computing project based on Docker containers

    NASA Astrophysics Data System (ADS)

    Concas, Matteo; Berzano, Dario; Bagnasco, Stefano; Lusso, Stefano; Masera, Massimo; Puccio, Maximiliano; Vallero, Sara

    2017-10-01

    The computing power of most modern commodity computers is far from being fully exploited by standard usage patterns. In this work we describe the development and setup of a virtual computing cluster based on Docker containers used as worker nodes. The facility is based on Plancton: a lightweight fire-and-forget background service. Plancton spawns and controls a local pool of Docker containers on a host with free resources, by constantly monitoring its CPU utilisation. It is designed to release the resources allocated opportunistically, whenever another demanding task is run by the host user, according to configurable policies. This is attained by killing a number of running containers. One of the advantages of a thin virtualization layer such as Linux containers is that they can be started almost instantly upon request. We will show how the fast start-up and disposal of containers eventually enables us to implement an opportunistic cluster based on Plancton daemons without a central control node, where the spawned Docker containers behave as job pilots. Finally, we will show how Plancton was configured to run up to 10 000 concurrent opportunistic jobs on the ALICE High-Level Trigger facility, giving a considerable advantage in terms of management compared to virtual machines.
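
    A heavily simplified control loop in the spirit of Plancton might look like the sketch below: poll host CPU utilisation and spawn or remove worker containers through the Docker command line to stay under a target. The image name, thresholds, and polling intervals are invented, and this is not the Plancton daemon itself.

    ```python
    # Opportunistic container pool sketch: spawn pilots while the host is idle,
    # reap them when CPU utilisation crosses a target. Requires Docker and the
    # third-party psutil package (pip install psutil).
    import subprocess, time
    import psutil

    IMAGE, TARGET_CPU, MAX_WORKERS = "busybox:latest", 75.0, 4
    workers = []

    def spawn():
        cid = subprocess.check_output(
            ["docker", "run", "-d", IMAGE, "sleep", "3600"], text=True).strip()
        workers.append(cid)

    def reap():
        cid = workers.pop()
        subprocess.run(["docker", "rm", "-f", cid], check=False)

    while True:
        load = psutil.cpu_percent(interval=5)        # average utilisation over 5 s
        if load < TARGET_CPU and len(workers) < MAX_WORKERS:
            spawn()                                  # free resources: add a pilot
        elif load >= TARGET_CPU and workers:
            reap()                                   # host is busy: release resources
        time.sleep(10)
    ```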

  20. Real-time implementation of optimized maximum noise fraction transform for feature extraction of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun

    2014-01-01

    We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach explored the algorithm's data-level concurrency and optimized the computing flow. We first defined a three-dimensional grid, in which each thread calculates a sub-block of data, to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, which is one of the most important steps involved in OMNF. Then, we optimized the processing flow and computed the noise covariance matrix before computing the image covariance matrix to reduce the original hyperspectral image data transmission. These optimization strategies can greatly improve the computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and basic linear algebra subroutines library. Through the experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup of the algorithm compared with the CPU implementation, especially for highly data parallelizable and arithmetically intensive algorithm parts, such as noise estimation. In order to further evaluate the effectiveness of G-OMNF, we used two different applications: spectral unmixing and classification. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the requirements for on-board, real-time feature extraction.

  1. Modelling the impact of correlations between condom use and sexual contact pattern on the dynamics of sexually transmitted infections.

    PubMed

    Yamamoto, Nao; Ejima, Keisuke; Nishiura, Hiroshi

    2018-05-31

    It is believed that sexually active people, i.e. people having multiple or concurrent sexual partners, are at a high risk of sexually transmitted infections (STIs), but they are likely to be more aware of the risk and may exhibit a greater rate of condom use. The purpose of the present study is to examine the correlation between condom use and sexual contact pattern and clarify its impact on the transmission dynamics of STIs using a mathematical model. The definition of sexual contact pattern can be broad, but we focus on two specific aspects: (i) type of partnership (i.e. steady or casual partnership) and (ii) existence of concurrency (i.e. with single or multiple partners). Systematic review and meta-analysis of published studies are performed, analysing literature that epidemiologically examined the relationship between condom use and sexual contact pattern. Subsequently, we employ an epidemiological model and compute the reproduction number accounting for individuals with and without concurrent partnerships, so that the corresponding coverage of condom use and its correlation with the existence of concurrency can be explicitly investigated using the mathematical model. Combining the model with parameters estimated from the meta-analysis along with other assumed parameters, the impact of varying the proportion of population with multiple partners on the reproduction number is examined. Based on the systematic review, we show that a greater number of people used condoms during sexual contact with casual partners than with steady partners. Furthermore, people with multiple partners use condoms more frequently than people with a single partner alone. Our mathematical model revealed a positive relationship between the effective reproduction number and the proportion of people with multiple partners. Nevertheless, the association was reversed to a negative one when a slightly greater value of the relative risk of condom use for people with multiple partners than that empirically estimated was employed. Depending on the correlation between condom use and the existence of concurrency, the association between the proportion of people with multiple partners and the reproduction number can be reversed, suggesting that the sexually active population is not necessarily the primary target population for encouraging condom use (i.e., sexually less active individuals could equivalently be a target in some cases).
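
    Purely as a toy illustration of the qualitative mechanism (not the authors' model), the sketch below approximates R as a contact-rate-weighted sum over a single-partner group and a multi-partner group, with per-partnership transmission reduced by group-specific condom coverage. All numbers are invented; with them, a sufficiently higher coverage in the multi-partner group reverses the sign of the association between concurrency and R.

    ```python
    # Toy two-group reproduction-number calculation showing how group-specific
    # condom coverage can flip the association between concurrency and R.
    def reproduction_number(p_multi, c_single, c_multi,
                            beta, duration, coverage_single, coverage_multi,
                            condom_efficacy=0.9):
        def group_term(fraction, contacts, coverage):
            effective_beta = beta * (1 - coverage * condom_efficacy)
            return fraction * contacts * effective_beta * duration
        return (group_term(1 - p_multi, c_single, coverage_single) +
                group_term(p_multi, c_multi, coverage_multi))

    # Same behaviour, two assumptions about how much more the multi-partner
    # group uses condoms: a modest difference vs. a large difference.
    for cov_multi in (0.5, 0.95):
        R = [reproduction_number(p, 1.0, 3.0, 0.1, 5.0, 0.3, cov_multi)
             for p in (0.1, 0.3)]
        print(f"coverage_multi={cov_multi}: R at 10% multi = {R[0]:.2f}, "
              f"at 30% multi = {R[1]:.2f}")
    ```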

  2. Concurrent partnerships and HIV: an inconvenient truth

    PubMed Central

    2011-01-01

    The strength of the evidence linking concurrency to HIV epidemic severity in southern and eastern Africa led the Joint United Nations Programme on HIV/AIDS and the Southern African Development Community in 2006 to conclude that high rates of concurrent sexual partnerships, combined with low rates of male circumcision and infrequent condom use, are major drivers of the AIDS epidemic in southern Africa. In a recent article in the Journal of the International AIDS Society, Larry Sawers and Eileen Stillwaggon attempt to challenge the evidence for the importance of concurrency and call for an end to research on the topic. However, their "systematic review of the evidence" is not an accurate summary of the research on concurrent partnerships and HIV, and it contains factual errors concerning the measurement and mathematical modelling of concurrency. Practical prevention-oriented research on concurrency is only just beginning. Most interventions to raise awareness about the risks of concurrency are less than two years old; few evaluations and no randomized-controlled trials of these programmes have been conducted. Determining whether these interventions can help people better assess their own risks and take steps to reduce them remains an important task for research. This kind of research is indeed the only way to obtain conclusive evidence on the role of concurrency, the programmes needed for effective prevention, the willingness of people to change behaviour, and the obstacles to change. PMID:21406080

  3. Anonymously Productive and Socially Engaged While Learning at Work

    ERIC Educational Resources Information Center

    Magni, Luca

    2016-01-01

    Many concurrent variables appear to influence people when they interact anonymously, either face-to-face (F2F) or in computer-mediated communications (CMC). This paper presents the results of a small exploratory study, conducted in a medical company in Italy, to investigate how the use of pseudonyms influences CMC behaviours. The study involved…

  4. Database Management Systems: A Case Study of Faculty of Open Education

    ERIC Educational Resources Information Center

    Kamisli, Zehra

    2004-01-01

    We live in the information and the microelectronic age, where technological advancements become a major determinant of our lifestyle. Such advances in technology cannot possibly be made or sustained without concurrent advancement in management systems (5). The impact of computer technology on organizations and society is increasing as new…

  5. A New Model of Sensorimotor Coupling in the Development of Speech

    ERIC Educational Resources Information Center

    Westermann, Gert; Miranda, Eduardo Reck

    2004-01-01

    We present a computational model that learns a coupling between motor parameters and their sensory consequences in vocal production during a babbling phase. Based on the coupling, preferred motor parameters and prototypically perceived sounds develop concurrently. Exposure to an ambient language modifies perception to coincide with the sounds from…

  6. The AT Odyssey Continues. Proceedings of the RESNA 2001 Annual Conference (Reno, Nevada, June 22-26, 2001). Volume 21.

    ERIC Educational Resources Information Center

    Simpson, Richard, Ed.

    These proceedings of the annual RESNA (Association for the Advancement of Rehabilitation Technology) conference include more than 200 presentations on all facets of assistive technology, including concurrent sessions, scientific platform sessions, interactive poster presentations, computer demonstrations, and the research symposia. The scientific…

  7. Interpreting beyond Syntactics: A Semiotic Learning Model for Computer Programming Languages

    ERIC Educational Resources Information Center

    May, Jeffrey; Dhillon, Gurpreet

    2009-01-01

    In the information systems field there are numerous programming languages that can be used in specifying the behavior of concurrent and distributed systems. In the literature it has been argued that a lack of pragmatic and semantic consideration decreases the effectiveness of such specifications. In other words, to simply understand the syntactic…

  8. Cortical Activations during a Computer-Based Fraction Learning Game: Preliminary Results from a Pilot Study

    ERIC Educational Resources Information Center

    Baker, Joseph M.; Martin, Taylor; Aghababyan, Ani; Armaghanyan, Armen; Gillam, Ronald

    2015-01-01

    Advances in educational neuroscience have made it possible for researchers to conduct studies that observe concurrent behavioral (i.e., task performance) and neural (i.e., brain activation) responses to naturalistic educational activities. Such studies are important because they help educators, clinicians, and researchers to better understand the…

  9. 16 CFR 1211.5 - General testing parameters.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... § 1211.4(c) for compliance with the Standard for Safety for Tests for Safety-Related Controls Employing... vibration level of 5g is to be used for the Vibration Test. (6) When a Computational Investigation is... tested. (8) The Endurance test is to be conducted concurrently with the Operational test. The control...

  10. 16 CFR § 1211.5 - General testing parameters.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... covered by § 1211.4(c) for compliance with the Standard for Safety for Tests for Safety-Related Controls... vibration level of 5g is to be used for the Vibration Test. (6) When a Computational Investigation is... tested. (8) The Endurance test is to be conducted concurrently with the Operational test. The control...

  11. 16 CFR 1211.5 - General testing parameters.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... § 1211.4(c) for compliance with the Standard for Safety for Tests for Safety-Related Controls Employing... vibration level of 5g is to be used for the Vibration Test. (6) When a Computational Investigation is... tested. (8) The Endurance test is to be conducted concurrently with the Operational test. The control...

  12. 16 CFR 1211.5 - General testing parameters.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... § 1211.4(c) for compliance with the Standard for Safety for Tests for Safety-Related Controls Employing... vibration level of 5g is to be used for the Vibration Test. (6) When a Computational Investigation is... tested. (8) The Endurance test is to be conducted concurrently with the Operational test. The control...

  13. Behavioral Assessment of Impulsivity: A Comparison of Children with and without Attention Deficit Hyperactivity Disorder

    ERIC Educational Resources Information Center

    Neef, Nancy A.; Marckel, Julie; Ferreri, Summer J.; Bicard, David F.; Endo, Sayaka; Aman, Michael G.; Miller, Kelly M.; Jung, Sunhwa; Nist, Lindsay; Armstrong, Nancy

    2005-01-01

    We conducted a brief computer-based assessment involving choices of concurrently presented arithmetic problems associated with competing reinforcer dimensions to assess impulsivity (choices controlled primarily by reinforcer immediacy) as well as the relative influence of other dimensions (reinforcer rate, quality, and response effort), with 58…

  14. Distributed Memory Compiler Methods for Irregular Problems - Data Copy Reuse and Runtime Partitioning

    DTIC Science & Technology

    1991-09-01

    In addition, support for Saltz was provided by NSF Grant ASC-8819374. Introduction: Over the past few years, we have developed methods needed to ... network. In Third Conf. on Hypercube Concurrent Computers and Applications, pages 241-278, 1988. [17] G. Fox, S. Hiranandani, K. Kennedy, C. Koelbel

  15. Effects of a History of Differential Reinforcement on Preference for Choice

    ERIC Educational Resources Information Center

    Karsina, Allen; Thompson, Rachel H.; Rodriguez, Nicole M.

    2011-01-01

    The effects of a history of differential reinforcement for selecting a free-choice versus a restricted-choice stimulus arrangement on the subsequent responding of 7 undergraduates in a computer-based game of chance were examined using a concurrent-chains arrangement and a multiple-baseline-across-participants design. In the free-choice…

  16. Combining analysis with optimization at Langley Research Center. An evolutionary process

    NASA Technical Reports Server (NTRS)

    Rogers, J. L., Jr.

    1982-01-01

    The evolutionary process of combining analysis and optimization codes was traced with a view toward providing insight into the long-term goal of developing the methodology for an integrated, multidisciplinary software system for the concurrent analysis and optimization of aerospace structures. It was traced along the lines of strength sizing, concurrent strength and flutter sizing, and general optimization to define a near-term goal for combining analysis and optimization codes. Development of a modular software system combining general-purpose, state-of-the-art, production-level analysis computer programs for structures, aerodynamics, and aeroelasticity with a state-of-the-art optimization program is required. Incorporation of a modular and flexible structural optimization software system into a state-of-the-art finite element analysis computer program will facilitate this effort. This effort results in a software system that is controlled with a special-purpose language, communicates with a data management system, and is easily modified to add new programs and capabilities. A 337 degree-of-freedom finite element model is used in verifying the accuracy of this system.

  17. Characterizing parallel file-access patterns on a large-scale multiprocessor

    NASA Technical Reports Server (NTRS)

    Purakayastha, Apratim; Ellis, Carla Schlatter; Kotz, David; Nieuwejaar, Nils; Best, Michael

    1994-01-01

    Rapid increases in the computational speeds of multiprocessors have not been matched by corresponding performance enhancements in the I/O subsystem. To satisfy the large and growing I/O requirements of some parallel scientific applications, we need parallel file systems that can provide high-bandwidth and high-volume data transfer between the I/O subsystem and thousands of processors. Design of such high-performance parallel file systems depends on a thorough grasp of the expected workload. So far there have been no comprehensive usage studies of multiprocessor file systems. Our CHARISMA project intends to fill this void. The first results from our study involve an iPSC/860 at NASA Ames. This paper presents results from a different platform, the CM-5 at the National Center for Supercomputing Applications. The CHARISMA studies are unique because we collect information about every individual read and write request and about the entire mix of applications running on the machines. The results of our trace analysis lead to recommendations for parallel file system design. First, the file system should support efficient concurrent access to many files and I/O requests from many jobs under varying load conditions. Second, it must efficiently manage large files kept open for long periods. Third, it should expect to see small requests, predominantly sequential access patterns, application-wide synchronous access, no concurrent file-sharing between jobs, appreciable byte and block sharing between processes within jobs, and strong interprocess locality. Finally, the trace data suggest that node-level write caches and collective I/O request interfaces may be useful in certain environments.

  18. A Digital Photographic Measurement Method for Quantifying Foot Posture: Validity, Reliability, and Descriptive Data

    PubMed Central

    Cobb, Stephen C.; James, C. Roger; Hjertstedt, Matthew; Kruk, James

    2011-01-01

    Abstract Context: Although abnormal foot posture long has been associated with lower extremity injury risk, the evidence is equivocal. Poor intertester reliability of traditional foot measures might contribute to the inconsistency. Objectives: To investigate the validity and reliability of a digital photographic measurement method (DPMM) technology, the reliability of DPMM-quantified foot measures, and the concurrent validity of the DPMM with clinical-measurement methods (CMMs) and to report descriptive data for DPMM measures with moderate to high intratester and intertester reliability. Design: Descriptive laboratory study. Setting: Biomechanics research laboratory. Patients or Other Participants: A total of 159 people participated in 3 groups. Twenty-eight people (11 men, 17 women; age  =  25 ± 5 years, height  =  1.71 ± 0.10 m, mass  =  77.6 ± 17.3 kg) were recruited for investigation of intratester and intertester reliability of the DPMM technology; 20 (10 men, 10 women; age  =  24 ± 2 years, height  =  1.71 ± 0.09 m, mass  =  76 ± 16 kg) for investigation of DPMM and CMM reliability and concurrent validity; and 111 (42 men, 69 women; age  =  22.8 ± 4.7 years, height  =  168.5 ± 10.4 cm, mass  =  69.8 ± 13.3 kg) for development of a descriptive data set of the DPMM foot measurements with moderate to high intratester and intertester reliabilities. Intervention(s): The dimensions of 10 model rectangles and the 28 participants' feet were measured, and DPMM foot posture was measured in the 111 participants. Two clinicians assessed the DPMM and CMM foot measures of the 20 participants. Main Outcome Measure(s): Validity and reliability were evaluated using mean absolute and percentage errors and intraclass correlation coefficients. Descriptive data were computed from the DPMM foot posture measures. Results: The DPMM technology intratester and intertester reliability intraclass correlation coefficients were 1.0 for each tester and variable. Mean absolute errors were equal to or less than 0.2 mm for the bottom and right-side variables and 0.1° for the calculated angle variable. Mean percentage errors between the DPMM and criterion reference values were equal to or less than 0.4%. Intratester and intertester reliabilities of DPMM-computed structural measures of arch and navicular indices were moderate to high (>0.78), and concurrent validity was moderate to strong. Conclusions: The DPMM is a valid and reliable clinical and research tool for quantifying foot structure. The DPMM and the descriptive data might be used to define groups in future studies in which the relationship between foot posture and function or injury risk is investigated. PMID:21214347

  19. Data-Driven Correlation Analysis Between Observed 3D Fatigue-Crack Path and Computed Fields from High-Fidelity, Crystal-Plasticity, Finite-Element Simulations

    NASA Astrophysics Data System (ADS)

    Pierson, Kyle D.; Hochhalter, Jacob D.; Spear, Ashley D.

    2018-05-01

    Systematic correlation analysis was performed between simulated micromechanical fields in an uncracked polycrystal and the known path of an eventual fatigue-crack surface based on experimental observation. Concurrent multiscale finite-element simulation of cyclic loading was performed using a high-fidelity representation of grain structure obtained from near-field high-energy x-ray diffraction microscopy measurements. An algorithm was developed to parameterize and systematically correlate the three-dimensional (3D) micromechanical fields from simulation with the 3D fatigue-failure surface from experiment. For comparison, correlation coefficients were also computed between the micromechanical fields and hypothetical, alternative surfaces. The correlation of the fields with hypothetical surfaces was found to be consistently weaker than that with the known crack surface, suggesting that the micromechanical fields of the cyclically loaded, uncracked microstructure might provide some degree of predictiveness for microstructurally small fatigue-crack paths, although the extent of such predictiveness remains to be tested. In general, gradients of the field variables exhibit stronger correlations with crack path than the field variables themselves. Results from the data-driven approach implemented here can be leveraged in future model development for prediction of fatigue-failure surfaces (for example, to facilitate univariate feature selection required by convolution-based models).

  20. GaAs Supercomputing: Architecture, Language, And Algorithms For Image Processing

    NASA Astrophysics Data System (ADS)

    Johl, John T.; Baker, Nick C.

    1988-10-01

    The application of high-speed GaAs processors in a parallel system matches the demanding computational requirements of image processing. The architecture of the McDonnell Douglas Astronautics Company (MDAC) vector processor is described along with the algorithms and language translator. Most image and signal processing algorithms can utilize parallel processing and show a significant performance improvement over sequential versions. The parallelization performed by this system is within each vector instruction. Since each vector has many elements, each requiring some computation, useful concurrent arithmetic operations can easily be performed. Balancing the memory bandwidth with the computation rate of the processors is an important design consideration for high efficiency and utilization. The architecture features a bus-based execution unit consisting of four to eight 32-bit GaAs RISC microprocessors running at a 200 MHz clock rate for a peak performance of 1.6 BOPS. The execution unit is connected to a vector memory with three buses capable of transferring two input words and one output word every 10 nsec. The address generators inside the vector memory perform different vector addressing modes and feed the data to the execution unit. The functions discussed in this paper include basic MATRIX OPERATIONS, 2-D SPATIAL CONVOLUTION, HISTOGRAM, and FFT. For each of these algorithms, assembly language programs were run on a behavioral model of the system to obtain performance figures.
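
    As a rough illustration of why spatial convolution maps well onto vector instructions, the sketch below (not the MDAC assembly code) expresses a 2-D convolution as a small number of whole-array multiply-accumulate operations, each of which is the kind of long vector operation the GaAs execution unit would process concurrently.

```python
# Hedged sketch: 2-D spatial convolution written as per-tap vector multiply-accumulates.
import numpy as np

def convolve2d_valid(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for dy in range(kh):
        for dx in range(kw):
            # One kernel tap scales one shifted sub-image: a long vector multiply-accumulate.
            out += kernel[dy, dx] * image[dy:dy + oh, dx:dx + ow]
    return out

image = np.arange(36.0).reshape(6, 6)
box = np.full((3, 3), 1.0 / 9.0)   # 3x3 averaging filter
print(convolve2d_valid(image, box))
```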

  1. Low-cost, high-performance and efficiency computational photometer design

    NASA Astrophysics Data System (ADS)

    Siewert, Sam B.; Shihadeh, Jeries; Myers, Randall; Khandhar, Jay; Ivanov, Vitaly

    2014-05-01

    Researchers at the University of Alaska Anchorage and the University of Colorado Boulder have built a low-cost, high-performance and efficiency drop-in-place Computational Photometer (CP) to test in field applications ranging from port security and safety monitoring to environmental compliance monitoring and surveying. The CP integrates off-the-shelf visible-spectrum cameras with near- to long-wavelength infrared detectors and high-resolution digital snapshots in a single device. The proof of concept combines three or more detectors into a single multichannel imaging system that can time-correlate read-out, capture, and image-process all of the channels concurrently with high performance and energy efficiency. The dual-channel continuous read-out is combined with a third high-definition digital snapshot capability and has been designed using an FPGA (Field Programmable Gate Array) to capture, decimate, down-convert, re-encode, and transform images from two standard-definition CCD (Charge Coupled Device) cameras at 30 Hz. The continuous stereo vision can be time-correlated to megapixel high-definition snapshots. This proof of concept has been fabricated as a four-layer PCB (Printed Circuit Board) suitable for use in education and research for low-cost, high-efficiency field monitoring applications that need multispectral and three-dimensional imaging capabilities. Initial testing is in progress and includes field testing in ports, potential test flights in unmanned aerial systems, and future planned missions to image harsh environments in the Arctic, including volcanic plumes, ice formation, and Arctic marine life.

  2. Investigations of Stability in Junior High School Math and English Classes: The Texas Junior High School Study. Research and Development Report No. 77-3.

    ERIC Educational Resources Information Center

    Evertson, Carolyn M.; And Others

    The stability of classroom behavior is examined from several perspectives: (1) the relative consistency of teacher behavior in two different sections of the same course taught concurrently; (2) the relative consistency of student behavior in math and English classes attended concurrently; and (3) differences in student and teacher behavior in math…

  3. Chemical modeling for precipitation from hypersaline hydrofracturing brines.

    PubMed

    Zermeno-Motante, Maria I; Nieto-Delgado, Cesar; Cannon, Fred S; Cash, Colin C; Wunz, Christopher C

    2016-10-15

    Hypersaline hydrofracturing brines host very high salt concentrations, as high as 120,000-330,000 mg/L total dissolved solids (TDS), corresponding to ionic strengths of 2.1-5.7 mol/kg. This is 4-10 times higher than for ocean water. At such high ionic strengths, the conventional equations for computing activity coefficients no longer apply; and the complex ion-interactive Pitzer model must be invoked. The authors herein have used the Pitzer-based PHREEQC computer program to compute the appropriate activity coefficients when forming such precipitates as BaSO4, CaSO4, MgSO4, SrSO4, CaCO3, SrCO3, and BaCO3 in hydrofracturing waters. The divalent cation activity coefficients (γM) were computed in the 0.1 to 0.2 range at 2.1 mol/kg ionic strength, then by 5.7 mol/kg ionic strength, they rose to 0.2 for Ba(2+), 0.6 for Sr(2+), 0.8 for Ca(2+), and 2.1 for Mg(2+). Concurrently, the [Formula: see text] was 0.02-0.03; and [Formula: see text] was 0.01-0.02. While employing these Pitzer-derived activity coefficients, the authors then used the PHREEQC model to characterize precipitation of several of these sulfates and carbonates from actual hydrofracturing waters. Modeled precipitation matched quite well with actual laboratory experiments and full-scale operations. Also, the authors found that SrSO4 effectively co-precipitated radium from hydrofracturing brines, as discerned when monitoring (228)Ra and other beta-emitting species via liquid scintillation; and also when monitoring gamma emissions from (226)Ra. Copyright © 2016 Elsevier Ltd. All rights reserved.
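
    The need for the Pitzer model can be made concrete with the standard ionic-strength formula I = 1/2 * sum_i(m_i * z_i^2), which is what places these brines at 2.1-5.7 mol/kg. The composition used in the sketch below is illustrative only, not data from the paper.

```python
# Hedged sketch: ionic strength of a brine, I = 0.5 * sum(m_i * z_i**2).
def ionic_strength(species):
    """species: iterable of (molality in mol/kg, integer charge)."""
    return 0.5 * sum(m * z ** 2 for m, z in species)

brine = [
    (3.0, +1),    # Na+
    (0.5, +2),    # Ca2+
    (0.05, +2),   # Ba2+
    (4.0, -1),    # Cl-
    (0.01, -2),   # SO4 2-
]
print(f"I = {ionic_strength(brine):.2f} mol/kg")  # ~4.6, far beyond Debye-Hueckel-type validity
```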

  4. Cooperating knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Feigenbaum, Edward A.; Buchanan, Bruce G.

    1988-01-01

    This final report covers work performed under Contract NCC2-220 between NASA Ames Research Center and the Knowledge Systems Laboratory, Stanford University. The period of research was from March 1, 1987 to February 29, 1988. Topics covered were as follows: (1) concurrent architectures for knowledge-based systems; (2) methods for the solution of geometric constraint satisfaction problems, and (3) reasoning under uncertainty. The research in concurrent architectures was co-funded by DARPA, as part of that agency's Strategic Computing Program. The research has been in progress since 1985, under DARPA and NASA sponsorship. The research in geometric constraint satisfaction has been done in the context of a particular application, that of determining the 3-D structure of complex protein molecules, using the constraints inferred from NMR measurements.

  5. A modular theory of multisensory integration for motor control

    PubMed Central

    Tagliabue, Michele; McIntyre, Joseph

    2014-01-01

    To control targeted movements, such as reaching to grasp an object or hammering a nail, the brain can use diverse sources of sensory information, such as vision and proprioception. Although a variety of studies have shown that sensory signals are optimally combined according to principles of maximum likelihood, increasing evidence indicates that the CNS does not compute a single, optimal estimation of the target's position to be compared with a single optimal estimation of the hand. Rather, it employs a more modular approach in which the overall behavior is built by computing multiple concurrent comparisons carried out simultaneously in a number of different reference frames. The results of these individual comparisons are then optimally combined in order to drive the hand. In this article we examine at a computational level two formulations of concurrent models for sensory integration and compare them to the more conventional model of converging multi-sensory signals. Through a review of published studies, both our own and those performed by others, we produce evidence favoring the concurrent formulations. We then examine in detail the effects of additive signal noise as information flows through the sensorimotor system. By taking into account the noise added by sensorimotor transformations, one can explain why the CNS may shift its reliance on one sensory modality toward a greater reliance on another and investigate under what conditions those sensory transformations occur. Careful consideration of how transformed signals will co-vary with the original source also provides insight into how the CNS chooses one sensory modality over another. These concepts can be used to explain why the CNS might, for instance, create a visual representation of a task that is otherwise limited to the kinesthetic domain (e.g., pointing with one hand to a finger on the other) and why the CNS might choose to recode sensory information in an external reference frame. PMID:24550816
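
    The maximum-likelihood combination rule referred to above has a simple closed form for independent Gaussian estimates: each cue is weighted by its inverse variance. The sketch below is a generic illustration with made-up numbers; in the concurrent formulation the same rule would be applied to several target-versus-hand comparisons, each carried out in its own reference frame, before combining them.

```python
# Hedged sketch: inverse-variance (maximum-likelihood) fusion of two noisy estimates.
def fuse(estimates):
    """estimates: list of (mean, variance) pairs -> (fused mean, fused variance)."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    mean = sum(w * mu for w, (mu, _) in zip(weights, estimates)) / total
    return mean, 1.0 / total

vision = (10.0, 1.0)          # target position seen visually: mean (cm), variance (cm^2)
proprioception = (12.0, 4.0)  # the same position sensed proprioceptively
print(fuse([vision, proprioception]))  # (10.4, 0.8): pulled toward the more reliable cue
```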

  6. Design of a verifiable subset for HAL/S

    NASA Technical Reports Server (NTRS)

    Browne, J. C.; Good, D. I.; Tripathi, A. R.; Young, W. D.

    1979-01-01

    An attempt to evaluate the applicability of program verification techniques to the existing programming language, HAL/S is discussed. HAL/S is a general purpose high level language designed to accommodate the software needs of the NASA Space Shuttle project. A diversity of features for scientific computing, concurrent and real-time programming, and error handling are discussed. The criteria by which features were evaluated for inclusion into the verifiable subset are described. Individual features of HAL/S with respect to these criteria are examined and justification for the omission of various features from the subset is provided. Conclusions drawn from the research are presented along with recommendations made for the use of HAL/S with respect to the area of program verification.

  7. Tumor cavitation in patients with stage III non-small-cell lung cancer undergoing concurrent chemoradiotherapy: incidence and outcomes.

    PubMed

    Phernambucq, Erik C J; Hartemink, Koen J; Smit, Egbert F; Paul, Marinus A; Postmus, Pieter E; Comans, Emile F I; Senan, Suresh

    2012-08-01

    Commonly reported complications after concurrent chemoradiotherapy (CCRT) in patients with stage III non-small-cell lung cancer (NSCLC) include febrile neutropenia, radiation esophagitis, and pneumonitis. We studied the incidence of tumor cavitation and/or "tumor abscess" after CCRT in a single-institutional cohort. Between 2003 and 2010, 87 patients with stage III NSCLC underwent cisplatin-based CCRT and all subsequent follow-up at the VU University Medical Center. Diagnostic and radiotherapy planning computed tomography scans were reviewed for tumor cavitation, which was defined as a nonbronchial air-containing cavity located within the primary tumor. Pulmonary toxicities scored as Common Toxicity Criteria v3.0 of grade III or more, occurring within 90 days after end of radiotherapy, were analyzed. In the entire cohort, tumor cavitation was observed on computed tomography scans of 16 patients (18%). The histology in cavitated tumors was squamous cell (n = 14), large cell (n = 1), or adenocarcinoma (n = 1). Twenty patients (23%) experienced pulmonary toxicity of grade III or more, other than radiation pneumonitis. Eight patients with a tumor cavitation (seven squamous cell carcinoma) developed severe pulmonary complications; tumor abscess (n = 5), fatal hemorrhage (n = 2), and fatal embolism (n = 1). Two patients with a tumor abscess required open-window thoracostomy post-CCRT. The median overall survival for patients with or without tumor cavitation were 9.9 and 16.3 months, respectively (p = 0.09). With CCRT, acute pulmonary toxicity of grade III or more developed in 50% of patients with stage III NSCLC, who also had radiological features of tumor cavitation. The optimal treatment of patients with this presentation is unclear given the high risk of a tumor abscess.

  8. Analysis of backward error recovery for concurrent processes with recovery blocks

    NASA Technical Reports Server (NTRS)

    Shin, K. G.; Lee, Y. H.

    1982-01-01

    Three different methods of implementing recovery blocks (RBs) are described: the asynchronous, synchronous, and pseudo recovery point implementations. Pseudo recovery points (PRPs) are proposed so that unbounded rollback may be avoided while maintaining process autonomy. Probabilistic models were developed for analyzing these three methods under standard assumptions in computer performance analysis, i.e., exponential distributions for the related random variables. These models were used to estimate the interval between two successive recovery lines for asynchronous RBs, the mean loss in computation power for the synchronized method, and the additional overhead and rollback distance when PRPs are used.
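
    Under the exponential assumptions mentioned above, the expected rollback distance has a simple structure that can be checked with a toy Monte Carlo. The sketch below is not the paper's analytical model; the rates and the single-process setting are illustrative.

```python
# Hedged sketch: mean rollback distance when recovery points are established at
# exponential intervals and a failure occurs at an exponential time.
import random

def mean_rollback(checkpoint_rate, failure_rate, trials=200_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        failure = rng.expovariate(failure_rate)
        t, last_checkpoint = 0.0, 0.0        # a recovery point exists at process start
        while True:
            t += rng.expovariate(checkpoint_rate)
            if t >= failure:
                break
            last_checkpoint = t
        total += failure - last_checkpoint   # work lost to rollback
    return total / trials

# With memoryless intervals the mean rollback works out to 1/(checkpoint_rate + failure_rate).
print(mean_rollback(checkpoint_rate=2.0, failure_rate=0.5))  # ~0.4
```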

  9. Research in computer science

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1984-01-01

    Several short summaries of the work performed during this reporting period are presented. Topics discussed in this document include: (1) resilient seeded errors via simple techniques; (2) knowledge representation for engineering design; (3) analysis of faults in a multiversion software experiment; (4) implementation of parallel programming environment; (5) symbolic execution of concurrent programs; (6) two computer graphics systems for visualization of pressure distribution and convective density particles; (7) design of a source code management system; (8) vectorizing incomplete conjugate gradient on the Cyber 203/205; (9) extensions of domain testing theory and; (10) performance analyzer for the pisces system.

  10. Exploring Contextual Models in Chemical Patent Search

    NASA Astrophysics Data System (ADS)

    Urbain, Jay; Frieder, Ophir

    We explore the development of probabilistic retrieval models for integrating term statistics with entity search using multiple levels of document context to improve the performance of chemical patent search. A distributed indexing model was developed to enable efficient named entity search and aggregation of term statistics at multiple levels of patent structure including individual words, sentences, claims, descriptions, abstracts, and titles. The system can be scaled to an arbitrary number of compute instances in a cloud computing environment to support concurrent indexing and query processing operations on large patent collections.

  11. Test – Retest Reliability and Concurrent Validity of in vivo Myelin Content Indices: Myelin Water Fraction and Calibrated T1w/T2w Image Ratio

    PubMed Central

    Arshad, Muzamil; Stanley, Jeffrey A.; Raz, Naftali

    2016-01-01

    In an age-heterogeneous sample of healthy adults, we examined test-retest reliability (with and without participant re-positioning) of two popular MRI methods of estimating myelin content: modeling the short spin-spin (T2) relaxation component of multi-echo imaging data and computing the ratio of T1-weighted and T2-weighted images (T1w/T2w). Taking the myelin water fraction (MWF) index of myelin content derived from the multi-component T2 relaxation data as a standard, we evaluate the concurrent and differential validity of T1w/T2w ratio images. The results revealed high reliability of MWF and T1w/T2w ratio. However, we found significant correlations of low to moderate magnitude between MWF and the T1w/T2w ratio in only two of six examined regions of the cerebral white matter. Notably, significant correlations of the same or greater magnitude were observed for T1w/T2w ratio and the intermediate T2 relaxation time constant, which is believed to reflect differences in the mobility of water between the intracellular and extracellular compartments. We conclude that although both methods are highly reliable and thus well-suited for longitudinal studies, T1w/T2w ratio has low criterion validity and may be not an optimal index of subcortical myelin content. PMID:28009069
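
    For readers unfamiliar with the T1w/T2w index, it is essentially a voxel-wise division of two co-registered structural images after an intensity calibration. The sketch below is a generic illustration; the calibration constants and array shapes are placeholders, not the study's processing pipeline.

```python
# Hedged sketch: calibrated voxel-wise T1w/T2w ratio image.
import numpy as np

def t1w_t2w_ratio(t1w, t2w, t1w_scale=1.0, t2w_scale=1.0, eps=1e-6):
    """Ratio of linearly calibrated, co-registered T1w and T2w volumes."""
    return (t1w * t1w_scale) / np.maximum(t2w * t2w_scale, eps)  # guard against division by zero

rng = np.random.default_rng(0)
t1w = rng.uniform(100, 200, size=(4, 4, 4))   # placeholder T1-weighted volume
t2w = rng.uniform(50, 150, size=(4, 4, 4))    # placeholder T2-weighted volume
print(t1w_t2w_ratio(t1w, t2w).mean())
```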

  12. A 16X16 Discrete Cosine Transform Chip

    NASA Astrophysics Data System (ADS)

    Sun, M. T.; Chen, T. C.; Gottlieb, A.; Wu, L.; Liou, M. L.

    1987-10-01

    Among various transform coding techniques for image compression, the Discrete Cosine Transform (DCT) is considered to be the most effective method and has been widely used in the laboratory as well as in the marketplace. DCT is computationally intensive. For video application at a 14.3 MHz sample rate, a direct implementation of a 16x16 DCT requires a throughput rate of approximately half a billion multiplications per second. In order to reduce the cost of hardware implementation, a single-chip DCT implementation is highly desirable. In this paper, the implementation of a 16x16 DCT chip using a concurrent architecture is presented. The chip is designed for real-time processing of 14.3 MHz sampled video data. It uses row-column decomposition to implement the two-dimensional transform. Distributed arithmetic combined with bit-serial and bit-parallel structures is used to implement the required vector inner products concurrently. Several schemes are utilized to reduce the size of the required memory. The resultant circuit uses only memory, shift registers, and adders. No multipliers are required. It achieves high-speed performance with a very regular and efficient integrated-circuit realization. The chip accepts 9-bit input and produces 14-bit DCT coefficients; 12 bits are maintained after the first one-dimensional transform. The circuit has been laid out using a 2-μm CMOS technology with the symbolic design tool MULGA. The core contains approximately 73,000 transistors in an area of 7.2 x 7.0
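
    The row-column decomposition mentioned above means the 16x16 two-dimensional DCT is computed as one pass of 16-point one-dimensional DCTs along each dimension. The floating-point sketch below only illustrates that data flow; the chip itself evaluates the inner products with distributed arithmetic and no multipliers.

```python
# Hedged sketch: 16x16 2-D DCT via row-column decomposition (orthonormal DCT-II).
import numpy as np

N = 16

def dct_matrix(n):
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def dct2(block):
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T       # one 1-D pass along each dimension

block = np.random.default_rng(0).integers(0, 256, size=(N, N)).astype(float)
coeffs = dct2(block)
print(np.isclose(coeffs[0, 0], N * block.mean()))  # DC coefficient check: True
```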

  13. Contemporary considerations in concurrent endoscopic sinus surgery and rhinoplasty.

    PubMed

    Steele, Toby O; Gill, Amarbir; Tollefson, Travis T

    2018-06-11

    Characterize indications, perioperative considerations, clinical outcomes and complications for concurrent endoscopic sinus surgery (ESS) and rhinoplasty. Chronic rhinosinusitis and septal deviation with or without inferior turbinate hypertrophy independently impair patient-reported quality of life. Guidelines implore surgeons to include endoscopy to accurately evaluate patient symptoms. Complication rates parallel those of either surgery (ESS and rhinoplasty) alone and are not increased when performed concurrently. Operative time is generally longer for joint surgeries. Patient satisfaction rates are high. Concurrent functional and/or cosmetic rhinoplasty and ESS is a safe endeavor to perform in a single operative setting and most outcomes data suggest excellent patient outcomes. Additional studies that include patient-reported outcome measures are needed.

  14. A comparison of forward and concurrent chaining strategies in teaching laundromat skills to students with severe handicaps.

    PubMed

    McDonnell, J; McFarland, S

    1988-01-01

    This study compared the relative efficiency of forward and concurrent chaining strategies in teaching the use of a commercial washing machine and laundry soap dispenser to four high school students with severe handicaps. Acquisition and maintenance of the laundromat skills were assessed through a multielement, alternating treatment within subject design. Results indicated that the concurrent chaining strategy was more efficient than forward chaining in facilitating acquisition of the activities. Four week and eight week follow-up probes indicated that concurrent chaining resulted in better maintenance of the activities. The implications of these results for teaching community activities and future research in building complex chains are discussed.

  15. Computing under-ice discharge: A proof-of-concept using hydroacoustics and the Probability Concept

    NASA Astrophysics Data System (ADS)

    Fulton, John W.; Henneberg, Mark F.; Mills, Taylor J.; Kohn, Michael S.; Epstein, Brian; Hittle, Elizabeth A.; Damschen, William C.; Laveau, Christopher D.; Lambrecht, Jason M.; Farmer, William H.

    2018-07-01

    Under-ice discharge is estimated using open-water reference hydrographs; however, the ratings for ice-affected sites are generally qualified as poor. The U.S. Geological Survey (USGS), in collaboration with the Colorado Water Conservation Board, conducted a proof-of-concept to develop an alternative method for computing under-ice discharge using hydroacoustics and the Probability Concept. The study site was located south of Minturn, Colorado (CO), USA, and was selected because of (1) its proximity to the existing USGS streamgage 09064600 Eagle River near Minturn, CO, and (2) its ease-of-access to verify discharge using a variety of conventional methods. From late September 2014 to early March 2015, hydraulic conditions varied from open water to under ice. These temporal changes led to variations in water depth and velocity. Hydroacoustics (tethered and uplooking acoustic Doppler current profilers and acoustic Doppler velocimeters) were deployed to measure the vertical-velocity profile at a singularly important vertical of the channel-cross section. Because the velocity profile was non-standard and cannot be characterized using a Power Law or Log Law, velocity data were analyzed using the Probability Concept, which is a probabilistic formulation of the velocity distribution. The Probability Concept-derived discharge was compared to conventional methods including stage-discharge and index-velocity ratings and concurrent field measurements; each is complicated by the dynamics of ice formation, pressure influences on stage measurements, and variations in cross-sectional area due to ice formation. No particular discharge method was assigned as truth. Rather one statistical metric (Kolmogorov-Smirnov; KS), agreement plots, and concurrent measurements provided a measure of comparability between various methods. Regardless of the method employed, comparisons between each method revealed encouraging results depending on the flow conditions and the absence or presence of ice cover. For example, during lower discharges dominated by under-ice and transition (intermittent open-water and under-ice) conditions, the KS metric suggests there is not sufficient information to reject the null hypothesis and implies that the Probability Concept and index-velocity rating represent similar distributions. During high-flow, open-water conditions, the comparisons are less definitive; therefore, it is important that the appropriate analytical method and instrumentation be selected. Six conventional discharge measurements were collected concurrently with Probability Concept-derived discharges with percent differences (%) of -9.0%, -21%, -8.6%, 17.8%, 3.6%, and -2.3%. This proof-of-concept demonstrates that riverine discharges can be computed using the Probability Concept for a range of hydraulic extremes (variations in discharge, open-water and under-ice conditions) immediately after the siting phase is complete, which typically requires one day. Computing real-time discharges is particularly important at sites, where (1) new streamgages are planned, (2) river hydraulics are complex, and (3) shifts in the stage-discharge rating are needed to correct the streamflow record. Use of the Probability Concept does not preclude the need to maintain a stage-area relation. Both the Probability Concept and index-velocity rating offer water-resource managers and decision makers alternatives for computing real-time discharge for open-water and under-ice conditions.
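
    The core of the Probability Concept is an entropy-based velocity distribution in which the ratio of mean to maximum velocity depends on a single parameter M; once M and the maximum velocity at the key vertical are known, discharge follows from the cross-sectional area. The sketch below shows the commonly used relation with illustrative numbers, not the study's calibrated values.

```python
# Hedged sketch of the Probability Concept (entropy) relation:
#   u_mean / u_max = phi(M) = e^M / (e^M - 1) - 1/M,   Q = phi(M) * u_max * A
import math

def phi(M):
    return math.exp(M) / (math.exp(M) - 1.0) - 1.0 / M

def discharge(u_max, M, area):
    return phi(M) * u_max * area

M = 2.1        # entropy parameter, assumed calibrated for the site
u_max = 0.9    # m/s, maximum velocity at the singularly important vertical
area = 5.0     # m^2, cross-sectional area from a stage-area relation
print(f"phi(M) = {phi(M):.2f}, Q = {discharge(u_max, M, area):.2f} m^3/s")
```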

  16. Computing under-ice discharge: A proof-of-concept using hydroacoustics and the Probability Concept

    USGS Publications Warehouse

    Fulton, John W.; Henneberg, Mark F.; Mills, Taylor J.; Kohn, Michael S.; Epstein, Brian; Hittle, Elizabeth A.; Damschen, William C.; Laveau, Christopher D.; Lambrecht, Jason M.; Farmer, William H.

    2018-01-01

    Under-ice discharge is estimated using open-water reference hydrographs; however, the ratings for ice-affected sites are generally qualified as poor. The U.S. Geological Survey (USGS), in collaboration with the Colorado Water Conservation Board, conducted a proof-of-concept to develop an alternative method for computing under-ice discharge using hydroacoustics and the Probability Concept. The study site was located south of Minturn, Colorado (CO), USA, and was selected because of (1) its proximity to the existing USGS streamgage 09064600 Eagle River near Minturn, CO, and (2) its ease-of-access to verify discharge using a variety of conventional methods. From late September 2014 to early March 2015, hydraulic conditions varied from open water to under ice. These temporal changes led to variations in water depth and velocity. Hydroacoustics (tethered and uplooking acoustic Doppler current profilers and acoustic Doppler velocimeters) were deployed to measure the vertical-velocity profile at a singularly important vertical of the channel-cross section. Because the velocity profile was non-standard and cannot be characterized using a Power Law or Log Law, velocity data were analyzed using the Probability Concept, which is a probabilistic formulation of the velocity distribution. The Probability Concept-derived discharge was compared to conventional methods including stage-discharge and index-velocity ratings and concurrent field measurements; each is complicated by the dynamics of ice formation, pressure influences on stage measurements, and variations in cross-sectional area due to ice formation. No particular discharge method was assigned as truth. Rather one statistical metric (Kolmogorov-Smirnov; KS), agreement plots, and concurrent measurements provided a measure of comparability between various methods. Regardless of the method employed, comparisons between each method revealed encouraging results depending on the flow conditions and the absence or presence of ice cover. For example, during lower discharges dominated by under-ice and transition (intermittent open-water and under-ice) conditions, the KS metric suggests there is not sufficient information to reject the null hypothesis and implies that the Probability Concept and index-velocity rating represent similar distributions. During high-flow, open-water conditions, the comparisons are less definitive; therefore, it is important that the appropriate analytical method and instrumentation be selected. Six conventional discharge measurements were collected concurrently with Probability Concept-derived discharges with percent differences (%) of −9.0%, −21%, −8.6%, 17.8%, 3.6%, and −2.3%. This proof-of-concept demonstrates that riverine discharges can be computed using the Probability Concept for a range of hydraulic extremes (variations in discharge, open-water and under-ice conditions) immediately after the siting phase is complete, which typically requires one day. Computing real-time discharges is particularly important at sites, where (1) new streamgages are planned, (2) river hydraulics are complex, and (3) shifts in the stage-discharge rating are needed to correct the streamflow record. Use of the Probability Concept does not preclude the need to maintain a stage-area relation. Both the Probability Concept and index-velocity rating offer water-resource managers and decision makers alternatives for computing real-time discharge for open-water and under-ice conditions.

  17. Concurrent sexual partnerships among men who have sex with men in Shenzhen, China.

    PubMed

    Ha, Toan H; Liu, Hongjie; Liu, Hui; Cai, Yumao; Feng, Tiejian

    2010-08-01

    The HIV epidemic spreads among men who have sex with men (MSM) in China. The objective of this study was to examine and compare HIV/AIDS knowledge and sexual risk for HIV between MSM who engaged in concurrent sexual partnerships and MSM who did not. A cross-sectional study using respondent driven sampling was conducted among 351 MSM in Shenzhen, China. About half (49%) of respondents reported having concurrent sexual partnerships during the past 6 months. Among MSM with concurrent sexual partnerships, 62% had only male partners and 38% had both male and female partners. The proportion of inconsistent condom use was 42% among MSM with concurrent partners and 30% among MSM without. These 2 groups reported a similar level of self-perceived risk for HIV. Compared to MSM without concurrent sexual partners, those with such partners were more likely to work in entertainment venues and had a lower level of HIV/AIDS knowledge. The large number of MSM engaging in concurrent sexual partnerships and the high prevalence of bisexuality could accelerate the spread of HIV to the general population unless effective HIV interventions for MSM are implemented in China.

  18. CUDASW++ 3.0: accelerating Smith-Waterman protein database search by coupling CPU and GPU SIMD instructions.

    PubMed

    Liu, Yongchao; Wirawan, Adrianto; Schmidt, Bertil

    2013-04-04

    The maximal sensitivity for local alignments makes the Smith-Waterman algorithm a popular choice for protein sequence database search based on pairwise alignment. However, the algorithm is compute-intensive due to a quadratic time complexity. Corresponding runtimes are further compounded by the rapid growth of sequence databases. We present CUDASW++ 3.0, a fast Smith-Waterman protein database search algorithm, which couples CPU and GPU SIMD instructions and carries out concurrent CPU and GPU computations. For the CPU computation, this algorithm employs SSE-based vector execution units as accelerators. For the GPU computation, we have investigated for the first time a GPU SIMD parallelization, which employs CUDA PTX SIMD video instructions to gain more data parallelism beyond the SIMT execution model. Moreover, sequence alignment workloads are automatically distributed over CPUs and GPUs based on their respective compute capabilities. Evaluation on the Swiss-Prot database shows that CUDASW++ 3.0 gains a performance improvement over CUDASW++ 2.0 up to 2.9 and 3.2, with a maximum performance of 119.0 and 185.6 GCUPS, on a single-GPU GeForce GTX 680 and a dual-GPU GeForce GTX 690 graphics card, respectively. In addition, our algorithm has demonstrated significant speedups over other top-performing tools: SWIPE and BLAST+. CUDASW++ 3.0 is written in CUDA C++ and PTX assembly languages, targeting GPUs based on the Kepler architecture. This algorithm obtains significant speedups over its predecessor: CUDASW++ 2.0, by benefiting from the use of CPU and GPU SIMD instructions as well as the concurrent execution on CPUs and GPUs. The source code and the simulated data are available at http://cudasw.sourceforge.net.
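
    For context, the recurrence that CUDASW++ vectorizes is the standard Smith-Waterman local-alignment dynamic program. The scalar sketch below uses a linear gap penalty and a toy scoring scheme; the tool itself implements an affine-gap, SIMD-parallel version.

```python
# Hedged sketch: textbook Smith-Waterman local alignment score (linear gap penalty).
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,  # substitution
                          H[i - 1][j] + gap,    # gap in b
                          H[i][j - 1] + gap)    # gap in a
            best = max(best, H[i][j])
    return best

print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))  # optimal local-alignment score
```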

  19. From chalkboard, slides, and paper to e-learning: How computing technologies have transformed anatomical sciences education.

    PubMed

    Trelease, Robert B

    2016-11-01

    Until the late-twentieth century, primary anatomical sciences education was relatively unenhanced by advanced technology and dependent on the mainstays of printed textbooks, chalkboard- and photographic projection-based classroom lectures, and cadaver dissection laboratories. But over the past three decades, diffusion of innovations in computer technology transformed the practices of anatomical education and research, along with other aspects of work and daily life. Increasing adoption of first-generation personal computers (PCs) in the 1980s paved the way for the first practical educational applications, and visionary anatomists foresaw the usefulness of computers for teaching. While early computers lacked high-resolution graphics capabilities and interactive user interfaces, applications with video discs demonstrated the practicality of programming digital multimedia linking descriptive text with anatomical imaging. Desktop publishing established that computers could be used for producing enhanced lecture notes, and commercial presentation software made it possible to give lectures using anatomical and medical imaging, as well as animations. Concurrently, computer processing supported the deployment of medical imaging modalities, including computed tomography, magnetic resonance imaging, and ultrasound, that were subsequently integrated into anatomy instruction. Following its public birth in the mid-1990s, the World Wide Web became the ubiquitous multimedia networking technology underlying the conduct of contemporary education and research. Digital video, structural simulations, and mobile devices have been more recently applied to education. Progressive implementation of computer-based learning methods interacted with waves of ongoing curricular change, and such technologies have been deemed crucial for continuing medical education reforms, providing new challenges and opportunities for anatomical sciences educators. Anat Sci Educ 9: 583-602. © 2016 American Association of Anatomists. © 2016 American Association of Anatomists.

  20. Overview of the LINCS architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fletcher, J.G.; Watson, R.W.

    1982-01-13

    Computing at the Lawrence Livermore National Laboratory (LLNL) has evolved over the past 15 years with a computer network based resource sharing environment. The increasing use of low cost and high performance micro, mini and midi computers and commercially available local networking systems will accelerate this trend. Further, even the large scale computer systems, on which much of the LLNL scientific computing depends, are evolving into multiprocessor systems. It is our belief that the most cost effective use of this environment will depend on the development of application systems structured into cooperating concurrent program modules (processes) distributed appropriately over different nodes of the environment. A node is defined as one or more processors with a local (shared) high speed memory. Given the latter view, the environment can be characterized as consisting of: multiple nodes communicating over noisy channels with arbitrary delays and throughput, heterogeneous base resources and information encodings, no single administration controlling all resources, distributed system state, and no uniform time base. The system design problem is how to turn the heterogeneous base hardware/firmware/software resources of this environment into a coherent set of resources that facilitate development of cost effective, reliable, and human engineered applications. We believe the answer lies in developing a layered, communication oriented distributed system architecture; layered and modular to support ease of understanding, reconfiguration, extensibility, and hiding of implementation or nonessential local details; communication oriented because that is a central feature of the environment. The Livermore Interactive Network Communication System (LINCS) is a hierarchical architecture designed to meet the above needs. While having characteristics in common with other architectures, it differs in several respects.

  1. Predictors of pulmonary toxicity in limited stage small cell lung cancer patients treated with induction chemotherapy followed by concurrent platinum-based chemotherapy and 70 Gy daily radiotherapy: CALGB 30904.

    PubMed

    Salama, Joseph K; Pang, Herbert; Bogart, Jeffrey A; Blackstock, A William; Urbanic, James J; Hogson, Lydia; Crawford, Jeffrey; Vokes, Everett E

    2013-12-01

    Standard therapy for limited stage small cell lung cancer (L-SCLC) is concurrent chemotherapy and radiotherapy followed by prophylactic cranial radiotherapy. Predictors of post-chemoradiotherapy pulmonary toxicity in limited stage (LS) small cell lung cancer (SCLC) patients are not well defined. Current guidelines are derived from non-small cell lung cancer regimens, and do not account for the unique biology of this disease. Therefore, we analyzed patients on three consecutive CALGB LS-SCLC trials treated with concurrent chemotherapy and daily high dose radiotherapy (70 Gy) to determine patient and treatment related factors predicting for post-treatment pulmonary toxicity. Patients treated on CALGB protocols 39808, 30002, and 30206 investigating two cycles of chemotherapy followed by concurrent chemotherapy and 70 Gy daily thoracic radiation therapy were pooled. Patient, tumor, and treatment related factors were evaluated to determine predictors of grade 3–5 pulmonary toxicities after concurrent chemoradiotherapy. 100 patients were included. No patient experienced grade 4–5 post-treatment pulmonary toxicity. Patients who experienced post-treatment pulmonary toxicity were more likely to be older (median age 69 vs 60, p = 0.09) and to have smaller total lung volumes (2565 cc vs 3530 cc, p = 0.05). Furthermore, exposure of larger volumes of lung to lower (median V5 = 70%, p = 0.09; median V10 = 63%, p = 0.07), intermediate (median V20 = 50%, p = 0.04), and high (median V60 = 25%, p = 0.01) doses of radiation was associated with post-treatment grade 3 pulmonary toxicity, as was a larger mean lung radiation dose (median 31 Gy, p = 0.019). Post-treatment pulmonary toxicity following the completion of 2 cycles of chemotherapy followed by concurrent chemotherapy and high dose daily radiation therapy was uncommon. Care should be taken to minimize mean lung radiation exposure, as well as the volumes of lung receiving low, intermediate, and high doses of radiation.

  2. Cloud-based solution to identify statistically significant MS peaks differentiating sample categories.

    PubMed

    Ji, Jun; Ling, Jeffrey; Jiang, Helen; Wen, Qiaojun; Whitin, John C; Tian, Lu; Cohen, Harvey J; Ling, Xuefeng B

    2013-03-23

    Mass spectrometry (MS) has evolved to become the primary high-throughput tool for proteomics-based biomarker discovery. To date, multiple challenges in protein MS data analysis remain: large-scale and complex data set management; MS peak identification and indexing; and high-dimensional differential peak analysis with false discovery rate (FDR) control based on concurrent statistical tests. "Turnkey" solutions are needed for biomarker investigations to rapidly process MS data sets and identify statistically significant peaks for subsequent validation. Here we present an efficient and effective solution, which provides experimental biologists easy access to "cloud" computing capabilities to analyze MS data. The web portal can be accessed at http://transmed.stanford.edu/ssa/. The presented web application supports online uploading and analysis of large-scale MS data through a simple user interface. This bioinformatic tool will facilitate the discovery of potential protein biomarkers using MS.

  3. Automated Measurement of Facial Expression in Infant-Mother Interaction: A Pilot Study

    PubMed Central

    Messinger, Daniel S.; Mahoor, Mohammad H.; Chow, Sy-Miin; Cohn, Jeffrey F.

    2009-01-01

    Automated facial measurement using computer vision has the potential to objectively document continuous changes in behavior. To examine emotional expression and communication, we used automated measurements to quantify smile strength, eye constriction, and mouth opening in two six-month-old/mother dyads who each engaged in a face-to-face interaction. Automated measurements showed high associations with anatomically based manual coding (concurrent validity); measurements of smiling showed high associations with mean ratings of positive emotion made by naive observers (construct validity). For both infants and mothers, smile strength and eye constriction (the Duchenne marker) were correlated over time, creating a continuous index of smile intensity. Infant and mother smile activity exhibited changing (nonstationary) local patterns of association, suggesting the dyadic repair and dissolution of states of affective synchrony. The study provides insights into the potential and limitations of automated measurement of facial action. PMID:19885384

  4. Reengineering the Project Design Process

    NASA Technical Reports Server (NTRS)

    Casani, E.; Metzger, R.

    1994-01-01

    In response to NASA's goal of working faster, better and cheaper, JPL has developed extensive plans to minimize cost, maximize customer and employee satisfaction, and implement small- and moderate-size missions. These plans include improved management structures and processes, enhanced technical design processes, the incorporation of new technology, and the development of more economical space- and ground-system designs. The Laboratory's new Flight Projects Implementation Office has been chartered to oversee these innovations and the reengineering of JPL's project design process, including establishment of the Project Design Center and the Flight System Testbed. Reengineering at JPL implies a cultural change whereby the character of its design process will change from sequential to concurrent and from hierarchical to parallel. The Project Design Center will support missions offering high science return, design to cost, demonstrations of new technology, and rapid development. Its computer-supported environment will foster high-fidelity project life-cycle development and cost estimating.

  5. Challenges and Opportunities in Interdisciplinary Materials Research Experiences for Undergraduates

    NASA Astrophysics Data System (ADS)

    Vohra, Yogesh; Nordlund, Thomas

    2009-03-01

    The University of Alabama at Birmingham (UAB) offers a broad range of interdisciplinary materials research experiences to undergraduate students with diverse backgrounds in physics, chemistry, applied mathematics, and engineering. The research projects offered cover a broad range of topics including high pressure physics, microelectronic materials, nano-materials, laser materials, bioceramics and biopolymers, cell-biomaterials interactions, planetary materials, and computer simulation of materials. The students welcome the opportunity to work with an interdisciplinary team of basic science, engineering, and biomedical faculty, but the challenge lies in learning the key vocabulary for interdisciplinary collaborations, mastering the experimental tools, and working in an independent capacity. The career development workshops dealing with the graduate school application process and entrepreneurial business activities were found to be most effective. The interdisciplinary university-wide poster session helped students broaden their horizons regarding research careers. The synergy of the REU program with other concurrently running high school summer programs on the UAB campus will also be discussed.

  6. True Concurrent Thermal Engineering Integrating CAD Model Building with Finite Element and Finite Difference Methods

    NASA Technical Reports Server (NTRS)

    Panczak, Tim; Ring, Steve; Welch, Mark

    1999-01-01

    Thermal engineering has long been left out of the concurrent engineering environment dominated by CAD (computer aided design) and FEM (finite element method) software. Current tools attempt to force the thermal design process into an environment primarily created to support structural analysis, which results in inappropriate thermal models. As a result, many thermal engineers either build models "by hand" or use geometric user interfaces that are separate from and have little useful connection, if any, to CAD and FEM systems. This paper describes the development of a new thermal design environment called the Thermal Desktop. This system, while fully integrated into a neutral, low-cost CAD system and utilizing both FEM and FD methods, does not compromise the needs of the thermal engineer. Rather, the features needed for concurrent thermal analysis are specifically addressed by combining traditional parametric surface-based radiation and FD-based conduction modeling with CAD and FEM methods. The use of flexible and familiar temperature solvers such as SINDA/FLUINT (Systems Improved Numerical Differencing Analyzer/Fluid Integrator) is retained.

  7. Falls Risk and Simulated Driving Performance in Older Adults

    PubMed Central

    Gaspar, John G.; Neider, Mark B.; Kramer, Arthur F.

    2013-01-01

    Declines in executive function and dual-task performance have been related to falls in older adults, and recent research suggests that older adults at risk for falls also show impairments on real-world tasks, such as crossing a street. The present study examined whether falls risk was associated with driving performance in a high-fidelity simulator. Participants were classified as high or low falls risk using the Physiological Profile Assessment and completed a number of challenging simulated driving assessments in which they responded quickly to unexpected events. High falls risk drivers had slower response times (~2.1 seconds) to unexpected events compared to low falls risk drivers (~1.7 seconds). Furthermore, when asked to perform a concurrent cognitive task while driving, high falls risk drivers showed greater costs to secondary task performance than did low falls risk drivers, and low falls risk older adults also outperformed high falls risk older adults on a computer-based measure of dual-task performance. Our results suggest that attentional differences between high and low falls risk older adults extend to simulated driving performance. PMID:23509627

  8. Flame Spread and Extinction Over a Thick Solid Fuel in Low-Velocity Opposed and Concurrent Flows

    NASA Astrophysics Data System (ADS)

    Zhu, Feng; Lu, Zhanbin; Wang, Shuangfeng

    2016-05-01

    Flame spread and extinction phenomena over a thick PMMA in purely opposed and concurrent flows are investigated by conducting systematical experiments in a narrow channel apparatus. The present tests focus on low-velocity flow regime and hence complement experimental data previously reported for high and moderate velocity regimes. In the flow velocity range tested, the opposed flame is found to spread much faster than the concurrent flame at a given flow velocity. The measured spread rates for opposed and concurrent flames can be correlated by corresponding theoretical models of flame spread, indicating that existing models capture the main mechanisms controlling the flame spread. In low-velocity gas flows, however, the experimental results are observed to deviate from theoretical predictions. This may be attributed to the neglect of radiative heat loss in the theoretical models, whereas radiation becomes important for low-intensity flame spread. Flammability limits using oxygen concentration and flow velocity as coordinates are presented for both opposed and concurrent flame spread configurations. It is found that concurrent spread has a wider flammable range than opposed case. Beyond the flammability boundary of opposed spread, there is an additional flammable area for concurrent spread, where the spreading flame is sustainable in concurrent mode only. The lowest oxygen concentration allowing concurrent flame spread in forced flow is estimated to be approximately 14 % O2, substantially below that for opposed spread (18.5 % O2).

  9. a Novel Approach of Indexing and Retrieving Spatial Polygons for Efficient Spatial Region Queries

    NASA Astrophysics Data System (ADS)

    Zhao, J. H.; Wang, X. Z.; Wang, F. Y.; Shen, Z. H.; Zhou, Y. C.; Wang, Y. L.

    2017-10-01

    Spatial region queries are more and more widely used in web-based applications. Mechanisms to provide efficient query processing over geospatial data are essential. However, due to the massive geospatial data volume, heavy geometric computation, and high access concurrency, it is difficult to get responses in real time. Spatial indexes are usually used in this situation. In this paper, based on the k-d tree, we introduce a distributed KD-Tree (DKD-Tree) suitable for polygon data, and a two-step query algorithm. The spatial index construction is recursive and iterative, and the query is an in-memory process. Both the index and query methods can be processed in parallel, and are implemented based on HDFS, Spark and Redis. Experiments on a large volume of remote-sensing image metadata have been carried out, and the advantages of our method are investigated by comparing with spatial region queries executed on PostgreSQL and PostGIS. Results show that our approach not only greatly improves the efficiency of spatial region queries, but also has good scalability. Moreover, the two-step spatial range query algorithm can also save cluster resources to support a large number of concurrent queries. Therefore, this method is very useful when building large geographic information systems.
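
    The two-step query algorithm mentioned above follows the usual filter-and-refine pattern: a cheap bounding-box test driven by the index prunes candidates, and exact geometry is evaluated only for the survivors. The sketch below shows that pattern in miniature (without the distributed k-d tree, HDFS, Spark, or Redis); the refine step is deliberately simplified to a vertex-in-rectangle test.

```python
# Hedged sketch: filter (bounding boxes) then refine (simplified geometry test).
def mbr(polygon):
    xs, ys = zip(*polygon)
    return min(xs), min(ys), max(xs), max(ys)

def mbr_intersects(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def region_query(polygons, query_rect):
    candidates = [p for p in polygons if mbr_intersects(mbr(p), query_rect)]  # step 1: filter
    xmin, ymin, xmax, ymax = query_rect
    return [p for p in candidates                                             # step 2: refine
            if any(xmin <= x <= xmax and ymin <= y <= ymax for x, y in p)]

polygons = [[(0, 0), (2, 0), (1, 2)], [(10, 10), (12, 10), (11, 12)]]
print(region_query(polygons, (0.5, 0.5, 3.0, 3.0)))  # only the first triangle survives
```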

  10. Efficiently sampling conformations and pathways using the concurrent adaptive sampling (CAS) algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.

    Molecular dynamics (MD) simulations are useful in obtaining thermodynamic and kinetic properties of bio-molecules but are limited by the timescale barrier, i.e., we may be unable to efficiently obtain properties because we need to run microseconds or longer simulations using femtosecond time steps. While there are several existing methods to overcome this timescale barrier and efficiently sample thermodynamic and/or kinetic properties, problems remain in regard to being able to sample unknown systems, deal with a high-dimensional space of collective variables, and focus the computational effort on slow timescales. Hence, a new sampling method, called the “Concurrent Adaptive Sampling (CAS) algorithm,” has been developed to tackle these three issues and efficiently obtain conformations and pathways. The method is not constrained to use only one or two collective variables, unlike most reaction coordinate-dependent methods. Instead, it can use a large number of collective variables and uses macrostates (a partition of the collective variable space) to enhance the sampling. The exploration is done by running a large number of short simulations, and a clustering technique is used to accelerate the sampling. In this paper, we introduce the new methodology and show results from two-dimensional models and bio-molecules, such as penta-alanine and triazine polymer.

  11. The Simple Concurrent Online Processing System (SCOPS) - An open-source interface for remotely sensed data processing

    NASA Astrophysics Data System (ADS)

    Warren, M. A.; Goult, S.; Clewley, D.

    2018-06-01

    Advances in technology allow remotely sensed data to be acquired with increasingly higher spatial and spectral resolutions. These data may then be used to influence government decision making and solve a number of research and application driven questions. However, such large volumes of data can be difficult to handle on a single personal computer or on older machines with slower components. Often the software required to process data is varied and can be highly technical and too advanced for the novice user to fully understand. This paper describes an open-source tool, the Simple Concurrent Online Processing System (SCOPS), which forms part of an airborne hyperspectral data processing chain that allows users accessing the tool over a web interface to submit jobs and process data remotely. It is demonstrated using Natural Environment Research Council Airborne Research Facility (NERC-ARF) instruments together with other free- and open-source tools to take radiometrically corrected data from sensor geometry into geocorrected form and to generate simple or complex band ratio products. The final processed data products are acquired via an HTTP download. SCOPS can cut data processing times and introduce complex processing software to novice users by distributing jobs across a network using a simple to use web interface.
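
    The "simple or complex band ratio products" referred to above are per-pixel arithmetic on pairs (or combinations) of spectral bands. The sketch below shows a generic normalized-difference ratio (NDVI-like); the band values are synthetic and the formula is an example, not a specific SCOPS recipe.

```python
# Hedged sketch: per-pixel normalized band ratio, (A - B) / (A + B).
import numpy as np

def normalized_ratio(band_a, band_b, eps=1e-9):
    a, b = band_a.astype(float), band_b.astype(float)
    return (a - b) / np.maximum(a + b, eps)   # eps guards against empty pixels

nir = np.array([[0.40, 0.55], [0.60, 0.35]])  # near-infrared reflectance (synthetic)
red = np.array([[0.10, 0.08], [0.05, 0.20]])  # red reflectance (synthetic)
print(normalized_ratio(nir, red))             # values near +1 suggest vegetation-like spectra
```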

  12. Implementation of a direct-imaging and FX correlator for the BEST-2 array

    NASA Astrophysics Data System (ADS)

    Foster, G.; Hickish, J.; Magro, A.; Price, D.; Zarb Adami, K.

    2014-04-01

    A new digital backend has been developed for the Basic Element for SKA Training II (BEST-2) array at Radiotelescopi di Medicina, INAF-IRA, Italy, which allows concurrent operation of an FX correlator, and a direct-imaging correlator and beamformer. This backend serves as a platform for testing some of the spatial Fourier transform concepts which have been proposed for use in computing correlations on regularly gridded arrays. While spatial Fourier transform-based beamformers have been implemented previously, this is, to our knowledge, the first time a direct-imaging correlator has been deployed on a radio astronomy array. Concurrent observations with the FX and direct-imaging correlator allow for direct comparison between the two architectures. Additionally, we show the potential of the direct-imaging correlator for time-domain astronomy, by passing a subset of beams though a pulsar and transient detection pipeline. These results provide a timely verification for spatial Fourier transform-based instruments that are currently in commissioning. These instruments aim to detect highly redshifted hydrogen from the epoch of reionization and/or to perform wide-field surveys for time-domain studies of the radio sky. We experimentally show the direct-imaging correlator architecture to be a viable solution for correlation and beamforming.
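
    In an FX correlator the "F" stage Fourier transforms each antenna's time stream and the "X" stage cross-multiplies and accumulates the spectra for every antenna pair; a direct-imaging correlator instead applies a spatial FFT across the regularly gridded antennas. The sketch below illustrates only the FX step for one pair, with synthetic signals rather than BEST-2 data.

```python
# Hedged sketch: FX correlation of one antenna pair (FFT, cross-multiply, accumulate).
import numpy as np

def fx_correlate(x, y, nchan=64):
    """Average cross-power spectrum of two time streams, processed in FFT blocks."""
    nblocks = len(x) // nchan
    acc = np.zeros(nchan, dtype=complex)
    for k in range(nblocks):
        X = np.fft.fft(x[k * nchan:(k + 1) * nchan])
        Y = np.fft.fft(y[k * nchan:(k + 1) * nchan])
        acc += X * np.conj(Y)   # the "X" (cross-multiply) stage, per frequency channel
    return acc / nblocks

rng = np.random.default_rng(0)
sky = rng.normal(size=4096)                          # common (correlated) signal
x = sky + 0.5 * rng.normal(size=4096)                # antenna 1
y = np.roll(sky, 1) + 0.5 * rng.normal(size=4096)    # antenna 2, delayed by one sample
print(np.angle(fx_correlate(x, y)[:4]))              # phase ramp across channels reveals the delay
```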

  13. Investigation of the probability of concurrent drought events between the water source and destination regions of China's water diversion project

    NASA Astrophysics Data System (ADS)

    Liu, Xiaomang; Luo, Yuzhou; Yang, Tiantian; Liang, Kang; Zhang, Minghua; Liu, Changming

    2015-10-01

    In this study, we investigate the concurrent drought probability between the water source and destination regions of the central route of China's South to North Water Diversion Project. We find that both regions have been drying from 1960 to 2013. The estimated return period of concurrent drought events in both regions is 11 years. However, since 1997, these regions have experienced 5 years of simultaneous drought. The projection results of global climate models show that the probability of concurrent drought events is highly likely to increase during 2020 to 2050. The increasing concurrent drought events will challenge the success of the water diversion project, which is a strategic attempt to resolve the water crisis of North China Plain. The data suggest great urgency in preparing adaptive measures to ensure the long-term sustainable operation of the water diversion project.
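
    The 11-year return period quoted above corresponds to roughly a 9% chance per year that both regions are in drought simultaneously; given annual drought indicators for each region, the return period is just the reciprocal of the empirical joint frequency. The sketch below uses synthetic indicators, not the study's series.

```python
# Hedged sketch: empirical joint drought probability and its return period.
import numpy as np

rng = np.random.default_rng(0)
years = 54                                   # e.g., a 1960-2013 record
source_drought = rng.random(years) < 0.25    # synthetic indicator for the source region
dest_drought = rng.random(years) < 0.30      # synthetic indicator for the destination region

p_joint = max((source_drought & dest_drought).mean(), 1.0 / years)  # guard against zero counts
print(f"joint probability = {p_joint:.2f}, return period = {1.0 / p_joint:.1f} years")
```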

  14. Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.

    NASA Astrophysics Data System (ADS)

    Battiti, Roberto

    1990-01-01

    This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real-time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium grain distributed memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from multiple-purpose modules. In the last part of the thesis a well-known optimization method (the Broyden-Fletcher-Goldfarb-Shanno memoryless quasi-Newton method) is applied to simple classification problems and shown to be superior to the "error back-propagation" algorithm for numerical stability, automatic selection of parameters, and convergence properties.

  15. Technology & Disability: Research, Design, Practice, and Policy. Proceedings of the RESNA International Conference (25th, Minneapolis, Minnesota, June 27-July 1, 2002).

    ERIC Educational Resources Information Center

    Simpson, Richard, Ed.

    These proceedings of the 2002 annual RESNA (Association for the Advancement of Rehabilitation Technology) conference include more than 200 presentations on all facets of assistive technology, including concurrent sessions, scientific platform sessions, interactive poster presentations, computer demonstrations, and the research symposium. The…

  16. EPIC Computational Models of Psychological Refractory-Period Effects in Human Multiple-Task Performance.

    ERIC Educational Resources Information Center

    Meyer, David E.; Kieras, David E.

    Perceptual-motor and cognitive processes whereby people perform multiple concurrent tasks have been studied through an overlapping-tasks procedure in which two successive choice-reaction tasks are performed with a variable interval (stimulus onset asynchrony, or SOA) between the beginning of the first and second tasks. The increase in subjects'…

  17. Concurrent Validity of the "Working with Others Scale" of the ICIS Employment Interview System

    ERIC Educational Resources Information Center

    Cassidy, Martha W.

    2011-01-01

    The purpose of this study was to determine if the Working with Others Scale from the American Association of School Personnel Administrators (AASPA) Interactive Computer Interview System (ICIS) was a valid predictor of practicing teachers' interpersonal skills and abilities to work well with colleagues. Participants in the study were all employed…

  18. Utilizing Human Patient Simulators (HPS) to Meet Learning Objectives across Concurrent Core Nursing Courses: A Pilot Study

    ERIC Educational Resources Information Center

    Miller, Charman L.; Leadingham, Camille; Vance, Ronald

    2010-01-01

    Associate Degree Nursing (ADN) faculty are challenged by the monumental responsibility of preparing students to function as safe, professional nurses in a two year course of study. Advances in computer technology and emphasis on integrating technology and active learning strategies into existing course structures have prompted many nurse educators…

  19. Human-Computer Interaction: A Journal of Theoretical, Empirical and Methodological Issues of User Science and of System Design. Volume 7, Number 1

    DTIC Science & Technology

    1992-01-01

    [Front-matter snippet: editorial board listing -- Norman, University of California, San Diego, CA; Dan R. Olsen, Jr., Brigham...; Peter G. Polson, University of Colorado, Boulder, CO; James R. Rhyne, IBM T. J. Watson... -- and topics in artificial intelligence, among which are: reasoning about concurrent systems, including program verification (Barringer, 1985), operating...]

  20. Submicron Systems Architecture Project

    DTIC Science & Technology

    1981-11-01

    This project is concerned with the architecture, design, and testing of VLSI Systems. The principal activities in this report period include: The Tree Machine; COPE, The Homogeneous Machine; Computational Arrays; Switch-Level Model for MOS Logic Design; Testing; Local Network and Designer Workstations; Self-timed Systems; Characterization of Deadlock Free Resource Contention; Concurrency Algebra; Language Design and Logic for Program Verification.

  1. A Concurrent Implementation of the Cascade-Correlation Algorithm, Using the Time Warp Operating System

    NASA Technical Reports Server (NTRS)

    Springer, P.

    1993-01-01

    This paper discusses the method by which the Cascade-Correlation algorithm was parallelized in such a way that it could be run using the Time Warp Operating System (TWOS). TWOS is a special purpose operating system designed to run parallel discrete event simulations with maximum efficiency on parallel or distributed computers.

  2. Solving a mathematical model integrating unequal-area facilities layout and part scheduling in a cellular manufacturing system by a genetic algorithm.

    PubMed

    Ebrahimi, Ahmad; Kia, Reza; Komijan, Alireza Rashidi

    2016-01-01

    In this article, a novel integrated mixed-integer nonlinear programming model is presented for designing a cellular manufacturing system (CMS) considering machine layout and part scheduling problems simultaneously as interrelated decisions. The integrated CMS model is formulated to incorporate several design features including part due date, material handling time, operation sequence, processing time, an intra-cell layout of unequal-area facilities, and part scheduling. The objective function is to minimize makespan, tardiness penalties, and material handling costs of inter-cell and intra-cell movements. Two numerical examples are solved by the Lingo software to illustrate the results obtained by the incorporated features. In order to assess the effects and importance of integrating machine layout and part scheduling in designing a CMS, two approaches, sequential and concurrent, are investigated, and the improvement resulting from the concurrent approach is revealed. Also, due to the NP-hardness of the integrated model, an efficient genetic algorithm is designed. As a consequence, computational results of this study indicate that the best solutions found by the GA are better than the solutions found by B&B in much less time for both the sequential and concurrent approaches. Moreover, the comparisons between the objective function values (OFVs) obtained by the sequential and concurrent approaches demonstrate that the OFV improvement is on average around 17% by GA and 14% by B&B.
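
    A minimal sketch of the kind of genetic algorithm described above, assuming a permutation encoding, truncation selection, order crossover, and swap mutation; the toy cost function is a stand-in and is not the paper's combined makespan/tardiness/handling MINLP objective.

        # Minimal GA sketch for a combined layout/scheduling-style objective (illustrative only).
        import random

        def cost(perm, proc_times, move_cost):
            # toy objective: flow-shop-like makespan plus a crude handling penalty
            makespan, handling = 0.0, 0.0
            for i, job in enumerate(perm):
                makespan += proc_times[job]
                if i > 0:
                    handling += move_cost * abs(perm[i] - perm[i - 1])
            return makespan + handling

        def order_crossover(p1, p2):
            a, b = sorted(random.sample(range(len(p1)), 2))
            child = [None] * len(p1)
            child[a:b] = p1[a:b]
            fill = [g for g in p2 if g not in child]
            for i in range(len(child)):
                if child[i] is None:
                    child[i] = fill.pop(0)
            return child

        def ga(n_jobs=10, pop_size=40, generations=200, mut_rate=0.2, seed=1):
            random.seed(seed)
            proc_times = [random.uniform(1, 10) for _ in range(n_jobs)]
            pop = [random.sample(range(n_jobs), n_jobs) for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=lambda p: cost(p, proc_times, move_cost=0.5))
                survivors = pop[: pop_size // 2]
                children = []
                while len(survivors) + len(children) < pop_size:
                    p1, p2 = random.sample(survivors, 2)
                    child = order_crossover(p1, p2)
                    if random.random() < mut_rate:          # swap mutation
                        i, j = random.sample(range(n_jobs), 2)
                        child[i], child[j] = child[j], child[i]
                    children.append(child)
                pop = survivors + children
            best = min(pop, key=lambda p: cost(p, proc_times, move_cost=0.5))
            return best, cost(best, proc_times, move_cost=0.5)

        if __name__ == "__main__":
            print(ga())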

  3. The internal gravity wave spectrum in two high-resolution global ocean models

    NASA Astrophysics Data System (ADS)

    Arbic, B. K.; Ansong, J. K.; Buijsman, M. C.; Kunze, E. L.; Menemenlis, D.; Müller, M.; Richman, J. G.; Savage, A.; Shriver, J. F.; Wallcraft, A. J.; Zamudio, L.

    2016-02-01

    We examine the internal gravity wave (IGW) spectrum in two sets of high-resolution global ocean simulations that are forced concurrently by atmospheric fields and the astronomical tidal potential. We analyze global 1/12th and 1/25th degree HYCOM simulations, and global 1/12th, 1/24th, and 1/48th degree simulations of the MITgcm. We are motivated by the central role that IGWs play in ocean mixing, by operational considerations of the US Navy, which runs HYCOM as an ocean forecast model, and by the impact of the IGW continuum on the sea surface height (SSH) measurements that will be taken by the planned NASA/CNES SWOT wide-swath altimeter mission. We (1) compute the IGW horizontal wavenumber-frequency spectrum of kinetic energy, and interpret the results with linear dispersion relations computed from the IGW Sturm-Liouville problem, (2) compute and similarly interpret nonlinear spectral kinetic energy transfers in the IGW band, (3) compute and similarly interpret IGW contributions to SSH variance, (4) perform comparisons of modeled IGW kinetic energy frequency spectra with moored current meter observations, and (5) perform comparisons of modeled IGW kinetic energy vertical wavenumber-frequency spectra with moored observations. This presentation builds upon our work in Muller et al. (2015, GRL), who performed tasks (1), (2), and (4) in 1/12th and 1/25th degree HYCOM simulations, for one region of the North Pacific. New for this presentation are tasks (3) and (5), the inclusion of MITgcm solutions, and the analysis of additional ocean regions.
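
    A rough sketch of step (1), assuming a uniformly sampled velocity record u(t, x): a wavenumber-frequency kinetic-energy spectrum can be formed from a 2-D FFT. The signal below is a synthetic semidiurnal internal-wave-like wave, not HYCOM or MITgcm output.

        # Sketch: wavenumber-frequency kinetic-energy spectrum of u(t, x) via a 2-D FFT.
        import numpy as np

        nt, nx = 256, 128
        dt, dx = 3600.0, 5.0e3                        # 1-hour sampling, 5-km spacing (assumed)
        t = np.arange(nt) * dt
        x = np.arange(nx) * dx
        omega_m2 = 2 * np.pi / (12.42 * 3600.0)       # semidiurnal frequency
        k1 = 2 * np.pi / 150.0e3                      # 150-km wavelength
        u = np.cos(k1 * x[None, :] - omega_m2 * t[:, None])
        u += 0.1 * np.random.default_rng(0).standard_normal((nt, nx))

        U = np.fft.fftshift(np.fft.fft2(u)) / (nt * nx)
        ke_spectrum = 0.5 * np.abs(U) ** 2                        # KE density per (omega, k) bin
        freqs = np.fft.fftshift(np.fft.fftfreq(nt, d=dt))         # cycles per second
        wavenumbers = np.fft.fftshift(np.fft.fftfreq(nx, d=dx))   # cycles per metre

        # the peak should sit near |freq| ~ 1/12.42 cycles/hour and |k| ~ 1/150 cycles/km
        imax = np.unravel_index(np.argmax(ke_spectrum), ke_spectrum.shape)
        print(abs(freqs[imax[0]]) * 3600.0, abs(wavenumbers[imax[1]]) * 1.0e3)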

  4. Quo vadimus? The 21st Century and multimedia

    NASA Technical Reports Server (NTRS)

    Kuhn, Allan D.

    1991-01-01

    The concept of computer-driven multimedia is related to the NASA Scientific and Technical Information Program (STIP). Multimedia is defined here as computer integration and output of text, animation, audio, video, and graphics. Multimedia is the stage of computer-based information that allows access to experience. The concepts of hypermedia, intermedia, interactive multimedia, hypertext, imaging, cyberspace, and virtual reality are also drawn in. Examples of these technology developments are given for NASA, private industry, and academia. Examples of concurrent technology developments and implementations are given to show how these technologies, along with multimedia, have put us at the threshold of the 21st century. The STI Program sees multimedia as an opportunity for revolutionizing the way STI is managed.

  5. FLEXAN (version 2.0) user's guide

    NASA Technical Reports Server (NTRS)

    Stallcup, Scott S.

    1989-01-01

    The FLEXAN (Flexible Animation) computer program, Version 2.0, is described. FLEXAN animates 3-D wireframe structural dynamics on the Evans and Sutherland PS300 graphics workstation with a VAX/VMS host computer. Animation options include: unconstrained vibrational modes, mode time histories (multiple modes), delta time histories (modal and/or nonmodal deformations), color time histories (elements of the structure change colors through time), and rotational time histories (parts of the structure rotate through time). Concurrent color, mode, delta, and rotation time history animations are supported. FLEXAN does not model structures or calculate the dynamics of structures; it only animates data from other computer programs. FLEXAN was developed to aid in the study of the structural dynamics of spacecraft.

  6. Theoretical and practical considerations for the development of online international collaborative learning for dental hygiene students.

    PubMed

    Gussy, M G; Knevel, R J M; Sigurdson, V; Karlberg, G

    2006-08-01

    Globalization and concurrent developments in computer and communication technology have increased interest in collaborative online teaching and learning for students in higher education institutions. Many institutions and teachers have introduced computer-supported programmes in areas including dental hygiene. The potential for the use of this technology is exciting; however, its introduction should be careful and considered. We suggest that educators wanting to introduce computer-supported programmes make explicit their pedagogical principles and then select technologies that support and exploit these principles. This paper describes this process as it was applied to the development of an international web-based collaborative learning programme for dental hygiene students.

  7. ELAS - SCIENCE & TECHNOLOGY LABORATORY APPLICATIONS SOFTWARE (CONCURRENT VERSION)

    NASA Technical Reports Server (NTRS)

    Pearson, R. W.

    1994-01-01

    The Science and Technology Laboratory Applications Software (ELAS) was originally designed to analyze and process digital imagery data, specifically remotely-sensed scanner data. This capability includes the processing of Landsat multispectral data; aircraft-acquired scanner data; digitized topographic data; and numerous other ancillary data, such as soil types and rainfall information, that can be stored in digitized form. ELAS has the subsequent capability to geographically reference this data to dozens of standard, as well as user created projections. As an integrated image processing system, ELAS offers the user of remotely-sensed data a wide range of capabilities in the areas of land cover analysis and general purpose image analysis. ELAS is designed for flexible use and operation and includes its own FORTRAN operating subsystem and an expandable set of FORTRAN application modules. Because all of ELAS resides in one "logical" FORTRAN program, data inputs and outputs, directives, and module switching are convenient for the user. There are over 230 modules presently available to aid the user in performing a wide range of land cover analyses and manipulation. The file management modules enable the user to allocate, define, access, and specify usage for all types of files (ELAS files, subfiles, external files etc.). Various other modules convert specific types of satellite, aircraft, and vector-polygon data into files that can be used by other ELAS modules. The user also has many module options which aid in displaying image data, such as magnification/reduction of the display; true color display; and several memory functions. Additional modules allow for the building and manipulation of polygonal areas of the image data. Finally, there are modules which allow the user to select and classify the image data. An important feature of the ELAS subsystem is that its structure allows new applications modules to be easily integrated in the future. ELAS has as a standard the flexibility to process data elements exceeding 8 bits in length, including floating point (noninteger) elements and 16 or 32 bit integers. Thus it is able to analyze and process "non-standard" nonimage data. The VAX (ERL-10017) and Concurrent (ERL-10013) versions of ELAS 9.0 are written in FORTRAN and ASSEMBLER for DEC VAX series computers running VMS and Concurrent computers running MTM. The Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions of ELAS 9.0 are written in FORTRAN 77 and C-LANGUAGE for Sun4 series computers running SunOS, Masscomp computers running UNIX, and Silicon Graphics IRIS computers running IRIX. The Concurrent version requires at least 15 bit addressing and a direct memory access channel. The VAX and Concurrent versions of ELAS both require floating-point hardware, at least 1Mb of RAM, and approximately 70Mb of disk space. Both versions also require a COMTAL display device in order to display images. For the Sun, Masscomp, and Silicon Graphics versions of ELAS, the disk storage required is approximately 115Mb, and a minimum of 8Mb of RAM is required for execution. The Sun version of ELAS requires either the X-Window System Version 11 Revision 4 or Sun OpenWindows Version 2. The Masscomp version requires a GA1000 display device and the associated "gp" library. The Silicon Graphics version requires Silicon Graphics' GL library. ELAS display functions will not work with a monochrome monitor. 
The standard distribution medium for the VAX version (ERL10017) is a set of two 9-track 1600 BPI magnetic tapes in DEC VAX BACKUP format. This version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. The standard distribution medium for the Concurrent version (ERL-10013) is a set of two 9-track 1600 BPI magnetic tapes in Concurrent BACKUP format. The standard distribution medium for the Sun version (SSC-00019) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Masscomp version, (SSC-00020) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Silicon Graphics version (SSC-00021) is a .25 inch streaming magnetic IRIS tape cartridge in UNIX tar format. Version 9.0 was released in 1991. Sun4, SunOS, and Open Windows are trademarks of Sun Microsystems, Inc. MIT X Window System is licensed by Massachusetts Institute of Technology.

  8. Bilingual parallel programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, I.; Overbeek, R.

    1990-01-01

    Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seems to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.
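
    A minimal illustration of the bilingual idea, with Python standing in for the high-level coordination language and numpy's compiled routines standing in for hand-coded low-level components; the function names are invented for this sketch.

        # High-level driver logic in an expressive language; the compute-intensive
        # inner kernel is delegated to compiled code (numpy's C routines here).
        import numpy as np

        def low_level_kernel(a, b):
            # In a real bilingual design this would be C/Fortran exposed to the
            # high-level language; numpy's compiled matrix product plays that role.
            return a @ b

        def high_level_driver(n_blocks=4, block=256, seed=0):
            # Coordination code: set up work, dispatch kernels, combine results.
            rng = np.random.default_rng(seed)
            total = 0.0
            for _ in range(n_blocks):
                a = rng.standard_normal((block, block))
                b = rng.standard_normal((block, block))
                total += float(np.trace(low_level_kernel(a, b)))
            return total

        if __name__ == "__main__":
            print(high_level_driver())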

  9. Missed opportunities for concurrent HIV-STD testing in an academic emergency department.

    PubMed

    Klein, Pamela W; Martin, Ian B K; Quinlivan, Evelyn B; Gay, Cynthia L; Leone, Peter A

    2014-01-01

    We evaluated emergency department (ED) provider adherence to guidelines for concurrent HIV-sexually transmitted disease (STD) testing within an expanded HIV testing program and assessed demographic and clinical factors associated with concurrent HIV-STD testing. We examined concurrent HIV-STD testing in a suburban academic ED with a targeted, expanded HIV testing program. Patients aged 18-64 years who were tested for syphilis, gonorrhea, or chlamydia in 2009 were evaluated for concurrent HIV testing. We analyzed demographic and clinical factors associated with concurrent HIV-STD testing using multivariate logistic regression with a robust variance estimator or, where applicable, exact logistic regression. Only 28.3% of patients tested for syphilis, 3.8% tested for gonorrhea, and 3.8% tested for chlamydia were concurrently tested for HIV during an ED visit. Concurrent HIV-syphilis testing was more likely among younger patients aged 25-34 years (adjusted odds ratio [AOR] = 0.36, 95% confidence interval [CI] 0.78, 2.10) and patients with STD-related chief complaints at triage (AOR=11.47, 95% CI 5.49, 25.06). Concurrent HIV-gonorrhea/chlamydia testing was more likely among men (gonorrhea: AOR=3.98, 95% CI 2.25, 7.02; chlamydia: AOR=3.25, 95% CI 1.80, 5.86) and less likely among patients with STD-related chief complaints at triage (gonorrhea: AOR=0.31, 95% CI 0.13, 0.82; chlamydia: AOR=0.21, 95% CI 0.09, 0.50). Concurrent HIV-STD testing in an academic ED remains low. Systematic interventions that remove the decision-making burden of ordering an HIV test from providers may increase HIV testing in this high-risk population of suspected STD patients.

  10. Children concurrently wasted and stunted: A meta‐analysis of prevalence data of children 6–59 months from 84 countries

    PubMed Central

    Khara, Tanya; Mwangome, Martha; Ngari, Moses

    2017-01-01

    Abstract Children can be stunted and wasted at the same time. Having both deficits greatly elevates risk of mortality. The analysis aimed to estimate the prevalence and burden of children aged 6–59 months concurrently wasted and stunted. Data from demographic and health survey and Multi‐indicator Cluster Surveys datasets from 84 countries were analysed. Overall prevalence for being wasted, stunted, and concurrently wasted and stunted among children 6 to 59 months was calculated. A pooled prevalence of concurrence was estimated and reported by gender, age, United Nations regions, and contextual categories. Burden was calculated using population figures from the global joint estimates database. The pooled prevalence of concurrence in the 84 countries was 3.0%, 95% CI [2.97, 3.06], ranging from 0% to 8.0%. Nine countries reported a concurrence prevalence greater than 5%. The estimated burden was 5,963,940 children. Prevalence of concurrence was highest in the 12‐ to 24‐month age group 4.2%, 95% CI [4.1, 4.3], and was significantly higher among boys 3.54%, 95% CI [3.47, 3.61], compared to girls; 2.46%, 95% CI [2.41, 2.52]. Fragile and conflict‐affected states reported significantly higher concurrence 3.6%, 95% CI [3.5, 3.6], than those defined as stable 2.24%, 95% CI [2.18, 2.30]. This analysis represents the first multiple country estimation of the prevalence and burden of children concurrently wasted and stunted. Given the high risk of mortality associated with concurrence, the findings indicate a need to report on this condition as well as investigate whether these children are being reached through existing programmes. PMID:28944990
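
    A small sketch of how a pooled prevalence and a normal-approximation 95% CI might be computed from per-survey counts; the counts below are invented placeholders, not the DHS/MICS data used in the analysis.

        # Pooled prevalence of concurrent wasting and stunting across surveys (illustrative counts).
        import math

        surveys = [                 # (children with both deficits, children surveyed)
            (120, 4000),
            (310, 9500),
            (75, 2600),
        ]
        cases = sum(c for c, n in surveys)
        total = sum(n for c, n in surveys)
        p = cases / total
        se = math.sqrt(p * (1 - p) / total)             # normal approximation
        lo, hi = p - 1.96 * se, p + 1.96 * se
        print(f"pooled prevalence = {100*p:.2f}% (95% CI {100*lo:.2f}-{100*hi:.2f})")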

  11. The Effect of Two Different Concurrent Training Programs on Strength and Power Gains in Highly-Trained Individuals.

    PubMed

    Petré, Henrik; Löfving, Pontus; Psilander, Niklas

    2018-06-01

    The effects of concurrent strength and endurance training have been well studied in untrained and moderately-trained individuals. However, studies examining these effects in individuals with a long history of resistance training (RT) are lacking. Additionally, few studies have examined how strength and power are affected when different types of endurance training are added to an RT protocol. The purpose of the present study was to compare the effects of concurrent training incorporating either low-volume, high-intensity interval training (HIIT, 8-24 Tabata intervals at ~150% of VO2max) or high-volume, medium-intensity continuous endurance training (CT, 40-80 min at 70% of VO2max), on the strength and power of highly-trained individuals. Sixteen highly-trained ice-hockey and rugby players were divided into two groups that underwent either CT (n = 8) or HIIT (n = 8) in parallel with RT (2-6 sets of heavy parallel squats, > 80% of 1RM) during a 6-week period (3 sessions/wk). Parallel squat performance improved after both RT + CT and RT + HIIT (12 ± 8% and 14 ± 10% respectively, p < 0.01), with no difference between the groups. However, aerobic power (VO2max) only improved after RT + HIIT (4 ± 3%, p < 0.01). We conclude that strength gains can be obtained after both RT + CT and RT + HIIT in athletes with a prior history of RT. This indicates that the volume and/or intensity of the endurance training does not influence the magnitude of strength improvements during short periods of concurrent training, at least for highly-trained individuals when the endurance training is performed after RT. However, since VO2max improved only after RT + HIIT and this is a time efficient protocol, we recommend this type of concurrent endurance training.

  12. Multiple Concurrent Visual-Motor Mappings: Implications for Models of Adaptation

    NASA Technical Reports Server (NTRS)

    Cunningham, H. A.; Welch, Robert B.

    1994-01-01

    Previous research on adaptation to visual-motor rearrangement suggests that the central nervous system represents accurately only 1 visual-motor mapping at a time. This idea was examined in 3 experiments where subjects tracked a moving target under repeated alternations between 2 initially interfering mappings (the 'normal' mapping characteristic of computer input devices and a 108° rotation of the normal mapping). Alternation between the 2 mappings led to significant reduction in error under the rotated mapping and significant reduction in the adaptation aftereffect ordinarily caused by switching between mappings. Color as a discriminative cue, interference versus decay in adaptation aftereffect, and intermanual transfer were also examined. The results reveal a capacity for multiple concurrent visual-motor mappings, possibly controlled by a parametric process near the motor output stage of processing.
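
    A minimal sketch of the two mappings contrasted above, assuming cursor displacement is hand displacement passed through either an identity ('normal') mapping or a fixed 108-degree rotation; beyond the rotation angle taken from the abstract, the details are illustrative.

        # "Normal" vs. rotated visual-motor mappings for an input device.
        import numpy as np

        def rotated_mapping(hand_dxdy, angle_deg=108.0):
            a = np.deg2rad(angle_deg)
            rot = np.array([[np.cos(a), -np.sin(a)],
                            [np.sin(a),  np.cos(a)]])
            return rot @ np.asarray(hand_dxdy, dtype=float)

        def normal_mapping(hand_dxdy):
            return np.asarray(hand_dxdy, dtype=float)

        hand_move = [1.0, 0.0]                   # rightward hand movement
        print(normal_mapping(hand_move))         # cursor moves right
        print(rotated_mapping(hand_move))        # cursor moves along the rotated axis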

  13. Concurrent Androgen Deprivation Therapy During Salvage Prostate Radiotherapy Improves Treatment Outcomes in High-Risk Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soto, Daniel E., E-mail: dsoto2@partners.org; Passarelli, Michael N.; Daignault, Stephanie

    2012-03-01

    Purpose: To determine whether concurrent androgen deprivation therapy (ADT) during salvage radiotherapy (RT) improves prostate cancer treatment outcomes. Methods and Materials: A total of 630 postprostatectomy patients were retrospectively identified who were treated with three-dimensional conformal RT. Of these, 441 were found to be treated for salvage indications. Biochemical failure was defined as prostate-specific antigen (PSA) of 0.2 ng/mL or greater above nadir with another PSA increase or the initiation of salvage ADT. Progression-free survival (PFS) was defined as the absence of biochemical failure, continued PSA rise despite salvage therapy, initiation of systemic therapy, clinical progression, or distant failure. Multivariate-adjusted Cox proportional hazards modeling was performed to determine which factors predict PFS. Results: Low-, intermediate-, and high-risk patients made up 10%, 24%, and 66% of patients, respectively. The mean RT dose was 68 Gy. Twenty-four percent of patients received concurrent ADT (cADT). Regional pelvic nodes were treated in 16% of patients. With a median follow-up of 3 years, the 3-year PFS was 4.0 years for cADT vs. 3.4 years for non-cADT patients (p = 0.22). Multivariate analysis showed that concurrent ADT (p = 0.05), Gleason score (p < 0.001), and pre-RT PSA (p = 0.03) were independent predictors of PFS. When patients were stratified by risk group, the benefits of cADT (hazard ratio, 0.65; p = 0.046) were significant only for high-risk patients. Conclusions: This retrospective study showed a PFS benefit of concurrent ADT during salvage prostate RT. This benefit was observed only in high-risk patients.

  14. Specific Interference between a Cognitive Task and Sensory Organization for Stance Balance Control in Healthy Young Adults: Visuospatial Effects

    ERIC Educational Resources Information Center

    Chong, Raymond K. Y.; Mills, Bradley; Dailey, Leanna; Lane, Elizabeth; Smith, Sarah; Lee, Kyoung-Hyun

    2010-01-01

    We tested the hypothesis that a computational overload results when two activities, one motor and the other cognitive that draw on the same neural processing pathways, are performed concurrently. Healthy young adult subjects carried out two seemingly distinct tasks of maintaining standing balance control under conditions of low (eyes closed),…

  15. Ship Motions and Capsizing in Astern Seas

    DTIC Science & Technology

    1974-12-01

    ...result of these experiments and concurrent analytical work, a great deal has been learned about the mechanism of capsizing. This... computer time. It does not appear economically feasible using present-generation machines to numerically simulate a complete experimental... a Fast Cargo Liner in San Francisco Bay." Dept. of Naval Architecture, University of Calif., Berkeley. January 1972. (Dept. of Transp...

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filippone, Michele; Dusuel, Sebastien; Vidal, Julien

    We consider a set of fully connected spin models that display first- or second-order transitions and for which we compute the ground-state entanglement in the thermodynamical limit. We analyze several entanglement measures (concurrence, Renyi entropy, and negativity) and show that, in general, discontinuous transitions lead to a jump of these quantities at the transition point. Interestingly, we also find examples where this is not the case.
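
    For reference, a short sketch of one of the entanglement measures named above: the Wootters concurrence of a two-qubit density matrix, checked here on a Bell state. This is the standard two-qubit formula, not the paper's thermodynamic-limit calculation for fully connected spin models.

        # Wootters concurrence C(rho) = max(0, l1 - l2 - l3 - l4), where the l_i are the
        # square roots of the eigenvalues of rho * rho_tilde in decreasing order.
        import numpy as np

        def concurrence(rho):
            sy = np.array([[0, -1j], [1j, 0]])
            syy = np.kron(sy, sy)
            rho_tilde = syy @ rho.conj() @ syy
            eigvals = np.linalg.eigvals(rho @ rho_tilde)
            lam = np.sort(np.sqrt(np.clip(eigvals.real, 0, None)))[::-1]
            return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

        bell = np.zeros((4, 1), dtype=complex)
        bell[0, 0] = bell[3, 0] = 1 / np.sqrt(2)        # (|00> + |11>)/sqrt(2)
        rho_bell = bell @ bell.conj().T
        print(concurrence(rho_bell))                    # ~1.0 for a maximally entangled state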

  17. Spelling Practice Intervention: A Comparison of Tablet PC and Picture Cards as Spelling Practice Methods for Students with Developmental Disabilities

    ERIC Educational Resources Information Center

    Seok, Soonhwa; DaCosta, Boaventura; Yu, Byeong Min

    2015-01-01

    The present study compared a spelling practice intervention using a tablet personal computer (PC) and picture cards with three students diagnosed with developmental disabilities. An alternating-treatments design with a non-concurrent multiple-baseline across participants was used. The aims of the present study were: (a) to determine if…

  18. Effects of Differential Reinforcement and Rules with Feedback on Preference for Choice and Verbal Reports

    ERIC Educational Resources Information Center

    Karsina, Allen; Thompson, Rachel H.; Rodriguez, Nicole M.; Vanselow, Nicholas R.

    2012-01-01

    We evaluated the effects of differential reinforcement and accurate verbal rules with feedback on the preference for choice and the verbal reports of 6 adults. Participants earned points on a probabilistic schedule by completing the terminal links of a concurrent-chains arrangement in a computer-based game of chance. In free-choice terminal links,…

  19. Context Aware Routing Management Architecture for Airborne Networks

    DTIC Science & Technology

    2012-03-22

    ...awareness, increased survivability, higher operation tempo, greater lethality, improved speed of command and a certain degree of self-synchronization [35... first two sets of experiments. This error model simulates deviations from predetermined routes as well as variations on signal strength for radio... routes computed using the Maximum Concurrent Multi-Commodity flow algorithm are not susceptible to rapid topology variations induced by noise.

  20. TRIO: Burst Buffer Based I/O Orchestration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Teng; Oral, H Sarp; Pritchard, Michael

    The growing computing power on leadership HPC systems is often accompanied by ever-escalating failure rates. Checkpointing is a common defensive mechanism used by scientific applications for failure recovery. However, directly writing the large and bursty checkpointing dataset to the parallel filesystem can incur significant I/O contention on storage servers. Such contention in turn degrades the raw bandwidth utilization of storage servers and prolongs the average job I/O time of concurrent applications. Recently the burst buffer has been proposed as an intermediate layer to absorb the bursty I/O traffic from compute nodes to the storage backend. But an I/O orchestration mechanism is still desired to efficiently move checkpointing data from burst buffers to the storage backend. In this paper, we propose a burst buffer based I/O orchestration framework, named TRIO, to intercept and reshape the bursty writes for better sequential write traffic to storage servers. Meanwhile, TRIO coordinates the flushing orders among concurrent burst buffers to alleviate the contention on storage server bandwidth. Our experimental results reveal that TRIO can deliver 30.5% higher bandwidth and reduce the average job I/O time by 37% on average for data-intensive applications in various checkpointing scenarios.
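
    A toy sketch of the coordination idea, assuming each node's burst buffer first absorbs its checkpoint locally and then competes for a limited number of flush slots so the backend sees mostly sequential traffic; names and structure are invented and do not reflect TRIO's actual implementation.

        # Flush coordination sketch: local absorb, then serialized flush via a semaphore.
        import threading
        import time

        backend_slots = threading.Semaphore(2)       # at most 2 concurrent flushers (assumed)

        def node(rank, n_chunks=5):
            buffer = [f"rank{rank}-chunk{i}" for i in range(n_chunks)]   # absorbed burst
            with backend_slots:                      # wait for a flush slot
                for chunk in buffer:                 # sequential write to the backend
                    time.sleep(0.01)                 # stands in for a large write
                    print(f"flushed {chunk}")

        threads = [threading.Thread(target=node, args=(r,)) for r in range(6)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()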

  1. Optical read/write memory system components

    NASA Technical Reports Server (NTRS)

    Kozma, A.

    1972-01-01

    The optical components of a breadboard holographic read/write memory system have been fabricated, and the parameters of the major system components have been specified: (1) a laser system; (2) an x-y beam deflector; (3) a block data composer; (4) the read/write memory material; (5) an output detector array; and (6) the electronics to drive, synchronize, and control all system components. The objectives of the investigation were divided into three concurrent phases: (1) to supply and fabricate the major components according to the previously established specifications; (2) to prepare computer programs to simulate the entire holographic memory system so that a designer can balance the requirements on the various components; and (3) to conduct a development program to optimize the combined recording and reconstruction process of the high density holographic memory system.

  2. Advances in quantum simulations of ATPase catalysis in the myosin motor.

    PubMed

    Kiani, Farooq Ahmad; Fischer, Stefan

    2015-04-01

    During its contraction cycle, the myosin motor catalyzes the hydrolysis of ATP. Several combined quantum/classical mechanics (QM/MM) studies of this step have been published, which substantially contributed to our thinking about the catalytic mechanism. The methodological difficulties encountered over the years in the simulation of this complex reaction are now understood: (a) Polarization of the protein peptide groups surrounding the highly charged ATP(4-) cannot be neglected. (b) Some unsuspected protein groups need to be treated QM. (c) Interactions with the γ-phosphate versus the β-phosphate favor a concurrent versus a sequential mechanism, respectively. Thus, these practical aspects strongly influence the computed mechanism, and should be considered when studying other catalyzed phosphoester hydrolysis reactions, such as in ATPases or GTPases. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Data acquisition and processing system for the HT-6M tokamak fusion experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shu, Y.T.; Liu, G.C.; Pang, J.Q.

    1987-08-01

    This paper describes a high-speed data acquisition and processing system which has been successfully operated on the HT-6M tokamak fusion experimental device. The system collects, archives and analyzes up to 512 kilobytes of data from each shot of the experiment. A shot lasts 50-150 milliseconds and occurs every 5-10 minutes. The system consists of two PDP-11/24 computer systems. One PDP-11/24 is used for real-time data taking and on-line data analysis. It is based upon five CAMAC crates organized into a parallel branch. Another PDP-11/24 is used for off-line data processing. Both data acquisition software RSX-DAS and data processing software RSX-DAP have modular, multi-tasking and concurrent processing features.

  4. EASAMS' Ariane 5 on-board software experience

    NASA Astrophysics Data System (ADS)

    Birnie, Steven Andrew

    The design and development of the prototype flight software for the Ariane 5 satellite launch vehicle is considered. This was specified as being representative of the eventual real flight program in terms of timing constraints and target computer loading. The usability of HOOD (Hierarchical Object Oriented Design) and Ada for development of such preemptive multitasking computer programs was verified. Features of the prototype development included: design methods supplementary to HOOD for representation of concurrency aspects; visibility of Ada enumerated type literals across HOOD parent-child interfaces; deterministic timings achieved by modification of Ada delays; and linking of interrupts to Ada task entries.

  5. Applications of Multi-Agent Technology to Power Systems

    NASA Astrophysics Data System (ADS)

    Nagata, Takeshi

    Currently, agents are the focus of intense interest in many sub-fields of computer science and artificial intelligence. Agents are being used in an increasingly wide variety of applications. Many important computing applications such as planning, process control, communication networks and concurrent systems will benefit from using a multi-agent system approach. A multi-agent system is a structure given by an environment together with a set of artificial agents capable of acting on this environment. Multi-agent models are oriented towards interactions, collaborative phenomena, and autonomy. This article presents the applications of multi-agent technology to power systems.

  6. Early Math Interest and the Development of Math Skills

    ERIC Educational Resources Information Center

    Fisher, Paige H.; Dobbs-Oates, Jennifer; Doctoroff, Greta L.; Arnold, David H.

    2012-01-01

    Prior models suggest that math attitudes and ability might strengthen each other over time in a reciprocal fashion (Ma, 1997). The current study investigated the relationship between math interest and skill both concurrently and over time in a preschool sample. Analyses of concurrent relationships indicated that high levels of interest were…

  7. Improving generalized inverted index lock wait times

    NASA Astrophysics Data System (ADS)

    Borodin, A.; Mirvoda, S.; Porshnev, S.; Ponomareva, O.

    2018-01-01

    Concurrent operations on tree-like data structures are a cornerstone of any database system. Concurrent operations are intended to improve read/write performance and are usually implemented via some form of locking. Deadlock-free methods of concurrency control are known as tree locking protocols. These protocols provide basic operations (verbs) and algorithms (ways of invoking those operations) for applying them to any tree-like data structure. These algorithms operate on data managed by the storage engine, which differs greatly among RDBMS implementations. In this paper, we discuss a tree locking protocol implementation for the Generalized Inverted Index (GIN) applied to the multiversion concurrency control (MVCC) storage engine inside the PostgreSQL RDBMS. After that we introduce improvements to the locking protocol and provide usage statistics from evaluating our improvement in a very high-load environment at one of the world's largest IT companies.
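
    A minimal sketch of one classic deadlock-free tree-locking discipline, hand-over-hand (lock-coupling) descent, in which the parent's lock is held only until the child's lock is acquired; this is illustrative only and is not the GIN or PostgreSQL locking code.

        # Hand-over-hand (lock-coupling) descent over a toy tree.
        import threading

        class Node:
            def __init__(self, key, payload=None):
                self.key = key
                self.payload = payload
                self.children = []                  # child Nodes
                self.lock = threading.Lock()

        def descend(root, key):
            root.lock.acquire()
            node = root
            while node.children:
                # pick the child whose subtree should contain `key` (simplified rule)
                child = max((c for c in node.children if c.key <= key),
                            default=node.children[0], key=lambda c: c.key)
                child.lock.acquire()                # couple: take the child's lock...
                node.lock.release()                 # ...then drop the parent's lock
                node = child
            node.lock.release()
            return node

        root = Node(0)
        root.children = [Node(10), Node(20)]
        print(descend(root, 15).key)                # reaches the leaf with key 10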

  8. Attention Demands of Spoken Word Planning: A Review

    PubMed Central

    Roelofs, Ardi; Piai, Vitória

    2011-01-01

    Attention and language are among the most intensively researched abilities in the cognitive neurosciences, but the relation between these abilities has largely been neglected. There is increasing evidence, however, that linguistic processes, such as those underlying the planning of words, cannot proceed without paying some form of attention. Here, we review evidence that word planning requires some but not full attention. The evidence comes from chronometric studies of word planning in picture naming and word reading under divided attention conditions. It is generally assumed that the central attention demands of a process are indexed by the extent that the process delays the performance of a concurrent unrelated task. The studies measured the speed and accuracy of linguistic and non-linguistic responding as well as eye gaze durations reflecting the allocation of attention. First, empirical evidence indicates that in several task situations, processes up to and including phonological encoding in word planning delay, or are delayed by, the performance of concurrent unrelated non-linguistic tasks. These findings suggest that word planning requires central attention. Second, empirical evidence indicates that conflicts in word planning may be resolved while concurrently performing an unrelated non-linguistic task, making a task decision, or making a go/no-go decision. These findings suggest that word planning does not require full central attention. We outline a computationally implemented theory of attention and word planning, and describe at various points the outcomes of computer simulations that demonstrate the utility of the theory in accounting for the key findings. Finally, we indicate how attention deficits may contribute to impaired language performance, such as in individuals with specific language impairment. PMID:22069393

  9. Simulating compressible-incompressible two-phase flows

    NASA Astrophysics Data System (ADS)

    Denner, Fabian; van Wachem, Berend

    2017-11-01

    Simulating compressible gas-liquid flows, e.g. air-water flows, presents considerable numerical issues and requires substantial computational resources, particularly because of the stiff equation of state for the liquid and the different Mach number regimes. Treating the liquid phase (low Mach number) as incompressible, yet concurrently considering the gas phase (high Mach number) as compressible, can improve the computational performance of such simulations significantly without sacrificing important physical mechanisms. A pressure-based algorithm for the simulation of two-phase flows is presented, in which a compressible and an incompressible fluid are separated by a sharp interface. The algorithm is based on a coupled finite-volume framework, discretised in conservative form, with a compressive VOF method to represent the interface. The bulk phases are coupled via a novel acoustically-conservative interface discretisation method that retains the acoustic properties of the compressible phase and does not require a Riemann solver. Representative test cases are presented to scrutinize the proposed algorithm, including the reflection of acoustic waves at the compressible-incompressible interface, shock-drop interaction and gas-liquid flows with surface tension. Financial support from the EPSRC (Grant EP/M021556/1) is gratefully acknowledged.

  10. Application of a distributed network in computational fluid dynamic simulations

    NASA Technical Reports Server (NTRS)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.; Deshpande, Ashish

    1994-01-01

    A general-purpose 3-D, incompressible Navier-Stokes algorithm is implemented on a network of concurrently operating workstations using parallel virtual machine (PVM) and compared with its performance on a CRAY Y-MP and on an Intel iPSC/860. The problem is relatively computationally intensive, and has a communication structure based primarily on nearest-neighbor communication, making it ideally suited to message passing. Such problems are frequently encountered in computational fluid dynamics (CFD), and their solution is increasingly in demand. The communication structure is explicitly coded in the implementation to fully exploit the regularity in message passing in order to produce a near-optimal solution. Results are presented for various grid sizes using up to eight processors.
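
    A small sketch of the nearest-neighbor pattern underlying such solvers, assuming a 1-D domain split into blocks with one ghost cell per side; the exchange is emulated sequentially here, whereas under PVM or MPI each block would be a separate process sending its edge values to its neighbors.

        # 1-D domain decomposition with ghost-cell exchange and a Jacobi-like smoother.
        import numpy as np

        n, nblocks = 64, 4
        u = np.sin(np.linspace(0.0, np.pi, n))
        blocks = [u[i * (n // nblocks):(i + 1) * (n // nblocks)].copy() for i in range(nblocks)]

        def exchange_ghosts(blocks):
            # each block receives one edge value per neighbour (fixed boundary outside)
            ghosts = []
            for i, b in enumerate(blocks):
                left = blocks[i - 1][-1] if i > 0 else 0.0
                right = blocks[i + 1][0] if i < len(blocks) - 1 else 0.0
                ghosts.append((left, right))
            return ghosts

        def jacobi_step(blocks):
            ghosts = exchange_ghosts(blocks)
            new_blocks = []
            for b, (left, right) in zip(blocks, ghosts):
                padded = np.concatenate(([left], b, [right]))
                new_blocks.append(0.5 * (padded[:-2] + padded[2:]))   # simple smoother
            return new_blocks

        for _ in range(10):
            blocks = jacobi_step(blocks)
        print(np.concatenate(blocks)[:5])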

  11. Multithreaded Model for Dynamic Load Balancing Parallel Adaptive PDE Computations

    NASA Technical Reports Server (NTRS)

    Chrisochoides, Nikos

    1995-01-01

    We present a multithreaded model for the dynamic load-balancing of numerical, adaptive computations required for the solution of Partial Differential Equations (PDE's) on multiprocessors. Multithreading is used as a means of exploring concurrency at the processor level in order to tolerate synchronization costs inherent to traditional (non-threaded) parallel adaptive PDE solvers. Our preliminary analysis for parallel, adaptive PDE solvers indicates that multithreading can be used as a mechanism to mask overheads required for the dynamic balancing of processor workloads with computations required for the actual numerical solution of the PDE's. Also, multithreading can simplify the implementation of dynamic load-balancing algorithms, a task that is very difficult for traditional data parallel adaptive PDE computations. Unfortunately, multithreading does not always simplify program complexity, often makes code re-usability difficult, and increases software complexity.

  12. A heterogeneous computing environment for simulating astrophysical fluid flows

    NASA Technical Reports Server (NTRS)

    Cazes, J.

    1994-01-01

    In the Concurrent Computing Laboratory in the Department of Physics and Astronomy at Louisiana State University we have constructed a heterogeneous computing environment that permits us to routinely simulate complicated three-dimensional fluid flows and to readily visualize the results of each simulation via three-dimensional animation sequences. An 8192-node MasPar MP-1 computer with 0.5 GBytes of RAM provides 250 MFlops of execution speed for our fluid flow simulations. Utilizing the parallel virtual machine (PVM) language, at periodic intervals data is automatically transferred from the MP-1 to a cluster of workstations where individual three-dimensional images are rendered for inclusion in a single animation sequence. Work is underway to replace executions on the MP-1 with simulations performed on the 512-node CM-5 at NCSA and to simultaneously gain access to more potent volume rendering workstations.

  13. Trends in media use.

    PubMed

    Roberts, Donald F; Foehr, Ulla G

    2008-01-01

    American youth are awash in media. They have television sets in their bedrooms, personal computers in their family rooms, and digital music players and cell phones in their backpacks. They spend more time with media than any single activity other than sleeping, with the average American eight- to eighteen-year-old reporting more than six hours of daily media use. The growing phenomenon of "media multitasking"--using several media concurrently--multiplies that figure to eight and a half hours of media exposure daily. Donald Roberts and Ulla Foehr examine how both media use and media exposure vary with demographic factors such as age, race and ethnicity, and household socioeconomic status, and with psychosocial variables such as academic performance and personal adjustment. They note that media exposure begins early, increases until children begin school, drops off briefly, then climbs again to peak at almost eight hours daily among eleven- and twelve-year-olds. Television and video exposure is particularly high among African American youth. Media exposure is negatively related to indicators of socioeconomic status, but that relationship may be diminishing. Media exposure is positively related to risk-taking behaviors and is negatively related to personal adjustment and school performance. Roberts and Foehr also review evidence pointing to the existence of a digital divide--variations in access to personal computers and allied technologies by socioeconomic status and by race and ethnicity. The authors also examine how the recent emergence of digital media such as personal computers, video game consoles, and portable music players, as well as the media multitasking phenomenon they facilitate, has increased young people's exposure to media messages while leaving media use time largely unchanged. Newer media, they point out, are not displacing older media but are being used in concert with them. The authors note which young people are more or less likely to use several media concurrently and which media are more or less likely to be paired with various other media. They argue that one implication of such media multitasking is the need to reconceptualize "media exposure."

  14. Ecological association between HIV and concurrency point-prevalence in South Africa's ethnic groups.

    PubMed

    Kenyon, Chris

    2013-11-01

    HIV prevalence between different ethnic groups within South Africa exhibits considerable variation. Numerous authors believe that elevated sexual partner concurrency rates are important in the spread of HIV. Few studies have, however, investigated if differential concurrency rates could explain differential HIV spread within ethnic groups in South Africa. This ecological analysis explores how much of the variation in HIV prevalence by ethnic group is explained by differential concurrency rates. Using a nationally representative survey (the South African National HIV Prevalence, HIV Incidence, Behaviour and Communication Survey, 2005), the HIV prevalence in each of eight major ethnic groups was calculated. Linear regression analysis was used to assess the association between an ethnic group's HIV prevalence and the point-prevalence of concurrency. Results showed that HIV prevalence rates varied considerably between South Africa's ethnic groups. This applied both to different racial groups and to different ethnic groups within the black group. The point-prevalence of concurrency by ethnic group was strongly associated with HIV prevalence (R(2) = 0.83; p = 0.001). Tackling the key drivers of high HIV transmission in this population may benefit from more emphasis on partner reduction interventions.
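
    A short sketch of the ecological regression described above (group HIV prevalence regressed on the group's concurrency point-prevalence, with the R-squared reported); the numbers are invented placeholders, not the 2005 survey data.

        # Ordinary least-squares fit and R-squared for an ecological association.
        import numpy as np

        concurrency = np.array([2.0, 4.5, 6.0, 7.5, 9.0, 11.0, 12.5, 14.0])  # % reporting concurrency
        hiv = np.array([1.0, 3.0, 6.5, 9.0, 12.0, 15.5, 18.0, 21.0])         # % HIV prevalence

        slope, intercept = np.polyfit(concurrency, hiv, 1)
        pred = slope * concurrency + intercept
        ss_res = np.sum((hiv - pred) ** 2)
        ss_tot = np.sum((hiv - hiv.mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot
        print(f"slope={slope:.2f}, intercept={intercept:.2f}, R^2={r2:.2f}")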

  15. Test-retest reliability and concurrent validity of in vivo myelin content indices: Myelin water fraction and calibrated T1w/T2w image ratio.

    PubMed

    Arshad, Muzamil; Stanley, Jeffrey A; Raz, Naftali

    2017-04-01

    In an age-heterogeneous sample of healthy adults, we examined test-retest reliability (with and without participant repositioning) of two popular MRI methods of estimating myelin content: modeling the short spin-spin (T2) relaxation component of multi-echo imaging data and computing the ratio of T1-weighted and T2-weighted images (T1w/T2w). Taking the myelin water fraction (MWF) index of myelin content derived from the multi-component T2 relaxation data as a standard, we evaluate the concurrent and differential validity of T1w/T2w ratio images. The results revealed high reliability of MWF and the T1w/T2w ratio. However, we found significant correlations of low to moderate magnitude between MWF and the T1w/T2w ratio in only two of six examined regions of the cerebral white matter. Notably, significant correlations of the same or greater magnitude were observed for the T1w/T2w ratio and the intermediate T2 relaxation time constant, which is believed to reflect differences in the mobility of water between the intracellular and extracellular compartments. We conclude that although both methods are highly reliable and thus well-suited for longitudinal studies, the T1w/T2w ratio has low criterion validity and may not be an optimal index of subcortical myelin content. Hum Brain Mapp 38:1780-1790, 2017. © 2017 Wiley Periodicals, Inc.

  16. Gait profile score and movement analysis profile in patients with Parkinson's disease during concurrent cognitive load

    PubMed Central

    Speciali, Danielli S.; Oliveira, Elaine M.; Cardoso, Jefferson R.; Correa, João C. F.; Baker, Richard; Lucareli, Paulo R. G.

    2014-01-01

    Background: Gait disorders are common in individuals with Parkinson's Disease (PD) and the concurrent performance of motor and cognitive tasks can have marked effects on gait. The Gait Profile Score (GPS) and the Movement Analysis Profile (MAP) were developed in order to summarize kinematic data and facilitate understanding of the results of gait analysis. Objective: To investigate the effectiveness of the GPS and MAP in the quantification of changes in gait during a concurrent cognitive load while walking in adults with and without PD. Method: Fourteen patients with idiopathic PD and nine healthy subjects participated in the study. All subjects performed single and dual walking tasks. The GPS/MAP was computed from three-dimensional gait analysis data. Results: Differences were found between tasks for GPS (P<0.05) and Gait Variable Score (GVS) (pelvic rotation, knee flexion-extension and ankle dorsiflexion-plantarflexion) (P<0.05) in the PD group. An interaction between task and group was observed for GPS (P<0.01) for the right side (Cohen's d = 0.99), left side (Cohen's d = 0.91), and overall (Cohen's d = 0.88). No interaction was observed only for the hip internal-external rotation and foot internal-external progression GVS variables in the PD group. Conclusions: The results showed gait impairment during the dual task and suggest that GPS/MAP may be used to evaluate the effects of concurrent cognitive load while walking in patients with PD. PMID:25054382
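
    A minimal sketch of the GPS/MAP computation as commonly defined: each Gait Variable Score (GVS) is the RMS difference between a subject's mean gait curve and a reference mean over the gait cycle, and the GPS is the RMS of the GVS values; the curves below are synthetic, not the study's data.

        # GVS and GPS from gait curves sampled over the gait cycle.
        import numpy as np

        def gvs(subject_curve, reference_curve):
            return np.sqrt(np.mean((np.asarray(subject_curve) - np.asarray(reference_curve)) ** 2))

        def gps(subject_curves, reference_curves):
            scores = [gvs(s, r) for s, r in zip(subject_curves, reference_curves)]
            return np.sqrt(np.mean(np.square(scores))), scores

        cycle = np.linspace(0, 100, 101)                       # % of gait cycle
        reference = [np.sin(2 * np.pi * cycle / 100) * 30]     # e.g. knee flexion-extension (deg)
        subject = [reference[0] + 5.0]                         # constant 5-degree offset
        overall, per_variable = gps(subject, reference)
        print(overall, per_variable)                           # both equal 5.0 for this toy case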

  17. An Adaptive Flow Solver for Air-Borne Vehicles Undergoing Time-Dependent Motions/Deformations

    NASA Technical Reports Server (NTRS)

    Singh, Jatinder; Taylor, Stephen

    1997-01-01

    This report describes a concurrent Euler flow solver for flows around complex 3-D bodies. The solver is based on a cell-centered finite volume methodology on 3-D unstructured tetrahedral grids. In this algorithm, spatial discretization for the inviscid convective term is accomplished using an upwind scheme. A localized reconstruction is done for flow variables which is second order accurate. Evolution in time is accomplished using an explicit three-stage Runge-Kutta method which has second order temporal accuracy. This is adapted for concurrent execution using another proven methodology based on concurrent graph abstraction. This solver operates on heterogeneous network architectures. These architectures may include a broad variety of UNIX workstations and PCs running Windows NT, symmetric multiprocessors and distributed-memory multi-computers. The unstructured grid is generated using commercial grid generation tools. The grid is automatically partitioned using a concurrent algorithm based on heat diffusion. This results in memory requirements that are inversely proportional to the number of processors. The solver uses automatic granularity control and resource management techniques both to balance load and communication requirements, and deal with differing memory constraints. These ideas are again based on heat diffusion. Results are subsequently combined for visualization and analysis using commercial CFD tools. Flow simulation results are demonstrated for a constant section wing at subsonic, transonic, and a supersonic case. These results are compared with experimental data and numerical results of other researchers. Performance results are under way for a variety of network topologies.

  18. Concurrent extensions to the FORTRAN language for parallel programming of computational fluid dynamics algorithms

    NASA Technical Reports Server (NTRS)

    Weeks, Cindy Lou

    1986-01-01

    Experiments were conducted at NASA Ames Research Center to define multi-tasking software requirements for multiple-instruction, multiple-data stream (MIMD) computer architectures. The focus was on specifying solutions for algorithms in the field of computational fluid dynamics (CFD). The program objectives were to allow researchers to produce usable parallel application software as soon as possible after acquiring MIMD computer equipment, to provide researchers with an easy-to-learn and easy-to-use parallel software language which could be implemented on several different MIMD machines, and to enable researchers to list preferred design specifications for future MIMD computer architectures. Analysis of CFD algorithms indicated that extensions of an existing programming language, adaptable to new computer architectures, provided the best solution to meeting program objectives. The CoFORTRAN Language was written in response to these objectives and to provide researchers a means to experiment with parallel software solutions to CFD algorithms on machines with parallel architectures.

  19. Analysis of a Multiprocessor Guidance Computer. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Maltach, E. G.

    1969-01-01

    The design of the next generation of spaceborne digital computers is described. It analyzes a possible multiprocessor computer configuration. For the analysis, a set of representative space computing tasks was abstracted from the Lunar Module Guidance Computer programs as executed during the lunar landing, from the Apollo program. This computer performs at this time about 24 concurrent functions, with iteration rates from 10 times per second to once every two seconds. These jobs were tabulated in a machine-independent form, and statistics of the overall job set were obtained. It was concluded, based on a comparison of simulation and Markov results, that the Markov process analysis is accurate in predicting overall trends and in configuration comparisons, but does not provide useful detailed information in specific situations. Using both types of analysis, it was determined that the job scheduling function is a critical one for efficiency of the multiprocessor. It is recommended that research into the area of automatic job scheduling be performed.

  20. AWE-WQ: fast-forwarding molecular dynamics using the accelerated weighted ensemble.

    PubMed

    Abdul-Wahid, Badi'; Feng, Haoyun; Rajan, Dinesh; Costaouec, Ronan; Darve, Eric; Thain, Douglas; Izaguirre, Jesús A

    2014-10-27

    A limitation of traditional molecular dynamics (MD) is that reaction rates are difficult to compute. This is due to the rarity of observing transitions between metastable states, since high energy barriers trap the system in these states. Recently the weighted ensemble (WE) family of methods has emerged, which can flexibly and efficiently sample conformational space without being trapped and allow calculation of unbiased rates. However, while WE can sample correctly and efficiently, a scalable implementation applicable to interesting biomolecular systems is not available. We provide here a GPLv2 implementation called AWE-WQ of a WE algorithm using the master/worker distributed computing WorkQueue (WQ) framework. AWE-WQ is scalable to thousands of nodes and supports dynamic allocation of computer resources, heterogeneous resource usage (such as central processing units (CPUs) and graphical processing units (GPUs) concurrently), seamless heterogeneous cluster usage (i.e., campus grids and cloud providers), and support for arbitrary MD codes such as GROMACS, while ensuring that all statistics are unbiased. We applied AWE-WQ to a 34-residue protein which simulated 1.5 ms over 8 months with peak aggregate performance of 1000 ns/h. Comparison was done with a 200 μs simulation collected on a GPU over a similar timespan. The folding and unfolding rates were of comparable accuracy.
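
    A toy sketch of the weighted-ensemble bookkeeping at the heart of such methods: walkers in a bin are merged or split toward a target count while total probability weight is conserved; this is illustrative only and is not AWE-WQ's implementation.

        # Split/merge resampling within one progress-coordinate bin.
        import random

        def resample_bin(walkers, target):
            # walkers: list of (weight, state); returns `target` walkers, same total weight
            walkers = sorted(walkers, key=lambda w: w[0])
            while len(walkers) > target:                       # merge the two lightest walkers
                (w1, s1), (w2, s2) = walkers[0], walkers[1]
                keep = s1 if random.random() < w1 / (w1 + w2) else s2
                walkers = [(w1 + w2, keep)] + walkers[2:]
                walkers.sort(key=lambda w: w[0])
            while len(walkers) < target:                       # split the heaviest walker
                w, s = walkers.pop()
                walkers += [(w / 2.0, s), (w / 2.0, s)]
                walkers.sort(key=lambda w: w[0])
            return walkers

        random.seed(0)
        bin_walkers = [(0.05, "a"), (0.15, "b"), (0.30, "c")]
        resampled = resample_bin(bin_walkers, target=4)
        print(resampled, sum(w for w, _ in resampled))          # total weight stays 0.5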

  1. Integrated multimodal human-computer interface and augmented reality for interactive display applications

    NASA Astrophysics Data System (ADS)

    Vassiliou, Marius S.; Sundareswaran, Venkataraman; Chen, S.; Behringer, Reinhold; Tam, Clement K.; Chan, M.; Bangayan, Phil T.; McGee, Joshua H.

    2000-08-01

    We describe new systems for improved integrated multimodal human-computer interaction and augmented reality for a diverse array of applications, including future advanced cockpits, tactical operations centers, and others. We have developed an integrated display system featuring: speech recognition of multiple concurrent users equipped with both standard air-coupled microphones and novel throat-coupled sensors (developed at Army Research Labs for increased noise immunity); lip reading for improving speech recognition accuracy in noisy environments; three-dimensional spatialized audio for improved display of warnings, alerts, and other information; wireless, coordinated handheld-PC control of a large display; real-time display of data and inferences from wireless integrated networked sensors with on-board signal processing and discrimination; gesture control with disambiguated point-and-speak capability; head- and eye-tracking coupled with speech recognition for 'look-and-speak' interaction; and integrated tetherless augmented reality on a wearable computer. The various interaction modalities (speech recognition, 3D audio, eye tracking, etc.) are implemented as 'modality servers' in an Internet-based client-server architecture. Each modality server encapsulates and exposes commercial and research software packages, presenting a socket network interface that is abstracted to a high-level interface, minimizing both vendor dependencies and required changes on the client side as the server's technology improves.

  2. Demonstration of two-qubit algorithms with a superconducting quantum processor.

    PubMed

    DiCarlo, L; Chow, J M; Gambetta, J M; Bishop, Lev S; Johnson, B R; Schuster, D I; Majer, J; Blais, A; Frunzio, L; Girvin, S M; Schoelkopf, R J

    2009-07-09

    Quantum computers, which harness the superposition and entanglement of physical states, could outperform their classical counterparts in solving problems with technological impact-such as factoring large numbers and searching databases. A quantum processor executes algorithms by applying a programmable sequence of gates to an initialized register of qubits, which coherently evolves into a final state containing the result of the computation. Building a quantum processor is challenging because of the need to meet simultaneously requirements that are in conflict: state preparation, long coherence times, universal gate operations and qubit readout. Processors based on a few qubits have been demonstrated using nuclear magnetic resonance, cold ion trap and optical systems, but a solid-state realization has remained an outstanding challenge. Here we demonstrate a two-qubit superconducting processor and the implementation of the Grover search and Deutsch-Jozsa quantum algorithms. We use a two-qubit interaction, tunable in strength by two orders of magnitude on nanosecond timescales, which is mediated by a cavity bus in a circuit quantum electrodynamics architecture. This interaction allows the generation of highly entangled states with concurrence up to 94 per cent. Although this processor constitutes an important step in quantum computing with integrated circuits, continuing efforts to increase qubit coherence times, gate performance and register size will be required to fulfil the promise of a scalable technology.

  3. AWE-WQ: Fast-Forwarding Molecular Dynamics Using the Accelerated Weighted Ensemble

    PubMed Central

    2015-01-01

    A limitation of traditional molecular dynamics (MD) is that reaction rates are difficult to compute, because transitions between metastable states are rarely observed: high energy barriers trap the system in these states. Recently the weighted ensemble (WE) family of methods has emerged; these methods can flexibly and efficiently sample conformational space without becoming trapped and allow calculation of unbiased rates. However, while WE can sample correctly and efficiently, a scalable implementation applicable to interesting biomolecular systems has not been available. We provide here a GPLv2 implementation, called AWE-WQ, of a WE algorithm using the master/worker distributed computing WorkQueue (WQ) framework. AWE-WQ scales to thousands of nodes and supports dynamic allocation of computing resources, heterogeneous resource usage (such as central processing units (CPUs) and graphics processing units (GPUs) used concurrently), seamless use of heterogeneous clusters (e.g., campus grids and cloud providers), and arbitrary MD codes such as GROMACS, while ensuring that all statistics remain unbiased. We applied AWE-WQ to a 34-residue protein, simulating 1.5 ms of dynamics over 8 months with a peak aggregate performance of 1000 ns/h. Comparison was made with a 200 μs simulation collected on a GPU over a similar timespan. The folding and unfolding rates were of comparable accuracy. PMID:25207854

  4. Stochastic local operations and classical communication (SLOCC) and local unitary operations (LU) classifications of n qubits via ranks and singular values of the spin-flipping matrices

    NASA Astrophysics Data System (ADS)

    Li, Dafa

    2018-06-01

    We construct ℓ-spin-flipping matrices from the coefficient matrices of pure states of n qubits and show that the ℓ-spin-flipping matrices are congruent and unitarily congruent whenever two pure states of n qubits are SLOCC and LU equivalent, respectively. The congruence implies the invariance of the ranks of the ℓ-spin-flipping matrices under SLOCC and thus permits a reduction of the SLOCC classification of n qubits to the calculation of ranks of the ℓ-spin-flipping matrices. The unitary congruence implies the invariance of the singular values of the ℓ-spin-flipping matrices under LU and thus permits a reduction of the LU classification of n qubits to the calculation of singular values of the ℓ-spin-flipping matrices. Furthermore, we show that the invariance of the singular values of the ℓ-spin-flipping matrices Ω_1^{(n)} implies the invariance of the concurrence for even n qubits and of the n-tangle for odd n qubits. Thus, the concurrence and the n-tangle can be used for LU classification, and computing the concurrence and the n-tangle requires only additions and multiplications of the coefficients of the states.
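
    As a small illustration of the last point, for a pure two-qubit state |ψ⟩ = a|00⟩ + b|01⟩ + c|10⟩ + d|11⟩ the concurrence reduces to C = 2|ad - bc|, i.e., only additions and multiplications of the coefficients. The sketch below (a minimal numeric check, not code from the record) computes it for two example states.

      import numpy as np

      def concurrence_2qubit(coeffs):
          """Concurrence of a normalized pure 2-qubit state given as [a00, a01, a10, a11]."""
          a, b, c, d = coeffs
          return 2.0 * abs(a * d - b * c)

      bell = np.array([1, 0, 0, 1]) / np.sqrt(2)          # (|00> + |11>)/sqrt(2): maximally entangled
      prod = np.array([1, 1, 0, 0]) / np.sqrt(2)          # |0>(|0>+|1>)/sqrt(2): product state
      print(concurrence_2qubit(bell))                     # -> 1.0
      print(concurrence_2qubit(prod))                     # -> 0.0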

  5. Bi-Level Integrated System Synthesis (BLISS)

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Agte, Jeremy S.; Sandusky, Robert R., Jr.

    1998-01-01

    BLISS is a method for optimization of engineering systems by decomposition. It separates the system-level optimization, which has a relatively small number of design variables, from the potentially numerous subsystem optimizations that may each have a large number of local design variables. The subsystem optimizations are autonomous and may be conducted concurrently. Subsystem and system optimizations alternate, linked by sensitivity data, producing a design improvement in each iteration. Starting from a best-guess initial design, the method improves that design in iterative cycles, each cycle comprising two steps. In step one, the system-level variables are frozen and the improvement is achieved by separate, concurrent, and autonomous optimizations in the local variable subdomains. In step two, further improvement is sought in the space of the system-level variables. Optimum sensitivity data link the second step to the first. The method prototype was implemented using MATLAB and iSIGHT programming software and tested on a simplified, conceptual-level supersonic business jet design and on a detailed design of an electronic device. Satisfactory convergence and favorable agreement with the benchmark results were observed. The modularity of the method is intended to fit the human organization and to map well onto concurrent processing technology.
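
    As an illustration of the two-step cycle just described, the following toy sketch alternates concurrent subsystem optimizations (with the system variable frozen) and a system-level update; the quadratic objectives and the use of scipy.optimize are assumptions for illustration only, not the BLISS prototype.

      from scipy.optimize import minimize_scalar

      # Toy problem: one system-level variable z and two subsystems with local variables x1, x2.
      def f1(x1, z): return (x1 - z) ** 2 + 0.1 * x1 ** 2            # subsystem 1 objective
      def f2(x2, z): return (x2 + z) ** 2 + 0.1 * x2 ** 2            # subsystem 2 objective
      def system_obj(z, x1, x2): return f1(x1, z) + f2(x2, z) + 0.5 * (z - 1.0) ** 2

      z, x1, x2 = 0.0, 0.0, 0.0
      for cycle in range(20):
          # Step 1: freeze z; the subsystem optimizations are independent and could run concurrently.
          x1 = minimize_scalar(lambda x: f1(x, z)).x
          x2 = minimize_scalar(lambda x: f2(x, z)).x
          # Step 2: freeze the local variables; improve the system-level variable.
          z = minimize_scalar(lambda zz: system_obj(zz, x1, x2)).x

      print(f"z={z:.4f}, x1={x1:.4f}, x2={x2:.4f}, obj={system_obj(z, x1, x2):.4f}")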

  6. Concurrent information affects response inhibition processes via the modulation of theta oscillations in cognitive control networks.

    PubMed

    Chmielewski, Witold X; Mückschel, Moritz; Dippel, Gabriel; Beste, Christian

    2016-11-01

    Inhibiting responses is a challenge whose outcome partly depends on the situational context. In everyday situations, response inhibition performance might be altered when irrelevant input is presented simultaneously with the information relevant for response inhibition. More specifically, irrelevant concurrent information may either brace or interfere with response-relevant information, depending on whether these inputs are redundant or conflicting. The aim of this study is to investigate the neurophysiological mechanisms and the network underlying such modulations, using EEG beamforming. The results show that, in comparison to a baseline condition without concurrent information, response inhibition performance can be aggravated or facilitated by manipulating the extent of conflict via concurrent input. This depends on whether the requirement for cognitive control is high, as in conflicting trials, or low, as in redundant trials. In line with this, total theta frequency power decreases in a right-hemispheric orbitofrontal response inhibition network including the SFG, MFG, and SMA when concurrent redundant information facilitates response inhibition processes. Vice versa, theta activity in a left-hemispheric response inhibition network (i.e., SFG, MFG, and IFG) increases when conflicting concurrent information compromises response inhibition processes. We conclude that concurrent information bi-directionally shifts response inhibition performance and modulates the network architecture underlying theta oscillations, which signal different levels of the need for cognitive control.

  7. Effectiveness of concurrent procedures during high tibial osteotomy for medial compartment osteoarthritis: a systematic review and meta-analysis.

    PubMed

    Lee, O-Sung; Ahn, Soyeon; Ahn, Jin Hwan; Teo, Seow Hui; Lee, Yong Seuk

    2018-02-01

    The purpose of this systematic review and meta-analysis was to evaluate the efficacy of concurrent cartilage procedures during high tibial osteotomy (HTO) for medial compartment osteoarthritis (OA) by comparing the outcomes of studies that directly compared HTO plus concurrent cartilage procedures versus HTO alone. Results that could be compared across more than two articles were presented as forest plots. A 95% confidence interval was calculated for each effect size, and we calculated the I² statistic, which represents the percentage of total variation attributable to heterogeneity among studies. The random effects model was used to calculate the effect size. Seven articles were included in the final analysis. Case groups were composed of HTO without concurrent procedures and control groups were composed of HTO with concurrent procedures such as marrow stimulation, mesenchymal stem cell transplantation, and injection. The case group showed a higher Hospital for Special Surgery score, with a mean difference of 4.10 (I² 80.8%, 95% confidence interval (CI) -9.02 to 4.82). The mean difference in the mechanical femorotibial angle in five studies was 0.08° (I² 0%, 95% CI -0.26 to 0.43). However, improved arthroscopic, histologic, and MRI results were reported in the control group. Our analysis supports that concurrent procedures during HTO for medial compartment OA have little beneficial effect on clinical and radiological outcomes. However, they might have some beneficial effect on arthroscopic, histologic, and MRI findings, even though the quality of the healed cartilage is not as good as that of the original cartilage. Therefore, until now, concurrent procedures for medial compartment OA have been considered optional. Nevertheless, no conclusions can be drawn for younger patients with focal cartilage defects and concomitant varus deformity. This question needs to be addressed separately.

  8. Concurrent use of alcohol interactive medications and alcohol in older adults: a systematic review of prevalence and associated adverse outcomes.

    PubMed

    Holton, Alice E; Gallagher, Paul; Fahey, Tom; Cousins, Gráinne

    2017-07-17

    Older adults are susceptible to adverse effects from the concurrent use of medications and alcohol. The aim of this study was to systematically review the prevalence of concurrent use of alcohol and alcohol-interactive (AI) medicines in older adults and associated adverse outcomes. A systematic search was performed using MEDLINE (PubMed), Embase, Scopus and Web of Science (January 1990 to June 2016), and hand searching references of retrieved articles. Observational studies reporting on the concurrent use of alcohol and AI medicines in the same or overlapping recall periods in older adults were included. Two independent reviewers verified that studies met the inclusion criteria, critically appraised included studies and extracted relevant data. A narrative synthesis is provided. Twenty studies, all cross-sectional, were included. Nine studies classified a wide range of medicines as AI using different medication compendia, thus resulting in heterogeneity across studies. Three studies investigated any medication use and eight focused on psychotropic medications. Based on the quality assessment of included studies, the most reliable estimate of concurrent use in older adults ranges between 21 and 35%. The most reliable estimate of concurrent use of psychotropic medications and alcohol ranges between 7.4 and 7.75%. No study examined longitudinal associations with adverse outcomes. Three cross-sectional studies reported on falls with mixed findings, while one study reported on the association between moderate alcohol consumption and adverse drug reactions at hospital admission. While there appears to be a high propensity for alcohol-medication interactions in older adults, there is a lack of consensus regarding what constitutes an AI medication. An explicit list of AI medications needs to be derived and validated prospectively to quantify the magnitude of risk posed by the concurrent use of alcohol for adverse outcomes in older adults. This will allow for risk stratification of older adults at the point of prescribing, and prioritise alcohol screening and brief alcohol interventions in high-risk groups.

  9. Cooperating reduction machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kluge, W.E.

    1983-11-01

    This paper presents a concept and a system architecture for the concurrent execution of program expressions of a concrete reduction language based on lambda-expressions. If formulated appropriately, these expressions are well suited for concurrent execution, following a demand-driven model of computation. In particular, recursive program expressions with nonlinear expansion may, at run time, be recursively partitioned into a hierarchy of independent subexpressions which can be reduced by a corresponding hierarchy of virtual reduction machines. This hierarchy unfolds and collapses dynamically, with virtual machines recursively assuming the role of masters that create and eventually terminate, or synchronize with, slaves. The paper also proposes a nonhierarchically organized system of reduction machines, each featuring a stack architecture, that effectively supports the allocation of virtual machines to the real machines of the system in compliance with their hierarchical order of creation and termination. 25 references.
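
    A rough flavor of the demand-driven master/slave unfolding can be given with an expression-tree reduction in which independent subexpressions are handed to concurrent workers; this sketch uses Python's concurrent.futures purely as a stand-in for the virtual reduction machines described above.

      from concurrent.futures import ProcessPoolExecutor
      import operator

      # Expression tree: ('op', left, right) with op in {'+', '*'}, or a plain number (leaf).
      OPS = {'+': operator.add, '*': operator.mul}

      def reduce_expr(expr):
          """Sequentially reduce an expression tree (one 'slave' reduction machine)."""
          if not isinstance(expr, tuple):
              return expr
          op, left, right = expr
          return OPS[op](reduce_expr(left), reduce_expr(right))

      def reduce_concurrent(expr, workers=2):
          """The master partitions the top-level expression; slaves reduce subexpressions concurrently."""
          if not isinstance(expr, tuple):
              return expr
          op, left, right = expr
          with ProcessPoolExecutor(max_workers=workers) as pool:
              lf, rf = pool.submit(reduce_expr, left), pool.submit(reduce_expr, right)
              return OPS[op](lf.result(), rf.result())

      if __name__ == "__main__":
          e = ('+', ('*', 3, ('+', 1, 2)), ('*', ('+', 4, 5), 6))
          print(reduce_concurrent(e))   # 3*(1+2) + (4+5)*6 = 63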

  10. Study on multi-satellite, multi-measurement of the structure of the earth's bow shock

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The pulsation model of the earth's bow shock proposed a nonuniform shock having both perpendicular (abrupt, monotonic) and oblique (oscillatory, multigradient) properties simultaneously, depending on local orientation of the shock surface to the interplanetary field B sub sw in parallel planes defined by B sub sw and solar wind velocity. Multiple, concurrent, satellite observations of the shock and solar wind conditions were used. Twenty-six potentially useful intervals of concurrent Explorer 33 and 35 data acquisition were examined, of which six were selected for closer study. In addition, two years of OGO-5 and HEOS 1 magnetometer data were examined for possible conjunctions to these spacecraft having applicable data. One case of clear nonuniformity and several of field-dependent structure were documented. A computational aid, called pulsation index, was developed.

  11. Software-safety and software quality assurance in real-time applications Part 2: Real-time structures and languages

    NASA Astrophysics Data System (ADS)

    Schoitsch, Erwin

    1988-07-01

    Our society depends more and more on the reliability of embedded (real-time) computer systems, even in everyday life. Considering the complexity of the real world, this might become a severe threat. Real-time programming is a discipline important not only in process control and data acquisition systems, but also in fields like communication, office automation, interactive databases, interactive graphics and operating systems development. General concepts of concurrent programming and constructs for process synchronization are discussed in detail. Tasking and synchronization concepts, methods of process communication, and interrupt and timeout handling in systems based on semaphores, signals, conditional critical regions or on real-time languages like Concurrent PASCAL, MODULA, CHILL and ADA are explained and compared with each other and with respect to their implications for quality and safety.
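
    As a concrete example of the semaphore-based synchronization constructs compared in this record, a classic bounded-buffer producer/consumer pair can be written as follows; the sketch uses Python threading for illustration and is not tied to any of the languages named above.

      import threading
      from collections import deque

      BUF_SIZE = 4
      buffer = deque()
      empty = threading.Semaphore(BUF_SIZE)   # counts free slots in the buffer
      full = threading.Semaphore(0)           # counts filled slots
      mutex = threading.Lock()                # protects the buffer itself

      def producer(n):
          for i in range(n):
              empty.acquire()                 # wait for a free slot
              with mutex:
                  buffer.append(i)
              full.release()                  # signal one more item

      def consumer(n, out):
          for _ in range(n):
              full.acquire()                  # wait for an item
              with mutex:
                  out.append(buffer.popleft())
              empty.release()                 # signal one more free slot

      results = []
      t1 = threading.Thread(target=producer, args=(10,))
      t2 = threading.Thread(target=consumer, args=(10, results))
      t1.start(); t2.start(); t1.join(); t2.join()
      print(results)                          # items 0..9, in order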

  12. High-dose accelerated hypofractionated three-dimensional conformal radiotherapy (at 3 Gy/fraction) with concurrent vinorelbine and carboplatin chemotherapy in locally advanced non-small-cell lung cancer: a feasibility study.

    PubMed

    Liu, Yue-E; Lin, Qiang; Meng, Fan-Jie; Chen, Xue-Ji; Ren, Xiao-Cang; Cao, Bin; Wang, Na; Zong, Jie; Peng, Yu; Ku, Ya-Jun; Chen, Yan

    2013-08-11

    Increasing the radiotherapy dose can result in improved local control for non-small-cell lung cancer (NSCLC) and can thereby improve survival. Accelerated hypofractionated radiotherapy can expose tumors to a high dose of radiation in a short period of time, but the optimal treatment regimen remains unclear. The purpose of this study was to evaluate the feasibility of utilizing high-dose accelerated hypofractionated three-dimensional conformal radiotherapy (at 3 Gy/fraction) with concurrent vinorelbine (NVB) and carboplatin (CBP) chemotherapy for the treatment of locally advanced NSCLC. Untreated patients with unresectable stage IIIA/IIIB NSCLC or patients with a recurrence of NSCLC received accelerated hypofractionated three-dimensional conformal radiotherapy. The total dose was greater than or equal to 60 Gy. The accelerated hypofractionated radiotherapy was conducted once daily at 3 Gy/fraction with 5 fractions per week, and the radiotherapy was completed in 5 weeks. In addition to radiotherapy, the patients also received at least 1 cycle of a concurrent two-drug chemotherapy regimen of NVB and CBP. A total of 26 patients (19 previously untreated cases and 7 cases of recurrent disease) received 60-75 Gy radiotherapy with concurrent chemotherapy. All of the patients underwent evaluations for toxicity and preliminary therapeutic efficacy. There were no treatment-related deaths within the entire patient group. The major acute adverse reactions were radiation esophagitis (88.5%) and radiation pneumonitis (42.3%). The percentages of grade III acute radiation esophagitis and grade III radiation pneumonitis were 15.4% and 7.7%, respectively. Hematological toxicities were common but did not significantly affect the implementation of chemoradiotherapy after supportive treatment. Two patients who received the high dose of 75 Gy had grade III late esophageal toxicity, and none had grade IV or higher. Late lung toxicity of grade III or higher did not occur. High-dose accelerated hypofractionated three-dimensional conformal radiotherapy with a dose of 60 Gy or greater with concurrent NVB and CBP chemotherapy might be feasible; however, esophageal toxicity needs special attention. A phase I trial is recommended to determine the maximum tolerated radiation dose of accelerated hypofractionated radiotherapy with concurrent chemotherapy.

  13. High-dose accelerated hypofractionated three-dimensional conformal radiotherapy (at 3 Gy/fraction) with concurrent vinorelbine and carboplatin chemotherapy in locally advanced non-small-cell lung cancer: a feasibility study

    PubMed Central

    2013-01-01

    Background Increasing the radiotherapy dose can result in improved local control for non-small-cell lung cancer (NSCLC) and can thereby improve survival. Accelerated hypofractionated radiotherapy can expose tumors to a high dose of radiation in a short period of time, but the optimal treatment regimen remains unclear. The purpose of this study was to evaluate the feasibility of utilizing high-dose accelerated hypofractionated three-dimensional conformal radiotherapy (at 3 Gy/fraction) with concurrent vinorelbine (NVB) and carboplatin (CBP) chemotherapy for the treatment of locally advanced NSCLC. Methods Untreated patients with unresectable stage IIIA/IIIB NSCLC or patients with a recurrence of NSCLC received accelerated hypofractionated three-dimensional conformal radiotherapy. The total dose was greater than or equal to 60 Gy. The accelerated hypofractionated radiotherapy was conducted once daily at 3 Gy/fraction with 5 fractions per week, and the radiotherapy was completed in 5 weeks. In addition to radiotherapy, the patients also received at least 1 cycle of a concurrent two-drug chemotherapy regimen of NVB and CBP. Results A total of 26 patients (19 previously untreated cases and 7 cases of recurrent disease) received 60-75 Gy radiotherapy with concurrent chemotherapy. All of the patients underwent evaluations for toxicity and preliminary therapeutic efficacy. There were no treatment-related deaths within the entire patient group. The major acute adverse reactions were radiation esophagitis (88.5%) and radiation pneumonitis (42.3%). The percentages of grade III acute radiation esophagitis and grade III radiation pneumonitis were 15.4% and 7.7%, respectively. Hematological toxicities were common but did not significantly affect the implementation of chemoradiotherapy after supportive treatment. Two patients who received the high dose of 75 Gy had grade III late esophageal toxicity, and none had grade IV or higher. Late lung toxicity of grade III or higher did not occur. Conclusion High-dose accelerated hypofractionated three-dimensional conformal radiotherapy with a dose of 60 Gy or greater with concurrent NVB and CBP chemotherapy might be feasible; however, esophageal toxicity needs special attention. A phase I trial is recommended to determine the maximum tolerated radiation dose of accelerated hypofractionated radiotherapy with concurrent chemotherapy. PMID:23937855

  14. Test-retest reliability and concurrent validity of a web-based questionnaire measuring workstation and individual correlates of work postures during computer work.

    PubMed

    IJmker, Stefan; Mikkers, Janneke; Blatter, Birgitte M; van der Beek, Allard J; van Mechelen, Willem; Bongers, Paulien M

    2008-11-01

    "Ergonomic" questionnaires are widely used in epidemiological field studies to study the association between workstation characteristics, work posture and musculoskeletal disorders among office workers. Findings have been inconsistent regarding the putative adverse effect of work postures. Underestimation of the true association might be present in studies due to misclassification of subjects to risk (i.e. exposed to non-neutral working postures) and no-risk categories (i.e. not exposed to non-neutral working postures) based on questionnaire responses. The objective of this study was to estimate the amount of misclassification resulting from the use of questionnaires. Test-retest reliability and concurrent validity of a newly developed questionnaire was assessed. This questionnaire collects data on workstation characteristics and on individual characteristics during computer work (i.e. work postures, movements and habits). Pictures were added where possible to provide visual guidance. The study population consisted of 84 office workers of a research department. They filled out the questionnaire on the Internet twice, with an in-between period of 2 weeks. For a subgroup of workers (n=38), additional on-site observations and multiple manual goniometer measurements were performed. Percentage agreement ranged between 71% and 100% for the test-retest analysis, between 31% and 100% for the comparison between questionnaire and on-site observation, and between 26% and 71% for the comparison between questionnaire and manual goniometer measurements. For 9 out of 12 tested items, the percentage agreement between questionnaire and manual goniometer measurements was below 50%. The questionnaire collects reliable data on workstation characteristics and some individual characteristics during computer work (i.e. work movements and habits), but does not seem to be useful to collect data on work postures during computer work in epidemiological field studies among office workers.

  15. Is extracurricular participation associated with beneficial outcomes? Concurrent and longitudinal relations.

    PubMed

    Fredricks, Jennifer A; Eccles, Jacquelynne S

    2006-07-01

    The authors examined the relations between participation in a range of high school extracurricular contexts and developmental outcomes in adolescence and young adulthood among an economically diverse sample of African American and European American youths. In general, when some prior self-selection factors were controlled, 11th graders' participation in school clubs and organized sports was associated with concurrent indicators of academic and psychological adjustment and with drug and alcohol use. In addition, participation in 11th grade school clubs and prosocial activities was associated with educational status and civic engagement at 1 year after high school. A few of the concurrent and longitudinal relations between activity participation and development were moderated by race and gender. Finally, breadth of participation, or number of activity contexts, was associated with positive academic, psychological, and behavioral outcomes.

  16. Localizable entanglement in antiferromagnetic spin chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, B.-Q.; Korepin, V.E.

    2004-06-01

    Antiferromagnetic spin chains play an important role in condensed matter and statistical mechanics. Recently the XXX spin chain has been discussed in relation to information theory. Here we consider localizable entanglement: the amount of entanglement that can be localized on two spins by performing local measurements on the other individual spins (in a system of many interacting spins). We consider the ground state of the antiferromagnetic spin chain and study the localizable entanglement (represented by the concurrence) between two spins as a function of their distance. We start with the isotropic spin chain and then study the effects of anisotropy and a magnetic field. We conclude that anisotropy increases the localizable entanglement. We discovered high sensitivity to a magnetic field in cases of high symmetry. We also evaluated the concurrence of the two spins before the measurement, to illustrate that the measurement raises the concurrence.

  17. Practical Techniques for Language Design and Prototyping

    DTIC Science & Technology

    2005-01-01

    Practical Techniques for Language Design and Prototyping. Stehr, Mark-Oliver; Talcott, Carolyn L. Global computing involves the interplay of a vast variety of languages, but practically useful foundations for language ... framework, namely rewriting logic, that allows us to express (1) and (2) and, in addition, language aspects such as concurrency and non-determinism.

  18. A Computer Mediated Learning Environment for a Joint and Expeditionary Mindset

    DTIC Science & Technology

    2010-08-01

    ... Tashakkori & Teddlie, 1998). In the second part of each interview, the two experts were asked for their opinions on issues related to learner-centered ... naturalistic observations (Camic et al., 2003; Denzin & Lincoln, 2003; Tashakkori & Teddlie, 1998). The concurrent development of a grounded theory ... Tashakkori, A., & Teddlie, C. (1998). Mixed methodology: Combining qualitative and quantitative approaches. Thousand Oaks, CA: Sage.

  19. Concepts of Concurrent Programming

    DTIC Science & Technology

    1990-04-01

    ... to the material presented. Carriero, N., and Gelernter, D. "How to Write Parallel Programs: A Guide to the Perplexed." ACM ... between the architectures on which programs can be executed and the application domains from which problems are drawn. Our goal is to show how programs ... Sept. 1989), 251-510. Abstract: There are four papers: 1. Programming Languages for Distributed Computing Systems (52); 2. How to Write Parallel ...

  20. NAVO MSRC Navigator. Fall 2001

    DTIC Science & Technology

    2001-01-01

    Cover: virtual environment built by the NAVO MSRC Visualization Center for the Concurrent Computing Laboratory for Materials Simulation at Louisiana State University; the application allows researchers to visualize a million-atom simulation of an indentor puncturing a block of gallium ... A view from the VR Juggler simulator of the CAVE shows particles indicating snow (white) and ice (blue), with rainfall shown on the terrain and clouds ...

  1. Infectious Cognition: Risk Perception Affects Socially Shared Retrieval-Induced Forgetting of Medical Information.

    PubMed

    Coman, Alin; Berry, Jessica N

    2015-12-01

    When speakers selectively retrieve previously learned information, listeners often concurrently, and covertly, retrieve their memories of that information. This concurrent retrieval typically enhances memory for mentioned information (the rehearsal effect) and impairs memory for unmentioned but related information (socially shared retrieval-induced forgetting, SSRIF), relative to memory for unmentioned and unrelated information. Building on research showing that anxiety leads to increased attention to threat-relevant information, we explored whether concurrent retrieval is facilitated in high-anxiety real-world contexts. Participants first learned category-exemplar facts about meningococcal disease. Following a manipulation of perceived risk of infection (low vs. high risk), they listened to a mock radio show in which some of the facts were selectively practiced. Final recall tests showed that the rehearsal effect was equivalent between the two risk conditions, but SSRIF was significantly larger in the high-risk than in the low-risk condition. Thus, the tendency to exaggerate consequences of news events was found to have deleterious consequences. © The Author(s) 2015.

  2. Sexual Behaviors of US Women at Risk of HIV Acquisition: A Longitudinal Analysis of Findings from HPTN 064.

    PubMed

    Justman, J; Befus, M; Hughes, J; Wang, J; Golin, C E; Adimora, A A; Kuo, I; Haley, D F; Del Rio, C; El-Sadr, W M; Rompalo, A; Mannheimer, S; Soto-Torres, L; Hodder, S

    2015-07-01

    We describe the sexual behaviors of women at elevated risk of HIV acquisition who reside in areas of high HIV prevalence and poverty in the US. Participants in HPTN 064, a prospective HIV incidence study, provided information about individual sexual behaviors and male sexual partners in the past 6 months at baseline, 6- and 12-months. Independent predictors of consistent or increased temporal patterns for three high-risk sexual behaviors were assessed separately: exchange sex, unprotected anal intercourse (UAI) and concurrent partnerships. The baseline prevalence of each behavior was >30 % among the 2,099 participants, 88 % reported partner(s) with >1 HIV risk characteristic and both individual and partner risk characteristics decreased over time. Less than high school education and food insecurity predicted consistent/increased engagement in exchange sex and UAI, and partner's concurrency predicted participant concurrency. Our results demonstrate how interpersonal and social factors may influence sustained high-risk behavior by individuals and suggest that further study of the economic issues related to HIV risk could inform future prevention interventions.

  3. Parametric Response Mapping as an Indicator of Bronchiolitis Obliterans Syndrome following Hematopoietic Stem Cell Transplantation

    PubMed Central

    Galbán, Craig J.; Boes, Jennifer L.; Bule, Maria; Kitko, Carrie L; Couriel, Daniel R; Johnson, Timothy D.; Lama, Vihba; Telenga, Eef D.; van den Berge, Maarten; Rehemtulla, Alnawaz; Kazerooni, Ella A.; Ponkowski, Michael J.; Ross, Brian D.; Yanik, Gregory A.

    2014-01-01

    The management of bronchiolitis obliterans syndrome (BOS) following hematopoietic cell transplantation (HCT) presents many challenges, both diagnostically and therapeutically. We have developed a computed tomography (CT) voxel-wise methodology termed Parametric Response Mapping (PRM) that quantifies normal parenchyma (PRMNormal), functional small airway disease (PRMfSAD), emphysema (PRMEmph) and parenchymal disease (PRMPD) as relative lung volumes. We now investigate the use of PRM as an imaging biomarker in the diagnosis of BOS. PRM was applied to CT data from four patient cohorts: acute infection (n=11), BOS at onset (n=34), BOS plus infection (n=9), and age-matched, non-transplant controls (n=23). Pulmonary function tests and broncho-alveolar lavage (BAL) were used for group classification. Mean values for PRMfSAD were significantly greater in patients with BOS (38±2%) when compared to those with infection alone (17±4%, p<0.0001) and age-matched controls (8.4±1%, p<0.0001). Patients with BOS had similar PRMfSAD profiles, whether a concurrent infection was present or not. An optimal cut-point for PRMfSAD of 28% of the total lung volume was identified, with values >28% highly indicative of BOS occurrence. PRM may provide a major advance in our ability to identify the small airway obstruction that characterizes BOS, even in the presence of concurrent infection. PMID:24954547

  4. Mental health, concurrent disorders, and health care utilization in homeless women.

    PubMed

    Strehlau, Verena; Torchalla, Iris; Kathy, Li; Schuetz, Christian; Krausz, Michael

    2012-09-01

    This study assessed lifetime and current prevalence rates of mental disorders and concurrent mental and substance use disorders in a sample of homeless women. Current suicide risk and recent health service utilization were also examined in order to understand the complex mental health issues of this population and to inform the development of new treatment strategies that better meet their specific needs. A cross-sectional survey of 196 adult homeless women in three different Canadian cities was done. Participants were assessed using DSM-IV-based structured clinical interviews. Current diagnoses were compared to available mental health prevalence rates in the Canadian female general population. Current prevalence rates were 63% for any mental disorder, excluding substance use disorders; 17% for depressive episode; 10% for manic episode; 7% for psychotic disorder; 39% for anxiety disorders, 28% for posttraumatic stress disorder; and 19% for obsessive-compulsive disorder; 58% had concurrent substance dependence and mental disorders. Lifetime prevalence rates were notably higher. Current moderate or high suicide risk was found in 22% of the women. Participants used a variety of health services, especially emergency rooms, general practitioners, and walk-in clinics. Prevalence rates of mental disorders among homeless participants were substantially higher than among women from the general Canadian population. The percentage of participants with moderate or high suicide risk and concurrent disorders indicates a high severity of mental health symptomatology. Treatment and housing programs need to be accompanied by multidisciplinary, specialized interventions that account for high rates of complex mental health conditions.

  5. omniClassifier: a Desktop Grid Computing System for Big Data Prediction Modeling

    PubMed Central

    Phan, John H.; Kothari, Sonal; Wang, May D.

    2016-01-01

    Robust prediction models are important for numerous science, engineering, and biomedical applications. However, best-practice procedures for optimizing prediction models can be computationally complex, especially when choosing models from among hundreds or thousands of parameter choices. Computational complexity has further increased with the growth of data in these fields, concurrent with the era of “Big Data”. Grid computing is a potential solution to the computational challenges of Big Data. Desktop grid computing, which uses idle CPU cycles of commodity desktop machines, coupled with commercial cloud computing resources can enable research labs to gain easier and more cost effective access to vast computing resources. We have developed omniClassifier, a multi-purpose prediction modeling application that provides researchers with a tool for conducting machine learning research within the guidelines of recommended best-practices. omniClassifier is implemented as a desktop grid computing system using the Berkeley Open Infrastructure for Network Computing (BOINC) middleware. In addition to describing implementation details, we use various gene expression datasets to demonstrate the potential scalability of omniClassifier for efficient and robust Big Data prediction modeling. A prototype of omniClassifier can be accessed at http://omniclassifier.bme.gatech.edu/. PMID:27532062

  6. Autonomic and Adrenocortical Interactions Predict Mental Health in Late Adolescence: The TRAILS Study.

    PubMed

    Nederhof, Esther; Marceau, Kristine; Shirtcliff, Elizabeth A; Hastings, Paul D; Oldehinkel, Albertine J

    2015-07-01

    The present study is informed by the theory of allostatic load to examine how multiple stress responsive biomarkers are related to mental health outcomes. Data are from the TRAILS study, a large prospective population study of 715 Dutch adolescents (50.9 % girls), assessed at 16.3 and 19.1 years. Reactivity measures of the hypothalamic pituitary-adrenal (HPA) axis and autonomic nervous system (ANS) biomarkers (heart rate, HR; respiratory sinus arrhythmia, RSA; and pre-ejection period, PEP) to a social stress task were used to predict concurrent and longitudinal changes in internalizing and externalizing symptoms. Hierarchical linear modeling revealed relatively few single effects for each biomarker with the exception that high HR reactivity predicted concurrent internalizing problems in boys. More interestingly, interactions were found between HPA-axis reactivity and sympathetic and parasympathetic reactivity. Boys with high HPA reactivity and low RSA reactivity had the largest increases in internalizing problems from 16 to 19 years. Youth with low HPA reactivity along with increased ANS activation characterized by both decreases in RSA and decreases in PEP had the most concurrent externalizing problems, consistent with broad theories of hypo-arousal. Youth with high HPA reactivity along with increases in RSA but decreases in PEP also had elevated concurrent externalizing problems, which increased over time, especially within boys. This profile illustrates the utility of examining the parasympathetic and sympathetic components of the ANS which can act in opposition to one another to achieve, overall, stress responsivity. The framework of allostasis and allostatic load is supported in that examination of multiple biomarkers working together in concert was of value in understanding mental health problems concurrently and longitudinally. Findings argue against an additive panel of risk and instead illustrate the dynamic interplay of stress physiology systems.

  7. The Perfect Storm: Concurrent Stress and Depressive Symptoms Increase Risk of Myocardial Infarction or Death

    PubMed Central

    Alcántara, Carmela; Muntner, Paul; Edmondson, Donald; Safford, Monika M.; Redmond, Nicole; Colantonio, Lisandro D.; Davidson, Karina W.

    2015-01-01

    Background Depression and stress have each been found to be associated with poor prognosis in coronary heart disease (CHD) patients. A recently offered ‘Psychosocial Perfect Storm’ conceptual model hypothesizes that amplified risk will occur in those with concurrent stress and depressive symptoms. We tested this hypothesis in a large sample of U.S. adults with CHD. Methods and Results Participants included 4487 adults with CHD from the REasons for Geographic and Racial Differences in Stroke (REGARDS) study, a prospective cohort study of 30,239 Black and White adults. We conducted Cox proportional hazards regression with the composite outcome of myocardial infarction (MI) or death and adjustment for demographic, clinical, and behavioral factors. Overall, 6.1% reported concurrent high stress and high depressive symptoms at baseline. Over a median 5.95 years of follow-up, 1,337 events occurred. In the first 2.5 years of follow-up, participants with concurrent high stress and high depressive symptoms had increased risk for MI or death (adjusted hazard ratio [HR]=1.48, [95% CI: 1.08–2.02]) relative to those with low stress and low depressive symptoms. Those with low stress and high depressive symptoms (HR=0.92, [95% CI: 0.66–1.28]) or high stress and low depressive symptoms (HR=0.86, [95% CI: 0.57–1.29]) were not at increased risk. The association with MI or death was not significant after the initial 2.5 years of follow-up (HR=0.89, [95% CI: 0.65–1.22]). Conclusions Our results provide initial support for a ‘Psychosocial Perfect Storm’ conceptual model; the confluence of depressive symptoms and stress on medical prognosis in adults with CHD may be particularly destructive in the shorter term. PMID:25759443

  8. High-dose versus standard-dose radiotherapy with concurrent chemotherapy in stages II-III esophageal cancer.

    PubMed

    Suh, Yang-Gun; Lee, Ik Jae; Koom, Wong Sub; Cha, Jihye; Lee, Jong Young; Kim, Soo Kon; Lee, Chang Geol

    2014-06-01

    In this study, we investigated the effects of radiotherapy ≥60 Gy in the setting of concurrent chemo-radiotherapy for treating patients with Stages II-III esophageal cancer. A total of 126 patients treated with 5-fluorouracil-based concurrent chemo-radiotherapy between January 1998 and February 2008 were retrospectively reviewed. Among these patients, 49 received a total radiation dose of <60 Gy (standard-dose group), while 77 received a total radiation dose of ≥60 Gy (high-dose group). The median doses in the standard- and high-dose groups were 54 Gy (range, 45-59.4 Gy) and 63 Gy (range, 60-81 Gy), respectively. The high-dose group showed significantly improved locoregional control (2-year locoregional control rate, 69 versus 32%, P < 0.01) and progression-free survival (2-year progression-free survival, 47 versus 20%, P = 0.01) than the standard-dose group. Median overall survival in the high- and the standard-dose groups was 28 and 18 months, respectively (P = 0.26). In multivariate analysis, 60 Gy or higher radiotherapy was a significant prognostic factor for improved locoregional control, progression-free survival and overall survival. No significant differences were found in frequencies of late radiation pneumonitis, post-treatment esophageal stricture or treatment-related mortality between the two groups. High-dose radiotherapy of 60 Gy or higher with concurrent chemotherapy improved locoregional control and progression-free survival without a significant increase in treatment-related toxicity in patients with Stages II-III esophageal cancer. Our study could provide the basis for future randomized clinical trials. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. Cognitive foundations for model-based sensor fusion

    NASA Astrophysics Data System (ADS)

    Perlovsky, Leonid I.; Weijers, Bertus; Mutz, Chris W.

    2003-08-01

    Target detection, tracking, and sensor fusion are complicated problems, which usually are performed sequentially. First detecting targets, then tracking, then fusing multiple sensors reduces computations. This procedure however is inapplicable to difficult targets which cannot be reliably detected using individual sensors, on individual scans or frames. In such more complicated cases one has to perform functions of fusing, tracking, and detecting concurrently. This often has led to prohibitive combinatorial complexity and, as a consequence, to sub-optimal performance as compared to the information-theoretic content of all the available data. It is well appreciated that in this task the human mind is by far superior qualitatively to existing mathematical methods of sensor fusion, however, the human mind is limited in the amount of information and speed of computation it can cope with. Therefore, research efforts have been devoted toward incorporating "biological lessons" into smart algorithms, yet success has been limited. Why is this so, and how to overcome existing limitations? The fundamental reasons for current limitations are analyzed and a potentially breakthrough research and development effort is outlined. We utilize the way our mind combines emotions and concepts in the thinking process and present the mathematical approach to accomplishing this in the current technology computers. The presentation will summarize the difficulties encountered by intelligent systems over the last 50 years related to combinatorial complexity, analyze the fundamental limitations of existing algorithms and neural networks, and relate it to the type of logic underlying the computational structure: formal, multivalued, and fuzzy logic. A new concept of dynamic logic will be introduced along with algorithms capable of pulling together all the available information from multiple sources. This new mathematical technique, like our brain, combines conceptual understanding with emotional evaluation and overcomes the combinatorial complexity of concurrent fusion, tracking, and detection. The presentation will discuss examples of performance, where computational speedups of many orders of magnitude were attained leading to performance improvements of up to 10 dB (and better).

  10. High-Cost Users of Prescription Drugs: A Population-Based Analysis from British Columbia, Canada.

    PubMed

    Weymann, Deirdre; Smolina, Kate; Gladstone, Emilie J; Morgan, Steven G

    2017-04-01

    To examine variation in pharmaceutical spending and patient characteristics across prescription drug user groups. British Columbia's population-based linked administrative health and sociodemographic databases (N = 3,460,763). We classified individuals into empirically derived prescription drug user groups based on pharmaceutical spending patterns outside hospitals from 2007 to 2011. We examined variation in patient characteristics, mortality, and health services usage and applied hierarchical clustering to determine patterns of concurrent drug use identifying high-cost patients. Approximately 1 in 20 British Columbians had persistently high prescription costs for 5 consecutive years, accounting for 42 percent of 2011 province-wide pharmaceutical spending. Less than 1 percent of the population experienced discrete episodes of high prescription costs; an additional 2.8 percent transitioned to or from high-cost episodes of unknown duration. Persistent high-cost users were more likely to concurrently use multiple chronic medications; episodic and transitory users spent more on specialized medicines, including outpatient cancer drugs. Cluster analyses revealed heterogeneity in concurrent medicine use within high-cost groups. Whether low, moderate, or high, costs of prescription drugs for most individuals are persistent over time. Policies controlling high-cost use should focus on reducing polypharmacy and encouraging price competition in drug classes used by ordinary and high-cost users alike. © 2016 The Authors. Health Services Research published by Wiley Periodicals, Inc. on behalf of Health Research and Educational Trust.

  11. Heterogeneous compute in computer vision: OpenCL in OpenCV

    NASA Astrophysics Data System (ADS)

    Gasparakis, Harris

    2014-02-01

    We explore the relevance of Heterogeneous System Architecture (HSA) in Computer Vision, both as a long term vision, and as a near term emerging reality via the recently ratified OpenCL 2.0 Khronos standard. After a brief review of OpenCL 1.2 and 2.0, including HSA features such as Shared Virtual Memory (SVM) and platform atomics, we identify what genres of Computer Vision workloads stand to benefit by leveraging those features, and we suggest a new mental framework that replaces GPU compute with hybrid HSA APU compute. As a case in point, we discuss, in some detail, popular object recognition algorithms (part-based models), emphasizing the interplay and concurrent collaboration between the GPU and CPU. We conclude by describing how OpenCL has been incorporated in OpenCV, a popular open source computer vision library, emphasizing recent work on the Transparent API, to appear in OpenCV 3.0, which unifies the native CPU and OpenCL execution paths under a single API, allowing the same code to execute either on the CPU or on an OpenCL-enabled device, without even recompiling.
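
    In OpenCV's Python binding, the Transparent API is reached through cv2.UMat: wrapping data in a UMat lets the same cv2 calls run on an OpenCL device when one is available and fall back to the CPU otherwise. A minimal sketch follows; the input file name is a placeholder, and actual dispatch depends on the local OpenCL runtime.

      import cv2

      cv2.ocl.setUseOpenCL(True)                      # request the OpenCL path if one is present
      print("OpenCL available:", cv2.ocl.haveOpenCL())

      img = cv2.imread("input.png")                   # placeholder file name
      u = cv2.UMat(img)                               # move the data behind the Transparent API

      gray = cv2.cvtColor(u, cv2.COLOR_BGR2GRAY)      # identical calls, device-agnostic execution
      edges = cv2.Canny(gray, 50, 150)

      cv2.imwrite("edges.png", edges.get())           # UMat.get() copies back to a host numpy array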

  12. Concurrent Validity and Classification Accuracy of the Leiter and Leiter-R in Low Functioning Children with Autism.

    ERIC Educational Resources Information Center

    Tsatsanis, Katherine D.; Dartnall, Nancy; Cicchetti, Domenic; Sparrow, Sara S.; Klin, Ami; Volkmar, Fred R.

    2003-01-01

    The concurrent validity of the original and revised versions of the Leiter International Performance Scale was examined with 26 children (ages 4-16) with autism. Although the correlation between the two tests was high (.87), there were significant intra-individual discrepancies present in 10 cases, two of which were both large and clinically…

  13. Multilevel Factor Structure, Concurrent Validity, and Test-Retest Reliability of the High School Teacher Version of the Authoritative School Climate Survey

    ERIC Educational Resources Information Center

    Huang, Francis L.; Cornell, Dewey G.

    2016-01-01

    Although school climate has long been recognized as an important factor in the school improvement process, there are few psychometrically supported measures based on teacher perspectives. The current study replicated and extended the factor structure, concurrent validity, and test-retest reliability of the teacher version of the Authoritative…

  14. Efficient scatter distribution estimation and correction in CBCT using concurrent Monte Carlo fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca; Verhaegen, F.; Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4

    2015-01-15

    Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations that are a subset of the projection angles used in the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S_MC. S_MC is fit to a function, S_F, and if the fit of S_F is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S_F, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency-limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were a pelvis scan of a phantom and patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as to determine a suitable fitness value. The simulated data were also used to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter-corrected reconstruction to the original uncorrected reconstruction, to a reconstruction corrected with a constant scatter estimate, and to a reconstruction created using a set of projections taken with a small cone angle. Results: Pearson’s correlation, r, proved to be a suitable GOF metric, correlating strongly with the actual error of the scatter fit, S_F. Fitting the scatter distribution to a limited sum of sine and cosine functions using a low-pass filtered fast Fourier transform provided a computationally efficient and accurate fit. The CMCF algorithm reduces the number of photon histories required by over four orders of magnitude. The simulated experiments showed that using a compensator reduced the computational time by a factor between 1.5 and 1.75. The scatter estimates for the simulated and measured data were computed in 35–93 s and 114–122 s, respectively, using 16 Intel Xeon cores (3.0 GHz). The CMCF scatter correction improved the contrast-to-noise ratio by 10%–50% and reduced the reconstruction error to under 3% for the simulated phantoms. Conclusions: The novel CMCF algorithm significantly reduces the computation time required to estimate the scatter distribution by reducing the statistical noise in the MC scatter estimate and limiting the number of projection angles that must be simulated. Using the scatter estimate provided by the CMCF algorithm to correct both simulated and real projection data showed improved reconstruction image quality.
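
    The fitting strategy described (a frequency-limited sum of sines and cosines obtained by low-pass filtering an FFT, with Pearson's r as the goodness-of-fit check) can be sketched in a few lines of numpy; the cutoff, the synthetic one-dimensional 'scatter profile', and the noise level below are illustrative assumptions, not values from the record.

      import numpy as np

      def lowpass_fourier_fit(y, keep=8):
          """Fit y with a frequency-limited Fourier series by zeroing high-frequency FFT bins."""
          Y = np.fft.rfft(y)
          Y[keep:] = 0.0                      # keep only the lowest 'keep' frequencies
          return np.fft.irfft(Y, n=len(y))

      def pearson_r(a, b):
          return np.corrcoef(a, b)[0, 1]

      # Synthetic smooth "scatter profile" plus noise, standing in for S_MC along one detector row.
      x = np.linspace(0.0, 1.0, 256)
      s_mc = 1.0 + 0.5 * np.sin(2 * np.pi * x) + 0.2 * np.cos(6 * np.pi * x) + 0.05 * np.random.randn(x.size)

      s_f = lowpass_fourier_fit(s_mc)
      print("GOF (Pearson r):", pearson_r(s_mc, s_f))  # stop adding MC histories once r exceeds a threshold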

  15. Communications oriented programming of parallel iterative solutions of sparse linear systems

    NASA Technical Reports Server (NTRS)

    Patrick, M. L.; Pratt, T. W.

    1986-01-01

    Parallel algorithms are developed for a class of scientific computational problems by partitioning the problems into smaller problems which may be solved concurrently. The effectiveness of the resulting parallel solutions is determined by the amount and frequency of communication and synchronization and the extent to which communication can be overlapped with computation. Three different parallel algorithms for solving the same class of problems are presented, and their effectiveness is analyzed from this point of view. The algorithms are programmed using a new programming environment. Run-time statistics and experience obtained from the execution of these programs assist in measuring the effectiveness of these algorithms.
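
    One simple way to partition such an iterative solve is a block Jacobi sweep: each worker updates its own block of unknowns from the previous global iterate, and all workers synchronize before the next sweep. The sketch below illustrates that partitioning with a serial loop over blocks (numpy, with a small diagonally dominant test matrix); it is not one of the three algorithms studied in the record.

      import numpy as np

      def block_jacobi(A, b, nblocks=4, iters=200):
          """Jacobi iteration with the unknowns partitioned into blocks (one block per worker)."""
          n = len(b)
          x = np.zeros(n)
          D = np.diag(A)
          blocks = np.array_split(np.arange(n), nblocks)
          for _ in range(iters):
              x_new = x.copy()
              for idx in blocks:                         # independent work, one block per processor
                  r = b[idx] - A[idx] @ x + D[idx] * x[idx]
                  x_new[idx] = r / D[idx]
              x = x_new                                  # synchronization point between sweeps
          return x

      n = 8
      A = 4.0 * np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
      b = np.ones(n)
      print(np.allclose(block_jacobi(A, b), np.linalg.solve(A, b)))   # -> True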

  16. Visual scanning with or without spatial uncertainty and time-sharing performance

    NASA Technical Reports Server (NTRS)

    Liu, Yili; Wickens, Christopher D.

    1989-01-01

    An experiment is reported that examines the pattern of task interference between visual scanning as a sequential and selective attention process and other concurrent spatial or verbal processing tasks. A distinction is proposed between visual scanning with or without spatial uncertainty regarding the possible differential effects of these two types of scanning on interference with other concurrent processes. The experiment required the subject to perform a simulated primary tracking task, which was time-shared with a secondary spatial or verbal decision task. The relevant information needed to perform the decision tasks was displayed with or without spatial uncertainty. The experiment employed a 2 x 2 x 2 design with types of scanning (with or without spatial uncertainty), expected scanning distance (low/high), and codes of concurrent processing (spatial/verbal) as the three experimental factors. The results provide strong evidence that visual scanning as a spatial exploratory activity produces greater task interference with concurrent spatial tasks than with concurrent verbal tasks. Furthermore, spatial uncertainty in visual scanning is identified to be the crucial factor in producing this differential effect.

  17. A wirelessly programmable actuation and sensing system for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Long, James; Büyüköztürk, Oral

    2016-04-01

    Wireless sensor networks promise to deliver low cost, low power and massively distributed systems for structural health monitoring. A key component of these systems, particularly when sampling rates are high, is the capability to process data within the network. Although progress has been made towards this vision, it remains a difficult task to develop and program 'smart' wireless sensing applications. In this paper we present a system which allows data acquisition and computational tasks to be specified in Python, a high level programming language, and executed within the sensor network. Key features of this system include the ability to execute custom application code without firmware updates, to run multiple users' requests concurrently and to conserve power through adjustable sleep settings. Specific examples of sensor node tasks are given to demonstrate the features of this system in the context of structural health monitoring. The system comprises of individual firmware for nodes in the wireless sensor network, and a gateway server and web application through which users can remotely submit their requests.
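
    As an illustration of the kind of in-network task such a system is meant to run, the sketch below shows a user-specified Python reduction that a node could execute, returning a compact summary instead of raw samples. The acquisition step is replaced here by a synthetic signal, and all names are illustrative rather than the system's actual API.

      import numpy as np

      def process_burst(samples, rate_hz):
          """In-network reduction: return a compact summary instead of raw samples."""
          spectrum = np.abs(np.fft.rfft(samples))
          peak_bin = int(np.argmax(spectrum[1:])) + 1   # skip the DC bin
          return {"rms": float(np.sqrt(np.mean(samples ** 2))),
                  "dominant_freq_hz": peak_bin * rate_hz / len(samples)}

      if __name__ == "__main__":
          # Stand-in for on-node acquisition: a noisy 12 Hz vibration signal.
          rate = 500
          t = np.arange(0, 10, 1.0 / rate)
          burst = np.sin(2 * np.pi * 12 * t) + 0.1 * np.random.randn(t.size)
          print(process_burst(burst, rate))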

  18. High-Throughput Characterization of Porous Materials Using Graphics Processing Units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jihan; Martin, Richard L.; Rübel, Oliver

    We have developed a high-throughput graphics processing units (GPU) code that can characterize a large database of crystalline porous materials. In our algorithm, the GPU is utilized to accelerate energy grid calculations where the grid values represent interactions (i.e., Lennard-Jones + Coulomb potentials) between gas molecules (i.e., CH4 and CO2) and the material's framework atoms. Using a parallel flood fill CPU algorithm, inaccessible regions inside the framework structures are identified and blocked based on their energy profiles. Finally, we compute the Henry coefficients and heats of adsorption through statistical Widom insertion Monte Carlo moves in the domain restricted to the accessible space. The code offers significant speedup over a single-core CPU code and allows us to characterize a set of porous materials at least an order of magnitude larger than those considered in earlier studies. For structures selected from such a prescreening algorithm, full adsorption isotherms can be calculated by conducting multiple grand canonical Monte Carlo simulations concurrently within the GPU.
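
    The final Widom-insertion step can be sketched as follows (illustrative only, not the authors' GPU code): test insertions are drawn from the accessible points of a precomputed energy grid and their Boltzmann factors are averaged, giving a quantity proportional to the Henry coefficient. Real codes would also rotate multi-site molecules and normalize by framework density.

      import numpy as np

      def henry_coefficient(energy_grid, accessible_mask, temperature_k, rng=None):
          """Widom-insertion estimate from a precomputed guest-framework energy
          grid (in units of E/k_B), restricted to grid points flagged accessible.
          Returns the Boltzmann-factor average, proportional to K_H."""
          rng = np.random.default_rng(rng)
          beta = 1.0 / temperature_k
          candidates = np.flatnonzero(accessible_mask.ravel())
          picks = rng.choice(candidates, size=100_000)   # random insertion points
          boltzmann = np.exp(-beta * energy_grid.ravel()[picks])
          return boltzmann.mean()

      # Example with a synthetic grid: weakly attractive sites, one blocked slab.
      grid = np.full((32, 32, 32), -5.0)     # -5 K adsorption energy everywhere
      mask = np.ones_like(grid, dtype=bool)
      mask[:4] = False                       # pretend one slab is inaccessible
      print(henry_coefficient(grid, mask, temperature_k=298.0))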

  19. Fast concurrent array-based stacks, queues and deques using fetch-and-increment-bounded, fetch-and-decrement-bounded and store-on-twin synchronization primitives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Dong; Gara, Alana; Heidelberger, Philip

    Implementation primitives for concurrent array-based stacks, queues, double-ended queues (deques), and wrapped deques are provided. In one aspect, each element of the stack, queue, deque, or wrapped deque data structure has its own ticket lock, allowing multiple threads to use multiple elements of the data structure concurrently and thus achieving high performance. In another aspect, new synchronization primitives FetchAndIncrementBounded (Counter, Bound) and FetchAndDecrementBounded (Counter, Bound) are implemented. These primitives can be implemented in hardware and thus promise very high throughput for queues, stacks, and double-ended queues.
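
    A software emulation can illustrate the bounded fetch-and-increment idea. The record describes a hardware atomic primitive; the lock below merely stands in for that atomicity, and the queue is a simplified single-consumer illustration, not the patented design.

      import threading

      class FetchAndIncrementBounded:
          """Emulation of the bounded fetch-and-increment primitive: atomically
          return the counter and increment it, unless the bound has been reached,
          in which case report failure. A lock stands in for hardware atomicity."""
          def __init__(self, bound):
              self._value = 0
              self._bound = bound
              self._lock = threading.Lock()

          def __call__(self):
              with self._lock:
                  if self._value >= self._bound:
                      return None              # bounded: refuse to pass the limit
                  ticket, self._value = self._value, self._value + 1
                  return ticket

      class ArrayQueue:
          """Array-based queue where enqueuers claim slots via the bounded counter,
          so concurrent producers never overrun the array. Dequeue is simplified
          to a single consumer for brevity."""
          def __init__(self, capacity):
              self._slots = [None] * capacity
              self._claim = FetchAndIncrementBounded(capacity)
              self._head = 0

          def enqueue(self, item):
              slot = self._claim()
              if slot is None:
                  return False                 # queue full
              self._slots[slot] = item
              return True

          def dequeue(self):
              if self._head >= len(self._slots) or self._slots[self._head] is None:
                  return None
              item, self._head = self._slots[self._head], self._head + 1
              return item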

  20. Domain decomposition for aerodynamic and aeroacoustic analyses, and optimization

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay

    1995-01-01

    The overarching theme was domain decomposition, which was intended to improve the numerical solution technique for the partial differential equations at hand; in the present study, those governing either the fluid flow, the aeroacoustic wave propagation, or the sensitivity analysis for a gradient-based optimization. The role of the domain decomposition extended beyond the original impetus of discretizing geometrically complex regions or writing modular software for distributed-hardware computers: it induced function-space decompositions and operator decompositions that offered the valuable property of near independence of operator evaluation tasks. The objectives centered on extending and implementing methodologies that were either previously developed or being developed concurrently: (1) aerodynamic sensitivity analysis with domain decomposition (SADD); (2) computational aeroacoustics of cavities; and (3) dynamic, multibody computational fluid dynamics using unstructured meshes.
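
    The "near independence of operator evaluation tasks" can be illustrated with a small overlapping block (Schwarz-type) decomposition of a 1D model problem; each subdomain solve below uses only current boundary data from its neighbors and could therefore run concurrently. This is a generic sketch, not the SADD methodology itself, and all parameter values are illustrative.

      import numpy as np

      def overlapping_block_sweep(n=100, n_sub=4, overlap=4, sweeps=30):
          """Overlapping block-Jacobi (Schwarz-type) sweeps for -u'' = 1 on (0,1),
          u(0)=u(1)=0. Each subdomain is solved exactly using the current values
          outside it; the subdomain solves are independent within a sweep. In this
          simplified sketch, overlap points take the last subdomain's value."""
          h = 1.0 / (n + 1)
          A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
               - np.diag(np.ones(n - 1), -1)) / h**2
          f = np.ones(n)
          u = np.zeros(n)
          edges = np.linspace(0, n, n_sub + 1, dtype=int)
          subs = [(max(0, edges[i] - overlap), min(n, edges[i + 1] + overlap))
                  for i in range(n_sub)]
          for _ in range(sweeps):
              u_new = u.copy()
              for lo, hi in subs:              # independent subdomain solves
                  rhs = f[lo:hi] - A[lo:hi, :] @ u + A[lo:hi, lo:hi] @ u[lo:hi]
                  u_new[lo:hi] = np.linalg.solve(A[lo:hi, lo:hi], rhs)
              u = u_new
          return u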
