Sample records for previous computational work

  1. Computational Fluid Dynamics at ICMA (Institute for Computational Mathematics and Applications)

    DTIC Science & Technology

    1988-10-18

    Personal author(s): Charles A. Hall and Thomas A. Porsching. ...abstracts of ten ICMA (Institute for Computational Mathematics and Applications) personnel, relating to the general area of computational fluid mechanics...questions raised in the previous subsection. Our previous work in this area concentrated on a study of the differential geometric aspects of the problem.

  2. ORCA Project: Research on high-performance parallel computer programming environments. Final report, 1 Apr-31 Mar 90

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, L.; Notkin, D.; Adams, L.

    1990-03-31

    This task relates to research on programming massively parallel computers. Previous work on the Ensemble concept of programming was extended, and an investigation into nonshared memory models of parallel computation was undertaken. Previous work on the Ensemble concept defined a set of programming abstractions and was used to organize the programming task into three distinct levels: composition of machine instructions, composition of processes, and composition of phases. It was applied to shared memory models of computation. During the present research period, these concepts were extended to nonshared memory models. During the present research period, one Ph.D. thesis was completed, and one book chapter and six conference proceedings were published.

  3. Sleep problems and computer use during work and leisure: Cross-sectional study among 7800 adults.

    PubMed

    Andersen, Lars Louis; Garde, Anne Helene

    2015-01-01

    Previous studies linked heavy computer use to disturbed sleep. This study investigates the association between computer use during work and leisure and sleep problems in working adults. From the 2010 round of the Danish Work Environment Cohort Study, currently employed wage earners on daytime schedule (N = 7883) replied to the Bergen insomnia scale and questions on weekly duration of computer use. Results showed that sleep problems for three or more days per week (average of six questions) were experienced by 14.9% of the respondents. Logistic regression analyses, controlled for gender, age, physical and psychosocial work factors, lifestyle, chronic disease and mental health showed that computer use during leisure for 30 or more hours per week (reference 0-10 hours per week) was associated with increased odds of sleep problems (OR 1.83 [95% CI 1.06-3.17]). Computer use during work and shorter duration of computer use during leisure were not associated with sleep problems. In conclusion, excessive computer use during leisure - but not work - is associated with sleep problems in adults working on daytime schedule.
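
    For readers unfamiliar with how such estimates are produced, the sketch below fits a logistic regression on synthetic data and extracts an odds ratio with its 95% CI, broadly mirroring the study's analysis. All variable names, prevalences, and effect sizes are invented stand-ins, not the cohort data.

    ```python
    # Illustrative only: synthetic data, not the Danish cohort data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 7883
    heavy_leisure_use = rng.binomial(1, 0.05, n)          # >=30 h/week (hypothetical prevalence)
    age = rng.normal(45, 10, n)                            # one stand-in covariate
    logit = -2.0 + 0.6 * heavy_leisure_use + 0.01 * (age - 45)
    sleep_problems = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = sm.add_constant(np.column_stack([heavy_leisure_use, age]))
    fit = sm.Logit(sleep_problems, X).fit(disp=False)
    or_est = np.exp(fit.params[1])                         # odds ratio for heavy use
    ci_low, ci_high = np.exp(fit.conf_int()[1])            # 95% confidence interval
    print(f"OR = {or_est:.2f} [95% CI {ci_low:.2f}-{ci_high:.2f}]")
    ```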

  4. Approaches to Classroom-Based Computational Science.

    ERIC Educational Resources Information Center

    Guzdial, Mark

    Computational science includes the use of computer-based modeling and simulation to define and test theories about scientific phenomena. The challenge for educators is to develop techniques for implementing computational science in the classroom. This paper reviews some previous work on the use of simulation alone (without modeling), modeling…

  5. 43 CFR 11.64 - Injury determination phase-testing and sampling methods.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... In developing these objectives, the availability of information from response actions relating to the...), test cases proving the code works, and any alteration of previously documented code made to adapt the... computer code (if any), test cases proving the code works, and any alteration of previously documented code...

  6. 43 CFR 11.64 - Injury determination phase-testing and sampling methods.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... In developing these objectives, the availability of information from response actions relating to the...), test cases proving the code works, and any alteration of previously documented code made to adapt the... computer code (if any), test cases proving the code works, and any alteration of previously documented code...

  7. 43 CFR 11.64 - Injury determination phase-testing and sampling methods.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... In developing these objectives, the availability of information from response actions relating to the...), test cases proving the code works, and any alteration of previously documented code made to adapt the... computer code (if any), test cases proving the code works, and any alteration of previously documented code...

  8. Hexagonalization of correlation functions II: two-particle contributions

    NASA Astrophysics Data System (ADS)

    Fleury, Thiago; Komatsu, Shota

    2018-02-01

    In this work, we compute one-loop planar five-point functions in N=4 super-Yang-Mills using integrability. As in the previous work, we decompose the correlation functions into hexagon form factors and glue them using the weight factors which depend on the cross-ratios. The main new ingredient in the computation, as compared to the four-point functions studied in the previous paper, is the two-particle mirror contribution. We develop techniques to evaluate it and find agreement with the perturbative results in all the cases we analyzed. In addition, we consider next-to-extremal four-point functions, which are known to be protected, and show that the sum of one-particle and two-particle contributions at one loop adds up to zero as expected. The tools developed in this work would be useful for computing higher-particle contributions which would be relevant for more complicated quantities such as higher-loop corrections and non-planar correlators.

  9. Hegemony and Assessment: The Student Experience of Being in a Male Homogenous Higher Education Computing Course

    ERIC Educational Resources Information Center

    Sheedy, Caroline

    2018-01-01

    This work emanates from a previous study examining the experiences of male final year students in computing degree programmes that focused on their perceptions as students where they had few, if any, female classmates. This empirical work consisted of focus groups, with the findings outlined here drawn from two groups that were homogeneous with…

  10. Formal specification of human-computer interfaces

    NASA Technical Reports Server (NTRS)

    Auernheimer, Brent

    1990-01-01

    A high-level formal specification of a human computer interface is described. Previous work is reviewed and the ASLAN specification language is described. Top-level specifications written in ASLAN for a library and a multiwindow interface are discussed.

  11. Code Modernization of VPIC

    NASA Astrophysics Data System (ADS)

    Bird, Robert; Nystrom, David; Albright, Brian

    2017-10-01

    The ability of scientific simulations to effectively deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite breakthroughs in the areas of mini-app development, performance portability, and cache-oblivious algorithms, the problem still remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsics-based vectorization with compiler-generated auto-vectorization to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally, we include lessons learned. Work performed under the auspices of the U.S. Dept. of Energy by Los Alamos National Security, LLC, Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.

  12. 3D nonrigid registration via optimal mass transport on the GPU.

    PubMed

    Ur Rehman, Tauseef; Haber, Eldad; Pryor, Gallagher; Melonakos, John; Tannenbaum, Allen

    2009-12-01

    In this paper, we present a new computationally efficient numerical scheme for the minimizing flow approach for optimal mass transport (OMT) with applications to non-rigid 3D image registration. The approach utilizes all of the gray-scale data in both images, and the optimal mapping from image A to image B is the inverse of the optimal mapping from B to A. Further, no landmarks need to be specified, and the minimizer of the distance functional involved is unique. Our implementation also employs multigrid and parallel methodologies on a consumer graphics processing unit (GPU) for fast computation. Although computing the optimal map has been shown to be computationally expensive in the past, we show that our approach is orders of magnitude faster than previous work and is capable of finding transport maps with optimality measures (mean curl) previously unattainable by other works (which directly influences the accuracy of registration). We give results where the algorithm was used to compute non-rigid registrations of 3D synthetic data as well as intra-patient pre-operative and post-operative 3D brain MRI datasets.

  13. Bubble nucleation in simple and molecular liquids via the largest spherical cavity method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez, Miguel A., E-mail: m.gonzalez12@imperial.ac.uk; Department of Chemistry, Imperial College London, London SW7 2AZ; Abascal, José L. F.

    2015-04-21

    In this work, we propose a methodology to compute bubble nucleation free-energy barriers using trajectories generated via molecular dynamics simulations. We follow the bubble nucleation process by means of a local order parameter, defined by the volume of the largest spherical cavity (LSC) formed in the nucleating trajectories. This order parameter considerably simplifies the monitoring of nucleation events, as compared with previous approaches, which require ad hoc criteria to classify the atoms and molecules as liquid or vapor. The combination of the LSC and the mean first passage time technique can then be used to obtain the free-energy curves. Upon computation of the cavity distribution function, the nucleation rate and free-energy barrier can then be computed. We test our method against recent computations of bubble nucleation in simple liquids and water at negative pressures, and we obtain free-energy barriers in good agreement with the previous works. The LSC method provides a versatile and computationally efficient route to estimate the volume of critical bubbles and the nucleation rate, and to compute bubble nucleation free energies in both simple and molecular liquids.
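
    The LSC idea lends itself to a compact illustration. The sketch below finds the largest empty sphere in a toy configuration of random coordinates using a nearest-neighbour search; periodic boundary conditions and the mean-first-passage-time machinery are omitted, and nothing here is the authors' code.

    ```python
    # Minimal sketch of the largest-spherical-cavity (LSC) order parameter:
    # the LSC radius is the largest distance from any probe point to its
    # nearest atom. Coordinates are random stand-ins for an MD snapshot.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(1)
    box = 10.0
    atoms = rng.uniform(0, box, size=(500, 3))        # hypothetical configuration

    # Probe the box on a uniform grid; each probe's nearest-atom distance
    # is the radius of the largest empty sphere centred there.
    g = np.linspace(0, box, 40)
    probes = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T
    dist, _ = cKDTree(atoms).query(probes)

    i = np.argmax(dist)
    print(f"LSC radius ~ {dist[i]:.3f}, centre {probes[i]}")
    ```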

  14. Automated social skills training with audiovisual information.

    PubMed

    Tanaka, Hiroki; Sakti, Sakriani; Neubig, Graham; Negoro, Hideki; Iwasaka, Hidemi; Nakamura, Satoshi

    2016-08-01

    People with social communication difficulties tend to have superior skills using computers, and as a result computer-based social skills training systems are flourishing. Social skills training, performed by human trainers, is a well-established method to obtain appropriate skills in social interaction. Previous works have attempted to automate one or several parts of social skills training through human-computer interaction. However, while previous work on simulating social skills training considered only acoustic and linguistic features, human social skills trainers take into account visual features (e.g. facial expression, posture). In this paper, we create and evaluate a social skills training system that closes this gap by considering audiovisual features regarding ratio of smiling, yaw, and pitch. An experimental evaluation measures the difference in effectiveness of social skill training when using audio features and audiovisual features. Results showed that the visual features were effective to improve users' social skills.

  15. ELEMENT MASSES IN THE CRAB NEBULA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sibley, Adam R.; Katz, Andrea M.; Satterfield, Timothy J.

    Using our previously published element abundance or mass-fraction distributions in the Crab Nebula, we derived actual mass distributions and estimates for overall nebular masses of hydrogen, helium, carbon, nitrogen, oxygen and sulfur. As with the previous work, computations were carried out for photoionization models involving constant hydrogen density and also constant nuclear density. In addition, employing new flux measurements for [Ni ii]  λ 7378, along with combined photoionization models and analytic computations, a nickel abundance distribution was mapped and a nebular stable nickel mass estimate was derived.

  16. Correcting Spellings in Second Language Learners' Computer-Assisted Collaborative Writing

    ERIC Educational Resources Information Center

    Musk, Nigel

    2016-01-01

    The present study uses multimodal conversation analysis to examine how pupils studying English as a foreign language make spelling corrections in real time while doing collaborative computer-assisted project work. Unlike most previous related investigations, this study focuses on the "process" rather than evaluating the final…

  17. Using Computer Simulations in Chemistry Problem Solving

    ERIC Educational Resources Information Center

    Avramiotis, Spyridon; Tsaparlis, Georgios

    2013-01-01

    This study is concerned with the effects of computer simulations of two novel chemistry problems on the problem solving ability of students. A control-experimental group research design, equalized by paired groups (n_Exp = n_Ctrl = 78), was used. The students had no previous experience of chemical practical work. Student…

  18. Activity Schedules, Computer Technology, and Teaching Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Stromer, Robert; Kimball, Jonathan W.; Kinney, Elisabeth M.; Taylor, Bridget A.

    2006-01-01

    A review of selected literature suggests that integrating multimedia computer supports with activity schedules can be an effective way to teach students to manage their work, play, and skill-building activities independently. Activity schedules originally were a means of promoting independent execution of previously learned responses by using…

  19. An automated method to find reaction mechanisms and solve the kinetics in organometallic catalysis.

    PubMed

    Varela, J A; Vázquez, S A; Martínez-Núñez, E

    2017-05-01

    A novel computational method is proposed in this work for use in discovering reaction mechanisms and solving the kinetics of transition metal-catalyzed reactions. The method does not rely on either chemical intuition or assumed a priori mechanisms, and it works in a fully automated fashion. Its core is a procedure, recently developed by one of the authors, that combines accelerated direct dynamics with an efficient geometry-based post-processing algorithm to find transition states (Martinez-Nunez, E., J. Comput. Chem. 2015, 36, 222-234). In the present work, several auxiliary tools have been added to deal with the specific features of transition metal catalytic reactions. As a test case, we chose the cobalt-catalyzed hydroformylation of ethylene because of its well-established mechanism, and the fact that it has already been used in previous automated computational studies. Besides the generally accepted mechanism of Heck and Breslow, several side reactions, such as hydrogenation of the alkene, emerged from our calculations. Additionally, the calculated rate law for the hydroformylation reaction agrees reasonably well with those obtained in previous experimental and theoretical studies.

  20. A Computer Program for Solving a Set of Conditional Maximum Likelihood Equations Arising in the Rasch Model for Questionnaires.

    ERIC Educational Resources Information Center

    Andersen, Erling B.

    A computer program for solving the conditional likelihood equations arising in the Rasch model for questionnaires is described. The estimation method and the computational problems involved are described in a previous research report by Andersen, but a summary of those results is given in two sections of this paper. A working example is also…

  1. The Cyborg Astrobiologist: testing a novelty detection algorithm on two mobile exploration systems at Rivas Vaciamadrid in Spain and at the Mars Desert Research Station in Utah

    NASA Astrophysics Data System (ADS)

    McGuire, P. C.; Gross, C.; Wendt, L.; Bonnici, A.; Souza-Egipsy, V.; Ormö, J.; Díaz-Martínez, E.; Foing, B. H.; Bose, R.; Walter, S.; Oesker, M.; Ontrup, J.; Haschke, R.; Ritter, H.

    2010-01-01

    In previous work, a platform was developed for testing computer-vision algorithms for robotic planetary exploration. This platform consisted of a digital video camera connected to a wearable computer for real-time processing of images at geological and astrobiological field sites. The real-time processing included image segmentation and the generation of interest points based upon uncommonness in the segmentation maps. Also in previous work, this platform for testing computer-vision algorithms has been ported to a more ergonomic alternative platform, consisting of a phone camera connected via the Global System for Mobile Communications (GSM) network to a remote-server computer. The wearable-computer platform has been tested at geological and astrobiological field sites in Spain (Rivas Vaciamadrid and Riba de Santiuste), and the phone camera has been tested at a geological field site in Malta. In this work, we (i) apply a Hopfield neural-network algorithm for novelty detection based upon colour, (ii) integrate a field-capable digital microscope on the wearable computer platform, (iii) test this novelty detection with the digital microscope at Rivas Vaciamadrid, (iv) develop a Bluetooth communication mode for the phone-camera platform, in order to allow access to a mobile processing computer at the field sites, and (v) test the novelty detection on the Bluetooth-enabled phone camera connected to a netbook computer at the Mars Desert Research Station in Utah. This systems engineering and field testing have together allowed us to develop a real-time computer-vision system that is capable, for example, of identifying lichens as novel within a series of images acquired in semi-arid desert environments. We acquired sequences of images of geologic outcrops in Utah and Spain consisting of various rock types and colours to test this algorithm. The algorithm robustly recognized previously observed units by their colour, while requiring only a single image or a few images to learn colours as familiar, demonstrating its fast learning capability.
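
    The record describes a Hopfield neural network for colour-based novelty detection. As a rough stand-in for that idea (deliberately not the authors' algorithm), the sketch below keeps a memory of familiar colour prototypes and flags pixels that are far from all of them, which is enough to pick out a synthetic "lichen" patch against a familiar background.

    ```python
    # Simplified colour-novelty sketch on synthetic data (the paper uses a
    # Hopfield network; here familiarity is just distance to stored colours).
    import numpy as np

    def update_memory(memory, image, tol=60.0):
        """Return a novelty mask for `image` and memory grown with its colours."""
        pixels = image.reshape(-1, 3).astype(float)
        if memory.size == 0:
            return np.ones(len(pixels), bool), pixels[::50]   # subsample prototypes
        d = np.linalg.norm(pixels[:, None, :] - memory[None, :, :], axis=2)
        novel = d.min(axis=1) > tol                           # far from all memories
        return novel, np.vstack([memory, pixels[novel][::50]])

    rng = np.random.default_rng(2)
    rock = rng.normal(120, 10, (32, 32, 3))                   # familiar ochre scene
    memory = np.empty((0, 3))
    _, memory = update_memory(memory, rock)                   # learn familiar colours

    lichen_patch = rock.copy()
    lichen_patch[10:14, 10:14] = [80, 200, 80]                # novel green patch
    novel, memory = update_memory(memory, lichen_patch)
    print(f"{novel.sum()} of {novel.size} pixels flagged as novel")
    ```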

  2. A new graph-based method for pairwise global network alignment

    PubMed Central

    Klau, Gunnar W

    2009-01-01

    Background In addition to component-based comparative approaches, network alignments provide the means to study conserved network topology such as common pathways and more complex network motifs. Yet, unlike in classical sequence alignment, the comparison of networks becomes computationally more challenging, as most meaningful assumptions instantly lead to NP-hard problems. Most previous algorithmic work on network alignments is heuristic in nature. Results We introduce the graph-based maximum structural matching formulation for pairwise global network alignment. We relate the formulation to previous work and prove NP-hardness of the problem. Based on the new formulation we build upon recent results in computational structural biology and present a novel Lagrangian relaxation approach that, in combination with a branch-and-bound method, computes provably optimal network alignments. The Lagrangian algorithm alone is a powerful heuristic method, which produces solutions that are often near-optimal and – unlike those computed by pure heuristics – come with a quality guarantee. Conclusion Computational experiments on the alignment of protein-protein interaction networks and on the classification of metabolic subnetworks demonstrate that the new method is reasonably fast and has advantages over pure heuristics. Our software tool is freely available as part of the LISA library. PMID:19208162

  3. Implementing Molecular Dynamics on Hybrid High Performance Computers - Particle-Particle Particle-Mesh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, W Michael; Kohlmeyer, Axel; Plimpton, Steven J

    The use of accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. In this paper, we present a continuation of previous work implementing algorithms for using accelerators in the LAMMPS molecular dynamics software for distributed-memory parallel hybrid machines. In our previous work, we focused on acceleration for short-range models with an approach intended to harness the processing power of both the accelerator and (multi-core) CPUs. To augment the existing implementations, we present an efficient implementation of long-range electrostatic force calculation for molecular dynamics. Specifically, we present an implementation of the particle-particle particle-mesh method based on the work by Harvey and De Fabritiis. We present benchmark results on the Keeneland InfiniBand GPU cluster. We provide a performance comparison of the same kernels compiled with both CUDA and OpenCL. We discuss limitations to parallel efficiency and future directions for improving performance on hybrid or heterogeneous computers.
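
    For orientation, the particle-mesh half of P3M begins by spreading point charges onto a grid before an FFT-based far-field solve. The sketch below shows cloud-in-cell (CIC) charge assignment on a periodic mesh; it is a plain-Python illustration with arbitrary sizes, not the LAMMPS/GPU implementation.

    ```python
    # Cloud-in-cell (CIC) charge assignment onto a periodic mesh, the first
    # step of a particle-mesh Ewald/P3M far-field solve.
    import numpy as np

    def cic_assign(pos, charge, n, box):
        """Spread charges at positions `pos` (N,3) onto an n^3 periodic mesh."""
        rho = np.zeros((n, n, n))
        h = box / n
        f = pos / h                        # positions in grid units
        i0 = np.floor(f).astype(int)
        w1 = f - i0                        # linear weight toward upper neighbour
        w0 = 1.0 - w1
        for p in range(len(pos)):
            for dx in (0, 1):
                for dy in (0, 1):
                    for dz in (0, 1):
                        w = (w0[p, 0], w1[p, 0])[dx] * \
                            (w0[p, 1], w1[p, 1])[dy] * \
                            (w0[p, 2], w1[p, 2])[dz]
                        ix, iy, iz = (i0[p] + (dx, dy, dz)) % n
                        rho[ix, iy, iz] += charge[p] * w
        return rho / h**3                  # charge density

    rng = np.random.default_rng(3)
    pos = rng.uniform(0, 8.0, (100, 3))
    q = rng.choice([-1.0, 1.0], 100)
    rho = cic_assign(pos, q, n=16, box=8.0)
    print(f"total charge on mesh: {rho.sum() * (8.0/16)**3:.6f} (should match {q.sum():.1f})")
    ```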

  4. HyperCard for Educators. An Introduction.

    ERIC Educational Resources Information Center

    Bull, Glen L.; Harris, Judi

    This guide is designed to provide a quick introduction to the basic elements of HyperCard for teachers who are familiar with other computer applications but may not have worked with hypermedia applications; previous familiarity with HyperCard or with Macintosh computers is not necessary. It is noted that HyperCard is a software construction…

  5. Shock compression response of cold-rolled Ni/Al multilayer composites

    DOE PAGES

    Specht, Paul E.; Weihs, Timothy P.; Thadhani, Naresh N.

    2017-01-06

    Uniaxial strain, plate-on-plate impact experiments were performed on cold-rolled Ni/Al multilayer composites and the resulting Hugoniot was determined through time-resolved measurements combined with impedance matching. The experimental Hugoniot agreed with that previously predicted by two dimensional (2D) meso-scale calculations. Additional 2D meso-scale simulations were performed using the same computational method as the prior study to reproduce the experimentally measured free surface velocities and stress profiles. Finally, these simulations accurately replicated the experimental profiles, providing additional validation for the previous computational work.

  6. Increasing processor utilization during parallel computation rundown

    NASA Technical Reports Server (NTRS)

    Jones, W. H.

    1986-01-01

    Some parallel processing environments provide for asynchronous execution and completion of general purpose parallel computations from a single computational phase. When all the computations from such a phase are complete, a new parallel computational phase is begun. Depending upon the granularity of the parallel computations to be performed, there may be a shortage of available work as a particular computational phase draws to a close (computational rundown). This can result in the waste of computing resources and the delay of the overall problem. In many practical instances, strict sequential ordering of phases of parallel computation is not totally required. In such cases, the beginning of one phase can be correctly computed before the end of a previous phase is completed. This allows additional work to be generated somewhat earlier to keep computing resources busy during each computational rundown. The conditions under which this can occur are identified and the frequency of occurrence of such overlapping in an actual parallel Navier-Stokes code is reported. A language construct is suggested and possible control strategies for the management of such computational phase overlapping are discussed.
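
    The overlap idea can be illustrated with a toy scheduler: instead of a barrier between phases ("wait for all of phase 1"), each phase-2 task is released the moment the phase-1 result it depends on exists, keeping workers busy during the rundown. This sketch covers only the single-dependency case, and all task names and sizes are invented.

    ```python
    # Toy illustration of overlapping computational phases during rundown.
    from concurrent.futures import ThreadPoolExecutor, as_completed
    import time, random

    def phase1_task(i):
        time.sleep(random.uniform(0.01, 0.2))    # uneven granularity -> rundown
        return i * i

    def phase2_task(value):
        return value + 1

    with ThreadPoolExecutor(max_workers=4) as pool:
        phase1 = [pool.submit(phase1_task, i) for i in range(8)]
        # No global barrier: each phase-2 task starts as soon as the phase-1
        # result it needs is available, while other phase-1 tasks still run.
        phase2 = [pool.submit(phase2_task, f.result()) for f in as_completed(phase1)]
        print("phase 2 results:", sorted(f.result() for f in phase2))
    ```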

  7. CYCLOPS-3 System Research.

    ERIC Educational Resources Information Center

    Marill, Thomas; And Others

    The aim of the CYCLOPS Project research is the development of techniques for allowing computers to perform visual scene analysis, pre-processing of visual imagery, and perceptual learning. Work on scene analysis and learning has previously been described. The present report deals with research on pre-processing and with further work on scene…

  8. Robust network design for multispecies conservation

    Treesearch

    Ronan Le Bras; Bistra Dilkina; Yexiang Xue; Carla P. Gomes; Kevin S. McKelvey; Michael K. Schwartz; Claire A. Montgomery

    2013-01-01

    Our work is motivated by an important network design application in computational sustainability concerning wildlife conservation. In the face of human development and climate change, it is important that conservation plans for protecting landscape connectivity exhibit a certain level of robustness. While previous work has focused on conservation strategies that result...

  9. Motion Planning in a Society of Intelligent Mobile Agents

    NASA Technical Reports Server (NTRS)

    Esterline, Albert C.; Shafto, Michael (Technical Monitor)

    2002-01-01

    The majority of the work on this grant involved formal modeling of human-computer integration. We conceptualize computer resources as a multiagent system so that these resources and human collaborators may be modeled uniformly. In previous work we had used modal logic for this uniform modeling, and we had developed a process-algebraic agent abstraction. In this work, we applied this abstraction (using CSP) in uniformly modeling agents and users, which allowed us to use tools for investigating CSP models. This work revealed the power of process-algebraic handshakes in modeling face-to-face conversation. We also investigated specifications of human-computer systems in the style of algebraic specification. This involved specifying the common knowledge required for coordination and process-algebraic patterns of communication actions intended to establish the common knowledge. We investigated the conditions for agents endowed with perception to gain common knowledge and implemented a prototype neural-network system that allows agents to detect when such conditions hold. The literature on multiagent systems conceptualizes communication actions as speech acts. We implemented a prototype system that infers the deontic effects (obligations, permissions, prohibitions) of speech acts and detects violations of these effects. A prototype distributed system was developed that allows users to collaborate in moving proxy agents; it was designed to exploit handshakes and common knowledge. Finally, in work carried over from a previous NASA ARC grant, about fifteen undergraduates developed and presented projects on multiagent motion planning.

  10. Work-related health disorders among Saudi computer users.

    PubMed

    Jomoah, Ibrahim M

    2014-01-01

    The present study was conducted to investigate the prevalence of musculoskeletal disorders and eye and vision complaints among the computer users of King Abdulaziz University (KAU), Saudi Arabian Airlines (SAUDIA), and Saudi Telecom Company (STC). Stratified random samples of the work stations and operators at each of the studied institutions were selected; the ergonomics of the work stations were assessed and the operators' health complaints were investigated. The average ergonomic score of the studied work stations at STC, KAU, and SAUDIA was 81.5%, 73.3%, and 70.3%, respectively. Most of the examined operators use computers daily for ≤ 7 hours, yet they had average incidences of general complaints (e.g., headache, body fatigue, and lack of concentration) and relatively high incidences of eye and vision complaints and musculoskeletal complaints. The incidences of the complaints have been found to increase with (a) a decrease in work station ergonomic score, (b) progress of age and duration of employment, (c) smoking, (d) use of computers, (e) lack of work satisfaction, and (f) history of operators' previous ailments. It has been recommended to improve the ergonomics of the work stations, set up training programs, and conduct preplacement and periodical examinations for operators.

  11. Work-Related Health Disorders among Saudi Computer Users

    PubMed Central

    Jomoah, Ibrahim M.

    2014-01-01

    The present study was conducted to investigate the prevalence of musculoskeletal disorders and eye and vision complaints among the computer users of King Abdulaziz University (KAU), Saudi Arabian Airlines (SAUDIA), and Saudi Telecom Company (STC). Stratified random samples of the work stations and operators at each of the studied institutions were selected; the ergonomics of the work stations were assessed and the operators' health complaints were investigated. The average ergonomic score of the studied work stations at STC, KAU, and SAUDIA was 81.5%, 73.3%, and 70.3%, respectively. Most of the examined operators use computers daily for ≤ 7 hours, yet they had average incidences of general complaints (e.g., headache, body fatigue, and lack of concentration) and relatively high incidences of eye and vision complaints and musculoskeletal complaints. The incidences of the complaints have been found to increase with (a) a decrease in work station ergonomic score, (b) progress of age and duration of employment, (c) smoking, (d) use of computers, (e) lack of work satisfaction, and (f) history of operators' previous ailments. It has been recommended to improve the ergonomics of the work stations, set up training programs, and conduct preplacement and periodical examinations for operators. PMID:25383379

  12. Using Palm Technology in Participatory Simulations of Complex Systems: A New Take on Ubiquitous and Accessible Mobile Computing

    ERIC Educational Resources Information Center

    Klopfer, Eric; Yoon, Susan; Perry, Judy

    2005-01-01

    This paper reports on teachers' perceptions of the educational affordances of a handheld application called Participatory Simulations. It presents evidence from five cases representing each of the populations who work with these computational tools. Evidence across multiple data sources yield similar results to previous research evaluations of…

  13. Subjective randomness as statistical inference.

    PubMed

    Griffiths, Thomas L; Daniels, Dylan; Austerweil, Joseph L; Tenenbaum, Joshua B

    2018-06-01

    Some events seem more random than others. For example, when tossing a coin, a sequence of eight heads in a row does not seem very random. Where do these intuitions about randomness come from? We argue that subjective randomness can be understood as the result of a statistical inference assessing the evidence that an event provides for having been produced by a random generating process. We show how this account provides a link to previous work relating randomness to algorithmic complexity, in which random events are those that cannot be described by short computer programs. Algorithmic complexity is both incomputable and too general to capture the regularities that people can recognize, but viewing randomness as statistical inference provides two paths to addressing these problems: considering regularities generated by simpler computing machines, and restricting the set of probability distributions that characterize regularity. Building on previous work exploring these different routes to a more restricted notion of randomness, we define strong quantitative models of human randomness judgments that apply not just to binary sequences - which have been the focus of much of the previous work on subjective randomness - but also to binary matrices and spatial clustering. Copyright © 2018 Elsevier Inc. All rights reserved.
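
    The inference view can be made concrete with a toy scoring function: randomness as the log-likelihood ratio between a fair-coin model and one simple "regular" alternative, here a first-order Markov chain that favours repetition. The paper's regularity models are richer; this is only a minimal sketch of the comparison.

    ```python
    # Score a binary sequence by log P(x | random) - log P(x | regular),
    # where "regular" is a repetition-favouring first-order Markov chain.
    import numpy as np

    def randomness_score(x, p_repeat=0.8):
        x = np.asarray(x)
        log_p_random = len(x) * np.log(0.5)              # fair coin
        repeats = (x[1:] == x[:-1])
        log_p_regular = np.log(0.5) + np.sum(
            np.where(repeats, np.log(p_repeat), np.log(1 - p_repeat)))
        return log_p_random - log_p_regular              # higher = "more random"

    print(randomness_score([1, 1, 1, 1, 1, 1, 1, 1]))    # eight heads: low score
    print(randomness_score([1, 0, 0, 1, 0, 1, 1, 0]))    # mixed sequence: higher
    ```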

  14. Semiannual Report, April 1, 1989 through September 30, 1989 (Institute for Computer Applications in Science and Engineering)

    DTIC Science & Technology

    1990-02-01

    ...noise. Tobias B. Orloff: Work began on developing a high-quality rendering algorithm based on the radiosity method. The algorithm is similar to previous progressive radiosity algorithms except for the following improvements: 1. At each iteration, vertex radiosities are computed using a modified scan-line approach, thus eliminating the quadratic cost associated with a ray-tracing computation of vertex radiosities. 2. At each iteration the scene is...

  15. Studies on Vapor Adsorption Systems

    NASA Technical Reports Server (NTRS)

    Shamsundar, N.; Ramotowski, M.

    1998-01-01

    The project consisted of performing experiments on single and dual bed vapor adsorption systems, thermodynamic cycle optimization, and thermal modeling. The work was described in a technical paper that appeared in conference proceedings and a Master's thesis, which were previously submitted to NASA. The present report describes some additional thermal modeling work done subsequently, and includes listings of computer codes developed during the project. Recommendations for future work are provided.

  16. Shock compression response of cold-rolled Ni/Al multilayer composites

    NASA Astrophysics Data System (ADS)

    Specht, Paul E.; Weihs, Timothy P.; Thadhani, Naresh N.

    2017-01-01

    Uniaxial strain, plate-on-plate impact experiments were performed on cold-rolled Ni/Al multilayer composites and the resulting Hugoniot was determined through time-resolved measurements combined with impedance matching. The experimental Hugoniot agreed with that previously predicted by two dimensional (2D) meso-scale calculations [Specht et al., J. Appl. Phys. 111, 073527 (2012)]. Additional 2D meso-scale simulations were performed using the same computational method as the prior study to reproduce the experimentally measured free surface velocities and stress profiles. These simulations accurately replicated the experimental profiles, providing additional validation for the previous computational work.

  17. Changing computing paradigms towards power efficiency

    PubMed Central

    Klavík, Pavel; Malossi, A. Cristiano I.; Bekas, Costas; Curioni, Alessandro

    2014-01-01

    Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data-centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light on the power/energy profile of important applications. PMID:24842033
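
    One standard instance of combining low- and high-precision arithmetic is mixed-precision iterative refinement for linear systems: factorize once in float32 (cheap and power-efficient), then recover float64 accuracy through inexpensive residual corrections. The sketch below is illustrative and is not the authors' tooling.

    ```python
    # Mixed-precision iterative refinement for Ax = b.
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    rng = np.random.default_rng(4)
    A = rng.standard_normal((500, 500))
    b = rng.standard_normal(500)

    lu = lu_factor(A.astype(np.float32))             # low-precision factorization
    x = lu_solve(lu, b.astype(np.float32)).astype(np.float64)
    for _ in range(5):                               # high-precision refinement
        r = b - A @ x                                # residual in float64
        x += lu_solve(lu, r.astype(np.float32))      # cheap float32 correction
    print("residual norm:", np.linalg.norm(b - A @ x))
    ```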

  18. Sinking bubbles in stout beers

    NASA Astrophysics Data System (ADS)

    Lee, W. T.; Kaar, S.; O'Brien, S. B. G.

    2018-04-01

    A surprising phenomenon witnessed by many is the sinking bubbles seen in a settling pint of stout beer. Bubbles are less dense than the surrounding fluid so how does this happen? Previous work has shown that the explanation lies in a circulation of fluid promoted by the tilted sides of the glass. However, this work has relied heavily on computational fluid dynamics (CFD) simulations. Here, we show that the phenomenon of sinking bubbles can be predicted using a simple analytic model. To make the model analytically tractable, we work in the limit of small bubbles and consider a simplified geometry. The model confirms both the existence of sinking bubbles and the previously proposed mechanism.

  19. Nonlinear calculations of the time evolution of black hole accretion disks

    NASA Technical Reports Server (NTRS)

    Luo, C.

    1994-01-01

    Based on previous work on black hole accretion disks, I continue to explore the disk dynamics using the finite difference method to solve the highly nonlinear problem of time-dependent alpha disk equations. Here a radially zoned model is used to develop a computational scheme in order to accommodate functional dependence of the viscosity parameter alpha on the disk scale height and/or surface density. This work builds on the author's previous work on the steady disk structure and the linear analysis of disk dynamics, with the aim of application to x-ray emissions from black hole candidates (i.e., multiple-state spectra, instabilities, QPOs, etc.).

  20. Hybrid transport and diffusion modeling using electron thermal transport Monte Carlo SNB in DRACO

    NASA Astrophysics Data System (ADS)

    Chenhall, Jeffrey; Moses, Gregory

    2017-10-01

    The iSNB (implicit Schurtz Nicolai Busquet) multigroup diffusion electron thermal transport method is adapted into an Electron Thermal Transport Monte Carlo (ETTMC) transport method to better model angular and long mean free path non-local effects. Previously, the ETTMC model had been implemented in the 2D DRACO multiphysics code and found to produce consistent results with the iSNB method. Current work is focused on a hybridization of the computationally slower but higher fidelity ETTMC transport method with the computationally faster iSNB diffusion method in order to maximize computational efficiency. Furthermore, effects on the energy distribution of the heat flux divergence are studied. Work to date on the hybrid method will be presented. This work was supported by Sandia National Laboratories and the Univ. of Rochester Laboratory for Laser Energetics.

  1. STS-42 Commander Grabe works with MWPE at IML-1 Rack 8 aboard OV-103

    NASA Technical Reports Server (NTRS)

    1992-01-01

    STS-42 Commander Ronald J. Grabe works with the Mental Workload and Performance Evaluation Experiment (MWPE) (portable laptop computer, keyboard cursor keys, a two-axis joystick, and a track ball) at Rack 8 in the International Microgravity Laboratory 1 (IML-1) module. The test was designed as a result of difficulty experienced by crewmembers working at a computer station on a previous Space Shuttle mission. The problem was due to the workstation's design being based on Earth-bound conditions with the operator in a typical one-G standing position. For STS-42, the workstation was redesigned to evaluate the effects of microgravity on the ability of crewmembers to interact with a computer workstation. Information gained from this experiment will be used to design workstations for future Spacelab missions and Space Station Freedom (SSF).

  2. The applications of computers in biological research

    NASA Technical Reports Server (NTRS)

    Wei, Jennifer

    1988-01-01

    Research in many fields could not be done without computers. There is often a great deal of technical data, even in the biological fields, that need to be analyzed. These data, unfortunately, previously absorbed much of every researcher's time. Now, due to the steady increase in computer technology, biological researchers are able to make incredible advances in their work without the added worries of tedious and difficult tasks such as the many mathematical calculations involved in today's research and health care.

  3. Supercritical wing sections 2, volume 108

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Garabedian, P.; Korn, D.; Jameson, A.; Beckmann, M. (Editor); Kuenzi, H. P. (Editor)

    1975-01-01

    A mathematical theory for the design and analysis of supercritical wing sections was previously presented. Examples and computer programs showing how this method works were included. The work on transonics is presented in a more definitive form. For design, a better model of the trailing edge is introduced which should eliminate a loss of fifteen or twenty percent in lift experienced with previous heavily aft loaded models, which is attributed to boundary layer separation. How drag creep can be reduced at off-design conditions is indicated. A rotated finite difference scheme is presented that enables the application of Murman's method of analysis in more or less arbitrary curvilinear coordinate systems. This allows the use of supersonic as well as subsonic free stream Mach numbers and to capture shock waves as far back on an airfoil as desired. Moreover, it leads to an effective three dimensional program for the computation of transonic flow past an oblique wing. In the case of two dimensional flow, the method is extended to take into account the displacement thickness computed by a semi-empirical turbulent boundary layer correction.

  4. Spike-Timing Dependent Plasticity in Unipolar Silicon Oxide RRAM Devices

    PubMed Central

    Zarudnyi, Konstantin; Mehonic, Adnan; Montesi, Luca; Buckwell, Mark; Hudziak, Stephen; Kenyon, Anthony J.

    2018-01-01

    Resistance switching, or Resistive RAM (RRAM) devices show considerable potential for application in hardware spiking neural networks (neuro-inspired computing) by mimicking some of the behavior of biological synapses, and hence enabling non-von Neumann computer architectures. Spike-timing dependent plasticity (STDP) is one such behavior, and one example of several classes of plasticity that are being examined with the aim of finding suitable algorithms for application in many computing tasks such as coincidence detection, classification and image recognition. In previous work we have demonstrated that the neuromorphic capabilities of silicon-rich silicon oxide (SiOx) resistance switching devices extend beyond plasticity to include thresholding, spiking, and integration. We previously demonstrated such behaviors in devices operated in the unipolar mode, opening up the question of whether we could add plasticity to the list of features exhibited by our devices. Here we demonstrate clear STDP in unipolar devices. Significantly, we show that the response of our devices is broadly similar to that of biological synapses. This work further reinforces the potential of simple two-terminal RRAM devices to mimic neuronal functionality in hardware spiking neural networks. PMID:29472837

  5. Semiautomated skeletonization of the pulmonary arterial tree in micro-CT images

    NASA Astrophysics Data System (ADS)

    Hanger, Christopher C.; Haworth, Steven T.; Molthen, Robert C.; Dawson, Christopher A.

    2001-05-01

    We present a simple and robust approach that utilizes planar images at different angular rotations combined with unfiltered back-projection to locate the central axes of the pulmonary arterial tree. Three-dimensional points are selected interactively by the user. The computer calculates a sub-volume unfiltered back-projection orthogonal to the vector connecting the two points and centered on the first point. Because more x-rays are absorbed at the thickest portion of the vessel, the darkest pixel in the unfiltered back-projection is assumed to be the center of the vessel. The computer replaces this point with the newly calculated point. A second back-projection is calculated around the original point, orthogonal to a vector connecting the newly calculated first point and the user-determined second point. The darkest pixel within this reconstruction is determined, and the computer replaces the second point with the XYZ coordinates of that darkest pixel. Following a vector based on a moving average of previously determined 3-dimensional points along the vessel's axis, the computer continues this skeletonization process until stopped by the user. The computer estimates the vessel diameter along the set of previously determined points using a method similar to the full-width half-max algorithm. On all subsequent vessels, the process works the same way, except that at each point, distances between the current point and all previously determined points along different vessels are computed. If a distance is less than the previously estimated diameter, the vessels are assumed to branch. This user/computer interaction continues until the vascular tree has been skeletonized.
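
    A 2D toy of the tracking loop just described: step along the vessel, recentre on the darkest pixel in a small slab orthogonal to the current direction, and smooth the direction with a moving average of past points. The real method works in 3D with unfiltered back-projections from angular images, which this sketch replaces with a synthetic 2D image.

    ```python
    # Darkest-pixel vessel tracking on a synthetic 2D image.
    import numpy as np

    # Dark, sloped "vessel" with a parabolic intensity profile, darkest along
    # its centreline r = 20 + 0.5*c, on a bright background.
    r, c = np.arange(100)[:, None], np.arange(100)[None, :]
    img = np.clip((r - (20 + 0.5 * c))**2 / 9.0, 0, 1)

    point = np.array([21.0, 2.0])                       # user-supplied seed point
    direction = np.array([0.5, 1.0]); direction /= np.linalg.norm(direction)
    path = [point.copy()]
    for _ in range(60):
        point = point + 1.5 * direction                 # step along the vessel axis
        normal = np.array([-direction[1], direction[0]])
        offsets = np.arange(-4, 5)                      # search slab across the vessel
        samples = np.clip(point + offsets[:, None] * normal, 0, 99).astype(int)
        darkest = np.argmin(img[samples[:, 0], samples[:, 1]])
        point = point + offsets[darkest] * normal       # recentre on the darkest pixel
        path.append(point.copy())
        if len(path) >= 4:                              # moving-average direction
            direction = path[-1] - path[-4]
            direction /= np.linalg.norm(direction)
    print(f"tracked {len(path)} centreline points, ending near {np.round(path[-1], 1)}")
    ```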

  6. Changing computing paradigms towards power efficiency.

    PubMed

    Klavík, Pavel; Malossi, A Cristiano I; Bekas, Costas; Curioni, Alessandro

    2014-06-28

    Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data-centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light on the power/energy profile of important applications. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  7. Multialternative drift-diffusion model predicts the relationship between visual fixations and choice in value-based decisions.

    PubMed

    Krajbich, Ian; Rangel, Antonio

    2011-08-16

    How do we make decisions when confronted with several alternatives (e.g., on a supermarket shelf)? Previous work has shown that accumulator models, such as the drift-diffusion model, can provide accurate descriptions of the psychometric data for binary value-based choices, and that the choice process is guided by visual attention. However, the computational processes used to make choices in more complicated situations involving three or more options are unknown. We propose a model of trinary value-based choice that generalizes what is known about binary choice, and test it using an eye-tracking experiment. We find that the model provides a quantitatively accurate description of the relationship between choice, reaction time, and visual fixation data using the same parameters that were estimated in previous work on binary choice. Our findings suggest that the brain uses similar computational processes to make binary and trinary choices.
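
    The proposed model can be sketched as an attention-weighted race between accumulators: the fixated option accumulates evidence at full drift, unattended options at a discounted rate, and a choice is made when the best accumulator leads the next best by a fixed barrier. The fixation process, parameter values, and stopping rule below are simplified placeholders, not the authors' fitted model.

    ```python
    # Toy trinary attentional drift-diffusion simulation.
    import numpy as np

    rng = np.random.default_rng(5)

    def simulate_trial(values, d=0.002, theta=0.3, sigma=0.02, barrier=1.0, fix_len=300):
        """Return (choice, reaction time in steps) for one trinary trial."""
        E = np.zeros(3)                            # evidence accumulators
        t = 0
        while True:
            look = rng.integers(3)                 # simplistic random fixations
            for _ in range(fix_len):
                atten = np.full(3, theta)          # unattended items discounted...
                atten[look] = 1.0                  # ...fixated item at full drift
                E += d * atten * values + rng.normal(0, sigma, 3)
                t += 1
                top2 = np.sort(E)[-2:]             # decide when leader's margin is big
                if top2[1] - top2[0] >= barrier:
                    return int(np.argmax(E)), t

    choices = np.array([simulate_trial(np.array([3.0, 2.0, 1.0]))[0] for _ in range(200)])
    print("choice frequencies:", np.bincount(choices, minlength=3) / len(choices))
    ```

    With these placeholder parameters, the highest-valued option is chosen most often but not always, the qualitative pattern the model is meant to capture.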

  8. An energy efficient and high speed architecture for convolution computing based on binary resistive random access memory

    NASA Astrophysics Data System (ADS)

    Liu, Chen; Han, Runze; Zhou, Zheng; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng

    2018-04-01

    In this work, we present a novel convolution computing architecture based on metal-oxide resistive random access memory (RRAM) to process image data stored in RRAM arrays. The proposed image-storage architecture offers better speed and device-consumption efficiency than the previous kernel-storage architecture. We further improve the architecture for high-accuracy and low-power computing by utilizing binary storage and a series resistor. For a 28 × 28 image and 10 kernels with a size of 3 × 3, compared with the previous kernel-storage approach, the newly proposed architecture shows excellent performance, including: 1) almost 100% accuracy within 20% LRS variation and 90% HRS variation; 2) a more than 67-fold speed boost; and 3) 71.4% energy saving.
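
    The mapping from convolution to a binary RRAM array can be pictured as a crossbar matrix-vector product: each kernel becomes a column of conductances in {G_on, G_off}, image patches are applied as word-line voltages, and every bit-line current computes one dot product. Sizes below match the record's example, but the conductance values and layout are illustrative assumptions.

    ```python
    # Convolution as a crossbar matrix-vector product with binary weights.
    import numpy as np

    rng = np.random.default_rng(6)
    img = rng.random((28, 28))
    kernels = rng.integers(0, 2, (10, 3, 3))        # 10 binary 3x3 kernels

    G_on, G_off = 1e-4, 1e-6                        # hypothetical LRS/HRS conductances (S)
    G = np.where(kernels.reshape(10, 9).T == 1, G_on, G_off)   # 9 x 10 conductance array

    # im2col: every 3x3 patch becomes a voltage vector on the word lines
    patches = np.lib.stride_tricks.sliding_window_view(img, (3, 3)).reshape(-1, 9)
    I = patches @ G                                 # bit-line currents, (26*26) x 10
    feature_maps = I.reshape(26, 26, 10) / G_on     # rescale back to ~dot products
    print("feature map stack:", feature_maps.shape)
    ```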

  9. Study of basic computer competence among public health nurses in Taiwan.

    PubMed

    Yang, Kuei-Feng; Yu, Shu; Lin, Ming-Sheng; Hsu, Chia-Ling

    2004-03-01

    Rapid advances in information technology and media have made distance learning on the Internet possible. This new model of learning allows greater efficiency and flexibility in knowledge acquisition. Since basic computer competence is a prerequisite for this new learning model, this study was conducted to examine the basic computer competence of public health nurses in Taiwan and explore factors influencing computer competence. A national cross-sectional randomized study was conducted with 329 public health nurses. A questionnaire was used to collect data and was delivered by mail. Results indicate that the basic computer competence of public health nurses in Taiwan still needs to be improved (mean = 57.57 ± 2.83; possible total score range, 26-130). Among the five most frequently used software programs, nurses were most knowledgeable about Word and least knowledgeable about PowerPoint. Stepwise multiple regression analysis revealed eight variables (weekly number of hours spent online at home, weekly amount of time spent online at work, weekly frequency of computer use at work, previous computer training, computer at workplace and Internet access, job position, education level, and age) that significantly influenced computer competence, together accounting for 39.0% of the variance. In conclusion, greater computer competence, broader educational programs regarding computer technology, and a greater emphasis on computers at work are necessary to increase the usefulness of distance learning via the Internet in Taiwan. Building a user-friendly environment is important in developing this new media model of learning for the future.

  10. Spontaneous Speech Events in Two Speech Databases of Human-Computer and Human-Human Dialogs in Spanish

    ERIC Educational Resources Information Center

    Rodriguez, Luis J.; Torres, M. Ines

    2006-01-01

    Previous works in English have revealed that disfluencies follow regular patterns and that incorporating them into the language model of a speech recognizer leads to lower perplexities and sometimes to a better performance. Although work on disfluency modeling has been applied outside the English community (e.g., in Japanese), as far as we know…

  11. The symbolic computation of series solutions to ordinary differential equations using trees (extended abstract)

    NASA Technical Reports Server (NTRS)

    Grossman, Robert

    1991-01-01

    Algorithms previously developed by the author give formulas which can be used for the efficient symbolic computation of series expansions of solutions to nonlinear systems of ordinary differential equations. As a by-product of this analysis, formulas are derived which relate trees to the coefficients of the series expansions, similar to the work of Leroux and Viennot, and of Lamnabhi, Leroux and Viennot.
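
    A worked example of the kind of series computation involved, with the tree bookkeeping stripped away: for y' = y^2, y(0) = 1, the Taylor coefficients follow from the Cauchy-product recursion (n+1) c_{n+1} = sum_{k=0}^{n} c_k c_{n-k}, whose terms the tree formalism indexes combinatorially. Since the exact solution is y = 1/(1-t), every coefficient should come out as 1.

    ```python
    # Exact Taylor coefficients of the solution of y' = y^2, y(0) = 1,
    # via the Cauchy-product recursion for the right-hand side y^2.
    from fractions import Fraction

    c = [Fraction(1)]                        # c_0 = y(0)
    for n in range(8):
        c.append(sum(c[k] * c[n - k] for k in range(n + 1)) / (n + 1))
    print([str(ci) for ci in c])             # ['1', '1', '1', ...]
    ```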

  12. Selective updating of working memory content modulates meso-cortico-striatal activity.

    PubMed

    Murty, Vishnu P; Sambataro, Fabio; Radulescu, Eugenia; Altamura, Mario; Iudicello, Jennifer; Zoltick, Bradley; Weinberger, Daniel R; Goldberg, Terry E; Mattay, Venkata S

    2011-08-01

    Accumulating evidence from non-human primates and computational modeling suggests that dopaminergic signals arising from the midbrain (substantia nigra/ventral tegmental area) mediate striatal gating of the prefrontal cortex during the selective updating of working memory. Using event-related functional magnetic resonance imaging, we explored the neural mechanisms underlying the selective updating of information stored in working memory. Participants were scanned during a novel working memory task that parses the neurophysiology underlying working memory maintenance, overwriting, and selective updating. Analyses revealed a functionally coupled network consisting of a midbrain region encompassing the substantia nigra/ventral tegmental area, caudate, and dorsolateral prefrontal cortex that was selectively engaged during working memory updating compared to the overwriting and maintenance of working memory content. Further analysis revealed differential midbrain-dorsolateral prefrontal interactions during selective updating between low-performing and high-performing individuals. These findings highlight the role of this meso-cortico-striatal circuitry during the selective updating of working memory in humans, which complements previous research in behavioral neuroscience and computational modeling. Published by Elsevier Inc.

  13. From Three-Photon Greenberger-Horne-Zeilinger States to Ballistic Universal Quantum Computation.

    PubMed

    Gimeno-Segovia, Mercedes; Shadbolt, Pete; Browne, Dan E; Rudolph, Terry

    2015-07-10

    Single photons, manipulated using integrated linear optics, constitute a promising platform for universal quantum computation. A series of increasingly efficient proposals have shown linear-optical quantum computing to be formally scalable. However, existing schemes typically require extensive adaptive switching, which is experimentally challenging and noisy, thousands of photon sources per renormalized qubit, and/or large quantum memories for repeat-until-success strategies. Our work overcomes all these problems. We present a scheme to construct a cluster state universal for quantum computation, which uses no adaptive switching, no large memories, and which is at least an order of magnitude more resource efficient than previous passive schemes. Unlike previous proposals, it is constructed entirely from loss-detecting gates and offers a robustness to photon loss. Even without the use of an active loss-tolerant encoding, our scheme naturally tolerates a total loss rate ∼1.6% in the photons detected in the gates. This scheme uses only 3 Greenberger-Horne-Zeilinger states as a resource, together with a passive linear-optical network. We fully describe and model the iterative process of cluster generation, including photon loss and gate failure. This demonstrates that building a linear-optical quantum computer need be less challenging than previously thought.

  14. Hypercube matrix computation task

    NASA Technical Reports Server (NTRS)

    Calalo, Ruel H.; Imbriale, William A.; Jacobi, Nathan; Liewer, Paulett C.; Lockhart, Thomas G.; Lyzenga, Gregory A.; Lyons, James R.; Manshadi, Farzin; Patterson, Jean E.

    1988-01-01

    A major objective of the Hypercube Matrix Computation effort at the Jet Propulsion Laboratory (JPL) is to investigate the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Three scattering analysis codes are being implemented and assessed on a JPL/California Institute of Technology (Caltech) Mark 3 Hypercube. The codes, which utilize different underlying algorithms, give a means of evaluating the general applicability of this parallel architecture. The three analysis codes being implemented are a frequency domain method of moments code, a time domain finite difference code, and a frequency domain finite elements code. These analysis capabilities are being integrated into an electromagnetics interactive analysis workstation which can serve as a design tool for the construction of antennas and other radiating or scattering structures. The first two years of work on the Hypercube Matrix Computation effort are summarized. They include both new developments and results as well as work previously reported in the Hypercube Matrix Computation Task: Final Report for 1986 to 1987 (JPL Publication 87-18).

  15. Impact of computer use on children's vision.

    PubMed

    Kozeis, N

    2009-10-01

    Today, millions of children use computers on a daily basis. Extensive viewing of the computer screen can lead to eye discomfort, fatigue, blurred vision and headaches, dry eyes and other symptoms of eyestrain. These symptoms may be caused by poor lighting, glare, an improper work station set-up, vision problems of which the person was not previously aware, or a combination of these factors. Children can experience many of the same symptoms related to computer use as adults. However, some unique aspects of how children use computers may make them more susceptible than adults to the development of these problems. In this study, the most common eye symptoms related to computer use in childhood, the possible causes and ways to avoid them are reviewed.

  16. Continued Development of Expert System Tools for NPSS Engine Diagnostics

    NASA Technical Reports Server (NTRS)

    Lewandowski, Henry

    1996-01-01

    The objectives of this grant were to work with previously developed NPSS (Numerical Propulsion System Simulation) tools and enhance their functionality; explore similar AI systems; and work with the High Performance Computing Communication (HPCC) K-12 program. Activities for this reporting period are briefly summarized and a paper addressing the implementation, monitoring and zooming in a distributed jet engine simulation is included as an attachment.

  17. Computer modeling of batteries from nonlinear circuit elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waaben, S.; Dyer, C.K.; Federico, J.

    1985-06-01

    Circuit analogs for a single battery cell have previously been composed of resistors, capacitors, and inductors. This work introduces a nonlinear circuit model for cell behavior. The circuit is configured around the PIN junction diode, whose charge-storage behavior has features similar to those of electrochemical cells. A user-friendly integrated circuit simulation computer program has reproduced a variety of complex cell responses including electrical isolation effects causing capacity loss, as well as potentiodynamic peaks and discharge phenomena hitherto thought to be thermodynamic in origin. However, in this work, they are shown to be simply due to spatial distribution of stored charge within a practical electrode.

  18. The control of a manipulator by a computer model of the cerebellum.

    NASA Technical Reports Server (NTRS)

    Albus, J. S.

    1973-01-01

    Extension of previous work by Albus (1971, 1972) on the theory of cerebellar function to an application of a computer model of the cerebellum to manipulator control. Following a discussion of the cerebellar function and of a perceptron analogy of the cerebellum, particularly in regard to learning, an electromechanical model of the cerebellum is considered in the form of an IBM 1800 computer connected to a Rancho Los Amigos arm with seven degrees of freedom. It is shown that the computer memory makes it possible to train the arm on some representative sample of the universe of possible states and to achieve satisfactory performance.

  19. Noise Radiation From a Leading-Edge Slat

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Choudhari, Meelan M.

    2009-01-01

    This paper extends our previous computations of unsteady flow within the slat cove region of a multi-element high-lift airfoil configuration, which showed that both statistical and structural aspects of the experimentally observed unsteady flow behavior can be captured via 3D simulations over a computational domain of narrow spanwise extent. Although such narrow domain simulation can account for the spanwise decorrelation of the slat cove fluctuations, the resulting database cannot be applied towards acoustic predictions of the slat without invoking additional approximations to synthesize the fluctuation field over the rest of the span. This deficiency is partially alleviated in the present work by increasing the spanwise extent of the computational domain from 37.3% of the slat chord to nearly 226% (i.e., 15% of the model span). The simulation database is used to verify consistency with previous computational results and, then, to develop predictions of the far-field noise radiation in conjunction with a frequency-domain Ffowcs-Williams Hawkings solver.

  20. The Â-genus as a Projective Volume form on the Derived Loop Space

    NASA Astrophysics Data System (ADS)

    Grady, Ryan

    2018-06-01

    In the present work, we extend our previous work with Gwilliam by realizing \hat{A}(X) as the projective volume form associated to the BV operator in our quantization of a one-dimensional sigma model. We also discuss the associated integration/expectation map. We work in the formalism of L∞ spaces, objects of which are computationally convenient presentations for derived stacks. Both smooth and complex geometry embed into L∞ spaces and we specialize our results in both of these cases.

  1. The Top Companies You Want to Work for Most and Why.

    ERIC Educational Resources Information Center

    Freedland, Marjorie

    1988-01-01

    Summarizes the results of the 1987 National Engineering Student Employer Preference Survey and compares them with those reported by three previous biennial surveys. Lists the top 25 employer choices in electrical, mechanical, computer science, industrial, chemical, civil and astro/aeronautical engineering. (TW)

  2. Physiology driven adaptivity for the numerical solution of the bidomain equations.

    PubMed

    Whiteley, Jonathan P

    2007-09-01

    Previous work [Whiteley, J. P. IEEE Trans. Biomed. Eng. 53:2139-2147, 2006] derived a stable, semi-implicit numerical scheme for solving the bidomain equations. This scheme allows the timestep used when solving the bidomain equations numerically to be chosen by accuracy considerations rather than stability considerations. In this study we modify this scheme to allow an adaptive numerical solution in both time and space. The spatial mesh size is determined by the gradient of the transmembrane and extracellular potentials, while the timestep is determined by the values of: (i) the fast sodium current; and (ii) the calcium release current from the junctional sarcoplasmic reticulum to the myoplasm. For the two-dimensional simulations presented here, combining the numerical algorithm in the paper cited above with the adaptive algorithm presented here leads to an increase in computational efficiency by a factor of around 250 over previous work, together with significantly less computational memory being required. The speedup for three-dimensional simulations is likely to be more impressive.
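
    A minimal Python sketch of such a current-gated timestep rule (not the paper's implementation; the current names and threshold values are hypothetical):

    ```python
    import numpy as np

    def choose_timestep(i_na, j_rel, dt_fine=0.01, dt_coarse=0.5,
                        i_na_thresh=10.0, j_rel_thresh=1.0):
        """Adaptive timestep (ms): refine whenever the fast sodium current
        or the SR calcium-release flux is large anywhere on the mesh,
        otherwise take the coarse step. Thresholds are illustrative only."""
        fast = (np.max(np.abs(i_na)) > i_na_thresh or
                np.max(np.abs(j_rel)) > j_rel_thresh)
        return dt_fine if fast else dt_coarse
    ```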

  3. A revised 5 minute gravimetric geoid and associated errors for the North Atlantic calibration area

    NASA Technical Reports Server (NTRS)

    Mader, G. L.

    1979-01-01

    A revised 5 minute gravimetric geoid and its errors were computed for the North Atlantic calibration area using GEM-8 potential coefficients and the latest gravity data available from the Defense Mapping Agency. This effort was prompted by a number of inconsistencies and small errors found in previous calculations of this geoid. The computational method and constants used are given in detail to serve as a reference for future work.

  4. Evaluation of a Computational Model of Situational Awareness

    NASA Technical Reports Server (NTRS)

    Burdick, Mark D.; Shively, R. Jay; Rutkewski, Michael (Technical Monitor)

    2000-01-01

    Although the use of the psychological construct of situational awareness (SA) assists researchers in creating a flight environment that is safer and more predictable, its true potential remains untapped until a valid means of predicting SA a priori becomes available. Previous work proposed a computational model of SA (CSA) that sought to fill that void. The current line of research is aimed at validating that model. The results show that the model accurately predicted SA in a piloted simulation.

  5. A Computational and Experimental Study of Resonators in Three Dimensions

    NASA Technical Reports Server (NTRS)

    Tam, C. K. W.; Ju, H.; Jones, Michael G.; Watson, Willie R.; Parrott, Tony L.

    2009-01-01

    In a previous work by the present authors, a computational and experimental investigation of the acoustic properties of two-dimensional slit resonators was carried out. The present paper reports the results of a study extending the previous work to three dimensions. This investigation has two basic objectives. The first is to validate the computed results from direct numerical simulations of the flow and acoustic fields of slit resonators in three dimensions by comparing with experimental measurements in a normal incidence impedance tube. The second objective is to study the flow physics of resonant liners responsible for sound wave dissipation. Extensive comparisons are provided between computed and measured acoustic liner properties with both discrete frequency and broadband sound sources. Good agreements are found over a wide range of frequencies and sound pressure levels. Direct numerical simulation confirms the previous finding in two dimensions that vortex shedding is the dominant dissipation mechanism at high sound pressure intensity. However, it is observed that the behavior of the shed vortices in three dimensions is quite different from those of two dimensions. In three dimensions, the shed vortices tend to evolve into ring (circular in plan form) vortices, even though the slit resonator opening from which the vortices are shed has an aspect ratio of 2.5. Under the excitation of discrete frequency sound, the shed vortices align themselves into two regularly spaced vortex trains moving away from the resonator opening in opposite directions. This is different from the chaotic shedding of vortices found in two-dimensional simulations. The effect of slit aspect ratio at a fixed porosity is briefly studied. For the range of liners considered in this investigation, it is found that the absorption coefficient of a liner increases when the open area of the single slit is subdivided into multiple, smaller slits.

  6. Improved Spectral Calculations for Discrete Schrödinger Operators

    NASA Astrophysics Data System (ADS)

    Puelz, Charles

    This work details an O(n²) algorithm for computing spectra of discrete Schrödinger operators with periodic potentials. Spectra of these objects enhance our understanding of fundamental aperiodic physical systems and contain rich theoretical structure of interest to the mathematical community. Previous work on the Harper model led to an O(n²) algorithm relying on properties not satisfied by other aperiodic operators. Physicists working with the Fibonacci Hamiltonian, a popular quasicrystal model, have instead used a problematic dynamical map approach or a sluggish O(n³) procedure for their calculations. The algorithm presented in this work, a blend of well-established eigenvalue/vector algorithms, provides researchers with a more robust computational tool of general utility. Application to the Fibonacci Hamiltonian in the sparsely studied intermediate coupling regime reveals structure in canonical coverings of the spectrum that will prove useful in motivating conjectures regarding band combinatorics and fractal dimensions.
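
    For illustration, the spectrum of a discrete Schrödinger operator with a Fibonacci potential can be approximated by diagonalizing a large finite tridiagonal section; this finite-section stand-in is not the O(n²) algorithm the thesis develops:

    ```python
    import numpy as np
    from scipy.linalg import eigh_tridiagonal

    def fibonacci_potential(n, coupling):
        """Fibonacci potential v_k = coupling * chi_[1-alpha,1)((k*alpha) mod 1),
        with alpha the inverse golden mean -- a standard quasicrystal model."""
        alpha = (np.sqrt(5.0) - 1.0) / 2.0
        k = np.arange(1, n + 1)
        return coupling * ((k * alpha) % 1.0 >= 1.0 - alpha).astype(float)

    # H is tridiagonal: unit hopping off the diagonal, potential on the diagonal.
    n = 987  # a Fibonacci number, giving a periodic approximant
    diag = fibonacci_potential(n, coupling=1.0)
    off = np.ones(n - 1)
    energies = eigh_tridiagonal(diag, off, eigvals_only=True)
    ```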

  7. The modeling and simulation of visuospatial working memory

    PubMed Central

    Liang, Lina; Zhang, Zhikang

    2010-01-01

    Camperi and Wang (Comput Neurosci 5:383–405, 1998) presented a network model for working memory that combines intrinsic cellular bistability with the recurrent network architecture of the neocortex, while Fall and Rinzel (Comput Neurosci 20:97–107, 2006) replaced this intrinsic bistability with a biological mechanism, the Ca2+ release subsystem. In this study, we aim to further expand the above work. We integrate the traditional firing-rate network with Ca2+ subsystem-induced bistability, amend the synaptic weights and suggest that Ca2+ concentration only increases the efficacy of synaptic input but has nothing to do with the external input for the transient cue. We found that our network model maintained persistent activity in response to a brief transient stimulus, like the previous two models, and that working memory performance was resistant to noise and distraction stimuli when the Ca2+ subsystem was tuned to be bistable. PMID:22132045

  8. Novel opportunities for computational biology and sociology in drug discovery☆

    PubMed Central

    Yao, Lixia; Evans, James A.; Rzhetsky, Andrey

    2013-01-01

    Current drug discovery is impossible without sophisticated modeling and computation. In this review we outline previous advances in computational biology and, by tracing the steps involved in pharmaceutical development, explore a range of novel, high-value opportunities for computational innovation in modeling the biological process of disease and the social process of drug discovery. These opportunities include text mining for new drug leads, modeling molecular pathways and predicting the efficacy of drug cocktails, analyzing genetic overlap between diseases and predicting alternative drug use. Computation can also be used to model research teams and innovative regions and to estimate the value of academy–industry links for scientific and human benefit. Attention to these opportunities could promise punctuated advance and will complement the well-established computational work on which drug discovery currently relies. PMID:20349528

  9. Reciprocity in computer-human interaction: source-based, norm-based, and affect-based explanations.

    PubMed

    Lee, Seungcheol Austin; Liang, Yuhua Jake

    2015-04-01

    Individuals often apply social rules when they interact with computers, and this is known as the Computers Are Social Actors (CASA) effect. Following previous work, one approach to understand the mechanism responsible for CASA is to utilize computer agents and have the agents attempt to gain human compliance (e.g., completing a pattern recognition task). The current study focuses on three key factors frequently cited to influence traditional notions of compliance: evaluations toward the source (competence and warmth), normative influence (reciprocity), and affective influence (mood). Structural equation modeling assessed the effects of these factors on human compliance with computer request. The final model shows that norm-based influence (reciprocity) increased the likelihood of compliance, while evaluations toward the computer agent did not significantly influence compliance.

  10. Singularity: Scientific containers for mobility of compute.

    PubMed

    Kurtzer, Gregory M; Sochat, Vanessa; Bauer, Michael W

    2017-01-01

    Here we present Singularity, software developed to bring containers and reproducibility to scientific computing. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms. Singularity is an open source initiative that harnesses the expertise of system and software engineers and researchers alike, and integrates seamlessly into common workflows for both of these groups. As its primary use case, Singularity brings mobility of computing to both users and HPC centers, providing a secure means to capture and distribute software and compute environments. This ability to create and deploy reproducible environments across these centers, a previously unmet need, makes Singularity a game changing development for computational science.

  11. Singularity: Scientific containers for mobility of compute

    PubMed Central

    Kurtzer, Gregory M.; Bauer, Michael W.

    2017-01-01

    Here we present Singularity, software developed to bring containers and reproducibility to scientific computing. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms. Singularity is an open source initiative that harnesses the expertise of system and software engineers and researchers alike, and integrates seamlessly into common workflows for both of these groups. As its primary use case, Singularity brings mobility of computing to both users and HPC centers, providing a secure means to capture and distribute software and compute environments. This ability to create and deploy reproducible environments across these centers, a previously unmet need, makes Singularity a game changing development for computational science. PMID:28494014

  12. The discovery of the causes of leprosy: A computational analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corruble, V.; Ganascia, J.G.

    1996-12-31

    The role played by inductive inference has been studied extensively in the field of Scientific Discovery. The work presented here tackles the problem of induction in medical research. The discovery of the causes of leprosy is analyzed and simulated using computational means. An inductive algorithm is proposed, which is successful in simulating some essential steps in the progress of the understanding of the disease. It also allows us to simulate the false reasoning of previous centuries through the introduction of some medical a priori notions inherited from archaic medicine. Corroborating previous research, this problem illustrates the importance of the social and cultural environment on the way inductive inference is performed in medicine.

  13. Free-Field Spatialized Aural Cues for Synthetic Environments

    DTIC Science & Technology

    1994-09-01

    Fragment from the report text: Other than electronic musicians and a few hobbyists, ... the Musical Instrument Digital Interface (MIDI) ... developed in 1983 and still has a long way to go in improving its capabilities, but the advantages are numerous. An entire musical score can be stored ..., whereas the same musical file on a computer in one of the various digital sound formats could easily occupy 90 megabytes of disk space.

  14. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2012-01-01

    Objective: To develop a software application utilizing general purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general purpose fashion is allowing for supercomputer-level results at individual workstations. As data sets grow, the methods to process them must grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques. Technical Methodology/Approach: Apply massively parallel algorithms and data structures to the specific analysis requirements presented when working with thermographic data sets.
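
    The data-parallel pattern described (the same small computation applied independently to every element) can be sketched as a per-pixel least-squares fit over a hypothetical thermographic stack; NumPy vectorization stands in here for a GPU kernel, and the array shapes are invented:

    ```python
    import numpy as np

    # Hypothetical thermographic stack: frames x height x width
    frames = np.random.rand(500, 256, 320).astype(np.float32)
    t = np.linspace(0.0, 5.0, frames.shape[0], dtype=np.float32)

    # Per-pixel least-squares cooling slope: the identical computation is
    # applied independently to every pixel, exactly the data-parallel
    # pattern that maps well onto a GPU (e.g., the arrays could be swapped
    # to CuPy with minimal changes).
    t_c = t - t.mean()
    slope = (np.tensordot(t_c, frames - frames.mean(axis=0), axes=(0, 0))
             / (t_c @ t_c))  # shape: (height, width)
    ```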

  15. Identification of Program Signatures from Cloud Computing System Telemetry Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, Nicole M.; Greaves, Mark T.; Smith, William P.

    Malicious cloud computing activity can take many forms, including running unauthorized programs in a virtual environment. Detection of these malicious activities while preserving the privacy of the user is an important research challenge. Prior work has shown the potential viability of using cloud service billing metrics as a mechanism for proxy identification of malicious programs. Previously this novel detection method has been evaluated in a synthetic and isolated computational environment. In this paper we demonstrate the ability of billing metrics to identify programs, in an active cloud computing environment, including multiple virtual machines running on the same hypervisor. The open source cloud computing platform OpenStack is used for private cloud management at Pacific Northwest National Laboratory. OpenStack provides a billing tool (Ceilometer) to collect system telemetry measurements. We identify four different programs running on four virtual machines under the same cloud user account. Programs were identified with up to 95% accuracy. This accuracy is dependent on the distinctiveness of telemetry measurements for the specific programs we tested. Future work will examine the scalability of this approach for a larger selection of programs to better understand the uniqueness needed to identify a program. Additionally, future work should address the separation of signatures when multiple programs are running on the same virtual machine.
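
    A hedged sketch of the kind of classification pipeline such a study implies; the random features below stand in for real Ceilometer telemetry, and the generic scikit-learn classifier is an assumption, not the authors' model:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Hypothetical feature matrix: one row per observation window, columns
    # are billing/telemetry metrics (CPU time, disk I/O, network bytes, ...).
    rng = np.random.default_rng(0)
    X = rng.random((400, 6))
    y = rng.integers(0, 4, size=400)  # labels for four candidate programs

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)  # random data scores near chance;
    print("mean accuracy:", scores.mean())     # real telemetry would separate
    ```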

  16. Application of Multi-Frequency Modulation (MFM) for High-Speed Data Communications to a Voice Frequency Channel

    DTIC Science & Technology

    1990-06-01

    The reader is cautioned that computer programs developed in this research may not have been exercised for all cases of interest. While every effort has been ... Previous applications of these encoding formats were on industry-standard computers (PC) over a 16-20 kHz channel. This report discusses the ...

  17. Performance of VPIC on Sequoia

    NASA Astrophysics Data System (ADS)

    Nystrom, William

    2014-10-01

    Sequoia is a major DOE computing resource that is characteristic of future resources in that it has many threads per compute node (64) and individual processor cores that are simpler and less powerful than cores on previous processors such as Intel's Sandy Bridge or AMD's Opteron. An effort is in progress to port VPIC to the Blue Gene Q architecture of Sequoia and evaluate its performance. Results of this work on both single-node performance of VPIC and multi-node scaling will be presented.

  18. Hierarchical Bayesian Models of Subtask Learning

    ERIC Educational Resources Information Center

    Anglim, Jeromy; Wynton, Sarah K. A.

    2015-01-01

    The current study used Bayesian hierarchical methods to challenge and extend previous work on subtask learning consistency. A general model of individual-level subtask learning was proposed focusing on power and exponential functions with constraints to test for inconsistency. To study subtask learning, we developed a novel computer-based booking…
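
    At the individual (non-hierarchical) level, the power and exponential learning functions being compared can be fit directly; a small SciPy sketch on synthetic reaction-time data, since the study's hierarchical Bayesian machinery is beyond a snippet:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def power_law(t, a, b, c):
        return c + a * t ** (-b)         # classic power-law learning curve

    def exponential(t, a, b, c):
        return c + a * np.exp(-b * t)    # exponential alternative

    # Synthetic per-trial reaction times that actually follow a power law.
    trials = np.arange(1, 51, dtype=float)
    rt = 2.0 + 3.0 * trials ** -0.6 + np.random.normal(0, 0.1, trials.size)

    p_pow, _ = curve_fit(power_law, trials, rt, p0=(3.0, 0.5, 2.0))
    p_exp, _ = curve_fit(exponential, trials, rt, p0=(3.0, 0.1, 2.0))
    ```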

  19. Problem Solving Under Time-Constraints.

    ERIC Educational Resources Information Center

    Richardson, Michael; Hunt, Earl

    A model of how automated and controlled processing can be mixed in computer simulations of problem solving is proposed. It is based on previous work by Hunt and Lansman (1983), who developed a model of problem solving that could reproduce the data obtained with several attention and performance paradigms, extending production-system notation to…

  20. Neuromorphic Computing for Temporal Scientific Data Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuman, Catherine D.; Potok, Thomas E.; Young, Steven

    In this work, we apply a spiking neural network model and an associated memristive neuromorphic implementation to an application in classifying temporal scientific data. We demonstrate that the spiking neural network model achieves comparable results to a previously reported convolutional neural network model, with significantly fewer neurons and synapses required.

  1. Processing Polarity: How the Ungrammatical Intrudes on the Grammatical

    ERIC Educational Resources Information Center

    Vasishth, Shravan; Brussow, Sven; Lewis, Richard L.; Drenhaus, Heiner

    2008-01-01

    A central question in online human sentence comprehension is, "How are linguistic relations established between different parts of a sentence?" Previous work has shown that this dependency resolution process can be computationally expensive, but the underlying reasons for this are still unclear. This article argues that dependency…

  2. Unsupervised MDP Value Selection for Automating ITS Capabilities

    ERIC Educational Resources Information Center

    Stamper, John; Barnes, Tiffany

    2009-01-01

    We seek to simplify the creation of intelligent tutors by using student data acquired from standard computer aided instruction (CAI) in conjunction with educational data mining methods to automatically generate adaptive hints. In our previous work, we have automatically generated hints for logic tutoring by constructing a Markov Decision Process…

  3. A method for estimating the probability of lightning causing a methane ignition in an underground mine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sacks, H.K.; Novak, T.

    2008-03-15

    During the past decade, several methane/air explosions in abandoned or sealed areas of underground coal mines have been attributed to lightning. Previously published work by the authors showed, through computer simulations, that currents from lightning could propagate down steel-cased boreholes and ignite explosive methane/air mixtures. The presented work expands on the model and describes a methodology based on IEEE Standard 1410-2004 to estimate the probability of an ignition. The methodology provides a means to better estimate the likelihood that an ignition could occur underground and, more importantly, allows the calculation of what-if scenarios to investigate the effectiveness of engineering controls to reduce the hazard. The computer software used for calculating fields and potentials is also verified by comparing computed results with an independently developed theoretical model of electromagnetic field propagation through a conductive medium.

  4. The acceptability of computer applications to group practices.

    PubMed

    Zimmerman, J; Gordon, R S; Tao, D K; Boxerman, S B

    1978-01-01

    Of the 72 identified group practices in a midwest urban environment, 39 were found to use computers. The practices had been influenced strongly by vendors in their selection of an automated system or service, and had usually spent less than a work-month analyzing their needs and reviewing alternate ways in which those needs could be met. Ninety-seven percent of the practices had some financial applications and 64% had administrative applications, but only 2.5% had medical applications. For half the practices at least 2 months elapsed from the time the automated applications were put into operation until they were considered to be integrated into the office routine. Advantages experienced by at least a third of the practices using computers were that the work was done faster, information was more readily available, and costs were reduced. The most common disadvantage was inflexibility. Most (89%) of the practices believed that automation was preferable to their previous manual system.

  5. Photochromic molecular implementations of universal computation.

    PubMed

    Chaplin, Jack C; Krasnogor, Natalio; Russell, Noah A

    2014-12-01

    Unconventional computing is an area of research in which novel materials and paradigms are utilised to implement computation. Previously we have demonstrated how registers, logic gates and logic circuits can be implemented, unconventionally, with a biocompatible molecular switch, NitroBIPS, embedded in a polymer matrix. NitroBIPS and related molecules have been shown elsewhere to be capable of modifying many biological processes in a manner that is dependent on its molecular form. Thus, one possible application of this type of unconventional computing is to embed computational processes into biological systems. Here we expand on our earlier proof-of-principle work and demonstrate that universal computation can be implemented using NitroBIPS. We have previously shown that spatially localised computational elements, including registers and logic gates, can be produced. We explain how parallel registers can be implemented, then demonstrate an application of parallel registers in the form of Turing machine tapes, and demonstrate both parallel registers and logic circuits in the form of elementary cellular automata. The Turing machines and elementary cellular automata utilise the same samples and same hardware to implement their registers, logic gates and logic circuits; and both represent examples of universal computing paradigms. This shows that homogenous photochromic computational devices can be dynamically repurposed without invasive reconfiguration. The result represents an important, necessary step towards demonstrating the general feasibility of interfacial computation embedded in biological systems or other unconventional materials and environments. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
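
    For reference, the elementary-cellular-automaton formalism mentioned above, implemented conventionally in software (Rule 110, a rule known to be computationally universal); this illustrates the computational model, not the photochromic hardware:

    ```python
    import numpy as np

    def step_rule110(cells):
        """One update of elementary cellular automaton Rule 110 with
        periodic boundary conditions."""
        left = np.roll(cells, 1)
        right = np.roll(cells, -1)
        idx = 4 * left + 2 * cells + right         # neighborhood as a 3-bit code
        rule = np.array([0, 1, 1, 1, 0, 1, 1, 0])  # bits of 110 = 0b01101110
        return rule[idx]

    # Evolve a single seeded cell for a few generations.
    cells = np.zeros(64, dtype=int)
    cells[32] = 1
    for _ in range(32):
        cells = step_rule110(cells)
    ```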

  6. Novel opportunities for computational biology and sociology in drug discovery

    PubMed Central

    Yao, Lixia

    2009-01-01

    Drug discovery today is impossible without sophisticated modeling and computation. In this review we touch on previous advances in computational biology and by tracing the steps involved in pharmaceutical development, we explore a range of novel, high value opportunities for computational innovation in modeling the biological process of disease and the social process of drug discovery. These opportunities include text mining for new drug leads, modeling molecular pathways and predicting the efficacy of drug cocktails, analyzing genetic overlap between diseases and predicting alternative drug use. Computation can also be used to model research teams and innovative regions and to estimate the value of academy-industry ties for scientific and human benefit. Attention to these opportunities could promise punctuated advance, and will complement the well-established computational work on which drug discovery currently relies. PMID:19674801

  7. Lensing of the CMB: non-Gaussian aspects.

    PubMed

    Zaldarriaga, M

    2001-06-01

    We compute the small angle limit of the three- and four-point function of the cosmic microwave background (CMB) temperature induced by the gravitational lensing effect by the large-scale structure of the universe. We relate the non-Gaussian aspects presented in this paper to those in our previous studies of the lensing effects. We interpret the statistics proposed in previous work in terms of different configurations of the four-point function and show how they relate to the statistic that maximizes the S/N.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poutanen, Juri, E-mail: juri.poutanen@utu.fi

    Rosseland mean opacity plays an important role in theories of stellar evolution and X-ray burst models. In the high-temperature regime, when most of the gas is completely ionized, the opacity is dominated by Compton scattering. Our aim here is to critically evaluate previous works on this subject and to compute the exact Rosseland mean opacity for Compton scattering over a broad range of temperature and electron degeneracy parameter. We use relativistic kinetic equations for Compton scattering and compute the photon mean free path as a function of photon energy by solving the corresponding integral equation in the diffusion limit. As a byproduct we also demonstrate the way to compute photon redistribution functions in the case of degenerate electrons. We then compute the Rosseland mean opacity as a function of temperature and electron degeneracy and present useful approximate expressions. We compare our results to previous calculations and find a significant difference in the low-temperature regime and strong degeneracy. We then proceed to compute the flux mean opacity in both free-streaming and diffusion approximations, and show that the latter is nearly identical to the Rosseland mean opacity. We also provide a simple way to account for the true absorption in evaluating the Rosseland and flux mean opacities.
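
    For context, the Rosseland mean opacity referred to here is the standard harmonic mean of the monochromatic opacity, weighted by the temperature derivative of the Planck function B_ν(T):

    ```latex
    \frac{1}{\kappa_R} \;=\;
    \frac{\displaystyle\int_0^\infty \kappa_\nu^{-1}\,
          \frac{\partial B_\nu(T)}{\partial T}\, d\nu}
         {\displaystyle\int_0^\infty
          \frac{\partial B_\nu(T)}{\partial T}\, d\nu}
    ```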

  9. Applications in Data-Intensive Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Anuj R.; Adkins, Joshua N.; Baxter, Douglas J.

    2010-04-01

    This book chapter, to be published in Advances in Computers, Volume 78, in 2010, describes applications of data intensive computing (DIC). This is an invited chapter resulting from a previous publication on DIC. This work summarizes efforts coming out of PNNL's Data Intensive Computing Initiative. Advances in technology have empowered individuals with the ability to generate digital content with mouse clicks and voice commands. Digital pictures, emails, text messages, home videos, audio, and webpages are common examples of digital content that are generated on a regular basis. Data intensive computing facilitates human understanding of complex problems. Data-intensive applications provide timely and meaningful analytical results in response to exponentially growing data complexity and associated analysis requirements through the development of new classes of software, algorithms, and hardware.

  10. Universal measurement-based quantum computation in two-dimensional symmetry-protected topological phases

    NASA Astrophysics Data System (ADS)

    Wei, Tzu-Chieh; Huang, Ching-Yu

    2017-09-01

    Recent progress in the characterization of gapped quantum phases has also triggered the search for a universal resource for quantum computation in symmetric gapped phases. Prior works in one dimension suggest that it is a feature more common than previously thought, in that nontrivial one-dimensional symmetry-protected topological (SPT) phases provide quantum computational power characterized by the algebraic structure defining these phases. Progress in two and higher dimensions so far has been limited to special fixed points. Here we provide two families of two-dimensional Z2 symmetric wave functions such that there exists a finite region of the parameter in the SPT phases that supports universal quantum computation. The quantum computational power appears to lose its universality at the boundary between the SPT and the symmetry-breaking phases.

  11. Trusted measurement model based on multitenant behaviors.

    PubMed

    Ning, Zhen-Hu; Shen, Chang-Xiang; Zhao, Yong; Liang, Peng

    2014-01-01

    With the fast growth of pervasive computing, especially cloud computing, behaviour measurement is at the core and plays a vital role. A new behaviour measurement tailored for multitenants in cloud computing is needed urgently to fundamentally establish trust relationships. Based on our previous research, we propose an improved trust relationship scheme which captures the world of cloud computing, where multitenants share the same physical computing platform. Here, we first present the related work on multitenant behaviour; secondly, we give the scheme of behaviour measurement where decoupling of multitenants is taken into account; thirdly, we explicitly explain our decoupling algorithm for multitenants; fourthly, we introduce a new way of similarity calculation for deviation control, which fits the coupled multitenants under study well; lastly, we design the experiments to test our scheme.

  12. Trusted Measurement Model Based on Multitenant Behaviors

    PubMed Central

    Ning, Zhen-Hu; Shen, Chang-Xiang; Zhao, Yong; Liang, Peng

    2014-01-01

    With the fast growth of pervasive computing, especially cloud computing, behaviour measurement is at the core and plays a vital role. A new behaviour measurement tailored for multitenants in cloud computing is needed urgently to fundamentally establish trust relationships. Based on our previous research, we propose an improved trust relationship scheme which captures the world of cloud computing, where multitenants share the same physical computing platform. Here, we first present the related work on multitenant behaviour; secondly, we give the scheme of behaviour measurement where decoupling of multitenants is taken into account; thirdly, we explicitly explain our decoupling algorithm for multitenants; fourthly, we introduce a new way of similarity calculation for deviation control, which fits the coupled multitenants under study well; lastly, we design the experiments to test our scheme. PMID:24987731

  13. Multi-tasking computer control of video related equipment

    NASA Technical Reports Server (NTRS)

    Molina, Rod; Gilbert, Bob

    1989-01-01

    The flexibility, cost-effectiveness and widespread availability of personal computers now make it possible to completely integrate the previously separate elements of video post-production into a single device. Specifically, a personal computer, such as the Commodore-Amiga, can perform multiple and simultaneous tasks from an individual unit. Relatively low cost, minimal space requirements and user-friendliness provide the most favorable environment for the many phases of video post-production. Computers are well known for their basic abilities to process numbers, text and graphics and to reliably perform repetitive and tedious functions efficiently. These capabilities can now apply as either additions or alternatives to existing video post-production methods. A present example of computer-based video post-production technology is the RGB CVC (Computer and Video Creations) WorkSystem. A wide variety of integrated functions are made possible with an Amiga computer existing at the heart of the system.

  14. State-of-the-art and dissemination of computational tools for drug-design purposes: a survey among Italian academics and industrial institutions.

    PubMed

    Artese, Anna; Alcaro, Stefano; Moraca, Federica; Reina, Rocco; Ventura, Marzia; Costantino, Gabriele; Beccari, Andrea R; Ortuso, Francesco

    2013-05-01

    During the first edition of the Computationally Driven Drug Discovery meeting, held in November 2011 at Dompé Pharma (L'Aquila, Italy), a questionnaire regarding the diffusion and the use of computational tools for drug-design purposes in both academia and industry was distributed among all participants. This is a follow-up of a previously reported investigation carried out among a few companies in 2007. The new questionnaire implemented five sections dedicated to: research group identification and classification; 18 different computational techniques; software information; hardware data; and economical business considerations. In this article, together with a detailed history of the different computational methods, a statistical analysis of the survey results that enabled the identification of the prevalent computational techniques adopted in drug-design projects is reported and a profile of the computational medicinal chemist currently working in academia and pharmaceutical companies in Italy is highlighted.

  15. Configurable software for satellite graphics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartzman, P D

    An important goal in interactive computer graphics is to provide users with both quick system responses for basic graphics functions and enough computing power for complex calculations. One solution is to have a distributed graphics system in which a minicomputer and a powerful large computer share the work. The most versatile type of distributed system is an intelligent satellite system in which the minicomputer is programmable by the application user and can do most of the work while the large remote machine is used for difficult computations. At New York University, the hardware was configured from available equipment. The level of system intelligence resulted almost completely from software development. Unlike previous work with intelligent satellites, the resulting system had system control centered in the satellite. It also had the ability to reconfigure software during realtime operation. The design of the system was done at a very high level using set theoretic language. The specification clearly illustrated processor boundaries and interfaces. The high-level specification also produced a compact, machine-independent virtual graphics data structure for picture representation. The software was written in a systems implementation language; thus, only one set of programs was needed for both machines. A user can program both machines in a single language. Tests of the system with an application program indicate that it has very high potential. A major result of this work is the demonstration that a gigantic investment in new hardware is not necessary for computing facilities interested in graphics.

  16. An agent-based computational model for tuberculosis spreading on age-structured populations

    NASA Astrophysics Data System (ADS)

    Graciani Rodrigues, C. C.; Espíndola, Aquino L.; Penna, T. J. P.

    2015-06-01

    In this work we present an agent-based computational model to study the spreading of the tuberculosis (TB) disease on age-structured populations. The model proposed is a merge of two previous models: an agent-based computational model for the spreading of tuberculosis and a bit-string model for biological aging. The combination of TB with population aging reproduces the coexistence of health states, as seen in real populations. In addition, the universal exponential behavior of mortality curves is still preserved. Finally, the population distribution as a function of age shows the prevalence of TB mostly in elders, for high-efficacy treatments.
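
    The bit-string aging ingredient is the well-known Penna model; a minimal Python sketch of that half of the merge (the TB infection dynamics are omitted and all parameter values are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    GENOME_BITS, THRESHOLD, REPRO_AGE, N_MAX = 64, 3, 8, 1000

    def expressed_mutations(genome, age):
        # deleterious bits switched on up to the current age
        return bin(genome & ((1 << age) - 1)).count("1")

    pop = [[0, 0] for _ in range(100)]  # [genome, age] pairs
    for year in range(200):
        survivors = []
        for genome, age in pop:
            age += 1
            dead = (age >= GENOME_BITS
                    or expressed_mutations(genome, age) >= THRESHOLD
                    or rng.random() < len(pop) / N_MAX)  # Verhulst crowding
            if not dead:
                survivors.append([genome, age])
                if age >= REPRO_AGE:  # birth with one new random mutation
                    child = genome | (1 << int(rng.integers(GENOME_BITS)))
                    survivors.append([child, 0])
        pop = survivors
    ```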

  17. Operational Characteristics of a High Voltage Dense Plasma Focus.

    DTIC Science & Technology

    1985-11-01

    A high voltage dense plasma focus powered by a single-stage Marx bank was designed, built and operated. The maximum bank parameters are: voltage 120 kV, energy 20 kJ, short-circuit current 600 kA. The bank impedance is about 200 milliohms. The plasma focus center electrode diameter is 1.27 cm. The ... about 50 milliohms. The context of this work is established with a review of previous plasma focus theoretical, experimental and computational work and ...

  18. Automated Analysis of CT Images for the Inspection of Hardwood Logs

    Treesearch

    Harbin Li; A. Lynn Abbott; Daniel L. Schmoldt

    1996-01-01

    This paper investigates several classifiers for labeling internal features of hardwood logs using computed tomography (CT) images. A primary motivation is to locate and classify internal defects so that an optimal cutting strategy can be chosen. Previous work has relied on combinations of low-level processing, image segmentation, autoregressive texture modeling, and...

  19. Improving Learners' Oral Fluency through Computer-Mediated Emotional Intelligence Activities

    ERIC Educational Resources Information Center

    Abdolrezapour, Parisa

    2017-01-01

    Previous studies have shown that emotional intelligence (henceforth, EI) has a significant impact on important life outcomes (e.g., mental and physical health, academic achievement, work performance, and social relationships). This study aimed to see whether there is any relationship between EI and English as a foreign language (EFL) learners'…

  20. Three-dimensional rendering of segmented object using matlab - biomed 2010.

    PubMed

    Anderson, Jeffrey R; Barrett, Steven F

    2010-01-01

    The three-dimensional rendering of microscopic objects is a difficult and challenging task that often requires specialized image processing techniques. Previous work described a semi-automatic segmentation process for fluorescently stained neurons collected as a sequence of slice images with a confocal laser scanning microscope. Once properly segmented, each individual object can be rendered and studied as a three-dimensional virtual object. This paper describes the work associated with the design and development of Matlab files to create three-dimensional images from the segmented object data previously mentioned. Part of the motivation for this work is to integrate both the segmentation and rendering processes into one software application, providing a seamless transition from the segmentation tasks to the rendering and visualization tasks. Previously these tasks were accomplished on two different computer systems, Windows and Linux, which basically limits the usefulness of the segmentation and rendering applications to those who have both computer systems readily available. The focus of this work is to create custom Matlab image processing algorithms for object rendering and visualization, and merge these capabilities with the Matlab files that were developed especially for the image segmentation task. The completed Matlab application will contain both the segmentation and rendering processes in a single graphical user interface, or GUI. This process for rendering three-dimensional images in Matlab requires that a sequence of two-dimensional binary images, each representing a cross-sectional slice of the object, be reassembled in a 3D space and covered with a surface. Additional segmented objects can be rendered in the same 3D space. The surface properties of each object can be varied by the user to aid in the study and analysis of the objects. This interactive process becomes a powerful visual tool to study and understand microscopic objects.
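
    A Python/scikit-image counterpart of the rendering step described (the original work is in Matlab; the binary sphere below is a hypothetical stand-in for segmented neuron slices):

    ```python
    import numpy as np
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d.art3d import Poly3DCollection
    from skimage import measure

    # Hypothetical stack of binary segmentation slices (z, y, x); a real
    # application would load the slices produced by the segmentation step.
    stack = np.zeros((40, 64, 64), dtype=np.uint8)
    zz, yy, xx = np.mgrid[:40, :64, :64]
    stack[(zz - 20) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] = 1

    # Reassemble the 2-D slices into a 3-D surface mesh, then render it.
    verts, faces, _, _ = measure.marching_cubes(stack.astype(float), level=0.5)
    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    ax.add_collection3d(Poly3DCollection(verts[faces], alpha=0.7))
    ax.set_xlim(0, 64); ax.set_ylim(0, 64); ax.set_zlim(0, 40)
    plt.show()
    ```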

  1. Pseudoracemic amino acid complexes: blind predictions for flexible two-component crystals.

    PubMed

    Görbitz, Carl Henrik; Dalhus, Bjørn; Day, Graeme M

    2010-08-14

    Ab initio prediction of the crystal packing in complexes between two flexible molecules is a particularly challenging computational chemistry problem. In this work we present results of single crystal structure determinations as well as theoretical predictions for three 1:1 complexes between hydrophobic l- and d-amino acids (pseudoracemates), known from previous crystallographic work to form structures with one of two alternative hydrogen bonding arrangements. These are accurately reproduced in the theoretical predictions together with a series of patterns that have never been observed experimentally. In this bewildering forest of potential polymorphs, hydrogen bonding arrangements and molecular conformations, the theoretical predictions succeeded, for all three complexes, in finding the correct hydrogen bonding pattern. For two of the complexes, the calculations also reproduce the exact space group and side chain orientations in the best ranked predicted structure. This includes one complex for which the observed crystal packing clearly contradicted previous experience based on experimental data for a substantial number of related amino acid complexes. The results highlight the significant recent advances that have been made in computational methods for crystal structure prediction.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farmer, J.D.; Ott, E.; Yorke, J.A.

    Dimension is perhaps the most basic property of an attractor. In this paper we discuss a variety of different definitions of dimension, compute their values for a typical example, and review previous work on the dimension of chaotic attractors. The relevant definitions of dimension are of two general types, those that depend only on metric properties, and those that depend on probabilistic properties (that is, they depend on the frequency with which a typical trajectory visits different regions of the attractor). Both our example and the previous work that we review support the conclusion that all of the probabilistic dimensions take on the same value, which we call the dimension of the natural measure, and all of the metric dimensions take on a common value, which we call the fractal dimension. Furthermore, the dimension of the natural measure is typically equal to the Lyapunov dimension, which is defined in terms of Lyapunov numbers, and thus is usually far easier to calculate than any other definition. Because it is computable and more physically relevant, we feel that the dimension of the natural measure is more important than the fractal dimension.
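
    The Lyapunov dimension mentioned here is given by the Kaplan-Yorke formula; a short self-contained sketch, with the Lorenz attractor's approximate literature exponents as a usage example:

    ```python
    def lyapunov_dimension(exponents):
        """Kaplan-Yorke estimate: D = k + (l1 + ... + lk) / |l_{k+1}|,
        where k is the largest index for which the partial sum of the
        exponents, sorted in decreasing order, is still non-negative."""
        lam = sorted(exponents, reverse=True)
        s = 0.0
        for k, l in enumerate(lam):
            if s + l < 0:
                return k + s / abs(l)
            s += l
        return float(len(lam))  # partial sums never go negative

    # e.g. classic Lorenz-system exponents (approximate literature values)
    print(lyapunov_dimension([0.906, 0.0, -14.572]))  # ~2.06
    ```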

  3. Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake

    Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.

  4. Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD

    DOE PAGES

    Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake; ...

    2017-03-24

    Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.

  5. Optical signal processing using photonic reservoir computing

    NASA Astrophysics Data System (ADS)

    Salehi, Mohammad Reza; Dehyadegari, Louiza

    2014-10-01

    As a new approach to recognition and classification problems, photonic reservoir computing has such advantages as parallel information processing, power efficiency and high speed. In this paper, a photonic structure is proposed for reservoir computing and investigated using a simple yet non-partial noisy time series prediction task. This study includes the application of a suitable topology with self-feedbacks in a network of SOAs, which lends the system a strong memory, and the adjustment of adequate parameters, resulting in perfect recognition accuracy (100%) for noise-free time series, a 3% improvement over previous results. For the classification of noisy time series, the accuracy showed a 4% increase and amounted to 96%. Furthermore, an analytical approach is suggested for solving the rate equations, which leads to a substantial decrease in simulation time, an important parameter in the classification of large signals such as speech, and yields better results compared with previous works.
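
    A software analogue of the reservoir-computing idea (a generic echo state network in NumPy); this is a sketch of the paradigm, not the authors' SOA-based photonic reservoir or its rate-equation model:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_res, n_in = 100, 1

    # Random fixed reservoir with self-feedback; spectral radius scaled
    # below 1 for the echo-state property.
    W = rng.normal(size=(n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
    W_in = rng.normal(size=(n_res, n_in))

    def run_reservoir(u):
        x, states = np.zeros(n_res), []
        for u_t in u:
            x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
            states.append(x)
        return np.array(states)

    # One-step-ahead prediction of a noisy sine, readout by ridge regression.
    t = np.arange(500)
    u = np.sin(0.1 * t) + 0.05 * rng.normal(size=t.size)
    X, y = run_reservoir(u[:-1]), u[1:]
    W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
    pred = X @ W_out
    ```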

  6. Combining metric episodes with semantic event concepts within the Symbolic and Sub-Symbolic Robotics Intelligence Control System (SS-RICS)

    NASA Astrophysics Data System (ADS)

    Kelley, Troy D.; McGhee, S.

    2013-05-01

    This paper describes the ongoing development of a robotic control architecture inspired by computational cognitive architectures from the discipline of cognitive psychology. The Symbolic and Sub-Symbolic Robotics Intelligence Control System (SS-RICS) combines symbolic and sub-symbolic representations of knowledge into a unified control architecture. The new architecture leverages previous work in cognitive architectures, specifically the development of the Adaptive Character of Thought-Rational (ACT-R) and Soar. This paper details current work on learning from episodes or events. The use of episodic memory as a learning mechanism has, until recently, been largely ignored by computational cognitive architectures. This paper details work on metric-level episodic memory streams and methods for translating episodes into abstract schemas. The presentation will include research on learning through novelty and self-generated feedback mechanisms for autonomous systems.

  7. Representing exact number visually using mental abacus.

    PubMed

    Frank, Michael C; Barner, David

    2012-02-01

    Mental abacus (MA) is a system for performing rapid and precise arithmetic by manipulating a mental representation of an abacus, a physical calculation device. Previous work has speculated that MA is based on visual imagery, suggesting that it might be a method of representing exact number nonlinguistically, but given the limitations on visual working memory, it is unknown how MA structures could be stored. We investigated the structure of the representations underlying MA in a group of children in India. Our results suggest that MA is represented in visual working memory by splitting the abacus into a series of columns, each of which is independently stored as a unit with its own detailed substructure. In addition, we show that the computations of practiced MA users (but not those of control participants) are relatively insensitive to verbal interference, consistent with the hypothesis that MA is a nonlinguistic format for exact numerical computation.

  8. A Kalman Filtering Perspective for Multiatlas Segmentation*

    PubMed Central

    Gao, Yi; Zhu, Liangjia; Cates, Joshua; MacLeod, Rob S.; Bouix, Sylvain; Tannenbaum, Allen

    2016-01-01

    In multiatlas segmentation, one typically registers several atlases to the novel image, and their respective segmented label images are transformed and fused to form the final segmentation. In this work, we provide a new dynamical system perspective for multiatlas segmentation, inspired by the following fact: The transformation that aligns the current atlas to the novel image can be not only computed by direct registration but also inferred from the transformation that aligns the previous atlas to the image together with the transformation between the two atlases. This process is similar to the global positioning system on a vehicle, which obtains its position both by querying the satellites and by employing the previous location and velocity, neither answer in isolation being perfect. To solve this problem, a dynamical system scheme is crucial to combine the two pieces of information; for example, a Kalman filtering scheme is used. Accordingly, in this work, a Kalman multiatlas segmentation is proposed to stabilize the global/affine registration step. The contributions of this work are twofold. First, it provides a new dynamical system perspective for standard independent multiatlas registrations, and it is solved by Kalman filtering. Second, with very little extra computation, it can be combined with most existing multiatlas segmentation schemes for better registration/segmentation accuracy. PMID:26807162
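
    In the scalar case the fusion step reduces to the textbook Kalman update; a toy sketch in which the propagated transform plays the prediction role and direct registration plays the measurement role (all numbers hypothetical):

    ```python
    def kalman_update(x_pred, p_pred, z, r):
        """Fuse a prediction (mean x_pred, variance p_pred) with a new
        measurement (value z, noise variance r); returns the posterior
        mean and variance."""
        k = p_pred / (p_pred + r)  # Kalman gain
        return x_pred + k * (z - x_pred), (1.0 - k) * p_pred

    # x_pred: transform inferred from the previous atlas plus the
    # atlas-to-atlas transform; z: the direct registration result.
    x_pred, p_pred = 1.8, 0.30   # propagated estimate and its uncertainty
    z, r = 2.1, 0.20             # direct "measurement" and its noise variance
    x_post, p_post = kalman_update(x_pred, p_pred, z, r)
    ```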

  9. Applied Computational Fluid Dynamics at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Kwak, Dochan (Technical Monitor)

    1994-01-01

    The field of Computational Fluid Dynamics (CFD) has advanced to the point where it can now be used for many applications in fluid mechanics research and aerospace vehicle design. A few applications being explored at NASA Ames Research Center will be presented and discussed. The examples presented will range in speed from hypersonic to low speed incompressible flow applications. Most of the results will be from numerical solutions of the Navier-Stokes or Euler equations in three space dimensions for general geometry applications. Computational results will be used to highlight the presentation as appropriate. Advances in computational facilities including those associated with NASA's CAS (Computational Aerosciences) Project of the Federal HPCC (High Performance Computing and Communications) Program will be discussed. Finally, opportunities for future research will be presented and discussed. All material will be taken from non-sensitive, previously-published and widely-disseminated work.

  10. Comparing DNS and Experiments of Subcritical Flow Past an Isolated Surface Roughness Element

    NASA Astrophysics Data System (ADS)

    Doolittle, Charles; Goldstein, David

    2009-11-01

    Results are presented from computational and experimental studies of subcritical roughness within a Blasius boundary layer. This work stems from discrepancies presented by Stephani and Goldstein (AIAA Paper 2009-585) where DNS results did not agree with hot-wire measurements. The near-wake regions of cylindrical surface roughness elements corresponding to roughness-based Reynolds numbers Re_k of about 202 are of specific concern. Laser-Doppler anemometry and flow visualization in water, as well as the same spectral DNS code used by Stephani and Goldstein, are used to obtain both quantitative and qualitative comparisons with previous results. Conclusions regarding previous studies will be presented alongside discussion of current work, including grid resolution studies and an examination of vorticity dynamics.

  11. Bandwidth reduction for video-on-demand broadcasting using secondary content insertion

    NASA Astrophysics Data System (ADS)

    Golynski, Alexander; Lopez-Ortiz, Alejandro; Poirier, Guillaume; Quimper, Claude-Guy

    2005-01-01

    An optimal broadcasting scheme under the presence of secondary content (i.e. advertisements) is proposed. The proposed scheme works both for movies encoded in a Constant Bit Rate (CBR) or a Variable Bit Rate (VBR) format. It is shown experimentally that secondary content in movies can make Video-on-Demand (VoD) broadcasting systems more efficient. An efficient algorithm is given to compute the optimal broadcasting schedule with secondary content, which in particular significantly improves over the best previously known algorithm for computing the optimal broadcasting schedule without secondary content.

  12. Computational Study of the Adsorption of Dimethyl Methylphosphonate (DMMP) on the (010) Surface of Anatase TiO2 With and Without Faceting

    DTIC Science & Technology

    2009-12-05

    surface area of anatase nanocrystals [6] and to be especially active in photocatalysis [7]. Recent work by Dzwigaj et al. [8] has clearly shown that the... two-fold-coordinated (O2c) sites can also be involved in hydrogen bond (H-bond) formation. The effects, on the structure of the (100) and other... To reduce the computational cost, geometry optimization was done at the restricted Hartree-Fock (RHF) level. This has previously been shown [36,37...

  13. A resilient and efficient CFD framework: Statistical learning tools for multi-fidelity and heterogeneous information fusion

    NASA Astrophysics Data System (ADS)

    Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em

    2017-09-01

    Exascale-level simulations require fault-resilient algorithms that are robust against repeated and expected software and/or hardware failures during computations, which may render the simulation results unsatisfactory. If each processor can share some global information about the simulation from a coarse, limited-accuracy but relatively costless auxiliary simulator, we can effectively fill in the missing spatial data at the required times by a statistical learning technique - multi-level Gaussian process regression - on the fly; this has been demonstrated in previous work [1]. Building on that work, we also employ another (nonlinear) statistical learning technique, Diffusion Maps, that detects computational redundancy in time and hence accelerates the simulation by projective time integration, giving the overall computation a "patch dynamics" flavor. Furthermore, we are now able to perform information fusion with multi-fidelity and heterogeneous data (including stochastic data). Finally, we set the foundations of a new framework in CFD, called patch simulation, that combines information fusion techniques from, in principle, multiple fidelity and resolution simulations (and even experiments) with a new adaptive timestep refinement technique. We present two benchmark problems (the heat equation and the Navier-Stokes equations) to demonstrate the new capability that statistical learning tools can bring to traditional scientific computing algorithms. For each problem, we rely on heterogeneous and multi-fidelity data, either from a coarse simulation of the same equation or from a stochastic, particle-based, more "microscopic" simulation. We consider, as such "auxiliary" models, a Monte Carlo random walk for the heat equation and a dissipative particle dynamics (DPD) model for the Navier-Stokes equations. More broadly, in this paper we demonstrate the symbiotic and synergistic combination of statistical learning, domain decomposition, and scientific computing in exascale simulations.
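
    The fill-in step can be illustrated with a minimal sketch (our own, not the authors' code): a tiny RBF Gaussian process is trained on the cheap coarse field, and a second GP on the coarse-to-fine discrepancy at the few surviving fine samples corrects it where a processor's data were lost. The 1-D field, kernel, and length scales are all assumptions made for illustration.

    ```python
    import numpy as np

    def rbf(A, B, ell):
        # Squared-exponential kernel between two 1-D point sets.
        d = A[:, None] - B[None, :]
        return np.exp(-0.5 * (d / ell) ** 2)

    def gp_predict(Xtr, ytr, Xte, ell, jitter=1e-8):
        # Noise-free GP posterior mean.
        K = rbf(Xtr, Xtr, ell) + jitter * np.eye(len(Xtr))
        return rbf(Xte, Xtr, ell) @ np.linalg.solve(K, ytr)

    x_coarse = np.linspace(0.0, 1.0, 25)                # cheap auxiliary simulator grid
    y_coarse = np.sin(2 * np.pi * x_coarse)             # stand-in coarse field
    x_fine = np.array([0.1, 0.4, 0.6, 0.9])             # surviving fine samples
    y_fine = np.sin(2 * np.pi * x_fine) + 0.1 * x_fine  # stand-in fine field

    x_lost = np.linspace(0.0, 1.0, 101)                 # region a failed processor owned
    mu_coarse = gp_predict(x_coarse, y_coarse, x_lost, ell=0.2)
    delta = y_fine - gp_predict(x_coarse, y_coarse, x_fine, ell=0.2)
    mu_fine = mu_coarse + gp_predict(x_fine, delta, x_lost, ell=0.5)  # filled-in data
    ```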

  14. Rotor-Bearing Dynamics Technology Design Guide. Part 8. A computerized Data Retrieval System for Fluid Film Bearings

    DTIC Science & Technology

    1980-10-01

    AFAPL-TR-78-6, Part VIII. Rotor-Bearing Dynamics Technology Design Guide, Part VIII: A Computerized Data Retrieval System for Fluid Film Bearings. ... "Protection," Task 304806, "Aerospace Lubrication," Work Unit 30480685, "Rotor-Bearing Dynamics Design." The work reported herein was performed during the... the previous issue of the Rotor-Bearing Dynamics Technology Design Guide, one volume dealt with the calculation of performance parameters and pertur...

  15. Discrimination Power of Polynomial-Based Descriptors for Graphs by Using Functional Matrices.

    PubMed

    Dehmer, Matthias; Emmert-Streib, Frank; Shi, Yongtang; Stefu, Monica; Tripathi, Shailesh

    2015-01-01

    In this paper, we study the discrimination power of graph measures that are based on graph-theoretical matrices. The paper generalizes the work of [M. Dehmer, M. Moosbrugger, Y. Shi, Encoding structural information uniquely with polynomial-based descriptors by employing the Randić matrix, Applied Mathematics and Computation, 268 (2015), 164-168]. We demonstrate that by using the new functional matrix approach, exhaustively generated graphs can be discriminated more uniquely than shown in the mentioned previous work.

  16. Discrimination Power of Polynomial-Based Descriptors for Graphs by Using Functional Matrices

    PubMed Central

    Dehmer, Matthias; Emmert-Streib, Frank; Shi, Yongtang; Stefu, Monica; Tripathi, Shailesh

    2015-01-01

    In this paper, we study the discrimination power of graph measures that are based on graph-theoretical matrices. The paper generalizes the work of [M. Dehmer, M. Moosbrugger, Y. Shi, Encoding structural information uniquely with polynomial-based descriptors by employing the Randić matrix, Applied Mathematics and Computation, 268 (2015), 164–168]. We demonstrate that by using the new functional matrix approach, exhaustively generated graphs can be discriminated more uniquely than shown in the mentioned previous work. PMID:26479495

  17. A Practical Computational Method for the Anisotropic Redshift-Space 3-Point Correlation Function

    NASA Astrophysics Data System (ADS)

    Slepian, Zachary; Eisenstein, Daniel J.

    2018-04-01

    We present an algorithm enabling computation of the anisotropic redshift-space galaxy 3-point correlation function (3PCF) scaling as N^2, with N the number of galaxies. Our previous work showed how to compute the isotropic 3PCF with this scaling by expanding the radially-binned density field around each galaxy in the survey into spherical harmonics and combining these coefficients to form multipole moments. The N^2 scaling occurred because this approach never explicitly required the relative angle between a galaxy pair about the primary galaxy. Here we generalize this work, demonstrating that in the presence of azimuthally-symmetric anisotropy produced by redshift-space distortions (RSD) the 3PCF can be described by two triangle side lengths, two independent total angular momenta, and a spin. This basis for the anisotropic 3PCF allows its computation with negligible additional work over the isotropic 3PCF. We also present the covariance matrix of the anisotropic 3PCF measured in this basis. Our algorithm tracks the full 5-D redshift-space 3PCF, uses an accurate line of sight to each triplet, is exact in angle, and easily handles edge correction. It will enable use of the anisotropic large-scale 3PCF as a probe of RSD in current and upcoming large-scale redshift surveys.
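
    To make the scaling argument concrete, the toy sketch below (our own illustration, not the authors' code: fake data, one galaxy, two radial bins, no edge correction or normalization) computes spherical-harmonic coefficients of the radially binned neighbor field and combines them over m, so the opening angle between neighbor pairs is never enumerated explicitly.

    ```python
    import numpy as np
    from scipy.special import sph_harm

    rng = np.random.default_rng(0)
    dx = rng.normal(size=(500, 3))                     # separations to neighbors
    r = np.linalg.norm(dx, axis=1)
    az = np.arctan2(dx[:, 1], dx[:, 0]) % (2 * np.pi)  # azimuthal angle
    pol = np.arccos(dx[:, 2] / r)                      # polar angle
    bin1 = (r >= 0.5) & (r < 1.0)                      # two radial bins
    bin2 = (r >= 1.0) & (r < 1.5)

    def alm(sel, ell, m):
        # Harmonic coefficient of the radially binned neighbor field.
        return np.conj(sph_harm(m, ell, az[sel], pol[sel])).sum()

    for ell in range(3):
        zeta = sum((alm(bin1, ell, m) * np.conj(alm(bin2, ell, m))).real
                   for m in range(-ell, ell + 1))
        # Up to normalization, zeta is this galaxy's contribution to the
        # multipole moment of the triangle counts for this (r1, r2) pair;
        # no neighbor-pair angle was ever enumerated, hence the N^2 total cost.
        print(ell, zeta)
    ```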

  18. PyEEG: an open source Python module for EEG/MEG feature extraction.

    PubMed

    Bao, Forrest Sheng; Liu, Xin; Zhang, Christina

    2011-01-01

    Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.

  19. PyEEG: An Open Source Python Module for EEG/MEG Feature Extraction

    PubMed Central

    Bao, Forrest Sheng; Liu, Xin; Zhang, Christina

    2011-01-01

    Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction. PMID:21512582
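
    For a flavor of the feature functions involved, here is a numpy-only sketch of one common EEG feature family (relative spectral power in the clinical frequency bands). The function name and band edges are our own illustrative choices, not PyEEG's API.

    ```python
    import numpy as np

    def band_power_ratios(x, fs, bands=((0.5, 4), (4, 8), (8, 13), (13, 30))):
        """Relative power of signal x in the delta/theta/alpha/beta bands."""
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        psd = np.abs(np.fft.rfft(x)) ** 2                 # periodogram
        powers = np.array([psd[(freqs >= lo) & (freqs < hi)].sum()
                           for lo, hi in bands])
        return powers / powers.sum()

    fs = 256.0                                            # assumed sampling rate, Hz
    t = np.arange(0, 4, 1 / fs)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # fake EEG
    print(band_power_ratios(eeg, fs))                     # alpha band dominates
    ```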

  20. Linking working memory and long-term memory: a computational model of the learning of new words.

    PubMed

    Jones, Gary; Gobet, Fernand; Pine, Julian M

    2007-11-01

    The nonword repetition (NWR) test has been shown to be a good predictor of children's vocabulary size. NWR performance has been explained using phonological working memory, which is seen as a critical component in the learning of new words. However, no detailed specification of the link between phonological working memory and long-term memory (LTM) has been proposed. In this paper, we present a computational model of children's vocabulary acquisition (EPAM-VOC) that specifies how phonological working memory and LTM interact. The model learns phoneme sequences, which are stored in LTM and mediate how much information can be held in working memory. The model's behaviour is compared with that of children in a new study of NWR, conducted in order to ensure the same nonword stimuli and methodology across ages. EPAM-VOC shows a pattern of results similar to that of children: performance is better for shorter nonwords and for wordlike nonwords, and performance improves with age. EPAM-VOC also simulates the superior performance for single consonant nonwords over clustered consonant nonwords found in previous NWR studies. EPAM-VOC provides a simple and elegant computational account of some of the key processes involved in the learning of new words: it specifies how phonological working memory and LTM interact; makes testable predictions; and suggests that developmental changes in NWR performance may reflect differences in the amount of information that has been encoded in LTM rather than developmental changes in working memory capacity.

  1. Expert systems research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duda, R.O.; Shortliffe, E.H.

    1983-04-15

    Artificial intelligence, long a topic of basic computer science research, is now being applied to problems of scientific, technical, and commercial interest. Some consultation programs, although limited in versatility, have achieved levels of performance rivaling those of human experts. A collateral benefit of this work is the systematization of previously unformalized knowledge in areas such as medical diagnosis and geology. 30 references.

  2. A system for distributed intrusion detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snapp, S.R.; Brentano, J.; Dias, G.V.

    1991-01-01

    The study of providing security in computer networks is a rapidly growing area of interest because the network is the medium over which most attacks or intrusions on computer systems are launched. One approach to solving this problem is the intrusion-detection concept, whose basic premise is that not only is abandoning the existing and huge infrastructure of possibly-insecure computer and network systems impossible, but replacing them by totally-secure systems may not be feasible or cost effective. Previous work on intrusion-detection systems was performed on stand-alone hosts and on a broadcast local area network (LAN) environment. The focus of our present research is to extend our network intrusion-detection concept from the LAN environment to arbitrarily wider areas, with the network topology being arbitrary as well. The generalized distributed environment is heterogeneous, i.e., the network nodes can be hosts or servers from different vendors, and some of them could be LAN managers or, as in our previous work, a network security monitor (NSM). The proposed architecture for this distributed intrusion-detection system consists of the following components: a host manager in each host; a LAN manager for monitoring each LAN in the system; and a central manager, placed at a single secure location, which receives reports from the various host and LAN managers, processes and correlates these reports, and detects intrusions. 11 refs., 2 figs.

  3. Solving da Vinci stereopsis with depth-edge-selective V2 cells

    PubMed Central

    Assee, Andrew; Qian, Ning

    2007-01-01

    We propose a new model for da Vinci stereopsis based on a coarse-to-fine disparity-energy computation in V1 and disparity-boundary-selective units in V2. Unlike previous work, our model contains only binocular cells, relies on distributed representations of disparity, and has a simple V1-to-V2 feedforward structure. We demonstrate with random dot stereograms that the V2 stage of our model is able to determine the location and the eye-of-origin of monocularly occluded regions and improve disparity map computation. We also examine a few related issues. First, we argue that since monocular regions are binocularly defined, they cannot generally be detected by monocular cells. Second, we show that our coarse-to-fine V1 model for conventional stereopsis explains double matching in Panum’s limiting case. This provides computational support to the notion that the perceived depth of a monocular bar next to a binocular rectangle may not be da Vinci stereopsis per se (Gillam et al., 2003). Third, we demonstrate that some stimuli previously deemed invalid have simple, valid geometric interpretations. Our work suggests that studies of da Vinci stereopsis should focus on stimuli more general than the bar-and-rectangle type and that disparity-boundary-selective V2 cells may provide a simple physiological mechanism for da Vinci stereopsis. PMID:17698163
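
    The model's V1 front end builds on the standard disparity-energy unit, which can be sketched in 1-D as follows. This is a generic textbook-style version with assumed Gabor parameters, not the paper's coarse-to-fine implementation.

    ```python
    import numpy as np

    def gabor(x, sigma=4.0, freq=0.25, phase=0.0):
        # 1-D Gabor receptive field profile.
        return np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * x + phase)

    def disparity_energy(left, right, d_pref, x):
        """Response of a binocular unit tuned to disparity d_pref (position-shift model)."""
        energy = 0.0
        for phase in (0.0, np.pi / 2):                  # quadrature pair
            f_left = gabor(x, phase=phase)
            f_right = gabor(x - d_pref, phase=phase)    # right RF shifted by d_pref
            energy += (left @ f_left + right @ f_right) ** 2
        return energy

    x = np.arange(-16, 17, dtype=float)
    stimulus = np.exp(-x**2 / 8.0)                      # a bright bar
    true_disparity = 3
    left, right = stimulus, np.roll(stimulus, true_disparity)
    responses = {d: disparity_energy(left, right, d, x) for d in range(-6, 7)}
    print(max(responses, key=responses.get))            # peaks at the true disparity
    ```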

  4. Software Support for Transiently Powered Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Der Woude, Joel Matthew

    With the continued reduction in size and cost of computing, power becomes an increasingly heavy burden on system designers for embedded applications. While energy harvesting techniques are an increasingly desirable solution for many deeply embedded applications where size and lifetime are a priority, previous work has shown that energy harvesting provides insufficient power for long-running computation. We present Ratchet, which to the authors' knowledge is the first automatic, software-only checkpointing system for energy harvesting platforms. We show that Ratchet provides a means to extend computation across power cycles, consistent with those experienced by energy harvesting devices. We demonstrate the correctness of our system under frequent failures and show that it has an average overhead of 58.9% across a suite of benchmarks representative of embedded applications.

  5. Nuclear-Recoil Differential Cross Sections for the Two Photon Double Ionization of Helium

    NASA Astrophysics Data System (ADS)

    Abdel Naby, Shahin; Ciappina, M. F.; Lee, T. G.; Pindzola, M. S.; Colgan, J.

    2013-05-01

    In support of the reaction microscope measurements at the free-electron laser facility at Hamburg (FLASH), we use the time-dependent close-coupling (TDCC) method to calculate fully differential nuclear-recoil cross sections for the two-photon double ionization of He at a photon energy of 44 eV. The total cross section for the double ionization is in good agreement with previous calculations. The nuclear-recoil distribution is in good agreement with the experimental measurements. In contrast to single-photon double ionization, the maximum nuclear-recoil triple differential cross section is obtained at small nuclear momenta. This work was supported in part by grants from NSF and US DoE. Computational work was carried out at NERSC in Oakland, California and the National Institute for Computational Sciences in Knoxville, Tennessee.

  6. Investigation of Active Interrogation Techniques to Detect Special Nuclear Material in Maritime Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Thomas Martin; Patton, Bruce W

    The detection and interdiction of special nuclear material (SNM) is still a high-priority focus area for many organizations around the world. One method that is commonly considered a leading candidate in the detection of SNM is active interrogation (AI). AI is different from its close relative, passive interrogation, in that an active source is used to enhance or create a detectable signal (usually fission) from SNM, particularly in shielded scenarios or scenarios where the SNM has a low activity. The use of AI thus makes the detection of SNM easier or, in some scenarios, even enables previously impossible detection. In this work the signal from prompt neutrons and photons as well as delayed neutrons and photons will be combined, as is typically done in AI. In previous work AI has been evaluated experimentally and computationally. However, for the purposes of this work, past scenarios are considered lightly shielded and tightly coupled spatially. At most, the previous work interrogated the contents of one standard cargo container (2.44 x 2.60 x 6.10 m), and the source and detector were both within a few meters of the object being interrogated. A few examples of this type of previous work can be found in references 1 and 2. Obviously, more heavily shielded AI scenarios will require larger source intensities, larger detector surface areas (larger detectors or more detectors), greater detector efficiencies, longer count times, or some combination of these.

  7. Quantum Monte Carlo for atoms and molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnett, R.N.

    1989-11-01

    The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy eigenstates for 1--4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H2, LiH, Li2, and H2O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li2, and H2O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90--100% of the correlation energy is obtained) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.

  8. Improving operational plume forecasts

    NASA Astrophysics Data System (ADS)

    Balcerak, Ernie

    2012-04-01

    Forecasting how plumes of particles, such as radioactive particles from a nuclear disaster, will be transported and dispersed in the atmosphere is an important but computationally challenging task. During the Fukushima nuclear disaster in Japan, operational plume forecasts were produced each day, but as the emissions continued, previous emissions were not included in the simulations used for forecasts because it became impractical to rerun the simulations each day from the beginning of the accident. Draxler and Rolph examine whether it is possible to improve plume simulation speed and flexibility as conditions and input data change. The authors use a method known as a transfer coefficient matrix approach that allows them to simulate many radionuclides using only a few generic species for the computation. Their simulations work faster by dividing the computation into separate independent segments in such a way that the most computationally time consuming pieces of the calculation need to be done only once. This makes it possible to provide real-time operational plume forecasts by continuously updating the previous simulations as new data become available. They tested their method using data from the Fukushima incident to show that it performed well. (Journal of Geophysical Research-Atmospheres, doi:10.1029/2011JD017205, 2012)
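
    The linearity that makes this work can be sketched in a few lines: once the dispersion model has been run for a unit emission in each period (the columns of the transfer coefficient matrix), any emission history reduces to a matrix-vector product, so updating a forecast with new emissions is nearly free. All sizes and numbers below are made up for illustration.

    ```python
    import numpy as np

    n_receptors, n_periods = 1000, 10
    # tcm[i, j]: concentration at receptor i due to a UNIT emission in period j,
    # precomputed once by the expensive dispersion model (numbers faked here).
    tcm = np.random.rand(n_receptors, n_periods) * 1e-6

    emissions = np.zeros(n_periods)
    emissions[:4] = [5e3, 8e3, 6e3, 2e3]       # source term known so far
    forecast = tcm @ emissions                 # concentrations at all receptors

    # When a new day's emission estimate arrives, no rerun from the start of
    # the accident is needed; the stored unit-emission segments are reused.
    emissions[4] = 7e3
    forecast = tcm @ emissions
    ```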

  9. Hybrid reduced order modeling for assembly calculations

    DOE PAGES

    Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; ...

    2015-08-14

    While the accuracy of assembly calculations has greatly improved due to the increase in computer power enabling more refined description of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution in small computing environments often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous work demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This paper extends that work to coupled code systems as currently employed in assembly calculations. Finally, numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
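
    A generic snapshot-based sketch of the idea (POD-style, our own illustration rather than the paper's SCALE-coupled implementation): a handful of full-model runs expose the dominant input-to-output directions, and a truncated SVD yields a cheap surrogate for new inputs.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_out, n_in, n_runs, rank = 2000, 50, 60, 5

    A_true = rng.normal(size=(n_out, rank)) @ rng.normal(size=(rank, n_in))
    full_model = lambda x: A_true @ x          # stand-in for the expensive code

    X = rng.normal(size=(n_in, n_runs))        # sampled inputs
    Y = np.column_stack([full_model(x) for x in X.T])   # output snapshots

    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    Ur = U[:, :rank]                           # dominant output subspace
    G = (Ur.T @ Y) @ np.linalg.pinv(X)         # small input-to-coefficient map
    rom = lambda x: Ur @ (G @ x)               # cheap surrogate for new inputs

    x_new = rng.normal(size=n_in)
    err = (np.linalg.norm(rom(x_new) - full_model(x_new))
           / np.linalg.norm(full_model(x_new)))
    print(f"relative error: {err:.2e}")        # tiny: 5 modes suffice here
    ```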

  10. Resource-Efficient, Hierarchical Auto-Tuning of a Hybrid Lattice Boltzmann Computation on the Cray XT4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Computational Research Division, Lawrence Berkeley National Laboratory; NERSC, Lawrence Berkeley National Laboratory; Computer Science Department, University of California, Berkeley

    2009-05-04

    We apply auto-tuning to a hybrid MPI-pthreads lattice Boltzmann computation running on the Cray XT4 at the National Energy Research Scientific Computing Center (NERSC). Previous work showed that multicore-specific auto-tuning can improve the performance of lattice Boltzmann magnetohydrodynamics (LBMHD) by a factor of 4x when running on dual- and quad-core Opteron dual-socket SMPs. We extend these studies to the distributed memory arena via a hybrid MPI/pthreads implementation. In addition to conventional auto-tuning at the local SMP node, we tune at the message-passing level to determine the optimal aspect ratio as well as the correct balance between MPI tasks and threads per MPI task. Our study presents a detailed performance analysis when moving along an isocurve of constant hardware usage: fixed total memory, total cores, and total nodes. Overall, our work points to approaches for improving intra- and inter-node efficiency on large-scale multicore systems for demanding scientific applications.

  11. A Review of Major Nursing Vocabularies and the Extent to Which They Have the Characteristics Required for Implementation in Computer-based Systems

    PubMed Central

    Henry, Suzanne Bakken; Warren, Judith J.; Lange, Linda; Button, Patricia

    1998-01-01

    Building on the work of previous authors, the Computer-based Patient Record Institute (CPRI) Work Group on Codes and Structures has described features of a classification scheme for implementation within a computer-based patient record. The authors of the current study reviewed the evaluation literature related to six major nursing vocabularies (the North American Nursing Diagnosis Association Taxonomy 1, the Nursing Interventions Classification, the Nursing Outcomes Classification, the Home Health Care Classification, the Omaha System, and the International Classification for Nursing Practice) to determine the extent to which the vocabularies include the CPRI features. None of the vocabularies met all criteria. The Omaha System, Home Health Care Classification, and International Classification for Nursing Practice each included five features. Criteria not fully met by any systems were clear and non-redundant representation of concepts, administrative cross-references, syntax and grammar, synonyms, uncertainty, context-free identifiers, and language independence. PMID:9670127

  12. Energy-efficient quantum computing

    NASA Astrophysics Data System (ADS)

    Ikonen, Joni; Salmilehto, Juha; Möttönen, Mikko

    2017-04-01

    In the near future, one of the major challenges in the realization of large-scale quantum computers operating at low temperatures is the management of harmful heat loads owing to thermal conduction of cabling and dissipation at cryogenic components. This naturally raises the question of what the fundamental limitations of energy consumption in scalable quantum computing are. In this work, we derive the greatest lower bound for the gate error induced by a single application of a bosonic drive mode of given energy. Previously, such an error type has been considered to be inversely proportional to the total driving power, but we show that this limitation can be circumvented by introducing a qubit driving scheme which reuses and corrects drive pulses. Specifically, our method serves to reduce the average energy consumption per gate operation without increasing the average gate error. Thus our work shows that precise, scalable control of quantum systems can, in principle, be implemented without the introduction of excessive heat or decoherence.

  13. Functional near-infrared spectroscopy for adaptive human-computer interfaces

    NASA Astrophysics Data System (ADS)

    Yuksel, Beste F.; Peck, Evan M.; Afergan, Daniel; Hincks, Samuel W.; Shibata, Tomoki; Kainerstorfer, Jana; Tgavalekos, Kristen; Sassaroli, Angelo; Fantini, Sergio; Jacob, Robert J. K.

    2015-03-01

    We present a brain-computer interface (BCI) that detects, analyzes and responds to user cognitive state in real-time using machine learning classifications of functional near-infrared spectroscopy (fNIRS) data. Our work is aimed at increasing the narrow communication bandwidth between the human and computer by implicitly measuring users' cognitive state without any additional effort on the part of the user. Traditionally, BCIs have been designed to explicitly send signals as the primary input. However, such systems are usually designed for people with severe motor disabilities and are too slow and inaccurate for the general population. In this paper, we demonstrate, together with previous work [1], that a BCI that implicitly measures cognitive workload can improve user performance and awareness compared to a control condition by adapting to user cognitive state in real-time. We also discuss some of the other applications we have used in this field to measure and respond to cognitive states such as cognitive workload, multitasking, and user preference.

  14. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; Russell, Samuel S.

    2012-01-01

    Objective: Develop a software application utilizing high-performance computing techniques, including general-purpose graphics processing units (GPGPUs), for the analysis and visualization of large thermographic data sets. Over the past several years, an increasing effort among scientists and engineers to utilize graphics processing units (GPUs) in a more general-purpose fashion is allowing for previously unobtainable levels of computation by individual workstations. As data sets grow, the methods to work with them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU, which yield significant increases in performance. These common computations have high degrees of data parallelism; that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Image processing is one area where GPUs are being used to greatly increase the performance of certain analysis and visualization techniques.
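
    The kind of data parallelism described can be illustrated with a toy per-pixel computation on a stack of frames; on a GPU each pixel would map to a thread, while numpy's vectorization plays the same role here. The data and the contrast metric are hypothetical.

    ```python
    import numpy as np

    frames = np.random.rand(100, 256, 256).astype(np.float32)  # fake frame stack

    # Per-pixel peak contrast over time: the value at (i, j) depends only on
    # the time series at (i, j), so every pixel can be processed in parallel.
    baseline = frames[0]
    contrast = (frames - baseline).max(axis=0)
    ```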

  15. Integral equation methods for computing likelihoods and their derivatives in the stochastic integrate-and-fire model.

    PubMed

    Paninski, Liam; Haith, Adrian; Szirtes, Gabor

    2008-02-01

    We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.
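
    For orientation, the quantity at stake can be estimated by brute force. The sketch below is a Monte Carlo baseline for the first-passage-time density, not the paper's integral-equation method; all model parameters are arbitrary.

    ```python
    import numpy as np

    def first_passage_density(I, tau=20.0, sigma=1.0, v_th=10.0,
                              dt=0.1, T=100.0, n_paths=20000, seed=0):
        """Histogram estimate of the density of the first threshold crossing."""
        rng = np.random.default_rng(seed)
        v = np.zeros(n_paths)
        t_cross = np.full(n_paths, np.nan)
        alive = np.ones(n_paths, dtype=bool)
        for k in range(int(T / dt)):
            # Euler-Maruyama step of the leaky integrate-and-fire voltage.
            v[alive] += dt * (-v[alive] / tau + I) \
                        + sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
            crossed = alive & (v >= v_th)
            t_cross[crossed] = (k + 1) * dt
            alive &= ~crossed
        counts, edges = np.histogram(t_cross[~np.isnan(t_cross)],
                                     bins=50, range=(0.0, T))
        return edges[:-1], counts / (n_paths * (edges[1] - edges[0]))

    t, p = first_passage_density(I=0.6)   # p approximates the likelihood kernel
    ```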

  16. Computer-aided detection of initial polyp candidates with level set-based adaptive convolution

    NASA Astrophysics Data System (ADS)

    Zhu, Hongbin; Duan, Chaijie; Liang, Zhengrong

    2009-02-01

    In order to eliminate or weaken the interference between different topological structures on the colon wall, adaptive and normalized convolution methods were used to compute the first and second order spatial derivatives of computed tomographic colonography images, which is the beginning of various geometric analyses. However, the performance of such methods greatly depends on the single-layer representation of the colon wall, which is called the starting layer (SL) in the following text. In this paper, we introduce a level set-based adaptive convolution (LSAC) method to compute the spatial derivatives, in which the level set method is employed to determine a more reasonable SL. The LSAC was applied to a computer-aided detection (CAD) scheme to detect the initial polyp candidates, and experiments showed that it benefits the CAD scheme in both the detection sensitivity and specificity as compared to our previous work.

  17. Entanglement-Based Machine Learning on a Quantum Computer

    NASA Astrophysics Data System (ADS)

    Cai, X.-D.; Wu, D.; Su, Z.-E.; Chen, M.-C.; Wang, X.-L.; Li, Li; Liu, N.-L.; Lu, C.-Y.; Pan, J.-W.

    2015-03-01

    Machine learning, a branch of artificial intelligence, learns from previous experience to optimize performance, and it is ubiquitous in various fields such as computer science, financial analysis, robotics, and bioinformatics. A challenge is that machine learning with the rapidly growing "big data" could become intractable for classical computers. Recently, quantum machine learning algorithms [Lloyd, Mohseni, and Rebentrost, arXiv:1307.0411] were proposed which could offer an exponential speedup over classical algorithms. Here, we report the first experimental entanglement-based classification of two-, four-, and eight-dimensional vectors to different clusters using a small-scale photonic quantum computer, which is then used to implement supervised and unsupervised machine learning. The results demonstrate the working principle of using quantum computers to manipulate and classify high-dimensional vectors, the core mathematical routine in machine learning. The method can, in principle, be scaled to larger numbers of qubits, and may provide a new route to accelerate machine learning.

  18. Effect of image scaling and segmentation in digital rock characterisation

    NASA Astrophysics Data System (ADS)

    Jones, B. D.; Feng, Y. T.

    2016-04-01

    Digital material characterisation from microstructural geometry is an emerging field in computer simulation. For permeability characterisation, a variety of studies exist where the lattice Boltzmann method (LBM) has been used in conjunction with computed tomography (CT) imaging to simulate fluid flow through microscopic rock pores. While these previous works show that the technique is applicable, the use of binary image segmentation and the bounceback boundary condition results in a loss of grain surface definition when the modelled geometry is compared to the original CT image. We apply the immersed moving boundary (IMB) condition of Noble and Torczynski as a partial bounceback boundary condition which may be used to better represent the geometric definition provided by a CT image. The IMB condition is validated against published work on idealised porous geometries in both 2D and 3D. Following this, greyscale image segmentation is applied to a CT image of Diemelstadt sandstone. By varying the mapping of CT voxel densities to lattice sites, it is shown that binary image segmentation may underestimate the true permeability of the sample. A CUDA-C-based code, LBM-C, was developed specifically for this work and leverages GPU hardware in order to carry out computations.

  19. A 3D inversion for all-space magnetotelluric data with static shift correction

    NASA Astrophysics Data System (ADS)

    Zhang, Kun

    2017-04-01

    Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. The static shift can be detected by quantitative analysis of the apparent parameters (apparent resistivity and impedance phase) of MT in the high frequency range, and the correction is completed during inversion. The method is an automatic computer processing technique with zero additional cost, and it avoids extra field work and indoor processing while giving good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure to the algorithm, improved its computational efficiency, reduced its memory requirements, and added topographic and marine factors. As a result, the 3D inversion can run on a general PC with high efficiency and accuracy, and all MT data from surface stations, seabed stations and underground stations can be used in the inversion algorithm.

  20. Fluid Fe(1 - x)Hx under extreme conditions

    NASA Astrophysics Data System (ADS)

    Seclaman, Alexandra; Wilson, Hugh F.; Cohen, Ronald E.

    We study the fluid Fe-H binary system using first-principles molecular dynamics (FPMD) and a new FPMD-based method, CATS, in order to compute efficiently and accurately the equation of state of Fe-H fluids up to 5 TPa and 30,000 K. We constructed GRBV-type LDA pseudopotentials for Fe and H with small cutoff radii in order to avoid pseudo-core overlap. In the liquid Fe regime we find good agreement with previous work, up to the pressures where data are available. In the high-density regime of pure H we also find good agreement with previous results. Previous work has focused on low Fe concentrations in metallic liquid H. We extend previous studies by investigating several intermediate Fe(1 - x)Hx liquid compositions, as well as metallic liquid H and Fe. Preliminary results indicate extreme compositional pressure effects under isothermal and isochoric conditions: a 3.9 TPa difference between Fe and H at 20,000 K. Thermal pressure effects are comparatively small, 0.12-0.15 TPa per 10,000 K for H and Fe, respectively. Equations of state will be presented and fluid immiscibility will be discussed. This work has been supported by the ERC Advanced Grant ToMCaT and NSF and the Carnegie Institution.

  1. Formulation and implementation of a practical algorithm for parameter estimation with process and measurement noise

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1980-01-01

    A new formulation is proposed for the problem of parameter estimation of dynamic systems with both process and measurement noise. The formulation gives estimates that are maximum likelihood asymptotically in time. The means used to overcome the difficulties encountered by previous formulations are discussed. It is then shown how the proposed formulation can be efficiently implemented in a computer program. A computer program using the proposed formulation is available in a form suitable for routine application. Examples with simulated and real data are given to illustrate that the program works well.

  2. Using Social Network Graphs as Visualization Tools to Influence Peer Selection Decision-Making Strategies to Access Information about Complex Socioscientific Issues

    ERIC Educational Resources Information Center

    Yoon, Susan A.

    2011-01-01

    This study extends previous research that explores how visualization affordances that computational tools provide and social network analyses that account for individual- and group-level dynamic processes can work in conjunction to improve learning outcomes. The study's main hypothesis is that when social network graphs are used in instruction,…

  3. Using Words Instead of Jumbled Characters as Stimuli in Keyboard Training Facilitates Fluent Performance

    ERIC Educational Resources Information Center

    DeFulio, Anthony; Crone-Todd, Darlene E.; Long, Lauren V.; Nuzzo, Paul A.; Silverman, Kenneth

    2011-01-01

    Keyboarding skill is an important target for adult education programs due to the ubiquity of computers in modern work environments. A previous study showed that novice typists learned key locations quickly but that fluency took a relatively long time to develop. In the present study, novice typists achieved fluent performance in nearly half the…

  4. A Re-examination of the Black English Copula. Working Papers in Sociolinguistics, No. 66.

    ERIC Educational Resources Information Center

    Baugh, John

    A corpus of Black English (BEV) data is re-examined with exclusive attention to the "is" form of the copula. This analysis differs from previous examinations in that more constraints have been introduced, and the Cedergren/Sankoff computer program for multivariant analysis has been employed. The analytic techniques that are used allow for a finer…

  5. Collaboration, Reflection and Selective Neglect: Campus-Based Marketing Students' Experiences of Using a Virtual Learning Environment

    ERIC Educational Resources Information Center

    Molesworth, Mike

    2004-01-01

    Previous studies have suggested significant benefits to using computer-mediated communication in higher education and the development of the relevant skills may also be important for preparing students for their working careers. This study is a review of the introduction of a virtual learning environment to support a group of 60 campus-based,…

  6. Importance sampling studies of helium using the Feynman-Kac path integral method

    NASA Astrophysics Data System (ADS)

    Datta, S.; Rejcek, J. M.

    2018-05-01

    In the Feynman-Kac path integral approach the eigenvalues of a quantum system can be computed using Wiener measure which uses Brownian particle motion. In our previous work on such systems we have observed that the Wiener process numerically converges slowly for dimensions greater than two because almost all trajectories will escape to infinity. One can speed up this process by using a generalized Feynman-Kac (GFK) method, in which the new measure associated with the trial function is stationary, so that the convergence rate becomes much faster. We thus achieve an example of "importance sampling" and, in the present work, we apply it to the Feynman-Kac (FK) path integrals for the ground and first few excited-state energies for He to speed up the convergence rate. We calculate the path integrals using space averaging rather than the time averaging as done in the past. The best previous calculations from variational computations report precisions of 10^-16 Hartrees, whereas in most cases our path integral results obtained for the ground and first excited states of He are lower than these results by about 10^-6 Hartrees or more.
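
    Schematically (in atomic units with H = -Δ/2 + V, and with normalizations following the authors' conventions rather than spelled out here), the two estimators contrasted above take the following forms.

    ```latex
    % Plain Feynman-Kac: B_s is Brownian motion started at x.
    E_0 \;=\; -\lim_{t\to\infty} \frac{1}{t}
      \ln \mathbb{E}_x\!\left[ e^{-\int_0^t V(B_s)\,\mathrm{d}s} \right]

    % Generalized FK with a trial function \psi_T > 0: the walk acquires a
    % drift and V is replaced by the smoother local energy E_L, so
    % trajectories no longer escape to infinity and convergence speeds up.
    \mathrm{d}X_s = \nabla \ln \psi_T(X_s)\,\mathrm{d}s + \mathrm{d}B_s,
    \qquad E_L = \frac{H\psi_T}{\psi_T},
    \qquad E_0 \;=\; -\lim_{t\to\infty} \frac{1}{t}
      \ln \mathbb{E}\!\left[ e^{-\int_0^t E_L(X_s)\,\mathrm{d}s} \right]
    ```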

  7. Numerical Simulation of the Diffusion Processes in Nanoelectrode Arrays Using an Axial Neighbor Symmetry Approximation.

    PubMed

    Peinetti, Ana Sol; Gilardoni, Rodrigo S; Mizrahi, Martín; Requejo, Felix G; González, Graciela A; Battaglini, Fernando

    2016-06-07

    Nanoelectrode arrays have introduced a completely new battery of devices with fascinating electrocatalytic, sensitivity, and selectivity properties. To understand and predict the electrochemical response of these arrays, a theoretical framework is needed. Cyclic voltammetry is a well-suited experimental technique for understanding the underlying diffusion and kinetic processes. Previous works describing microelectrode arrays have exploited the interelectrode distance to simulate their behavior as the summation of individual electrodes. This approach becomes limited when the size of the electrodes decreases to the nanometer scale, due to their strong radial effect and the consequent overlapping of the diffusional fields. In this work, we present a computational model able to simulate the electrochemical behavior of arrays working either as the summation of individual electrodes or as affected by the overlapping of the diffusional fields, without prior assumptions. Our computational model relies on dividing a regular electrode array into cells. In each of them, there is a central electrode surrounded by neighbor electrodes; these neighbor electrodes are transformed into a ring maintaining the same active electrode area as the summation of the closest neighbor electrodes. Using this axial neighbor symmetry approximation, the problem acquires a cylindrical symmetry, making it applicable to any diffusion pattern. The model is validated against micro- and nanoelectrode arrays, showing its ability to predict their behavior and therefore to be used as a design tool.

  8. Embedded-Based Graphics Processing Unit Cluster Platform for Multiple Sequence Alignments

    PubMed Central

    Wei, Jyh-Da; Cheng, Hui-Jun; Lin, Chun-Yuan; Ye, Jin; Yeh, Kuan-Yu

    2017-01-01

    High-end graphics processing units (GPUs), such as NVIDIA Tesla/Fermi/Kepler series cards with thousands of cores per chip, have been widely applied to high-performance computing fields over the past decade. These desktop GPU cards must be installed in personal computers/servers with desktop CPUs, and the cost and power consumption of constructing a GPU cluster platform are very high. In recent years, NVIDIA released an embedded board, called Jetson Tegra K1 (TK1), which contains 4 ARM Cortex-A15 CPUs and 192 Compute Unified Device Architecture cores (belonging to the Kepler GPU family). Jetson Tegra K1 has several advantages, such as low cost, low power consumption, and high applicability, and it has been applied in several specific applications. In our previous work, a bioinformatics platform with a single TK1 (STK platform) was constructed, and that work also showed that Web and mobile services can be implemented on the STK platform with a good cost-performance ratio by comparing the STK platform with desktop CPUs and GPUs. In this work, an embedded-based GPU cluster platform is constructed with multiple TK1s (MTK platform). Complex system installation and setup are necessary procedures at first. Then, 2 job assignment modes are designed for the MTK platform to provide services for users. Finally, ClustalW v2.0.11 and ClustalWtk are ported to the MTK platform. The experimental results showed that the speedup ratios achieved 5.5 and 4.8 times for ClustalW v2.0.11 and ClustalWtk, respectively, when comparing 6 TK1s with a single TK1. The MTK platform is proven to be useful for multiple sequence alignments. PMID:28835734

  9. Embedded-Based Graphics Processing Unit Cluster Platform for Multiple Sequence Alignments.

    PubMed

    Wei, Jyh-Da; Cheng, Hui-Jun; Lin, Chun-Yuan; Ye, Jin; Yeh, Kuan-Yu

    2017-01-01

    High-end graphics processing units (GPUs), such as NVIDIA Tesla/Fermi/Kepler series cards with thousands of cores per chip, have been widely applied to high-performance computing fields over the past decade. These desktop GPU cards must be installed in personal computers/servers with desktop CPUs, and the cost and power consumption of constructing a GPU cluster platform are very high. In recent years, NVIDIA released an embedded board, called Jetson Tegra K1 (TK1), which contains 4 ARM Cortex-A15 CPUs and 192 Compute Unified Device Architecture cores (belonging to the Kepler GPU family). Jetson Tegra K1 has several advantages, such as low cost, low power consumption, and high applicability, and it has been applied in several specific applications. In our previous work, a bioinformatics platform with a single TK1 (STK platform) was constructed, and that work also showed that Web and mobile services can be implemented on the STK platform with a good cost-performance ratio by comparing the STK platform with desktop CPUs and GPUs. In this work, an embedded-based GPU cluster platform is constructed with multiple TK1s (MTK platform). Complex system installation and setup are necessary procedures at first. Then, 2 job assignment modes are designed for the MTK platform to provide services for users. Finally, ClustalW v2.0.11 and ClustalWtk are ported to the MTK platform. The experimental results showed that the speedup ratios achieved 5.5 and 4.8 times for ClustalW v2.0.11 and ClustalWtk, respectively, when comparing 6 TK1s with a single TK1. The MTK platform is proven to be useful for multiple sequence alignments.

  10. An investigation of the effects of touchpad location within a notebook computer.

    PubMed

    Kelaher, D; Nay, T; Lawrence, B; Lamar, S; Sommerich, C M

    2001-02-01

    This study evaluated effects of the location of a notebook computer's integrated touchpad, complementing previous work in the area of desktop mouse location effects. Most often, integrated touchpads are located in the computer's wrist rest, centered on the keyboard. This study characterized the effects of this bottom-center location and four alternatives (top center, top right, right side, and bottom right) upon upper extremity posture, discomfort, preference, and performance. Touchpad location was found to significantly impact each of those measures. The top-center location was particularly poor, in that it elicited more ulnar deviation, more shoulder flexion, more discomfort, and perceptions of performance impedance. In general, the bottom-center, bottom-right, and right-side locations fared better, though subjects' wrists were more extended in the bottom locations. Suggestions for notebook computer design are provided.

  11. Tensor scale-based fuzzy connectedness image segmentation

    NASA Astrophysics Data System (ADS)

    Saha, Punam K.; Udupa, Jayaram K.

    2003-05-01

    Tangible solutions to image segmentation are vital in many medical imaging applications. Toward this goal, a framework based on fuzzy connectedness was developed in our laboratory. A fundamental notion called "affinity" - a local fuzzy hanging-togetherness relation on voxels - determines the effectiveness of this segmentation framework in real applications. In this paper, we introduce the notion of "tensor scale" - a recently developed local morphometric parameter - into the affinity definition and study its effectiveness. Although our previous notion of "local scale" using the spherical model successfully incorporated local structure size into affinity and resulted in measurable improvements in segmentation results, a major limitation of the previous approach was that it ignored local structural orientation and anisotropy. The current approach of using tensor scale in affinity computation allows an effective utilization of local size, orientation, and anisotropy in a unified manner. Tensor scale is used for computing both the homogeneity- and object-feature-based components of affinity. Preliminary results of the proposed method on several medical images and computer-generated phantoms of realistic shapes are presented. Further extensions of this work are discussed.
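
    The computational core of the fuzzy connectedness framework can be sketched as follows (2-D, with a simple intensity-homogeneity affinity standing in for the paper's tensor-scale affinity): the connectivity of a voxel is the best "weakest link" over all paths from a seed, computable with a Dijkstra-style propagation. All parameters are illustrative assumptions.

    ```python
    import heapq
    import numpy as np

    def affinity(img, a, b):
        # Homogeneity-based affinity in [0, 1]; higher for similar intensities.
        return np.exp(-abs(float(img[a]) - float(img[b])) / 10.0)

    def fuzzy_connectedness(img, seed):
        conn = np.zeros(img.shape)
        conn[seed] = 1.0
        heap = [(-1.0, seed)]
        while heap:
            strength, p = heapq.heappop(heap)
            strength = -strength
            if strength < conn[p]:
                continue                          # stale queue entry
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                q = (p[0] + dy, p[1] + dx)
                if not (0 <= q[0] < img.shape[0] and 0 <= q[1] < img.shape[1]):
                    continue
                s = min(strength, affinity(img, p, q))  # weakest link on path
                if s > conn[q]:
                    conn[q] = s
                    heapq.heappush(heap, (-s, q))
        return conn

    img = np.zeros((64, 64)); img[16:48, 16:48] = 100.0  # toy object
    conn = fuzzy_connectedness(img + np.random.randn(64, 64), (32, 32))
    segmentation = conn > 0.5                     # threshold the connectivity map
    ```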

  12. Mathematical analysis of deception.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rogers, Deanna Tamae Koike; Durgin, Nancy Ann

    This report describes the results of a three year research project about the use of deception in information protection. The work involved a collaboration between Sandia employees and students in the Center for Cyber Defenders (CCD) and at the University of California at Davis. This report includes a review of the history of deception, a discussion of some cognitive issues, an overview of previous work in deception, the results of experiments on the effects of deception on an attacker, and a mathematical model of error types associated with deception in computer systems.

  13. Towards a computational- and algorithmic-level account of concept blending using analogies and amalgams

    NASA Astrophysics Data System (ADS)

    Besold, Tarek R.; Kühnberger, Kai-Uwe; Plaza, Enric

    2017-10-01

    Concept blending - a cognitive process which allows for the combination of certain elements (and their relations) from originally distinct conceptual spaces into a new unified space combining these previously separate elements, and enables reasoning and inference over the combination - is taken as a key element of creative thought and combinatorial creativity. In this article, we summarise our work towards the development of a computational-level and algorithmic-level account of concept blending, combining approaches from computational analogy-making and case-based reasoning (CBR). We present the theoretical background, as well as an algorithmic proposal integrating higher-order anti-unification matching and generalisation from analogy with amalgams from CBR. The feasibility of the approach is then exemplified in two case studies.
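
    As a toy illustration of the generalisation step, the sketch below performs plain first-order anti-unification of two terms, replacing mismatches with shared fresh variables; the paper's higher-order, analogy-aware machinery is considerably richer, and the example terms are invented.

    ```python
    def anti_unify(s, t, subst=None, counter=None):
        """Least general generalisation of two nested-tuple terms."""
        subst = {} if subst is None else subst
        counter = counter if counter is not None else [0]
        if s == t:
            return s
        if (isinstance(s, tuple) and isinstance(t, tuple)
                and len(s) == len(t) and s[0] == t[0]):
            # Same functor and arity: generalise argument-wise.
            return (s[0],) + tuple(anti_unify(a, b, subst, counter)
                                   for a, b in zip(s[1:], t[1:]))
        if (s, t) not in subst:                 # reuse the variable for a repeated clash
            subst[(s, t)] = f"X{counter[0]}"
            counter[0] += 1
        return subst[(s, t)]

    boat = ("carries", "boat", ("over", "water"))
    car = ("carries", "car", ("over", "land"))
    print(anti_unify(boat, car))
    # ('carries', 'X0', ('over', 'X1')) -- the shared "vehicle" schema
    ```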

  14. Further reduction of minimal first-met bad markings for the computationally efficient synthesis of a maximally permissive controller

    NASA Astrophysics Data System (ADS)

    Liu, GaiYun; Chao, Daniel Yuh

    2015-08-01

    To date, research on the supervisor design for flexible manufacturing systems focuses on speeding up the computation of optimal (maximally permissive) liveness-enforcing controllers. Recent deadlock prevention policies for systems of simple sequential processes with resources (S3PR) reduce the computation burden by considering only the minimal portion of all first-met bad markings (FBMs). Maximal permissiveness is ensured by not forbidding any live state. This paper proposes a method to further reduce the size of minimal set of FBMs to efficiently solve integer linear programming problems while maintaining maximal permissiveness using a vector-covering approach. This paper improves the previous work and achieves the simplest structure with the minimal number of monitors.
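
    A sketch in the spirit of the vector-covering reduction (toy markings, not the paper's S3PR models; we assume the standard componentwise ordering, under which a constraint forbidding a first-met bad marking also forbids every marking covering it): only the minimal elements need to be kept.

    ```python
    import numpy as np

    def minimal_markings(fbms):
        """Keep only FBMs not covered by (componentwise >=) another FBM."""
        fbms = [np.array(m) for m in fbms]
        keep = []
        for i, m in enumerate(fbms):
            covered = any(j != i and np.all(fbms[j] <= m)
                          and not np.array_equal(fbms[j], m)
                          for j in range(len(fbms)))
            if not covered:
                keep.append(tuple(int(v) for v in m))
        return sorted(set(keep))

    fbms = [(1, 0, 2), (1, 1, 2), (2, 0, 2), (0, 2, 1)]
    print(minimal_markings(fbms))   # [(0, 2, 1), (1, 0, 2)] -- two survive
    ```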

  15. Relativistic, correlation, and polarization effects in two-photon photoionization of Xe

    NASA Astrophysics Data System (ADS)

    Lagutin, B. M.; Petrov, I. D.; Sukhorukov, V. L.; Demekhin, Ph. V.; Knie, A.; Ehresmann, A.

    2017-06-01

    Two-photon ionization of xenon was investigated theoretically for exciting-photon energies from 6.7 to 11.5 eV, which result in the ionization of Xe between the 5p1/2 (13.43 eV) and 5s (23.40 eV) thresholds. We describe the extension of a previously developed computational technique for the inclusion of relativistic effects to calculate energies of intermediate resonance states and cross sections for two-photon ionization. Reasonable consistency of cross sections calculated in length and velocity form was obtained only after considering many-electron correlations. Agreement between calculated and measured resonance energies is found when core polarization is additionally included in the calculations. The presently computed two-photon photoionization cross sections of Xe are compared with the Ar cross sections of our previous work. Photoelectron angular distribution parameters calculated here indicate that intermediate resonances strongly influence the photoelectron angular distribution of Xe.

  16. Spin-dependent post-Newtonian parameters from EMRI computation in Kerr background

    NASA Astrophysics Data System (ADS)

    Friedman, John; Le Tiec, Alexandre; Shah, Abhay

    2013-04-01

    Because the extreme mass-ratio inspiral (EMRI) approximation is accurate to all orders in v/c, it can be used to find high-order post-Newtonian parameters that are not yet analytically accessible. We report here on progress in computing spin-dependent, conservative, post-Newtonian parameters from a radiation-gauge computation for a particle in circular orbit in a family of Kerr geometries. For a particle with 4-velocity u^α = U k^α, with k^α the helical Killing vector of the perturbed spacetime, the renormalized perturbation δU, when written as a function of the particle's angular velocity, is invariant under gauge transformations generated by helically symmetric vectors. The EMRI computations are done in a modified radiation gauge. Extracted parameters are compared to previously known and newly computed spin-dependent post-Newtonian terms. This work is modeled on earlier computations by Blanchet, Detweiler, Le Tiec and Whiting of spin-independent terms for a particle in circular orbit in a Schwarzschild geometry.

  17. Computational Model Tracking Primary Electrons, Secondary Electrons, and Ions in the Discharge Chamber of an Ion Engine

    NASA Technical Reports Server (NTRS)

    Mahalingam, Sudhakar; Menart, James A.

    2005-01-01

    Computational modeling of the plasma located in the discharge chamber of an ion engine is an important activity so that the development and design of the next generation of ion engines may be enhanced. In this work a computational tool called XOOPIC is used to model the primary electrons, secondary electrons, and ions inside the discharge chamber. The details of this computational tool are discussed in this paper. Preliminary results from XOOPIC are presented, including particle number density distributions for the primary electrons, the secondary electrons, and the ions. In addition, the total number of each particle species in the discharge chamber as a function of time, along with electric potential maps and magnetic field maps, is presented. A primary electron number density plot from PRIMA is given in this paper so that the results of XOOPIC can be compared to it. PRIMA, a computer code that the present investigators have used in much of their previous work, provides results that compare well with experimental results; however, it only models the primary electrons in the discharge chamber. Modeling the ions and secondary electrons, as well as the primary electrons, will greatly increase our ability to predict different characteristics of the plasma discharge used in an ion engine.

  18. Evidence of effectiveness of health care professionals using handheld computers: a scoping review of systematic reviews.

    PubMed

    Mickan, Sharon; Tilson, Julie K; Atherton, Helen; Roberts, Nia Wyn; Heneghan, Carl

    2013-10-28

    Handheld computers and mobile devices provide instant access to vast amounts and types of useful information for health care professionals. Their reduced size and increased processing speed have led to rapid adoption in health care. Thus, it is important to identify whether handheld computers are actually effective in clinical practice. A scoping review of systematic reviews was designed to provide a quick overview of the documented evidence of effectiveness for health care professionals using handheld computers in their clinical work. A detailed search, sensitive for systematic reviews, was applied to the Cochrane, Medline, EMBASE, PsycINFO, Allied and Complementary Medicine Database (AMED), Global Health, and Cumulative Index to Nursing and Allied Health Literature (CINAHL) databases. All outcomes that demonstrated effectiveness in clinical practice were included. Classroom learning and patient use of handheld computers were excluded. Quality was assessed using the Assessment of Multiple Systematic Reviews (AMSTAR) tool. A previously published conceptual framework was used as the basis for dual data extraction. Reported outcomes were summarized according to the primary function of the handheld computer. Five systematic reviews met the inclusion and quality criteria. Together, they reviewed 138 unique primary studies. Most reviewed descriptive intervention studies, where physicians, pharmacists, or medical students used personal digital assistants. Effectiveness was demonstrated across four distinct functions of handheld computers: patient documentation, patient care, information seeking, and professional work patterns. Within each of these functions, a range of positive outcomes was reported using both objective and self-report measures. The use of handheld computers improved patient documentation through more complete recording, fewer documentation errors, and increased efficiency. Handheld computers provided easy access to clinical decision support systems and patient management systems, which improved decision making for patient care. Handheld computers saved time and gave earlier access to new information. There were also reports that handheld computers enhanced work patterns and efficiency. This scoping review summarizes the secondary evidence for effectiveness of handheld computers and mHealth. It provides a snapshot of effective use by health care professionals across four key functions. We identified evidence to suggest that handheld computers provide easy and timely access to information and enable accurate and complete documentation. Further, they can give health care professionals instant access to evidence-based decision support and patient management systems to improve clinical decision making. Finally, there is evidence that handheld computers allow health professionals to be more efficient in their work practices. It is anticipated that this evidence will guide clinicians and managers in implementing handheld computers in clinical practice and in designing future research.

  19. A FAST ITERATIVE METHOD FOR SOLVING THE EIKONAL EQUATION ON TETRAHEDRAL DOMAINS

    PubMed Central

    Fu, Zhisong; Kirby, Robert M.; Whitaker, Ross T.

    2014-01-01

    Generating numerical solutions to the eikonal equation and its many variations has a broad range of applications in both the natural and computational sciences. Efficient solvers on cutting-edge, parallel architectures require new algorithms that may not be theoretically optimal, but that are designed to allow asynchronous solution updates and have limited memory access patterns. This paper presents a parallel algorithm for solving the eikonal equation on fully unstructured tetrahedral meshes. The method is appropriate for the type of fine-grained parallelism found on modern massively-SIMD architectures such as graphics processors and takes into account the particular constraints and capabilities of these computing platforms. This work builds on previous work for solving these equations on triangle meshes; in this paper we adapt and extend previous two-dimensional strategies to accommodate three-dimensional, unstructured, tetrahedralized domains. These new developments include a local update strategy with data compaction for tetrahedral meshes that provides solutions on both serial and parallel architectures, with a generalization to inhomogeneous, anisotropic speed functions. We also propose two new update schemes, specialized to mitigate the natural data increase observed when moving to three dimensions, and the data structures necessary for efficiently mapping data to parallel SIMD processors in a way that maintains computational density. Finally, we present descriptions of the implementations for a single CPU, as well as multicore CPUs with shared memory and SIMD architectures, with comparative results against state-of-the-art eikonal solvers. PMID:25221418
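
    The active-list mechanism at the heart of the fast iterative method can be sketched on a regular 2-D grid; the paper itself operates on unstructured tetrahedral meshes with a per-element local solver, so the Godunov-type grid update below is a simplified stand-in:

```python
import numpy as np

def neighbours(p, ny, nx):
    i, j = p
    return [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= i + di < ny and 0 <= j + dj < nx]

def fim_eikonal(speed, sources, h=1.0, tol=1e-9):
    """Fast Iterative Method for |grad u| = 1/speed on a regular 2-D grid.
    Active-list nodes are updated in parallel-friendly sweeps; converged
    nodes retire and wake any neighbour whose value they can improve."""
    ny, nx = speed.shape
    u = np.full((ny, nx), np.inf)
    for s in sources:
        u[s] = 0.0

    def solve(i, j):
        # Godunov upwind local solver (border nodes clamp to themselves).
        a = min(u[max(i - 1, 0), j], u[min(i + 1, ny - 1), j])
        b = min(u[i, max(j - 1, 0)], u[i, min(j + 1, nx - 1)])
        d = h / speed[i, j]
        if abs(a - b) >= d:
            return min(a, b) + d
        return 0.5 * (a + b + np.sqrt(2.0 * d * d - (a - b) ** 2))

    active = {n for s in sources for n in neighbours(s, ny, nx)}
    while active:
        nxt = set()
        for (i, j) in active:
            old = u[i, j]
            u[i, j] = min(old, solve(i, j))
            if old - u[i, j] > tol:
                nxt.add((i, j))                 # still changing: stay active
            else:
                for n in neighbours((i, j), ny, nx):
                    if u[n] > solve(*n) + tol:  # retiring node wakes neighbour
                        nxt.add(n)
        active = nxt
    return u

# u = fim_eikonal(np.ones((64, 64)), sources=[(32, 32)])
```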

  20. Hierarchical combinatorial deep learning architecture for pancreas segmentation of medical computed tomography cancer images.

    PubMed

    Fu, Min; Wu, Wenming; Hong, Xiafei; Liu, Qiuhua; Jiang, Jialin; Ou, Yaobin; Zhao, Yupei; Gong, Xinqi

    2018-04-24

    Efficient computational recognition and segmentation of the target organ from medical images are foundational in diagnosis and treatment, especially for pancreatic cancer. In practice, the diversity in appearance of the pancreas and other abdominal organs makes the detailed texture information of objects important for a segmentation algorithm. According to our observations, however, the structures of previous networks, such as the Richer Feature Convolutional Network (RCF), are too coarse to segment the object (pancreas) accurately, especially at the edges. In this paper, we extend the RCF, originally proposed for edge detection, to the challenging task of pancreas segmentation, and put forward a novel pancreas segmentation network. By employing a multi-layer up-sampling structure in place of the simple up-sampling operation in all stages, the proposed network fully exploits the multi-scale detailed texture information of the object (pancreas) to perform per-pixel segmentation. Additionally, we train our network on CT scans, yielding an effective pipeline. With the multi-layer up-sampling model, our pipeline achieves better performance than RCF in the task of single-object (pancreas) segmentation. Combined with multi-scale input, it achieves a DSC (Dice Similarity Coefficient) value of 76.36% on the testing data. The results of our experiments show that our model works better than previous networks on our dataset; in other words, it is better at capturing detailed texture information. Therefore, our new single-object segmentation model has practical value for computational automatic diagnosis.

  1. Modeling choice and reaction time during arbitrary visuomotor learning through the coordination of adaptive working memory and reinforcement learning

    PubMed Central

    Viejo, Guillaume; Khamassi, Mehdi; Brovelli, Andrea; Girard, Benoît

    2015-01-01

    Current learning theory provides a comprehensive description of how humans and other animals learn, and places behavioral flexibility and automaticity at the heart of adaptive behaviors. However, the computations supporting the interactions between goal-directed and habitual decision-making systems are still poorly understood. Previous functional magnetic resonance imaging (fMRI) results suggest that the brain hosts complementary computations that may differentially support goal-directed and habitual processes in the form of a dynamical interplay rather than a serial recruitment of strategies. To better elucidate the computations underlying flexible behavior, we develop a dual-system computational model that can predict both performance (i.e., participants' choices) and modulations in reaction times during learning of a stimulus–response association task. The habitual system is modeled with a simple Q-Learning algorithm (QL). For the goal-directed system, we propose a new Bayesian Working Memory (BWM) model that searches for information in the history of previous trials in order to minimize Shannon entropy. We propose a model for QL and BWM coordination such that the expensive memory manipulation is under the control of, among other factors, the level of convergence of the habitual learning. We test the ability of QL or BWM alone to explain human behavior, and compare them with the performance of model combinations, to highlight the need for such combinations to explain behavior. Two of the tested combination models are derived from the literature, while the third is our new proposal. In conclusion, all subjects were better explained by model combinations, and the majority of them are explained by our new coordination proposal. PMID:26379518
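
    The habitual component of the model is standard tabular Q-learning; a minimal sketch of that piece follows (the Bayesian Working Memory model and the coordination mechanism, which are the paper's actual contributions, are not reproduced here, and the softmax choice rule is a common assumption for linking values to behaviour):

```python
import numpy as np

def q_learning_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One habitual-system update on a tabular value function
    Q[state, action] (the QL component described above)."""
    td_error = r + gamma * np.max(Q[s_next]) - Q[s, a]
    Q[s, a] += alpha * td_error
    return Q

def softmax_choice(q_row, beta=3.0, rng=None):
    """Map action values to a stochastic choice; the inverse
    temperature beta controls the explore/exploit balance."""
    if rng is None:
        rng = np.random.default_rng()
    p = np.exp(beta * (q_row - q_row.max()))
    p /= p.sum()
    return rng.choice(len(q_row), p=p)
```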

  2. Further Examination of a Simplified Model for Positronium-Helium Scattering

    NASA Technical Reports Server (NTRS)

    DiRienzi, J.; Drachman, Richard J.

    2012-01-01

    While carrying out investigations on Ps-He scattering we realized that it would be possible to improve the results of a previous work on zero-energy scattering of ortho-positronium by helium atoms. The previous work used a model to account for exchange and also attempted to include the effect of short-range Coulomb interactions in the close-coupling approximation. The 3 terms that were then included did not produce a well-converged result but served to give some justification to the model. Now we improve the calculation by using a simple variational wave function, and derive a much better value of the scattering length. The new result is compared with other computed values, and when an approximate correction due to the van der Waals potential is included the total is consistent with an earlier conjecture.

  3. This Is Your Future: A Case Study Approach to Foster Health Literacy

    ERIC Educational Resources Information Center

    Brey, Rebecca A.; Clark, Susan E.; Wantz, Molly S.

    2008-01-01

    Today's young people seem to live in an even more fast-paced society than previous generations. As in the past, they are involved in sports, music, school, church, and work, and are exposed to many forms of mass media that add to their base of information. However, they also have instant access to computer-generated information such as the Internet,…

  4. Digital multishaker modal testing

    NASA Technical Reports Server (NTRS)

    Blair, M.; Craig, R. R., Jr.

    1983-01-01

    A review of several modal testing techniques is made, along with brief discussions of their advantages and limitations. A new technique is presented which overcomes many of the previous limitations. Several simulated experiments are included to verify the validity and accuracy of the new method. Conclusions are drawn from the simulation studies and recommendations for further work are presented. The complete computer code configured for the simulation study is presented.

  5. Approaching a parameter-free metadynamics.

    PubMed

    Dickson, Bradley M

    2011-09-01

    We present a unique derivation of metadynamics. This work leads to a more robust understanding of the error in the computed free energy than what has been obtained previously. Moreover, a formula for the exact free energy is introduced. The formula can be used to post-process any existing well-tempered metadynamics data, allowing one, in principle, to obtain an exact free energy regardless of the metadynamics parameters.

  6. Approaching a parameter-free metadynamics

    NASA Astrophysics Data System (ADS)

    Dickson, Bradley M.

    2011-09-01

    We present a unique derivation of metadynamics. This work leads to a more robust understanding of the error in the computed free energy than what has been obtained previously. Moreover, a formula for the exact free energy is introduced. The formula can be used to post-process any existing well-tempered metadynamics data, allowing one, in principle, to obtain an exact free energy regardless of the metadynamics parameters.
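
    For context, the well-tempered bias that such post-processing would start from is accumulated as follows in the conventional scheme (a generic sketch with hypothetical grid, width, and bias-temperature parameters; the paper's exact free-energy formula is not reproduced):

```python
import numpy as np

def well_tempered_bias(cv_traj, grid, w0=1.0, sigma=0.1, dT=10.0):
    """Accumulate a well-tempered metadynamics bias V(s) on a 1-D
    collective-variable grid; Gaussian heights decay with the bias
    already deposited at each new centre."""
    V = np.zeros_like(grid)
    for s in cv_traj:
        height = w0 * np.exp(-np.interp(s, grid, V) / dT)
        V += height * np.exp(-0.5 * ((grid - s) / sigma) ** 2)
    return V

# Conventional (asymptotic) free-energy estimate, in energy units of kT,
# which the exact formula in the paper is designed to improve upon:
#   F(s) ~ -(dT + kT) / dT * V(s)
```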

  7. 1-1 in Education: Current Practice, International Comparative Research Evidence and Policy Implications. OECD Education Working Papers, No. 44,

    ERIC Educational Resources Information Center

    Valiente, Oscar

    2010-01-01

    Over the last decade, more and more public and private stakeholders, in developed and developing countries, have been supporting 1:1 initiatives in education (i.e. every child receives her/his own personal computing device). These 1:1 initiatives represent a qualitative move forward from previous educational experiences with ICT, inasmuch as every…

  8. FOS: A Factored Operating Systems for High Assurance and Scalability on Multicores

    DTIC Science & Technology

    2012-08-01

    computing. It builds on previous work in distributed and microkernel OSes by factoring services out of the kernel, and then further distributing each service into a set of cooperating servers. We term such a service a fleet. Figure 2 shows the high-level architecture of fos. A small microkernel runs on every core.

  9. College students and computers: assessment of usage patterns and musculoskeletal discomfort.

    PubMed

    Noack-Cooper, Karen L; Sommerich, Carolyn M; Mirka, Gary A

    2009-01-01

    A limited number of studies have focused on computer-use-related MSDs in college students, though risk factor exposure may be similar to that of workers who use computers. This study examined computer use patterns of college students and made comparisons to a group of previously studied computer-using professionals. 234 students completed a web-based questionnaire concerning computer use habits and physical discomfort that respondents specifically associated with computer use. As a group, students reported their computer use to be at least 'Somewhat likely' 18 out of 24 h/day, compared to 12 h for the professionals. Students reported more uninterrupted work behaviours than the professionals. Younger graduate students reported 33.7 average weekly computing hours, similar to hours reported by younger professionals. Students generally reported more frequent upper extremity discomfort than the professionals. Frequent assumption of awkward postures was associated with frequent discomfort. The findings signal a need for intervention, including training and education, prior to entry into the workforce. Students are future workers, so it is important to determine whether their increasing exposure to computers before entering the workforce means they may enter already injured or may not enter their chosen profession at all due to upper extremity MSDs.

  10. Embodied conversational agents for multimodal automated social skills training in people with autism spectrum disorders.

    PubMed

    Tanaka, Hiroki; Negoro, Hideki; Iwasaka, Hidemi; Nakamura, Satoshi

    2017-01-01

    Social skills training, performed by human trainers, is a well-established method for obtaining appropriate skills in social interaction. Previous work automated the process of social skills training by developing a dialogue system that teaches social communication skills through interaction with a computer avatar. Even though previous work that simulated social skills training only considered acoustic and linguistic information, human social skills trainers take into account visual and other non-verbal features. In this paper, we create and evaluate a social skills training system that closes this gap by considering the audiovisual features of the smiling ratio and the head pose (yaw and pitch). In addition, the previous system was only tested with graduate students; in this paper, we applied our system to children or young adults with autism spectrum disorders. For our experimental evaluation, we recruited 18 members from the general population and 10 people with autism spectrum disorders and gave them our proposed multimodal system to use. An experienced human social skills trainer rated the social skills of the users. We evaluated the system's effectiveness by comparing pre- and post-training scores and identified significant improvement in their social skills using our proposed multimodal system. Computer-based social skills training is useful for people who experience social difficulties. Such a system can be used by teachers, therapists, and social skills trainers for rehabilitation and as a supplement to human-based training, anywhere and anytime.

  11. Embodied conversational agents for multimodal automated social skills training in people with autism spectrum disorders

    PubMed Central

    Negoro, Hideki; Iwasaka, Hidemi; Nakamura, Satoshi

    2017-01-01

    Social skills training, performed by human trainers, is a well-established method for obtaining appropriate skills in social interaction. Previous work automated the process of social skills training by developing a dialogue system that teaches social communication skills through interaction with a computer avatar. Even though previous work that simulated social skills training only considered acoustic and linguistic information, human social skills trainers take into account visual and other non-verbal features. In this paper, we create and evaluate a social skills training system that closes this gap by considering the audiovisual features of the smiling ratio and the head pose (yaw and pitch). In addition, the previous system was only tested with graduate students; in this paper, we applied our system to children or young adults with autism spectrum disorders. For our experimental evaluation, we recruited 18 members from the general population and 10 people with autism spectrum disorders and gave them our proposed multimodal system to use. An experienced human social skills trainer rated the social skills of the users. We evaluated the system’s effectiveness by comparing pre- and post-training scores and identified significant improvement in their social skills using our proposed multimodal system. Computer-based social skills training is useful for people who experience social difficulties. Such a system can be used by teachers, therapists, and social skills trainers for rehabilitation and as a supplement to human-based training, anywhere and anytime. PMID:28796781

  12. Medication safety and knowledge-based functions: a stepwise approach against information overload.

    PubMed

    Patapovas, Andrius; Dormann, Harald; Sedlmayr, Brita; Kirchner, Melanie; Sonst, Anja; Müller, Fabian; Pfistermeister, Barbara; Plank-Kiegele, Bettina; Vogler, Renate; Maas, Renke; Criegee-Rieck, Manfred; Prokosch, Hans-Ulrich; Bürkle, Thomas

    2013-09-01

    The aim was to improve medication safety in an emergency department (ED) by enhancing the integration and presentation of safety information for drug therapy. Based on an evaluation of drug therapy safety issues in the ED and a review of computer-assisted intervention technologies, we redesigned an electronic case sheet and implemented computer-assisted interventions into the routine work flow. We devised a four-step system of alerts and facilitated access to different levels of drug information. System use was analyzed over a period of 6 months. In addition, physicians answered a survey based on the technology acceptance model TAM2. The new application was implemented in an informal manner to avoid work flow disruption. Log files demonstrated that step I, 'valid indication', was utilized for 3% of the recorded drugs and step II, 'tooltip for well-known drug risks', for 48% of the drugs. In the questionnaire, the computer-assisted interventions were rated better than previous paper-based measures (checklists, posters) with regard to usefulness, support of work, and information quality. A stepwise assisting intervention received positive user acceptance. Some intervention steps were seldom used, others quite often. We think that we were able to avoid over-alerting and work flow intrusion in a critical ED environment. © 2013 The Authors. British Journal of Clinical Pharmacology © 2013 The British Pharmacological Society.

  13. 48 CFR 252.227-7028 - Technical data or computer software previously delivered to the government.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... software previously delivered to the government. 252.227-7028 Section 252.227-7028 Federal Acquisition... computer software previously delivered to the government. As prescribed in 227.7103-6(d), 227.7104(f)(2), or 227.7203-6(e), use the following provision: Technical Data or Computer Software Previously...

  14. 48 CFR 252.227-7028 - Technical data or computer software previously delivered to the government.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... software previously delivered to the government. 252.227-7028 Section 252.227-7028 Federal Acquisition... computer software previously delivered to the government. As prescribed in 227.7103-6(d), 227.7104(f)(2), or 227.7203-6(e), use the following provision: Technical Data or Computer Software Previously...

  15. 48 CFR 252.227-7028 - Technical data or computer software previously delivered to the government.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... software previously delivered to the government. 252.227-7028 Section 252.227-7028 Federal Acquisition... computer software previously delivered to the government. As prescribed in 227.7103-6(d), 227.7104(f)(2), or 227.7203-6(e), use the following provision: Technical Data or Computer Software Previously...

  16. 48 CFR 252.227-7028 - Technical data or computer software previously delivered to the government.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... software previously delivered to the government. 252.227-7028 Section 252.227-7028 Federal Acquisition... computer software previously delivered to the government. As prescribed in 227.7103-6(d), 227.7104(f)(2), or 227.7203-6(e), use the following provision: Technical Data or Computer Software Previously...

  17. 48 CFR 252.227-7028 - Technical data or computer software previously delivered to the government.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... software previously delivered to the government. 252.227-7028 Section 252.227-7028 Federal Acquisition... computer software previously delivered to the government. As prescribed in 227.7103-6(d), 227.7104(f)(2), or 227.7203-6(e), use the following provision: Technical Data or Computer Software Previously...

  18. Real-time operation without a real-time operating system for instrument control and data acquisition

    NASA Astrophysics Data System (ADS)

    Klein, Randolf; Poglitsch, Albrecht; Fumi, Fabio; Geis, Norbert; Hamidouche, Murad; Hoenle, Rainer; Looney, Leslie; Raab, Walfried; Viehhauser, Werner

    2004-09-01

    We are building the Field-Imaging Far-Infrared Line Spectrometer (FIFI LS) for the US-German airborne observatory SOFIA. The detector read-out system is driven by a clock signal at a certain frequency. This signal has to be provided, and all other sub-systems have to work synchronously to this clock. The data generated by the instrument have to be received by a computer in a timely manner. Usually these requirements are met with a real-time operating system (RTOS). In this presentation we want to show how we meet these demands differently, avoiding the stiffness of an RTOS. Digital I/O-cards with a large buffer separate the asynchronously working computers from the synchronously working instrument. The advantage is that the data-processing computers do not need to process the data in real time; it is sufficient that a computer can process the incoming data stream on average. But since the data is read in synchronously, the problem of relating commands to responses (data) has to be solved: the data arrives at a fixed rate, and the receiving I/O-card holds it in its buffer until the computer can access it. To relate the data to commands sent previously, the data is tagged by counters in the read-out electronics. These counters count the system's heartbeat and signals derived from it. The heartbeat, and control signals synchronous with it, are sent by an I/O-card working as a pattern generator. Its buffer is continuously programmed with a pattern which is clocked out on the control lines. A counter in the I/O-card keeps track of the number of pattern words clocked out. By reading this counter, the computer knows the state of the instrument and the meaning of the data that will arrive with a certain time-tag.

  19. Study of viscous flow about airfoils by the integro-differential method

    NASA Technical Reports Server (NTRS)

    Wu, J. C.; Sampath, S.

    1975-01-01

    An integro-differential method was used for numerically solving unsteady incompressible viscous flow problems. A computer program was prepared to solve the problem of an impulsively started 9% thick symmetric Joukowski airfoil at an angle of attack of 15 deg and a Reynolds number of 1000. Some of the results obtained for this problem were discussed and compared with related work completed previously. Two numerical procedures were used, an Alternating Direction Implicit (ADI) method and a Successive Line Relaxation (SLR) method. Generally, the ADI solution agrees well with the SLR solution and with previous results at stations away from the trailing edge. At the trailing edge station, the ADI solution differs substantially from previous results, while the vorticity profiles obtained from the SLR method there are in good qualitative agreement with previous results.

  20. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2013-01-01

    Objective: To develop a software application utilizing general purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general purpose fashion is allowing for supercomputer level results at individual workstations. As data sets grow, the methods used to work with them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques.
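
    The data-parallel pattern described here is easy to make concrete: a per-pixel statistic over a stack of thermographic frames depends only on that pixel's own time history, so each pixel can map onto its own GPU thread. In this hypothetical sketch, numpy stands in for an actual GPU kernel:

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.random((256, 240, 320), dtype=np.float32)  # time x rows x cols

# Each output pixel depends only on its own column along the time axis,
# so the reduction is embarrassingly parallel across the image plane --
# exactly the structure that maps well onto GPU threads.
peak_minus_mean = frames.max(axis=0) - frames.mean(axis=0)
print(peak_minus_mean.shape)  # (240, 320)
```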

  1. A Computational Intelligence (CI) Approach to the Precision Mars Lander Problem

    NASA Technical Reports Server (NTRS)

    Birge, Brian; Walberg, Gerald

    2002-01-01

    A Mars precision landing requires a landed footprint of no more than 100 meters. Obstacles to reducing the landed footprint include trajectory dispersions due to initial atmospheric entry conditions such as entry angle and parachute deployment height, environment parameters such as wind and atmospheric density, parachute deployment dynamics, and unavoidable injection error or propagated error from launch. Computational Intelligence (CI) techniques such as Artificial Neural Nets and Particle Swarm Optimization have shown great success with other control problems. The research period extended previous work investigating the applicability of computational intelligence approaches, focusing on Particle Swarm Optimization and basic Neural Net architectures. The research was performed during the grant cycle from 5/15/01 to 5/15/02. Matlab 5.1 and 6.0, along with NASA's POST, were the primary computational tools.
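
    A minimal particle swarm optimiser of the kind investigated might look as follows (illustrative only; the study coupled such techniques to trajectory simulations in POST, which is not reproduced, and all parameter values here are generic defaults):

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        rng=np.random.default_rng(0)):
    """Minimise f over box bounds with a basic particle swarm:
    each particle is pulled toward its personal best and the
    swarm-wide best position."""
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Example: minimise a 2-D sphere function.
best_x, best_f = pso(lambda p: np.sum(p ** 2), ([-5, -5], [5, 5]))
print(best_x, best_f)
```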

  2. A Parametric Computational Analysis into Galvanic Coupling Intrabody Communication.

    PubMed

    Callejon, M Amparo; Del Campo, P; Reina-Tosina, Javier; Roa, Laura M

    2017-08-02

    Intrabody Communication (IBC) uses human body tissues as the transmission medium for electrical signals to interconnect personal health devices in wireless body area networks. The main goal of this work is to conduct a computational analysis covering some bioelectric issues that have not yet been fully explained, such as the modeling of the skin-electrode impedance, the differences associated with the use of constant-voltage or constant-current excitation modes, or the influence of the subject's anthropometric and bioelectric properties on attenuation. With this aim, a computational finite element model has been developed, allowing the IBC channel attenuation, as well as the electric field and current density through arm tissues, to be computed as a function of these parameters. In conclusion, this parametric analysis has in turn shed light on the causes and effects of the above-mentioned issues, explaining and complementing previous results reported in the literature.

  3. Computational Investigation of Amine–Oxygen Exciplex Formation

    PubMed Central

    Haupert, Levi M.; Simpson, Garth J.; Slipchenko, Lyudmila V.

    2012-01-01

    It has been suggested that fluorescence from amine-containing dendrimer compounds could be the result of a charge transfer between amine groups and molecular oxygen [Chu, C.-C.; Imae, T. Macromol. Rapid Commun. 2009, 30, 89.]. In this paper we employ equation-of-motion coupled cluster computational methods to study the electronic structure of an ammonia–oxygen model complex to examine this possibility. The results reveal several bound electronic states with charge transfer character with emission energies generally consistent with previous observations. However, further work involving confinement, solvent, and amine structure effects will be necessary for more rigorous examination of the charge transfer fluorescence hypothesis. PMID:21812447

  4. Issues and recommendations associated with distributed computation and data management systems for the space sciences

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The primary purpose of the report is to explore management approaches and technology developments for computation and data management systems designed to meet future needs in the space sciences. The report builds on work presented in previous solar-terrestrial and planetary reports, broadening the outlook to all of the space sciences and considering policy aspects related to coordination between data centers, missions, and ongoing research activities, because it is perceived that the rapid growth of data and the wide geographic distribution of relevant facilities will present especially troublesome problems for data archiving, distribution, and analysis.

  5. Non-linear molecular pattern classification using molecular beacons with multiple targets.

    PubMed

    Lee, In-Hee; Lee, Seung Hwan; Park, Tai Hyun; Zhang, Byoung-Tak

    2013-12-01

    In vitro pattern classification has been highlighted as an important future application of DNA computing. Previous work has demonstrated the feasibility of linear classifiers using DNA-based molecular computing. However, complex tasks require non-linear classification capability. Here we design a molecular beacon that can interact with multiple targets and experimentally show that its fluorescent signals form a complex radial-basis function, enabling it to be used as a building block for non-linear molecular classification in vitro. The proposed method was successfully applied to solving artificial and real-world classification problems: XOR and microRNA expression patterns. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
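
    The computational principle can be sketched in a few lines: each multi-target beacon contributes a radial-basis-function response centred on a target pattern, and the summed fluorescence yields a non-linear decision boundary such as XOR. The centres and width below are hypothetical choices for illustration, not the paper's measured responses:

```python
import numpy as np

def beacon_signal(x, centre, width=0.5):
    """Fluorescence of one multi-target beacon modelled as a radial
    basis function of the input concentration pattern x."""
    return np.exp(-np.sum((x - centre) ** 2) / (2 * width ** 2))

def xor_classifier(x):
    """Non-linear (XOR-like) decision from two beacons centred on the
    'positive' patterns (0, 1) and (1, 0)."""
    score = beacon_signal(x, np.array([0.0, 1.0])) \
          + beacon_signal(x, np.array([1.0, 0.0]))
    return score > 0.5

for pattern in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pattern, xor_classifier(np.array(pattern, float)))
# -> False, True, True, False
```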

  6. Comparison of nonmesonic hypernuclear decay rates computed in laboratory and center-of-mass coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Conti, C.; Barbero, C.; Galeão, A. P.

    In this work we compute the one-nucleon-induced nonmesonic hypernuclear decay rates of $^{5}_{\Lambda}$He, $^{12}_{\Lambda}$C and $^{13}_{\Lambda}$C using a formalism based on the independent particle shell model in terms of laboratory coordinates. To ascertain the correctness and precision of the method, these results are compared with those obtained using a formalism in terms of center-of-mass coordinates, which has been previously reported in the literature. The formalism in terms of laboratory coordinates will be useful in the shell-model approach to two-nucleon-induced transitions.

  7. Computational modeling and experimental studies on NO{sub x} reduction under pulverized coal combustion conditions. Seventh quarterly technical progress report, July 1, 1996--September 30, 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumpaty, S.K.; Subramanian, K.; Nokku, V.P.

    1996-12-31

    During this quarter (July-August 1996), the experiments for nitric oxide reburning with a combination of methane and ammonia were conducted successfully. This marked the completion of gaseous phase experiments. Preparations are underway for the reburning studies with coal. A coal feeder was designed to suit our reactor facility which is being built by MK Fabrication. The coal feeder should be operational in the coming quarter. Presented here are the experimental results of NO reburning with methane/ammonia. The results are consistent with the computational work submitted in previous reports.

  8. Efficient calculation of general Voigt profiles

    NASA Astrophysics Data System (ADS)

    Cope, D.; Khoury, R.; Lovett, R. J.

    1988-02-01

    An accurate and efficient program is presented for the computation of OIL profiles, generalizations of the Voigt profile resulting from the one-interacting-level model of Ward et al. (1974). These profiles have speed-dependent shift and width functions and asymmetric shapes. The program contains an adjustable error-control parameter and includes the Voigt profile as a special case, although the general nature of this program renders it slower than a specialized Voigt-profile method. Results on accuracy and computation time are presented for a broad set of test parameters, and a comparison is made with previous work on the asymptotic behavior of general Voigt profiles.
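
    For reference, the ordinary Voigt profile contained as a special case is commonly evaluated via the Faddeeva function; the speed-dependent, asymmetric OIL generalisations are not reproduced in this sketch:

```python
import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    """Voigt profile: a Gaussian (std sigma) convolved with a
    Lorentzian (HWHM gamma), via V(x) = Re[w(z)] / (sigma*sqrt(2*pi))
    with z = (x + i*gamma) / (sigma*sqrt(2))."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-5.0, 5.0, 11)
print(voigt(x, sigma=1.0, gamma=0.5))
```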

  9. X-Ray Computed Tomography of Tranquility Base Moon Rock

    NASA Technical Reports Server (NTRS)

    Jones, Justin S.; Garvin, Jim; Viens, Mike; Kent, Ryan; Munoz, Bruno

    2016-01-01

    X-ray Computed Tomography (CT) was used for the first time on the Apollo 11 Lunar Sample number 10057.30, which had been previously maintained by the White House, then transferred back to NASA under the care of Goddard Space Flight Center. Results from this analysis show detailed images of the internal structure of the moon rock, including vesicles (pores), crystal needles, and crystal bundles. These crystals, possibly the common mineral ilmenite, are found in abundance and with random orientation. Future work, in particular a greater understanding of these crystals and their formation, may lead to a more in-depth understanding of the lunar surface evolution and mineral content.

  10. Bayesian evidence computation for model selection in non-linear geoacoustic inference problems.

    PubMed

    Dettmer, Jan; Dosso, Stan E; Osler, John C

    2010-12-01

    This paper applies a general Bayesian inference approach, based on Bayesian evidence computation, to geoacoustic inversion of interface-wave dispersion data. Quantitative model selection is carried out by computing the evidence (normalizing constants) for several model parameterizations using annealed importance sampling. The resulting posterior probability density estimate is compared to estimates obtained from Metropolis-Hastings sampling to ensure consistent results. The approach is applied to invert interface-wave dispersion data collected on the Scotian Shelf, off the east coast of Canada for the sediment shear-wave velocity profile. Results are consistent with previous work on these data but extend the analysis to a rigorous approach including model selection and uncertainty analysis. The results are also consistent with core samples and seismic reflection measurements carried out in the area.
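
    The evidence computation at the core of this approach can be sketched generically: annealed importance sampling moves prior samples through a temperature ladder toward the posterior, accumulating importance weights whose average estimates the normalizing constant. The likelihood, prior sampler, and MCMC kernel below are user-supplied stand-ins, not the paper's geoacoustic model:

```python
import numpy as np

def ais_log_evidence(log_like, sample_prior, n_chains=64, betas=None,
                     mcmc_step=None, rng=np.random.default_rng(1)):
    """Annealed importance sampling estimate of log Z for one model
    parameterisation, with target density prior(theta) * like(theta)^beta
    along the ladder beta: 0 -> 1."""
    if betas is None:
        betas = np.linspace(0.0, 1.0, 50) ** 3   # anneal prior -> posterior
    theta = np.array([sample_prior(rng) for _ in range(n_chains)])
    logw = np.zeros(n_chains)
    for b0, b1 in zip(betas[:-1], betas[1:]):
        ll = np.array([log_like(t) for t in theta])
        logw += (b1 - b0) * ll                    # incremental weight
        if mcmc_step is not None:                 # rejuvenate at temperature b1
            theta = np.array([mcmc_step(t, b1, rng) for t in theta])
    m = logw.max()
    return m + np.log(np.mean(np.exp(logw - m))) # log-mean-exp of weights
```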

  11. Frequency and associated risk factors for neck pain among software engineers in Karachi, Pakistan.

    PubMed

    Rasim Ul Hasanat, Mohammad; Ali, Syed Shahzad; Rasheed, Abdur; Khan, Muhammad

    2017-07-01

    To determine the frequency of neck pain and its association with risk factors among software engineers. This descriptive, cross-sectional study was conducted at the Dow University of Health Sciences, Karachi, from February to March 2016, and comprised software engineers from 19 different locations. A non-probability purposive sampling technique was used to select individuals spending at least 6 hours in front of computer screens every day and having a work experience of at least 6 months. Data were collected using a self-administrable questionnaire. SPSS 21 was used for data analysis. Of the 185 participants, 49 (26.5%) had neck pain at the time of data gathering, while 136 (73.5%) reported no pain. However, 119 (64.32%) participants had a previous history of neck pain. Other factors, such as smoking, physical inactivity, a history of muscular or neck pain, an uncomfortable workstation, work-related mental stress, and insufficient sleep at night, were found to be significantly associated with current neck pain (p<0.05 each). Intensive computer users are likely to experience at least one episode of computer-associated neck pain.

  12. Dynamic steady-state analysis of crack propagation in rubber-like solids using an extended finite element method

    NASA Astrophysics Data System (ADS)

    Kroon, Martin

    2012-01-01

    In the present study, a computational framework for studying high-speed crack growth in rubber-like solids under conditions of plane stress and steady-state is proposed. Effects of inertia, viscoelasticity and finite strains are included. The main purpose of the study is to examine the contribution of viscoelastic dissipation to the total work of fracture required to propagate a crack in a rubber-like solid. The computational framework builds upon a previous work by the present author (Kroon in Int J Fract 169:49-60, 2011). The model was fully able to predict experimental results in terms of the local surface energy at the crack tip and the total energy release rate at different crack speeds. The predicted distributions of stress and dissipation around the propagating crack tip are presented. The predicted crack tip profiles also agree qualitatively with experimental findings.

  13. Nanotoxicity prediction using computational modelling - review and future directions

    NASA Astrophysics Data System (ADS)

    Saini, Bhavna; Srivastava, Sumit

    2018-04-01

    Nanomaterials have stimulated new prospects for the future in a number of industries and scientific ventures. Applications such as cosmetics, medicines, and electronics employ nanomaterials because of their compelling properties. The growing use of nanomaterials in daily life has escalated health and environmental risks, and early recognition of nanotoxicity remains a major challenge. Research in the field of nanotoxicity faces several problems, such as the inadequacy of proper datasets and the lack of appropriate rules for the characterization of nanomaterials. Computational modelling would be a beneficial asset for nanomaterials researchers because it can predict toxicity based on previous experimental data. In this study, we review work that demonstrates a proper pathway for QSAR analysis of nanomaterials for toxicity modelling. The paper aims to provide comprehensive insight into nano-QSAR and the theories, tools, and approaches used, along with an outline of future research directions.

  14. Computational Modeling of a Mechanized Benchtop Apparatus for Leading-Edge Slat Noise Treatment Device Prototypes

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.; Moore, James B.; Long, David L.

    2017-01-01

    Airframe noise is a growing concern in the vicinity of airports because of population growth and gains in engine noise reduction that have rendered the airframe an equal contributor during the approach and landing phases of flight for many transport aircraft. The leading-edge-slat device of a typical high-lift system for transport aircraft is a prominent source of airframe noise. Two technologies have significant potential for slat noise reduction; the slat-cove filler (SCF) and the slat-gap filler (SGF). Previous work was done on a 2D section of a transport-aircraft wing to demonstrate the implementation feasibility of these concepts. Benchtop hardware was developed in that work for qualitative parametric study. The benchtop models were mechanized for quantitative measurements of performance. Computational models of the mechanized benchtop apparatus for the SCF were developed and the performance of the system for five different SCF assemblies is demonstrated.

  15. Occupation and thyroid cancer risk in Sweden.

    PubMed

    Lope, Virginia; Pollán, Marina; Gustavsson, Per; Plato, Nils; Pérez-Gómez, Beatriz; Aragonés, Nuria; Suárez, Berta; Carrasco, José Miguel; Rodríguez, Silvia; Ramis, Rebeca; Boldo, Elena; López-Abente, Gonzalo

    2005-09-01

    The objective of this study was to identify occupations and industries with an increased incidence of thyroid cancer in Swedish workers. Standardized incidence ratios were computed for each job and industry for the period 1971-1989 through record linkage with the Swedish National Cancer and Death Registers. Age-, period-, and geographically adjusted relative risks were calculated using Poisson models. Increased risks were found for teachers, construction carpenters, policemen, and prison/reformatory officials among men, and for medical technicians, shop managers, tailors, and shoecutters among women. Industries with excess risk include the manufacture of agricultural machinery, the manufacture of computing/accessories, and public administration/police among men; and the manufacture of prefabricated wooden buildings, electric installation work, and the wholesale of live animals/fertilizers/oilseed/grain among women. Our results corroborate some previously reported increased risks. Further research is needed to assess the influence of specific chemical agents associated with some of the highlighted work environments.
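
    The standardized incidence ratio underlying such an analysis is simply observed over expected cases, with the expectation accumulated from stratum-specific reference rates; a minimal sketch with hypothetical strata:

```python
def standardized_incidence_ratio(observed, person_years, reference_rates):
    """SIR = observed / expected, where expected cases are summed from
    stratum-specific reference rates (e.g., age x period strata)."""
    expected = sum(person_years[stratum] * reference_rates[stratum]
                   for stratum in person_years)
    return observed / expected

# Hypothetical strata: (age band, calendar period) -> person-years / rates
py    = {("40-49", "1971-79"): 12000.0, ("50-59", "1980-89"): 8000.0}
rates = {("40-49", "1971-79"): 1.2e-4,  ("50-59", "1980-89"): 2.5e-4}
print(standardized_incidence_ratio(5, py, rates))  # ~1.45
```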

  16. Agreement processing and attraction errors in aging: evidence from subject-verb agreement in German.

    PubMed

    Reifegerste, Jana; Hauer, Franziska; Felser, Claudia

    2017-11-01

    Effects of aging on lexical processing are well attested, but the picture is less clear for grammatical processing. Where age differences emerge, they are usually ascribed to working-memory (WM) decline. Previous studies on the influence of WM on agreement computation have yielded inconclusive results, and work on aging and subject-verb agreement processing is lacking. In two experiments (Experiment 1: timed grammaticality judgment; Experiment 2: self-paced reading + WM test), we investigated older (OA) and younger (YA) adults' susceptibility to agreement attraction errors. We found longer reading latencies and judgment reaction times (RTs) for OAs. Further, OAs, particularly those with low WM scores, were more accepting of sentences with attraction errors than YAs. OAs also showed longer reading latencies than YAs for ungrammatical sentences, again modulated by WM. Our results indicate that OAs have greater difficulty blocking intervening nouns from interfering with the computation of agreement dependencies, and that WM can modulate this effect.

  17. Design and implementation of practical bidirectional texture function measurement devices focusing on the developments at the University of Bonn.

    PubMed

    Schwartz, Christopher; Sarlette, Ralf; Weinmann, Michael; Rump, Martin; Klein, Reinhard

    2014-04-28

    Understanding as well as realistic reproduction of the appearance of materials play an important role in computer graphics, computer vision and industry. They enable applications such as digital material design, virtual prototyping and faithful virtual surrogates for entertainment, marketing, education or cultural heritage documentation. A particularly fruitful way to obtain the digital appearance is the acquisition of reflectance from real-world material samples. Therefore, a great variety of devices to perform this task has been proposed. In this work, we investigate their practical usefulness. We first identify a set of necessary attributes and establish a general categorization of different designs that have been realized. Subsequently, we provide an in-depth discussion of three particular implementations by our work group, demonstrating advantages and disadvantages of different system designs with respect to the previously established attributes. Finally, we survey the existing literature to compare our implementation with related approaches.

  18. General Relativistic Precession in Small Solar System Bodies

    NASA Astrophysics Data System (ADS)

    Sekhar, Aswin; Werner, Stephanie; Hoffmann, Volker; Asher, David; Vaubaillon, Jeremie; Hajdukova, Maria; Li, Gongjie

    2016-10-01

    Introduction: One of the greatest successes of Einstein's General Theory of Relativity (GR) was the correct prediction of the precession of the perihelion of Mercury. The closed-form expression for this precession tells us that substantial GR precession occurs only for bodies that combine a moderately small perihelion distance with a small semi-major axis. The Minimum Orbit Intersection Distance (MOID) helps us understand the closest proximity of two orbits in space, so evaluating MOID is crucial for understanding close encounters and collision scenarios. In this work, we look at scenarios where a small GR precession in the argument of pericentre (ω) can create substantial changes in MOID for small bodies ranging from meteoroids to comets and asteroids. Analytical Approach and Numerical Integrations: Previous works have developed neat analytical techniques for understanding different collision scenarios, and we use those standard expressions to compute MOID analytically. The nature of this mathematical function is such that a relatively small GR precession can lead to drastic changes in MOID values depending on the initial value of ω. Numerical integrations were done with the MERCURY package, incorporating the GR code to test the same effects. The numerical approach showed the same relationship (as predicted by analytical theory) between values of ω and the peaks/dips in MOID values. Previous works have shown that GR precession suppresses Kozai oscillations, and this was verified in our integrations. There is overall agreement between the analytical and numerical methods. Summary and Discussion: We find that GR precession could play an important role in calculations pertaining to MOID and close-encounter scenarios for certain small solar system bodies (depending on their initial orbital elements). Previous works have examined impact probabilities and collision scenarios on planets from different small-body populations; this work aims to identify sub-sets of orbits where GR could play an interesting role. Certain parallels are drawn between the cases of asteroids, comets, and small-perihelion-distance meteoroid streams.
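
    The closed-form expression referred to above is the leading-order pericentre advance per orbit, Δω = 6πGM/(c²a(1 - e²)), which is why a substantial rate requires both a small semi-major axis a and a small perihelion distance q = a(1 - e). A quick numerical check against Mercury's well-known 43 arcsec/century:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg

def gr_precession_per_orbit(a, e):
    """Leading-order GR pericentre advance per orbit (radians):
    6*pi*G*M / (c^2 * a * (1 - e^2))."""
    return 6 * math.pi * G * M_sun / (c ** 2 * a * (1 - e ** 2))

# Mercury: a = 5.79e10 m, e = 0.2056, period = 0.2408 yr
a, e, T_yr = 5.79e10, 0.2056, 0.2408
arcsec_per_orbit = math.degrees(gr_precession_per_orbit(a, e)) * 3600
print(arcsec_per_orbit * (100 / T_yr))   # ~43 arcsec per century
```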

  19. Multi-Point Combustion System: Final Report

    NASA Technical Reports Server (NTRS)

    Goeke, Jerry; Pack, Spencer; Zink, Gregory; Ryon, Jason

    2014-01-01

    A low-NOx emission combustor concept has been developed for NASA's Environmentally Responsible Aircraft (ERA) program to meet N+2 emissions goals for a 70,000 lb thrust engine application. These goals include a 75 percent reduction of LTO NOx from CAEP6 standards without increasing CO, UHC, or smoke from that of the current state of the art. An additional key factor in this work is to improve lean combustion stability over that of previous work performed on similar technology in the early 2000s. The purpose of this paper is to present the final report for the NASA contract. This work included the design, analysis, and test of a multi-point combustion system. All design work was based on the results of Computational Fluid Dynamics modeling, with the end results tested on a medium-pressure combustion rig at the UC and a medium-pressure combustion rig at GRC. The theories behind the designs, results of analysis, and experimental test data are discussed in this report. The combustion system consists of five radially staged rows of injectors, where ten small-scale injectors are used in place of a single traditional nozzle. Major accomplishments of the current work include the design of a Multipoint Lean Direct Injection (MLDI) array and associated air-blast and pilot fuel injectors, which is expected to meet or exceed the goal of a 75 percent reduction in LTO NOx from CAEP6 standards. This design incorporates a reduced number of injectors over previous multipoint designs, simplified and lightweight components, and a very compact combustor section. An additional outcome of the program is validation that the design of these combustion systems can be aided by the use of Computational Fluid Dynamics to predict and reduce emissions. Furthermore, the staging of fuel through the individually controlled, radially staged injector rows successfully demonstrated improved low-power operability as well as improvements in emissions over previous multipoint designs. An additional comparison between Jet-A fuel and a hydrotreated biofuel is made to determine the viability of the technology for use with alternative fuels. Finally, the operability of the array and associated nozzles proved to be very stable without requiring additional active or passive control systems. A number of publications have resulted from this work.

  20. Hierarchic models for laminated plates

    NASA Technical Reports Server (NTRS)

    Szabo, Barna A.; Actis, Ricardo L.

    1991-01-01

    The research conducted on the formulation of hierarchic models for laminated plates is described. The work is an extension of earlier work on laminated strips. We investigate the use of a single parameter, beta, that represents the degree to which the equilibrium equations of three-dimensional elasticity are satisfied; the powers of beta identify members of the hierarchic sequence. Numerical examples analyzed with the proposed sequence of models are included. The results obtained for square plates with uniform loading and homogeneous boundary conditions are very encouraging. Several cross-ply and angle-ply laminates were evaluated and the results compared with those of the fully three-dimensional model, computed using MSC/PROBE, and with previously reported work on laminated strips.

  1. Modified computation of the nozzle damping coefficient in solid rocket motors

    NASA Astrophysics Data System (ADS)

    Liu, Peijin; Wang, Muxin; Yang, Wenjing; Gupta, Vikrant; Guan, Yu; Li, Larry K. B.

    2018-02-01

    In solid rocket motors, the bulk advection of acoustic energy out of the nozzle constitutes a significant source of damping and can thus influence the thermoacoustic stability of the system. In this paper, we propose and test a modified version of a historically accepted method of calculating the nozzle damping coefficient. Building on previous work, we separate the nozzle from the combustor, but compute the acoustic admittance at the nozzle entry using the linearized Euler equations (LEEs) rather than with short nozzle theory. We compute the combustor's acoustic modes also with the LEEs, taking the nozzle admittance as the boundary condition at the combustor exit while accounting for the mean flow field in the combustor using an analytical solution to Taylor-Culick flow. We then compute the nozzle damping coefficient via a balance of the unsteady energy flux through the nozzle. Compared with established methods, the proposed method offers competitive accuracy at reduced computational costs, helping to improve predictions of thermoacoustic instability in solid rocket motors.

  2. CMS Distributed Computing Integration in the LHC sustained operations era

    NASA Astrophysics Data System (ADS)

    Grandi, C.; Bockelman, B.; Bonacorsi, D.; Fisk, I.; González Caballero, I.; Farina, F.; Hernández, J. M.; Padhi, S.; Sarkar, S.; Sciabà, A.; Sfiligoi, I.; Spiga, F.; Úbeda García, M.; Van Der Ster, D. C.; Zvada, M.

    2011-12-01

    After many years of preparation, the CMS computing system has reached a situation where stability in operations limits the possibility of introducing innovative features. Nevertheless, it is this same need for stability and smooth operations that requires the introduction of features that were considered non-strategic in previous phases. Examples are: adequate authorization to control and prioritize access to storage and computing resources; improved monitoring to investigate problems and identify bottlenecks in the infrastructure; increased automation to reduce the manpower needed for operations; and an effective process for deploying new releases of the software tools in production. We present the work of the CMS Distributed Computing Integration Activity, which is responsible for providing a liaison between the CMS distributed computing infrastructure and the software providers, both internal and external to CMS. In particular, we describe the introduction of new middleware features during the last 18 months, as well as the requirements placed on Grid and Cloud software developers for the future.

  3. Development of Three-Dimensional Flow Code Package to Predict Performance and Stability of Aircraft with Leading Edge Ice Contamination

    NASA Technical Reports Server (NTRS)

    Strash, D. J.; Summa, J. M.

    1996-01-01

    In the work reported herein, a simplified, uncoupled, zonal procedure is utilized to assess the capability of numerically simulating icing effects on a Boeing 727-200 aircraft. The computational approach combines potential flow plus boundary layer simulations by VSAERO for the un-iced aircraft forces and moments with Navier-Stokes simulations by NPARC for the incremental forces and moments due to iced components. These are compared with wind tunnel force and moment data, supplied by the Boeing Company, examining longitudinal flight characteristics. Grid refinement improved the local flow features over previously reported work with no appreciable difference in the incremental ice effect. The computed lift curve slope with and without empennage ice matches the experimental value to within 1%, and the zero lift angle agrees to within 0.2 of a degree. The computed slope of the un-iced and iced aircraft longitudinal stability curve is within about 2% of the test data. This work demonstrates the feasibility of a zonal method for the icing analysis of complete aircraft or isolated components within the linear angle of attack range. In fact, this zonal technique has allowed for the viscous analysis of a complete aircraft with ice which is currently not otherwise considered tractable.

  4. 5 CFR 839.1002 - Will OPM compute the lost earnings if my qualifying retirement coverage error was previously...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Will OPM compute the lost earnings if my... compute the lost earnings if my qualifying retirement coverage error was previously corrected and I made... coverage error was previously corrected, OPM will compute the lost earnings on your make-up contributions...

  5. Radiolysis Model Sensitivity Analysis for a Used Fuel Storage Canister

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wittman, Richard S.

    2013-09-20

    This report fulfills the M3 milestone (M3FT-13PN0810027) by reporting on a radiolysis computer model analysis that estimates the generation of radiolytic products for a storage canister. The analysis considers radiolysis outside the storage canister walls and within the canister fill gas over a possible 300-year lifetime. Previous work relied on estimates based directly on a water radiolysis G-value. This work includes that effect and adds coupled kinetics for 111 reactions among 40 gas species to account for radiolytically induced chemistry, including water recombination and reactions with air.
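
    The coupled-kinetics approach amounts to integrating a stiff system of mass-action rate equations with a radiolytic source term. The following toy sketch (illustrative species, yields, and rate constants only, not the report's 111-reaction, 40-species set) shows the structure:

        import numpy as np
        from scipy.integrate import solve_ivp

        G_H2 = 1.0e-9        # radiolytic H2 yield, mol/J (illustrative)
        dose_rate = 1.0e-3   # absorbed dose rate, Gy/s (illustrative)

        def rhs(t, y):
            h2, o2, h2o = y
            k = 1.0e-4                           # lumped recombination rate (illustrative)
            r = k * h2 * np.sqrt(max(o2, 0.0))   # H2 + 1/2 O2 -> H2O
            return [G_H2 * dose_rate - r,        # radiolytic production minus recombination
                    -0.5 * r,
                    r]

        # integrate over ~100 years with a stiff solver
        sol = solve_ivp(rhs, (0.0, 3.15e9), [0.0, 8.0e-3, 0.0], method="LSODA")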

  6. Documenting the NASA Armstrong Flight Research Center Oblate Earth Simulation Equations of Motion and Integration Algorithm

    NASA Technical Reports Server (NTRS)

    Clarke, R.; Lintereur, L.; Bahm, C.

    2016-01-01

    A desire for more complete documentation of the National Aeronautics and Space Administration (NASA) Armstrong Flight Research Center (AFRC), Edwards, California, legacy code used in the core simulation has led to this effort to fully document the oblate Earth six-degree-of-freedom equations of motion and integration algorithm. The authors of this report have taken much of the earlier work of the simulation engineering group and used it as a jumping-off point for this report. The largest addition this report makes is that each element of the equations of motion is traced back to first principles, and at no point is the reader forced to take an equation on faith alone. There are no discoveries of previously unknown principles contained in this report; this report is a collection and presentation of textbook principles. The value of this report is that those textbook principles are herein documented in standard nomenclature that matches the form of the computer code DERIVC. Previous handwritten notes form much of the backbone of this work; however, in almost every area, derivations are explicitly shown to assure the reader that the equations which make up the oblate Earth version of the computer routine, DERIVC, are correct.

  7. GPU-accelerated two dimensional synthetic aperture focusing for photoacoustic microscopy

    NASA Astrophysics Data System (ADS)

    Liu, Siyu; Feng, Xiaohua; Gao, Fei; Jin, Haoran; Zhang, Ruochong; Luo, Yunqi; Zheng, Yuanjin

    2018-02-01

    Acoustic resolution photoacoustic microscopy (AR-PAM) generally suffers from a limited depth of focus, which has been extended by synthetic aperture focusing techniques (SAFTs). However, for three-dimensional AR-PAM, current one-dimensional (1D) SAFT and its improved versions, such as cross-shaped SAFT, do not provide isotropic resolution in the lateral direction, so the full potential of the SAFT remains untapped. To this end, a two-dimensional (2D) SAFT with a fast computing architecture is proposed in this work. As explained by geometric modeling and Fourier acoustics, 2D-SAFT provides the narrowest post-focusing width and thus the best lateral resolution. Compared with previous 1D-SAFT techniques, the proposed 2D-SAFT improves the lateral resolution by at least 1.7 times and the signal-to-noise ratio (SNR) by about 10 dB in both simulation and experiments. Moreover, the improved 2D-SAFT algorithm is accelerated by a graphical processing unit that reduces the long reconstruction time to only a few seconds. The proposed 2D-SAFT is demonstrated to outperform previously reported 1D SAFTs in depth of focus, imaging resolution, and SNR while maintaining fast computational efficiency. This work facilitates future studies on in vivo deeper and high-resolution photoacoustic microscopy beyond several centimeters.
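
    For orientation, a plain (CPU) delay-and-sum version of 2D-SAFT with a virtual detector can be sketched as below; the data layout, focal depth z_f, and aperture size are assumptions, and the GPU implementation replaces these loops with parallel kernels:

        import numpy as np

        def saft_2d(rf, dx, dy, c, fs, z_f, half_ap):
            """rf: A-line stack on a 2D scan grid, shape (Nx, Ny, Nt)."""
            Nx, Ny, Nt = rf.shape
            z = c * np.arange(Nt) / fs / 2.0          # nominal depth of each sample
            out = np.zeros_like(rf)
            for i in range(Nx):
                for j in range(Ny):
                    for di in range(-half_ap, half_ap + 1):
                        for dj in range(-half_ap, half_ap + 1):
                            ii, jj = i + di, j + dj
                            if not (0 <= ii < Nx and 0 <= jj < Ny):
                                continue
                            r_lat = np.hypot(di * dx, dj * dy)
                            # path via the neighbour's virtual detector at depth z_f
                            r = np.sqrt(r_lat ** 2 + (z - z_f) ** 2)
                            delay = 2.0 * (z_f + np.sign(z - z_f) * r) / c
                            idx = np.clip((delay * fs).astype(int), 0, Nt - 1)
                            out[i, j] += rf[ii, jj, idx]
            return out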

  8. Advanced networks and computing in healthcare

    PubMed Central

    Ackerman, Michael

    2011-01-01

    As computing and network capabilities continue to rise, it becomes increasingly important to understand the varied applications for using them to provide healthcare. The objective of this review is to identify key characteristics and attributes of healthcare applications involving the use of advanced computing and communication technologies, drawing upon 45 research and development projects in telemedicine and other aspects of healthcare funded by the National Library of Medicine over the past 12 years. Only projects publishing in the professional literature were included in the review. Four projects did not publish beyond their final reports. In addition, the authors drew on their first-hand experience as project officers, reviewers and monitors of the work. Major themes in the corpus of work were identified, characterizing key attributes of advanced computing and network applications in healthcare. Advanced computing and network applications are relevant to a range of healthcare settings and specialties, but they are most appropriate for solving a narrower range of problems in each. Healthcare projects undertaken primarily to explore potential have also demonstrated effectiveness and depend on the quality of network service as much as bandwidth. Many applications are enabling, making it possible to provide service or conduct research that previously was not possible or to achieve outcomes in addition to those for which projects were undertaken. Most notable are advances in imaging and visualization, collaboration and sense of presence, and mobility in communication and information-resource use. PMID:21486877

  9. Numerical comparison of grid pattern diffraction effects through measurement and modeling with OptiScan software

    NASA Astrophysics Data System (ADS)

    Murray, Ian B.; Densmore, Victor; Bora, Vaibhav; Pieratt, Matthew W.; Hibbard, Douglas L.; Milster, Tom D.

    2011-06-01

    Coatings of various metalized patterns are used for heating and electromagnetic interference (EMI) shielding applications. Previous work has focused on macro differences between different types of grids, and has shown good correlation between measurements and analyses of grid diffraction. To advance this work, we have utilized the University of Arizona's OptiScan software, which has been optimized for this application by using the Babinet Principle. When operating on an appropriate computer system, this algorithm produces results hundreds of times faster than standard Fourier-based methods, and allows realistic cases to be modeled for the first time. By using previously published derivations by Exotic Electro-Optics, we compare diffraction performance of repeating and randomized grid patterns with equivalent sheet resistance using numerical performance metrics. Grid patterns of each type are printed on optical substrates and measured energy is compared against modeled energy.
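
    A minimal Fourier-optics sketch of the kind of computation being compared may be useful. It assumes a square repeating grid and far-field (Fraunhofer) conditions; all parameters below are illustrative, and OptiScan's Babinet-based optimization is only indicated in the comments:

        import numpy as np

        n, pitch, line_w = 1024, 16, 2         # samples, grid pitch, line width (px)
        mask = np.ones((n, n))                 # open aperture, unit transmission
        for k in range(0, n, pitch):
            mask[k:k + line_w, :] = 0.0        # horizontal metal lines
            mask[:, k:k + line_w] = 0.0        # vertical metal lines

        # Far-field diffracted energy of the gridded aperture.
        far = np.fft.fftshift(np.fft.fft2(mask))
        energy = np.abs(far) ** 2

        # Babinet principle: the field of the gridded aperture equals that of the
        # open aperture minus that of the (sparse) metal lines alone, so only the
        # few opaque lines need to be transformed -- the basis of the speed-up.
        lines_only = 1.0 - mask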

  10. Autumn Algorithm-Computation of Hybridization Networks for Realistic Phylogenetic Trees.

    PubMed

    Huson, Daniel H; Linz, Simone

    2018-01-01

    A minimum hybridization network is a rooted phylogenetic network that displays two given rooted phylogenetic trees using a minimum number of reticulations. Previous mathematical work on their calculation has usually assumed the input trees to be bifurcating, correctly rooted, or that they both contain the same taxa. These assumptions do not hold in biological studies and "realistic" trees have multifurcations, are difficult to root, and rarely contain the same taxa. We present a new algorithm for computing minimum hybridization networks for a given pair of "realistic" rooted phylogenetic trees. We also describe how the algorithm might be used to improve the rooting of the input trees. We introduce the concept of "autumn trees", a nice framework for the formulation of algorithms based on the mathematics of "maximum acyclic agreement forests". While the main computational problem is hard, the run-time depends mainly on how different the given input trees are. In biological studies, where the trees are reasonably similar, our parallel implementation performs well in practice. The algorithm is available in our open source program Dendroscope 3, providing a platform for biologists to explore rooted phylogenetic networks. We demonstrate the utility of the algorithm using several previously studied data sets.

  11. Evidence of Effectiveness of Health Care Professionals Using Handheld Computers: A Scoping Review of Systematic Reviews

    PubMed Central

    2013-01-01

    Background Handheld computers and mobile devices provide instant access to vast amounts and types of useful information for health care professionals. Their reduced size and increased processing speed has led to rapid adoption in health care. Thus, it is important to identify whether handheld computers are actually effective in clinical practice. Objective A scoping review of systematic reviews was designed to provide a quick overview of the documented evidence of effectiveness for health care professionals using handheld computers in their clinical work. Methods A detailed search, sensitive for systematic reviews was applied for Cochrane, Medline, EMBASE, PsycINFO, Allied and Complementary Medicine Database (AMED), Global Health, and Cumulative Index to Nursing and Allied Health Literature (CINAHL) databases. All outcomes that demonstrated effectiveness in clinical practice were included. Classroom learning and patient use of handheld computers were excluded. Quality was assessed using the Assessment of Multiple Systematic Reviews (AMSTAR) tool. A previously published conceptual framework was used as the basis for dual data extraction. Reported outcomes were summarized according to the primary function of the handheld computer. Results Five systematic reviews met the inclusion and quality criteria. Together, they reviewed 138 unique primary studies. Most reviewed descriptive intervention studies, where physicians, pharmacists, or medical students used personal digital assistants. Effectiveness was demonstrated across four distinct functions of handheld computers: patient documentation, patient care, information seeking, and professional work patterns. Within each of these functions, a range of positive outcomes were reported using both objective and self-report measures. The use of handheld computers improved patient documentation through more complete recording, fewer documentation errors, and increased efficiency. Handheld computers provided easy access to clinical decision support systems and patient management systems, which improved decision making for patient care. Handheld computers saved time and gave earlier access to new information. There were also reports that handheld computers enhanced work patterns and efficiency. Conclusions This scoping review summarizes the secondary evidence for effectiveness of handheld computers and mhealth. It provides a snapshot of effective use by health care professionals across four key functions. We identified evidence to suggest that handheld computers provide easy and timely access to information and enable accurate and complete documentation. Further, they can give health care professionals instant access to evidence-based decision support and patient management systems to improve clinical decision making. Finally, there is evidence that handheld computers allow health professionals to be more efficient in their work practices. It is anticipated that this evidence will guide clinicians and managers in implementing handheld computers in clinical practice and in designing future research. PMID:24165786

  12. Brain-Inspired Photonic Signal Processor for Generating Periodic Patterns and Emulating Chaotic Systems

    NASA Astrophysics Data System (ADS)

    Antonik, Piotr; Haelterman, Marc; Massar, Serge

    2017-05-01

    Reservoir computing is a bioinspired computing paradigm for processing time-dependent signals. Its hardware implementations have received much attention because of their simplicity and remarkable performance on a series of benchmark tasks. In previous experiments, the output was uncoupled from the system and, in most cases, simply computed off-line on a postprocessing computer. However, numerical investigations have shown that feeding the output back into the reservoir opens the possibility of long-horizon time-series forecasting. Here, we present a photonic reservoir computer with output feedback, and we demonstrate its capacity to generate periodic time series and to emulate chaotic systems. We study in detail the effect of experimental noise on system performance. In the case of chaotic systems, we introduce several metrics, based on standard signal-processing techniques, to evaluate the quality of the emulation. Our work significantly enlarges the range of tasks that can be solved by hardware reservoir computers and, therefore, the range of applications they could potentially tackle. It also raises interesting questions in nonlinear dynamics and chaos theory.
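
    The feedback architecture is easy to state concretely. The sketch below is a conventional echo-state-network emulation (not the photonic hardware): the reservoir is first driven by the target signal (teacher forcing), a linear readout is fit by ridge regression, and the output is then fed back as input for autonomous generation. Sizes and scalings are illustrative:

        import numpy as np

        rng = np.random.default_rng(0)
        N = 200
        W = rng.normal(0.0, 1.0, (N, N))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1
        w_fb = rng.uniform(-1.0, 1.0, N)                  # output-feedback weights

        def step(x, u):
            return np.tanh(W @ x + w_fb * u)

        # Teacher forcing on a periodic target, then ridge-regress the readout.
        T = 2000
        target = np.sin(2 * np.pi * np.arange(T) / 50.0)
        X, x = np.zeros((T, N)), np.zeros(N)
        for t in range(T - 1):
            x = step(x, target[t])
            X[t + 1] = x
        w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ target)

        # Autonomous generation: the computed output is fed back into the reservoir.
        y, ys = target[-1], []
        for _ in range(500):
            x = step(x, y)
            y = w_out @ x
            ys.append(y)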

  13. Taming Many-Parameter BSM Models with Bayesian Neural Networks

    NASA Astrophysics Data System (ADS)

    Kuchera, M. P.; Karbo, A.; Prosper, H. B.; Sanchez, A.; Taylor, J. Z.

    2017-09-01

    The search for physics Beyond the Standard Model (BSM) is a major focus of large-scale high energy physics experiments. One method is to look for specific deviations from the Standard Model that are predicted by BSM models. In cases where the model has a large number of free parameters, standard search methods become intractable due to computation time. This talk presents results using Bayesian Neural Networks, a supervised machine learning method, to enable the study of higher-dimensional models. The popular phenomenological Minimal Supersymmetric Standard Model was studied as an example of the feasibility and usefulness of this method. Graphics Processing Units (GPUs) are used to expedite the calculations. Cross-section predictions for 13 TeV proton collisions will be presented. My participation in the Conference Experience for Undergraduates (CEU) in 2004-2006 exposed me to the national and global significance of cutting-edge research. At the 2005 CEU, I presented work from the previous summer's SULI internship at Lawrence Berkeley Laboratory, where I learned to program while working on the Majorana Project. That work inspired me to follow a similar research path, which led me to my current work on computational methods applied to BSM physics.

  14. Computing exponentially faster: implementing a non-deterministic universal Turing machine using DNA

    PubMed Central

    Currin, Andrew; Korovin, Konstantin; Ababi, Maria; Roper, Katherine; Kell, Douglas B.; Day, Philip J.

    2017-01-01

    The theory of computer science is based around universal Turing machines (UTMs): abstract machines able to execute all possible algorithms. Modern digital computers are physical embodiments of classical UTMs. For the most important class of problems in computer science, non-deterministic polynomial complete problems, non-deterministic UTMs (NUTMs) are theoretically exponentially faster than both classical UTMs and quantum mechanical UTMs (QUTMs). However, no attempt has previously been made to build an NUTM, and their construction has been regarded as impossible. Here, we demonstrate the first physical design of an NUTM. This design is based on Thue string rewriting systems, and thereby avoids the limitations of most previous DNA computing schemes: all the computation is local (simple edits to strings) so there is no need for communication, and there is no need to order operations. The design exploits DNA's ability to replicate to execute an exponential number of computational paths in P time. Each Thue rewriting step is embodied in a DNA edit implemented using a novel combination of polymerase chain reactions and site-directed mutagenesis. We demonstrate that the design works using both computational modelling and in vitro molecular biology experimentation: the design is thermodynamically favourable, microprogramming can be used to encode arbitrary Thue rules, all classes of Thue rule can be implemented, and rule implementation is non-deterministic. In an NUTM, the resource limitation is space, which contrasts with classical UTMs and QUTMs where it is time. This fundamental difference enables an NUTM to trade space for time, which is significant for both theoretical computer science and physics. It is also of practical importance, for to quote Richard Feynman ‘there's plenty of room at the bottom’. This means that a desktop DNA NUTM could potentially utilize more processors than all the electronic computers in the world combined, and thereby outperform the world's current fastest supercomputer, while consuming a tiny fraction of its energy. PMID:28250099
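
    The computational core, a non-deterministically applied Thue string-rewriting system, is easy to emulate (serially) on a conventional machine; in the NUTM every applicable rewrite would instead be explored in parallel by DNA replication. The rule set and strings below are toy examples, not the paper's encodings:

        from collections import deque

        rules = [("ab", "ba"), ("ba", "ab"), ("aa", "")]

        def rewrites(s):
            """Yield every string reachable from s by one rule application."""
            for lhs, rhs in rules:
                i = s.find(lhs)
                while i != -1:
                    yield s[:i] + rhs + s[i + len(lhs):]
                    i = s.find(lhs, i + 1)

        def reachable(start, goal, max_steps=10):
            """Breadth-first emulation of the non-deterministic branching."""
            frontier, seen = deque([(start, 0)]), {start}
            while frontier:
                s, d = frontier.popleft()
                if s == goal:
                    return True
                if d < max_steps:
                    for t in rewrites(s):
                        if t not in seen:
                            seen.add(t)
                            frontier.append((t, d + 1))
            return False

        print(reachable("abab", "bb"))   # True: abab -> baab -> bb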

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kervin, Karina E.; Cook, Robert B.; Michener, William K.

    Conventional wisdom suggests that there are benefits to creating shared repositories of scientific data. Funding agencies require that the data from sponsored projects be shared publicly, but individual researchers often see little personal benefit to offset the work of creating easily sharable data. These conflicting forces have led to the emergence of a new role to support researchers: data managers. This paper identifies key differences between the socio-technical context of data managers and other "human infrastructure" roles articulated previously in Computer Supported Cooperative Work (CSCW) literature and summarizes the challenges that data managers face when accepting data for archival and reuse. Finally, while data managers' work is critical for advancing science and science policy, their work is often invisible and under-appreciated since it takes place behind the scenes.

  16. Intra Frame Coding In Advanced Video Coding Standard (H.264) to Obtain Consistent PSNR and Reduce Bit Rate for Diagonal Down Left Mode Using Gaussian Pulse

    NASA Astrophysics Data System (ADS)

    Manjanaik, N.; Parameshachari, B. D.; Hanumanthappa, S. N.; Banu, Reshma

    2017-08-01

    The intra prediction process of the H.264 video coding standard is used to code the first frame of a video (the intra frame) and achieves good coding efficiency compared to previous video coding standards. Intra frame coding reduces spatial pixel redundancy within the current frame, reduces computational complexity, and provides better rate-distortion performance. Intra frames are conventionally coded with the Rate Distortion Optimization (RDO) method, which increases computational complexity and bit rate and reduces picture quality, making it difficult to use in real-time applications; many researchers have therefore developed fast mode decision algorithms for intra frame coding. Previous work on intra frame coding in H.264 using fast mode decision intra prediction algorithms, based on various techniques, suffered increased bit rate and degraded picture quality (PSNR) at different quantization parameters. Many previous fast mode decision approaches achieved only a reduction in computational complexity or a saving in encoding time, at the cost of increased bit rate and loss of picture quality. To avoid these costs, this paper develops a better approach: Gaussian-pulse scaling for intra frame coding using the diagonal down-left intra prediction mode, to achieve higher coding efficiency in terms of PSNR and bit rate. In the proposed method, a Gaussian pulse is multiplied with each 4x4 block of frequency-domain coefficients of the 4x4 sub-macroblocks of the current frame before quantization. Multiplying each 4x4 block of integer-transformed coefficients by the Gaussian pulse scales the coefficient information in a reversible manner: frequency samples are scaled in a known and controllable way without intermixing of coefficients, which prevents the picture from degrading badly at higher values of the quantization parameter. The proposed method was implemented using MATLAB and the JM 18.6 reference software. PSNR, bit rate, and compression of intra frames were measured on YUV video sequences at QCIF resolution for different values of the quantization parameter, with the Gaussian pulse applied to the diagonal down-left intra prediction mode. The simulation results of the proposed algorithm are tabulated and compared with a previous algorithm, the method of Tian et al. The proposed algorithm reduced the bit rate by 30.98% on average while maintaining consistent picture quality for QCIF sequences compared to the method of Tian et al.
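
    To make the core idea concrete, here is a minimal sketch of the reversible Gaussian scaling around the quantizer. A floating-point DCT stands in for H.264's 4x4 integer transform, and the pulse width sigma is an assumed parameter:

        import numpy as np
        from scipy.fftpack import dct, idct   # stand-in for the 4x4 integer transform

        u = np.arange(4)
        U, V = np.meshgrid(u, u)
        sigma = 2.0
        gauss = np.exp(-(U ** 2 + V ** 2) / (2.0 * sigma ** 2))   # 4x4 Gaussian pulse

        def encode_block(block, qstep):
            coeffs = dct(dct(block.T, norm="ortho").T, norm="ortho")
            return np.round(coeffs * gauss / qstep)   # known, reversible scaling

        def decode_block(qcoeffs, qstep):
            coeffs = qcoeffs * qstep / gauss          # invert the known pulse
            return idct(idct(coeffs.T, norm="ortho").T, norm="ortho")

    Because the pulse is known at both encoder and decoder, dividing it back out after inverse quantization recovers the coefficients up to quantization error, which is how the scaling remains reversible.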

  17. Implementation of the thinking skills for work program in a psychosocial clubhouse.

    PubMed

    McGurk, Susan R; Schiano, Diane; Mueser, Kim T; Wolfe, Rosemarie

    2010-01-01

    Cognitive remediation programs aimed at improving role functioning have been implemented in a variety of different mental health treatment settings, but not in psychosocial clubhouses. This study sought to determine the feasibility and preliminary outcomes of providing a cognitive remediation program (the Thinking Skills for Work program), developed and previously implemented in supported employment programs at mental health agencies, in a psychosocial clubhouse. Twenty-three members with a history of difficulties getting or keeping jobs, who were participating in a supported employment program at a psychosocial clubhouse, were enrolled in the Thinking Skills for Work program. A neurocognitive battery was administered at baseline and 3 months later after completion of the computer cognitive training component of the program. Hours of competitive work were tracked for the 2 years before enrollment and 2 years following enrollment. Other work-related activities (school, volunteer) were also tracked for 2 years following enrollment. Twenty-one members (91%) completed 6 or more computer cognitive training sessions. Participants demonstrated significant improvements on neurocognitive measures of processing speed, verbal learning and memory, and executive functions. Sixty percent of the members obtained a competitive job during the 2-year follow-up, and 74% were involved in some type of work-related activity. Participants worked significantly more competitive hours over the 2 years after joining the Thinking Skills for Work program than before. The findings support the feasibility and promise of implementing the Thinking Skills for Work program in the context of supported employment provided at psychosocial clubhouses.

  18. Optimizing Teleportation Cost in Distributed Quantum Circuits

    NASA Astrophysics Data System (ADS)

    Zomorodi-Moghadam, Mariam; Houshmand, Mahboobeh; Houshmand, Monireh

    2018-03-01

    The presented work provides a procedure for optimizing the communication cost of a distributed quantum circuit (DQC) in terms of the number of qubit teleportations. Because technology limitations do not allow large quantum computers to be built as a single processing element, distributed quantum computation is an appropriate way to overcome this difficulty. Previous studies have applied ad hoc solutions to distribute a quantum system for special cases and applications. In this study, a general approach is proposed to optimize the number of teleportations for a DQC consisting of two spatially separated, long-distance quantum subsystems. To this end, different configurations of locations for executing gates whose qubits are in distinct subsystems are considered, and for each configuration the proposed algorithm is run to find the minimum number of required teleportations. Finally, the configuration which leads to the minimum number of teleportations is reported. The proposed method can be used as an automated procedure to find the configuration with the optimal communication cost for the DQC. This cost can serve as a basic measure of communication cost for future work on distributed quantum circuits.
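
    A brute-force version of the configuration search is straightforward to sketch. Each global gate (one whose qubits live in different subsystems) may execute on either side, and a teleportation is charged whenever a needed qubit is currently on the other side; the gate list and partition below are toy assumptions, not the paper's algorithm:

        from itertools import product

        def teleport_cost(gates, home, config):
            """gates: list of (q1, q2); home: qubit -> 0/1 partition;
            config: execution side chosen for each gate."""
            loc = dict(home)
            cost = 0
            for (q1, q2), side in zip(gates, config):
                for q in (q1, q2):
                    if loc[q] != side:      # qubit must be teleported across
                        cost += 1
                        loc[q] = side
            return cost

        gates = [("a", "c"), ("b", "c"), ("a", "d")]
        home = {"a": 0, "b": 0, "c": 1, "d": 1}
        best = min(product((0, 1), repeat=len(gates)),
                   key=lambda cfg: teleport_cost(gates, home, cfg))
        print(best, teleport_cost(gates, home, best))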

  19. Computational identification of promising thermoelectric materials among known quasi-2D binary compounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorai, Prashun; Toberer, Eric S.; Stevanović, Vladan

    Quasi low-dimensional structures are abundant among known thermoelectric materials, primarily because of their low lattice thermal conductivities. In this work, we have computationally assessed the potential of 427 known binary quasi-2D structures in 272 different chemistries for thermoelectric performance. To assess the thermoelectric performance, we employ an improved version of our previously developed descriptor for thermoelectric performance [Yan et al., Energy Environ. Sci., 2015, 8, 983]. The improvement is in the explicit treatment of van der Waals interactions in quasi-2D materials, which leads to significantly better predictions of their crystal structures and lattice thermal conductivities. The improved methodology correctly identifies known binary quasi-2D thermoelectric materials such as Sb2Te3, Bi2Te3, SnSe, SnS, InSe, and In2Se3. As a result, we propose candidate quasi-2D binary materials, a number of which have not been previously considered for thermoelectric applications.

  20. Testing the Use of Implicit Solvent in the Molecular Dynamics Modelling of DNA Flexibility

    NASA Astrophysics Data System (ADS)

    Mitchell, J.; Harris, S.

    DNA flexibility controls packaging, looping and in some cases sequence specific protein binding. Molecular dynamics simulations carried out with a computationally efficient implicit solvent model are potentially a powerful tool for studying larger DNA molecules than can be currently simulated when water and counterions are represented explicitly. In this work we compare DNA flexibility at the base pair step level modelled using an implicit solvent model to that previously determined from explicit solvent simulations and database analysis. Although much of the sequence dependent behaviour is preserved in implicit solvent, the DNA is considerably more flexible when the approximate model is used. In addition we test the ability of the implicit solvent to model stress induced DNA disruptions by simulating a series of DNA minicircle topoisomers which vary in size and superhelical density. When compared with previously run explicit solvent simulations, we find that while the levels of DNA denaturation are similar using both computational methodologies, the specific structural form of the disruptions is different.

  1. A digital waveguide-based approach for Clavinet modeling and synthesis

    NASA Astrophysics Data System (ADS)

    Gabrielli, Leonardo; Välimäki, Vesa; Penttinen, Henri; Squartini, Stefano; Bilbao, Stefan

    2013-12-01

    The Clavinet is an electromechanical musical instrument produced in the mid-twentieth century. As is the case for other vintage instruments, it is subject to aging and requires great effort to be maintained or restored. This paper reports analyses conducted on a Hohner Clavinet D6 and proposes a computational model to faithfully reproduce the Clavinet sound in real time, from tone generation to the emulation of the electronic components. The string excitation signal model is physically inspired and represents a cheap solution in terms of both computational resources and especially memory requirements (compared, e.g., to sample playback systems). Pickups and amplifier models have been implemented which enhance the natural character of the sound with respect to previous work. A model has been implemented on a real-time software platform, Pure Data, capable of a 10-voice polyphony with low latency on an embedded device. Finally, subjective listening tests conducted using the current model are compared to previous tests showing slightly improved results.
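
    As background, the string loop at the heart of such a model is a digital waveguide: a delay line whose length sets the pitch, with a loss/low-pass filter in the feedback path. The following Karplus-Strong-style sketch illustrates only that generic building block (parameters are illustrative; the Clavinet model adds its physically inspired excitation, pickup, and amplifier stages on top):

        import numpy as np

        def waveguide_string(f0, fs=44100, dur=1.0, loss=0.996):
            N = int(fs / f0)                     # loop delay in samples sets the pitch
            line = np.random.uniform(-1, 1, N)   # broadband, pluck-like excitation
            out = np.empty(int(fs * dur))
            for n in range(out.size):
                out[n] = line[n % N]
                # two-point average is the loop low-pass; 'loss' damps the string
                line[n % N] = loss * 0.5 * (line[n % N] + line[(n + 1) % N])
            return out

        tone = waveguide_string(220.0)           # one second of a decaying A3 string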

  2. Summary of synfuel characterization and combustion studies

    NASA Technical Reports Server (NTRS)

    Schultz, D. F.

    1983-01-01

    Combustion component research studies aimed at evolving environmentally acceptable approaches for burning coal-derived fuels for ground power applications were performed at the NASA Lewis Research Center under a program titled the "Critical Research and Support Technology Program" (CRT). The work was funded by the Department of Energy and was performed in four tasks. This report summarizes these tasks, which have all been previously reported; in addition, some previously unreported data from Task 4 is presented. Task 1 consisted of a literature survey aimed at determining the properties of synthetic fuels. This was followed by a computer modeling effort, Task 2, to predict the exhaust emissions resulting from burning coal liquids with various combustion techniques such as lean and rich-lean combustion. The computer predictions were then compared to the results of a flame tube rig, Task 3, in which the fuel properties were varied to simulate coal liquids. Two actual SRC 2 coal liquids were tested in this flame tube task.

  3. A new version of Visual tool for estimating the fractal dimension of images

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Felea, D.; Besliu, C.; Jipa, Al.; Bordeianu, C. C.; Stan, E.; Esanu, T.

    2010-04-01

    This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images (Grossu et al., 2009 [1]). The earlier version was limited to bi-dimensional sets of points stored in bitmap files. The application was extended to work also with comma-separated values files and three-dimensional images.

    New version program summary:
    Program title: Fractal Analysis v02
    Catalogue identifier: AEEG_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 9999
    No. of bytes in distributed program, including test data, etc.: 4 366 783
    Distribution format: tar.gz
    Programming language: MS Visual Basic 6.0
    Computer: PC
    Operating system: MS Windows 98 or later
    RAM: 30 M
    Classification: 14
    Catalogue identifier of previous version: AEEG_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1999
    Does the new version supersede the previous version?: Yes
    Nature of problem: Estimating the fractal dimension of 2D and 3D images.
    Solution method: Optimized implementation of the box-counting algorithm.
    Reasons for new version: The previous version was limited to bitmap image files. The new application was extended in order to work with objects stored in comma-separated values (csv) files. The main advantages are: easier integration with other applications (csv is a widely used, simple text file format); fewer resources consumed and improved performance (only the information of interest, the "black points", is stored); higher resolution (the point coordinates are loaded into Visual Basic double variables [2]); and the possibility of storing three-dimensional objects (e.g. the 3D Sierpinski gasket). In this version the optimized box-counting algorithm [1] was extended to the three-dimensional case.
    Summary of revisions: The application interface was changed from SDI (single document interface) to MDI (multi-document interface). One form was added in order to provide a graphical user interface for the new functionalities (fractal analysis of 2D and 3D images stored in csv files).
    Additional comments: User-friendly graphical interface; easy deployment mechanism.
    Running time: To a first approximation, the algorithm is linear.
    References: [1] I.V. Grossu, C. Besliu, M.V. Rusu, Al. Jipa, C.C. Bordeianu, D. Felea, Comput. Phys. Comm. 180 (2009) 1999-2001. [2] F. Balena, Programming Microsoft Visual Basic 6.0, Microsoft Press, US, 1999.
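
    For reference, the box-counting estimate itself fits in a handful of lines; the sketch below (in Python rather than the program's Visual Basic) counts occupied boxes at several scales and reads the dimension off the log-log slope:

        import numpy as np

        def box_counting_dimension(points, sizes):
            """points: (n, d) array of 2D or 3D coordinates; sizes: box edge lengths."""
            pts = np.asarray(points, dtype=float)
            pts = pts - pts.min(axis=0)          # shift into the positive octant
            counts = [len(np.unique(np.floor(pts / s), axis=0)) for s in sizes]
            # slope of log N(s) versus log(1/s) estimates the fractal dimension
            slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
            return slope

    Applied to a Sierpinski-gasket point cloud, the estimate approaches the theoretical log 3 / log 2 ≈ 1.585 as the point set grows.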

  4. Using Palm Technology in Participatory Simulations of Complex Systems: A New Take on Ubiquitous and Accessible Mobile Computing

    NASA Astrophysics Data System (ADS)

    Klopfer, Eric; Yoon, Susan; Perry, Judy

    2005-09-01

    This paper reports on teachers' perceptions of the educational affordances of a handheld application called Participatory Simulations. It presents evidence from five cases representing each of the populations who work with these computational tools. Evidence across multiple data sources yield similar results to previous research evaluations of handheld activities with respect to enhancing motivation, engagement and self-directed learning. Three additional themes are discussed that provide insight into understanding curricular applicability of Participatory Simulations that suggest a new take on ubiquitous and accessible mobile computing. These themes generally point to the multiple layers of social and cognitive flexibility intrinsic to their design: ease of adaptation to subject-matter content knowledge and curricular integration; facility in attending to teacher-individualized goals; and encouraging the adoption of learner-centered strategies.

  5. Unmet needs for analyzing biological big data: A survey of 704 NSF principal investigators

    PubMed Central

    2017-01-01

    In a 2016 survey of 704 National Science Foundation (NSF) Biological Sciences Directorate principal investigators (BIO PIs), nearly 90% indicated they are currently or will soon be analyzing large data sets. BIO PIs considered a range of computational needs important to their work, including high performance computing (HPC), bioinformatics support, multistep workflows, updated analysis software, and the ability to store, share, and publish data. Previous studies in the United States and Canada emphasized infrastructure needs. However, BIO PIs said the most pressing unmet needs are training in data integration, data management, and scaling analyses for HPC—acknowledging that data science skills will be required to build a deeper understanding of life. This portends a growing data knowledge gap in biology and challenges institutions and funding agencies to redouble their support for computational training in biology. PMID:29049281

  6. Unmet needs for analyzing biological big data: A survey of 704 NSF principal investigators.

    PubMed

    Barone, Lindsay; Williams, Jason; Micklos, David

    2017-10-01

    In a 2016 survey of 704 National Science Foundation (NSF) Biological Sciences Directorate principal investigators (BIO PIs), nearly 90% indicated they are currently or will soon be analyzing large data sets. BIO PIs considered a range of computational needs important to their work, including high performance computing (HPC), bioinformatics support, multistep workflows, updated analysis software, and the ability to store, share, and publish data. Previous studies in the United States and Canada emphasized infrastructure needs. However, BIO PIs said the most pressing unmet needs are training in data integration, data management, and scaling analyses for HPC-acknowledging that data science skills will be required to build a deeper understanding of life. This portends a growing data knowledge gap in biology and challenges institutions and funding agencies to redouble their support for computational training in biology.

  7. MMA-EoS: A Computational Framework for Mineralogical Thermodynamics

    NASA Astrophysics Data System (ADS)

    Chust, T. C.; Steinle-Neumann, G.; Dolejš, D.; Schuberth, B. S. A.; Bunge, H.-P.

    2017-12-01

    We present a newly developed software framework, MMA-EoS, that evaluates phase equilibria and thermodynamic properties of multicomponent systems by Gibbs energy minimization, with application to mantle petrology. The code is versatile in terms of the equation-of-state and mixing properties and allows for the computation of properties of single phases, solution phases, and multiphase aggregates. Currently, the open program distribution contains equation-of-state formulations widely used, that is, Caloric-Murnaghan, Caloric-Modified-Tait, and Birch-Murnaghan-Mie-Grüneisen-Debye models, with published databases included. Through its modular design and easily scripted database, MMA-EoS can readily be extended with new formulations of equations-of-state and changes or extensions to thermodynamic data sets. We demonstrate the application of the program by reproducing and comparing physical properties of mantle phases and assemblages with previously published work and experimental data, successively increasing complexity, up to computing phase equilibria of six-component compositions. Chemically complex systems allow us to trace the budget of minor chemical components in order to explore whether they lead to the formation of new phases or extend stability fields of existing ones. Self-consistently computed thermophysical properties for a homogeneous mantle and a mechanical mixture of slab lithologies show no discernible differences that require a heterogeneous mantle structure as has been suggested previously. Such examples illustrate how thermodynamics of mantle mineralogy can advance the study of Earth's interior.
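
    In the simplest setting the minimization step can be made concrete: with fixed-composition candidate phases, finding the stable assemblage at a given pressure and temperature reduces to a linear program over phase amounts. The sketch below uses toy Gibbs energies and a two-oxide bulk composition (not values from the MMA-EoS database); solution phases make the real problem nonlinear:

        import numpy as np
        from scipy.optimize import linprog

        g = np.array([-1500.0, -1400.0, -3000.0])   # molar G of MgO, SiO2, MgSiO3 (toy)
        A = np.array([[1, 0, 1],                    # MgO balance
                      [0, 1, 1]])                   # SiO2 balance
        b = np.array([1.0, 1.0])                    # bulk composition: 1 MgO + 1 SiO2

        # minimize total Gibbs energy g.n subject to A n = b, n >= 0
        res = linprog(g, A_eq=A, b_eq=b, bounds=[(0, None)] * 3)
        print(res.x)   # -> all MgSiO3, since it lowers G relative to the oxide mixture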

  8. The 1989 NASA-ASEE Summer Faculty Fellowship Program in Aeronautics and Research

    NASA Technical Reports Server (NTRS)

    Boroson, Harold R.; Soffen, Gerald A.; Fan, Dah-Nien

    1989-01-01

    The 1989 NASA-ASEE Summer Faculty Fellowship Program at the Goddard Space Flight Center was conducted from 5 Jun. 1989 to 11 Aug. 1989. The research projects were previously assigned. Work summaries are presented for the following topics: optical properties data base; particle acceleration; satellite imagery; telemetry workstation; spectroscopy; image processing; stellar spectra; optical radar; robotics; atmospheric composition; semiconductors; computer networks; remote sensing; software engineering; solar flares; and glaciers.

  9. MoPCoM Methodology: Focus on Models of Computation

    NASA Astrophysics Data System (ADS)

    Koudri, Ali; Champeau, Joël; Le Lann, Jean-Christophe; Leilde, Vincent

    Today, the development of Real-Time Embedded Systems has to face new challenges. On the one hand, Time-To-Market constraints require a reliable development process allowing quick design space exploration. On the other hand, rapidly developing technology, as described by Moore's law, requires techniques to handle the resulting productivity gap. In a previous paper, we presented our Model-Based Engineering methodology addressing those issues. In this paper, we focus on Models of Computation design and analysis. We illustrate our approach on a Cognitive Radio System development implemented on an FPGA. This work is part of the MoPCoM research project gathering academic and industrial organizations (http://www.mopcom.fr).

  10. Interface Generation and Compositional Verification in JavaPathfinder

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Pasareanu, Corina

    2009-01-01

    We present a novel algorithm for interface generation of software components. Given a component, our algorithm uses learning techniques to compute a permissive interface representing legal usage of the component. Unlike our previous work, this algorithm does not require knowledge about the component's environment. Furthermore, in contrast to other related approaches, our algorithm computes permissive interfaces even in the presence of non-determinism in the component. Our algorithm is implemented in the JavaPathfinder model checking framework for UML statechart components. We have also added support for automated assume-guarantee style compositional verification in JavaPathfinder, using component interfaces. We report on the application of the presented approach to the generation of interfaces for flight software components.

  11. Influence of wall couple stress in MHD flow of a micropolar fluid in a porous medium with energy and concentration transfer

    NASA Astrophysics Data System (ADS)

    Khalid, Asma; Khan, Ilyas; Khan, Arshad; Shafie, Sharidan

    2018-06-01

    The intention here is to investigate the effects of wall couple stress with energy and concentration transfer in magnetohydrodynamic (MHD) flow of a micropolar fluid embedded in a porous medium. The mathematical model consists of a set of linear partial differential equations in conservation form. Laplace transforms and the convolution technique are used to compute exact solutions of the velocity, microrotation, temperature, and concentration equations. Numerical values of the skin friction, wall couple stress, Nusselt number, and Sherwood number are also computed. The effects of the significant variables on the physical quantities are discussed graphically. Comparison with previously published work, in the limiting sense, shows excellent agreement.

  12. Computing arbitrary defect structures on arbitrary lattices on arbitrary geometries from arbitrary energies

    NASA Astrophysics Data System (ADS)

    Allen, Brian; Travesset, Alex

    2004-03-01

    Dislocations and disclinations play a fundamental role in the properties of two dimensional crystals. In this talk, it will be shown that a general computational framework can be developed by combining previous work of Seung and Nelson* and modern advances in object-oriented design. This allows separating the problem into independent classes such as: geometry (sphere, plane, torus..), lattice (triangular, square, etc..), type of defect (dislocation, disclinations, etc..), boundary conditions, type of order (crystalline, hexatic) or energy functional. As applications, the ground state of crystals in several geometries will be discussed. Experimental examples with colloidal particles will be shown. *S. Seung and D. Nelson, Phys. Rev. A 38, 1005 (1988)

  13. Conformal blocks from Wilson lines with loop corrections

    NASA Astrophysics Data System (ADS)

    Hikida, Yasuaki; Uetoko, Takahiro

    2018-04-01

    We compute the conformal blocks of the Virasoro minimal model or its WN extension with large central charge from Wilson line networks in a Chern-Simons theory including loop corrections. In our previous work, we offered a prescription to regularize divergences from loops attached to Wilson lines. In this paper, we generalize our method with the prescription by dealing with more general operators for N =3 and apply it to the identity W3 block. We further compute general light-light blocks and heavy-light correlators for N =2 with the Wilson line method and compare the results with known ones obtained using a different prescription. We briefly discuss general W3 blocks.

  14. Experimental Demonstration of a Self-organized Architecture for Emerging Grid Computing Applications on OBS Testbed

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Hong, Xiaobin; Wu, Jian; Lin, Jintong

    As Grid computing continues to gain popularity in industry and the research community, it also attracts more attention at the customer level. The large number of users and the high frequency of job requests in the consumer market make scalability challenging. Clearly, current Client/Server (C/S)-based architectures will become unfeasible for supporting large-scale Grid applications due to their poor scalability and poor fault-tolerance. In this paper, based on our previous works [1, 2], a novel self-organized architecture realizing a highly scalable and flexible platform for Grids is proposed. Experimental results show that this architecture is suitable and efficient for consumer-oriented Grids.

  15. Energy-density field approach for low- and medium-frequency vibroacoustic analysis of complex structures using a statistical computational model

    NASA Astrophysics Data System (ADS)

    Kassem, M.; Soize, C.; Gagliardini, L.

    2009-06-01

    In this paper, an energy-density field approach applied to the vibroacoustic analysis of complex industrial structures in the low- and medium-frequency ranges is presented. This approach uses a statistical computational model. The analyzed system consists of an automotive vehicle structure coupled with its internal acoustic cavity. The objective of this paper is to make use of the statistical properties of the frequency response functions of the vibroacoustic system observed from previous experimental and numerical work. The frequency response functions are expressed in terms of a dimensionless matrix which is estimated using the proposed energy approach. Using this dimensionless matrix, a simplified vibroacoustic model is proposed.

  16. Computation of Kinetics for the Hydrogen/Oxygen System Using the Thermodynamic Method

    NASA Technical Reports Server (NTRS)

    Marek, C. John

    1996-01-01

    A new method for predicting chemical rate constants using thermodynamics has been applied to the hydrogen/oxygen system. This method is based on using the gradient of the Gibbs free energy and a single proportionality constant D to determine the kinetic rate constants. Using this method, the rate constants for any gas-phase reaction can be computed from thermodynamic properties. A modified reaction set for the H/O system is determined. All of the third-body efficiencies M are taken to be unity. Good agreement was obtained between the thermodynamic method and the experimental shock tube data. In addition, the hydrogen bromide experimental data presented in previous work is recomputed with M's of unity.
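
    The functional form used in the report is not reproduced here, but the thermodynamic constraint at its core can be illustrated: thermodynamics fixes the ratio of forward to reverse rate constants through the Gibbs energy of reaction, leaving a single overall scale to be set by a proportionality constant D. The symmetric split below is one arbitrary illustrative choice, not the report's prescription:

        import numpy as np

        R = 8.314462618   # J/(mol K)

        def rate_constants(delta_g_rxn, T, D):
            """delta_g_rxn: Gibbs energy of reaction (J/mol) at temperature T (K)."""
            K_eq = np.exp(-delta_g_rxn / (R * T))   # equilibrium constant
            k_f = D * np.sqrt(K_eq)                 # symmetric split (illustrative)
            k_r = k_f / K_eq                        # guarantees k_f / k_r = K_eq
            return k_f, k_r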

  17. Automated quantitative 3D analysis of aorta size, morphology, and mural calcification distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurugol, Sila, E-mail: sila.kurugol@childrens.harvard.edu; Come, Carolyn E.; Diaz, Alejandro A.

    Purpose: The purpose of this work is to develop a fully automated pipeline to compute aorta morphology and calcification measures in large cohorts of CT scans that can be used to investigate the potential of these measures as imaging biomarkers of cardiovascular disease. Methods: The first step of the automated pipeline is aorta segmentation. The algorithm the authors propose first detects an initial aorta boundary by exploiting cross-sectional circularity of aorta in axial slices and aortic arch in reformatted oblique slices. This boundary is then refined by a 3D level-set segmentation that evolves the boundary to the location of nearby edges. The authors then detect the aortic calcifications with thresholding and filter out the false positive regions due to nearby high intensity structures based on their anatomical location. The authors extract the centerline and oblique cross sections of the segmented aortas and compute the aorta morphology and calcification measures of the first 2500 subjects from COPDGene study. These measures include volume and number of calcified plaques and measures of vessel morphology such as average cross-sectional area, tortuosity, and arch width. Results: The authors computed the agreement between the algorithm and expert segmentations on 45 CT scans and obtained a closest point mean error of 0.62 ± 0.09 mm and a Dice coefficient of 0.92 ± 0.01. The calcification detection algorithm resulted in an improved true positive detection rate of 0.96 compared to previous work. The measurements of aorta size agreed with the measurements reported in previous work. The initial results showed associations of aorta morphology with calcification and with aging. These results may indicate aorta stiffening and unwrapping with calcification and aging. Conclusions: The authors have developed an objective tool to assess aorta morphology and aortic calcium plaques on CT scans that may be used to provide information about the presence of cardiovascular disease and its clinical impact in smokers.
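
    The two agreement figures quoted above are standard and easy to restate in code; the sketch below computes them from binary masks, with the voxel spacing as an assumed input (this is an illustration, not the authors' pipeline):

        import numpy as np
        from scipy.ndimage import binary_erosion, distance_transform_edt

        def dice(a, b):
            a, b = a.astype(bool), b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def closest_point_mean_error(a, b, spacing):
            """Mean distance (mm) from the surface voxels of mask a to mask b."""
            dist_to_b = distance_transform_edt(~b.astype(bool), sampling=spacing)
            surface_a = a.astype(bool) & ~binary_erosion(a.astype(bool))
            return dist_to_b[surface_a].mean()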

  18. Automated quantitative 3D analysis of aorta size, morphology, and mural calcification distributions.

    PubMed

    Kurugol, Sila; Come, Carolyn E; Diaz, Alejandro A; Ross, James C; Kinney, Greg L; Black-Shinn, Jennifer L; Hokanson, John E; Budoff, Matthew J; Washko, George R; San Jose Estepar, Raul

    2015-09-01

    The purpose of this work is to develop a fully automated pipeline to compute aorta morphology and calcification measures in large cohorts of CT scans that can be used to investigate the potential of these measures as imaging biomarkers of cardiovascular disease. The first step of the automated pipeline is aorta segmentation. The algorithm the authors propose first detects an initial aorta boundary by exploiting cross-sectional circularity of aorta in axial slices and aortic arch in reformatted oblique slices. This boundary is then refined by a 3D level-set segmentation that evolves the boundary to the location of nearby edges. The authors then detect the aortic calcifications with thresholding and filter out the false positive regions due to nearby high intensity structures based on their anatomical location. The authors extract the centerline and oblique cross sections of the segmented aortas and compute the aorta morphology and calcification measures of the first 2500 subjects from COPDGene study. These measures include volume and number of calcified plaques and measures of vessel morphology such as average cross-sectional area, tortuosity, and arch width. The authors computed the agreement between the algorithm and expert segmentations on 45 CT scans and obtained a closest point mean error of 0.62 ± 0.09 mm and a Dice coefficient of 0.92 ± 0.01. The calcification detection algorithm resulted in an improved true positive detection rate of 0.96 compared to previous work. The measurements of aorta size agreed with the measurements reported in previous work. The initial results showed associations of aorta morphology with calcification and with aging. These results may indicate aorta stiffening and unwrapping with calcification and aging. The authors have developed an objective tool to assess aorta morphology and aortic calcium plaques on CT scans that may be used to provide information about the presence of cardiovascular disease and its clinical impact in smokers.

  19. Automated quantitative 3D analysis of aorta size, morphology, and mural calcification distributions

    PubMed Central

    Kurugol, Sila; Come, Carolyn E.; Diaz, Alejandro A.; Ross, James C.; Kinney, Greg L.; Black-Shinn, Jennifer L.; Hokanson, John E.; Budoff, Matthew J.; Washko, George R.; San Jose Estepar, Raul

    2015-01-01

    Purpose: The purpose of this work is to develop a fully automated pipeline to compute aorta morphology and calcification measures in large cohorts of CT scans that can be used to investigate the potential of these measures as imaging biomarkers of cardiovascular disease. Methods: The first step of the automated pipeline is aorta segmentation. The algorithm the authors propose first detects an initial aorta boundary by exploiting cross-sectional circularity of aorta in axial slices and aortic arch in reformatted oblique slices. This boundary is then refined by a 3D level-set segmentation that evolves the boundary to the location of nearby edges. The authors then detect the aortic calcifications with thresholding and filter out the false positive regions due to nearby high intensity structures based on their anatomical location. The authors extract the centerline and oblique cross sections of the segmented aortas and compute the aorta morphology and calcification measures of the first 2500 subjects from COPDGene study. These measures include volume and number of calcified plaques and measures of vessel morphology such as average cross-sectional area, tortuosity, and arch width. Results: The authors computed the agreement between the algorithm and expert segmentations on 45 CT scans and obtained a closest point mean error of 0.62 ± 0.09 mm and a Dice coefficient of 0.92 ± 0.01. The calcification detection algorithm resulted in an improved true positive detection rate of 0.96 compared to previous work. The measurements of aorta size agreed with the measurements reported in previous work. The initial results showed associations of aorta morphology with calcification and with aging. These results may indicate aorta stiffening and unwrapping with calcification and aging. Conclusions: The authors have developed an objective tool to assess aorta morphology and aortic calcium plaques on CT scans that may be used to provide information about the presence of cardiovascular disease and its clinical impact in smokers. PMID:26328995

  20. Five- and six-electron harmonium atoms: Highly accurate electronic properties and their application to benchmarking of approximate 1-matrix functionals

    NASA Astrophysics Data System (ADS)

    Cioslowski, Jerzy; Strasburger, Krzysztof

    2018-04-01

    Electronic properties of several states of the five- and six-electron harmonium atoms are obtained from large-scale calculations employing explicitly correlated basis functions. The high accuracy of the computed energies (including their components), natural spinorbitals, and their occupation numbers makes them suitable for testing, calibration, and benchmarking of approximate formalisms of quantum chemistry and solid state physics. In the case of the five-electron species, the availability of the new data for a wide range of the confinement strengths ω allows for confirmation and generalization of the previously reached conclusions concerning the performance of the presently known approximations for the electron-electron repulsion energy in terms of the 1-matrix that are at heart of the density matrix functional theory (DMFT). On the other hand, the properties of the three low-lying states of the six-electron harmonium atom, computed at ω = 500 and ω = 1000, uncover deficiencies of the 1-matrix functionals not revealed by previous studies. In general, the previously published assessment of the present implementations of DMFT being of poor accuracy is found to hold. Extending the present work to harmonically confined systems with even more electrons is most likely counterproductive as the steep increase in computational cost required to maintain sufficient accuracy of the calculated properties is not expected to be matched by the benefits of additional information gathered from the resulting benchmarks.

  1. Computational fluid mechanics

    NASA Technical Reports Server (NTRS)

    Hassan, H. A.

    1993-01-01

    Two papers are included in this progress report. In the first, the compressible Navier-Stokes equations have been used to compute leading edge receptivity of boundary layers over parabolic cylinders. Natural receptivity at the leading edge was simulated and Tollmien-Schlichting waves were observed to develop in response to an acoustic disturbance, applied through the farfield boundary conditions. To facilitate comparison with previous work, all computations were carried out at a free stream Mach number of 0.3. The spatial and temporal behavior of the flowfields are calculated through the use of finite volume algorithms and Runge-Kutta integration. The results are dominated by strong decay of the Tollmien-Schlichting wave due to the presence of the mean flow favorable pressure gradient. The effects of numerical dissipation, forcing frequency, and nose radius are studied. The Strouhal number is shown to have the greatest effect on the unsteady results. In the second paper, a transition model for low-speed flows, previously developed by Young et al., which incorporates first-mode (Tollmien-Schlichting) disturbance information from linear stability theory has been extended to high-speed flow by incorporating the effects of second mode disturbances. The transition model is incorporated into a Reynolds-averaged Navier-Stokes solver with a one-equation turbulence model. Results using a variable turbulent Prandtl number approach demonstrate that the current model accurately reproduces available experimental data for first and second-mode dominated transitional flows. The performance of the present model shows significant improvement over previous transition modeling attempts.

  2. Local structure of subcellular input retinotopy in an identified visual interneuron

    NASA Astrophysics Data System (ADS)

    Zhu, Ying; Gabbiani, Fabrizio; Fabrizio Gabbiani's lab Team

    2015-03-01

    How does the spatial layout of the projections that a neuron receives impact its synaptic integration and computation? What is the mapping topography of subcellular wiring at the single neuron level? The LGMD (lobula giant movement detector) neuron in the locust is an identified neuron that responds preferentially to objects approaching on a collision course. It receives excitatory inputs from the entire visual hemifield through calcium-permeable nicotinic acetylcholine receptors. Previous work showed that the projection from the locust compound eye to the LGMD preserved retinotopy down to the level of a single ommatidium (facet) by employing in vivo widefield calcium imaging. Because widefield imaging relies on global excitation of the preparation and has a relatively low resolution, previous work could not investigate this retinotopic mapping at the level of individual thin dendritic branches. Our current work employs a custom-built two-photon microscope with sub-micron resolution in conjunction with a single-facet stimulation setup that delivers visual stimuli to a single ommatidium of the locust, allowing us to explore the local structure of this retinotopy at a finer level. We thank NIMH for funding this research.

  3. Ensuring critical event sequences in high consequence computer based systems as inspired by path expressions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kidd, M.E.C.

    1997-02-01

    The goal of our work is to provide a high level of confidence that critical software driven event sequences are maintained in the face of hardware failures, malevolent attacks and harsh or unstable operating environments. This will be accomplished by providing dynamic fault management measures directly to the software developer and to their varied development environments. The methodology employed here is inspired by previous work in path expressions. This paper discusses the perceived problems, a brief overview of path expressions, the proposed methods, and a discussion of the differences between the proposed methods and traditional path expression usage and implementation.

  4. Security in MANETs using reputation-adjusted routing

    NASA Astrophysics Data System (ADS)

    Ondi, Attila; Hoffman, Katherine; Perez, Carlos; Ford, Richard; Carvalho, Marco; Allen, William

    2009-04-01

    Mobile Ad-Hoc Networks enable communication in various dynamic environments, including military combat operations. Their open and shared communication medium enables new forms of attack that are not applicable to traditional wired networks. Traditional security mechanisms and defense techniques are not prepared to cope with the new attacks, and the lack of central authorities makes identity verification difficult. This work extends our previous work in the Biologically Inspired Tactical Security Infrastructure to provide a reputation-based weighting mechanism for link-state routing protocols to protect the network from attackers that are corrupting legitimate network traffic. Our results indicate that the approach is successful in routing network traffic around compromised computers.

  5. User Manual for the NASA Glenn Ice Accretion Code LEWICE: Version 2.0

    NASA Technical Reports Server (NTRS)

    Wright, William B.

    1999-01-01

    A research project is underway at NASA Glenn to produce a computer code which can accurately predict ice growth under a wide range of meteorological conditions for any aircraft surface. This report will present a description of the code inputs and outputs from version 2.0 of this code, which is called LEWICE. This version differs from previous releases due to its robustness and its ability to reproduce results accurately for different spacing and time step criteria across computing platforms. It also differs in the extensive effort undertaken to compare the results against the database of ice shapes which have been generated in the NASA Glenn Icing Research Tunnel (IRT). This report will only describe the features of the code related to the use of the program. The report will not describe the inner working of the code or the physical models used. This information is available in the form of several unpublished documents which will be collectively referred to as a Programmers Manual for LEWICE in this report. These reports are intended as an update/replacement for all previous user manuals of LEWICE. In addition to describing the changes and improvements made for this version, information from previous manuals may be duplicated so that the user will not need to consult previous manuals to use this code.

  6. A facilitative effect of negative affective valence on working memory.

    PubMed

    Gotoh, Fumiko; Kikuchi, Tadashi; Olofsson, Ulrich

    2010-06-01

    Previous studies have shown that negatively valenced information impaired working memory performance due to an attention-capturing effect. The present study examined whether negative valence could also facilitate working memory. Affective words (negative, neutral, positive) were used as retro-cues in a working memory task that required participants to remember colors at different spatial locations on a computer screen. Following the cue, a target detection task was used to either shift attention to a different location or keep attention at the same location as the retro-cue. Finally, participants were required to discriminate the cued color from a set of distractors. It was found that negative cues yielded shorter response times (RTs) in the attention-shift condition and longer RTs in the attention-stay condition, compared with neutral and positive cues. The results suggest that negative affective valence may enhance working memory performance (RTs), provided that attention can be disengaged.

  7. New technologies for advanced three-dimensional optimum shape design in aeronautics

    NASA Astrophysics Data System (ADS)

    Dervieux, Alain; Lanteri, Stéphane; Malé, Jean-Michel; Marco, Nathalie; Rostaing-Schmidt, Nicole; Stoufflet, Bruno

    1999-05-01

    The analysis of complex flows around realistic aircraft geometries is becoming more and more predictive. In order to obtain this result, the complexity of flow analysis codes has been constantly increasing, involving more refined fluid models and sophisticated numerical methods. These codes can only run on top computers, exhausting their memory and CPU capabilities. It is, therefore, difficult to introduce the best analysis codes in a shape optimization loop: most previous works in the optimum shape design field used only simplified analysis codes. Moreover, as the most popular optimization methods are the gradient-based ones, the more complex the flow solver, the more difficult it is to compute the sensitivity code. However, emerging technologies are contributing to make such an ambitious project, of including a state-of-the-art flow analysis code into an optimization loop, feasible. Among those technologies, there are three important issues that this paper wishes to address: shape parametrization, automated differentiation and parallel computing. Shape parametrization allows faster optimization by reducing the number of design variables; in this work, it relies on a hierarchical multilevel approach. The sensitivity code can be obtained using automated differentiation. The automated approach is based on software manipulation tools, which allow the differentiation to be quick and the resulting differentiated code to be rather fast and reliable. In addition, the parallel algorithms implemented in this work allow the resulting optimization software to run on increasingly larger geometries.

  8. Computer and visual display terminals (VDT) vision syndrome (CVDTS).

    PubMed

    Parihar, J K S; Jain, Vaibhav Kumar; Chaturvedi, Piyush; Kaushik, Jaya; Jain, Gunjan; Parihar, Ashwini K S

    2016-07-01

    Computer and visual display terminals have become an essential part of modern lifestyle. The use of these devices has made our life simple in household work as well as in offices. However, the prolonged use of these devices is not without complications. Computer and visual display terminals syndrome is a constellation of ocular as well as extraocular symptoms associated with prolonged use of visual display terminals. This syndrome is gaining importance in this modern era because of the widespread use of technologies in day-to-day life. It is associated with asthenopic symptoms, visual blurring, dry eyes, musculoskeletal symptoms such as neck pain, back pain, shoulder pain, carpal tunnel syndrome, psychosocial factors, venous thromboembolism, shoulder tendonitis, and elbow epicondylitis. Proper identification of symptoms and causative factors is necessary for accurate diagnosis and management. This article focuses on the various aspects of the computer and visual display terminals syndrome described in the previous literature. Further research is needed for a better understanding of the complex pathophysiology and management.

  9. On the computation of molecular surface correlations for protein docking using fourier techniques.

    PubMed

    Sakk, Eric

    2007-08-01

    The computation of surface correlations using a variety of molecular models has been applied to the unbound protein docking problem. Because of the computational complexity involved in examining all possible molecular orientations, the fast Fourier transform (FFT) (a fast numerical implementation of the discrete Fourier transform (DFT)) is generally applied to minimize the number of calculations. This approach is rooted in the convolution theorem which allows one to inverse transform the product of two DFTs in order to perform the correlation calculation. However, such a DFT calculation results in a cyclic or "circular" correlation which, in general, does not lead to the same result as the linear correlation desired for the docking problem. In this work, we provide computational bounds for constructing molecular models used in the molecular surface correlation problem. The derived bounds are then shown to be consistent with various intuitive guidelines previously reported in the protein docking literature. Finally, these bounds are applied to different molecular models in order to investigate their effect on the correlation calculation.
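
    The circular-versus-linear distinction is easy to demonstrate (an illustrative NumPy sketch, not the paper's code): the raw DFT product wraps around the grid, while padding each axis to at least na + nb - 1 samples recovers the linear correlation that docking requires.

        import numpy as np

        def circular_correlation(a, b):
            # Raw DFT product: the correlation wraps around the grid edges.
            return np.fft.ifftn(np.fft.fftn(a) * np.conj(np.fft.fftn(b))).real

        def linear_correlation(a, b):
            # Zero-pad every axis to na + nb - 1 so no wrap-around can occur.
            shape = tuple(sa + sb - 1 for sa, sb in zip(a.shape, b.shape))
            A = np.fft.fftn(a, s=shape)
            B = np.fft.fftn(b, s=shape)
            return np.fft.ifftn(A * np.conj(B)).real

        rng = np.random.default_rng(0)
        a, b = rng.random((8, 8, 8)), rng.random((8, 8, 8))
        # Without padding the two computations disagree:
        print(np.allclose(circular_correlation(a, b),
                          linear_correlation(a, b)[:8, :8, :8]))  # False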

  10. The application of contraction theory to an iterative formulation of electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Brand, J. C.; Kauffman, J. F.

    1985-01-01

    Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures and a computational method for ensuring convergence is developed. A short history of spectral (or k-space) formulation is presented with an emphasis on application to periodic surfaces. To ensure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined utilizing contraction theory and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems including an infinite grating of thin wires with the solution data compared to previous works.
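
    The idea can be illustrated with a generic Banach fixed-point iteration that monitors an empirical contraction factor (a hypothetical helper, not the paper's contraction corrector method):

        import math

        def fixed_point(T, x0, tol=1e-12, max_iter=500):
            # Banach's theorem: if T is a contraction (factor q < 1), the
            # iterates x_{k+1} = T(x_k) converge geometrically to the fixed point.
            x_prev, x = x0, T(x0)
            for _ in range(max_iter):
                if abs(x - x_prev) < tol:
                    return x
                x_next = T(x)
                q = abs(x_next - x) / abs(x - x_prev)  # empirical contraction factor
                if q >= 1.0:
                    raise RuntimeError("map does not appear to be contracting")
                x_prev, x = x, x_next
            raise RuntimeError("no convergence within max_iter")

        # Example: x = cos(x) is a contraction near its fixed point (~0.739).
        print(fixed_point(math.cos, 1.0))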

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albeverio, Sergio; Chen Kai; Fei Shaoming

    A necessary separability criterion that relates the structures of the total density matrix and its reductions is given. The method used is based on the realignment method [K. Chen and L. A. Wu, Quant. Inf. Comput. 3, 193 (2003)]. The separability criterion naturally generalizes the reduction separability criterion introduced independently in the previous work [M. Horodecki and P. Horodecki, Phys. Rev. A 59, 4206 (1999) and N. J. Cerf, C. Adami, and R. M. Gingrich, Phys. Rev. A 60, 898 (1999)]. In special cases, it recovers the previous reduction criterion and the recent generalized partial transposition criterion [K. Chen and L. A. Wu, Phys. Lett. A 306, 14 (2002)]. The criterion involves only simple matrix manipulations and can therefore be easily applied.

  12. Fast globally optimal segmentation of cells in fluorescence microscopy images.

    PubMed

    Bergeest, Jan-Philip; Rohr, Karl

    2011-01-01

    Accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression in high-throughput screening applications. We propose a new approach for segmenting cell nuclei which is based on active contours and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We also suggest a numerical approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images of different cell types. We have also performed a quantitative comparison with previous segmentation approaches.
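
    As a point of comparison, the snippet below is a deliberately simple initialization-free baseline (global Otsu threshold plus connected-component labeling with scikit-image), not the authors' convex active-contour functional; it merely illustrates the appeal of segmentations whose result does not depend on an initial contour.

        import numpy as np
        from skimage.filters import gaussian, threshold_otsu
        from skimage.measure import label, regionprops

        def segment_nuclei(image):
            smoothed = gaussian(image, sigma=2)           # suppress noise
            mask = smoothed > threshold_otsu(smoothed)    # global threshold
            labels = label(mask)                          # one id per nucleus
            return labels, [r.area for r in regionprops(labels)]

        # Synthetic test image with two bright "nuclei" on a noisy background.
        rng = np.random.default_rng(0)
        img = rng.normal(0.1, 0.05, (128, 128))
        img[30:50, 30:50] += 0.8
        img[80:100, 60:80] += 0.8
        labels, areas = segment_nuclei(img)
        print(len(areas), areas)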

  13. Achieving TASAR Operational Readiness

    NASA Technical Reports Server (NTRS)

    Wing, David J.

    2015-01-01

    NASA has been developing and testing the Traffic Aware Strategic Aircrew Requests (TASAR) concept for aircraft operations featuring a NASA-developed cockpit automation tool, the Traffic Aware Planner (TAP), which computes traffic/hazard-compatible route changes to improve flight efficiency. The TAP technology is anticipated to save fuel and flight time and thereby provide immediate and pervasive benefits to the aircraft operator, as well as improving flight schedule compliance and passenger comfort and reducing pilot and controller workload. Previous work has indicated the potential for significant benefits for TASAR-equipped aircraft, and a flight trial of the TAP software application in the National Airspace System has demonstrated its technical viability. This paper reviews previous and ongoing activities to prepare TASAR for operational use.

  14. Integrating Retraction Modeling Into an Atlas-Based Framework for Brain Shift Prediction

    PubMed Central

    Chen, Ishita; Ong, Rowena E.; Simpson, Amber L.; Sun, Kay; Thompson, Reid C.

    2015-01-01

    In recent work, an atlas-based statistical model for brain shift prediction, which accounts for uncertainty in the intraoperative environment, has been proposed. Previous work reported in the literature using this technique did not account for local deformation caused by surgical retraction. It is challenging to precisely localize the retractor location prior to surgery and the retractor is often moved in the course of the procedure. This paper proposes a technique that involves computing the retractor-induced brain deformation in the operating room through an active model solve and linearly superposing the solution with the precomputed deformation atlas. As a result, the new method takes advantage of the atlas-based framework’s accounting for uncertainties while also incorporating the effects of retraction with minimal intraoperative computing. This new approach was tested using simulation and phantom experiments. The results showed an improvement in average shift correction from 50% (ranging from 14 to 81%) for gravity atlas alone to 80% using the active solve retraction component (ranging from 73 to 85%). This paper presents a novel yet simple way to integrate retraction into the atlas-based brain shift computation framework. PMID:23864146

  15. Computational approaches for predicting biomedical research collaborations.

    PubMed

    Zhang, Qing; Yu, Hong

    2014-01-01

    Biomedical research is increasingly collaborative, and successful collaborations often produce high impact work. Computational approaches can be developed for automatically predicting biomedical research collaborations. Previous work on collaboration prediction mainly explored the topological structures of research collaboration networks, leaving out rich semantic information from the publications themselves. In this paper, we propose supervised machine learning approaches to predict research collaborations in the biomedical field. We explored both the semantic features extracted from author research interest profiles and the author network topological features. We found that the most informative semantic features for author collaborations are related to research interest, including similarity of out-citing citations and similarity of abstracts. Of the four supervised machine learning models (naïve Bayes, naïve Bayes multinomial, SVMs, and logistic regression), the best-performing model is logistic regression, with an ROC ranging from 0.766 to 0.980 on different datasets. To our knowledge we are the first to study in depth how research interests and productivity can be used for collaboration prediction. Our approach is computationally efficient, scalable and yet simple to implement. The datasets of this study are available at https://github.com/qingzhanggithub/medline-collaboration-datasets.
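
    A minimal sketch of the semantic-feature idea, with toy profiles standing in for the authors' MEDLINE data: TF-IDF research-interest profiles yield a pairwise similarity that feeds a logistic-regression link predictor.

        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics.pairwise import cosine_similarity

        # Toy research-interest profiles (one blob of abstract text per author).
        profiles = [
            "protein folding molecular dynamics simulation",
            "molecular dynamics free energy protein",
            "natural language processing clinical text mining",
            "clinical notes text mining information extraction",
        ]
        sim = cosine_similarity(TfidfVectorizer().fit_transform(profiles))

        # Candidate author pairs with a known outcome (1 = collaborated).
        pairs = [(0, 1), (2, 3), (0, 2), (1, 3)]
        y = np.array([1, 1, 0, 0])
        X = np.array([[sim[i, j]] for i, j in pairs])

        clf = LogisticRegression().fit(X, y)
        print(clf.predict_proba([[sim[0, 3]]])[:, 1])  # collaboration probability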

  16. Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing.

    PubMed

    Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng

    2014-10-01

    A push-based database management system (DBMS) is a new type of data processing software that streams large volumes of data to concurrent query operators. The high data rate of such systems requires large computing power provided by the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query processing operations and enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a premise in the design of G-SDMS. With NVIDIA's CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA stream. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize the main kernel scheduling disciplines in it. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels.
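
    To give a flavor of the resource-occupancy reasoning such models start from, here is a back-of-envelope occupancy estimate; the hardware limits below are illustrative placeholders rather than any particular GPU's datasheet values, and the paper's actual model is considerably richer.

        def occupancy(threads_per_block, regs_per_thread, smem_per_block,
                      max_threads=2048, max_regs=65536, max_smem=49152,
                      max_blocks=16):
            # Resident blocks per multiprocessor are bounded by whichever
            # resource (threads, registers, shared memory, block slots)
            # is exhausted first.
            by_threads = max_threads // threads_per_block
            by_regs = max_regs // (regs_per_thread * threads_per_block)
            by_smem = max_smem // smem_per_block if smem_per_block else max_blocks
            blocks = min(by_threads, by_regs, by_smem, max_blocks)
            return blocks * threads_per_block / max_threads  # fraction of warps active

        print(occupancy(256, 40, 8192))  # e.g. a register/shared-memory-limited kernel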

  17. Final Technical Report for Department of Energy award number DE-FG02-06ER54882, Revised

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eggleston, Dennis L.

    The research reported here involves studies of radial particle transport in a cylindrical, low-density Malmberg-Penning non-neutral plasma trap. The research is primarily experimental but involves careful comparisons to analytical theory and includes the results of a single-particle computer code. The transport is produced by applied electric fields that break the cylindrical symmetry of the trap, hence the term "asymmetry-induced transport." Our computer studies have revealed the importance of a previously ignored class of particles that become trapped in the asymmetry potential. In many common situations these particles exhibit large radial excursions and dominate the radial transport. On the experimental side, we have developed new data analysis techniques that allowed us to determine the magnetic field dependence of the transport and to place empirical constraints on the form of the transport equation. Experiments designed to test the computer code results gave varying degrees of agreement, with further work being necessary to understand the results. This work expands our knowledge of the varied mechanisms of cross-magnetic-field transport and should be of use to other workers studying plasma confinement.

  18. Performance of computer vision in vivo flow cytometry with low fluorescence contrast

    NASA Astrophysics Data System (ADS)

    Markovic, Stacey; Li, Siyuan; Niedre, Mark

    2015-03-01

    Detection and enumeration of circulating cells in the bloodstream of small animals are important in many areas of preclinical biomedical research, including cancer metastasis, immunology, and reproductive medicine. Optical in vivo flow cytometry (IVFC) represents a class of technologies that allow noninvasive and continuous enumeration of circulating cells without drawing blood samples. We recently developed a technique termed computer vision in vivo flow cytometry (CV-IVFC) that uses a high-sensitivity fluorescence camera and an automated computer vision algorithm to interrogate relatively large circulating blood volumes in the ear of a mouse. We detected circulating cells at concentrations as low as 20 cells/mL. In the present work, we characterized the performance of CV-IVFC in low-contrast imaging conditions with (1) weak cell fluorescent labeling, using cell-simulating fluorescent microspheres with varying brightness, and (2) high background tissue autofluorescence, by varying the autofluorescence properties of optical phantoms. Our analysis indicates that CV-IVFC can robustly track and enumerate circulating cells with at least 50% sensitivity even in conditions where contrast is degraded by two orders of magnitude relative to our previous in vivo work. These results support the significant potential utility of CV-IVFC in a wide range of in vivo biological models.

  19. Multithreaded implicitly dealiased convolutions

    NASA Astrophysics Data System (ADS)

    Roberts, Malcolm; Bowman, John C.

    2018-03-01

    Implicit dealiasing is a method for computing in-place linear convolutions via fast Fourier transforms that decouples work memory from input data. It offers easier memory management and, for long one-dimensional input sequences, greater efficiency than conventional zero-padding. Furthermore, for convolutions of multidimensional data, the segregation of data and work buffers can be exploited to reduce memory usage and execution time significantly. This is accomplished by processing and discarding data as it is generated, allowing work memory to be reused, for greater data locality and performance. A multithreaded implementation of implicit dealiasing that accepts an arbitrary number of input and output vectors and a general multiplication operator is presented, along with an improved one-dimensional Hermitian convolution that avoids the loop dependency inherent in previous work. An alternate data format that can accommodate a Nyquist mode and enhance cache efficiency is also proposed.
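
    For contrast, the conventional explicit zero-padding that implicit dealiasing improves upon fits in a few lines (a NumPy sketch, not the authors' implementation): padding to na + nb - 1 points makes the FFT's cyclic convolution coincide with the desired linear one.

        import numpy as np

        def padded_convolution(f, g):
            # Explicit dealiasing: zero-pad both real inputs before
            # transforming so the cyclic convolution cannot alias.
            n = f.size + g.size - 1
            return np.fft.irfft(np.fft.rfft(f, n) * np.fft.rfft(g, n), n)

        f, g = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0])
        print(padded_convolution(f, g))  # matches the direct linear convolution
        print(np.convolve(f, g))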

  20. Multidisciplinary Design Optimization of a Full Vehicle with High Performance Computing

    NASA Technical Reports Server (NTRS)

    Yang, R. J.; Gu, L.; Tho, C. H.; Sobieszczanski-Sobieski, Jaroslaw

    2001-01-01

    Multidisciplinary design optimization (MDO) of a full vehicle under the constraints of crashworthiness, NVH (Noise, Vibration and Harshness), durability, and other performance attributes is one of the imperative goals for the automotive industry. However, it is often infeasible due to the lack of computational resources, robust simulation capabilities, and efficient optimization methodologies. This paper intends to move closer towards that goal by using parallel computers for the intensive computation and combining different approximations for dissimilar analyses in the MDO process. The MDO process presented in this paper is an extension of the previous work reported by Sobieski et al. In addition to the roof crush, two full vehicle crash modes are added: full frontal impact and 50% frontal offset crash. Instead of using an adaptive polynomial response surface method, this paper employs a DOE/RSM method for exploring the design space and constructing highly nonlinear crash functions. Two MDO strategies are used and the results are compared. This paper demonstrates that with high performance computing, a conventionally intractable real-world full-vehicle multidisciplinary optimization problem considering all performance attributes with a large number of design variables becomes feasible.
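
    A DOE/RSM step boils down to fitting a cheap polynomial surrogate to a handful of expensive simulation samples; the sketch below is illustrative (the quadratic basis and the synthetic "response" stand in for real crash-code outputs).

        import numpy as np

        def quadratic_basis(x1, x2):
            # Full quadratic response surface in two design variables.
            return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

        rng = np.random.default_rng(1)
        x1, x2 = rng.uniform(-1, 1, 30), rng.uniform(-1, 1, 30)   # DOE sample points
        y = 3 + 2*x1 - x2 + 0.5*x1*x2 + x1**2 + 0.1*rng.normal(size=30)  # stand-in "simulation"

        coef, *_ = np.linalg.lstsq(quadratic_basis(x1, x2), y, rcond=None)
        print(coef)  # the surrogate now replaces the crash code inside the optimizer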

  1. Breast tumor malignancy modelling using evolutionary neural logic networks.

    PubMed

    Tsakonas, Athanasios; Dounias, Georgios; Panagi, Georgia; Panourgias, Evangelia

    2006-01-01

    The present work proposes a computer assisted methodology for the effective modelling of the diagnostic decision for breast tumor malignancy. The suggested approach is based on innovative hybrid computational intelligence algorithms properly applied to related cytological data contained in past medical records. The experimental data used in this study were gathered in the early 1990s at the University of Wisconsin, based on post-diagnostic cytological observations performed by expert medical staff. Data were properly encoded in a computer database and, accordingly, various alternative modelling techniques were applied to them in an attempt to form diagnostic models. Previous methods included standard optimisation techniques as well as artificial intelligence approaches, and a variety of related publications exists in the modern literature on the subject. In this report, a hybrid computational intelligence approach is suggested, which effectively combines modern mathematical logic principles, neural computation and genetic programming. The approach proves promising both in terms of diagnostic accuracy and generalization capability, and in terms of comprehensibility and practical importance for the related medical staff.
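
    The cytological records described match the well-known Wisconsin breast cancer data that now ship with scikit-learn, so the modelling task is easy to reproduce with a plain baseline (ordinary logistic regression here, not the authors' evolutionary neural logic networks).

        from sklearn.datasets import load_breast_cancer
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        X, y = load_breast_cancer(return_X_y=True)
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy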

  2. Layout optimization using the homogenization method

    NASA Technical Reports Server (NTRS)

    Suzuki, Katsuyuki; Kikuchi, Noboru

    1993-01-01

    A generalized layout problem involving sizing, shape, and topology optimization is solved by using the homogenization method for three-dimensional linearly elastic shell structures, in order to explore the possibility of establishing an integrated design system for automotive car bodies, as an extension of the previous work by Bendsoe and Kikuchi. A formulation of a three-dimensional homogenized shell, a solution algorithm, and several examples of computing the optimum layout are presented in this first of two articles.

  3. Computer Aided Detection of Microcalcifications Utilizing Texture Analysis

    DTIC Science & Technology

    1995-12-01

    encouraging results using features derived from the first moment of the power spectrum of the region[13]. Chitre, et al. and Kocur have made use of...are largely concentrated around the main diagonal. For the example C matrix in Figure 3.11, the ASM value is 0.0972. Previous work by Kocur [17] and...Patterson AFB OH, 1994. BIB-1 16. Hoffmeister, Jeffery W. Personal interviews, May-Nov 1995. Aerospace Physician. AL/CFHV, Wright-Patterson AFB,OH. 17. Kocur

  4. Integrating TV/digital data spectrograph system

    NASA Technical Reports Server (NTRS)

    Duncan, B. J.; Fay, T. D.; Miller, E. R.; Wamsteker, W.; Brown, R. M.; Neely, P. L.

    1975-01-01

    A 25-mm vidicon camera was previously modified to allow operation in an integration mode for low-light-level astronomical work. The camera was then mated to a low-dispersion spectrograph for obtaining spectral information in the 400 to 750 nm range. A high speed digital video image system was utilized to digitize the analog video signal, place the information directly into computer-type memory, and record data on digital magnetic tape for permanent storage and subsequent analysis.

  5. Computational and Spectroscopic Investigations of the Molecular Scale Structure and Dynamics of Geologically Important Fluids and Mineral-Fluid Interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. James Kirkpatrick; Andrey G. Kalinichev

    2008-11-25

    Research supported by this grant focuses on molecular scale understanding of central issues related to the structure and dynamics of geochemically important fluids, fluid-mineral interfaces, and confined fluids using computational modeling and experimental methods. Molecular scale knowledge about fluid structure and dynamics, how these are affected by mineral surfaces and molecular-scale (nano-) confinement, and how water molecules and dissolved species interact with surfaces is essential to understanding the fundamental chemistry of a wide range of low-temperature geochemical processes, including sorption and geochemical transport. Our principal efforts are devoted to continued development of relevant computational approaches, application of these approaches to important geochemical questions, relevant NMR and other experimental studies, and application of computational modeling methods to understanding the experimental results. The combination of computational modeling and experimental approaches is proving highly effective in addressing otherwise intractable problems. In 2006-2007 we have significantly advanced in new, highly promising research directions along with completion of on-going projects and final publication of work completed in previous years. New computational directions are focusing on modeling proton exchange reactions in aqueous solutions using ab initio molecular dynamics (AIMD), metadynamics (MTD), and empirical valence bond (EVB) approaches. Proton exchange is critical to understanding the structure, dynamics, and reactivity at mineral-water interfaces and for oxy-ions in solution, but has traditionally been difficult to model with molecular dynamics (MD). Our ultimate objective is to develop this capability, because MD is much less computationally demanding than quantum-chemical approaches. We have also extended our previous MD simulations of metal binding to natural organic matter (NOM) to a much longer time scale (up to 10 ns) for significantly larger systems. These calculations have allowed us, for the first time, to study the effects of metal cations with different charges and charge density on the NOM aggregation in aqueous solutions. Other computational work has looked at the longer-time-scale dynamical behavior of aqueous species at mineral-water interfaces investigated simultaneously by NMR spectroscopy. Our experimental NMR studies have focused on understanding the structure and dynamics of water and dissolved species at mineral-water interfaces and in two-dimensional nano-confinement within clay interlayers. Combined NMR and MD study of H2O, Na+, and Cl- interactions with the surface of quartz has direct implications regarding interpretation of sum frequency vibrational spectroscopic experiments for this phase and will be an important reference for future studies. We also used NMR to examine the behavior of K+ and H2O in the interlayer and at the surfaces of the clay minerals hectorite and illite-rich illite-smectite. This is the first time K+ dynamics has been characterized spectroscopically in geochemical systems. Preliminary experiments were also performed to evaluate the potential of 75As NMR as a probe of arsenic geochemical behavior. The 75As NMR study used advanced signal enhancement methods, introduced a new data acquisition approach to minimize the time investment in ultra-wide-line NMR experiments, and provides the first evidence of a strong relationship between the chemical shift and structural parameters for this experimentally challenging nucleus. We have also initiated a series of inelastic and quasi-elastic neutron scattering measurements of water dynamics in the interlayers of clays and layered double hydroxides. The objective of these experiments is to probe the correlations of water molecular motions in confined spaces over the scale of times and distances most directly comparable to our MD simulations and on a time scale different than that probed by NMR. This work is being done in collaboration with Drs. C.-K. Loong, N. de Souza, and A.I. Kolesnikov at the Intense Pulsed Neutron Source facility of the Argonne National Lab, and Dr. A. Faraone at the NIST Center for Neutron Research. A manuscript reporting the first results of these experiments, which are highly complementary to our previous NMR, X-ray, and infra-red results for these phases, is currently in preparation. In total, in 2006-2007 our work has resulted in the publication of 14 peer-reviewed research papers. We also devoted considerable effort to making our work known to a wide range of researchers, as indicated by the 24 contributed abstracts and 14 invited presentations.

  6. Logic integration of mRNA signals by an RNAi-based molecular computer.

    PubMed

    Xie, Zhen; Liu, Siyuan John; Bleris, Leonidas; Benenson, Yaakov

    2010-05-01

    Synthetic in vivo molecular 'computers' could rewire biological processes by establishing programmable, non-native pathways between molecular signals and biological responses. Multiple molecular computer prototypes have been shown to work in simple buffered solutions. Many of those prototypes were made of DNA strands and performed computations using cycles of annealing-digestion or strand displacement. We have previously introduced RNA interference (RNAi)-based computing as a way of implementing complex molecular logic in vivo. Because it also relies on nucleic acids for its operation, RNAi computing could benefit from the tools developed for DNA systems. However, these tools must be harnessed to produce bioactive components and be adapted for harsh operating environments that reflect in vivo conditions. In a step toward this goal, we report the construction and implementation of biosensors that 'transduce' mRNA levels into bioactive, small interfering RNA molecules via RNA strand exchange in a cell-free Drosophila embryo lysate, a step beyond simple buffered environments. We further integrate the sensors with our RNAi 'computational' module to evaluate two-input logic functions on mRNA concentrations. Our results show how RNA strand exchange can expand the utility of RNAi computing and point toward the possibility of using strand exchange in a native biological setting.

  7. Deployment of the OSIRIS EM-PIC code on the Intel Knights Landing architecture

    NASA Astrophysics Data System (ADS)

    Fonseca, Ricardo

    2017-10-01

    Electromagnetic particle-in-cell (EM-PIC) codes such as OSIRIS have found widespread use in modelling the highly nonlinear and kinetic processes that occur in several relevant plasma physics scenarios, ranging from astrophysical settings to high-intensity laser plasma interaction. Being computationally intensive, these codes require large scale HPC systems, and a continuous effort in adapting the algorithm to new hardware and computing paradigms. In this work, we report on our efforts on deploying the OSIRIS code on the new Intel Knights Landing (KNL) architecture. Unlike the previous generation (Knights Corner), these boards are standalone systems, and introduce several new features, including the new AVX-512 instructions and on-package MCDRAM. We will focus on the parallelization and vectorization strategies followed, as well as memory management, and present a detailed evaluation of code performance in comparison with the CPU code. This work was partially supported by Fundação para a Ciência e Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014.

  8. Algorithmic aspects for the reconstruction of spatio-spectral data cubes in the perspective of the SKA

    NASA Astrophysics Data System (ADS)

    Mary, D.; Ferrari, A.; Ferrari, C.; Deguignet, J.; Vannier, M.

    2016-12-01

    With millions of receivers leading to terabyte-scale data cubes, the story of the giant SKA telescope is also that of collaborative efforts from radioastronomy, signal processing, optimization and computer sciences. Reconstructing SKA cubes poses two challenges. First, the majority of existing algorithms work in 2D and cannot be directly translated into 3D. Second, the reconstruction implies solving an inverse problem and it is not clear what ultimate limit we can expect on the error of this solution. This study addresses (partially, of course) both challenges. We consider an extremely simple data acquisition model, and we focus on strategies making it possible to implement 3D reconstruction algorithms that use state-of-the-art image/spectral regularization. The proposed approach has two main features: (i) reduced memory storage with respect to a previous approach; (ii) efficient parallelization and ventilation of the computational load over the spectral bands. This work will make it possible to implement and compare various 3D reconstruction approaches in a large-scale framework.
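
    The band-parallel strategy can be sketched in a few lines (the deconvolve_band function below is a hypothetical placeholder for a real per-band 2D reconstruction): each spectral plane is processed independently, so the load spreads naturally over workers.

        from concurrent.futures import ProcessPoolExecutor
        import numpy as np

        def deconvolve_band(band):
            # Placeholder for a real per-band 2D reconstruction step.
            dirty = np.random.default_rng(band).random((256, 256))
            return band, dirty.mean()

        if __name__ == "__main__":
            with ProcessPoolExecutor() as pool:
                # Ventilate the 64 spectral bands over the available cores.
                results = dict(pool.map(deconvolve_band, range(64)))
                print(len(results))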

  9. Artifact Reduction in X-Ray CT Images of Al-Steel-Perspex Specimens Mimicking a Hip Prosthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madhogarhia, Manish; Munshi, P.; Lukose, Sijo

    2008-09-26

    X-ray Computed Tomography (CT) is a relatively new technique developed in the late 1970s, which enables the nondestructive visualization of the internal structure of objects. Beam hardening caused by the polychromatic spectrum is an important problem in X-ray computed tomography (X-CT). It leads to various artifacts in reconstructed images and reduces image quality. In the present work we consider artifact reduction in total hip prosthesis CT scans, a problem of medical imaging. We are trying to reduce the cupping artifact induced by beam hardening, as well as the metal artifact, as they exist in the CT scan of a human hip after the femur is replaced by a metal implant. The correction method for beam hardening used here is based on previous work. The simulation study for the present problem includes a phantom consisting of mild steel, aluminium and perspex mimicking the photon attenuation properties of a human hip cross section with a metal implant.

  10. Design and Implementation of Practical Bidirectional Texture Function Measurement Devices Focusing on the Developments at the University of Bonn

    PubMed Central

    Schwartz, Christopher; Sarlette, Ralf; Weinmann, Michael; Rump, Martin; Klein, Reinhard

    2014-01-01

    Understanding as well as realistic reproduction of the appearance of materials play an important role in computer graphics, computer vision and industry. They enable applications such as digital material design, virtual prototyping and faithful virtual surrogates for entertainment, marketing, education or cultural heritage documentation. A particularly fruitful way to obtain the digital appearance is the acquisition of reflectance from real-world material samples. Therefore, a great variety of devices to perform this task has been proposed. In this work, we investigate their practical usefulness. We first identify a set of necessary attributes and establish a general categorization of different designs that have been realized. Subsequently, we provide an in-depth discussion of three particular implementations by our work group, demonstrating advantages and disadvantages of different system designs with respect to the previously established attributes. Finally, we survey the existing literature to compare our implementation with related approaches.

  11. Automated Performance Prediction of Message-Passing Parallel Programs

    NASA Technical Reports Server (NTRS)

    Block, Robert J.; Sarukkai, Sekhar; Mehra, Pankaj; Woodrow, Thomas S. (Technical Monitor)

    1995-01-01

    The increasing use of massively parallel supercomputers to solve large-scale scientific problems has generated a need for tools that can predict scalability trends of applications written for these machines. Much work has been done to create simple models that represent important characteristics of parallel programs, such as latency, network contention, and communication volume. But many of these methods still require substantial manual effort to represent an application in the model's format. The MK toolkit described in this paper is the result of an on-going effort to automate the formation of analytic expressions of program execution time, with a minimum of programmer assistance. In this paper we demonstrate the feasibility of our approach, by extending previous work to detect and model communication patterns automatically, with and without overlapped computations. The predictions derived from these models agree, within reasonable limits, with execution times of programs measured on the Intel iPSC/860 and Paragon. Further, we demonstrate the use of MK in selecting optimal computational grain size and studying various scalability metrics.
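
    The kind of closed-form expression such tools emit can be illustrated with a toy latency/bandwidth (alpha-beta) model; the constants below are invented for illustration, not measured iPSC/860 or Paragon values.

        def predicted_time(n, p, t_flop=1e-8, alpha=5e-5, beta=1e-8, msgs=4):
            # Per-iteration time = local computation on the n/p-point block
            # plus the latency/bandwidth cost of the halo exchanges.
            t_comp = (n / p) * t_flop
            t_comm = msgs * (alpha + beta * (n / p) ** 0.5)
            return t_comp + t_comm

        # Scalability trend: communication eventually dominates computation.
        for p in (1, 4, 16, 64):
            print(p, predicted_time(1_000_000, p))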

  12. Computing element evolution towards Exascale and its impact on legacy simulation codes

    NASA Astrophysics Data System (ADS)

    Colin de Verdière, Guillaume J. L.

    2015-12-01

    In the light of the current race towards the Exascale, this article highlights the main features of the forthcoming computing elements that will be at the core of next generations of supercomputers. The market analysis underlying this work shows that computers are facing a major evolution in terms of architecture. As a consequence, it is important to understand the impacts of those evolutions on legacy codes or programming methods. The problems of dissipated power and memory access are discussed and lead to a vision of what an exascale system should be. To survive, programming languages have had to respond to these hardware evolutions, either by evolving or through the creation of new ones. From the previous elements, we elaborate why vectorization, multithreading, data locality awareness and hybrid programming will be the key to reach the exascale, implying that it is time to start rewriting codes.
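
    As a toy illustration of why vectorization and data locality matter, compare the same stencil update written as an interpreted scalar loop and as a single vectorized (SIMD-friendly) array expression; the timings are indicative only.

        import time
        import numpy as np

        a = np.random.random(2_000_000)

        t0 = time.perf_counter()
        b = np.empty_like(a)
        for i in range(1, a.size - 1):          # scalar loop, element at a time
            b[i] = 0.5 * (a[i - 1] + a[i + 1])
        t1 = time.perf_counter()

        c = np.empty_like(a)
        c[1:-1] = 0.5 * (a[:-2] + a[2:])        # vectorized, contiguous access
        t2 = time.perf_counter()

        print(f"loop {t1 - t0:.2f}s  vectorized {t2 - t1:.4f}s")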

  13. Test and evaluation of a multifunction keyboard and a dedicated keyboard for control of a flight management computer

    NASA Technical Reports Server (NTRS)

    Crane, J. M.; Boucek, G. P., Jr.; Smith, W. D.

    1986-01-01

    A flight management computer (FMC) control display unit (CDU) test was conducted to compare two types of input devices: a fixed legend (dedicated) keyboard and a programmable legend (multifunction) keyboard. The task used for comparison was operation of the flight management computer for the Boeing 737-300. The same tasks were performed by twelve pilots on the FMC control display unit configured with a programmable legend keyboard and with the currently used B737-300 dedicated keyboard. Flight simulator work activity levels and input task complexity were varied during each pilot session. Half of the pilots tested were previously familiar with the B737-300 dedicated keyboard CDU and half had no prior experience with it. The data collected included simulator flight parameters, keystroke time and sequences, and pilot questionnaire responses. A timeline analysis was also used for evaluation of the two keyboard concepts.

  14. CFD Modeling of Free-Piston Stirling Engines

    NASA Technical Reports Server (NTRS)

    Ibrahim, Mounir B.; Zhang, Zhi-Guo; Tew, Roy C., Jr.; Gedeon, David; Simon, Terrence W.

    2001-01-01

    NASA Glenn Research Center (GRC) is funding Cleveland State University (CSU) to develop a reliable Computational Fluid Dynamics (CFD) code that can predict engine performance with the goal of significant improvements in accuracy when compared to one-dimensional (1-D) design code predictions. The funding also includes conducting code validation experiments at both the University of Minnesota (UMN) and CSU. In this paper a brief description of the work-in-progress is provided in the two areas (CFD and Experiments). Also, previous test results are compared with computational data obtained using (1) a 2-D CFD code obtained from Dr. Georg Scheuerer and further developed at CSU and (2) a multidimensional commercial code CFD-ACE+. The test data and computational results are for (1) a gas spring and (2) a single piston/cylinder with attached annular heat exchanger. The comparisons among the codes are discussed. The paper also discusses plans for conducting code validation experiments at CSU and UMN.

  15. Magnetic skyrmion-based artificial neuron device

    NASA Astrophysics Data System (ADS)

    Li, Sai; Kang, Wang; Huang, Yangqi; Zhang, Xichao; Zhou, Yan; Zhao, Weisheng

    2017-08-01

    Neuromorphic computing, inspired by the biological nervous system, has attracted considerable attention. Intensive research has been conducted in this field for developing artificial synapses and neurons, attempting to mimic the behaviors of biological synapses and neurons, which are two basic elements of a human brain. Recently, magnetic skyrmions have been investigated as promising candidates in neuromorphic computing design owing to their topologically protected particle-like behaviors, nanoscale size and low driving current density. In one of our previous studies, a skyrmion-based artificial synapse was proposed, with which both short-term plasticity and long-term potentiation functions have been demonstrated. In this work, we further report on a skyrmion-based artificial neuron by exploiting the tunable current-driven skyrmion motion dynamics, mimicking the leaky-integrate-fire function of a biological neuron. With a simple single-device implementation, this proposed artificial neuron may enable us to build a dense and energy-efficient spiking neuromorphic computing system.
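
    The leaky-integrate-fire behavior the device mimics can be stated in a few lines of conventional model code (a textbook abstraction, not the skyrmion device physics; all constants are illustrative): the membrane potential integrates its input, leaks toward rest, and fires and resets on crossing a threshold.

        import numpy as np

        def lif(current, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
            v, spike_times = 0.0, []
            for k, i_k in enumerate(current):
                v += dt * (i_k - v) / tau      # integrate input with leak
                if v >= v_th:                  # fire...
                    spike_times.append(k * dt)
                    v = v_reset                # ...and reset
            return spike_times

        print(lif(np.full(5000, 1.2)))  # constant drive -> regular spiking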

  16. 3D noise-resistant segmentation and tracking of unknown and occluded objects using integral imaging

    NASA Astrophysics Data System (ADS)

    Aloni, Doron; Jung, Jae-Hyun; Yitzhaky, Yitzhak

    2017-10-01

    Three dimensional (3D) object segmentation and tracking can be useful in various computer vision applications, such as object surveillance for security, robot navigation, etc. We present a method for 3D multiple-object tracking using computational integral imaging, based on accurate 3D object segmentation. The method does not employ object detection by motion analysis in a video as conventionally performed (such as background subtraction or block matching). This means that the movement properties do not significantly affect the detection quality. The object detection is performed by analyzing static 3D image data obtained through computational integral imaging. With regard to previous works that used integral imaging data in such a scenario, the proposed method performs the 3D tracking of objects without prior information about the objects in the scene, and it is found efficient under severe noise conditions.

  17. Sustaining and Extending the Open Science Grid: Science Innovation on a PetaScale Nationwide Facility (DE-FC02-06ER41436) SciDAC-2 Closeout Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livny, Miron; Shank, James; Ernst, Michael

    Under this SciDAC-2 grant the project’s goal was to stimulate new discoveries by providing scientists with effective and dependable access to an unprecedented national distributed computational facility: the Open Science Grid (OSG). We proposed to achieve this through the work of the Open Science Grid Consortium: a unique hands-on multi-disciplinary collaboration of scientists, software developers and providers of computing resources. Together the stakeholders in this consortium sustain and use a shared distributed computing environment that transforms simulation and experimental science in the US. The OSG consortium is an open collaboration that actively engages new research communities. We operate an open facility that brings together a broad spectrum of compute, storage, and networking resources and interfaces to other cyberinfrastructures, including the US XSEDE (previously TeraGrid), the European Grids for ESciencE (EGEE), as well as campus and regional grids. We leverage middleware provided by computer science groups, facility IT support organizations, and computing programs of application communities for the benefit of consortium members and the US national CI.

  18. Key issues of ultraviolet radiation of OH at high altitudes

    NASA Astrophysics Data System (ADS)

    Zhang, Yuhuai; Wan, Tian; Jiang, Jianzheng; Fan, Jing

    2014-12-01

    Ultraviolet (UV) emission radiated by hydroxyl (OH) is one of the fundamental elements in the prediction of the radiation signature of high-altitude, high-speed vehicles. In this work, the OH A²Σ⁺ → X²Π ultraviolet emission band behind the bow shock is computed under the experimental conditions of the second bow-shock ultraviolet flight (BSUV-2). Four related key issues are discussed, namely, the source of hydrogen element in the high-altitude atmosphere, the formation mechanism of OH species, efficient computational algorithms for trace species in rarefied flows, and accurate calculation of OH emission spectra. Firstly, by analyzing the typical atmospheric model, the vertical distributions of the number densities of different species containing hydrogen element are given. According to the different dominating species containing hydrogen element, the atmosphere is divided into three zones, and the formation mechanism of OH species is analyzed in the different zones. The direct simulation Monte Carlo (DSMC) method and the Navier-Stokes equations are employed to compute the number densities of the different OH electronically and vibrationally excited states. In contrast to previous work, the trace species separation (TSS) algorithm is applied twice in order to accurately calculate the densities of OH species and its excited states. Using a non-equilibrium radiation model, the OH ultraviolet emission spectra and intensity at different altitudes are computed, and good agreement is obtained with the flight-measured data.

  19. puma: a Bioconductor package for propagating uncertainty in microarray analysis.

    PubMed

    Pearson, Richard D; Liu, Xuejun; Sanguinetti, Guido; Milo, Marta; Lawrence, Neil D; Rattray, Magnus

    2009-07-09

    Most analyses of microarray data are based on point estimates of expression levels and ignore the uncertainty of such estimates. By determining uncertainties from Affymetrix GeneChip data and propagating these uncertainties to downstream analyses it has been shown that we can improve results of differential expression detection, principal component analysis and clustering. Previously, implementations of these uncertainty propagation methods have only been available as separate packages, written in different languages. Previous implementations have also suffered from being very costly to compute, and in the case of differential expression detection, have been limited in the experimental designs to which they can be applied. puma is a Bioconductor package incorporating a suite of analysis methods for use on Affymetrix GeneChip data. puma extends the differential expression detection methods of previous work from the 2-class case to the multi-factorial case. puma can be used to automatically create design and contrast matrices for typical experimental designs, which can be used both within the package itself but also in other Bioconductor packages. The implementation of differential expression detection methods has been parallelised leading to significant decreases in processing time on a range of computer architectures. puma incorporates the first R implementation of an uncertainty propagation version of principal component analysis, and an implementation of a clustering method based on uncertainty propagation. All of these techniques are brought together in a single, easy-to-use package with clear, task-based documentation. For the first time, the puma package makes a suite of uncertainty propagation methods available to a general audience. These methods can be used to improve results from more traditional analyses of microarray data. puma also offers improvements in terms of scope and speed of execution over previously available methods. puma is recommended for anyone working with the Affymetrix GeneChip platform for gene expression analysis and can also be applied more generally.

  20. Integration of multiple determinants in the neuronal computation of economic values.

    PubMed

    Raghuraman, Anantha P; Padoa-Schioppa, Camillo

    2014-08-27

    Economic goods may vary on multiple dimensions (determinants). A central conjecture in decision neuroscience is that choices between goods are made by comparing subjective values computed through the integration of all relevant determinants. Previous work identified three groups of neurons in the orbitofrontal cortex (OFC) of monkeys engaged in economic choices: (1) offer value cells, which encode the value of individual offers; (2) chosen value cells, which encode the value of the chosen good; and (3) chosen juice cells, which encode the identity of the chosen good. In principle, these populations could be sufficient to generate a decision. Critically, previous work did not assess whether offer value cells (the putative input to the decision) indeed encode subjective values as opposed to physical properties of the goods, and/or whether offer value cells integrate multiple determinants. To address these issues, we recorded from the OFC while monkeys chose between risky outcomes. Confirming previous observations, three populations of neurons encoded the value of individual offers, the value of the chosen option, and the value-independent choice outcome. The activity of both offer value cells and chosen value cells encoded values defined by the integration of juice quantity and probability. Furthermore, both populations reflected the subjective risk attitude of the animals. We also found additional groups of neurons encoding the risk associated with a particular option, the risky nature of the chosen option, and whether the trial outcome was positive or negative. These results provide substantial support for the conjecture described above and for the involvement of OFC in good-based decisions. Copyright © 2014 the authors 0270-6474/14/3311583-21$15.00/0.
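
    One standard way to formalize the integration that these recordings support is a power-utility form; the sketch below is purely illustrative (the exponent rho and the functional form are assumptions for demonstration, not the paper's fitted model).

        def subjective_value(quantity, probability, rho=0.9):
            # Integrate juice quantity and probability into one value;
            # rho < 1 corresponds to a risk-averse chooser.
            return probability * quantity ** rho

        # A risk-averse chooser prefers 2 drops for sure over 4 drops at 50%:
        print(subjective_value(2, 1.0), subjective_value(4, 0.5))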

  1. On the utility of threads for data parallel programming

    NASA Technical Reports Server (NTRS)

    Fahringer, Thomas; Haines, Matthew; Mehrotra, Piyush

    1995-01-01

    Threads provide a useful programming model for asynchronous behavior because of their ability to encapsulate units of work that can then be scheduled for execution at runtime, based on the dynamic state of a system. Recently, the threaded model has been applied to the domain of data parallel scientific codes, and initial reports indicate that the threaded model can produce performance gains over non-threaded approaches, primarily through the use of overlapping useful computation with communication latency. However, overlapping computation with communication is possible without the benefit of threads if the communication system supports asynchronous primitives, and this comparison has not been made in previous papers. This paper provides a critical look at the utility of lightweight threads as applied to data parallel scientific programming.
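
    The thread-free alternative the paper weighs, overlap via asynchronous communication primitives, looks roughly like the following mpi4py sketch (a hypothetical ring halo exchange; the buffer sizes and interior update are placeholders).

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        left, right = (rank - 1) % size, (rank + 1) % size

        u = np.random.random(1000)
        halo = np.empty(1, dtype=u.dtype)

        # Start the nonblocking transfers, then do useful interior work
        # while the messages are in flight.
        reqs = [comm.Isend(u[-1:], dest=right), comm.Irecv(halo, source=left)]
        interior = 0.5 * (u[:-2] + u[2:])   # computation overlapping communication
        MPI.Request.Waitall(reqs)
        u[0] = halo[0]                      # apply the boundary after completion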

  2. Development of a Nonequilibrium Radiative Heating Prediction Method for Coupled Flowfield Solutions

    NASA Technical Reports Server (NTRS)

    Hartung, Lin C.

    1991-01-01

    A method for predicting radiative heating and coupling effects in nonequilibrium flow-fields has been developed. The method resolves atomic lines with a minimum number of spectral points, and treats molecular radiation using the smeared band approximation. To further minimize computational time, the calculation is performed on an optimized spectrum, which is computed for each flow condition to enhance spectral resolution. Additional time savings are obtained by performing the radiation calculation on a subgrid optimally selected for accuracy. Representative results from the new method are compared to previous work to demonstrate that the speedup does not cause a loss of accuracy and is sufficient to make coupled solutions practical. The method is found to be a useful tool for studies of nonequilibrium flows.

  3. Solution of hydrogen in accident tolerant fuel candidate material: U3Si2

    NASA Astrophysics Data System (ADS)

    Middleburgh, S. C.; Claisse, A.; Andersson, D. A.; Grimes, R. W.; Olsson, P.; Mašková, S.

    2018-04-01

    Hydrogen uptake and accommodation into U3Si2, a candidate accident-tolerant fuel system, has been modelled on the atomic scale using density functional theory. The solution energy of multiple H atoms is computed, reaching a stoichiometry of U3Si2H2, which has been experimentally observed in previous work (reported as U3Si2H1.8). The absorption of hydrogen is found to be favourable up to U3Si2H2 and the associated volume change is computed, closely matching experimental data. Entropic effects are considered to assess the dissociation temperature of H2, estimated to be ∼800 K, again in good agreement with the experimentally observed transition temperature.
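
    The bookkeeping behind such solution energies is standard and worth making explicit; the sketch below assumes the usual reference of half an H2 molecule per hydrogen atom, with placeholder total energies rather than values from the paper:

    ```python
    def solution_energy(e_host_nH, e_host, e_h2, n_h):
        """E_sol = E(host + nH) - E(host) - (n/2) E(H2), in eV.
        Negative values mean hydrogen uptake is energetically favourable."""
        return e_host_nH - e_host - 0.5 * n_h * e_h2

    # Hypothetical DFT total energies (eV), for illustration only.
    print(solution_energy(e_host_nH=-101.2, e_host=-94.0,
                          e_h2=-6.8, n_h=2))   # -0.4 eV: favourable
    ```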

  4. [Pulmonary paracoccidioidomycosis: a case report with high-resolution computed tomography findings].

    PubMed

    Armas, M; Ruivo, C; Alves, R; Gonçalves, M; Teixeira, L

    2012-01-01

    Paracoccidioidomycosis is a systemic mycosis endemic in rural areas of Latin America, a region that is both an important source of immigrants to Europe and a growing tourist destination for Europeans, with most cases occurring in Brazil, Argentina, Venezuela and Colombia. The authors report the case of a 43-year-old man who had previously worked in Venezuela and had been living in Portugal for 8 years, presenting with a single cutaneous lesion. Despite the absence of notable respiratory complaints, severe lung damage was found on high-resolution computed tomography (HRCT). Biopsy of the cutaneous lesion and mycologic sputum examination were performed, revealing Paracoccidioides brasiliensis infection.

  5. Experimental Realization of a Quantum Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Li, Zhaokai; Liu, Xiaomei; Xu, Nanyang; Du, Jiangfeng

    2015-04-01

    The fundamental principle of artificial intelligence is the ability of machines to learn from previous experience and do future work accordingly. In the age of big data, classical learning machines often require huge computational resources in many practical cases. Quantum machine learning algorithms, on the other hand, could be exponentially faster than their classical counterparts by utilizing quantum parallelism. Here, we demonstrate a quantum machine learning algorithm to implement handwriting recognition on a four-qubit NMR test bench. The quantum machine learns standard character fonts and then recognizes handwritten characters from a set with two candidates. Because of the widespread importance of artificial intelligence and its tremendous consumption of computational resources, quantum speedup would be extremely attractive against the challenges of big data.

  6. Vehicle Maneuver Detection with Accelerometer-Based Classification.

    PubMed

    Cervantes-Villanueva, Javier; Carrillo-Zapata, Daniel; Terroso-Saenz, Fernando; Valdes-Vela, Mercedes; Skarmeta, Antonio F

    2016-09-29

    In the mobile computing era, smartphones have become instrumental tools to develop innovative mobile context-aware systems. In that sense, their usage in the vehicular domain eases the development of novel and personal transportation solutions. In this context, the present work introduces an innovative mechanism to perceive the current kinematic state of a vehicle on the basis of the accelerometer data from a smartphone mounted in the vehicle. Unlike previous proposals, the introduced architecture targets the computational limitations of such devices by carrying out the detection process following an incremental approach. For its realization, we have evaluated different classification algorithms to act as agents within the architecture. Finally, our approach has been tested with a real-world dataset collected by means of the ad hoc mobile application we developed.
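
    A minimal sketch of this style of pipeline, window-level features feeding a lightweight classifier (the features, window length, and random-forest choice are illustrative assumptions, not the paper's architecture):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(acc_xyz):
        """Summary statistics for one accelerometer window (n_samples, 3)."""
        return np.concatenate([acc_xyz.mean(axis=0),
                               acc_xyz.std(axis=0),
                               np.abs(np.diff(acc_xyz, axis=0)).mean(axis=0)])

    rng = np.random.default_rng(0)
    # Synthetic stand-ins for labelled windows (e.g. 'turn' vs 'brake').
    X = np.array([window_features(rng.normal(size=(128, 3))) for _ in range(200)])
    y = rng.integers(0, 2, size=200)

    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[:150], y[:150])
    print(clf.score(X[150:], y[150:]))   # chance-level on random labels
    ```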

  7. Combination of visual and symbolic knowledge: A survey in anatomy.

    PubMed

    Banerjee, Imon; Patané, Giuseppe; Spagnuolo, Michela

    2017-01-01

    Anatomy is one of the most extensively studied fields in medicine, and it generates a large body of knowledge that is heterogeneous and covers aspects that are largely independent in nature. Visual and symbolic modalities are mainly adopted for exemplifying knowledge about human anatomy and are crucial for the evolution of computational anatomy. In particular, a tight integration of visual and symbolic modalities is beneficial to support knowledge-driven methods for biomedical investigation. In this paper, we review previous work on the presentation and sharing of anatomical knowledge, and the development of advanced methods for computational anatomy, also focusing on the key research challenges for harmonizing symbolic knowledge and spatial 3D data.

  8. Effects of portable computing devices on posture, muscle activation levels and efficiency.

    PubMed

    Werth, Abigail; Babski-Reeves, Kari

    2014-11-01

    Very little research exists on ergonomic exposures when using portable computing devices. This study quantified muscle activity (forearm and neck), posture (wrist, forearm and neck), and performance (gross typing speed and error rates) differences across three portable computing devices (laptop, netbook, and slate computer) and two work settings (desk and sofa) during data entry tasks. Twelve participants completed test sessions on a single computer using a test-rest-test protocol (30 min of work at one work setting, 15 min of rest, 30 min of work at the other work setting). The slate computer resulted in significantly more non-neutral wrist, elbow and neck postures, particularly when working on the sofa. Performance on the slate computer was four times lower than on the other computers, though lower muscle activity levels were also found. Potential for injury or illness may be elevated when working on smaller, portable computers in non-traditional work settings.

  9. Computation of temperature elevation in rabbit eye irradiated by 2.45-GHz microwaves with different field configurations.

    PubMed

    Hirata, Akimasa; Watanabe, Soichi; Taki, Masao; Fujiwara, Osamu; Kojima, Masami; Sasaki, Kazuyuki

    2008-02-01

    This study calculated the temperature elevation in the rabbit eye caused by 2.45-GHz near-field exposure systems. First, we calculated specific absorption rate (SAR) distributions in the eye for different antennas and then compared them with those observed in previous studies. Next, we re-examined the temperature elevation in the rabbit eye due to a horizontally-polarized dipole antenna with a C-shaped director, which was used in a previous study. From our computational results, we found that the decisive factors for the SAR distribution in the rabbit eye were the polarization of the electromagnetic wave and the antenna aperture. Next, we quantified the eye-averaged SAR as 67 W kg⁻¹ for the dipole antenna with an input power density at the eye surface of 150 mW cm⁻², which was specified in the previous work as the minimum cataractogenic power density. The effect of administering anesthesia on the temperature elevation was roughly 30% in this case. Additionally, we discuss the position at which the maximum temperature in the lens appears for different 2.45-GHz microwave systems. That position was found to be around the posterior of the lens regardless of the exposure condition, which indicates that the original temperature distribution in the eye was the dominant factor.

  10. Inhibition in movement plan competition: reach trajectories curve away from remembered and task-irrelevant present but not from task-irrelevant past visual stimuli.

    PubMed

    Moehler, Tobias; Fiehler, Katja

    2017-11-01

    The current study investigated the role of automatic encoding and maintenance of remembered, past, and present visual distractors for reach movement planning. Previous research on eye movements showed that saccades curve away from locations actively kept in working memory and also from task-irrelevant perceptually present visual distractors, but not from task-irrelevant past distractors. Curvature away has been associated with an inhibitory mechanism resolving the competition between multiple active movement plans. Here, we examined whether reach movements are subject to a similar inhibitory mechanism and thus show systematic modulation of reach trajectories when the location of a previously presented distractor has to be (a) maintained in working memory or (b) ignored, or (c) when the distractor is perceptually present. Participants performed vertical reach movements on a computer monitor from a home to a target location. Distractors appeared laterally and near or far from the target (equidistant from central fixation). We found that reaches curved away from the distractors located close to the target when the distractor location had to be memorized and when it was perceptually present, but not when the past distractor had to be ignored. Our findings suggest that automatically encoding present distractors and actively maintaining the location of past distractors in working memory evoke a similar response competition resolved by inhibition, as has been previously shown for saccadic eye movements.

  11. Uplink transmit beamforming design for SINR maximization with full multiuser channel state information

    NASA Astrophysics Data System (ADS)

    Xi, Songnan; Zoltowski, Michael D.

    2008-04-01

    Multiuser multiple-input multiple-output (MIMO) systems are considered in this paper. We continue our research on uplink transmit beamforming design for multiple users under the assumption that the full multiuser channel state information, which is the collection of the channel state information between each of the users and the base station, is known not only to the receiver but also to all the transmitters. We propose an algorithm for designing optimal beamforming weights in terms of maximizing the signal-to-interference-plus-noise ratio (SINR). Through statistical modeling, we decouple the original mathematically intractable optimization problem and achieve a closed-form solution. As in our previous work, the minimum mean-squared error (MMSE) receiver with successive interference cancellation (SIC) is adopted for multiuser detection. The proposed scheme is compared with an existing jointly optimized transceiver design, referred to as the joint transceiver in this paper, and with our previously proposed eigen-beamforming algorithm. Simulation results demonstrate that our algorithm, with much less computational burden, accomplishes almost the same performance as the joint transceiver for spatially independent MIMO channels and even better performance for spatially correlated MIMO channels. It also consistently outperforms our previously proposed eigen-beamforming algorithm.
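
    For intuition, a textbook single-link analogue of SINR-maximizing beamforming (not the paper's multiuser uplink algorithm) has the closed form w ∝ R⁻¹h, where R is the interference-plus-noise covariance and h the desired channel; the achieved maximum SINR is hᴴR⁻¹h:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_rx = 4
    h = (rng.normal(size=n_rx) + 1j * rng.normal(size=n_rx)) / np.sqrt(2)

    # Interference-plus-noise covariance: one interferer plus white noise.
    g = (rng.normal(size=n_rx) + 1j * rng.normal(size=n_rx)) / np.sqrt(2)
    R = np.outer(g, g.conj()) + 0.1 * np.eye(n_rx)

    w = np.linalg.solve(R, h)                             # w ∝ R^{-1} h
    sinr = abs(w.conj() @ h) ** 2 / (w.conj() @ R @ w).real
    print(sinr, (h.conj() @ np.linalg.solve(R, h)).real)  # identical
    ```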

  12. Gaze data reveal distinct choice processes underlying model-based and model-free reinforcement learning

    PubMed Central

    Konovalov, Arkady; Krajbich, Ian

    2016-01-01

    Organisms appear to learn and make decisions using different strategies known as model-free and model-based learning; the former is mere reinforcement of previously rewarded actions and the latter is a forward-looking strategy that involves evaluation of action-state transition probabilities. Prior work has used neural data to argue that both model-based and model-free learners implement a value comparison process at trial onset, but model-based learners assign more weight to forward-looking computations. Here using eye-tracking, we report evidence for a different interpretation of prior results: model-based subjects make their choices prior to trial onset. In contrast, model-free subjects tend to ignore model-based aspects of the task and instead seem to treat the decision problem as a simple comparison process between two differentially valued items, consistent with previous work on sequential-sampling models of decision making. These findings illustrate a problem with assuming that experimental subjects make their decisions at the same prescribed time. PMID:27511383
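
    A toy contrast between the two strategies (illustrative only, not the study's two-stage task or fitted models): the model-based agent evaluates actions through a learned transition model at decision time, while the model-free agent merely caches reinforced outcomes:

    ```python
    import numpy as np

    P = np.array([[0.7, 0.3],            # P[action, next_state]
                  [0.3, 0.7]])
    state_reward = np.array([1.0, 0.0])  # current reward in each state

    # Model-based: forward-looking evaluation from the transition model.
    q_mb = P @ state_reward              # [0.7, 0.3]

    # Model-free: incremental reinforcement of experienced outcomes.
    q_mf, alpha = np.zeros(2), 0.1
    for action, reward in [(0, 1.0), (0, 1.0), (1, 0.0)]:
        q_mf[action] += alpha * (reward - q_mf[action])

    print(q_mb, q_mf)  # both favour action 0, for different reasons
    ```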

  13. Using PEACE Data from the four CLUSTER Spacecraft to Measure Compressibility, Vorticity, and the Taylor Microscale in the Magnetosheath and Plasma Sheet

    NASA Technical Reports Server (NTRS)

    Goldstein, Melvyn L.; Parks, George; Gurgiolo, C.; Fazakerley, Andrew N.

    2008-01-01

    We present determinations of compressibility and vorticity in the magnetosheath and plasma sheet using moments from the four PEACE thermal electron instruments on CLUSTER. The methodology used assumes a linear variation of the moments throughout the volume defined by the four satellites, which allows spatially independent estimates of the divergence, curl, and gradient. Once the vorticity has been computed, it is possible to estimate directly the Taylor microscale. We have shown previously that the technique works well in the solar wind. Because the background flow speed in the magnetosheath and plasma sheet is usually less than the Alfven speed, the Taylor frozen-in-flow approximation cannot be used. Consequently, this four spacecraft approach is the only viable method for obtaining the wave number properties of the ambient fluctuations. Our results using electron velocity moments will be compared with previous work using magnetometer data from the FGM experiment on Cluster.
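
    The linear-variation estimate can be sketched directly: fit v(r) ≈ v₀ + G·r through the four measurement points by least squares, then read the divergence (compressibility) and curl (vorticity) off the gradient tensor G. The positions and velocity moments below are hypothetical:

    ```python
    import numpy as np

    pos = np.array([[0., 0., 0.], [1., 0., 0.],      # spacecraft positions, km
                    [0., 1., 0.], [0., 0., 1.]])
    vel = np.array([[10., 0., 0.], [10., 2., 0.],    # velocity moments, km/s
                    [10., 0., 0.], [10., 0., 1.]])

    # Least-squares fit of each component to v_i(r) = a_i + G_ij r_j.
    A = np.hstack([np.ones((4, 1)), pos])            # columns: 1, x, y, z
    coef, *_ = np.linalg.lstsq(A, vel, rcond=None)
    G = coef[1:].T                                   # G[i, j] = dv_i/dx_j

    divergence = np.trace(G)                         # compressibility
    vorticity = np.array([G[2, 1] - G[1, 2],         # curl of fitted field
                          G[0, 2] - G[2, 0],
                          G[1, 0] - G[0, 1]])
    print(divergence, vorticity)                     # 1.0, [0. 0. 2.]
    ```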

  14. Electrical Field Guided Electrospray Deposition for Production of Gradient Particle Patterns.

    PubMed

    Yan, Wei-Cheng; Xie, Jingwei; Wang, Chi-Hwa

    2018-06-06

    Our previous work demonstrated the uniform particle pattern formation on the substrates using electrical field guided electrospray deposition. In this work, we reported for the first time the fabrication of gradient particle patterns on glass slides using an additional point, line, or bar electrode based on our previous electrospray deposition configuration. We also demonstrated that the polydimethylsiloxane (PDMS) coating could result in the formation of uniform particle patterns instead of gradient particle patterns on glass slides using the same experimental setup. Meanwhile, we investigated the effect of experimental configurations on the gradient particle pattern formation by computational simulation. The simulation results are in line with experimental observations. The formation of gradient particle patterns was ascribed to the gradient of electric field and the corresponding focusing effect. Cell patterns can be formed on the particle patterns deposited on PDMS-coated glass slides. The formed particle patterns hold great promise for high-throughput screening of biomaterial-cell interactions and sensing.

  15. Z2Pack: Numerical implementation of hybrid Wannier centers for identifying topological materials

    NASA Astrophysics Data System (ADS)

    Gresch, Dominik; Autès, Gabriel; Yazyev, Oleg V.; Troyer, Matthias; Vanderbilt, David; Bernevig, B. Andrei; Soluyanov, Alexey A.

    2017-02-01

    The intense theoretical and experimental interest in topological insulators and semimetals has established band structure topology as a fundamental material property. Consequently, identifying band topologies has become an important, but often challenging, problem, with no exhaustive solution at the present time. In this work we compile a series of techniques, some previously known, that allow for a solution to this problem for a large set of the possible band topologies. The method is based on tracking hybrid Wannier charge centers computed for relevant Bloch states, and it works at all levels of materials modeling: continuous k .p models, tight-binding models, and ab initio calculations. We apply the method to compute and identify Chern, Z2, and crystalline topological insulators, as well as topological semimetal phases, using real material examples. Moreover, we provide a numerical implementation of this technique (the Z2Pack software package) that is ideally suited for high-throughput screening of materials databases for compounds with nontrivial topologies. We expect that our work will allow researchers to (a) identify topological materials optimal for experimental probes, (b) classify existing compounds, and (c) reveal materials that host novel, not yet described, topological states.
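
    The Wilson-loop construction underlying the method can be sketched independently of the Z2Pack implementation; the example below tracks hybrid Wannier charge centers for the two-band Qi-Wu-Zhang Chern-insulator model (the model choice and discretization are illustrative assumptions):

    ```python
    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    def occupied_state(kx, ky, m=-1.0):
        """Lower-band eigenvector of h(k) = sin kx sx + sin ky sy
        + (m + cos kx + cos ky) sz."""
        h = np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz
        _, vecs = np.linalg.eigh(h)
        return vecs[:, 0]

    def wannier_center(ky, n_k=64):
        """Hybrid Wannier center from a discretized Wilson loop along kx;
        the closed product of overlaps is gauge invariant."""
        kxs = np.linspace(0.0, 2.0 * np.pi, n_k, endpoint=False)
        u = [occupied_state(kx, ky) for kx in kxs]
        loop = 1.0 + 0.0j
        for i in range(n_k):
            loop *= np.vdot(u[i], u[(i + 1) % n_k])
        return -np.angle(loop) / (2.0 * np.pi)   # units of lattice constant

    # The winding of the center as ky sweeps the BZ gives the Chern number.
    for ky in np.linspace(0.0, 2.0 * np.pi, 9):
        print(f"ky = {ky:5.2f}   center = {wannier_center(ky):+.3f}")
    ```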

  16. Computational Analysis on Stent Geometries in Carotid Artery: A Review

    NASA Astrophysics Data System (ADS)

    Paisal, Muhammad Sufyan Amir; Taib, Ishkrizat; Ismail, Al Emran

    2017-01-01

    This paper reviews the work of previous researchers to gather information for the current study, which concerns the computational analysis of stent geometries in the carotid artery. Stent implantation in the carotid artery has become a popular treatment for hypertension-related arterial diseases such as stenosis, thrombosis, atherosclerosis and embolization, reducing the rates of mortality and morbidity. For arterial stenting, previous researchers developed many types of mathematical models in which the physiological variables of the artery are analogized to electrical variables. Computational fluid dynamics (CFD) analyses of the artery could then be performed, a method also used by previous researchers. This leads to the current study, which seeks the hemodynamic characteristics associated with artery stenting, such as wall shear stress (WSS) and wall shear stress gradient (WSSG). Another objective of this study is to evaluate current stent configurations for full optimization in reducing arterial side effects such as the restenosis rate in the weeks after stenting. The evaluation of a stent is based on decreasing strut-strut intersections, decreasing strut width and increasing strut-strut spacing. Existing stent configurations are effective in widening the narrowed arterial wall, but diseases such as thrombosis still occur in the early and late stages after stent implantation. The outcome of this study is thus a prediction of the reduction in restenosis rate, and the WSS distribution is expected to indicate which stent configuration is best.

  17. Simulating Self-Assembly with Simple Models

    NASA Astrophysics Data System (ADS)

    Rapaport, D. C.

    Results from recent molecular dynamics simulations of virus capsid self-assembly are described. The model is based on rigid trapezoidal particles designed to form polyhedral shells of size 60, together with an atomistic solvent. The underlying bonding process is fully reversible. More extensive computations are required than in previous work on icosahedral shells built from triangular particles, but the outcome is a high yield of closed shells. Intermediate clusters have a variety of forms, and bond counts provide a useful classification scheme.

  18. New electron-energy transfer rates for vibrational excitation of O2

    NASA Astrophysics Data System (ADS)

    Jones, D. B.; Campbell, L.; Bottema, M. J.; Brunger, M. J.

    2003-09-01

    We report on our computation of electron-energy transfer rates for vibrational excitation of O2. This work was necessitated by inadequacies in the electron-impact cross section databases employed in previous studies and, in one case, an inaccurate approximate formulation of the rate equation. Both of these inadequacies led to incorrect energy transfer rates being published in the literature. We also demonstrate the importance of using cross sections that span an energy range extended enough to appropriately describe the environment under investigation.
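
    The rate coefficient at issue is the usual integral k = ∫ σ(E) v(E) f(E) dE over the electron energy distribution; a minimal numerical sketch with a Maxwellian distribution and a placeholder step cross section (not the O2 data) is:

    ```python
    import numpy as np

    ME = 9.109e-31   # electron mass, kg
    QE = 1.602e-19   # J per eV

    def rate_coefficient(T_eV, sigma, E_max_eV=20.0, n=4000):
        """k = integral of sigma(E) v(E) f(E) dE, in m^3/s, Maxwellian f."""
        E = np.linspace(1e-4, E_max_eV, n)                    # eV
        v = np.sqrt(2.0 * E * QE / ME)                        # m/s
        f = 2.0 / np.sqrt(np.pi) * np.sqrt(E) * T_eV**-1.5 \
            * np.exp(-E / T_eV)                               # per eV
        return np.sum(sigma(E) * v * f) * (E[1] - E[0])

    sigma = lambda E: 1e-22 * (E > 0.2)   # placeholder cross section, m^2
    print(rate_coefficient(T_eV=1.0, sigma=sigma))
    ```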

  19. Towards a Cognitively Realistic Computational Model of Team Problem Solving Using ACT-R Agents and the ELICIT Experimentation Framework

    DTIC Science & Technology

    2014-06-01

    ... intelligence analysis processes. However, as has been noted in previous work (e.g., [42]), there are a number of important differences between the nature of the ... problem encountered in the context of the ELICIT task and the problems dealt with by intelligence analysts. Perhaps most importantly, the fact that a ... (see Section 7) ... departure from the reality of most intelligence analysis situations: in most real-world intelligence analysis problems agents have ...

  20. Gastrointestinal bleeding detection in wireless capsule endoscopy images using handcrafted and CNN features.

    PubMed

    Xiao Jia; Meng, Max Q-H

    2017-07-01

    Gastrointestinal (GI) bleeding detection plays an essential role in wireless capsule endoscopy (WCE) examination. In this paper, we present a new approach for WCE bleeding detection that combines handcrafted (HC) features and convolutional neural network (CNN) features. Compared with our previous work, a smaller-scale CNN architecture is constructed to lower the computational cost. In experiments, we show that the proposed strategy is highly capable when training data is limited, and yields results comparable to or better than the latest methods.
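
    A minimal sketch of the fusion idea (placeholder features and classifier, not the paper's network): concatenate handcrafted colour statistics with a fixed-length CNN embedding and train a linear classifier on top:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def handcrafted_features(img):
        """Simple colour statistics of an RGB image (H, W, 3) in [0, 1]."""
        red_ratio = img[..., 0] / (img.sum(axis=-1) + 1e-6)
        return np.array([red_ratio.mean(), red_ratio.std(),
                         img.mean(), img.std()])

    def cnn_features(img):
        """Placeholder for a learned embedding, e.g. from a small CNN's
        penultimate layer; here just a deterministic random vector."""
        seed = int(img.sum() * 1e4) % 2**32
        return np.random.default_rng(seed).normal(size=32)

    rng = np.random.default_rng(0)
    imgs = rng.random(size=(100, 64, 64, 3))
    labels = rng.integers(0, 2, size=100)

    X = np.array([np.concatenate([handcrafted_features(im), cnn_features(im)])
                  for im in imgs])
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    ```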

  1. Hard-spin mean-field theory: A systematic derivation and exact correlations in one dimension

    PubMed

    Kabakcioglu

    2000-04-01

    Hard-spin mean-field theory is an improved mean-field approach which has proven to give accurate results, especially for frustrated spin systems, with relatively little computational effort. In this work, the previous phenomenological derivation is supplanted by a systematic and generic derivation that opens the possibility for systematic improvements, especially for the calculation of long-range correlation functions. A first level of improvement suffices to recover the exact long-range values of the correlation functions in one dimension.

  2. Severity Summarization and Just in Time Alert Computation in mHealth Monitoring.

    PubMed

    Pathinarupothi, Rahul Krishnan; Alangot, Bithin; Rangan, Ekanath

    2017-01-01

    Mobile health is fast evolving into a practical solution to remotely monitor high-risk patients and deliver timely intervention in case of emergencies. Building upon our previous work on a fast and power-efficient summarization framework for remote health monitoring applications, called RASPRO (Rapid Alerts Summarization for Effective Prognosis), we have developed a real-time criticality detection technique, which ensures meeting physician-defined intervention times. We also present the results from initial testing of this technique.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cembranos, Jose A. R.; Diaz-Cruz, J. Lorenzo; Prado, Lilian

    Dark Matter direct detection experiments are able to exclude interesting parameter space regions of particle models which predict a significant abundance of thermal relics. We use recent data to constrain the branon model and to compute the region that is favored by CDMS measurements. Within this work, we also update present collider constraints with new studies coming from the LHC. Despite the present low luminosity, it is remarkable that for heavy branons, CMS and ATLAS measurements are already more constraining than previous analyses performed with TEVATRON and LEP data.

  4. Fusion Simulation Project Workshop Report

    NASA Astrophysics Data System (ADS)

    Kritz, Arnold; Keyes, David

    2009-03-01

    The mission of the Fusion Simulation Project is to develop a predictive capability for the integrated modeling of magnetically confined plasmas. This FSP report adds to the previous activities that defined an approach to integrated modeling in magnetic fusion. These previous activities included a Fusion Energy Sciences Advisory Committee panel that was charged to study integrated simulation in 2002. The report of that panel [Journal of Fusion Energy 20, 135 (2001)] recommended the prompt initiation of a Fusion Simulation Project. In 2003, the Office of Fusion Energy Sciences formed a steering committee that developed a project vision, roadmap, and governance concepts [Journal of Fusion Energy 23, 1 (2004)]. The current FSP planning effort involved 46 physicists, applied mathematicians and computer scientists, from 21 institutions, formed into four panels and a coordinating committee. These panels were constituted to consider: Status of Physics Components, Required Computational and Applied Mathematics Tools, Integration and Management of Code Components, and Project Structure and Management. The ideas, reported here, are the products of these panels, working together over several months and culminating in a 3-day workshop in May 2007.

  5. Language Networks Associated with Computerized Semantic Indices

    PubMed Central

    Pakhomov, Serguei V. S.; Jones, David T.; Knopman, David S.

    2014-01-01

    Tests of generative semantic verbal fluency are widely used to study organization and representation of concepts in the human brain. Previous studies demonstrated that clustering and switching behavior during verbal fluency tasks is supported by multiple brain mechanisms associated with semantic memory and executive control. Previous work relied on manual assessments of semantic relatedness between words and grouping of words into semantic clusters. We investigated a computational linguistic approach to measuring the strength of semantic relatedness between words based on latent semantic analysis of word co-occurrences in a subset of a large online encyclopedia. We computed semantic clustering indices and compared them to brain network connectivity measures obtained with task-free fMRI in a sample consisting of healthy participants and those differentially affected by cognitive impairment. We found that semantic clustering indices were associated with brain network connectivity in distinct areas including fronto-temporal, fronto-parietal and fusiform gyrus regions. This study shows that computerized semantic indices complement traditional assessments of verbal fluency to provide a more complete account of the relationship between brain and verbal behavior involved in the organization and retrieval of lexical information from memory. PMID:25315785
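
    The latent-semantic-analysis step can be sketched in a few lines: factor a word-by-context co-occurrence matrix with a truncated SVD, then score relatedness between consecutive fluency responses by cosine similarity (toy counts below, not the encyclopedia corpus):

    ```python
    import numpy as np

    vocab = ["dog", "cat", "wolf", "spoon", "fork"]
    C = np.array([[4, 1, 0, 0],          # word-by-context co-occurrences
                  [3, 2, 0, 0],
                  [2, 1, 1, 0],
                  [0, 0, 3, 2],
                  [0, 0, 2, 3]], dtype=float)

    U, s, _ = np.linalg.svd(C, full_matrices=False)
    W = U[:, :2] * s[:2]                 # rank-2 word vectors

    def relatedness(a, b):
        va, vb = W[vocab.index(a)], W[vocab.index(b)]
        return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))

    print(relatedness("dog", "cat"))     # high: same semantic cluster
    print(relatedness("dog", "spoon"))   # low: a cluster switch
    ```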

  6. Software for keratometry measurements using portable devices

    NASA Astrophysics Data System (ADS)

    Iyomasa, C. M.; Ventura, L.; De Groote, J. J.

    2010-02-01

    In this work we present image processing software for automatic astigmatism measurements, developed for a hand-held keratometer. The system projects 36 light spots, from LEDs, displayed in a precise circle on the lachrymal film of the examined cornea. The displacement, size and deformation of the reflected image of these light spots are analyzed, providing the keratometry. The purpose of this research is to develop software that performs fast and precise calculations on mainstream mobile devices; in other words, software that can be implemented on portable computer systems that are low cost and easy to handle. This project allows portability for keratometers and is preliminary work toward a portable corneal topographer.

  7. Explosively produced fracture of oil shale

    NASA Astrophysics Data System (ADS)

    Morris, W. A.

    1982-05-01

    Rock fragmentation research in oil shale to develop the blasting technologies and designs required to prepare a rubble bed for a modified in situ retort is reported. Experimental work is outlined, proposed studies in explosive characterization are detailed and progress in numerical calculation techniques to predict fracture of the shale is described. A detailed geologic characterization of two Anvil Points experiment sites is related to previous work at Colony Mine. The second section focuses on computer modeling and theory. The latest generation of the stress wave code SHALE, its three dimensional potential, and the slide line package for it are described. A general stress rate equation that takes energy dependence into account is discussed.

  8. Decentralised control of continuous Petri nets

    NASA Astrophysics Data System (ADS)

    Wang, Liewei; Wang, Xu

    2017-05-01

    This paper focuses on decentralised control of systems modelled by continuous Petri nets, in which a target marking control problem is discussed. In some previous works, an efficient ON/OFF strategy-based minimum-time controller was developed. Nevertheless, the convergence is only proved for subclasses like Choice-Free nets. For a general net, the pre-conditions of applying the ON/OFF strategy are not given; therefore, the application scope of the method is unclear. In this work, we provide two sufficient conditions of applying the ON/OFF strategy-based controller to general nets. Furthermore, an extended algorithm for general nets is proposed, in which control laws are computed based on some limited information, without knowing the detailed structure of subsystems.
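
    A minimal sketch of ON/OFF target-marking control on a toy timed continuous Petri net (the net, rates, and required firing amounts are hypothetical; this illustrates the control law, not the paper's algorithm). Each transition fires at its maximal enabled speed until its required total firing amount is exhausted, then switches off:

    ```python
    import numpy as np

    C = np.array([[-1., 1.],     # incidence matrix (places x transitions)
                  [1., -1.]])
    Pre = np.array([[1., 0.],    # input arcs (places x transitions)
                    [0., 1.]])
    lam = np.array([1.0, 0.5])   # maximal firing rates
    m = np.array([3.0, 1.0])     # initial marking
    sigma = np.array([2.0, 0.0]) # firing amounts needed to reach the target

    dt = 0.01
    for _ in range(2000):
        # Enabling degree of each transition (infinite-server semantics).
        enab = np.min(np.where(Pre > 0, m[:, None] / np.maximum(Pre, 1e-12),
                               np.inf), axis=0)
        f = np.where(sigma > 1e-9, lam * enab, 0.0)   # ON/OFF law
        f = np.minimum(f, sigma / dt)                 # do not overshoot
        m += C @ (f * dt)
        sigma -= f * dt

    print(m)   # approaches m0 + C.sigma_target = [1, 3]
    ```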

  9. A Dynamic Simulation of Musculoskeletal Function in the Mouse Hindlimb During Trotting Locomotion

    PubMed Central

    Charles, James P.; Cappellari, Ornella; Hutchinson, John R.

    2018-01-01

    Mice are often used as animal models of various human neuromuscular diseases, and analysis of these models often requires detailed gait analysis. However, little is known of the dynamics of the mouse musculoskeletal system during locomotion. In this study, we used computer optimization procedures to create a simulation of trotting in a mouse, using a previously developed mouse hindlimb musculoskeletal model in conjunction with new experimental data, allowing muscle forces, activation patterns, and levels of mechanical work to be estimated. Analyzing musculotendon unit (MTU) mechanical work throughout the stride allowed a deeper understanding of their respective functions, with the rectus femoris MTU dominating the generation of positive and negative mechanical work during the swing and stance phases. This analysis also tested previous functional inferences of the mouse hindlimb made from anatomical data alone, such as the existence of a proximo-distal gradient of muscle function, thought to reflect adaptations for energy-efficient locomotion. The results do not strongly support the presence of this gradient within the mouse musculoskeletal system, particularly given relatively high negative net work output from the ankle plantarflexor MTUs, although more detailed simulations could test this further. This modeling analysis lays a foundation for future studies of the control of vertebrate movement through the development of neuromechanical simulations. PMID:29868576

  10. An Experimental and Computational Analysis of Primary Cilia Deflection Under Fluid Flow

    PubMed Central

    Downs, Matthew E.; Nguyen, An M.; Herzog, Florian A.; Hoey, David A.; Jacobs, Christopher R.

    2013-01-01

    In this work we have developed a novel model of the deflection of primary cilia experiencing fluid flow accounting for phenomena not previously considered. Specifically, we developed a large rotation formulation that accounts for rotation at the base of the cilium, the initial shape of the cilium and fluid drag at high deflection angles. We utilized this model to analyze full three dimensional datasets of primary cilia deflecting under fluid flow acquired with high-speed confocal microscopy. We found a wide variety of previously unreported bending shapes and behaviors. We also analyzed post-flow relaxation patterns. Results from our combined experimental and theoretical approach suggest that the average flexural rigidity of primary cilia might be higher than previously reported (Schwartz et al. 1997). In addition our findings indicate the mechanics of primary cilia are richly varied and mechanisms may exist to alter their mechanical behavior. PMID:22452422

  11. Modeling of Transient Flow Mixing of Streams Injected into a Mixing Chamber

    NASA Technical Reports Server (NTRS)

    Voytovych, Dmytro M.; Merkle, Charles L.; Lucht, Robert P.; Hulka, James R.; Jones, Gregg W.

    2006-01-01

    Ignition is recognized as one of the critical drivers in the reliability of multiple-start rocket engines. Residual combustion products from previous engine operation can condense on valves and related structures, thereby creating difficulties for subsequent starting procedures. Alternative ignition methods that require fewer valves can mitigate the valve reliability problem, but require improved understanding of the spatial and temporal propellant distribution in the pre-ignition chamber. Current design tools based mainly on one-dimensional analysis and empirical models cannot predict local details of the injection and ignition processes. The goal of this work is to evaluate the capability of modern computational fluid dynamics (CFD) tools in predicting the transient flow mixing in the pre-ignition environment by comparing the results with experimental data. This study is a part of a program to improve analytical methods and methodologies to analyze reliability and durability of combustion devices. In the present paper we describe a series of detailed computational simulations of the unsteady mixing events as the cold propellants are first introduced into the chamber as a first step in providing this necessary environmental description. The present computational modeling represents a complement to parallel experimental simulations and includes comparisons with experimental results from that effort. A large number of rocket engine ignition studies have been previously reported. Here we limit our discussion to the work discussed in Refs. 2, 3 and 4, which is both similar to and different from the present approach. The similarities arise from the fact that both efforts involve detailed experimental/computational simulations of the ignition problem. The differences arise from the underlying philosophy of the two endeavors. The approach in Refs. 2 to 4 is a classical ignition study in which the focus is on the response of a propellant mixture to an ignition source, with emphasis on the level of energy needed for ignition and the ensuing flame propagation issues. Our focus in the present paper is on identifying the unsteady mixing processes that provide the propellant mixture in which the ignition source is to be placed. In particular, we wish to characterize the spatial and temporal mixture distribution with a view toward identifying preferred spatial and temporal locations for the ignition source. As such, the present work is limited to cold flow (pre-ignition) conditions.

  12. Statistical Analysis of CFD Solutions From the Fifth AIAA Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Morrison, Joseph H.

    2013-01-01

    A graphical framework is used for statistical analysis of the results from an extensive N-version test of a collection of Reynolds-averaged Navier-Stokes computational fluid dynamics codes. The solutions were obtained by code developers and users from North America, Europe, Asia, and South America using a common grid sequence and multiple turbulence models for the June 2012 fifth Drag Prediction Workshop sponsored by the AIAA Applied Aerodynamics Technical Committee. The aerodynamic configuration for this workshop was the Common Research Model subsonic transport wing-body previously used for the 4th Drag Prediction Workshop. This work continues the statistical analysis begun in the earlier workshops and compares the results from the grid convergence study of the most recent workshop with previous workshops.

  13. Computer use at work is associated with self-reported depressive and anxiety disorder.

    PubMed

    Kim, Taeshik; Kang, Mo-Yeol; Yoo, Min-Sang; Lee, Dongwook; Hong, Yun-Chul

    2016-01-01

    With the development of technology, extensive use of computers in the workplace is prevalent and increases efficiency. However, computer users are facing new harmful working conditions with high workloads and longer hours. This study aimed to investigate the association between computer use at work and self-reported depressive and anxiety disorder (DAD) in a nationally representative sample of South Korean workers. This cross-sectional study was based on the third Korean Working Conditions Survey (2011), and 48,850 workers were analyzed. Information about computer use and DAD was obtained from a self-administered questionnaire. We investigated the relation between computer use at work and DAD using logistic regression. The 12-month prevalence of DAD in computer-using workers was 1.46 %. After adjustment for socio-demographic factors, the odds ratio for DAD was higher in workers using computers more than 75 % of their workday (OR 1.69, 95 % CI 1.30-2.20) than in workers using computers less than 50 % of their shift. After stratifying by working hours, computer use for over 75 % of the work time was significantly associated with increased odds of DAD in 20-39, 41-50, 51-60, and over 60 working hours per week. After stratifying by occupation, education, and job status, computer use for more than 75 % of the work time was related with higher odds of DAD in sales and service workers, those with high school and college education, and those who were self-employed and employers. A high proportion of computer use at work may be associated with depressive and anxiety disorder. This finding suggests the necessity of a work guideline to help the workers suffering from high computer use at work.

  14. Time-Resolved Particle Image Velocimetry Measurements with Wall Shear Stress and Uncertainty Quantification for the FDA Nozzle Model.

    PubMed

    Raben, Jaime S; Hariharan, Prasanna; Robinson, Ronald; Malinauskas, Richard; Vlachos, Pavlos P

    2016-03-01

    We present advanced particle image velocimetry (PIV) processing, post-processing, and uncertainty estimation techniques to support the validation of computational fluid dynamics analyses of medical devices. This work is an extension of a previous FDA-sponsored multi-laboratory study, which used a medical device mimicking geometry referred to as the FDA benchmark nozzle model. Experimental measurements were performed using time-resolved PIV at five overlapping regions of the model for Reynolds numbers in the nozzle throat of 500, 2000, 5000, and 8000. Images included a twofold increase in spatial resolution in comparison to the previous study. Data was processed using ensemble correlation, dynamic range enhancement, and phase correlations to increase signal-to-noise ratios and measurement accuracy, and to resolve flow regions with large velocity ranges and gradients, which is typical of many blood-contacting medical devices. Parameters relevant to device safety, including shear stress at the wall and in bulk flow, were computed using radial basis functions. In addition, in-field spatially resolved pressure distributions, Reynolds stresses, and energy dissipation rates were computed from PIV measurements. Velocity measurement uncertainty was estimated directly from the PIV correlation plane, and uncertainty analysis for wall shear stress at each measurement location was performed using a Monte Carlo model. Local velocity uncertainty varied greatly and depended largely on local conditions such as particle seeding, velocity gradients, and particle displacements. Uncertainty in low velocity regions in the sudden expansion section of the nozzle was greatly reduced by over an order of magnitude when dynamic range enhancement was applied. Wall shear stress uncertainty was dominated by uncertainty contributions from velocity estimations, which were shown to account for 90-99% of the total uncertainty. This study provides advancements in the PIV processing methodologies over the previous work through increased PIV image resolution, use of robust image processing algorithms for near-wall velocity measurements and wall shear stress calculations, and uncertainty analyses for both velocity and wall shear stress measurements. The velocity and shear stress analysis, with spatially distributed uncertainty estimates, highlights the challenges of flow quantification in medical devices and provides potential methods to overcome such challenges.
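
    Two of the ingredients named above can be sketched compactly: wall shear stress from the wall-normal gradient of a fitted near-wall velocity profile, and Monte Carlo propagation of per-point velocity uncertainty into that estimate (the profile values, viscosity, and uncertainty level are placeholders, not measurements from the study):

    ```python
    import numpy as np

    MU = 3.5e-3                                       # viscosity, Pa.s
    y = np.array([0.1, 0.2, 0.3, 0.4, 0.5]) * 1e-3    # wall distance, m
    u = np.array([0.05, 0.11, 0.16, 0.20, 0.24])      # velocity, m/s
    u_sigma = 0.01                                    # per-point uncertainty, m/s

    def wss(y, u):
        """tau_w = mu * du/dy at y = 0 from a quadratic least-squares fit."""
        c = np.polyfit(y, u, 2)      # c[0] y^2 + c[1] y + c[2]
        return MU * c[1]

    rng = np.random.default_rng(0)
    samples = [wss(y, u + rng.normal(scale=u_sigma, size=u.size))
               for _ in range(5000)]
    print(f"tau_w = {wss(y, u):.3f} +/- {np.std(samples):.3f} Pa")
    ```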

  15. Theory and computation of non-RRKM lifetime distributions and rates in chemical systems with three or more degrees of freedom

    NASA Astrophysics Data System (ADS)

    Gabern, Frederic; Koon, Wang S.; Marsden, Jerrold E.; Ross, Shane D.

    2005-11-01

    The computation, starting from basic principles, of chemical reaction rates in realistic systems (with three or more degrees of freedom) has been a longstanding goal of the chemistry community. Our current work, which merges tube dynamics with Monte Carlo methods, provides some key theoretical and computational tools for achieving this goal. We use basic tools of dynamical systems theory, merging the ideas of Koon et al. [W.S. Koon, M.W. Lo, J.E. Marsden, S.D. Ross, Heteroclinic connections between periodic orbits and resonance transitions in celestial mechanics, Chaos 10 (2000) 427-469.] and De Leon et al. [N. De Leon, M.A. Mehta, R.Q. Topper, Cylindrical manifolds in phase space as mediators of chemical reaction dynamics and kinetics. I. Theory, J. Chem. Phys. 94 (1991) 8310-8328.], particularly the use of invariant manifold tubes that mediate the reaction, into a tool for the computation of lifetime distributions and rates of chemical reactions and scattering phenomena, even in systems that exhibit non-statistical behavior. Previously, the main problem with the application of tube dynamics has been the computation of volumes in phase spaces of high dimension. The present work provides a starting point for overcoming this hurdle with some new ideas and implements them numerically. Specifically, an algorithm that uses tube dynamics to provide the initial bounding box for a Monte Carlo volume determination is used. The combination of a fine scale method for determining the phase space structure (invariant manifold theory) with statistical methods for volume computations (Monte Carlo) is the main contribution of this paper. The methodology is applied here to a three degree of freedom model problem and may be useful for higher degree of freedom systems as well.
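
    The volume step is ordinary hit-or-miss Monte Carlo once a bounding box is available (in the paper the box comes from the tube dynamics; here it is supplied directly and checked against a sphere of known volume):

    ```python
    import numpy as np

    def mc_volume(inside, lo, hi, n=200_000, seed=0):
        """Estimate the volume of {x : inside(x)} within the box [lo, hi]."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        pts = rng.uniform(lo, hi, size=(n, lo.size))
        return np.prod(hi - lo) * inside(pts).mean()

    # Unit 3-sphere: true volume 4*pi/3 ~ 4.18879.
    inside_sphere = lambda p: (p ** 2).sum(axis=1) <= 1.0
    print(mc_volume(inside_sphere, lo=[-1, -1, -1], hi=[1, 1, 1]))
    ```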

  16. GWAS with longitudinal phenotypes: performance of approximate procedures

    PubMed Central

    Sikorska, Karolina; Montazeri, Nahid Mostafavi; Uitterlinden, André; Rivadeneira, Fernando; Eilers, Paul HC; Lesaffre, Emmanuel

    2015-01-01

    Analysis of genome-wide association studies with longitudinal data using standard procedures, such as linear mixed model (LMM) fitting, leads to discouragingly long computation times. There is a need to speed up the computations significantly. In our previous work (Sikorska et al: Fast linear mixed model computations for genome-wide association studies with longitudinal data. Stat Med 2012; 32.1: 165–180), we proposed the conditional two-step (CTS) approach as a fast method providing an approximation to the P-value for the longitudinal single-nucleotide polymorphism (SNP) effect. In the first step a reduced conditional LMM is fit, omitting all the SNP terms. In the second step, the estimated random slopes are regressed on SNPs. The CTS has been applied to the bone mineral density data from the Rotterdam Study and proved to work very well even in unbalanced situations. In another article (Sikorska et al: GWAS on your notebook: fast semi-parallel linear and logistic regression for genome-wide association studies. BMC Bioinformatics 2013; 14: 166), we suggested semi-parallel computations, greatly speeding up fitting many linear regressions. Combining CTS with fast linear regression reduces the computation time from several weeks to a few minutes on a single computer. Here, we explore further the properties of the CTS both analytically and by simulations. We investigate the performance of our proposal in comparison with a related but different approach, the two-step procedure. It is analytically shown that for the balanced case, under mild assumptions, the P-value provided by the CTS is the same as from the LMM. For unbalanced data and in realistic situations, simulations show that the CTS method does not inflate the type I error rate and implies only a minimal loss of power. PMID:25712081
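
    The second CTS step lends itself to the semi-parallel formulation mentioned above: after centering, every per-SNP simple regression reduces to vectorised matrix algebra. The sketch below uses random stand-ins for the genotypes and the estimated random slopes:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 1000, 10_000                        # subjects, SNPs (toy sizes)
    snps = rng.integers(0, 3, size=(n, p)).astype(float)   # 0/1/2 genotypes
    slopes = rng.normal(size=n)                # estimated random slopes

    # All p simple regressions at once.
    x_c = snps - snps.mean(axis=0)
    y_c = slopes - slopes.mean()
    sxx = (x_c ** 2).sum(axis=0)
    beta = x_c.T @ y_c / sxx                   # per-SNP slope estimates
    rss = (y_c ** 2).sum() - beta ** 2 * sxx   # per-SNP residual sums of squares
    t_stat = beta / np.sqrt(rss / (n - 2) / sxx)
    print(t_stat[:5])
    ```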

  17. From the front line, report from a near paperless hospital: mixed reception among health care professionals.

    PubMed

    Lium, Jan-Tore; Laerum, Hallvard; Schulz, Tom; Faxvaag, Arild

    2006-01-01

    Many Norwegian hospitals that are equipped with an electronic medical record (EMR) system have now proceeded to withdraw the paper-based medical record from clinical workflow. In two previous survey-based studies on the effect of removing the paper-based medical record on the work of physicians, nurses and medical secretaries, we concluded that to scan and eliminate the paper-based record was feasible, but that the medical secretaries were the group that reported benefiting the most from the change. To further explore the effects of removing the paper-based record, especially in regard to medical personnel, we have now conducted a follow-up study of a hospital that has scanned and eliminated its paper-based record. A survey of 27 physicians, 60 nurses and 30 medical secretaries was conducted. The results were compared with those from a previous study conducted three years earlier at the same department. The questionnaire (see online Appendix) covered the frequency of use of the EMR system for specific tasks by physicians, nurses and medical secretaries, the ease of performing these tasks compared to previous routines, user satisfaction and computer literacy. Both physicians and nurses displayed increased use of the EMR compared to the previous study, while medical secretaries reported generally unchanged but high use. The increase in use was not accompanied by a similar change in factors such as computer literacy or technical changes, suggesting that these typical success factors are necessary but not sufficient.

  18. Controversial electronic structures and energies of Fe₂, Fe₂⁺, and Fe₂⁻ resolved by RASPT2 calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoyer, Chad E.; Manni, Giovanni Li; Truhlar, Donald G.

    2014-11-28

    The diatomic molecule Fe₂ was investigated using restricted active space second-order perturbation theory (RASPT2). This molecule is very challenging to study computationally because predictions about the ground state and excited states depend sensitively on the choice of the quantum chemical method. For Fe₂ we show that one needs to go beyond a full-valence active space in order to achieve even qualitative agreement with experiment for the dissociation energy, and we also obtain a smooth ground-state potential curve. In addition we report the first multireference study of Fe₂⁺, for which we predict an ⁸Σᵤ⁻ ground state, which was not predicted by previous computational studies. By using an active space large enough to remove the most serious deficiencies of previous theoretical work and by explicitly investigating the interpretations of previous experimental results, this study elucidates previous difficulties and provides – for the first time – a qualitatively correct treatment of Fe₂, Fe₂⁺, and Fe₂⁻. Moreover, this study represents a record in terms of the number of active electrons and active orbitals in the active space, namely 16 electrons in 28 orbitals. Conventional CASPT2 calculations can be performed with at most 16 electrons in 16 orbitals. We were able to overcome this limit by using the RASPT2 formalism.

  19. Prediction of thermal cycling induced cracking in polymer matrix composites

    NASA Technical Reports Server (NTRS)

    Mcmanus, Hugh L.

    1994-01-01

    The work done in the period August 1993 through February 1994 on the 'Prediction of Thermal Cycling Induced Cracking In Polymer Matrix Composites' program is summarized. Most of the work performed in this period, as well as the previous one, is described in detail in the attached Master's thesis, 'Analysis of Thermally Induced Damage in Composite Space Structures,' by Cecelia Hyun Seon Park. Work on a small thermal cycling and aging chamber was concluded in this period. The chamber was extensively tested and calibrated. Temperatures can be controlled very precisely, and are very uniform in the test chamber. Based on results obtained in the previous period of this program, further experimental progressive cracking studies were carried out. The laminates tested were selected to clarify the differences between the behaviors of thick and thin ply layers, and to explore other variables such as stacking sequence and scaling effects. Most specimens tested were made available from existing stock at Langley Research Center. One laminate type had to be constructed from available prepreg material at Langley Research Center. Specimens from this laminate were cut and prepared at MIT. Thermal conditioning was carried out at Langley Research Center, and at the newly constructed MIT facility. Specimens were examined by edge inspection and by crack configuration studies, in which specimens were sanded down in order to examine the distribution of cracks within the specimens. A method for predicting matrix cracking due to decreasing temperatures and/or thermal cycling in all plies of an arbitrary laminate was implemented as a computer code. The code also predicts changes in properties due to the cracking. Extensive correlations between test results and code predictions were carried out. The computer code was documented and is ready for distribution.

  20. Facilitating higher-fidelity simulations of axial compressor instability and other turbomachinery flow conditions

    NASA Astrophysics Data System (ADS)

    Herrick, Gregory Paul

    The quest to accurately capture flow phenomena with length-scales both short and long and to accurately represent complex flow phenomena within disparately sized geometry inspires a need for an efficient, high-fidelity, multi-block structured computational fluid dynamics (CFD) parallel computational scheme. This research presents and demonstrates a more efficient computational method by which to perform multi-block structured CFD parallel computational simulations, thus facilitating higher-fidelity solutions of complicated geometries (due to the inclusion of grids for "small" flow areas which are often merely modeled) and their associated flows. This computational framework offers greater flexibility and user-control in allocating the resource balance between process count and wall-clock computation time. The principal modifications implemented in this revision consist of a "multiple grid block per processing core" software infrastructure and an analytic computation of viscous flux Jacobians. The development of this scheme is largely motivated by the desire to simulate axial compressor stall inception with more complete gridding of the flow passages (including rotor tip clearance regions) than has been previously done while maintaining high computational efficiency (i.e., minimal consumption of computational resources), and thus this paradigm shall be demonstrated with an examination of instability in a transonic axial compressor. However, the paradigm presented herein facilitates CFD simulation of myriad previously impractical geometries and flows and is not limited to detailed analyses of axial compressor flows. While the simulations presented herein were technically possible under the previous structure of the subject software, they were much less computationally efficient and thus not pragmatically feasible; the previous research using this software to perform three-dimensional, full-annulus, time-accurate, unsteady, full-stage (with sliding-interface) simulations of rotating stall inception in axial compressors utilized tip clearance periodic models, while the scheme here is demonstrated by a simulation of axial compressor stall inception utilizing gridded rotor tip clearance regions. As will be discussed, much previous research---experimental, theoretical, and computational---has suggested that understanding clearance flow behavior is critical to understanding stall inception, and previous computational research efforts which have used tip clearance models have begged the question, "What about the clearance flows?". This research begins to address that question.

  1. Differences in muscle load between computer and non-computer work among office workers.

    PubMed

    Richter, J M; Mathiassen, S E; Slijper, H P; Over, E A B; Frens, M A

    2009-12-01

    Introduction of more non-computer tasks has been suggested to increase exposure variation and thus reduce musculoskeletal complaints (MSC) in computer-intensive office work. This study investigated whether muscle activity did, indeed, differ between computer and non-computer activities. Whole-day logs of input device use in 30 office workers were used to identify computer and non-computer work, using a range of classification thresholds (non-computer thresholds, NCTs). Exposure during these activities was assessed by bilateral electromyography recordings from the upper trapezius and lower arm. Contrasts in muscle activity between computer and non-computer work were distinct but small, even at the individualised, optimal NCT. Using an average group-based NCT resulted in less contrast, even in smaller subgroups defined by job function or MSC. Thus, computer activity logs should be used cautiously as proxies of biomechanical exposure. Conventional non-computer tasks may have a limited potential to increase variation in muscle activity during computer-intensive office work.

  2. Performance of computer vision in vivo flow cytometry with low fluorescence contrast

    PubMed Central

    Markovic, Stacey; Li, Siyuan; Niedre, Mark

    2015-01-01

    Detection and enumeration of circulating cells in the bloodstream of small animals are important in many areas of preclinical biomedical research, including cancer metastasis, immunology, and reproductive medicine. Optical in vivo flow cytometry (IVFC) represents a class of technologies that allow noninvasive and continuous enumeration of circulating cells without drawing blood samples. We recently developed a technique termed computer vision in vivo flow cytometry (CV-IVFC) that uses a high-sensitivity fluorescence camera and an automated computer vision algorithm to interrogate relatively large circulating blood volumes in the ear of a mouse. We detected circulating cells at concentrations as low as 20 cells/mL. In the present work, we characterized the performance of CV-IVFC under low-contrast imaging conditions with (1) weak cell fluorescent labeling, using cell-simulating fluorescent microspheres of varying brightness, and (2) high background tissue autofluorescence, by varying the autofluorescence properties of optical phantoms. Our analysis indicates that CV-IVFC can robustly track and enumerate circulating cells with at least 50% sensitivity even at contrast two orders of magnitude worse than in our previous in vivo work. These results support the significant potential utility of CV-IVFC in a wide range of in vivo biological models. PMID:25822954

  3. Optimization of Simplex Atomizer Inlet Port Configuration through Computational Fluid Dynamics and Experimental Study for Aero-Gas Turbine Applications

    NASA Astrophysics Data System (ADS)

    Marudhappan, Raja; Chandrasekhar, Udayagiri; Hemachandra Reddy, Koni

    2017-10-01

    The design of a plain orifice simplex atomizer for use in the annular combustion system of an 1100 kW turboshaft engine is optimized. The discrete flow field of jet fuel inside the swirl chamber of the atomizer, and up to 1.0 mm downstream of the atomizer exit, is simulated using commercial computational fluid dynamics (CFD) software. The Euler-Euler multiphase model is used to solve two sets of momentum equations for the liquid and gaseous phases, and the volume fraction of each phase is tracked throughout the computational domain. The atomizer design is optimized after performing several 2D axisymmetric analyses with swirl, and the optimized inlet port design parameters are used for the 3D simulation. The Volume of Fluid (VOF) multiphase model is used in the simulation. The orifice exit diameter is 0.6 mm. The atomizer is fabricated with the optimized geometric parameters and its performance is tested in the laboratory. The experimental observations are compared with the results obtained from the 2D and 3D CFD simulations. The simulated velocity components, pressure field, streamlines and air-core dynamics along the atomizer axis are compared with previous research and found to be satisfactory. The work has led to a novel approach to the design of pressure swirl atomizers.

  4. Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing

    PubMed Central

    Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng

    2015-01-01

    A push-based database management system (DBMS) is a new type of data processing software that streams large volumes of data to concurrent query operators. The high data rate of such systems requires large computing power provided by the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query processing operations and enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a premise in the design of G-SDMS. With NVIDIA’s CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA stream. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize the main kernel scheduling disciplines in it. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels. PMID:26566545
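    The occupancy-based prediction described in this abstract can be illustrated with a toy calculation. The Python sketch below assumes a saturating-throughput form and a simple contention factor for kernels sharing the device via streams; the functional form and all constants are illustrative assumptions, not the paper's fitted model.

    ```python
    # Hypothetical occupancy-based model for compute-bound kernels:
    # achieved throughput scales with occupancy until it saturates
    # at the (assumed) device peak.
    PEAK_FLOPS = 4.7e12  # assumed device peak, FLOP/s

    def predicted_runtime(total_flops, occupancy):
        achieved = PEAK_FLOPS * min(1.0, occupancy)
        return total_flops / achieved

    def concurrent_runtimes(kernels):
        """kernels: list of (total_flops, occupancy) pairs launched in
        separate CUDA streams; combined occupancy above 1.0 is modeled
        as a uniform contention slowdown."""
        contention = max(1.0, sum(occ for _, occ in kernels))
        return [predicted_runtime(f, occ) * contention for f, occ in kernels]

    print(concurrent_runtimes([(1e12, 0.6), (5e11, 0.7)]))
    ```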

  5. Modeling Effects of RNA on Capsid Assembly Pathways via Coarse-Grained Stochastic Simulation

    PubMed Central

    Smith, Gregory R.; Xie, Lu; Schwartz, Russell

    2016-01-01

    The environment of a living cell is vastly different from that of an in vitro reaction system, an issue that presents great challenges to the use of in vitro models, or computer simulations based on them, for understanding biochemistry in vivo. Virus capsids make an excellent model system for such questions because they typically have few distinct components, making them amenable to in vitro and modeling studies, yet their assembly can involve complex networks of possible reactions that cannot be resolved in detail by any current experimental technology. We previously fit kinetic simulation parameters to bulk in vitro assembly data to yield a close match between simulated and real data, and then used the simulations to study features of assembly that cannot be monitored experimentally. The present work seeks to project how assembly in these simulations fit to in vitro data would be altered by computationally adding features of the cellular environment to the system, specifically the presence of nucleic acid about which many capsids assemble. The major challenge of such work is computational: simulating fine-scale assembly pathways on the scale and in the parameter domains of real viruses is far too computationally costly to allow for explicit models of nucleic acid interaction. We bypass that limitation by applying analytical models of nucleic acid effects to adjust kinetic rate parameters learned from in vitro data to see how these adjustments, singly or in combination, might affect fine-scale assembly progress. The resulting simulations exhibit surprising behavioral complexity, with distinct effects often acting synergistically to drive efficient assembly and alter pathways relative to the in vitro model. The work demonstrates how computer simulations can help us understand how assembly might differ between the in vitro and in vivo environments and what features of the cellular environment account for these differences. PMID:27244559
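    The rate-adjustment idea described above can be caricatured in a few lines: a Gillespie-style stochastic simulation in which an analytically derived factor rescales a kinetic rate learned from in vitro data. The reaction network, rates, and scale factor below are hypothetical placeholders, not the paper's fitted parameters.

    ```python
    import math
    import random

    # Toy Gillespie simulation of reversible dimer formation, with the
    # binding rate scaled by a factor standing in for the nucleic-acid
    # effect (all values are illustrative assumptions).
    def gillespie_dimers(n_monomers=1000, k_on=1e-3, k_off=1e-2,
                         rna_rate_scale=5.0, t_end=100.0, seed=1):
        rng = random.Random(seed)
        k_on *= rna_rate_scale          # RNA raises the effective on-rate
        monomers, dimers, t = n_monomers, 0, 0.0
        while t < t_end:
            a_bind = k_on * monomers * (monomers - 1) / 2.0
            a_break = k_off * dimers
            a_total = a_bind + a_break
            if a_total == 0:
                break
            t += -math.log(rng.random()) / a_total  # exponential waiting time
            if rng.random() * a_total < a_bind:
                monomers -= 2; dimers += 1          # binding event
            else:
                monomers += 2; dimers -= 1          # dissociation event
        return monomers, dimers

    print(gillespie_dimers())
    ```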

  6. Change in Minimum Orbit Intersection Distance due to General Relativistic Precession in Small Solar System Bodies

    NASA Astrophysics Data System (ADS)

    Sekhar, Aswin; Valsecchi, Giovanni B.; Asher, David; Werner, Stephanie; Vaubaillon, Jeremie; Li, Gongjie

    2017-06-01

    One of the greatest successes of Einstein's General Theory of Relativity (GR) was the correct prediction of the perihelion precession of Mercury. The closed form expression to compute this precession tells us that substantial GR precession would occur only if the bodies have a combination of both moderately small perihelion distance and semi-major axis. Minimum Orbit Intersection Distance (MOID) is a quantity which helps us to understand the closest proximity of two orbits in space. Hence evaluating MOID is crucial to understand close encounters and collision scenarios better. In this work, we look at the possible scenarios where a small GR precession in argument of pericentre can create substantial changes in MOID for small bodies ranging from meteoroids to comets and asteroids. Previous works have looked into neat analytical techniques to understand different collision scenarios, and we use those standard expressions to compute MOID analytically. We find the nature of this mathematical function is such that a relatively small GR precession can lead to drastic changes in MOID values depending on the initial value of argument of pericentre. Numerical integrations were done with the MERCURY package incorporating GR code to test the same effects. A numerical approach showed the same interesting relationship (as shown by analytical theory) between values of argument of pericentre and the peaks or dips in MOID values. There is an overall agreement between both analytical and numerical methods. We find that GR precession could play an important role in the calculations pertaining to MOID and close encounter scenarios in the case of certain small solar system bodies (depending on their initial orbital elements) when long-term impact risk possibilities are considered. Previous works have looked into impact probabilities and collision scenarios on planets from different small body populations. This work aims to find certain sub-sets of small bodies where GR could play an interesting role. Certain parallels are drawn between the cases of asteroids, comets and small perihelion distance meteoroid streams.
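    For reference, the closed-form expression alluded to above is the standard first-order GR advance of the argument of pericentre per orbital revolution (heliocentric case):

    ```latex
    % GR pericentre advance per orbit for a body orbiting the Sun
    \Delta\omega_{\mathrm{GR}} \;=\; \frac{6\pi G M_{\odot}}{c^{2}\, a\, (1 - e^{2})}
    ```

    Because a(1 - e^2) appears in the denominator, the advance is appreciable only when both the semi-major axis a and the perihelion distance q = a(1 - e) are small, which is why the effect is confined to the particular sub-sets of small bodies discussed in the abstract.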

  7. A novel anisotropic fast marching method and its application to blood flow computation in phase-contrast MRI.

    PubMed

    Schwenke, M; Hennemuth, A; Fischer, B; Friman, O

    2012-01-01

    Phase-contrast MRI (PC MRI) can be used to assess blood flow dynamics noninvasively inside the human body. The acquired images can be reconstructed into flow vector fields. Traditionally, streamlines can be computed based on the vector fields to visualize flow patterns and particle trajectories. The traditional methods may give a false impression of precision, as they do not consider the measurement uncertainty in the PC MRI images. In our prior work, we incorporated the uncertainty of the measurement into the computation of particle trajectories. As a major part of the contribution, a novel numerical scheme for solving the anisotropic Fast Marching problem is presented. A computing time comparison to state-of-the-art methods is conducted on artificial tensor fields. A visual comparison of healthy to pathological blood flow patterns is given. The comparison shows that the novel anisotropic Fast Marching solver outperforms previous schemes in terms of computing time. The visual comparison of flow patterns directly visualizes large deviations of pathological flow from healthy flow. The novel anisotropic Fast Marching solver efficiently resolves even strongly anisotropic path costs. The visualization method enables the user to assess the uncertainty of particle trajectories derived from PC MRI images.
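    As context for the solver comparison, the baseline that anisotropic schemes generalize is the isotropic fast-marching/Dijkstra-style update sketched below in Python; an anisotropic solver replaces the scalar cost with a direction-dependent (tensor) cost. The grid representation and costs here are illustrative assumptions, not the authors' scheme.

    ```python
    import heapq

    # Minimal isotropic fast-marching sketch on a 2D grid: propagate
    # arrival costs outward from a source using a priority queue.
    def fast_march(cost, source):
        """cost: 2D list of positive traversal costs; source: (i, j)."""
        n, m = len(cost), len(cost[0])
        dist = [[float("inf")] * m for _ in range(n)]
        dist[source[0]][source[1]] = 0.0
        heap = [(0.0, source)]
        while heap:
            d, (i, j) = heapq.heappop(heap)
            if d > dist[i][j]:
                continue  # stale queue entry
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < n and 0 <= nj < m:
                    # edge cost: average of the two cells' scalar costs
                    nd = d + 0.5 * (cost[i][j] + cost[ni][nj])
                    if nd < dist[ni][nj]:
                        dist[ni][nj] = nd
                        heapq.heappush(heap, (nd, (ni, nj)))
        return dist

    print(fast_march([[1.0, 1.0, 5.0], [1.0, 2.0, 1.0]], (0, 0)))
    ```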

  8. Flame-Vortex Studies to Quantify Markstein Numbers Needed to Model Flame Extinction Limits

    NASA Technical Reports Server (NTRS)

    Driscoll, James F.; Feikema, Douglas A.

    2003-01-01

    This work has quantified a database of Markstein numbers for unsteady flames; future work will quantify a database of flame extinction limits for unsteady conditions. Unsteady extinction limits have not been documented previously; both a stretch rate and a residence time must be measured, since extinction requires that the stretch rate be sufficiently large for a sufficiently long residence time. The Markstein number (Ma) was measured for an inwardly-propagating flame (IPF) that is negatively stretched under microgravity conditions. Computations were also performed using RUN-1DL to explain the measurements. The Markstein number of an inwardly-propagating flame, for both the microgravity experiment and the computations, is significantly larger than that of an outwardly-propagating flame (OPF). The computed profiles of the various species within the flame suggest reasons for this difference: computed hydrogen concentrations build up ahead of the IPF but not the OPF. Understanding was gained by running the computations for both simplified and full-chemistry conditions. To explain the experimental findings, numerical simulations of both inwardly and outwardly propagating spherical flames (with complex chemistry) were generated using the RUN-1DL code, which includes 16 species and 46 reactions.

  9. Interactions of spatial strategies producing generalization gradient and blocking: A computational approach

    PubMed Central

    Dollé, Laurent; Chavarriaga, Ricardo

    2018-01-01

    We present a computational model of spatial navigation comprising different learning mechanisms in mammals, i.e., associative, cognitive mapping and parallel systems. This model is able to reproduce a large number of experimental results in different variants of the Morris water maze task, including standard associative phenomena (spatial generalization gradient and blocking), as well as navigation based on cognitive mapping. Furthermore, we show that competitive and cooperative patterns between different navigation strategies in the model allow us to explain previous, apparently contradictory results supporting either associative or cognitive mechanisms for spatial learning. The key computational mechanism to reconcile experimental results showing different influences of distal and proximal cues on the behavior, different learning times, and different abilities of individuals to alternatively perform spatial and response strategies relies on the dynamic coordination of navigation strategies, whose performance is evaluated online with a common currency through a modular approach. We provide a set of concrete experimental predictions to further test the computational model. Overall, this computational work sheds new light on inter-individual differences in navigation learning, and provides a formal and mechanistic approach to test various theories of spatial cognition in mammals. PMID:29630600

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aurich, Maike K.; Fleming, Ronan M. T.; Thiele, Ines

    Metabolomic data sets provide a direct read-out of cellular phenotypes and are increasingly generated to study biological questions. Previous work, by us and others, revealed the potential of analyzing extracellular metabolomic data in the context of the metabolic model using constraint-based modeling. With the MetaboTools, we make our methods available to the broader scientific community. The MetaboTools consist of a protocol, a toolbox, and tutorials of two use cases. The protocol describes, in a step-wise manner, the workflow of data integration, and computational analysis. The MetaboTools comprise the Matlab code required to complete the workflow described in the protocol. Tutorials explain the computational steps for integration of two different data sets and demonstrate a comprehensive set of methods for the computational analysis of metabolic models and stratification thereof into different phenotypes. The presented workflow supports integrative analysis of multiple omics data sets. Importantly, all analysis tools can be applied to metabolic models without performing the entire workflow. Taken together, the MetaboTools constitute a comprehensive guide to the intra-model analysis of extracellular metabolomic data from microbial, plant, or human cells. In conclusion, this computational modeling resource offers a broad set of computational analysis tools for a wide biomedical and non-biomedical research community.

  11. Blind quantum computation with identity authentication

    NASA Astrophysics Data System (ADS)

    Li, Qin; Li, Zhulin; Chan, Wai Hong; Zhang, Shengyu; Liu, Chengdong

    2018-04-01

    Blind quantum computation (BQC) allows a client with relatively few quantum resources or poor quantum technologies to delegate his computational problem to a quantum server such that the client's input, output, and algorithm are kept private. However, all existing BQC protocols focus on correctness verification of quantum computation but neglect authentication of participants' identity, which can lead to man-in-the-middle or denial-of-service attacks. In this work, we use quantum identification to overcome these two kinds of attack for BQC; we call the resulting approach QI-BQC. We propose two QI-BQC protocols based on a typical single-server BQC protocol and a double-server BQC protocol. The two protocols can ensure both data integrity and mutual identification between participants with the help of a third trusted party (TTP). In addition, an unjammable public channel between a client and a server, which is indispensable in previous BQC protocols, is unnecessary, although such a channel is required between the TTP and each participant at some instant. Furthermore, the method used to achieve identity verification in the presented protocols is general, and it can be applied to other similar BQC protocols.

  12. Digitizing zone maps, using modified LARSYS program. [computer graphics and computer techniques for mapping

    NASA Technical Reports Server (NTRS)

    Giddings, L.; Boston, S.

    1976-01-01

    A method for digitizing zone maps is presented, starting with colored images and producing a final one-channel digitized tape. This method automates the work previously done interactively on the Image-100 and Data Analysis System computers of the Johnson Space Center (JSC) Earth Observations Division (EOD). A color-coded map was digitized through color filters on a scanner to form a digital tape in LARSYS-2 or JSC Universal format. The taped image was classified by the EOD LARSYS program on the basis of training fields included in the image. Numerical values were assigned to all pixels in a given class, and the resulting coded zone map was written on a LARSYS or Universal tape. A unique spatial filter option permitted zones to be made homogeneous and edges of zones to be abrupt transitions from one zone to the next. A zoom option allowed the output image to have arbitrary dimensions in terms of number of lines and number of samples on a line. Printouts of the computer program are given and the images that were digitized are shown.

  13. Comparison of tests of accommodation for computer users.

    PubMed

    Kolker, David; Hutchinson, Robert; Nilsen, Erik

    2002-04-01

    With the increased use of computers in the workplace and at home, optometrists are finding more patients presenting with symptoms of Computer Vision Syndrome. Among these symptomatic individuals, research indicates that accommodative disorders are the most common vision finding. A prepresbyopic group (N = 30) and a presbyopic group (N = 30) were selected from a private practice. Assignment to a group was determined by age, accommodative amplitude, and near visual acuity with their distance prescription. Each subject was given a thorough vision and ocular health examination, then administered several nearpoint tests of accommodation at a computer working distance. All the tests produced similar results in the presbyopic group. For the prepresbyopic group, the tests yielded very different results. To effectively treat symptomatic VDT (video display terminal) users, optometrists must assess the accommodative system along with the binocular and refractive status. For presbyopic patients, all nearpoint tests studied will yield virtually the same result. However, the method of testing accommodation, as well as the test stimulus presented, will yield significantly different responses for prepresbyopic patients. Previous research indicates that a majority of patients prefer the higher plus prescription yielded by the Gaussian image test.

  14. An efficient and robust method for predicting helicopter rotor high-speed impulsive noise

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.

    1996-01-01

    A new formulation for the Ffowcs Williams-Hawkings quadrupole source, which is valid for a far-field in-plane observer, is presented. The far-field approximation is new and unique in that no further approximation of the quadrupole source strength is made and integrands with r^(-2) and r^(-3) dependence are retained. This paper focuses on the development of a retarded-time formulation in which time derivatives are analytically taken inside the integrals to avoid unnecessary computational work when the observer moves with the rotor. The new quadrupole formulation is similar to Farassat's thickness and loading formulation 1A. Quadrupole noise prediction is carried out in two parts: a preprocessing stage in which the previously computed flow field is integrated in the direction normal to the rotor disk, and a noise computation stage in which quadrupole surface integrals are evaluated for a particular observer position. Preliminary predictions for hover and forward flight agree well with experimental data. The method is robust and requires computer resources comparable to thickness and loading noise prediction.

  15. DeepX: Deep Learning Accelerator for Restricted Boltzmann Machine Artificial Neural Networks.

    PubMed

    Kim, Lok-Won

    2018-05-01

    Although there have been many decades of research and commercial presence on high-performance general-purpose processors, there are still many applications that require fully customized hardware architectures for further computational acceleration. Recently, deep learning has been successfully used in a wide variety of applications, but its heavy computation demand has considerably limited its practical applications. This paper proposes a fully pipelined acceleration architecture to alleviate the high computational demand of a class of artificial neural networks (ANNs), restricted Boltzmann machine (RBM) ANNs. The implemented RBM ANN accelerator (integrating network size, using 128 input cases per batch, and running at a 303-MHz clock frequency) integrated in a state-of-the-art field-programmable gate array (FPGA) (Xilinx Virtex 7 XC7V-2000T) provides a computational performance of 301 billion connection-updates-per-second and about 193 times higher performance than a software solution running on general-purpose processors. Most importantly, the architecture enables over 4 times (12 times in batch learning) higher performance compared with previous work when both are implemented in an FPGA device (XC2VP70).
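    For orientation, the core computation such an accelerator pipelines is the RBM contrastive-divergence update. A minimal CD-1 sketch in Python/NumPy follows (biases omitted for brevity; sizes and learning rate are arbitrary illustrative choices, not the paper's configuration).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_visible, n_hidden, batch, lr = 784, 256, 128, 0.01
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))  # weight matrix

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_step(v0):
        """One CD-1 weight gradient for a batch of visible vectors v0."""
        h0 = sigmoid(v0 @ W)                           # positive phase
        h_sample = (rng.random(h0.shape) < h0).astype(float)  # sample hiddens
        v1 = sigmoid(h_sample @ W.T)                   # reconstruction
        h1 = sigmoid(v1 @ W)                           # negative phase
        return lr * (v0.T @ h0 - v1.T @ h1) / len(v0)

    v0 = (rng.random((batch, n_visible)) < 0.5).astype(float)  # toy binary batch
    W += cd1_step(v0)
    ```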

  16. A combined registration and finite element analysis method for fast estimation of intraoperative brain shift; phantom and animal model study.

    PubMed

    Mohammadi, Amrollah; Ahmadian, Alireza; Rabbani, Shahram; Fattahi, Ehsan; Shirani, Shapour

    2017-12-01

    Finite element models for estimation of intraoperative brain shift suffer from high computational cost. In these models, image registration and finite element analysis are the two time-consuming processes. The proposed method is an improved version of our previously developed Finite Element Drift (FED) registration algorithm. In this work the registration process is combined with the finite element analysis. In the Combined FED (CFED), the deformation of the whole brain mesh is iteratively calculated by geometrical extension of a local load vector which is computed by FED. While the processing time of the FED-based method, including registration and finite element analysis, was about 70 s, the computation time of the CFED was about 3.2 s. The computational cost of CFED is almost 50% less than that of similar state-of-the-art brain shift estimators based on finite element models. The proposed combination of registration and structural analysis can make the calculation of brain deformation much faster. Copyright © 2016 John Wiley & Sons, Ltd.

  17. Any Ontological Model of the Single Qubit Stabilizer Formalism must be Contextual

    NASA Astrophysics Data System (ADS)

    Lillystone, Piers; Wallman, Joel J.

    Quantum computers allow us to easily solve some problems classical computers find hard. Non-classical improvements in computational power should be due to some non-classical property of quantum theory. Contextuality, a more general notion of non-locality, is a necessary, but not sufficient, resource for quantum speed-up. Proofs of contextuality can be constructed for the classically simulable stabilizer formalism. Previous proofs of stabilizer contextuality are known for 2 or more qubits, for example the Mermin-Peres magic square. In the work presented we extend these results and prove that any ontological model of the single qubit stabilizer theory must be contextual, as defined by R. Spekkens, and give a relation between our result and the Mermin-Peres square. By demonstrating that contextuality is present in the qubit stabilizer formalism we provide further insight into the contextuality present in quantum theory. Understanding the contextuality of classical sub-theories will allow us to better identify the physical properties of quantum theory required for computational speed up. This research was supported by CIFAR, the Government of Ontario, and the Government of Canada through NSERC and Industry Canada.

  18. Spectrally resolving and scattering-compensated x-ray luminescence/fluorescence computed tomography

    PubMed Central

    Cong, Wenxiang; Shen, Haiou; Wang, Ge

    2011-01-01

    Nanophosphors, and other similar materials, emit near-infrared (NIR) light upon x-ray excitation. They were designed as optical probes for in vivo visualization and analysis of molecular and cellular targets, pathways, and responses. Building on previous work on x-ray fluorescence computed tomography (XFCT) and x-ray luminescence computed tomography (XLCT), here we propose a spectrally resolving and scattering-compensated x-ray luminescence/fluorescence computed tomography (SXLCT or SXFCT) approach to quantify the spatial distribution of nanophosphors (or similar materials or chemical elements) within a biological object. In this paper, x-ray scattering is taken into account in the reconstruction algorithm, and NIR scattering is described by the diffusion approximation model. X-ray excitations are applied with different spectra, and NIR signals are measured in a spectrally resolving fashion. Finally, a linear relationship is established between the nanophosphor distribution and the measured NIR data using the finite element method, and this relationship is inverted using the compressive sensing technique. The numerical simulation results demonstrate the feasibility and merits of the proposed approach. PMID:21721815

  19. Predicting neutron damage using TEM with in situ ion irradiation and computer modeling

    NASA Astrophysics Data System (ADS)

    Kirk, Marquis A.; Li, Meimei; Xu, Donghua; Wirth, Brian D.

    2018-01-01

    We have constructed a computer model of irradiation defect production closely coordinated with TEM and in situ ion irradiation of Molybdenum at 80 °C over a range of dose, dose rate and foil thickness. We have reexamined our previous ion irradiation data to assign appropriate error and uncertainty based on more recent work. The spatially dependent cascade cluster dynamics model is updated with recent Molecular Dynamics results for cascades in Mo. After a careful assignment of both ion and neutron irradiation dose values in dpa, TEM data are compared for both ion and neutron irradiated Mo from the same source material. Using the computer model of defect formation and evolution based on the in situ ion irradiation of thin foils, the defect microstructure, consisting of densities and sizes of dislocation loops, is predicted for neutron irradiation of bulk material at 80 °C and compared with experiment. Reasonable agreement between model prediction and experimental data demonstrates a promising direction in understanding and predicting neutron damage using a closely coordinated program of in situ ion irradiation experiment and computer simulation.

  20. Discriminatively learning for representing local image features with quadruplet model

    NASA Astrophysics Data System (ADS)

    Zhang, Da-long; Zhao, Lei; Xu, Duan-qing; Lu, Dong-ming

    2017-11-01

    Traditional hand-crafted features for representing local image patches are giving way to data-driven, learning-based image features, but learning a robust and discriminative descriptor capable of handling various patch-level computer vision tasks is still an open problem. In this work, we propose a novel deep convolutional neural network (CNN) to learn local feature descriptors. We utilize quadruplets with positive and negative training samples, together with a constraint that restricts the intra-class variance, to learn discriminative CNN representations. Compared with previous works, our model reduces the overlap in feature space between corresponding and non-corresponding patch pairs, and mitigates the margin-variation problem caused by the commonly used triplet loss. We demonstrate that our method achieves better embedding results than some recent works, such as PN-Net and TN-TG, on a benchmark dataset.

  1. Towards the Verification of Human-Robot Teams

    NASA Technical Reports Server (NTRS)

    Fisher, Michael; Pearce, Edward; Wooldridge, Mike; Sierhuis, Maarten; Visser, Willem; Bordini, Rafael H.

    2005-01-01

    Human-agent collaboration is increasingly important. Not only do high-profile activities such as NASA missions to Mars intend to employ such teams, but our everyday activities involving interaction with computational devices also fall into this category. In many of these scenarios, we are expected to trust that the agents will do what we expect and that the agents and humans will work together as expected. But how can we be sure? In this paper, we bring together previous work on the verification of multi-agent systems with work on the modelling of human-agent teamwork. Specifically, we target human-robot teamwork. This paper provides an outline of the way we are using formal verification techniques in order to analyse such collaborative activities. A particular application is the analysis of human-robot teams intended for use in future space exploration.

  2. A parallelizable real-time motion tracking algorithm with applications to ultrasonic strain imaging.

    PubMed

    Jiang, J; Hall, T J

    2007-07-07

    Ultrasound-based mechanical strain imaging systems utilize signals from conventional diagnostic ultrasound systems to image tissue elasticity contrast that provides new diagnostically valuable information. Previous works (Hall et al 2003 Ultrasound Med. Biol. 29 427, Zhu and Hall 2002 Ultrason. Imaging 24 161) demonstrated that uniaxial deformation with minimal elevation motion is preferred for breast strain imaging and that real-time strain image feedback to operators is important to accomplish this goal. The work reported here enhances the real-time speckle tracking algorithm with two significant modifications. One fundamental change is that the proposed algorithm is column-based (a column is defined by a line of data parallel to the ultrasound beam direction, i.e. an A-line), as opposed to row-based (a row is defined by a line of data perpendicular to the ultrasound beam direction). Displacement estimates from adjacent columns then provide good guidance for motion tracking in a significantly reduced search region, reducing computational cost. Consequently, the process of displacement estimation can be naturally split into at least two separate tasks, computed in parallel, propagating outward from the center of the region of interest (ROI). The proposed algorithm has been implemented and optimized in a Windows system as a stand-alone ANSI C++ program. Results of preliminary tests, using numerical and tissue-mimicking phantoms, and in vivo tissue data, suggest that high contrast strain images can be consistently obtained with frame rates (10 frames/s) that exceed those of our previous methods.
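    The column-based guidance idea can be sketched compactly: displacements estimated for one A-line seed a narrow search window in the neighboring A-line. The window sizes and similarity metric in the Python sketch below are illustrative assumptions, not the authors' exact implementation.

    ```python
    import numpy as np

    def track_column(pre, post, col, guess, kernel=8, search=2):
        """Per-row axial displacements for one A-line (column).

        pre, post: 2D arrays (rows = axial samples, cols = A-lines);
        guess: per-row displacement guesses from the adjacent column,
        which shrink the search to a +/- `search` sample window.
        """
        rows = pre.shape[0] - kernel
        disp = np.zeros(rows, dtype=int)
        for r in range(rows):
            ref = pre[r:r + kernel, col]
            best, best_score = guess[r], -np.inf
            for d in range(guess[r] - search, guess[r] + search + 1):
                if 0 <= r + d <= post.shape[0] - kernel:
                    cand = post[r + d:r + d + kernel, col]
                    score = float(ref @ cand)  # correlation-like similarity
                    if score > best_score:
                        best_score, best = score, d
            disp[r] = best
        return disp
    ```

    Because columns to the left and right of a seed column depend only on their inner neighbors, the two halves of the ROI can be processed in parallel, propagating outward from the center as the abstract describes.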

  3. Social Noise: Generating Random Numbers from Twitter Streams

    NASA Astrophysics Data System (ADS)

    Fernández, Norberto; Quintas, Fernando; Sánchez, Luis; Arias, Jesús

    2015-12-01

    Due to the multiple applications of random numbers in computer systems (cryptography, online gambling, computer simulation, etc.), it is important to have mechanisms to generate these numbers. True Random Number Generators (TRNGs) are commonly used for this purpose. TRNGs rely on non-deterministic sources to generate randomness. Physical processes (such as noise in semiconductors and quantum phenomena) play this role in state-of-the-art TRNGs. In this paper, we depart from previous work and explore the possibility of defining social TRNGs using the stream of public messages of the microblogging service Twitter as the randomness source. Thus, we define two TRNGs based on Twitter stream information and evaluate them using the National Institute of Standards and Technology (NIST) statistical test suite. The results of the evaluation confirm the feasibility of the proposed approach.
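    One simple way to realize such a social randomness source (a sketch under assumptions, not necessarily the authors' exact extractor) is to hash each incoming message and harvest the digest bits; the non-determinism comes from message content and arrival order, and the resulting bitstream is what a suite like NIST's would then evaluate.

    ```python
    import hashlib

    def bits_from_messages(messages, n_bits):
        """Distill up to n_bits from an iterable of message strings by
        hashing each message and emitting the digest bits."""
        out = []
        for msg in messages:
            digest = hashlib.sha256(msg.encode("utf-8")).digest()
            for byte in digest:
                for k in range(8):
                    out.append((byte >> k) & 1)
                    if len(out) == n_bits:
                        return out
        return out

    # Hypothetical stand-in for a live Twitter stream.
    stream = ["first message...", "second message...", "third message..."]
    print(bits_from_messages(stream, 32))
    ```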

  4. High-Fidelity 3D-Nanoprinting via Focused Electron Beams: Computer-Aided Design (3BID)

    DOE PAGES

    Fowlkes, Jason D.; Winkler, Robert; Lewis, Brett B.; ...

    2018-02-14

    Currently, there are few techniques that allow true 3D-printing on the nanoscale. The most promising candidate to fill this void is focused electron-beam-induced deposition (FEBID), a resist-free, nanofabrication-compatible, direct-write method. The basic working principles of a computer-aided design (CAD) program (3BID) enabling 3D-FEBID are presented, and the program is simultaneously released for download. The 3BID capability significantly expands the currently limited toolbox for 3D-nanoprinting, providing access to geometries for optoelectronic, plasmonic, and nanomagnetic applications that were previously unattainable due to the lack of a suitable method for synthesis. In conclusion, the CAD approach supplants trial and error toward the more precise/accurate FEBID required for real applications/device prototyping.

  5. High-Fidelity 3D-Nanoprinting via Focused Electron Beams: Computer-Aided Design (3BID)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fowlkes, Jason D.; Winkler, Robert; Lewis, Brett B.

    Currently, there are few techniques that allow true 3D-printing on the nanoscale. The most promising candidate to fill this void is focused electron-beam-induced deposition (FEBID), a resist-free, nanofabrication-compatible, direct-write method. The basic working principles of a computer-aided design (CAD) program (3BID) enabling 3D-FEBID are presented, and the program is simultaneously released for download. The 3BID capability significantly expands the currently limited toolbox for 3D-nanoprinting, providing access to geometries for optoelectronic, plasmonic, and nanomagnetic applications that were previously unattainable due to the lack of a suitable method for synthesis. In conclusion, the CAD approach supplants trial and error toward the more precise/accurate FEBID required for real applications/device prototyping.

  6. Circuit design advances for ultra-low power sensing platforms

    NASA Astrophysics Data System (ADS)

    Wieckowski, Michael; Dreslinski, Ronald G.; Mudge, Trevor; Blaauw, David; Sylvester, Dennis

    2010-04-01

    This paper explores the recent advances in circuit structures and design methodologies that have enabled ultra-low power sensing platforms and opened up a host of new applications. Central to this theme is the development of Near Threshold Computing (NTC) as a viable design space for low power sensing platforms. In this paradigm, the system's supply voltage is approximately equal to the threshold voltage of its transistors. Operating in this "near-threshold" region provides much of the energy savings previously demonstrated for subthreshold operation while offering more favorable performance and variability characteristics. This makes NTC applicable to a broad range of power-constrained computing segments, including energy-constrained sensing platforms. This paper explores the barriers to the adoption of NTC and describes current work aimed at overcoming these obstacles in the circuit design space.

  7. Improving Distributed Diagnosis Through Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew John; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2011-01-01

    Complex engineering systems require efficient fault diagnosis methodologies, but centralized approaches do not scale well, and this motivates the development of distributed solutions. This work presents an event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, by using the structural model decomposition capabilities provided by Possible Conflicts. We develop a distributed diagnosis algorithm that uses residuals computed by extending Possible Conflicts to build local event-based diagnosers based on global diagnosability analysis. The proposed approach is applied to a multitank system, and results demonstrate an improvement in the design of local diagnosers. Since local diagnosers use only a subset of the residuals, and use subsystem models to compute residuals (instead of the global system model), the local diagnosers are more efficient than previously developed distributed approaches.

  8. Moving target, distributed, real-time simulation using Ada

    NASA Technical Reports Server (NTRS)

    Collins, W. R.; Feyock, S.; King, L. A.; Morell, L. J.

    1985-01-01

    Research on a precompiler solution is described for the moving target compiler problem encountered when trying to run parallel simulation algorithms on several microcomputers. The precompiler is under development at NASA-Lewis for simulating jet engines. Since the behavior of any component of a jet engine, e.g., the fan inlet, rear duct, forward sensor, etc., depends on the previous behaviors and not the current behaviors of other components, the behaviors can be modeled on different processors provided the outputs of the processors reach other processors in appropriate time intervals. The simulator works in compute and transfer modes. The Ada procedure sets for the behaviors of different components are divided up and routed by the precompiler, which essentially receives a multitasking program. The subroutines are synchronized after each computation cycle.

  9. Integration of Geographical Information Systems and Geophysical Applications with Distributed Computing Technologies.

    NASA Astrophysics Data System (ADS)

    Pierce, M. E.; Aktas, M. S.; Aydin, G.; Fox, G. C.; Gadgil, H.; Sayar, A.

    2005-12-01

    We examine the application of Web Service Architectures and Grid-based distributed computing technologies to geophysics and geo-informatics. We are particularly interested in the integration of Geographical Information System (GIS) services with distributed data mining applications. GIS services provide the general purpose framework for building archival data services, real time streaming data services, and map-based visualization services that may be integrated with data mining and other applications through the use of distributed messaging systems and Web Service orchestration tools. Building upon our previous work in these areas, we present our current research efforts. These include fundamental investigations into increasing XML-based Web service performance, supporting real time data streams, and integrating GIS mapping tools with audio/video collaboration systems for shared display and annotation.

  10. Ionization ratios and elemental abundances in the atmosphere of 68 Tauri

    NASA Astrophysics Data System (ADS)

    Aouina, A.; Monier, R.

    2017-12-01

    We have derived the ionization ratios of twelve elements in the atmosphere of the star 68 Tauri (HD 27962) using an ATLAS9 model atmosphere with 72 layers computed for the effective temperature and surface gravity of the star. We then computed a grid of synthetic spectra generated by SYNSPEC49, based on an ATLAS9 model atmosphere, in order to model one high-resolution spectrum secured by one of us (RM) with the échelle spectrograph SOPHIE at Observatoire de Haute Provence. We could determine the abundances of several elements in their dominant ionization stage, including those defining the Am phenomenon. We thus provide new abundance determinations for 68 Tauri using updated, accurate atomic data retrieved from the NIST database, which extend previous abundance work.

  11. The Effective Conductivity of Random Suspensions of Spherical Particles

    NASA Astrophysics Data System (ADS)

    Bonnecaze, R. T.; Brady, J. F.

    1991-03-01

    The effective conductivity of an infinite, random, mono-disperse, hard-sphere suspension is reported for particle to matrix conductivity ratios of ∞, 10, and 0.01 for sphere volume fractions, c, up to 0.6. The conductivities are computed with a method previously described by the authors, which includes both far- and near-field interactions, and the particle configurations are generated via a Monte Carlo method. The results are consistent with the previous theoretical work of D. J. Jeffrey to O(c2) and the bounds computed by S. Torquato and F. Lado. It is also found that the Clausius-Mossotti equation is reasonably accurate for conductivity ratios of 10 or less all the way up to 60% (by volume). The calculated conductivities compare very well with those of experiments. In addition, percolation-like numerical experiments are performed on periodically replicated cubic lattices of N nearly touching spheres with an infinite particle to matrix conductivity ratio, where the conductivity is computed as spheres are removed one by one from the lattice. Under suitable normalization of the conductivity and volume fraction, it is found that the initial volume fraction must be extremely close to maximum packing in order to observe a percolation transition, indicating that the near-field effects must be very large relative to far-field effects. These percolation transitions occur at the accepted values for simple (SC), body-centred (BCC) and face-centred (FCC) cubic lattices. Also, the vulnerabilities of the lattices computed here are exactly those found by previous investigators. Due to limited data above the percolation threshold, we could not correlate the conductivity with a power law near the threshold; however, it can be correlated with a power law for large normalized volume fractions. In this case the exponents are found to be 1.70, 1.75 and 1.79 for SC, BCC and FCC lattices, respectively.
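    The Clausius-Mossotti (Maxwell) estimate referred to above has the standard form

    ```latex
    % Effective conductivity of a dilute suspension of spheres:
    % sigma_p, sigma_m = particle and matrix conductivities,
    % c = sphere volume fraction.
    \frac{\sigma_{\mathrm{eff}}}{\sigma_{m}} \;=\; \frac{1 + 2c\beta}{1 - c\beta},
    \qquad
    \beta \;=\; \frac{\sigma_{p} - \sigma_{m}}{\sigma_{p} + 2\sigma_{m}}
    ```

    which is exact to first order in c and, as the abstract notes, remains surprisingly accurate at moderate conductivity ratios even for dense suspensions.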

  12. Wrist Hypothermia Related to Continuous Work with a Computer Mouse: A Digital Infrared Imaging Pilot Study

    PubMed Central

    Reste, Jelena; Zvagule, Tija; Kurjane, Natalja; Martinsone, Zanna; Martinsone, Inese; Seile, Anita; Vanadzins, Ivars

    2015-01-01

    Computer work is characterized by sedentary static workload with low-intensity energy metabolism. The aim of our study was to evaluate the dynamics of skin surface temperature in the hand during prolonged computer mouse work under different ergonomic setups. Digital infrared imaging of the right forearm and wrist was performed during three hours of continuous computer work (measured at the start and every 15 minutes thereafter) in a laboratory with controlled ambient conditions. Four people participated in the study. Three different ergonomic computer mouse setups were tested on three different days (horizontal computer mouse without mouse pad; horizontal computer mouse with mouse pad and padded wrist support; vertical computer mouse without mouse pad). The study revealed a strong, statistically significant negative correlation between the temperature of the dorsal surface of the wrist and time spent working with a computer mouse. Hand skin temperature decreased markedly after one hour of continuous computer mouse work. Vertical computer mouse work preserved more stable and higher temperatures of the wrist (>30 °C), while continuous use of a horizontal mouse for more than two hours caused an extremely low temperature (<28 °C) in distal parts of the hand. The preliminary observational findings indicate a significant effect of the duration and ergonomics of computer mouse work on the development of hand hypothermia. PMID:26262633

  13. Wrist Hypothermia Related to Continuous Work with a Computer Mouse: A Digital Infrared Imaging Pilot Study.

    PubMed

    Reste, Jelena; Zvagule, Tija; Kurjane, Natalja; Martinsone, Zanna; Martinsone, Inese; Seile, Anita; Vanadzins, Ivars

    2015-08-07

    Computer work is characterized by sedentary static workload with low-intensity energy metabolism. The aim of our study was to evaluate the dynamics of skin surface temperature in the hand during prolonged computer mouse work under different ergonomic setups. Digital infrared imaging of the right forearm and wrist was performed during three hours of continuous computer work (measured at the start and every 15 minutes thereafter) in a laboratory with controlled ambient conditions. Four people participated in the study. Three different ergonomic computer mouse setups were tested on three different days (horizontal computer mouse without mouse pad; horizontal computer mouse with mouse pad and padded wrist support; vertical computer mouse without mouse pad). The study revealed a strong, statistically significant negative correlation between the temperature of the dorsal surface of the wrist and time spent working with a computer mouse. Hand skin temperature decreased markedly after one hour of continuous computer mouse work. Vertical computer mouse work preserved more stable and higher temperatures of the wrist (>30 °C), while continuous use of a horizontal mouse for more than two hours caused an extremely low temperature (<28 °C) in distal parts of the hand. The preliminary observational findings indicate a significant effect of the duration and ergonomics of computer mouse work on the development of hand hypothermia.

  14. Logic integration of mRNA signals by an RNAi-based molecular computer

    PubMed Central

    Xie, Zhen; Liu, Siyuan John; Bleris, Leonidas; Benenson, Yaakov

    2010-01-01

    Synthetic in vivo molecular ‘computers’ could rewire biological processes by establishing programmable, non-native pathways between molecular signals and biological responses. Multiple molecular computer prototypes have been shown to work in simple buffered solutions. Many of those prototypes were made of DNA strands and performed computations using cycles of annealing-digestion or strand displacement. We have previously introduced RNA interference (RNAi)-based computing as a way of implementing complex molecular logic in vivo. Because it also relies on nucleic acids for its operation, RNAi computing could benefit from the tools developed for DNA systems. However, these tools must be harnessed to produce bioactive components and be adapted for harsh operating environments that reflect in vivo conditions. In a step toward this goal, we report the construction and implementation of biosensors that ‘transduce’ mRNA levels into bioactive, small interfering RNA molecules via RNA strand exchange in a cell-free Drosophila embryo lysate, a step beyond simple buffered environments. We further integrate the sensors with our RNAi ‘computational’ module to evaluate two-input logic functions on mRNA concentrations. Our results show how RNA strand exchange can expand the utility of RNAi computing and point toward the possibility of using strand exchange in a native biological setting. PMID:20194121

  15. Thermodynamics of natural selection III: Landauer's principle in computation and chemistry.

    PubMed

    Smith, Eric

    2008-05-21

    This is the third in a series of three papers devoted to energy flow and entropy changes in chemical and biological processes, and their relations to the thermodynamics of computation. The previous two papers have developed reversible chemical transformations as idealizations for studying physiology and natural selection, and derived bounds from the second law of thermodynamics, between information gain in an ensemble and the chemical work required to produce it. This paper concerns the explicit mapping of chemistry to computation, and particularly the Landauer decomposition of irreversible computations, in which reversible logical operations generating no heat are separated from heat-generating erasure steps which are logically irreversible but thermodynamically reversible. The Landauer arrangement of computation is shown to produce the same entropy-flow diagram as that of the chemical Carnot cycles used in the second paper of the series to idealize physiological cycles. The specific application of computation to data compression and error-correcting encoding also makes possible a Landauer analysis of the somewhat different problem of optimal molecular recognition, which has been considered as an information theory problem. It is shown here that bounds on maximum sequence discrimination from the enthalpy of complex formation, although derived from the same logical model as the Shannon theorem for channel capacity, arise from exactly the opposite model for erasure.
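    The Landauer bound invoked throughout this analysis is the statement that each logically irreversible bit erasure must dissipate at least

    ```latex
    % Minimum heat dissipated per erased bit to a reservoir at temperature T
    Q_{\mathrm{erase}} \;\ge\; k_{B}\, T \ln 2
    ```

    of heat to a reservoir at temperature T, while logically reversible operations carry no such lower bound; the Landauer decomposition isolates all unavoidable dissipation in the erasure steps.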

  16. A five-layer users' need hierarchy of computer input device selection: a contextual observation survey of computer users with cervical spinal injuries (CSI).

    PubMed

    Tsai, Tsai-Hsuan; Nash, Robert J; Tseng, Kevin C

    2009-05-01

    This article describes how the research question 'How does assistive technology impact computer use among individuals with cervical spinal cord injury?' was answered through an in-depth investigation into the real-life situations of computer operators with cervical spinal cord injuries (CSI). An in-depth survey was carried out to provide insight into the functional abilities and limitations, habitual practices and preferences, choices and utilisation of input devices, personal and/or technical assistance, environmental set-up and arrangements, and special requirements of 20 experienced computer users with cervical spinal cord injuries. Following the survey findings, a five-layer CSI users' needs hierarchy of input device selection and use was proposed. These needs are ranked in order, beginning with the most basic criterion at the bottom of the pyramid; lower-level criteria must be met before one moves on to a higher level. This users' needs hierarchy for CSI computer users had not been applied in previous research work and establishes a rationale for the development of alternative input devices. If an input device meets the criteria set out in the needs hierarchy, then a good match of person and technology will be achieved.

  17. Key issues of ultraviolet radiation of OH at high altitudes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yuhuai; Wan, Tian; Jiang, Jianzheng

    2014-12-09

    Ultraviolet (UV) emission radiated by hydroxyl (OH) is one of the fundamental elements in the prediction of the radiation signature of high-altitude, high-speed vehicles. In this work, the OH A²Σ⁺→X²Π ultraviolet emission band behind the bow shock is computed under the experimental conditions of the second bow-shock ultraviolet flight (BSUV-2). Four related key issues are discussed, namely, the source of hydrogen in the high-altitude atmosphere, the formation mechanism of OH, efficient computational algorithms for trace species in rarefied flows, and accurate calculation of OH emission spectra. First, by analyzing the typical atmospheric model, the vertical distributions of the number densities of the different hydrogen-bearing species are given. According to the dominant hydrogen-bearing species, the atmosphere is divided into three zones, and the formation mechanism of OH is analyzed in each zone. The direct simulation Monte Carlo (DSMC) method and the Navier-Stokes equations are employed to compute the number densities of the different OH electronically and vibrationally excited states. In contrast to previous work, the trace species separation (TSS) algorithm is applied twice in order to accurately calculate the densities of OH and its excited states. Using a non-equilibrium radiation model, the OH ultraviolet emission spectra and intensity at different altitudes are computed, and good agreement is obtained with the flight-measured data.

  18. The joint effect of mesoscale and microscale roughness on perceived gloss.

    PubMed

    Qi, Lin; Chantler, Mike J; Siebert, J Paul; Dong, Junyu

    2015-10-01

    Computer-simulated stimuli can provide a flexible method for creating artificial scenes in the study of visual perception of material surface properties. Previous work based on this approach reported that the properties of surface roughness and glossiness are mutually interdependent, so that perception of one affects perception of the other. In that work, roughness was limited to a surface property termed bumpiness. This paper reports a study into how perceived gloss varies with two model parameters related to surface roughness in computer simulations: the mesoscale roughness parameter in a surface geometry model and the microscale roughness parameter in a surface reflectance model. We used a real-world environment map to provide complex illumination and a physically-based path tracer for rendering the stimuli. Eight observers took part in a 2AFC experiment, and the results were tested against conjoint measurement models. We found that although both of the above roughness parameters significantly affect perceived gloss, an additive model does not adequately describe their mutually interactive and nonlinear influence, which is at variance with previous findings. We investigated five image properties used to quantify specular highlights, and found that perceived gloss is well predicted using a linear model. Our findings provide computational support to the 'statistical appearance models' proposed recently for material perception. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Simulation of Mach Probes in Non-Uniform Magnetized Plasmas: the Influence of a Background Density Gradient

    NASA Astrophysics Data System (ADS)

    Haakonsen, Christian Bernt; Hutchinson, Ian H.

    2013-10-01

    Mach probes can be used to measure transverse flow in magnetized plasmas, but what they actually measure in strongly non-uniform plasmas has not been definitively established. A fluid treatment in previous work has suggested that the diamagnetic drifts associated with background density and temperature gradients affect transverse flow measurements, but detailed computational study is required to validate and elaborate on those results; it is really a kinetic problem, since the probe deforms and introduces voids in the ion and electron distribution functions. A new code, the Plasma-Object Simulator with Iterated Trajectories (POSIT), has been developed to self-consistently compute the steady-state six-dimensional ion and electron distribution functions in the perturbed plasma. Particle trajectories are integrated backwards in time to the domain boundary, where arbitrary background distribution functions can be specified. This allows POSIT to compute the ion and electron density at each node of its unstructured mesh, update the potential based on those densities, and then iterate until convergence. POSIT is used to study the impact of a background density gradient on transverse Mach probe measurements, and the results are compared to the previous fluid theory. C.B. Haakonsen was supported in part by NSF/DOE Grant No. DE-FG02-06ER54512, and in part by an SCGF award administered by ORISE under DOE Contract No. DE-AC05-06OR23100.

  20. Working Memory Contributions to Reinforcement Learning Impairments in Schizophrenia

    PubMed Central

    Brown, Jaime K.; Gold, James M.; Waltz, James A.; Frank, Michael J.

    2014-01-01

    Previous research has shown that patients with schizophrenia are impaired in reinforcement learning tasks. However, behavioral learning curves in such tasks originate from the interaction of multiple neural processes, including the basal ganglia- and dopamine-dependent reinforcement learning (RL) system, but also prefrontal cortex-dependent cognitive strategies involving working memory (WM). Thus, it is unclear which specific system induces impairments in schizophrenia. We recently developed a task and computational model allowing us to separately assess the roles of RL (slow, cumulative learning) mechanisms versus WM (fast but capacity-limited) mechanisms in healthy adult human subjects. Here, we used this task to assess patients' specific sources of impairments in learning. In 15 separate blocks, subjects learned to pick one of three actions for stimuli. The number of stimuli to learn in each block varied from two to six, allowing us to separate influences of capacity-limited WM from the incremental RL system. As expected, both patients (n = 49) and healthy controls (n = 36) showed effects of set size and delay between stimulus repetitions, confirming the presence of working memory effects. Patients performed significantly worse than controls overall, but computational model fits and behavioral analyses indicate that these deficits could be entirely accounted for by changes in WM parameters (capacity and reliability), whereas RL processes were spared. These results suggest that the working memory system contributes strongly to learning impairments in schizophrenia. PMID:25297101
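    A minimal sketch of the two-system account described above (in the spirit of the task and model, with all parameter values as illustrative assumptions, not the authors' fitted estimates):

    ```python
    import math
    import random

    class RLWM:
        """Mixture of slow incremental RL (Q-learning) and a fast,
        capacity-limited one-shot working memory store."""

        def __init__(self, n_actions, set_size, capacity=3.0,
                     alpha=0.1, rho=0.9):
            self.q, self.wm = {}, {}
            self.alpha, self.n_actions = alpha, n_actions
            # WM influence shrinks when set size exceeds capacity.
            self.w = rho * min(1.0, capacity / set_size)

        def act(self, stimulus, rng):
            q = self.q.setdefault(stimulus, [0.0] * self.n_actions)
            wm = self.wm.get(stimulus)
            mixed = [(1 - self.w) * qi + self.w * (wm[i] if wm else qi)
                     for i, qi in enumerate(q)]
            z = [math.exp(5.0 * m) for m in mixed]       # softmax choice
            r, acc = rng.random() * sum(z), 0.0
            for a, za in enumerate(z):
                acc += za
                if r <= acc:
                    return a
            return self.n_actions - 1

        def update(self, stimulus, action, reward):
            q = self.q[stimulus]
            q[action] += self.alpha * (reward - q[action])   # slow RL
            wm = self.wm.setdefault(stimulus, [0.0] * self.n_actions)
            wm[action] = reward                              # one-shot WM

    agent = RLWM(n_actions=3, set_size=6)
    rng = random.Random(0)
    a = agent.act("stim1", rng)
    agent.update("stim1", a, reward=1.0)
    ```

    In such a model, larger set sizes dilute the WM contribution, so learning curves flatten even when the RL parameters are unchanged, mirroring the dissociation the study exploits.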

  1. Computer ergonomics: the medical practice guide to developing good computer habits.

    PubMed

    Hills, Laura

    2011-01-01

    Medical practice employees are likely to use computers for at least some of their work. Some sit several hours each day at computer workstations. Therefore, it is important that members of your medical practice team develop good computer work habits and that they know how to align equipment, furniture, and their bodies to prevent strain, stress, and computer-related injuries. This article delves into the field of computer ergonomics: the design of computer workstations and work habits to reduce user fatigue, discomfort, and injury. It describes practical strategies medical practice employees can use to improve their computer work habits. Specifically, this article describes the proper use of the computer workstation chair, the ideal placement of the computer monitor and keyboard, and the best lighting for computer work areas and tasks. Moreover, this article includes computer ergonomic guidelines especially for bifocal and progressive lens wearers and offers 10 tips for proper mousing. Ergonomically correct posture, movements, positioning, and equipment are all described in detail to enable the frequent computer user in your medical practice to remain healthy, pain-free, and productive.

  2. Probability distributions of molecular observables computed from Markov models. II. Uncertainties in observables and their time-evolution

    NASA Astrophysics Data System (ADS)

    Chodera, John D.; Noé, Frank

    2010-09-01

    Discrete-state Markov (or master equation) models provide a useful simplified representation for characterizing the long-time statistical evolution of biomolecules in a manner that allows direct comparison with experiments as well as the elucidation of mechanistic pathways for an inherently stochastic process. A vital part of meaningful comparison with experiment is the characterization of the statistical uncertainty in the predicted experimental measurement, which may take the form of an equilibrium measurement of some spectroscopic signal, the time-evolution of this signal following a perturbation, or the observation of some statistic (such as the correlation function) of the equilibrium dynamics of a single molecule. Without meaningful error bars (which arise from both approximation and statistical error), there is no way to determine whether the deviations between model and experiment are statistically meaningful. Previous work has demonstrated that a Bayesian method that enforces microscopic reversibility can be used to characterize the statistical component of correlated uncertainties in state-to-state transition probabilities (and functions thereof) for a model inferred from molecular simulation data. Here, we extend this approach to include the uncertainty in observables that are functions of molecular conformation (such as surrogate spectroscopic signals) characterizing each state, permitting the full statistical uncertainty in computed spectroscopic experiments to be assessed. We test the approach in a simple model system to demonstrate that the computed uncertainties provide a useful indicator of statistical variation, and then apply it to the computation of the fluorescence autocorrelation function measured for a dye-labeled peptide previously studied by both experiment and simulation.
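    The propagation step can be sketched as follows: given posterior samples of the transition matrix and of the per-state observable values, push each sample through the predicted experimental quantity (here an equilibrium autocorrelation) and report the spread. The Bayesian sampling of reversible transition matrices is elided; the perturbed matrices below are illustrative stand-ins, not the paper's scheme.

    ```python
    import numpy as np

    def stationary(T):
        """Stationary distribution of a row-stochastic matrix T."""
        w, v = np.linalg.eig(T.T)
        pi = np.real(v[:, np.argmax(np.real(w))])
        return pi / pi.sum()

    def autocorr(T, a, n):
        """Equilibrium autocorrelation <a(0) a(n*tau)> for observable a."""
        pi = stationary(T)
        return float((pi * a) @ np.linalg.matrix_power(T, n) @ a)

    rng = np.random.default_rng(0)
    samples = []
    for _ in range(200):  # stand-ins for posterior samples of (T, a)
        T = np.array([[0.9, 0.1], [0.2, 0.8]]) + 0.02 * rng.standard_normal((2, 2))
        T = np.abs(T); T /= T.sum(axis=1, keepdims=True)
        a = np.array([1.0, -0.5]) + 0.05 * rng.standard_normal(2)
        samples.append(autocorr(T, a, n=10))

    print(np.mean(samples), np.std(samples))  # prediction and its error bar
    ```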

  3. Optimally setting up directed searches for continuous gravitational waves in Advanced LIGO O1 data

    NASA Astrophysics Data System (ADS)

    Ming, Jing; Papa, Maria Alessandra; Krishnan, Badri; Prix, Reinhard; Beer, Christian; Zhu, Sylvia J.; Eggenstein, Heinz-Bernd; Bock, Oliver; Machenschalk, Bernd

    2018-02-01

    In this paper we design a search for continuous gravitational waves from three supernova remnants: Vela Jr., Cassiopeia A (Cas A) and G347.3. These systems might harbor rapidly rotating neutron stars emitting quasiperiodic gravitational radiation detectable by the advanced LIGO detectors. Our search is designed to use the volunteer computing project Einstein@Home for a few months and assumes the sensitivity and duty cycles of the advanced LIGO detectors during their first science run. For all three supernova remnants, the sky positions of their central compact objects are well known but the frequency and spin-down rates of the neutron stars are unknown which makes the searches computationally limited. In a previous paper we have proposed a general framework for deciding on what target we should spend computational resources and in what proportion, what frequency and spin-down ranges we should search for every target, and with what search setup. Here we further expand this framework and apply it to design a search directed at detecting continuous gravitational wave signals from the most promising three supernova remnants identified as such in the previous work. Our optimization procedure yields broad frequency and spin-down searches for all three objects, at an unprecedented level of sensitivity: The smallest detectable gravitational wave strain h0 for Cas A is expected to be 2 times smaller than the most sensitive upper limits published to date, and our proposed search, which was set up and ran on the volunteer computing project Einstein@Home, covers a much larger frequency range.

  4. A novel semi-transductive learning framework for efficient atypicality detection in chest radiographs

    NASA Astrophysics Data System (ADS)

    Alzubaidi, Mohammad; Balasubramanian, Vineeth; Patel, Ameet; Panchanathan, Sethuraman; Black, John A., Jr.

    2012-03-01

    Inductive learning refers to machine learning algorithms that learn a model from a set of training data instances. Any test instance is then classified by comparing it to the learned model. When the set of training instances lend themselves well to modeling, the use of a model substantially reduces the computation cost of classification. However, some training data sets are complex, and do not lend themselves well to modeling. Transductive learning refers to machine learning algorithms that classify test instances by comparing them to all of the training instances, without creating an explicit model. This can produce better classification performance, but at a much higher computational cost. Medical images vary greatly across human populations, constituting a data set that does not lend itself well to modeling. Our previous work showed that the wide variations seen across training sets of "normal" chest radiographs make it difficult to successfully classify test radiographs with an inductive (modeling) approach, and that a transductive approach leads to much better performance in detecting atypical regions. The problem with the transductive approach is its high computational cost. This paper develops and demonstrates a novel semi-transductive framework that can address the unique challenges of atypicality detection in chest radiographs. The proposed framework combines the superior performance of transductive methods with the reduced computational cost of inductive methods. Our results show that the proposed semitransductive approach provides both effective and efficient detection of atypical regions within a set of chest radiographs previously labeled by Mayo Clinic expert thoracic radiologists.

  5. A Lagrangian subgrid-scale model with dynamic estimation of Lagrangian time scale for large eddy simulation of complex flows

    NASA Astrophysics Data System (ADS)

    Verma, Aman; Mahesh, Krishnan

    2012-08-01

    The dynamic Lagrangian averaging approach for the dynamic Smagorinsky model for large eddy simulation is extended to an unstructured grid framework and applied to complex flows. The Lagrangian time scale is dynamically computed from the solution and does not need any adjustable parameter. The time scale used in the standard Lagrangian model contains an adjustable parameter θ. The dynamic time scale is computed based on a "surrogate-correlation" of the Germano-identity error (GIE). Also, a simple material derivative relation is used to approximate GIE at different events along a pathline instead of Lagrangian tracking or multi-linear interpolation. Previously, the time scale for homogeneous flows was computed by averaging along directions of homogeneity. The present work proposes modifications for inhomogeneous flows. This development allows the Lagrangian averaged dynamic model to be applied to inhomogeneous flows without any adjustable parameter. The proposed model is applied to LES of turbulent channel flow on unstructured zonal grids at various Reynolds numbers. Improvement is observed when compared to other averaging procedures for the dynamic Smagorinsky model, especially at coarse resolutions. The model is also applied to flow over a cylinder at two Reynolds numbers and good agreement with previous computations and experiments is obtained. Noticeable improvement is obtained using the proposed model over the standard Lagrangian model. The improvement is attributed to a physically consistent Lagrangian time scale. The model also shows good performance when applied to flow past a marine propeller in an off-design condition; it regularizes the eddy viscosity and adjusts locally to the dominant flow features.
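
    The pathline averaging underlying such Lagrangian dynamic models is commonly implemented as an exponential relaxation (following Meneveau, Lund, and Cabot); a minimal sketch follows, in which the time scale T is the quantity the paper computes dynamically from the solution rather than prescribing through an adjustable parameter.

        def lagrangian_average(prev, current, dt, T):
            """Exponential pathline relaxation used in Lagrangian-averaged
            dynamic SGS models: eps = (dt/T)/(1 + dt/T). In the standard
            model T contains an adjustable parameter; the paper's variant
            computes T dynamically from a surrogate correlation of the
            Germano-identity error."""
            eps = (dt / T) / (1.0 + dt / T)
            return eps * current + (1.0 - eps) * prev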

  6. Tensor scale: An analytic approach with efficient computation and applications

    PubMed Central

    Xu, Ziyue; Saha, Punam K.; Dasgupta, Soura

    2015-01-01

    Scale is a widely used notion in computer vision and image understanding that evolved in the form of scale-space theory where the key idea is to represent and analyze an image at various resolutions. Recently, we introduced a notion of local morphometric scale referred to as “tensor scale” using an ellipsoidal model that yields a unified representation of structure size, orientation and anisotropy. In the previous work, tensor scale was described using a 2-D algorithmic approach and a precise analytic definition was missing. Also, the application of tensor scale in 3-D using the previous framework is not practical due to high computational complexity. In this paper, an analytic definition of tensor scale is formulated for n-dimensional (n-D) images that captures local structure size, orientation and anisotropy. In addition, an efficient computational solution in 2- and 3-D using several novel differential geometric approaches is presented and the accuracy of results is experimentally examined. Furthermore, a matrix representation of tensor scale is derived, facilitating several operations including tensor field smoothing to capture larger contextual knowledge. Finally, the applications of tensor scale in image filtering and n-linear interpolation are presented and the performance of their results is examined in comparison with respective state-of-the-art methods. Specifically, the performance of tensor scale based image filtering is compared with gradient and Weickert’s structure tensor based diffusive filtering algorithms. Also, the performance of tensor scale based n-linear interpolation is evaluated in comparison with standard n-linear and windowed-sinc interpolation methods. PMID:26236148
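
    To make the notion of a local orientation/anisotropy tensor concrete, here is a small 2-D structure-tensor sketch. It is related in spirit to, but is not, the paper's tensor scale algorithm: local gradients are accumulated into a smoothed 2 x 2 tensor whose eigen-decomposition yields orientation and anisotropy.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def orientation_anisotropy(img, sigma=2.0):
            """Local orientation and anisotropy from a smoothed 2-D structure
            tensor; illustrates the kind of elliptical local model that tensor
            scale formalizes, but is not the paper's algorithm."""
            gy, gx = np.gradient(img.astype(float))
            jxx = gaussian_filter(gx * gx, sigma)
            jxy = gaussian_filter(gx * gy, sigma)
            jyy = gaussian_filter(gy * gy, sigma)
            tr, det = jxx + jyy, jxx * jyy - jxy ** 2
            disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
            l1, l2 = tr / 2 + disc, tr / 2 - disc            # eigenvalues, l1 >= l2
            anisotropy = (l1 - l2) / (l1 + l2 + 1e-12)       # 0 isotropic, 1 line-like
            orientation = 0.5 * np.arctan2(2 * jxy, jxx - jyy)
            return orientation, anisotropy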

  7. Computer work duration and its dependence on the used pause definition.

    PubMed

    Richter, Janneke M; Slijper, Harm P; Over, Eelco A B; Frens, Maarten A

    2008-11-01

    Several ergonomic studies have estimated computer work duration using registration software. In these studies, an arbitrary pause definition (Pd; the minimal time between two computer events to constitute a pause) is chosen and the resulting duration of computer work is estimated. In order to uncover the relationship between the used pause definition and the computer work duration (PWT), we used registration software to record usage patterns of 571 computer users across almost 60,000 working days. For a large range of Pds (1-120 s), we found a shallow, log-linear relationship between PWT and Pds. For keyboard and mouse use, a second-order function fitted the data best. We found that these relationships were dependent on the amount of computer work and subject characteristics. Comparison of exposure duration from studies using different pause definitions should take this into account, since it could lead to misclassification. Software manufacturers and ergonomists assessing computer work duration could use the found relationships for software design and study comparison.
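
    The quantity under study can be made concrete with a short sketch: given a stream of keyboard/mouse event timestamps, the computer work duration for a pause definition Pd is the sum of inter-event gaps shorter than Pd. The synthetic event stream below is only for illustration.

        import numpy as np

        def computer_work_time(event_times, pd):
            """Total computer work duration under pause definition `pd` (s):
            inter-event gaps shorter than pd count as work, longer as pauses."""
            gaps = np.diff(np.sort(np.asarray(event_times)))
            return float(gaps[gaps < pd].sum())

        rng = np.random.default_rng(0)
        events = np.cumsum(rng.exponential(2.0, size=5000))  # synthetic event stream
        for pd in (1, 5, 30, 120):
            print(pd, round(computer_work_time(events, pd), 1))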

  8. Development of a computational model on the neural activity patterns of a visual working memory in a hierarchical feedforward Network

    NASA Astrophysics Data System (ADS)

    An, Soyoung; Choi, Woochul; Paik, Se-Bum

    2015-11-01

    Understanding the mechanism of information processing in the human brain remains a unique challenge because the nonlinear interactions between the neurons in the network are extremely complex and because controlling every relevant parameter during an experiment is difficult. Therefore, a simulation using simplified computational models may be an effective approach. In the present study, we developed a general model of neural networks that can simulate nonlinear activity patterns in the hierarchical structure of a neural network system. To test our model, we first examined whether our simulation could match the previously-observed nonlinear features of neural activity patterns. Next, we performed a psychophysics experiment for a simple visual working memory task to evaluate whether the model could predict the performance of human subjects. Our studies show that the model is capable of reproducing the relationship between memory load and performance and may contribute, in part, to our understanding of how the structure of neural circuits can determine the nonlinear neural activity patterns in the human brain.

  9. Temporal analysis of laser beam propagation in the atmosphere using computer-generated long phase screens.

    PubMed

    Dios, Federico; Recolons, Jaume; Rodríguez, Alejandro; Batet, Oscar

    2008-02-04

    Temporal analysis of the irradiance at the detector plane is intended as the first step in the study of the mean fade time in a free optical communication system. In the present work this analysis has been performed for a Gaussian laser beam propagating in the atmospheric turbulence by means of computer simulation. To this end, we have adapted a previously known numerical method to the generation of long phase screens. The screens are displaced in a transverse direction as the wave is propagated, in order to simulate the wind effect. The amplitude of the temporal covariance and its power spectrum have been obtained at the optical axis, at the beam centroid and at a certain distance from these two points. Results have been worked out for weak, moderate and strong turbulence regimes and when possible they have been compared with theoretical models. These results show a significant contribution of beam wander to the temporal behaviour of the irradiance, even in the case of weak turbulence. We have also found that the spectral bandwidth of the covariance is hardly dependent on the Rytov variance.
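
    For readers unfamiliar with phase-screen simulation, the following is a generic FFT-based Kolmogorov phase screen generator (spectral filtering in the style of McGlamery). It is a sketch under common textbook assumptions, not the long-screen method adapted in the paper, and normalization conventions vary between references; wind would be simulated by translating such a screen across the beam between propagation steps.

        import numpy as np

        def kolmogorov_phase_screen(n, delta, r0, seed=0):
            """FFT-based Kolmogorov phase screen (spectral filtering).
            n: grid points per side; delta: grid spacing [m]; r0: Fried
            parameter [m]. Normalization conventions vary between references."""
            rng = np.random.default_rng(seed)
            fx = np.fft.fftfreq(n, d=delta)                  # cycles per metre
            fxx, fyy = np.meshgrid(fx, fx)
            f = np.hypot(fxx, fyy)
            f[0, 0] = np.inf                                 # suppress undefined piston term
            psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
            df = 1.0 / (n * delta)
            cn = (rng.standard_normal((n, n))
                  + 1j * rng.standard_normal((n, n))) * np.sqrt(psd) * df
            return np.real(np.fft.ifft2(cn)) * n ** 2        # phase in radians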

  10. Hydrodynamic Analyses and Evaluation of New Fluid Film Bearing Concepts

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr.; Dimofte, Florin

    1998-01-01

    Over the past several years, numerical and experimental investigations have been performed on a waved journal bearing. The research work was undertaken by Dr. Florin Dimofte, a Senior Research Associate in the Mechanical Engineering Department at the University of Toledo. Dr. Theo Keith, Distinguished University Professor in the Mechanical Engineering Department was the Technical Coordinator of the project. The wave journal bearing is a bearing with a slight but precise variation in its circular profile such that a waved profile is circumscribed on the inner bearing diameter. The profile has a wave amplitude that is equal to a fraction of the bearing clearance. Prior to this period of research on the wave bearing, computer codes were written and an experimental facility was established. During this period of research considerable effort was directed towards the study of the bearing's stability. The previously developed computer codes and the experimental facility were of critical importance in performing this stability research. A collection of papers and reports were written to describe the results of this work. The attached captures that effort and represents the research output during the grant period.

  11. Review of Real-Time Simulator and the Steps Involved for Implementation of a Model from MATLAB/SIMULINK to Real-Time

    NASA Astrophysics Data System (ADS)

    Mikkili, Suresh; Panda, Anup Kumar; Prattipati, Jayanthi

    2015-06-01

    Nowadays, researchers want to develop their models in a real-time environment. Simulation tools have been widely used for the design and improvement of electrical systems since the mid twentieth century. The evolution of simulation tools has progressed in step with the evolution of computing technologies. In recent years, computing technologies have improved dramatically in performance and become widely available at a steadily decreasing cost. Consequently, simulation tools have also seen dramatic performance gains and steady cost decreases. Researchers and engineers now have access to affordable, high-performance simulation tools that were previously cost prohibitive for all but the largest manufacturers. This work introduces a specific class of digital simulator known as a real-time simulator by answering the questions "what is real-time simulation", "why is it needed" and "how it works". The latest trend in real-time simulation consists of exporting simulation models to FPGA. In this article, the steps involved in implementing a model from MATLAB/SIMULINK in real time are provided in detail.

  12. Quantum-assisted Helmholtz machines: A quantum–classical deep learning framework for industrial datasets in near-term devices

    NASA Astrophysics Data System (ADS)

    Benedetti, Marcello; Realpe-Gómez, John; Perdomo-Ortiz, Alejandro

    2018-07-01

    Machine learning has been presented as one of the key applications for near-term quantum technologies, given its high commercial value and wide range of applicability. In this work, we introduce the quantum-assisted Helmholtz machine: a hybrid quantum–classical framework with the potential to tackle high-dimensional real-world machine learning datasets on continuous variables. Instead of using quantum computers only to assist deep learning, as previous approaches have suggested, we use deep learning to extract a low-dimensional binary representation of data, suitable for processing on relatively small quantum computers. Then, the quantum hardware and deep learning architecture work together to train an unsupervised generative model. We demonstrate this concept using 1644 quantum bits of a D-Wave 2000Q quantum device to model a sub-sampled version of the MNIST handwritten digit dataset with 16 × 16 continuous-valued pixels. Although we illustrate this concept on a quantum annealer, adaptations to other quantum platforms, such as ion-trap technologies or superconducting gate-model architectures, could be explored within this flexible framework.

  13. Improving Hall Thruster Plume Simulation through Refined Characterization of Near-field Plasma Properties

    NASA Astrophysics Data System (ADS)

    Huismann, Tyler D.

    Due to the rapidly expanding role of electric propulsion (EP) devices, it is important to evaluate their integration with other spacecraft systems. Specifically, EP device plumes can play a major role in spacecraft integration, and as such, accurate characterization of plume structure bears on mission success. This dissertation addresses issues related to accurate prediction of plume structure in a particular type of EP device, a Hall thruster. This is done in two ways: first, by coupling current plume simulation models with current models that simulate a Hall thruster's internal plasma behavior; second, by improving plume simulation models and thereby increasing physical fidelity. These methods are assessed by comparing simulated results to experimental measurements. Assessment indicates the two methods improve plume modeling capabilities significantly: using far-field ion current density as a metric, these approaches used in conjunction improve agreement with measurements by a factor of 2.5, as compared to previous methods. Based on comparison to experimental measurements, recent computational work on discharge chamber modeling has been largely successful in predicting properties of internal thruster plasmas. This model can provide detailed information on plasma properties at a variety of locations. Frequently, however, experimental data are not available at the locations of interest for computational models. In the absence of experimental data, there are limited alternatives for scientifically determining the plasma properties that are needed as inputs to plume simulations. Therefore, this dissertation focuses on coupling current models that simulate internal thruster plasma behavior with plume simulation models. Further, recent experimental work on atom-ion interactions has provided a better understanding of particle collisions within plasmas. This experimental work is used to update collision models in a current plume simulation code. Previous versions of the code assume an unknown dependence between particles' pre-collision velocities and post-collision scattering angles. This dissertation focuses on updating several of these types of collisions by assuming a curve fit based on the measurements of atom-ion interactions, such that previously unknown angular dependences are well-characterized.

  14. Knowledge and Utilization of Computers Among Health Professionals in a Developing Country: A Cross-Sectional Study

    PubMed Central

    2015-01-01

    Background Incorporation of information communication technology in health care has gained wide acceptance in the last two decades. Developing countries are also incorporating information communication technology into the health system, including the implementation of electronic medical records in major hospitals and the use of mobile health in rural community-based health interventions. However, the literature on the level of knowledge and utilization of information communication technology by health professionals in those settings is scarce for proper implementation planning. Objective The objective of this study is to assess knowledge, computer utilization, and associated factors among health professionals in hospitals and health institutions in Ethiopia. Methods A quantitative cross-sectional study was conducted on 554 health professionals working in 7 hospitals, 19 primary health centers, and 10 private clinics in the Harari region of Ethiopia. Data were collected using a semi-structured, self-administered, and pre-tested questionnaire. Descriptive and logistic regression techniques using SPSS version 16.0 (IBM Corporation) were applied to determine the level of knowledge and identify determinants of utilization of information communication technology. Results Out of 554 participants, 482 (87.0%) responded to the questionnaire. Among them, 90 (18.7%) demonstrated good knowledge of computers while 142 (29.5%) demonstrated good utilization habits. Health professionals who work in the primary health centers were found to have lower knowledge (3.4%) and utilization (18.4%). Age (adjusted odds ratio [AOR]=3.06, 95% CI 0.57-5.37), field of study (AOR=3.08, 95% CI 1.65-5.73), level of education (AOR=2.78, 95% CI 1.43-5.40), and previous computer training participation (AOR=3.65, 95% CI 1.62-8.21) were found to be significantly associated with the computer utilization habits of health professionals. Conclusions Computer knowledge and utilization habits of health professionals, especially those who work in primary health centers, were found to be low. Providing training and continuous follow-up are necessary measures to increase the likelihood of the success of implemented eHealth systems in those settings. PMID:27025996

  15. Knowledge and Utilization of Computers Among Health Professionals in a Developing Country: A Cross-Sectional Study.

    PubMed

    Alwan, Kalid; Awoke, Tadesse; Tilahun, Binyam

    2015-03-26

    Incorporation of information communication technology in health care has gained wide acceptance in the last two decades. Developing countries are also incorporating information communication technology into the health system, including the implementation of electronic medical records in major hospitals and the use of mobile health in rural community-based health interventions. However, the literature on the level of knowledge and utilization of information communication technology by health professionals in those settings is scarce for proper implementation planning. The objective of this study is to assess knowledge, computer utilization, and associated factors among health professionals in hospitals and health institutions in Ethiopia. A quantitative cross-sectional study was conducted on 554 health professionals working in 7 hospitals, 19 primary health centers, and 10 private clinics in the Harari region of Ethiopia. Data were collected using a semi-structured, self-administered, and pre-tested questionnaire. Descriptive and logistic regression techniques using SPSS version 16.0 (IBM Corporation) were applied to determine the level of knowledge and identify determinants of utilization of information communication technology. Out of 554 participants, 482 (87.0%) responded to the questionnaire. Among them, 90 (18.7%) demonstrated good knowledge of computers while 142 (29.5%) demonstrated good utilization habits. Health professionals who work in the primary health centers were found to have lower knowledge (3.4%) and utilization (18.4%). Age (adjusted odds ratio [AOR]=3.06, 95% CI 0.57-5.37), field of study (AOR=3.08, 95% CI 1.65-5.73), level of education (AOR=2.78, 95% CI 1.43-5.40), and previous computer training participation (AOR=3.65, 95% CI 1.62-8.21) were found to be significantly associated with the computer utilization habits of health professionals. Computer knowledge and utilization habits of health professionals, especially those who work in primary health centers, were found to be low. Providing training and continuous follow-up are necessary measures to increase the likelihood of the success of implemented eHealth systems in those settings.

  16. Use of the parameterised finite element method to robustly and efficiently evolve the edge of a moving cell.

    PubMed

    Neilson, Matthew P; Mackenzie, John A; Webb, Steven D; Insall, Robert H

    2010-11-01

    In this paper we present a computational tool that enables the simulation of mathematical models of cell migration and chemotaxis on an evolving cell membrane. Recent models require the numerical solution of systems of reaction-diffusion equations on the evolving cell membrane and then the solution state is used to drive the evolution of the cell edge. Previous work involved moving the cell edge using a level set method (LSM). However, the LSM is computationally very expensive, which severely limits the practical usefulness of the algorithm. To address this issue, we have employed the parameterised finite element method (PFEM) as an alternative method for evolving a cell boundary. We show that the PFEM is far more efficient and robust than the LSM. We therefore suggest that the PFEM potentially has an essential role to play in computational modelling efforts towards the understanding of many of the complex issues related to chemotaxis.

  17. Fast computation of quadrupole and hexadecapole approximations in microlensing with a single point-source evaluation

    NASA Astrophysics Data System (ADS)

    Cassan, Arnaud

    2017-07-01

    The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to a great enhancement of resources and improved observational strategy. Current observatories include ground-based wide-field and/or robotic world-wide networks of telescopes, as well as space-based observatories such as the Spitzer and Kepler/K2 satellites. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method that computes the quadrupole and hexadecapole approximations of the finite-source magnification more efficiently than previously available codes, with routines about six times and four times faster, respectively. The quadrupole takes just about twice the time of a point-source evaluation, which argues for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source python codes.
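
    To fix ideas, the sketch below contrasts a single point-source point-lens evaluation with a brute-force finite-source magnification obtained by averaging over the source disk; quadrupole and hexadecapole expansions replace this expensive average with a handful of point-source evaluations. The formulas shown are the standard single-lens ones, simpler than the binary-lens case handled by the paper's routines.

        import numpy as np

        def point_source_magnification(u):
            """Standard single-lens point-source magnification A(u)."""
            return (u ** 2 + 2) / (u * np.sqrt(u ** 2 + 4))

        def finite_source_magnification(u, rho, n=20000, seed=0):
            """Brute-force finite-source magnification: average A over a
            uniform source disk of radius rho at separation u. This is the
            expensive reference that quadrupole/hexadecapole expansions
            approximate with a handful of point-source evaluations."""
            rng = np.random.default_rng(seed)
            r = rho * np.sqrt(rng.uniform(0, 1, n))          # uniform over the disk
            th = rng.uniform(0, 2 * np.pi, n)
            d = np.hypot(u + r * np.cos(th), r * np.sin(th))
            return point_source_magnification(d).mean()

        u, rho = 0.1, 0.01
        print(point_source_magnification(u), finite_source_magnification(u, rho))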

  18. Comparison of Monoenergetic Photon Organ Dose Rate Coefficients for the Female Stylized and Voxel Phantoms Submerged in Air

    DOE PAGES

    Hiller, Mauritius; Dewji, Shaheen Azim

    2017-02-16

    Dose rate coefficients computed using the International Commission on Radiological Protection (ICRP) reference adult female voxel phantom were compared with values computed using the Oak Ridge National Laboratory (ORNL) adult female stylized phantom in an air submersion exposure geometry. This is a continuation of previous work comparing monoenergetic organ dose rate coefficients for the male adult phantoms. With both the male and female data computed, effective dose rate as defined by ICRP Publication 103 was compared for both phantoms. Organ dose rate coefficients for the female phantom and ratios of organ dose rates for the voxel and stylized phantoms are provided in the energy range from 30 keV to 5 MeV. Analysis of the contribution of the organs to effective dose is also provided. Lastly, comparison of effective dose rates between the voxel and stylized phantoms was within 8% at 100 keV and <5% between 200 and 5000 keV.

  19. Applications of Phase-Based Motion Processing

    NASA Technical Reports Server (NTRS)

    Branch, Nicholas A.; Stewart, Eric C.

    2018-01-01

    Image pyramids provide useful information in determining structural response at low cost using commercially available cameras. The current effort applies previous work on the complex steerable pyramid to analyze and identify imperceptible linear motions in video. Instead of implicitly computing motion spectra through phase analysis of the complex steerable pyramid and magnifying the associated motions, we present a visual technique and the necessary software to display the phase changes of high-frequency signals within video. The present technique quickly identifies regions of largest motion within a video with a single phase visualization and without the artifacts of motion magnification, but requires use of the computationally intensive Fourier transform. While Riesz pyramids present an alternative to the computationally intensive complex steerable pyramid for motion magnification, the Riesz formulation contains significant noise, and motion magnification still presents large amounts of data that cannot be quickly assessed by the human eye. Thus, user-friendly software is presented for quickly identifying structural response through optical flow and phase visualization in both Python and MATLAB.

  20. Irrigant flow within a prepared root canal using various flow rates: a Computational Fluid Dynamics study.

    PubMed

    Boutsioukis, C; Lambrianidis, T; Kastrinakis, E

    2009-02-01

    To study, using computer simulation, the effect of irrigant flow rate on the flow pattern within a prepared root canal during final irrigation with a syringe and needle. Geometrical characteristics of a side-vented endodontic needle and clinically realistic flow rate values were obtained from previous and preliminary studies. A Computational Fluid Dynamics (CFD) model was created using FLUENT 6.2 software. Calculations were carried out for five selected flow rates (0.02-0.79 mL s(-1)), and velocity and turbulence quantities along the domain were evaluated. Irrigant replacement was limited to 1-1.5 mm apical to the needle tip for all flow rates tested. Low-Reynolds number turbulent flow was detected near the needle outlet. Irrigant flow rate significantly affected the flow pattern within the root canal. Irrigation needles should be placed to within 1 mm from working length to ensure fluid exchange. Turbulent flow of irrigant leads to more efficient irrigant replacement. CFD represents a powerful tool for the study of irrigation.

  1. Optical Activity of Benzil Crystal

    NASA Astrophysics Data System (ADS)

    Říha, Jan; Vyšín, Ivo

    2003-09-01

    Optical activity of benzil, as an example of matter that is optically active in the crystalline state only and not in solution, is studied for wavelengths ranging from 0.320 μm to 0.585 μm. Previously measured experimental data are approximated by a theoretical set of formulas derived using the three coupled oscillators model. The earlier published formula consisting of six terms differed from the experimental data particularly in the wavelength region (0.380-0.510) μm. This formula is replaced by a twelve-term formula, computed by a computer program we developed specifically for the interpretation of experimental optical activity data, based on the Marquardt-Levenberg least-squares minimization method. The possibility of a molecular contribution to the resulting optical activity of benzil is mentioned. The use of Kramers-Kronig transforms for the determination of the circular dichroism curve based on the optical rotatory dispersion result is shown. The theoretically computed circular dichroism is compared with the available experimental data.

  2. Comparison of Monoenergetic Photon Organ Dose Rate Coefficients for the Female Stylized and Voxel Phantoms Submerged in Air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiller, Mauritius; Dewji, Shaheen Azim

    Dose rate coefficients computed using the International Commission on Radiological Protection (ICRP) reference adult female voxel phantom were compared with values computed using the Oak Ridge National Laboratory (ORNL) adult female stylized phantom in an air submersion exposure geometry. This is a continuation of previous work comparing monoenergetic organ dose rate coefficients for the male adult phantoms. With both the male and female data computed, effective dose rate as defined by ICRP Publication 103 was compared for both phantoms. Organ dose rate coefficients for the female phantom and ratios of organ dose rates for the voxel and stylized phantoms are provided in the energy range from 30 keV to 5 MeV. Analysis of the contribution of the organs to effective dose is also provided. Lastly, comparison of effective dose rates between the voxel and stylized phantoms was within 8% at 100 keV and <5% between 200 and 5000 keV.

  3. A virtual reality environment for telescope operation

    NASA Astrophysics Data System (ADS)

    Martínez, Luis A.; Villarreal, José L.; Ángeles, Fernando; Bernal, Abel

    2010-07-01

    Astronomical observatories and telescopes are becoming increasingly large and complex systems, requiring any potential user to acquire a great amount of information before accessing them. At present, the most common way to manage all of that information is through the implementation of larger graphical user interfaces and additional computer monitors to increase the display area. Tonantzintla Observatory has a 1-m telescope with a remote observing system. As a step forward in the improvement of the telescope software, we have designed a Virtual Reality (VR) environment that works as an extension of the remote system and allows us to operate the telescope. In this work we explore this alternative technology, which is suggested here as a software platform for the operation of the 1-m telescope.

  4. Determinant representation of the domain-wall boundary condition partition function of a Richardson-Gaudin model containing one arbitrary spin

    NASA Astrophysics Data System (ADS)

    Faribault, Alexandre; Tschirhart, Hugo; Muller, Nicolas

    2016-05-01

    In this work we present a determinant expression for the domain-wall boundary condition partition function of rational (XXX) Richardson-Gaudin models which, in addition to N - 1 spins 1/2, contains one arbitrarily large spin S. The proposed determinant representation is written in terms of a set of variables which, from previous work, are known to define eigenstates of the quantum integrable models belonging to this class as solutions to quadratic Bethe equations. Such a determinant can be useful numerically since systems of quadratic equations are much simpler to solve than the usual highly nonlinear Bethe equations. It can therefore offer significant gains in stability and computation speed.

  5. Uniform rovibrational collisional N2 bin model for DSMC, with application to atmospheric entry flows

    NASA Astrophysics Data System (ADS)

    Torres, E.; Bondar, Ye. A.; Magin, T. E.

    2016-11-01

    A state-to-state model for internal energy exchange and molecular dissociation allows for high-fidelity DSMC simulations. Elementary reaction cross sections for the N2(v, J) + N system were previously extracted from a quantum-chemical database, originally compiled at NASA Ames Research Center. Due to the high computational cost of simulating the full range of inelastic collision processes (approx. 23 million reactions), a coarse-grain model, called the Uniform RoVibrational Collisional (URVC) bin model, can be used instead, reducing the original 9390 rovibrational levels of N2 to 10 energy bins. In the present work, this reduced model is used to simulate a 2D flow configuration, which more closely reproduces the conditions of high-speed entry into Earth's atmosphere. For this purpose, the URVC bin model had to be adapted for integration into the "Rarefied Gas Dynamics Analysis System" (RGDAS), a separate high-performance DSMC code capable of handling complex geometries and parallel computations. RGDAS was developed at the Institute of Theoretical and Applied Mechanics in Novosibirsk, Russia for use by the European Space Agency (ESA) and shares many features with the well-known SMILE code developed by the same group. We show that the reduced mechanism developed previously can be implemented in RGDAS, and the results exhibit nonequilibrium effects consistent with those observed in previous 1D simulations.
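
    The coarse-graining step can be illustrated with a short sketch that lumps discrete internal-energy levels into uniform energy bins and computes Boltzmann-weighted bin populations. This shows only the binning idea; the actual URVC model also prescribes how elementary cross sections are averaged within each bin.

        import numpy as np

        def uniform_energy_bins(level_energies, level_degeneracies,
                                n_bins=10, T=10000.0):
            """Lump discrete internal-energy levels into uniform energy bins,
            returning Boltzmann-weighted bin energies and populations."""
            kB = 1.380649e-23                                # J/K
            e = np.asarray(level_energies, float)            # level energies [J]
            g = np.asarray(level_degeneracies, float)
            edges = np.linspace(e.min(), e.max(), n_bins + 1)
            idx = np.clip(np.digitize(e, edges) - 1, 0, n_bins - 1)
            w = g * np.exp(-e / (kB * T))                    # Boltzmann weights
            pop = np.array([w[idx == b].sum() for b in range(n_bins)])
            e_bin = np.array([np.average(e[idx == b], weights=w[idx == b])
                              if (idx == b).any() else np.nan
                              for b in range(n_bins)])
            return e_bin, pop / pop.sum()

        # Toy demo with evenly spaced, non-degenerate levels:
        print(uniform_energy_bins(np.arange(60) * 4.0e-20, np.ones(60)))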

  6. Computer work and musculoskeletal disorders of the neck and upper extremity: A systematic review

    PubMed Central

    2010-01-01

    Background This review examines the evidence for an association between computer work and neck and upper extremity disorders (except carpal tunnel syndrome). Methods A systematic critical review of studies of computer work and musculoskeletal disorders verified by a physical examination was performed. Results A total of 22 studies (26 articles) fulfilled the inclusion criteria. Results show limited evidence for a causal relationship between computer work per se, computer mouse and keyboard time related to a diagnosis of wrist tendonitis, and for an association between computer mouse time and forearm disorders. Limited evidence was also found for a causal relationship between computer work per se and computer mouse time related to tension neck syndrome, but the evidence for keyboard time was insufficient. Insufficient evidence was found for an association between other musculoskeletal diagnoses of the neck and upper extremities, including shoulder tendonitis and epicondylitis, and any aspect of computer work. Conclusions There is limited epidemiological evidence for an association between aspects of computer work and some of the clinical diagnoses studied. None of the evidence was considered as moderate or strong and there is a need for more and better documentation. PMID:20429925

  7. The Search for Effective Algorithms for Recovery from Loss of Separation

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Hagen, George E.; Maddalon, Jeffrey M.; Munoz, Cesar A.; Narkawicz, Anthony J.

    2012-01-01

    Our previous work presented an approach for developing high confidence algorithms for recovering aircraft from loss of separation situations. The correctness theorems for the algorithms relied on several key assumptions, namely that state data for all local aircraft is perfectly known, that resolution maneuvers can be achieved instantaneously, and that all aircraft compute resolutions using exactly the same data. Experiments showed that these assumptions were adequate in cases where the aircraft are far away from losing separation, but are insufficient when the aircraft have already lost separation. This paper describes the results of this experimentation and proposes a new criteria specification for loss of separation recovery that preserves the formal safety properties of the previous criteria while overcoming some key limitations. Candidate algorithms that satisfy the new criteria are presented.

  8. A systematic investigation of computation models for predicting Adverse Drug Reactions (ADRs).

    PubMed

    Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong

    2014-01-01

    Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance to construct more effective computational models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods to predict ADRs, by implementing and evaluating additional algorithms that were earlier used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent, and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that their final formulas could all be converted to a linear model in form; based on this finding, we propose a new algorithm called the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms.
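
    As a concrete illustration of the role of the Jaccard coefficient in this setting, the sketch below scores candidate drugs for an ADR by Jaccard similarity of binary feature profiles to drugs already known to cause it. This is a generic weighted-profile-style scheme, not the paper's exact "general weighted profile" formula; all names and data are illustrative.

        import numpy as np

        def jaccard(a, b):
            """Jaccard coefficient between two binary profiles."""
            a, b = np.asarray(a, bool), np.asarray(b, bool)
            union = np.logical_or(a, b).sum()
            return np.logical_and(a, b).sum() / union if union else 0.0

        def score_drug_adr(drug_profiles, known_positives):
            """Score each candidate drug for an ADR as the mean Jaccard
            similarity of its profile to drugs known to cause that ADR."""
            return {d: float(np.mean([jaccard(p, drug_profiles[k])
                                      for k in known_positives]))
                    for d, p in drug_profiles.items() if d not in known_positives}

        profiles = {"drugA": [1, 1, 0, 1], "drugB": [1, 1, 0, 0], "drugC": [0, 0, 1, 1]}
        print(score_drug_adr(profiles, known_positives={"drugA"}))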

  9. Proton affinity and enthalpy of formation of formaldehyde

    NASA Astrophysics Data System (ADS)

    Czakó, Gábor; Nagy, Balázs; Tasi, Gyula; Somogyi, Árpád; Šimunek, Ján; Noga, Jozef; Braams, Bastiaan J.; Bowman, Joel M.; Császár, Attila G.

    The proton affinity and the enthalpy of formation of the prototypical carbonyl, formaldehyde, have been determined by the first-principles composite focal-point analysis (FPA) approach. The electronic structure computations employed the all-electron coupled-cluster method with up to single, double, triple, quadruple, and even pentuple excitations. In these computations the aug-cc-p(C)VXZ [X = 2(D), 3(T), 4(Q), 5, and 6] correlation-consistent Gaussian basis sets for C and O were used in conjunction with the corresponding aug-cc-pVXZ (X = 2-6) sets for H. The basis set limit values have been confirmed via explicitly correlated computations. Our FPA study supersedes previous computational work for the proton affinity and to some extent the enthalpy of formation of formaldehyde by accounting for (a) electron correlation beyond the "gold standard" CCSD(T) level; (b) the non-additivity of core electron correlation effects; (c) scalar relativity; (d) diagonal Born-Oppenheimer corrections computed at a correlated level; (e) anharmonicity of zero-point vibrational energies, based on global potential energy surfaces and variational vibrational computations; and (f) thermal corrections to enthalpies by direct summation over rovibrational energy levels. Our final proton affinities at 298.15 (0.0) K are ΔpaH°(H2CO) = 711.02 (704.98) ± 0.39 kJ mol-1. Our final enthalpies of formation at 298.15 (0.0) K are ΔfH°(H2CO) = -109.23 (-105.42) ± 0.33 kJ mol-1. The latter values are based on the enthalpy of the H2 + CO → H2CO reaction but supported by two further reaction schemes, H2O + C → H2CO and 2H + C + O → H2CO. These values, especially ΔpaH°(H2CO), have better accuracy and considerably lower uncertainty than the best previous recommendations and thus should be employed in future studies.

  10. Computer Assistance in Information Work. Part I: Conceptual Framework for Improving the Computer/User Interface in Information Work. Part II: Catalog of Acceleration, Augmentation, and Delegation Functions in Information Work.

    ERIC Educational Resources Information Center

    Paisley, William; Butler, Matilda

    This study of the computer/user interface investigated the role of the computer in performing information tasks that users now perform without computer assistance. Users' perceptual/cognitive processes are to be accelerated or augmented by the computer; a long term goal is to delegate information tasks entirely to the computer. Cybernetic and…

  11. Biorthogonal projected energies of a Gutzwiller similarity transformed Hamiltonian.

    PubMed

    Wahlen-Strothman, J M; Scuseria, G E

    2016-12-07

    We present a method incorporating biorthogonal orbital-optimization, symmetry projection, and double-occupancy screening with a non-unitary similarity transformation generated by the Gutzwiller factor, and apply it to the Hubbard model. Energies are calculated with mean-field computational scaling, with high-quality results comparable to coupled cluster singles and doubles. This builds on previous work performing similarity transformations with more general, two-body Jastrow-style correlators. The theory is tested on 2D lattices ranging from small systems into the thermodynamic limit and is compared to available reference data.

  12. Cosmological backgrounds of gravitational waves and eLISA/NGO: phase transitions, cosmic strings and other sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Binétruy, Pierre; Dufaux, Jean-François; Bohé, Alejandro

    We review several cosmological backgrounds of gravitational waves accessible to direct-detection experiments, with a special emphasis on those backgrounds due to first-order phase transitions and networks of cosmic (super-)strings. For these two particular sources, we revisit in detail the computation of the gravitational wave background and improve the results of previous works in the literature. We apply our results to identify the scientific potential of the NGO/eLISA mission of ESA regarding the detectability of cosmological backgrounds.

  13. Research on the equivalent circuit model of a circular flexural-vibration-mode piezoelectric transformer with moderate thickness.

    PubMed

    Huang, Yihua; Huang, Wenjin; Wang, Qinglei; Su, Xujian

    2013-07-01

    The equivalent circuit model of a piezoelectric transformer is useful in designing and optimizing the related driving circuits. Based on previous work, an equivalent circuit model for a circular flexural-vibration-mode piezoelectric transformer with moderate thickness is proposed and validated by finite element analysis. The input impedance, voltage gain, and efficiency of the transformer are determined through computation. The basic behaviors of the transformer are shown by numerical results.

  14. Scenario Decomposition for 0-1 Stochastic Programs: Improvements and Asynchronous Implementation

    DOE PAGES

    Ryan, Kevin; Rajan, Deepak; Ahmed, Shabbir

    2016-05-01

    Our recently proposed scenario decomposition algorithm for stochastic 0-1 programs finds an optimal solution by evaluating and removing individual solutions that are discovered by solving scenario subproblems. In this work, we develop an asynchronous, distributed implementation of the algorithm which has computational advantages over existing synchronous implementations. Improvements to both the synchronous and asynchronous algorithms are proposed. We also test the algorithm on well-known stochastic 0-1 programs from the SIPLIB test library and are able to solve one previously unsolved instance from the test set.
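
    A toy, synchronous rendering of the scenario decomposition idea follows: solve each scenario subproblem over the not-yet-excluded binary solutions, evaluate the candidates against the full objective, exclude them, and stop once the scenario lower bound reaches the incumbent. Subproblems are solved by brute-force enumeration purely for illustration; the asynchronous, distributed machinery of the paper is omitted.

        import itertools

        def scenario_decomposition(scenario_costs, n_vars):
            """Toy synchronous scenario decomposition for
            min_x sum_s f_s(x), x in {0,1}^n. Enumeration stands in for a
            MIP solver of the scenario subproblems."""
            excluded, best, best_val = set(), None, float("inf")
            while True:
                candidates, lower = set(), 0.0
                for f in scenario_costs:                     # one subproblem per scenario
                    x = min((x for x in itertools.product((0, 1), repeat=n_vars)
                             if x not in excluded), key=f)
                    candidates.add(x)
                    lower += f(x)
                for x in candidates:                         # evaluate true objective
                    val = sum(f(x) for f in scenario_costs)
                    if val < best_val:
                        best, best_val = x, val
                excluded |= candidates                       # remove evaluated solutions
                if lower >= best_val:                        # bound proves optimality
                    return best, best_val

        fs = [lambda x: 3 * x[0] - 2 * x[1] + 1, lambda x: -x[0] + 4 * x[1]]
        print(scenario_decomposition(fs, n_vars=2))          # -> ((0, 0), 1)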

  15. Nucleation and growth in one dimension. I. The generalized Kolmogorov-Johnson-Mehl-Avrami model

    NASA Astrophysics Data System (ADS)

    Jun, Suckjoon; Zhang, Haiyang; Bechhoefer, John

    2005-01-01

    Motivated by a recent application of the Kolmogorov-Johnson-Mehl-Avrami (KJMA) model to the study of DNA replication, we consider the one-dimensional (1D) version of this model. We generalize previous work to the case where the nucleation rate is an arbitrary function I(t) and obtain analytical results for the time-dependent distributions of various quantities (such as the island distribution). We also present improved computer simulation algorithms to study the 1D KJMA model. The analytical results and simulations are in excellent agreement.
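
    For the constant-nucleation-rate special case, the 1D KJMA prediction for the uncovered fraction can be checked directly by simulation: with nucleation rate I per unit length per unit time and bidirectional growth speed v, the extended-volume argument gives exp(-I v t^2). The sketch below covers this special case only; the paper treats arbitrary I(t).

        import numpy as np

        def kjma_1d_uncovered(I, v, t, L=1000.0, n_probes=20000, seed=0):
            """Monte Carlo estimate of the uncovered fraction in the 1D KJMA
            model with constant nucleation rate I (per length per time) and
            bidirectional growth speed v, on a ring of length L."""
            rng = np.random.default_rng(seed)
            n = rng.poisson(I * L * t)                       # nuclei in [0,t] x [0,L]
            x, tau = rng.uniform(0, L, n), rng.uniform(0, t, n)
            probes = rng.uniform(0, L, n_probes)
            d = np.abs(probes[:, None] - x[None, :])
            d = np.minimum(d, L - d)                         # distance on the ring
            covered = (d < v * (t - tau)).any(axis=1)
            return 1.0 - covered.mean()

        I, v, t = 0.01, 1.0, 5.0
        print(kjma_1d_uncovered(I, v, t), np.exp(-I * v * t ** 2))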

  16. Nature and origins of virtual environments - A bibliographical essay

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.

    1991-01-01

    Virtual environments presented via head-mounted, computer-driven displays provide a new media for communication. They may be analyzed by considering: (1) what may be meant by an environment; (2) what is meant by the process of virtualization; and (3) some aspects of human performance that constrain environmental design. Their origins are traced from previous work in vehicle simulation and multimedia research. Pointers are provided to key technical references, in the dispersed, archival literature, that are relevant to the development and evaluation of virtual-environment interface systems.

  17. Complexity reduction of rate-equations models for two-choice decision-making.

    PubMed

    Carrillo, José Antonio; Cordier, Stéphane; Deco, Gustavo; Mancini, Simona

    2013-01-01

    We are concerned with the complexity reduction of a stochastic system of differential equations governing the dynamics of a neuronal circuit describing a decision-making task. This reduction is based on the slow-fast behavior of the problem and holds on the whole phase space and not only locally around the spontaneous state. Macroscopic quantities, such as performance and reaction times, computed applying this reduction are in agreement with previous works in which the complexity reduction is locally performed at the spontaneous point by means of a Taylor expansion.

  18. On the computation of steady Hopper flows. II: von Mises materials in various geometries

    NASA Astrophysics Data System (ADS)

    Gremaud, Pierre A.; Matthews, John V.; O'Malley, Meghan

    2004-11-01

    Similarity solutions are constructed for the flow of granular materials through hoppers. Unlike previous work, the present approach applies to nonaxisymmetric containers. The model involves ten unknowns (stresses, velocity, and plasticity function) determined by nine nonlinear first order partial differential equations together with a quadratic algebraic constraint (yield condition). A pseudospectral discretization is applied; the resulting problem is solved with a trust region method. The important role of the hopper geometry on the flow is illustrated by several numerical experiments of industrial relevance.

  19. A parallel approach of COFFEE objective function to multiple sequence alignment

    NASA Astrophysics Data System (ADS)

    Zafalon, G. F. D.; Visotaky, J. M. V.; Amorim, A. R.; Valêncio, C. R.; Neves, L. A.; de Souza, R. C. G.; Machado, J. M.

    2015-09-01

    Computational tools to assist genomic analyses are increasingly necessary due to the fast growth of the amount of available data. Given the high computational cost of deterministic algorithms for sequence alignment, many works concentrate their efforts on the development of heuristic approaches to multiple sequence alignment. However, selecting an approach that offers solutions with good biological significance and feasible execution time is a great challenge. Thus, this work presents the parallelization, using the multithread paradigm, of the processing steps of the MSA-GA tool in the execution of the COFFEE objective function. The standard objective function implemented in the tool is the Weighted Sum of Pairs (WSP), which produces some distortions in the final alignments when sequence sets with low similarity are aligned. In previous studies we therefore implemented the COFFEE objective function in the tool to smooth these distortions. Although the nature of the COFFEE objective function implies an increase in execution time, the approach contains steps that can be executed in parallel. With the improvements implemented in this work, the new approach is 24% faster than the sequential approach with COFFEE. Moreover, the multithreaded COFFEE approach is more efficient than WSP because, besides being slightly faster, its biological results are better.
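
    The parallelization opportunity comes from the objective being a sum over independent pairwise terms, so the pairs can be scored concurrently. A minimal sketch follows; the per-pair score here is a placeholder (a real COFFEE score consults a library of pairwise alignments), and in CPython genuine speedups for pure-Python scoring would require processes or a compiled scorer rather than threads.

        from concurrent.futures import ThreadPoolExecutor
        from itertools import combinations

        def pair_score(pair):
            """Placeholder per-pair score (fractional identity); a real
            COFFEE score consults a library of pairwise alignments."""
            s1, s2 = pair
            return sum(a == b for a, b in zip(s1, s2)) / min(len(s1), len(s2))

        def objective(alignment, workers=4):
            """Evaluate the pairwise terms of the objective concurrently;
            the sum over independent pairs is what makes COFFEE a natural
            target for multithread parallelization."""
            with ThreadPoolExecutor(max_workers=workers) as pool:
                return sum(pool.map(pair_score, combinations(alignment, 2)))

        print(objective(["ACGT-A", "ACGTTA", "AC-TTA"]))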

  20. On the Calculation of Uncertainty Statistics with Error Bounds for CFD Calculations Containing Random Parameters and Fields

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2016-01-01

    This chapter discusses the ongoing development of combined uncertainty and error bound estimates for computational fluid dynamics (CFD) calculations subject to imposed random parameters and random fields. An objective of this work is the construction of computable error bound formulas for output uncertainty statistics that guide CFD practitioners in systematically determining how accurately CFD realizations should be approximated and how accurately uncertainty statistics should be approximated for output quantities of interest. Formal error bounds formulas for moment statistics that properly account for the presence of numerical errors in CFD calculations and numerical quadrature errors in the calculation of moment statistics have been previously presented in [8]. In this past work, hierarchical node-nested dense and sparse tensor product quadratures are used to calculate moment statistics integrals. In the present work, a framework has been developed that exploits the hierarchical structure of these quadratures in order to simplify the calculation of an estimate of the quadrature error needed in error bound formulas. When signed estimates of realization error are available, this signed error may also be used to estimate output quantity of interest probability densities as a means to assess the impact of realization error on these density estimates. Numerical results are presented for CFD problems with uncertainty to demonstrate the capabilities of this framework.

  1. Probabilistic Fracture Mechanics Analysis of the Orbiter's LH2 Feedline Flowliner

    NASA Technical Reports Server (NTRS)

    Bonacuse, Peter J. (Technical Monitor); Hudak, Stephen J., Jr.; Huyse, Luc; Chell, Graham; Lee, Yi-Der; Riha, David S.; Thacker, Ben; McClung, Craig; Gardner, Brian; Leverant, Gerald R.

    2005-01-01

    Work performed by Southwest Research Institute (SwRI) as part of an Independent Technical Assessment (ITA) for the NASA Engineering and Safety Center (NESC) is summarized. The ITA goal was to establish a flight rationale in light of a history of fatigue cracking due to flow induced vibrations in the feedline flowliners that supply liquid hydrogen to the space shuttle main engines. Prior deterministic analyses using worst-case assumptions predicted failure in a single flight. The current work formulated statistical models for dynamic loading and cryogenic fatigue crack growth properties, instead of using worst-case assumptions. Weight function solutions for bivariant stressing were developed to determine accurate crack "driving-forces". Monte Carlo simulations showed that low flowliner probabilities of failure (POF = 0.001 to 0.0001) are achievable, provided pre-flight inspections for cracks are performed with adequate probability of detection (POD)-specifically, 20/75 mils with 50%/99% POD. Measurements to confirm assumed POD curves are recommended. Since the computed POFs are very sensitive to the cyclic loads/stresses and the analysis of strain gage data revealed inconsistencies with the previous assumption of a single dominant vibrant mode, further work to reconcile this difference is recommended. It is possible that the unaccounted vibrational modes in the flight spectra could increase the computed POFs.

  2. Visual ergonomics and computer work--is it all about computer glasses?

    PubMed

    Jonsson, Christina

    2012-01-01

    The Swedish Provisions on Work with Display Screen Equipment and the EU Directive on the minimum safety and health requirements for work with display screen equipment cover several important visual ergonomics aspects. But a review of cases and questions to the Swedish Work Environment Authority clearly shows that most attention is given to the demands for eyesight tests and special computer glasses. Other important visual ergonomics factors are at risk of being neglected. Today computers are used everywhere, both at work and at home. Computers can be laptops, PDA's, tablet computers, smart phones, etc. The demands on eyesight tests and computer glasses still apply but the visual demands and the visual ergonomics conditions are quite different compared to the use of a stationary computer. Based on this review, we raise the question if the demand on the employer to provide the employees with computer glasses is outdated.

  3. Body-wide anatomy recognition in PET/CT images

    NASA Astrophysics Data System (ADS)

    Wang, Huiqian; Udupa, Jayaram K.; Odhner, Dewey; Tong, Yubing; Zhao, Liming; Torigian, Drew A.

    2015-03-01

    With the rapid growth of positron emission tomography/computed tomography (PET/CT)-based medical applications, body-wide anatomy recognition on whole-body PET/CT images becomes crucial for quantifying body-wide disease burden. This, however, is a challenging problem and seldom studied due to unclear anatomy reference frame and low spatial resolution of PET images as well as low contrast and spatial resolution of the associated low-dose CT images. We previously developed an automatic anatomy recognition (AAR) system [15] whose applicability was demonstrated on diagnostic computed tomography (CT) and magnetic resonance (MR) images in different body regions on 35 objects. The aim of the present work is to investigate strategies for adapting the previous AAR system to low-dose CT and PET images toward automated body-wide disease quantification. Our adaptation of the previous AAR methodology to PET/CT images in this paper focuses on 16 objects in three body regions - thorax, abdomen, and pelvis - and consists of the following steps: collecting whole-body PET/CT images from existing patient image databases, delineating all objects in these images, modifying the previous hierarchical models built from diagnostic CT images to account for differences in appearance in low-dose CT and PET images, automatically locating objects in these images following object hierarchy, and evaluating performance. Our preliminary evaluations indicate that the performance of the AAR approach on low-dose CT images achieves object localization accuracy within about 2 voxels, which is comparable to the accuracies achieved on diagnostic contrast-enhanced CT images. Object recognition on low-dose CT images from PET/CT examinations without requiring diagnostic contrast-enhanced CT seems feasible.

  4. Modified free volume theory of self-diffusion and molecular theory of shear viscosity of liquid carbon dioxide.

    PubMed

    Nasrabad, Afshin Eskandari; Laghaei, Rozita; Eu, Byung Chan

    2005-04-28

    In previous work on the density fluctuation theory of transport coefficients of liquids, it was necessary to use empirical self-diffusion coefficients to calculate the transport coefficients (e.g., the shear viscosity of carbon dioxide). In this work, the need for empirical input of the self-diffusion coefficients in the calculation of shear viscosity is removed, and the theory is thus made a self-contained molecular theory of transport coefficients of liquids, although it retains one empirical parameter in the subcritical regime. The required self-diffusion coefficients of liquid carbon dioxide are calculated by using the modified free volume theory, for which the generic van der Waals equation of state and Monte Carlo simulations are combined to compute the mean free volume accurately by means of statistical mechanics. They have been computed as a function of density along four different isotherms and isobars. A Lennard-Jones site-site interaction potential was used to model the molecular interactions of carbon dioxide. The density and temperature dependence of the theoretical self-diffusion coefficients is shown to be in excellent agreement with experimental data when the minimum critical free volume is identified with the molecular volume. The self-diffusion coefficients thus computed are then used to compute the density and temperature dependence of the shear viscosity of liquid carbon dioxide by employing the density fluctuation theory formula for shear viscosity reported in an earlier paper [J. Chem. Phys. 112, 7118 (2000)]. The theoretical shear viscosity is shown to be robust and yields excellent density and temperature dependence for carbon dioxide. The pair correlation function appearing in the theory has been computed by Monte Carlo simulations.
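
    The exponential free-volume dependence at the heart of such a calculation can be sketched in a few lines. The Cohen-Turnbull-style form and the CO2-like parameter values below are assumptions for illustration; the paper itself obtains the mean free volume from the generic van der Waals equation of state combined with Monte Carlo simulation.

        import numpy as np

        def self_diffusion(T, v_free, v_min, alpha=1.0, d=3.0e-10, m=7.3e-26):
            """Free-volume estimate of D (m^2/s), Cohen-Turnbull form.

            T      : temperature (K)
            v_free : mean free volume per molecule (m^3)
            v_min  : minimum critical free volume, identified above with
                     the molecular volume
            alpha  : overlap parameter of order unity (assumed)
            d, m   : molecular diameter (m) and mass (kg), CO2-like values
            """
            k_B = 1.380649e-23
            u = np.sqrt(3.0 * k_B * T / m)        # thermal speed scale
            return (d * u / 6.0) * np.exp(-alpha * v_min / v_free)

        print(self_diffusion(273.0, v_free=3.0e-29, v_min=5.0e-29))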

  5. Computational simulations of supersonic magnetohydrodynamic flow control, power and propulsion systems

    NASA Astrophysics Data System (ADS)

    Wan, Tian

    This work is motivated by the lack of a fully coupled computational tool that successfully solves the turbulent, chemically reacting Navier-Stokes equations together with the electron energy conservation equation and the electric current Poisson equation. In the present work, these equations are solved in a fully coupled manner using fully implicit parallel GMRES methods. The system of Navier-Stokes equations is solved using a GMRES method with combined Schwarz and ILU(0) preconditioners. The electron energy equation and the electric current Poisson equation are solved using a GMRES method with combined SOR and Jacobi preconditioners. The fully coupled method has also been implemented successfully in an unstructured solver, US3D, and convergence test results are presented. This new method is shown to be two to five times faster than the original DPLR method. The Poisson solver is validated with analytic test problems. Four problems are then selected; two of them are computed to explore the possibility of onboard MHD control and power generation, and the other two are simulations of experiments. First, the possibility of onboard re-entry shock control by a magnetic field is explored. As part of a previous project, MHD power generation onboard a re-entry vehicle is also simulated. Then, the MHD acceleration experiments conducted at the NASA Ames Research Center are simulated. Lastly, the MHD power generation experiments known as the HVEPS project are simulated. For code validation, the scramjet experiments at the University of Queensland are simulated first, and the generator section of the HVEPS test facility is then computed. The main conclusion is that the computational tool is accurate for different types of problems and flow conditions, and that its accuracy and efficiency are necessary as the flow complexity increases.
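
    The GMRES-plus-incomplete-factorization pattern described above can be illustrated with SciPy on a toy sparse system; the matrix and the single-level ILU below are placeholders for the combined Schwarz/ILU(0) and SOR/Jacobi preconditioners of the actual parallel solver.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        # Toy nonsymmetric sparse system standing in for one implicit step.
        n = 2000
        A = sp.diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n), format="csc")
        b = np.ones(n)

        # Incomplete LU factorization wrapped as a preconditioner operator.
        ilu = spla.spilu(A)
        M = spla.LinearOperator((n, n), matvec=ilu.solve)

        x, info = spla.gmres(A, b, M=M, restart=30)
        print("converged" if info == 0 else f"gmres info={info}")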

  6. FY17 Status Report on the Computing Systems for the Yucca Mountain Project TSPA-LA Models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Appel, Gordon John; Hadgu, Teklu; Appel, Gordon John

    Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and the knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014), Hadgu et al. (2015) and Hadgu and Appel (2016). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim, with Versions 9.60.300, 10.5, 11.1 and 12.0, was installed on the cluster head node, and its distributed processing capability was mapped onto the cluster processors. Other supporting software was tested and installed to support TSPA-type analysis on the server cluster. The current tasks included a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest version, 12.0, and addressing DLL-related issues observed in the FY16 work. The model upgrade task successfully converted the Nominal Modeling case to GoldSim Versions 11.1/12. Conversions of the rest of the TSPA models were also attempted, but program and operational difficulties precluded this. Upgrade of the remaining modeling cases and distributed processing tasks is expected to continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.

  7. Effect of fuel nitrogen and hydrogen content on emissions in hydrocarbon combustion

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.; Wolfbrandt, G.

    1981-01-01

    The effects of (1) the decreased hydrogen content and (2) the increased organic nitrogen content of coal-derived fuels on emissions of nitrogen oxides and carbon monoxide are investigated. Previous CRT experimental work in a two-stage flame tube has shown the effectiveness of rich-lean two-stage combustion in reducing fuel nitrogen conversion to nitrogen oxides. Previous theoretical work gave preliminary indications that emissions trends from the flame tube experiment could be predicted by a two-stage, well-stirred-reactor combustor model using a detailed chemical mechanism for propane oxidation and nitrogen oxide formation. Additional computations are reported, and comparisons with experimental results for two additional fuels over a wide range of operating conditions are given. The fuels used in the modeling are pure propane, a propane-toluene mixture and pure toluene, giving hydrogen contents of 18, 11 and 9 percent by weight, respectively. Fuel-bound nitrogen contents of 0.5 and 1.0 percent were used. Results are presented for oxides of nitrogen and carbon monoxide concentrations as functions of primary equivalence ratio, hydrogen content and fuel-bound nitrogen content.
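
    The rich-lean staging idea can be explored with equilibrium endpoints in Cantera using GRI-Mech 3.0, which includes propane oxidation and thermal NOx chemistry. This is only an equilibrium sketch: it does not include fuel-bound nitrogen species or the staged well-stirred-reactor kinetics that drive the results above.

        import cantera as ct

        gas = ct.Solution("gri30.yaml")   # C3H8 oxidation + NOx chemistry

        for phi in (1.4, 0.6):            # rich primary zone vs. lean stage
            gas.set_equivalence_ratio(phi, "C3H8", "O2:1.0, N2:3.76")
            gas.TP = 600.0, ct.one_atm
            gas.equilibrate("HP")         # adiabatic constant-pressure
            no_ppm = gas["NO"].X[0] * 1e6
            co_pct = gas["CO"].X[0] * 100.0
            print(f"phi={phi}: T={gas.T:.0f} K, "
                  f"NO={no_ppm:.0f} ppm, CO={co_pct:.2f}%")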

  8. Towards an Entropy Stable Spectral Element Framework for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Parsani, Matteo; Fisher, Travis C.; Nielsen, Eric J.

    2016-01-01

    Entropy stable (SS) discontinuous spectral collocation formulations of any order are developed for the compressible Navier-Stokes equations on hexahedral elements. Recent progress on two complementary efforts is presented. The first effort is a generalization of previous SS spectral collocation work that extends the applicable set of points from tensor-product Legendre-Gauss-Lobatto (LGL) points to tensor-product Legendre-Gauss (LG) points. The LG and LGL point formulations are compared on a series of test problems. Although more costly to implement, the LG operators are shown to be significantly more accurate on comparable grids. Both the LGL and LG operators are of comparable efficiency and robustness, as is demonstrated using test problems for which conventional FEM techniques suffer instability. The second effort generalizes previous SS work to include the possibility of p-refinement at non-conforming interfaces. A generalization of existing entropy stability machinery is developed to accommodate the nuances of fully multi-dimensional summation-by-parts (SBP) operators. The entropy stability of the compressible Euler equations on non-conforming interfaces is demonstrated using the newly developed LG operators and multi-dimensional interface interpolation operators.
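
    The two families of collocation points can be generated directly with NumPy using standard formulas (this is generic quadrature bookkeeping, not the paper's SBP machinery): LG points come from Gauss quadrature, while LGL points are the roots of P_n'(x) plus the endpoints, with weights 2 / (n(n+1) P_n(x_i)^2).

        import numpy as np
        from numpy.polynomial.legendre import Legendre, leggauss

        def lgl_points(n):
            """Legendre-Gauss-Lobatto nodes/weights on [-1, 1], n+1 points."""
            Pn = Legendre.basis(n)
            x = np.concatenate(([-1.0], Pn.deriv().roots(), [1.0]))
            w = 2.0 / (n * (n + 1) * Pn(x) ** 2)
            return x, w

        n = 4
        x_lg, w_lg = leggauss(n + 1)   # LG points exclude the endpoints
        x_lgl, w_lgl = lgl_points(n)
        print("LG :", np.round(x_lg, 4))
        print("LGL:", np.round(x_lgl, 4))
        print("weight sums (both 2):", w_lg.sum(), w_lgl.sum())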

  9. Refinement of determination of critical thresholds of stress-strain behaviour by using AE data: potential for evaluation of durability of natural stone

    NASA Astrophysics Data System (ADS)

    Prikryl, Richard; Lokajíček, Tomáš

    2017-04-01

    According to previous studies, evaluation of the stress-strain behaviour (in uniaxial compression) of various rocks appears to be an effective tool for predicting the resistance of natural stone to some physical weathering processes. Precise determination of the critical thresholds, specifically 'crack initiation' and 'crack damage', is a fundamental issue in this approach. In contrast to the 'crack damage' stress/strain threshold, which can easily be read from the deflection point of the volumetric strain curve, detection of 'crack initiation' is much more difficult. Besides the previously proposed mathematical processing of the axial stress-strain curve, recording acoustic emission (AE) data and processing them provides a direct measure of the various stress/strain thresholds, specifically of 'crack initiation'. This parameter is required in the subsequent computation of the energetic parameters (mechanical work) that can be stored by a material under the acting stress without the formation of new defects (microcracks). Based on our experimental data, this mechanical work appears to be proportional to the resistance of a material to the formation of mode I (tensile) cracks, which are responsible for the deterioration of the subsurface below exposed faces of natural stone.
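
    A schematic of how a crack-initiation threshold might be read from AE activity, and how the stored mechanical work up to that point could then be evaluated, is sketched below. The onset rule, its parameters, and the trapezoidal integration are illustrative assumptions rather than the authors' procedure.

        import numpy as np

        def crack_initiation_stress(stress, ae_counts, quiet_frac=0.2, k=5.0):
            """First stress at which AE activity exceeds the quiet background.

            stress    : axial stress samples (MPa), monotonic loading
            ae_counts : AE hits per sampling interval
            """
            n0 = max(int(quiet_frac * len(ae_counts)), 1)
            mu, sd = ae_counts[:n0].mean(), ae_counts[:n0].std() + 1e-12
            onset = np.argmax(ae_counts > mu + k * sd)
            return stress[onset]

        def stored_work(stress, strain, eps_ci):
            """Mechanical work per unit volume up to the initiation strain."""
            m = strain <= eps_ci
            return np.trapz(stress[m], strain[m])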

  10. Multifractal analysis of information processing in hippocampal neural ensembles during working memory under Δ9-tetrahydrocannabinol administration

    PubMed Central

    Fetterhoff, Dustin; Opris, Ioan; Simpson, Sean L.; Deadwyler, Sam A.; Hampson, Robert E.; Kraft, Robert A.

    2014-01-01

    Background Multifractal analysis quantifies the time-scale-invariant properties in data by describing the structure of variability over time. By applying this analysis to hippocampal interspike interval sequences recorded during performance of a working memory task, a measure of long-range temporal correlations and multifractal dynamics can reveal single neuron correlates of information processing. New method Wavelet leaders-based multifractal analysis (WLMA) was applied to hippocampal interspike intervals recorded during a working memory task. WLMA can be used to identify neurons likely to exhibit information processing relevant to operation of brain–computer interfaces and nonlinear neuronal models. Results Neurons involved in memory processing (“Functional Cell Types” or FCTs) showed a greater degree of multifractal firing properties than neurons without task-relevant firing characteristics. In addition, previously unidentified FCTs were revealed because multifractal analysis suggested further functional classification. The cannabinoid-type 1 receptor partial agonist, tetrahydrocannabinol (THC), selectively reduced multifractal dynamics in FCT neurons compared to non-FCT neurons. Comparison with existing methods WLMA is an objective tool for quantifying the memory-correlated complexity represented by FCTs that reveals additional information compared to classification of FCTs using traditional z-scores to identify neuronal correlates of behavioral events. Conclusion z-Score-based FCT classification provides limited information about the dynamical range of neuronal activity characterized by WLMA. Increased complexity, as measured with multifractal analysis, may be a marker of functional involvement in memory processing. The level of multifractal attributes can be used to differentially emphasize neural signals to improve computational models and algorithms underlying brain–computer interfaces. PMID:25086297
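
    For orientation, a simplified structure-function sketch over discrete-wavelet detail coefficients is shown below; genuine WLMA replaces the raw coefficients with wavelet leaders (local suprema over dyadic neighborhoods), so this is only a stand-in for the published pipeline. A multifractal interspike-interval series shows scaling exponents zeta(q) that are nonlinear in q.

        import numpy as np
        import pywt

        def scaling_exponents(isi, wavelet="db3", qs=(1.0, 2.0, 3.0)):
            """Slopes of log2 structure functions across DWT scales."""
            coeffs = pywt.wavedec(np.asarray(isi, float), wavelet)
            details = coeffs[1:][::-1]          # fine -> coarse scales
            zeta = {}
            for q in qs:
                j, logS = [], []
                for scale, d in enumerate(details, start=1):
                    if d.size >= 4:
                        j.append(scale)
                        logS.append(np.log2(np.mean(np.abs(d) ** q)))
                zeta[q] = np.polyfit(j, logS, 1)[0]
            return zeta

        rng = np.random.default_rng(0)
        print(scaling_exponents(rng.lognormal(size=4096)))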

  11. Math anxiety differentially affects WAIS-IV arithmetic performance in undergraduates.

    PubMed

    Buelow, Melissa T; Frakey, Laura L

    2013-06-01

    Previous research has shown that math anxiety can influence math performance; however, to date it is unknown whether math anxiety influences performance on working memory tasks during neuropsychological evaluation. In the present study, 172 undergraduate students completed measures of math achievement (the Math Computation subtest from the Wide Range Achievement Test-IV), math anxiety (the Math Anxiety Rating Scale-Revised), general test anxiety (from the Adult Manifest Anxiety Scale-College version), and the three Working Memory Index tasks from the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV; Digit Span [DS], Arithmetic, Letter-Number Sequencing [LNS]). Results indicated that math anxiety predicted performance on Arithmetic, but not on DS or LNS, above and beyond the effects of gender, general test anxiety, and math performance level. Our findings suggest that math anxiety can negatively influence WAIS-IV working memory subtest scores. Implications for clinical practice include the use of LNS in individuals expressing high math anxiety.

  12. VERS: a virtual environment for reconstructive surgery planning

    NASA Astrophysics Data System (ADS)

    Montgomery, Kevin N.

    1997-05-01

    The virtual environment for reconstructive surgery (VERS) project at the NASA Ames Biocomputation Center is applying virtual reality technology to aid surgeons in planning surgeries. We are working with a craniofacial surgeon at Stanford to assemble and visualize the bone structure of patients requiring reconstructive surgery as a result of either developmental abnormalities or trauma. This project is an extension of our previous work in 3D reconstruction, mesh generation, and immersive visualization. The current VR system, consisting of an SGI Onyx RE2, a FakeSpace BOOM and ImmersiveWorkbench, a Virtual Technologies CyberGlove, and an Ascension Technologies tracker, is in development and has already been used to visualize defects preoperatively. In the near future it will be used to plan the surgery more fully and to compute the projected result on the soft tissue structure. This paper presents the work in progress and details the production of a high-performance, collaborative, networked virtual environment.

  13. Efficient global fiber tracking on multi-dimensional diffusion direction maps

    NASA Astrophysics Data System (ADS)

    Klein, Jan; Köhler, Benjamin; Hahn, Horst K.

    2012-02-01

    Global fiber tracking algorithms have recently been proposed that compute results of unprecedented quality. They avoid accumulation errors through a global optimization process, at the cost of a high computation time of several hours or even days. In this paper, we introduce a novel global fiber tracking algorithm which, for the first time, globally optimizes the underlying diffusion direction map obtained from DTI or HARDI data, instead of single fiber segments. As a consequence, the number of iterations in the optimization process can be reduced drastically, by about three orders of magnitude. Furthermore, in contrast to all previous algorithms, the density of the tracked fibers can be adjusted after the optimization within a few seconds. We evaluated our method on diffusion-weighted images obtained from software phantoms, healthy volunteers, and tumor patients. We show that difficult fiber bundles, e.g., the visual pathways or tracts for different motor functions, can be determined and separated in excellent quality. Furthermore, crossing and kissing bundles are correctly resolved. On current standard hardware, a dense fiber tracking result for a whole brain can be determined in less than half an hour, which is a strong improvement over previous work.
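
    The elementary streamline step that any tracker performs on a diffusion direction map can be sketched as follows; the nearest-neighbor lookup and fixed step size are simplifications, and the paper's contribution is the global optimization of the direction map itself, not this integration step.

        import numpy as np

        def track(seed, directions, step=0.5, n_steps=2000):
            """Trace one fiber through a voxel grid of unit directions.

            directions : array (X, Y, Z, 3), principal diffusion direction
                         per voxel (zero vectors mark the background)
            seed       : starting position in voxel coordinates
            """
            pos, prev = np.asarray(seed, float), None
            path = [pos.copy()]
            upper = np.array(directions.shape[:3]) - 1
            for _ in range(n_steps):
                idx = tuple(np.clip(np.round(pos).astype(int), 0, upper))
                d = directions[idx]
                if np.linalg.norm(d) < 1e-6:
                    break                        # left the fiber mask
                if prev is not None and np.dot(d, prev) < 0.0:
                    d = -d                       # keep orientation consistent
                pos = pos + step * d
                prev = d
                path.append(pos.copy())
            return np.array(path)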

  14. Updated energy budgets for neural computation in the neocortex and cerebellum

    PubMed Central

    Howarth, Clare; Gleeson, Padraig; Attwell, David

    2012-01-01

    The brain's energy supply determines its information processing power, and generates functional imaging signals. The energy used by the different subcellular processes underlying neural information processing has been estimated previously for the grey matter of the cerebral and cerebellar cortex. However, these estimates need to be reevaluated following recent work demonstrating that action potentials in mammalian neurons are much more energy efficient than was previously thought. Using this new knowledge, this paper provides revised estimates for the energy expenditure on neural computation in a simple model of the cerebral cortex and a detailed model of the cerebellar cortex. In the cerebral cortex, most signaling energy (50%) is used on postsynaptic glutamate receptors, 21% is used on action potentials, 20% on resting potentials, 5% on presynaptic transmitter release, and 4% on transmitter recycling. In the cerebellar cortex, excitatory neurons use 75% and inhibitory neurons 25% of the signaling energy, and most energy is used on information processing by non-principal neurons: Purkinje cells use only 15% of the signaling energy. The majority of cerebellar signaling energy is spent on the maintenance of resting potentials (54%) and on postsynaptic receptors (22%), while action potentials account for only 17% of the signaling energy use. PMID:22434069

  15. Space Station Furnace Facility. Volume 3: Program cost estimate

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The approach used to estimate costs for the Space Station Furnace Facility (SSFF) is based on a computer program developed internally at Teledyne Brown Engineering (TBE). The program produces time-phased estimates of cost elements for each hardware component, based on experience with similar components. Engineering estimates of the degree of similarity or difference between the current project and the historical data are then used to adjust the computer-produced cost estimate and to fit it to the current project Work Breakdown Structure (WBS). The SSFF concept as presented at the Requirements Definition Review (RDR) was used as the base configuration for the cost estimate. The program incorporates data on the costs of previous projects and the allocation of those costs to the components of one of three time-phased generic WBSs. Input consists of a list of similar components for which cost data exist; the number of interfaces, with their type and complexity; identification of the extent to which previous designs are applicable; and programmatic data concerning schedules and miscellaneous items (travel, off-site assignments). Output is program cost in labor hours and material dollars for each component, broken down by generic WBS task and program schedule phase.

  16. Atmospheric energetics as related to cyclogenesis over the eastern United States. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    West, P. W.

    1973-01-01

    A method is presented to investigate the atmospheric energy budget as related to cyclogenesis. Energy budget equations are developed that are shown to be advantageous because the individual terms represent basic physical processes which produce changes in atmospheric energy, and because the equations provide a means to study the interaction of the cyclone with the larger scales of motion. The work presented represents an extension of previous studies because all of the terms of the energy budget equations were evaluated throughout the development period of the cyclone. Computations are carried out over a limited atmospheric volume which encompasses the cyclone, and boundary fluxes of energy that were ignored in most previous studies are evaluated. Two examples of cyclogenesis over the eastern United States were chosen for study. One of the cases (1-4 November 1966) represented an example of vigorous development, while the development in the other case (5-8 December 1969) was more modest. Objectively analyzed data were used in the evaluation of the energy budget terms in order to minimize computational errors, and an objective analysis scheme is described that ensures that all of the resolution contained in the rawinsonde observations is incorporated in the analyses.

  17. Language networks associated with computerized semantic indices.

    PubMed

    Pakhomov, Serguei V S; Jones, David T; Knopman, David S

    2015-01-01

    Tests of generative semantic verbal fluency are widely used to study the organization and representation of concepts in the human brain. Previous studies demonstrated that clustering and switching behavior during verbal fluency tasks is supported by multiple brain mechanisms associated with semantic memory and executive control. Previous work relied on manual assessments of semantic relatedness between words and manual grouping of words into semantic clusters. We investigated a computational linguistic approach to measuring the strength of semantic relatedness between words, based on latent semantic analysis of word co-occurrences in a subset of a large online encyclopedia. We computed semantic clustering indices and compared them to brain network connectivity measures obtained with task-free fMRI in a sample consisting of healthy participants and those differentially affected by cognitive impairment. We found that the semantic clustering indices were associated with brain network connectivity in distinct areas, including fronto-temporal, fronto-parietal and fusiform gyrus regions. This study shows that computerized semantic indices complement traditional assessments of verbal fluency to provide a more complete account of the relationship between the brain and the verbal behavior involved in the organization and retrieval of lexical information from memory. Copyright © 2014 Elsevier Inc. All rights reserved.
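
    The relatedness measure at the core of such an analysis can be approximated with scikit-learn on a toy corpus: build a term-document matrix, reduce it with truncated SVD (the LSA step), and compare terms by cosine similarity. The corpus, dimensionality, and use of unscaled SVD components are illustrative simplifications of the encyclopedia-based analysis.

        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD
        from sklearn.metrics.pairwise import cosine_similarity

        docs = ["the cat chased the mouse", "dogs and cats are pets",
                "the mouse ate cheese", "pet owners walk their dogs"]

        tfidf = TfidfVectorizer()
        X = tfidf.fit_transform(docs)              # documents x terms
        svd = TruncatedSVD(n_components=2, random_state=0).fit(X)
        term_vecs = svd.components_.T              # latent vector per term
        vocab = tfidf.vocabulary_

        def relatedness(w1, w2):
            v1, v2 = term_vecs[vocab[w1]], term_vecs[vocab[w2]]
            return cosine_similarity([v1], [v2])[0, 0]

        print(relatedness("cat", "mouse"), relatedness("cat", "pets"))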

  18. On the Importance of Spatial Resolution for Flap Side Edge Noise Prediction

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.; Khorrami, Mehdi R.

    2017-01-01

    A spatial resolution study of the flap tip flow, and of its effects on the farfield noise signature, of an 18%-scale, semispan Gulfstream aircraft model is presented. The NASA FUN3D unstructured, compressible Navier-Stokes solver was used to perform highly resolved, time-dependent, detached eddy simulations of the flow field associated with the flap of this high-fidelity aircraft model. Following our previous work on the same model, the latest computations were undertaken to determine the causes of deficiencies observed in our earlier predictions of the steady and unsteady surface pressures and the off-surface flow field at the flap tip regions, in particular the outboard tip area, where the presence of a cavity at the side edge produces very complex flow features and interactions. The present results show gradual improvement in the steady loading at the outboard flap edge region with increasing spatial resolution, yielding more accurate fluctuating surface pressures, off-surface flow field, and farfield noise, with improved high-frequency content, when compared with wind tunnel measurements. The spatial resolution trends observed in the present study demonstrate that the deficiencies reported in our previous computations are mostly caused by inadequate spatial resolution and are not related to the turbulence model.

  19. Deaf individuals who work with computers present a high level of visual attention.

    PubMed

    Ribeiro, Paula Vieira; Ribas, Valdenilson Ribeiro; Ribas, Renata de Melo Guerra; de Melo, Teresinha de Jesus Oliveira Guimarães; Marinho, Carlos Antonio de Sá; Silva, Kátia Karina do Monte; de Albuquerque, Elizabete Elias; Ribas, Valéria Ribeiro; de Lima, Renata Mirelly Silva; Santos, Tuthcha Sandrelle Botelho Tavares

    2011-01-01

    Some studies in the literature indicate that deaf individuals seem to develop a higher level of attention and concentration while constructing different ways of communicating. The aim of this study was to evaluate the level of attention in individuals deaf from birth who worked with computers. A total of 161 individuals in the 18-25 age group were assessed. Of these, 40 were congenitally deaf individuals who worked with computers; 42 were deaf individuals who did not work and neither knew how to use nor used computers (Control 1); 39 were individuals with normal hearing who did not work and neither knew how to use nor used computers (Control 2); and 40 were individuals with normal hearing who worked with computers (Control 3). The group of subjects deaf from birth who worked with computers (IDWC) presented a higher level of focused attention, sustained attention, mental manipulation capacity and resistance to interference compared with the control groups. This study highlights the relevance of sensory experience to cognitive processing.

  20. SD-CAS: Spin Dynamics by Computer Algebra System.

    PubMed

    Filip, Xenia; Filip, Claudiu

    2010-11-01

    A computer algebra tool for describing the Liouville-space quantum evolution of nuclear 1/2-spins is introduced and implemented within a computational framework named Spin Dynamics by Computer Algebra System (SD-CAS). A distinctive feature compared with numerical and previous computer algebra approaches to solving spin dynamics problems is that no matrix representation for the spin operators is used in SD-CAS, which gives the performed computations a fully symbolic character. Spin correlations are stored in SD-CAS as four-entry nested lists whose size increases linearly with the number of spins in the system, and they are easily mapped into analytical expressions in terms of spin operator products. For the spin correlations so defined, a set of specialized functions and procedures is introduced that is essential for implementing basic spin algebra operations, such as spin operator products, commutators, and scalar products. These provide results in an abstract algebraic form; specific procedures to evaluate such symbolic expressions quantitatively with respect to the involved spin interaction parameters and experimental conditions are also discussed. Although the main focus of the present work is on laying the foundation for symbolic spin dynamics computation in NMR based on a non-matrix formalism, practical aspects are also considered throughout the theoretical development. In particular, specific SD-CAS routines have been implemented using the YACAS computer algebra package (http://yacas.sourceforge.net), and their functionality was demonstrated on a few illustrative examples. Copyright © 2010 Elsevier Inc. All rights reserved.
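
    The flavor of matrix-free symbolic spin algebra can be reproduced with SymPy's quantum spin module (SD-CAS itself is built on YACAS with its own nested-list representation, so this is an analogy rather than the package's API):

        from sympy.physics.quantum import Commutator
        from sympy.physics.quantum.spin import Jx, Jy, Jz

        # Angular momentum algebra handled symbolically, with no matrix
        # representation: [Jx, Jy] = i*hbar*Jz.
        print(Commutator(Jx, Jy).doit())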

  1. The relationship between psychosocial work factors, work stress and computer-related musculoskeletal discomforts among computer users in Malaysia.

    PubMed

    Zakerian, Seyed Abolfazl; Subramaniam, Indra Devi

    2009-01-01

    Increasing numbers of workers use computers for work, so, especially among office workers, there is a high risk of musculoskeletal discomfort. This study examined the associations among three factors: psychosocial work factors, work stress and musculoskeletal discomfort. These associations were examined via a questionnaire survey of 30 office workers (at a university in Malaysia) whose jobs required extensive use of computers. The questionnaire was distributed and collected daily for 20 days. While the results indicated a significant relationship among psychosocial work factors, work stress and musculoskeletal discomfort, 3 psychosocial work factors were found to be more important than the others for both work stress and musculoskeletal discomfort: job demands, negative social interaction and computer-related problems. To develop the study design further, it is necessary to investigate industrial and other workers who have experienced musculoskeletal discomfort and work stress.

  2. Combinatorial pattern discovery approach for the folding trajectory analysis of a beta-hairpin.

    PubMed

    Parida, Laxmi; Zhou, Ruhong

    2005-06-01

    The study of protein folding mechanisms continues to be one of the most challenging problems in computational biology. Currently, the protein folding mechanism is often characterized by calculating the free energy landscape versus various reaction coordinates, such as the fraction of native contacts, the radius of gyration, the RMSD from the native structure, and so on. In this paper, we present a combinatorial pattern discovery approach toward understanding the global state changes during the folding process. This is a first step toward an unsupervised (and perhaps eventually automated) approach to the identification of global states. The approach is based on computing biclusters (or patterned clusters): each cluster is a combination of various reaction coordinates, and its signature pattern facilitates the computation of a Z-score for the cluster. For this discovery process, we present an algorithm of time complexity O((N + nm) log n), where N is the size of the output patterns and (n x m) is the size of the input, with n time frames and m reaction coordinates. To date, this is the best time complexity for this problem. We then apply this to a beta-hairpin folding trajectory and demonstrate that this approach extracts crucial information about protein folding intermediate states and mechanism. We make three observations about the approach: (1) The method recovers states previously obtained by visually analyzing free energy surfaces. (2) It also succeeds in extracting meaningful patterns and structures that had been overlooked in previous works, which provides a better understanding of the folding mechanism of the beta-hairpin. These new patterns also interconnect various states in existing free energy surfaces versus different reaction coordinates. (3) The approach does not require calculating the free energy values, yet it offers an analysis comparable to, and sometimes better than, the methods that use free energy landscapes, thus validating the choice of reaction coordinates. (An abstract version of this work was presented at the 2005 Asia Pacific Bioinformatics Conference [1].)

  3. Computational Study of Hypersonic Boundary Layer Stability on Cones

    NASA Astrophysics Data System (ADS)

    Gronvall, Joel Edwin

    Due to the complex nature of boundary layer laminar-turbulent transition in hypersonic flows and its effect on the design of re-entry vehicles, there remains considerable interest in developing a deeper understanding of the underlying physics. To that end, the use of experimental observations and computational analysis in a complementary manner provides the greatest insight. It is the intent of this work to provide such an analysis for two ongoing experimental investigations. The first focuses on the hypersonic boundary layer transition experiments for a slender cone being conducted at JAXA's free-piston shock tunnel HIEST facility; of particular interest are the measurements of disturbance frequencies associated with transition at high enthalpies. The computational analysis provided for these cases included two-dimensional CFD mean flow solutions for use in boundary layer stability analyses. The disturbances in the boundary layer were calculated using the linear parabolized stability equations. Estimates of transition locations, comparisons of measured and computed disturbance frequencies, and a determination of the type of disturbances present were made. It was found that, for the cases where the disturbances were measured at locations where the flow was still laminar but nearly transitional, the highly amplified disturbances showed reasonable agreement with the computations. Additionally, an investigation of the effects of finite-rate chemistry and vibrational excitation on flows over cones was conducted for a set of theoretical operating conditions at the HIEST facility. The second study focuses on transition in three-dimensional hypersonic boundary layers, for which the cone-at-angle-of-attack experiments being conducted at the Boeing/AFOSR Mach-6 quiet tunnel at Purdue University were examined. Specifically, the effect of surface roughness on the development of the stationary crossflow instability is investigated in this work. One standard mean flow solution and two direct numerical simulations of a slender cone at an angle of attack were computed. The direct numerical simulations included a digitally filtered, randomly distributed surface roughness and were performed using a high-order, low-dissipation numerical scheme on appropriately resolved grids. Comparisons with experimental observations showed excellent qualitative agreement. Comparisons with similar previous computational work were also made and showed agreement in the wavenumber range of the most unstable crossflow modes.

  4. A Robust Absorbing Boundary Condition for Compressible Flows

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Jorgenson, Philip C. E.

    2005-01-01

    An absorbing non-reflecting boundary condition (NRBC) for practical computations in fluid dynamics and aeroacoustics is presented with theoretical proof. This paper is a continuation and improvement of a previous paper by the author. The absorbing NRBC technique is based on a first principle of non-reflection, which contains the essential physics that a plane wave solution of the Euler equations remains intact across the boundary. The technique is shown theoretically to work for a large class of finite volume approaches. When combined with the hyperbolic conservation laws, the NRBC is simple, robust and truly multi-dimensional; no additional implementation is needed beyond the prescribed physical boundary conditions. Several numerical examples in multi-dimensional spaces, using two different finite volume schemes, are given to demonstrate its robustness in practical computations. Limitations and remedies of the technique are also discussed.

  5. Temperature dependence of electron impact ionization coefficient in bulk silicon

    NASA Astrophysics Data System (ADS)

    Ahmed, Mowfaq Jalil

    2017-09-01

    This work presents a modified procedure to compute the electron impact ionization coefficient of silicon for temperatures between 77 and 800 K and electric fields ranging from 70 to 400 kV/cm. The ionization coefficients are computed from the electron momentum distribution function obtained by solving the Boltzmann transport equation (BTE). The solution is obtained by combining a Legendre polynomial expansion with the BTE, and the resulting equations are solved by a difference-differential method using MATLAB®. Six (X) equivalent ellipsoidal and non-parabolic valleys of the conduction band of silicon are taken into account. Concerning the scattering mechanisms, intravalley acoustic scattering, non-polar optical scattering and impact ionization (II) scattering are taken into consideration. This investigation showed that the ionization coefficients decrease with increasing temperature. The overall results are in good agreement with previously reported experimental and theoretical data, predominantly at high electric fields.
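
    For orientation, the field and temperature trends can be mimicked with the empirical Chynoweth form alpha = a exp(-b/E) rather than the BTE solution used above. The room-temperature a and b below are the standard van Overstraeten-de Man electron values for silicon; the linear temperature increase of b is an assumed illustrative choice that reproduces the reported decrease of alpha with temperature.

        import numpy as np

        def alpha_electron(E, T, a0=7.03e5, b0=1.231e6, T0=300.0, c=6.0e2):
            """Chynoweth-law ionization coefficient (1/cm); E in V/cm.

            b grows with temperature (slope c is an assumed value), so
            alpha falls as phonon scattering increases with T.
            """
            b = b0 + c * (T - T0)
            return a0 * np.exp(-b / E)

        for T in (77.0, 300.0, 800.0):
            print(f"T={T:.0f} K: alpha={alpha_electron(2.0e5, T):.3e} /cm")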

  6. Acetylcholine-modulated plasticity in reward-driven navigation: a computational study.

    PubMed

    Zannone, Sara; Brzosko, Zuzanna; Paulsen, Ole; Clopath, Claudia

    2018-06-21

    Neuromodulation plays a fundamental role in the acquisition of new behaviours. In previous experimental work, we showed that acetylcholine biases hippocampal synaptic plasticity towards depression, and the subsequent application of dopamine can retroactively convert depression into potentiation. We also demonstrated that incorporating this sequentially neuromodulated Spike-Timing-Dependent Plasticity (STDP) rule in a network model of navigation yields effective learning of changing reward locations. Here, we employ computational modelling to further characterize the effects of cholinergic depression on behaviour. We find that acetylcholine, by allowing learning from negative outcomes, enhances exploration over the action space. We show that this results in a variety of effects, depending on the structure of the model, the environment and the task. Interestingly, sequentially neuromodulated STDP also yields flexible learning, surpassing the performance of other reward-modulated plasticity rules.
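
    A schematic of the sequentially neuromodulated STDP rule is sketched below; the exponential windows and parameter values are illustrative assumptions, not the fitted rule from the cited experiments.

        import numpy as np

        def weight_update(dt, dopamine_later, a_plus=0.01, a_minus=0.012,
                          tau=20.0):
            """Schematic ACh/DA-modulated STDP weight change.

            dt             : t_post - t_pre (ms)
            dopamine_later : True if dopamine arrives within the
                             eligibility window after the pairing
            Under acetylcholine both branches are depressing; later
            dopamine retroactively converts depression to potentiation.
            """
            window = np.exp(-abs(dt) / tau)
            if dopamine_later:
                return +a_plus * window
            return -a_minus * window

        print(weight_update(10.0, False), weight_update(10.0, True))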

  7. Empirical Relationships Between Optical Properties and Equivalent Diameters of Fractal Soot Aggregates at 550 nm Wavelength.

    NASA Technical Reports Server (NTRS)

    Pandey, Apoorva; Chakrabarty, Rajan K.; Liu, Li; Mishchenko, Michael I.

    2015-01-01

    Soot aggregates (SAs), fractal clusters of small, spherical carbonaceous monomers, modulate the incoming visible solar radiation and contribute significantly to climate forcing. Experimentalists and climate modelers typically assume a spherical morphology for SAs when computing their optical properties, causing significant errors. Here, we calculate the optical properties of freshly generated (fractal dimension Df = 1.8) and aged (Df = 2.6) SAs at 550 nm wavelength using the numerically exact superposition T-matrix method. These properties are expressed as functions of equivalent aerosol diameters as measured by contemporary aerosol instruments. This work improves upon previous efforts, wherein SA optical properties were computed as a function of monomer number, rendering them unusable in practical applications. Future research will address the sensitivity of the reported empirical relationships to variations in the refractive index, fractal prefactor, and monomer overlap of SAs.
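
    The link between aggregate morphology and size that such relationships encode starts from the standard fractal scaling law N = k0 (Rg/a)^Df; a minimal sketch with an assumed prefactor:

        import numpy as np

        def monomer_number(r_g, a, d_f, k0=1.3):
            """Fractal scaling N = k0 * (R_g / a)**d_f for a soot aggregate.

            r_g : radius of gyration; a : monomer radius;
            d_f : fractal dimension (1.8 fresh, 2.6 aged above);
            k0  : assumed typical prefactor.
            """
            return k0 * (r_g / a) ** d_f

        print(monomer_number(150e-9, 15e-9, 1.8),
              monomer_number(150e-9, 15e-9, 2.6))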

  8. Discrete Adjoint-Based Design Optimization of Unsteady Turbulent Flows on Dynamic Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Diskin, Boris; Yamaleev, Nail K.

    2009-01-01

    An adjoint-based methodology for design optimization of unsteady turbulent flows on dynamic unstructured grids is described. The implementation relies on an existing unsteady three-dimensional unstructured grid solver capable of dynamic mesh simulations and discrete adjoint capabilities previously developed for steady flows. The discrete equations for the primal and adjoint systems are presented for the backward-difference family of time-integration schemes on both static and dynamic grids. The consistency of sensitivity derivatives is established via comparisons with complex-variable computations. The current work is believed to be the first verified implementation of an adjoint-based optimization methodology for the true time-dependent formulation of the Navier-Stokes equations in a practical computational code. Large-scale shape optimizations are demonstrated for turbulent flows over a tiltrotor geometry and a simulated aeroelastic motion of a fighter jet.
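
    The complex-variable comparison mentioned above is commonly done with the complex-step derivative, which avoids subtractive cancellation; a generic sketch (not the FUN3D implementation):

        import numpy as np

        def complex_step(f, x, h=1e-30):
            """Derivative estimate Im[f(x + i*h)]/h, accurate to O(h^2)."""
            return np.imag(f(x + 1j * h)) / h

        f = lambda x: np.exp(x) * np.sin(x)
        exact = np.exp(1.0) * (np.sin(1.0) + np.cos(1.0))
        print(complex_step(f, 1.0), exact)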

  9. Differential equations as a tool for community identification.

    PubMed

    Krawczyk, Małgorzata J

    2008-06-01

    We consider the task of identifying a cluster structure in random networks. The results of two methods are presented: (i) the Newman algorithm [M. E. J. Newman and M. Girvan, Phys. Rev. E 69, 026113 (2004)]; and (ii) our method based on differential equations. A series of computer experiments is performed to check whether, in applying these methods, we are able to determine the structure of the network. The trial networks consist initially of well-defined clusters and are disturbed by introducing noise into their connectivity matrices. Further, we show that an improvement over the previous version of our method is possible with an appropriate choice of the threshold parameter beta. With this change, the results obtained by the two methods are similar, and our method works better, for all the computer experiments we have done.
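
    A sketch in the spirit of the differential-equation approach (an illustration, not the published algorithm): diffusively coupled node states equalize quickly inside well-connected clusters, so after integrating for a while, gaps in the sorted final states larger than a threshold set by beta mark community boundaries.

        import numpy as np

        def communities_by_ode(A, beta=0.2, t_end=2.0, steps=500, seed=0):
            """Label nodes by grouping converged states of dx/dt = -L x."""
            rng = np.random.default_rng(seed)
            n = A.shape[0]
            x = rng.random(n)
            L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
            dt = t_end / steps
            for _ in range(steps):                  # forward-Euler steps
                x = x - dt * (L @ x)
            order = np.argsort(x)
            gap = beta * (x[order[-1]] - x[order[0]] + 1e-12)
            labels = np.empty(n, dtype=int)
            current = 0
            for i, j in zip(order, order[1:]):
                labels[i] = current
                if x[j] - x[i] > gap:
                    current += 1                    # start a new community
            labels[order[-1]] = current
            return labels

        A = np.zeros((6, 6))
        for i, j in [(0,1), (0,2), (1,2), (3,4), (3,5), (4,5), (2,3)]:
            A[i, j] = A[j, i] = 1.0
        print(communities_by_ode(A))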

  10. Experimental and computational investigation of lateral gauge response in polycarbonate

    NASA Astrophysics Data System (ADS)

    Eliot, Jim; Harris, Ernst; Hazell, Paul; Appleby-Thomas, Gareth; Winter, Ronald; Wood, David; Owen, Gareth

    2011-06-01

    Polycarbonate's use in personal armour systems means that its high strain-rate response has been extensively studied. Interestingly, embedded lateral manganin stress gauges in polycarbonate have shown gradients behind incident shocks, suggestive of increasing shear strength. However, such gauges need to be embedded in a central (typically epoxy) interlayer, an inherently invasive approach. Recently, research has suggested that in such metal systems the interlayer/target impedance may contribute to the observed gradients in lateral stress. Here, experimental T-gauge (Vishay Micro-Measurements® type J2M-SS-580SF-025) traces from polycarbonate targets are compared to computational simulations. This work extends previous efforts such that similar impedance exists at the interlayer/matrix (target) interface. Further, experiments and simulations are presented investigating the effects of a "dry joint" in polycarbonate, in which no encapsulating medium is employed.

  11. Measurement of breast volume using body scan technology (computer-aided anthropometry).

    PubMed

    Veitch, Daisy; Burford, Karen; Dench, Phil; Dean, Nicola; Griffin, Philip

    2012-01-01

    Assessment of breast volume is an important tool for preoperative planning in various breast surgeries and other applications, such as bra development. Accurate assessment can improve the consistency and quality of surgery outcomes. This study outlines a non-invasive method to measure breast volume using a whole body 3D laser surface anatomy scanner, the Cyberware WBX. It expands on a previous publication where this method was validated against patients undergoing mastectomy. It specifically outlines and expands the computer-aided anthropometric (CAA) method for extracting breast volumes in a non-invasive way from patients enrolled in a breast reduction study at Flinders Medical Centre, South Australia. This step-by-step description allows others to replicate this work and provides an additional tool to assist them in their own clinical practice and development of designs.

  12. Reply to comment by Melsen et al. on "Most computational hydrology is not reproducible, so is it really science?"

    NASA Astrophysics Data System (ADS)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei; Duffy, Chris; Arheimer, Berit

    2017-03-01

    In this article, we reply to a comment made by Melsen et al. [2017] on our previous commentary regarding reproducibility in computational hydrology. Re-executing someone else's code and workflow to derive a set of published results does not by itself constitute reproducibility. However, it forms a key part of the process: it demonstrates that all the degrees of freedom and choices made by the scientist in running the experiment are contained within that code and workflow. This does not only allow us to build and extend directly from the original work, but with full knowledge of decisions made in the original experimental setup, we can then focus our attention to the degrees of freedom of interest: those that occur in hydrological systems that are ultimately our subject of study.

  13. Interactions of Earth's atmospheric oxygen and fuel moisture in smouldering wildfires.

    PubMed

    Huang, Xinyan; Rein, Guillermo

    2016-12-01

    Vegetation, wildfire and atmospheric oxygen on Earth have changed throughout geological time, and are dependent on each other, determining the evolution of ecosystems, the carbon cycle, and the climate, as found in the fossil record. Previous work in the literature has studied only flaming wildfires, but smouldering is the most persistent type of fire phenomenon, consuming large amounts of biomass. In this study, the dependence of smouldering fires in peatlands, the largest wildfires on Earth, on atmospheric oxygen is investigated. A physics-based computational model of reactive porous media for peat fires, previously validated against experiments, is used. Simulations are conducted over wide ranges of atmospheric oxygen concentration and fuel moisture content to find thresholds for ignition and extinction. Results show that the predicted rate of spread increases in oxygen-rich atmospheres, while it decreases over wetter fuels. A novel nonlinear relationship between critical oxygen and critical moisture is found. More importantly, we show that, compared to previous work on flaming fires, smouldering fires can be ignited and sustained at substantially higher moisture contents (up to 100% MC vs. 40% at the 21% oxygen level) and lower oxygen concentrations (down to 13% vs. 16%). This defines a new atmospheric oxygen threshold for wildfires (13%), even lower than previously thought in the Earth sciences (16%). This finding should lead to a reinterpretation of how the char remains observed in the fossil record constrain the lower concentration of oxygen in Earth's atmosphere on geological timescales. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Final Report: Subcontract B623868 Algebraic Multigrid solvers for coupled PDE systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brannick, J.

    The Pennsylvania State University (“Subcontractor”) continued to work on the design of algebraic multigrid solvers for coupled systems of partial differential equations (PDEs) arising in the numerical modeling of various applications, with a main focus on solving the Dirac equation arising in Quantum Chromodynamics (QCD). The goal of the proposed work was to develop combined geometric and algebraic multilevel solvers that are robust and lend themselves to efficient implementation on massively parallel heterogeneous computers for these QCD systems. The research built on previous work, focusing on the following three topics: (1) the development of parallel full-multigrid (PFMG) and non-Galerkin coarsening techniques in this framework for solving the Wilson Dirac system; (2) the use of these same Wilson MG solvers for preconditioning the Overlap and Domain Wall formulations of the Dirac equation; and (3) the design and analysis of algebraic coarsening algorithms for coupled PDE systems including the Stokes equation, the Maxwell equations and linear elasticity.
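
    For reference, the multilevel idea underlying these solvers can be shown in the simplest possible setting, a geometric two-grid cycle for the 1D Poisson model problem; algebraic multigrid replaces the fixed coarsening and interpolation below with operators constructed from the matrix itself, and the QCD solvers discussed above are far more elaborate.

        import numpy as np

        def two_grid_cycle(u, f, h, nu=3, omega=2.0 / 3.0):
            """One two-grid cycle for -u'' = f, zero Dirichlet boundaries.

            u, f hold interior values only; u.size must be odd so the
            coarse points nest inside the fine grid.
            """
            def smooth(v, steps):                  # weighted Jacobi
                for _ in range(steps):
                    w = np.empty_like(v)
                    w[1:-1] = 0.5 * (v[:-2] + v[2:] + h * h * f[1:-1])
                    w[0] = 0.5 * (v[1] + h * h * f[0])
                    w[-1] = 0.5 * (v[-2] + h * h * f[-1])
                    v = (1.0 - omega) * v + omega * w
                return v

            u = smooth(u, nu)                      # pre-smoothing
            r = f + np.convolve(u, [1.0, -2.0, 1.0], "same") / (h * h)
            rc = 0.25 * r[:-2:2] + 0.5 * r[1::2] + 0.25 * r[2::2]
            nc = rc.size                           # direct coarse solve
            Ac = (2.0 * np.eye(nc) - np.eye(nc, k=1)
                  - np.eye(nc, k=-1)) / (2.0 * h) ** 2
            ec = np.linalg.solve(Ac, rc)
            e = np.zeros_like(u)                   # linear prolongation
            e[1::2] = ec
            e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
            e[0], e[-1] = 0.5 * ec[0], 0.5 * ec[-1]
            return smooth(u + e, nu)               # post-smoothing

        n, h = 63, 1.0 / 64.0
        f, u = np.ones(n), np.zeros(n)
        for k in range(8):
            u = two_grid_cycle(u, f, h)
            r = f + np.convolve(u, [1.0, -2.0, 1.0], "same") / (h * h)
            print(k, np.max(np.abs(r)))            # residual drops fast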

  15. COMIS -- an international multizone air-flow and contaminant transport model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feustel, H.E.

    1998-08-01

    A number of interzonal models have been developed to calculate air flows and pollutant transport mechanisms in both single-zone and multizone buildings. A recent development in multizone air-flow modeling, the COMIS model, has a number of capabilities that go beyond previous models: COMIS can be used either as a stand-alone air-flow model with its own input and output features or as an infiltration module for thermal building simulation programs. COMIS was designed during a 12-month workshop at Lawrence Berkeley National Laboratory (LBNL) in 1988-89. In 1990, the Executive Committee of the International Energy Agency's Energy Conservation in Buildings and Community Systems program created a working group on multizone air-flow modeling, which continued work on COMIS. The group's objectives were to study the physical phenomena causing air flow and pollutant (e.g., moisture) transport in multizone buildings, to develop numerical modules to be integrated into the previously designed multizone air-flow modeling system, and to evaluate the computer code. The working group, supported by nine nations, officially finished in late 1997 with the release of IISiBat/COMIS 3.0, which contains the documented simulation program COMIS, the user interface IISiBat, and reports describing the evaluation exercise.

  16. Relations between work and upper extremity musculoskeletal problems (UEMSP) and the moderating role of psychosocial work factors on the relation between computer work and UEMSP.

    PubMed

    Nicolakakis, Nektaria; Stock, Susan R; Abrahamowicz, Michal; Kline, Rex; Messing, Karen

    2017-11-01

    Computer work has been identified as a risk factor for upper extremity musculoskeletal problems (UEMSP). But few studies have investigated how psychosocial and organizational work factors affect this relation, nor have gender differences in the relation between UEMSP and these work factors been studied. We sought to estimate: (1) the association between UEMSP and a range of physical, psychosocial and organizational work exposures, including the duration of computer work, and (2) the moderating effect of psychosocial work exposures on the relation between computer work and UEMSP. Using 2007-2008 Québec survey data on 2478 workers, we carried out gender-stratified multivariable logistic regression modeling and two-way interaction analyses. In both genders, odds of UEMSP were higher with exposure to high physical work demands and emotionally demanding work. Additionally among women, UEMSP were associated with the duration of occupational computer exposure, sexual harassment, tense situations when dealing with clients, high quantitative demands and lack of prospects for promotion, and among men, with low coworker support, episodes of unemployment, low job security and contradictory work demands. Among women, the effect of computer work on UEMSP was considerably increased in the presence of emotionally demanding work, and may also be moderated by low recognition at work, contradictory work demands, and low supervisor support. These results suggest that the relations between UEMSP and computer work are moderated by psychosocial work exposures and that the relations between working conditions and UEMSP are somewhat different for each gender, highlighting the complexity of these relations and the importance of considering gender.

  17. Identifying the optimal segmentors for mass classification in mammograms

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Tomuro, Noriko; Furst, Jacob; Raicu, Daniela S.

    2015-03-01

    In this paper, we present the results of our investigation into identifying the optimal segmentor(s) from an ensemble of weak segmentors used in a Computer-Aided Diagnosis (CADx) system that classifies suspicious masses in mammograms as benign or malignant. This is an extension of our previous work, where we applied image enhancement techniques with various parameter settings to each suspicious mass (region of interest, ROI) to obtain several enhanced images, then applied segmentation to each image to obtain several contours of a given mass. Each segmentation in this ensemble is essentially a "weak segmentor", because no single segmentation can produce the optimal result for all images. After shape features are computed from the segmented contours, the final classification model is built using logistic regression. The work in this paper focuses on identifying the optimal segmentor(s) from the ensemble mix of weak segmentors. For our purpose, the optimal segmentors are those in the ensemble that contribute the most to the overall classification, rather than the ones that produce the highest-precision segmentations. To measure the segmentors' contribution, we examined the weights on the features in the derived logistic regression model and computed the average feature weight for each segmentor. The results showed that, while in general the segmentors with higher segmentation success rates had higher feature weights, some segmentors with lower segmentation rates had high classification feature weights as well.

  18. MetaboTools: A comprehensive toolbox for analysis of genome-scale metabolic models

    DOE PAGES

    Aurich, Maike K.; Fleming, Ronan M. T.; Thiele, Ines

    2016-08-03

    Metabolomic data sets provide a direct read-out of cellular phenotypes and are increasingly generated to study biological questions. Previous work, by us and others, revealed the potential of analyzing extracellular metabolomic data in the context of a metabolic model using constraint-based modeling. With the MetaboTools, we make our methods available to the broader scientific community. The MetaboTools consist of a protocol, a toolbox, and tutorials for two use cases. The protocol describes, in a step-wise manner, the workflow of data integration and computational analysis. The MetaboTools comprise the Matlab code required to complete the workflow described in the protocol. Tutorials explain the computational steps for the integration of two different data sets and demonstrate a comprehensive set of methods for the computational analysis of metabolic models and their stratification into different phenotypes. The presented workflow supports integrative analysis of multiple omics data sets. Importantly, all analysis tools can be applied to metabolic models without performing the entire workflow. Taken together, the MetaboTools constitute a comprehensive guide to the intra-model analysis of extracellular metabolomic data from microbial, plant, or human cells. In conclusion, this computational modeling resource offers a broad set of computational analysis tools for a wide biomedical and non-biomedical research community.
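
    The constraint-based core that such toolboxes build on is flux balance analysis: a linear program over steady-state fluxes, S v = 0, with bounds and a biomass objective. A toy example with SciPy on a hypothetical three-reaction network (MetaboTools itself is Matlab code built on the COBRA Toolbox):

        import numpy as np
        from scipy.optimize import linprog

        # Toy network: uptake v1 (-> A), conversion v2 (A -> B),
        # biomass v3 (B ->). Rows of S are metabolites A and B.
        S = np.array([[1.0, -1.0,  0.0],
                      [0.0,  1.0, -1.0]])
        bounds = [(0.0, 10.0), (0.0, 10.0), (0.0, None)]  # uptake cap 10

        # Maximize v3: linprog minimizes, so negate the objective.
        res = linprog(c=[0.0, 0.0, -1.0], A_eq=S, b_eq=np.zeros(2),
                      bounds=bounds)
        print("optimal biomass flux:", -res.fun, "fluxes:", res.x)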

  19. Computational analysis of high resolution unsteady airloads for rotor aeroacoustics

    NASA Technical Reports Server (NTRS)

    Quackenbush, Todd R.; Lam, C.-M. Gordon; Wachspress, Daniel A.; Bliss, Donald B.

    1994-01-01

    The study of helicopter aerodynamic loading for acoustics applications requires the application of efficient yet accurate simulations of the velocity field induced by the rotor's vortex wake. This report summarizes work to date on the development of such an analysis, which builds on the Constant Vorticity Contour (CVC) free wake model, previously implemented for the study of vibratory loading in the RotorCRAFT computer code. The present effort has focused on implementation of an airload reconstruction approach that computes high resolution airload solutions of rotor/rotor-wake interactions required for acoustics computations. Supplementary efforts on the development of improved vortex core modeling, unsteady aerodynamic effects, higher spatial resolution of rotor loading, and fast vortex wake implementations have substantially enhanced the capabilities of the resulting software, denoted RotorCRAFT/AA (AeroAcoustics). Results of validation calculations using recently acquired model rotor data show that by employing airload reconstruction it is possible to apply the CVC wake analysis with temporal and spatial resolution suitable for acoustics applications while reducing the computation time required by one to two orders of magnitude relative to that required by direct calculations. Promising correlation with this body of airload and noise data has been obtained for a variety of rotor configurations and operating conditions.

  20. Simulating coupled dynamics of a rigid-flexible multibody system and compressible fluid

    NASA Astrophysics Data System (ADS)

    Hu, Wei; Tian, Qiang; Hu, HaiYan

    2018-04-01

    As a continuation of the authors' previous studies, a new parallel computation approach is proposed to simulate the coupled dynamics of a rigid-flexible multibody system and a compressible fluid. In this approach, the smoothed particle hydrodynamics (SPH) method is used to model the compressible fluid, while the natural coordinate formulation (NCF) and the absolute nodal coordinate formulation (ANCF) are used to model the rigid and flexible bodies, respectively. In order to model the compressible fluid properly and efficiently via the SPH method, three measures are taken. The first is to use a Riemann solver to cope with the fluid compressibility, the second is to define virtual SPH particles to model the dynamic interaction between the fluid and the multibody system, and the third is to impose boundary conditions of periodic inflow and outflow to reduce the number of SPH particles involved in the computation. Afterwards, a parallel computation strategy is proposed based on the graphics processing unit (GPU) to detect neighboring SPH particles and to solve the dynamic equations of the SPH particles, in order to improve computational efficiency. Meanwhile, the generalized-alpha algorithm is used to solve the dynamic equations of the multibody system. Finally, four case studies are given to validate the proposed parallel computation approach.
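
    The locality that makes SPH amenable to GPU parallelization comes from cell-list neighbor search: binning particles into cells of side h means interaction partners can only sit in the 27 adjacent cells. An illustrative serial version is sketched below (uniform cells, assumed 3D positions); a GPU implementation exploits exactly this structure in parallel.

        import numpy as np

        def neighbor_pairs(pos, h):
            """All particle pairs closer than the smoothing length h."""
            cells = {}
            keys = np.floor(pos / h).astype(int)
            for i, key in enumerate(map(tuple, keys)):
                cells.setdefault(key, []).append(i)
            offsets = [(dx, dy, dz) for dx in (-1, 0, 1)
                       for dy in (-1, 0, 1) for dz in (-1, 0, 1)]
            pairs = []
            for i, key in enumerate(map(tuple, keys)):
                for off in offsets:
                    cell = tuple(np.add(key, off))
                    for j in cells.get(cell, ()):
                        if j > i and np.sum((pos[i] - pos[j]) ** 2) < h * h:
                            pairs.append((i, j))
            return pairs

        pos = np.random.default_rng(0).random((500, 3))
        print(len(neighbor_pairs(pos, 0.1)))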

  1. A strand graph semantics for DNA-based computation

    PubMed Central

    Petersen, Rasmus L.; Lakin, Matthew R.; Phillips, Andrew

    2015-01-01

    DNA nanotechnology is a promising approach for engineering computation at the nanoscale, with potential applications in biofabrication and intelligent nanomedicine. DNA strand displacement is a general strategy for implementing a broad range of nanoscale computations, including any computation that can be expressed as a chemical reaction network. Modelling and analysis of DNA strand displacement systems is an important part of the design process, prior to experimental realisation. As experimental techniques improve, it is important for modelling languages to keep pace with the complexity of structures that can be realised experimentally. In this paper we present a process calculus for modelling DNA strand displacement computations involving rich secondary structures, including DNA branches and loops. We prove that our calculus is also sufficiently expressive to model previous work on non-branching structures, and propose a mapping from our calculus to a canonical strand graph representation, in which vertices represent DNA strands, ordered sites represent domains, and edges between sites represent bonds between domains. We define interactions between strands by means of strand graph rewriting, and prove the correspondence between the process calculus and strand graph behaviours. Finally, we propose a mapping from strand graphs to an efficient implementation, which we use to perform modelling and simulation of DNA strand displacement systems with rich secondary structure. PMID:27293306
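    As a rough illustration of the strand graph representation described above (vertices are strands, ordered sites are domains, edges are bonds between sites), a toy encoding might look as follows. This is only a sketch of the data structure, not the authors' calculus or rewriting machinery.

        from dataclasses import dataclass, field

        @dataclass
        class Strand:
            name: str
            domains: list                 # ordered domain sites, 5' to 3'

        @dataclass
        class StrandGraph:
            strands: list
            bonds: set = field(default_factory=set)  # {((strand, site), (strand, site))}

            def bind(self, s1, i, s2, j):
                """Add an edge bonding site i of strand s1 to site j of strand s2."""
                self.bonds.add(((s1, i), (s2, j)))

        g = StrandGraph([Strand("top", ["t", "x"]), Strand("bottom", ["x*", "t*"])])
        g.bind(0, 1, 1, 0)   # domain x hybridizes with its complement x*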

  2. LBP-based penalized weighted least-squares approach to low-dose cone-beam computed tomography reconstruction

    NASA Astrophysics Data System (ADS)

    Ma, Ming; Wang, Huafeng; Liu, Yan; Zhang, Hao; Gu, Xianfeng; Liang, Zhengrong

    2014-03-01

    Cone-beam computed tomography (CBCT) has attracted growing interest from researchers in image reconstruction. In practical applications of CBCT, the mAs level of the X-ray tube current is lowered in order to reduce the CBCT dose, but lowering the X-ray tube current degrades image quality. Low-dose CBCT image reconstruction is therefore in effect a noise problem. To acquire images of clinically acceptable quality while keeping the X-ray tube current as low as achievable, several penalized weighted least-squares (PWLS)-based image reconstruction algorithms have been developed. One representative strategy in previous work is to model the prior information for solution regularization using an anisotropic penalty term. To enhance edge preservation and noise suppression at a finer scale, a novel algorithm combining the local binary pattern (LBP) with penalized weighted least-squares (PWLS), called the LBP-PWLS-based image reconstruction algorithm, is proposed in this work. After the LBP is used to classify the region around each voxel as spot, flat, or edge, the proposed algorithm adaptively encourages strong diffusion on local spot/flat regions and less diffusion on edge/corner ones by adjusting the penalty in the cost function. The LBP-PWLS-based reconstruction algorithm was evaluated using sinogram data acquired by a clinical CT scanner from the CatPhan® 600 phantom. Experimental results on the noise-resolution tradeoff and other quantitative measurements demonstrated its feasibility and effectiveness in edge preservation and noise suppression in comparison with a previous PWLS reconstruction algorithm.
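    For orientation, the PWLS objective that such algorithms minimize typically has the form below (generic notation assumed here, not quoted from the paper):

    $$\hat{\mu} = \arg\min_{\mu}\; (\hat{y} - A\mu)^{T}\,\Sigma^{-1}\,(\hat{y} - A\mu) + \beta R(\mu),$$

    where $\hat{y}$ is the log-transformed projection data, $A$ the system matrix, $\Sigma$ the diagonal covariance of the sinogram noise, $\beta$ the regularization strength, and $R(\mu)$ the penalty term whose local weights the LBP classification would modulate.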

  3. Pulmonary function and high-resolution computed tomography examinations among offshore drill floor workers.

    PubMed

    Kirkhus, Niels E; Skare, Øivind; Ulvestad, Bente; Aaløkken, Trond Mogens; Günther, Anne; Olsen, Raymond; Thomassen, Yngvar; Lund, May Brit; Ellingsen, Dag G

    2018-04-01

    The aim of this study was to assess short-term changes in pulmonary function in drill floor workers currently exposed to airborne contaminants generated as a result of drilling offshore. We also aimed to study the prevalence of pulmonary fibrosis, using high-resolution computed tomography (HRCT) scans, in another group of previously exposed drill floor workers. Pulmonary function was measured before and after a 14-day work period in a follow-up study of 65 drill floor workers and 65 referents. Additionally, 57 other drill floor workers exposed to drilling fluids during the 1980s were examined with HRCT of the lungs in a cross-sectional study. The drill floor workers had a statistically significant decline in forced expiratory volume in 1 s (FEV1) across the 14-day work period after adjustment for diurnal variations in pulmonary function (mean 90 mL, range 30-140 mL), while the small decline among the referents (mean 20 mL, range −30 to 70 mL) was not statistically significant. Larger declines in FEV1 among drill floor workers were associated with fewer days of active drilling. There were no signs of pulmonary fibrosis related to oil mist exposure among the other previously exposed drill floor workers. After 14 days offshore, a statistically significant decline in FEV1 was observed in the drill floor workers, which may not be related to oil mist exposure. No pulmonary fibrosis related to oil mist exposure was observed.

  4. A parallelizable real-time motion tracking algorithm with applications to ultrasonic strain imaging

    NASA Astrophysics Data System (ADS)

    Jiang, J.; Hall, T. J.

    2007-07-01

    Ultrasound-based mechanical strain imaging systems utilize signals from conventional diagnostic ultrasound systems to image tissue elasticity contrast, which provides new diagnostically valuable information. Previous works (Hall et al 2003 Ultrasound Med. Biol. 29 427; Zhu and Hall 2002 Ultrason. Imaging 24 161) demonstrated that uniaxial deformation with minimal elevation motion is preferred for breast strain imaging and that real-time strain image feedback to operators is important to accomplish this goal. The work reported here enhances the real-time speckle tracking algorithm with two significant modifications. One fundamental change is that the proposed algorithm is column-based (a column is defined by a line of data parallel to the ultrasound beam direction, i.e. an A-line), as opposed to row-based (a row is defined by a line of data perpendicular to the ultrasound beam direction). Displacement estimates from adjacent columns then provide good guidance for motion tracking within a significantly reduced search region, lowering the computational cost. Consequently, the process of displacement estimation can be naturally split into at least two separate tasks, computed in parallel, propagating outward from the center of the region of interest (ROI). The proposed algorithm has been implemented and optimized in a Windows® system as a stand-alone ANSI C++ program. Results of preliminary tests, using numerical and tissue-mimicking phantoms and in vivo tissue data, suggest that high contrast strain images can be consistently obtained at frame rates (10 frames s⁻¹) that exceed our previous methods.
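    A minimal sketch of the guided, column-based matching idea follows: each kernel along an A-line searches only a small window around the displacement found just above it, which is what keeps the per-column work small and makes independent columns easy to compute in parallel. Function and parameter names here are hypothetical, not from the cited code.

        import numpy as np

        def track_column(pre, post, col, guess=0, kernel=32, search=4):
            """Axial displacement per kernel for one RF column, guided by `guess`."""
            n = pre.shape[0]
            disp = np.zeros(n // kernel, dtype=int)
            for k in range(len(disp)):
                a = pre[k * kernel:(k + 1) * kernel, col]
                best_score, best_d = -np.inf, guess
                for d in range(guess - search, guess + search + 1):
                    lo = k * kernel + d
                    if lo < 0 or lo + kernel > n:
                        continue
                    b = post[lo:lo + kernel, col]
                    score = np.dot(a - a.mean(), b - b.mean())  # correlation proxy
                    if score > best_score:
                        best_score, best_d = score, d
                disp[k] = best_d
                guess = best_d        # seed the next kernel down the column
            return disp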

  5. Efficient Nash Equilibrium Resource Allocation Based on Game Theory Mechanism in Cloud Computing by Using Auction

    PubMed Central

    Nezarat, Amin; Dastghaibifard, GH

    2015-01-01

    One of the most complex issues in the cloud computing environment is the problem of resource allocation: on one hand, the cloud provider expects the most profitability and, on the other hand, users expect to have the best resources at their disposal given their budget and time constraints. In most previous work, heuristic and evolutionary approaches have been used to solve this problem. Nevertheless, since the nature of this environment is economic, using economic methods can decrease response time and reduce the complexity of the problem. In this paper, an auction-based method is proposed which determines the auction winner by applying a game theory mechanism and holding a repetitive game with incomplete information in a non-cooperative environment. In this method, users calculate a suitable price bid with their objective function over several rounds and repetitions and send it to the auctioneer, and the auctioneer chooses the winning player based on the suggested utility function. In the proposed method, the end point of the game is the Nash equilibrium point, where players are no longer inclined to alter their bid for that resource and the final bid also satisfies the auctioneer’s utility function. To prove the convexity of the response space, the Lagrange method is used; the proposed model is simulated in CloudSim and the results are compared with previous work. It is concluded that this method converges to a response in a shorter time, produces the fewest service level agreement violations, and provides the most utility to the provider. PMID:26431035
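    To make the repeated-game mechanism concrete, here is a heavily simplified best-response bidding loop that terminates when no player wants to change its bid (the Nash condition described above). The utility rules are invented placeholders, not the paper's functions.

        def run_auction(budgets, values, rounds=100, step=1.0):
            """Players raise bids while profitable; stop at a fixed point."""
            bids = [0.0 for _ in budgets]
            winner = 0
            for _ in range(rounds):
                winner = max(range(len(bids)), key=lambda i: bids[i])
                changed = False
                for i in range(len(bids)):
                    ceiling = min(budgets[i], values[i])
                    # best response: outbid only while the price stays profitable
                    if i != winner and bids[winner] + step <= ceiling:
                        bids[i] = bids[winner] + step
                        changed = True
                if not changed:      # no player deviates: equilibrium reached
                    break
            return winner, bids

        print(run_auction(budgets=[10, 14], values=[8, 12]))  # player 1 wins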

  6. Efficient Nash Equilibrium Resource Allocation Based on Game Theory Mechanism in Cloud Computing by Using Auction.

    PubMed

    Nezarat, Amin; Dastghaibifard, G H

    2015-01-01

    One of the most complex issues in the cloud computing environment is the problem of resource allocation: on one hand, the cloud provider expects the most profitability and, on the other hand, users expect to have the best resources at their disposal given their budget and time constraints. In most previous work, heuristic and evolutionary approaches have been used to solve this problem. Nevertheless, since the nature of this environment is economic, using economic methods can decrease response time and reduce the complexity of the problem. In this paper, an auction-based method is proposed which determines the auction winner by applying a game theory mechanism and holding a repetitive game with incomplete information in a non-cooperative environment. In this method, users calculate a suitable price bid with their objective function over several rounds and repetitions and send it to the auctioneer, and the auctioneer chooses the winning player based on the suggested utility function. In the proposed method, the end point of the game is the Nash equilibrium point, where players are no longer inclined to alter their bid for that resource and the final bid also satisfies the auctioneer's utility function. To prove the convexity of the response space, the Lagrange method is used; the proposed model is simulated in CloudSim and the results are compared with previous work. It is concluded that this method converges to a response in a shorter time, produces the fewest service level agreement violations, and provides the most utility to the provider.

  7. Fast generation of computer-generated hologram by graphics processing unit

    NASA Astrophysics Data System (ADS)

    Matsuda, Sho; Fujii, Tomohiko; Yamaguchi, Takeshi; Yoshikawa, Hiroshi

    2009-02-01

    A cylindrical hologram is well known to be viewable over 360 deg. Such a hologram requires high pixel resolution, so a Computer-Generated Cylindrical Hologram (CGCH) demands a huge amount of calculation. In our previous research, we used a look-up table method for fast calculation on an Intel Pentium4 2.8 GHz; it took 480 hours to calculate a high resolution CGCH (504,000 x 63,000 pixels, with an average of 27,000 object points). To improve the quality of the CGCH reconstructed image, the fringe pattern requires higher spatial frequency and resolution, so to keep the calculation time practical we have to change the calculation method. In this paper, to reduce the calculation time of the CGCH (912,000 x 108,000 pixels), we employ a Graphics Processing Unit (GPU); the same computation took 4,406 hours on a Xeon 3.4 GHz. Since a GPU has many streaming processors and a parallel processing structure, it works as a high performance parallel processor, and it delivers its best performance on 2-dimensional and streaming data. Recently, GPUs have become usable for general-purpose computation (GPGPU): NVIDIA's GeForce7 series became programmable with the Cg programming language, and the subsequent GeForce8 series supports CUDA, a software development kit made by NVIDIA. Theoretically, the calculation ability of the GPU is quoted as 500 GFLOPS. From the experimental result, we achieved calculation 47 times faster than our previous CPU-based work; the CGCH can therefore be generated in 95 hours, and the total time to calculate and print the CGCH is 110 hours.
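    The computation being accelerated is, at its core, a per-pixel superposition of spherical waves from every object point, which is why it maps so well onto a GPU's many streaming processors: every hologram pixel is independent. A minimal NumPy sketch of the direct (unaccelerated) fringe calculation follows; the wavelength, pixel pitch, and names are assumed for illustration only.

        import numpy as np

        WAVELENGTH = 633e-9            # metres; illustrative value
        K = 2.0 * np.pi / WAVELENGTH
        PITCH = 1e-6                   # hologram pixel pitch; illustrative

        def fringe(points, amplitudes, nx, ny):
            """Phase fringe on an nx-by-ny hologram plane from 3-D object points."""
            x = (np.arange(nx) - nx / 2) * PITCH
            y = (np.arange(ny) - ny / 2) * PITCH
            X, Y = np.meshgrid(x, y)
            field = np.zeros((ny, nx), dtype=complex)
            for (px, py, pz), a in zip(points, amplitudes):
                r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
                field += a * np.exp(1j * K * r) / r    # spherical wave per point
            return np.angle(field)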

  8. SiSSR: Simultaneous subdivision surface registration for the quantification of cardiac function from computed tomography in canines.

    PubMed

    Vigneault, Davis M; Pourmorteza, Amir; Thomas, Marvin L; Bluemke, David A; Noble, J Alison

    2018-05-01

    Recent improvements in cardiac computed tomography (CCT) allow for whole-heart functional studies to be acquired at low radiation dose (<2 mSv) and high temporal resolution (<100 ms) in a single heart beat. Although the extraction of regional functional information from these images is of great clinical interest, there is a paucity of research into the quantification of regional function from CCT, contrasting with the large body of work in echocardiography and cardiac MR. Here we present the Simultaneous Subdivision Surface Registration (SiSSR) method: a fast, semi-automated image analysis pipeline for quantifying regional function from contrast-enhanced CCT. For each of thirteen adult male canines, we construct an anatomical reference mesh representing the left ventricular (LV) endocardium, obviating the need for a template mesh to be manually sculpted and initialized. We treat this generated mesh as a Loop subdivision surface, and adapt a technique previously described in the context of 3-D echocardiography to register these surfaces to the endocardium efficiently across all cardiac frames simultaneously. Although previous work performs the registration at a single resolution, we observe that subdivision surfaces naturally suggest a multiresolution approach, leading to faster convergence and avoiding local minima. We additionally make two notable changes to the cost function of the optimization, explicitly encouraging plausible biological motion and high mesh quality. Finally, we calculate an accepted functional metric for CCT from the registered surfaces, and compare our results to an alternate state-of-the-art CCT method. Published by Elsevier B.V.

  9. Global-scale Joint Body and Surface Wave Tomography with Vertical Transverse Isotropy for Seismic Monitoring Applications

    NASA Astrophysics Data System (ADS)

    Simmons, Nathan; Myers, Steve

    2017-04-01

    We continue to develop more advanced models of Earth's global seismic structure with specific focus on improving predictive capabilities for future seismic events. Our most recent version of the model combines high-quality P and S body wave travel times and surface-wave group and phase velocities in a joint (simultaneous) inversion to tomographically image Earth's crust and mantle. The new model adds anisotropy (vertical transverse isotropy), which is necessitated by the addition of surface waves to the tomographic data set. Like previous versions, the new model consists of 59 surfaces and 1.6 million model nodes from the surface to the core-mantle boundary, overlaying a 1-D outer and inner core model. The model architecture is aspherical, and we directly incorporate Earth's expected hydrostatic shape (ellipticity and mantle stretching). We also explicitly honor surface undulations, including the Moho, several internal crustal units, and the upper mantle transition zone undulations predicted by previous studies. The explicit Earth model design allows for accurate travel time computation using our unique 3-D ray tracing algorithms, capable of tracing more than 20 distinct seismic phases including crustal, regional, teleseismic, and core phases. Thus, we can now incorporate certain secondary (and sometimes exotic) phases into source location determination and other analyses. New work on model uncertainty quantification assesses the error covariance of the model; when completed, this will enable calculation of path-specific estimates of uncertainty for travel times computed using our previous model (LLNL-G3D-JPS), which is available to the monitoring and broader research community, and we encourage external evaluation and validation. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  10. Finding New Math Identities by Computer

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Chancellor, Marisa K. (Technical Monitor)

    1996-01-01

    Recently a number of interesting new mathematical identities have been discovered by means of numerical searches on high performance computers, using some newly discovered algorithms. These include the following: $\pi = \sum_{k=0}^{\infty} \frac{1}{16^k}\left(\frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6}\right)$; $\frac{17\pi^4}{360} = \sum_{k=1}^{\infty}\left(1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{k}\right)^2 k^{-2}$; and $\zeta(3,1,3,1,\ldots,3,1) = \frac{2\pi^{4m}}{(4m+2)!}$, where $m$ is the number of $(3,1)$ pairs and $\zeta(n_1, n_2, \ldots, n_r) = \sum_{k_1 > k_2 > \cdots > k_r} \frac{1}{k_1^{n_1} k_2^{n_2} \cdots k_r^{n_r}}$. The first identity is remarkable in that it permits one to compute the n-th binary or hexadecimal digit of $\pi$ directly, without computing any of the previous digits, and without using multiple precision arithmetic. Recently the ten billionth hexadecimal digit of $\pi$ was computed using this formula. The third identity has connections to quantum field theory. (The first and second of these have been formally established; the third is affirmed by numerical evidence only.) The background and results of this work will be described, including an overview of the algorithms and computer techniques used in these studies.
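    The digit-extraction property of the first identity can be demonstrated in a few lines: split each series at the target position and reduce the leading terms with modular exponentiation. This is a standard textbook sketch of the technique, not the code used for the ten-billionth-digit computation.

        def pi_hex_digit(n):
            """Return the n-th hexadecimal digit of pi after the point."""
            def s(j):
                # fractional contribution of sum_k 16^(n-1-k) / (8k + j)
                total = sum(pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)
                            for k in range(n))
                total += sum(16.0 ** (n - 1 - k) / (8 * k + j)
                             for k in range(n, n + 20))
                return total
            x = (4 * s(1) - 2 * s(4) - s(5) - s(6)) % 1.0
            return "0123456789ABCDEF"[int(16 * x)]

        print("".join(pi_hex_digit(i) for i in range(1, 9)))   # prints 243F6A88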

  11. Continuous development of schemes for parallel computing of the electrostatics in biological systems: implementation in DelPhi.

    PubMed

    Li, Chuan; Petukh, Marharyta; Li, Lin; Alexov, Emil

    2013-08-15

    Due to the enormous importance of electrostatics in molecular biology, calculating the electrostatic potential and corresponding energies has become a standard computational approach for the study of biomolecules and nano-objects immersed in water and salt phase or other media. However, the electrostatics of large macromolecules and macromolecular complexes, including nano-objects, may not be obtainable via explicit methods, and even standard continuum electrostatics methods may not be applicable due to high computational time and memory requirements. Here, we report further development of the parallelization scheme reported in our previous work (Li et al., J. Comput. Chem. 2012, 33, 1960) to include parallelization of the molecular surface and energy calculation components of the algorithm. The parallelization scheme utilizes different approaches, such as space domain parallelization, algorithmic parallelization, multithreading, and task scheduling, depending on the quantity being calculated. This allows for efficient use of the computing resources of the corresponding computer cluster. The parallelization scheme is implemented in the popular software DelPhi and results in a severalfold speedup. As a demonstration of the efficiency and capability of this methodology, the electrostatic potential and electric field distributions are calculated for the bovine mitochondrial supercomplex, illustrating their complex topology, which cannot be obtained by modeling the supercomplex components alone. Copyright © 2013 Wiley Periodicals, Inc.

  12. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows

    PubMed Central

    Wang, Di; Kleinberg, Robert D.

    2009-01-01

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C2, C3, C4,…. It is known that C2 can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing Ck (k > 2) require solving a linear program. In this paper we prove that C3 can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network. PMID:20161596
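    For experimenting with bounds such as C2 and C3, a brute-force minimizer over {0, 1}^n provides convenient ground truth on small instances. The sketch below represents the quadratic as a coefficient dictionary; this convention is an assumption for illustration, not the paper's formulation.

        from itertools import product

        def qubo_min(Q, const=0.0):
            """Minimize const + sum_{(i,j)} Q[i,j] * x_i * x_j over x in {0,1}^n."""
            n = 1 + max(max(i, j) for i, j in Q)
            best_val, best_x = None, None
            for x in product((0, 1), repeat=n):
                val = const + sum(w * x[i] * x[j] for (i, j), w in Q.items())
                if best_val is None or val < best_val:
                    best_val, best_x = val, x
            return best_val, best_x

        # diagonal terms encode linear coefficients, since x_i^2 = x_i
        print(qubo_min({(0, 0): -1.0, (0, 1): 2.0, (1, 1): -1.0}))  # (-1.0, (0, 1))

    Any of the lower bounds C2, C3, ... computed by the flow constructions described above can then be checked against this exact minimum.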

  13. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows.

    PubMed

    Wang, Di; Kleinberg, Robert D

    2009-11-28

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C(2), C(3), C(4),…. It is known that C(2) can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing C(k) (k > 2) require solving a linear program. In this paper we prove that C(3) can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}(n), this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network.

  14. Hardware Testing and System Evaluation: Procedures to Evaluate Commodity Hardware for Production Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goebel, J

    2004-02-27

    Without stable hardware any program will fail. The frustration and expense of supporting bad hardware can drain an organization, delay progress, and frustrate everyone involved. At Stanford Linear Accelerator Center (SLAC), we have created a testing method that helps our group, SLAC Computer Services (SCS), weed out potentially bad hardware and purchase the best hardware at the best possible cost. Commodity hardware changes often, so new evaluations happen periodically each time we purchase systems, and minor re-evaluations happen for revised systems for our clusters, about twice a year. This general framework helps SCS perform correct, efficient evaluations. This article outlines SCS's computer testing methods and our system acceptance criteria. We have expanded the basic ideas to other evaluations, such as storage, and we think the methods outlined in this article have helped us choose hardware that is much more stable and supportable than our previous purchases. We have found that commodity hardware ranges in quality, so a systematic method and tools for hardware evaluation were necessary. This article is based on one instance of a hardware purchase, but the guidelines apply to the general problem of purchasing commodity computer systems for production computational work.

  15. Wide band fiber-optic communications

    NASA Technical Reports Server (NTRS)

    Bates, Harry E.

    1993-01-01

    A number of optical communication lines are now in use at the Kennedy Space Center (KSC) for the transmission of voice, computer data and video signals. At the present time most of these channels utilize a single carrier wavelength centered near 1300 nm. As a result of previous work, the bandwidth capacity of a number of these channels is being increased by transmitting another signal in the 1550 nm region on the same fiber. This is accomplished by means of wavelength division multiplexing (WDM). It is therefore important to understand the bandwidth properties of the installed fiber plant. This work developed new procedures for measuring the bandwidth of fibers in both the 1300 nm and 1550 nm regions. In addition, a preliminary study of fiber links terminating in the Engineering Development Laboratory was completed.

  16. Analysis of 20 magnetic clouds at 1 AU during a solar minimum

    NASA Astrophysics Data System (ADS)

    Gulisano, A. M.; Dasso, S.; Mandrini, C. H.; Démoulin, P.

    We study 20 magnetic clouds observed in situ by the Wind spacecraft at the Lagrangian point L1 from 22 August, 1995, to 7 November, 1997. In previous works, assuming a cylindrical symmetry for the local magnetic configuration and a satellite trajectory crossing the axis of the cloud, we obtained their orientations using a minimum variance analysis. In this work we compute the orientations and magnetic configurations using a non-linear simultaneous fit of the geometric and physical parameters for a linear force-free model, including the possibility of a non-null impact parameter. We quantify global magnitudes, such as the relative magnetic helicity per unit length, and compare the values found with the two methods (minimum variance and the simultaneous fit). (Full text in Spanish.)

  17. Single and Double Photoionization of Mg

    NASA Astrophysics Data System (ADS)

    Abdel-Naby, Shahin; Pindzola, M. S.; Colgan, J.

    2014-05-01

    Single and double photoionization cross sections for Mg are calculated using a time-dependent close-coupling method. The correlation between the two 3s subshell electrons of Mg is obtained by relaxation of the close-coupled equations in imaginary time. An implicit method is used to propagate the close-coupled equations in real time to obtain single and double ionization cross sections for Mg. Energy and angle triple differential cross sections for double photoionization at equal energy sharing of E1 = E2 = 16.4 eV are compared with Elettra experiments and previous theoretical calculations. This work was supported in part by grants from NSF and US DoE. Computational work was carried out at NERSC in Oakland, California, NICS in Knoxville, Tennessee, and OLCF in Oak Ridge, Tennessee.

  18. Variational Integrators for Interconnected Lagrange-Dirac Systems

    NASA Astrophysics Data System (ADS)

    Parks, Helen; Leok, Melvin

    2017-10-01

    Interconnected systems are an important class of mathematical models, as they allow for the construction of complex, hierarchical, multiphysics, and multiscale models by the interconnection of simpler subsystems. Lagrange-Dirac mechanical systems provide a broad category of mathematical models that are closed under interconnection, and in this paper, we develop a framework for the interconnection of discrete Lagrange-Dirac mechanical systems, with a view toward constructing geometric structure-preserving discretizations of interconnected systems. This work builds on previous work on the interconnection of continuous Lagrange-Dirac systems (Jacobs and Yoshimura in J Geom Mech 6(1):67-98, 2014) and discrete Dirac variational integrators (Leok and Ohsawa in Found Comput Math 11(5), 529-562, 2011). We test our results by simulating some of the continuous examples given in Jacobs and Yoshimura (2014).
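    For readers new to variational integrators, the non-Dirac starting point is worth stating: one replaces the action integral by a discrete Lagrangian $L_d(q_k, q_{k+1})$ and extremizes the discrete action, which yields the discrete Euler-Lagrange equations (standard notation, not specific to this paper):

    $$D_2 L_d(q_{k-1}, q_k) + D_1 L_d(q_k, q_{k+1}) = 0,$$

    where $D_i$ denotes differentiation with respect to the $i$-th argument. The framework above generalizes this stationarity principle to interconnected Lagrange-Dirac systems.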

  19. Relativistic algorithm for time transfer in Mars missions under IAU Resolutions: an analytic approach

    NASA Astrophysics Data System (ADS)

    Pan, Jun-Yang; Xie, Yi

    2015-02-01

    With tremendous advances in modern techniques, Einstein's general relativity has become an inevitable part of deep space missions. We investigate the relativistic algorithm for time transfer between the proper time τ of the onboard clock and the Geocentric Coordinate Time, which extends some previous works by including the effects of propagation of electromagnetic signals. In order to evaluate the implicit algebraic equations and integrals in the model, we take an analytic approach to work out their approximate values. This analytic model might be used in an onboard computer because of its limited capability to perform calculations. Taking an orbiter like Yinghuo-1 as an example, we find that the contributions of the Sun, the ground station and the spacecraft dominate the outcomes of the relativistic corrections to the model.

  20. User guide for MODPATH Version 7—A particle-tracking model for MODFLOW

    USGS Publications Warehouse

    Pollock, David W.

    2016-09-26

    MODPATH is a particle-tracking post-processing program designed to work with MODFLOW, the U.S. Geological Survey (USGS) finite-difference groundwater flow model. MODPATH version 7 is the fourth major release since its original publication. Previous versions were documented in USGS Open-File Reports 89–381 and 94–464 and in USGS Techniques and Methods 6–A41. MODPATH version 7 works with MODFLOW-2005 and MODFLOW–USG. Support for unstructured grids in MODFLOW–USG is limited to smoothed, rectangular-based quadtree and quadpatch grids. A software distribution package containing the computer program and supporting documentation, such as input instructions, output file descriptions, and example problems, is available from the USGS over the Internet (http://water.usgs.gov/ogw/modpath/).

  1. Analysis and optimization of population annealing

    NASA Astrophysics Data System (ADS)

    Amey, Christopher; Machta, Jonathan

    2018-03-01

    Population annealing is an easily parallelizable sequential Monte Carlo algorithm that is well suited for simulating the equilibrium properties of systems with rough free-energy landscapes. In this work we seek to understand and improve the performance of population annealing. We derive several useful relations between quantities that describe the performance of population annealing and use these relations to suggest methods to optimize the algorithm. These optimization methods were tested by performing large-scale simulations of the three-dimensional (3D) Edwards-Anderson (Ising) spin glass and measuring several observables. The optimization methods were found to substantially decrease the amount of computational work necessary as compared to previously used, unoptimized versions of population annealing. We also obtain more accurate values of several important observables for the 3D Edwards-Anderson model.
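    A skeletal version of the algorithm, for a generic energy function, is shown below: the population is reweighted and resampled at each inverse-temperature step and then decorrelated by Monte Carlo sweeps. This is a plain, unoptimized sketch; the schedule and resampling refinements studied in the paper are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)

        def population_annealing(energy, mc_sweep, init, R=1000,
                                 betas=np.linspace(0.0, 1.0, 101)):
            """Anneal R replicas from beta=0 to beta=1; user supplies the moves."""
            pop = [init() for _ in range(R)]
            for b_old, b_new in zip(betas[:-1], betas[1:]):
                E = np.array([energy(x) for x in pop])
                w = np.exp(-(b_new - b_old) * (E - E.min()))   # Boltzmann reweighting
                counts = rng.multinomial(R, w / w.sum())       # resample back to size R
                pop = [pop[i] for i, c in enumerate(counts) for _ in range(c)]
                pop = [mc_sweep(x, b_new) for x in pop]        # equilibrate at new beta
            return pop

    The free-energy estimate and the optimal allocation of population size versus sweeps per step, which are the subject of the paper, build on exactly the reweighting factors computed in the loop above.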

  2. Bulk Enthalpy Calculations in the Arc Jet Facility at NASA ARC

    NASA Technical Reports Server (NTRS)

    Thompson, Corinna S.; Prabhu, Dinesh; Terrazas-Salinas, Imelda; Mach, Jeffrey J.

    2011-01-01

    The Arc Jet Facilities at NASA Ames Research Center generate test streams with enthalpies ranging from 5 MJ/kg to 25 MJ/kg. The present work describes a rigorous method, based on equilibrium thermodynamics, for calculating the bulk enthalpy of the flow produced in two of these facilities. The motivation for this work is to determine a dimensionally-correct formula for calculating the bulk enthalpy that is at least as accurate as the conventional formulas that are currently used. Unlike previous methods, the new method accounts for the amount of argon that is present in the flow. Comparisons are made with bulk enthalpies computed from an energy balance method. An analysis of primary facility operating parameters and their associated uncertainties is presented in order to further validate the enthalpy calculations reported herein.
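    The energy-balance method used for comparison is essentially a control-volume bookkeeping of the facility (generic form; the symbols are assumed here for orientation, not quoted from the paper):

    $$h_{\mathrm{bulk}} = \frac{P_{\mathrm{arc}} - \dot{Q}_{\mathrm{cool}}}{\dot{m}_{\mathrm{air}} + \dot{m}_{\mathrm{Ar}}},$$

    i.e., the electrical arc power minus the heat carried away by the cooling water, divided by the total gas mass flow. The contribution of the present work is to replace this estimate with an equilibrium-thermodynamics calculation that accounts explicitly for the argon fraction.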

  3. Thermoelectric-Driven Autonomous Sensors for a Biomass Power Plant

    NASA Astrophysics Data System (ADS)

    Rodríguez, A.; Astrain, D.; Martínez, A.; Gubía, E.; Sorbet, F. J.

    2013-07-01

    This work presents the design and development of a thermoelectric generator intended to harness waste heat in a biomass power plant, and generate electric power to operate sensors and the required electronics for wireless communication. The first objective of the work is to design the optimum thermoelectric generator to harness heat from a hot surface, and generate electric power to operate a flowmeter and a wireless transmitter. The process is conducted by using a computational model, presented in previous papers, to determine the final design that meets the requirements of electric power consumption and number of transmissions per minute. Finally, the thermoelectric generator is simulated to evaluate its performance. The final device transmits information every 5 s. Moreover, it is completely autonomous and can be easily installed, since no electric wires are required.

  4. The Study of Surface Computer Supported Cooperative Work and Its Design, Efficiency, and Challenges

    ERIC Educational Resources Information Center

    Hwang, Wu-Yuin; Su, Jia-Han

    2012-01-01

    In this study, a Surface Computer Supported Cooperative Work paradigm is proposed. Recently, multitouch technology has become widely available for human-computer interaction. We found it has great potential to facilitate more awareness of human-to-human interaction than personal computers (PCs) in colocated collaborative work. However, other…

  5. Deaf individuals who work with computers present a high level of visual attention

    PubMed Central

    Ribeiro, Paula Vieira; Ribas, Valdenilson Ribeiro; Ribas, Renata de Melo Guerra; de Melo, Teresinha de Jesus Oliveira Guimarães; Marinho, Carlos Antonio de Sá; Silva, Kátia Karina do Monte; de Albuquerque, Elizabete Elias; Ribas, Valéria Ribeiro; de Lima, Renata Mirelly Silva; Santos, Tuthcha Sandrelle Botelho Tavares

    2011-01-01

    Some studies in the literature indicate that deaf individuals seem to develop a higher level of attention and concentration in the process of constructing different ways of communicating. Objective The aim of this study was to evaluate the level of attention in individuals deaf from birth who worked with computers. Methods A total of 161 individuals in the 18-25 age group were assessed. Of these, 40 were congenitally deaf individuals who worked with computers; 42 were deaf individuals who did not work and neither knew how to use nor used computers (Control 1); 39 were individuals with normal hearing who did not work and neither knew how to use nor used computers (Control 2); and 40 were individuals with normal hearing who worked with computers (Control 3). Results The group of subjects deaf from birth who worked with computers (IDWC) presented a higher level of focused attention, sustained attention, mental manipulation capacity, and resistance to interference compared to the control groups. Conclusion This study highlights the relevance of sensory experience to cognitive processing. PMID:29213734

  6. Optimal remediation of unconfined aquifers: Numerical applications and derivative calculations

    NASA Astrophysics Data System (ADS)

    Mansfield, Christopher M.; Shoemaker, Christine A.

    1999-05-01

    This paper extends earlier work on derivative-based optimization for cost-effective remediation to unconfined aquifers, which have more complex, nonlinear flow dynamics than confined aquifers. Most previous derivative-based optimization of contaminant removal has been limited to consideration of confined aquifers; however, contamination is more common in unconfined aquifers. Exact derivative equations are presented, and two computationally efficient approximations, the quasi-confined (QC) and head independent from previous (HIP) unconfined-aquifer finite element equation derivative approximations, are presented and demonstrated to be highly accurate. The derivative approximations can be used with any nonlinear optimization method requiring derivatives for computation of either time-invariant or time-varying pumping rates. The QC and HIP approximations are combined with the nonlinear optimal control algorithm SALQR into the unconfined-aquifer algorithm, which is shown to compute solutions for unconfined aquifers in CPU times that were not significantly longer than those required by the confined-aquifer optimization model. Two of the three example unconfined-aquifer cases considered obtained pumping policies with substantially lower objective function values with the unconfined model than were obtained with the confined-aquifer optimization, even though the mean differences in hydraulic heads predicted by the unconfined- and confined-aquifer models were small (less than 0.1%). We suggest a possible geophysical index based on differences in drawdown predictions between unconfined- and confined-aquifer models to estimate which aquifers require unconfined-aquifer optimization and which can be adequately approximated by the simpler confined-aquifer analysis.

  7. A simulation study of homogeneous ice nucleation in supercooled salty water

    NASA Astrophysics Data System (ADS)

    Soria, Guiomar D.; Espinosa, Jorge R.; Ramirez, Jorge; Valeriani, Chantal; Vega, Carlos; Sanz, Eduardo

    2018-06-01

    We use computer simulations to investigate the effect of salt on homogeneous ice nucleation. The melting point of the employed solution model was obtained both by direct coexistence simulations and by thermodynamic integration from previous calculations of the water chemical potential. Using a seeding approach, in which we simulate ice seeds embedded in a supercooled aqueous solution, we compute the nucleation rate as a function of temperature for a 1.85 NaCl mol per water kilogram solution at 1 bar. To improve the accuracy and reliability of our calculations, we combine seeding with the direct computation of the ice-solution interfacial free energy at coexistence using the Mold Integration method. We compare the results with previous simulation work on pure water to understand the effect caused by the solute. The model captures the experimental trend that the nucleation rate at a given supercooling decreases when adding salt. Despite the fact that the thermodynamic driving force for ice nucleation is higher for salty water for a given supercooling, the nucleation rate slows down with salt due to a significant increase of the ice-fluid interfacial free energy. The salty water model predicts an ice nucleation rate that is in good agreement with experimental measurements, bringing confidence in the predictive ability of the model. We expect that the combination of state-of-the-art simulation methods here employed to study ice nucleation from solution will be of much use in forthcoming numerical investigations of crystallization in mixtures.
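    In the seeding approach, the nucleation rate is recovered from the size $N^*$ of the critical embedded cluster through classical nucleation theory; in the standard form used in the seeding literature (symbols defined here for orientation, not quoted from the paper),

    $$J = \rho_f\, f^{+} Z \exp\!\left(-\frac{\Delta G^*}{k_B T}\right), \qquad Z = \sqrt{\frac{|\Delta\mu|}{6\pi k_B T N^*}}, \qquad \Delta G^* = \frac{N^* |\Delta\mu|}{2},$$

    where $\rho_f$ is the fluid number density, $f^{+}$ the attachment rate, $Z$ the Zeldovich factor, and $\Delta\mu$ the fluid-ice chemical potential difference. A larger interfacial free energy raises $N^*$ and hence $\Delta G^*$, which is the mechanism by which salt slows the rate described above.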

  8. Parallel computing techniques for rotorcraft aerodynamics

    NASA Astrophysics Data System (ADS)

    Ekici, Kivanc

    The modification of unsteady three-dimensional Navier-Stokes codes for application on massively parallel and distributed computing environments is investigated. The Euler/Navier-Stokes code TURNS (Transonic Unsteady Rotor Navier-Stokes) was chosen as a test bed because of its wide use by universities and industry. For the efficient implementation of TURNS on parallel computing systems, two algorithmic changes are developed. First, major modifications are made to the implicit operator, Lower-Upper Symmetric Gauss-Seidel (LU-SGS), originally used in TURNS. Second, an inexact Newton method coupled with a Krylov subspace iterative method (a Newton-Krylov method) is applied. Both techniques had previously been tried for the Euler mode of the code; in this work, we extend the methods to the Navier-Stokes mode. Several new implicit operators were tried because of convergence problems of traditional operators with the high cell aspect ratio (CAR) grids needed for viscous calculations on structured grids. Promising results for both Euler and Navier-Stokes cases are presented for these operators. For the efficient application of Newton-Krylov methods to the Navier-Stokes mode of TURNS, efficient preconditioners must be used, so the parallel implicit operators developed in the previous step are employed as preconditioners and the results are compared. The Message Passing Interface (MPI) protocol has been used because of its portability to various parallel architectures. It should be noted that the proposed methodology is general and can be applied to several other CFD codes (e.g. OVERFLOW).
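    The appeal of the Newton-Krylov approach described above is that the Krylov solver never needs the Jacobian explicitly, only Jacobian-vector products, which can be approximated by a finite difference of the residual. A minimal matrix-free sketch using SciPy follows; it illustrates the idea under generic assumptions and is not the TURNS implementation.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def jfnk_step(F, u, eps=1e-7):
            """One inexact Newton step for F(u) = 0 with matrix-free GMRES."""
            Fu = F(u)

            def jv(v):
                nv = np.linalg.norm(v)
                if nv == 0.0:
                    return np.zeros_like(v)
                h = eps / nv
                return (F(u + h * v) - Fu) / h   # finite-difference J @ v

            J = LinearOperator((u.size, u.size), matvec=jv)
            du, _ = gmres(J, -Fu)                # preconditioning would go here
            return u + du

    The implicit operators discussed in the abstract play the role of the preconditioner in the GMRES solve, which is where most of the performance tuning happens.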

  9. A simulation study of homogeneous ice nucleation in supercooled salty water.

    PubMed

    Soria, Guiomar D; Espinosa, Jorge R; Ramirez, Jorge; Valeriani, Chantal; Vega, Carlos; Sanz, Eduardo

    2018-06-14

    We use computer simulations to investigate the effect of salt on homogeneous ice nucleation. The melting point of the employed solution model was obtained both by direct coexistence simulations and by thermodynamic integration from previous calculations of the water chemical potential. Using a seeding approach, in which we simulate ice seeds embedded in a supercooled aqueous solution, we compute the nucleation rate as a function of temperature for a 1.85 NaCl mol per water kilogram solution at 1 bar. To improve the accuracy and reliability of our calculations, we combine seeding with the direct computation of the ice-solution interfacial free energy at coexistence using the Mold Integration method. We compare the results with previous simulation work on pure water to understand the effect caused by the solute. The model captures the experimental trend that the nucleation rate at a given supercooling decreases when adding salt. Despite the fact that the thermodynamic driving force for ice nucleation is higher for salty water for a given supercooling, the nucleation rate slows down with salt due to a significant increase of the ice-fluid interfacial free energy. The salty water model predicts an ice nucleation rate that is in good agreement with experimental measurements, bringing confidence in the predictive ability of the model. We expect that the combination of state-of-the-art simulation methods here employed to study ice nucleation from solution will be of much use in forthcoming numerical investigations of crystallization in mixtures.

  10. A multiscale strength model for tantalum over an extended range of strain rates

    NASA Astrophysics Data System (ADS)

    Barton, N. R.; Rhee, M.

    2013-09-01

    A strength model for tantalum is developed and exercised across a range of conditions relevant to various types of experimental observations. The model is based on previous multiscale modeling work combined with experimental observations. As such, the model's parameterization includes a hybrid of quantities that arise directly from predictive sub-scale physics models and quantities that are adjusted to align the model with experimental observations. Given current computing and experimental limitations, the response regions for sub-scale physics simulations and detailed experimental observations have been largely disjoint. In formulating the new model and presenting results here, attention is paid to integrated experimental observations that probe strength response at the elevated strain rates where a previous version of the model has generally been successful in predicting experimental data [Barton et al., J. Appl. Phys. 109(7), 073501 (2011)].

  11. Statistical Analysis of CFD Solutions from the 6th AIAA CFD Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Derlaga, Joseph M.; Morrison, Joseph H.

    2017-01-01

    A graphical framework is used for statistical analysis of the results from an extensive N-version test of a collection of Reynolds-averaged Navier-Stokes computational fluid dynamics codes. The solutions were obtained by code developers and users from North America, Europe, Asia, and South America using both common and custom grid sequences as well as multiple turbulence models for the June 2016 6th AIAA CFD Drag Prediction Workshop sponsored by the AIAA Applied Aerodynamics Technical Committee. The aerodynamic configuration for this workshop was the Common Research Model subsonic transport wing-body previously used for both the 4th and 5th Drag Prediction Workshops. This work continues the statistical analysis begun in the earlier workshops and compares the results from the grid convergence study of the most recent workshop with previous workshops.

  12. Singlet Oxygen and Free Radical Reactions of Retinoids and Carotenoids—A Review

    PubMed Central

    Truscott, T. George

    2018-01-01

    We report on studies of reactions of singlet oxygen with carotenoids and retinoids and a range of free radical studies on carotenoids and retinoids with emphasis on recent work, dietary carotenoids and the role of oxygen in biological processes. Many previous reviews are cited and updated together with new data not previously reviewed. The review does not deal with computational studies but the emphasis is on laboratory-based results. We contrast the ease of study of both singlet oxygen and polyene radical cations compared to neutral radicals. Of particular interest is the switch from anti- to pro-oxidant behavior of a carotenoid with change of oxygen concentration: results for lycopene in a cellular model system show total protection of the human cells studied at zero oxygen concentration, but zero protection at 100% oxygen concentration. PMID:29301252

  13. [Influence of mental rotation of objects on psychophysiological functions of women].

    PubMed

    Chikina, L V; Fedorchuk, S V; Trushina, V A; Ianchuk, P I; Makarchuk, M Iu

    2012-01-01

    An integral part of modern human activity is work with computer systems, which in turn produces nervous-emotional tension. Hence, monitoring the psychophysiological state of workers, with the purpose of preserving their health and the success of their activity, and applying rehabilitation measures are pressing problems. It is now known that the efficiency of rehabilitation procedures rises when a complex of restorative programs is applied. Our previous investigation showed that mental rotation can compensate for the consequences of nervous-emotional tension. Therefore, in the present work we investigated how a complex of spatial tasks that we developed influences the psychophysiological performance of the tested women, for whom psycho-emotional tension from the use of computer technologies is more pronounced and the procedure of mental rotation is a more complex task than for men. The complex of spatial tasks applied in this work included: mental rotation of simple objects (letters and digits), mental rotation of complex objects (geometric figures), and mental rotation of complex objects with the use of short-term memory. Execution of the complex of spatial tasks reduced the time of simple and complex sensorimotor responses, raised the parameters of short-term memory and brain work capacity, and improved nervous processes. Collectively, mental rotation of objects can be recommended as a rehabilitation resource to compensate for the consequences of psycho-emotional strain, both for men and for women.

  14. Working memory contributions to reinforcement learning impairments in schizophrenia.

    PubMed

    Collins, Anne G E; Brown, Jaime K; Gold, James M; Waltz, James A; Frank, Michael J

    2014-10-08

    Previous research has shown that patients with schizophrenia are impaired in reinforcement learning tasks. However, behavioral learning curves in such tasks originate from the interaction of multiple neural processes, including the basal ganglia- and dopamine-dependent reinforcement learning (RL) system, but also prefrontal cortex-dependent cognitive strategies involving working memory (WM). Thus, it is unclear which specific system induces impairments in schizophrenia. We recently developed a task and computational model allowing us to separately assess the roles of RL (slow, cumulative learning) mechanisms versus WM (fast but capacity-limited) mechanisms in healthy adult human subjects. Here, we used this task to assess patients' specific sources of impairments in learning. In 15 separate blocks, subjects learned to pick one of three actions for stimuli. The number of stimuli to learn in each block varied from two to six, allowing us to separate influences of capacity-limited WM from the incremental RL system. As expected, both patients (n = 49) and healthy controls (n = 36) showed effects of set size and delay between stimulus repetitions, confirming the presence of working memory effects. Patients performed significantly worse than controls overall, but computational model fits and behavioral analyses indicate that these deficits could be entirely accounted for by changes in WM parameters (capacity and reliability), whereas RL processes were spared. These results suggest that the working memory system contributes strongly to learning impairments in schizophrenia. Copyright © 2014 the authors 0270-6474/14/3413747-10$15.00/0.
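    The separation the task exploits can be captured in a compact mixture sketch: a slow delta-rule RL module and a fast but capacity-limited WM module jointly produce the choice, with the WM weight shrinking as set size exceeds capacity. The parameter names below are generic placeholders, not the paper's fitted model.

        import numpy as np

        # Q: (n_stimuli, n_actions) array of RL values; wm: dict of one-shot memories
        def choice_probs(Q, wm, stim, set_size, capacity=3, w0=0.8, beta=8.0):
            """Mix softmax RL values with one-shot WM memory for one stimulus."""
            p_rl = np.exp(beta * Q[stim])
            p_rl /= p_rl.sum()
            p_wm = wm.get(stim, np.ones_like(p_rl) / p_rl.size)
            w = w0 * min(1.0, capacity / set_size)   # WM weight falls with load
            return w * p_wm + (1.0 - w) * p_rl

        def update(Q, wm, stim, action, reward, alpha=0.1):
            Q[stim, action] += alpha * (reward - Q[stim, action])  # incremental RL
            if reward:
                onehot = np.zeros(Q.shape[1])
                onehot[action] = 1.0
                wm[stim] = onehot          # WM perfectly recalls the last success

    Because only the WM weight depends on set size, set-size effects in the data isolate WM parameters, which is how the study can attribute the patients' deficits to WM while leaving RL spared.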

  15. Cancer 'survivor-care': II. Disruption of prefrontal brain activation top-down control of working memory capacity as possible mechanism for chemo-fog/brain (chemotherapy-associated cognitive impairment).

    PubMed

    Raffa, R B

    2013-08-01

    Cancer chemotherapy-associated cognitive impairments (termed 'chemo-fog' or 'chemo-brain'), particularly in memory, have been self-reported or identified in cancer survivors previously treated with chemotherapy. Although a variety of deficits have been detected, a consistent theme is a detriment in visuospatial working memory. The parietal cortex, a major site of storage of such memory, is implicated in chemotherapy-induced damage. However, if the findings of two recent publications are combined, the (pre)frontal cortex might be an equally viable target. Two recent studies, one postulating a mechanism for 'top-down control' of working memory capacity and another visualizing chemotherapy-induced alterations in brain activation during working memory processing, are reviewed and integrated. A computational model and the proposal that the prefrontal cortex plays a role in working memory via top-down control of parietal working memory capacity is consistent with a recent demonstration of decreased frontal hyperactivation following chemotherapy. Chemotherapy-associated impairment of visuospatial working memory might include the (pre)frontal cortex in addition to the parietal cortex. This provides new opportunity for basic science and clinical investigation. © 2013 John Wiley & Sons Ltd.

  16. Computational modeling of the negative priming effect based on inhibition patterns and working memory

    PubMed Central

    Chung, Dongil; Raz, Amir; Lee, Jaewon; Jeong, Jaeseung

    2013-01-01

    Negative priming (NP), the slowing of the response to target stimuli that have previously been exposed but ignored, has been reported in multiple psychological paradigms, including the Stroop task. Although NP likely results from the interplay of selective attention, episodic memory retrieval, working memory, and inhibition mechanisms, a comprehensive theoretical account of NP is currently unavailable. This lacuna may result from the complexity of stimulus combinations in NP. Thus, we aimed to investigate the presence of different degrees of the NP effect according to prime-probe combinations within a classic Stroop task. We recorded reaction times (RTs) from 66 healthy participants during Stroop task performance and examined three different NP subtypes, defined according to the type of the Stroop probe in prime-probe pairs. Our findings show significant RT differences among NP subtypes that are putatively due to the presence of differential disinhibition, i.e., release from inhibition. Among the several potential origins for differential subtypes of NP, we investigated the involvement of selective attention and/or working memory using a parallel distributed processing (PDP) model (employing selective attention only) and a modified PDP model with working memory (PDP-WM, employing both selective attention and working memory). Our findings demonstrate that, unlike the conventional PDP model, the PDP-WM successfully simulates different levels of NP effects that closely follow the behavioral data. This outcome suggests that working memory engages in the re-accumulation of the evidence for the target response and induces differential NP effects. Our computational model complements earlier efforts and may pave the road to further insights into an integrated theoretical account of complex NP effects. PMID:24312046

  17. A Computational and Theoretical Study of Conductance in Hydrogen-bonded Molecular Junctions

    NASA Astrophysics Data System (ADS)

    Wimmer, Michael

    This thesis is devoted to the theoretical and computational study of electron transport in molecular junctions where one or more hydrogen bonds are involved in the process. While electron transport through covalent bonds has been extensively studied, in recent work the focus has shifted towards hydrogen-bonded systems due to their ubiquitous presence in biological systems and their potential in forming nano-junctions between molecular electronic devices and biological systems. This analysis allows us to significantly expand our comprehension of the experimentally observed result that the inclusion of hydrogen bonding in a molecular junction significantly impacts its transport properties, a fact that has important implications for our understanding of transport through DNA, and nano-biological interfaces in general. In part of this work I have explored the implications of quasiresonant transport in short chains of weakly-bonded molecular junctions involving hydrogen bonds. I used theoretical and computational analysis to interpret recent experiments and explain the role of Fano resonances in the transmission properties of the junction. In a different direction, I have undertaken the study of the transversal conduction through nucleotide chains that involve a variable number of different hydrogen bonds, e.g. NH···O, OH···O, and NH···N, which are the three most prevalent hydrogen bonds in biological systems and organic electronics. My effort here has focused on the analysis of electronic descriptors that allow a simplified conceptual and computational understanding of transport properties. Specifically, I have expanded our previous work, where the molecular polarizability was used as a conductance descriptor, to include the possibility of atomic and bond partitions of the molecular polarizability. This is important because it affords an alternative molecular description of conductance that is not based on the conventional view of molecular orbitals as transport channels. My findings suggest that hydrogen-bond networks are crucial in understanding the conductance of these junctions. A broader impact of this work pertains to the fact that characterizing transport through hydrogen-bonding networks may help in developing faster and cost-effective approaches to personalized medicine, to advance DNA sequencing and implantable electronics, and to progress in the design and application of new drugs.

  18. Working beyond 65: a qualitative study of perceived hazards and discomforts at work.

    PubMed

    Reynolds, Frances; Farrow, Alexandra; Blank, Alison

    2013-01-01

    This qualitative study explored self-reports of hazards and discomforts in the workplace and coping strategies among those choosing to work beyond the age of 65 years. 30 people aged 66-91 years took part. Most worked part-time in professional or administrative roles. Each participant engaged in one semi-structured interview. Participants described some hazards and discomforts in their current work, but no recent accidents. The main age-related discomfort was tiredness. Other hazards that recurred in participants' accounts were physical demands of the job, driving, and interpersonal difficulties such as client or customer complaints, and in very rare cases, bullying. Most work-related hazards (e.g. prolonged sitting at computers, lifting heavy items and driving) were thought likely to affect any worker regardless of age. Coping strategies included making adaptations to age-related changes (such as decreased stamina) by keeping fit and being open about difficulties to colleagues, reducing hours of work, altering roles at work, limiting driving, applying expertise derived from previous work experiences, being assertive, using authority and status, and (among the minority employed in larger organisations) making use of supportive company/organisational policies and practices. Participants described taking individual responsibility for managing hazards at work and perceived little or no elevation of risk linked to age.

  19. Games at work: the recreational use of computer games during working hours.

    PubMed

    Reinecke, Leonard

    2009-08-01

    The present study investigated the recreational use of video and computer games in the workplace. In an online survey, 833 employed users of online casual games reported on their use of computer games during working hours. The data indicate that playing computer games in the workplace elicits substantial levels of recovery experience. Recovery experience associated with gameplay was the strongest predictor for the use of games in the workplace. Furthermore, individuals with higher levels of work-related fatigue reported stronger recovery experience during gameplay and showed a higher tendency to play games during working hours than did persons with lower levels of work strain. Additionally, the social situation at work was found to have a significant influence on the use of games. Persons receiving less social support from colleagues and supervisors played games at work more frequently than did individuals with higher levels of social support. Furthermore, job control was positively related to the use of games at work. In sum, the results of the present study illustrate that computer games have a significant recovery potential. Implications of these findings for research on personal computer use during work and for games research in general are discussed.

  20. Are specialist physicians missing out on the e-Health boat?

    PubMed

    Osborn, M; Day, R; Westbrook, J

    2009-10-01

    Nationally, health systems are making increasing investments in clinical information systems. Little is known about current computer use by specialist physicians, particularly outside the hospital setting. The aim was to identify the extent of, and reasons for, computer use in the work of physician Fellows of the Royal Australasian College of Physicians (RACP). In 2007, a self-administered survey was emailed from the RACP to all practising physicians living in Australia and New Zealand who had consented to email contact with the College. The survey was sent to a total of 7445 eligible physicians; 2328 physicians responded (31.3% response rate), but only 1266 responses (21.0%) could be analysed. Most respondents (97.5%) had access to computers at work, and 96.5% used home computers for work purposes. Physicians in public hospitals (72.6%) were more likely to use computers for work (65.6%) than those in private hospitals (12.6%) or consulting rooms (27.3%). Overall, physicians working in public hospitals used a wider range of applications, with 70.5% using their computers for searching the internet, 53.7% for receiving results, and 52.7% for specific educational activities. Physicians working from their consulting rooms (33.6%) were more likely to use electronic prescribing (11%) than physicians working in public hospitals (5.7%). Fellows have not incorporated computers into their consulting rooms, over which they have control. This is in contrast to general practitioners, who have embraced computers after the provision of various incentives. The rate of computer use by physicians for electronic prescribing in consulting rooms (11%) is very low in comparison with general practitioners (98%). One reason may be that physicians work in multiple locations, whereas general practitioners are more likely to work from one location.

  1. A Systematic Investigation of Computation Models for Predicting Adverse Drug Reactions (ADRs)

    PubMed Central

    Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong

    2014-01-01

    Background: Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance for constructing more effective computational models to predict ADRs. Principal Findings: In the current study, we compare and analyze the performance of existing computational methods for predicting ADRs, implementing and evaluating additional algorithms that had earlier been used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent, and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that their final formulas could all be converted to linear form; based on this finding, we propose a new algorithm, the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Conclusion: Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms. PMID:25180585
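
    As a concrete illustration of the role the Jaccard coefficient plays in such models, the following minimal sketch scores a candidate drug-ADR association by weighting neighbouring drugs' ADR profiles with Jaccard similarity, in the spirit of the weighted-profile family of methods the paper compares. The toy data and function names are hypothetical; this is not the authors' implementation.

```python
# Jaccard-weighted profile scoring for drug-ADR association (illustrative).

def jaccard(a: set, b: set) -> float:
    """Jaccard coefficient |a & b| / |a | b| between two feature sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Toy drug profiles: each drug is described by a set of known ADR labels.
profiles = {
    "drugA": {"nausea", "headache", "rash"},
    "drugB": {"nausea", "dizziness"},
    "drugC": {"rash", "fatigue", "headache"},
}

def score(query: str, adr: str) -> float:
    """Weighted-profile score: neighbours' ADR labels weighted by similarity."""
    neighbours = [d for d in profiles if d != query]
    weights = [jaccard(profiles[query], profiles[d]) for d in neighbours]
    hits = [w for d, w in zip(neighbours, weights) if adr in profiles[d]]
    return sum(hits) / sum(weights) if sum(weights) else 0.0

print(score("drugA", "dizziness"))  # likelihood-style score in [0, 1]
```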

  2. Sitting Time, Physical Activity and Sleep by Work Type and Pattern—The Australian Longitudinal Study on Women’s Health

    PubMed Central

    Clark, Bronwyn K.; Kolbe-Alexander, Tracy L.; Duncan, Mitch J.; Brown, Wendy

    2017-01-01

    Data from the Australian Longitudinal Study on Women’s Health were used to examine how work was associated with time spent sleeping, sitting and in physical activity (PA), in working women. Young (31–36 years; 2009) and mid-aged (59–64 years; 2010) women reported sleep (categorised as shorter ≤6 h/day and longer ≥8 h/day) and sitting time (work, transport, television, non-work computer, and other; summed for total sitting time) on the most recent work and non-work day; and moderate and vigorous PA (categorised as meeting/not meeting guidelines) in the previous week. Participants reported occupation (manager/professional; clerical/sales; trades/transport/labourer), work hours (part-time; full-time) and work pattern (shift/night; not shift/night). The odds of shorter sleep on work days was higher in both cohorts for women who worked shift or night hours. Longer sitting time on work days, made up primarily of sitting for work, was found for managers/professionals, clerical/sales and full-time workers. In the young cohort, clerical/sales workers and in the mid-aged cohort, full-time workers were less likely to meet PA guidelines. These results suggest multiple behaviour interventions tailored to work patterns and occupational category may be useful to improve the sleep, sitting and activity of working women. PMID:28287446

  3. Sitting Time, Physical Activity and Sleep by Work Type and Pattern-The Australian Longitudinal Study on Women's Health.

    PubMed

    Clark, Bronwyn K; Kolbe-Alexander, Tracy L; Duncan, Mitch J; Brown, Wendy

    2017-03-10

    Data from the Australian Longitudinal Study on Women's Health were used to examine how work was associated with time spent sleeping, sitting and in physical activity (PA), in working women. Young (31-36 years; 2009) and mid-aged (59-64 years; 2010) women reported sleep (categorised as shorter ≤6 h/day and longer ≥8 h/day) and sitting time (work, transport, television, non-work computer, and other; summed for total sitting time) on the most recent work and non-work day; and moderate and vigorous PA (categorised as meeting/not meeting guidelines) in the previous week. Participants reported occupation (manager/professional; clerical/sales; trades/transport/labourer), work hours (part-time; full-time) and work pattern (shift/night; not shift/night). The odds of shorter sleep on work days was higher in both cohorts for women who worked shift or night hours. Longer sitting time on work days, made up primarily of sitting for work, was found for managers/professionals, clerical/sales and full-time workers. In the young cohort, clerical/sales workers and in the mid-aged cohort, full-time workers were less likely to meet PA guidelines. These results suggest multiple behaviour interventions tailored to work patterns and occupational category may be useful to improve the sleep, sitting and activity of working women.

  4. On the spectroscopic constants, first electronic state, vibrational frequencies, and isomerization of hydroxymethylene (HCOH+)

    NASA Astrophysics Data System (ADS)

    Theis, Riley A.; Fortenberry, Ryan C.

    2017-09-01

    The hydroxymethylene cation (HCOH+) is believed to be chemically independent of the more stable formaldehyde cation isomer in interstellar chemistry and may be a precursor to methanol in chemical reaction networks. Previous work is corroborated here showing that the trans conformer of HCOH+ lies 3.48 kcal/mol below the cis on the potential energy surface. The small energy difference between the conformers and the much larger dipole moment of cis-HCOH+ (2.73 D) make this conformer more likely to be observed than trans-HCOH+ via telescopic rotational spectroscopy. A strong adiabatic shift is also predicted in the first electronic excitation into the 1 ²A″/2 ²A state out of either conformer into a C1 structure, reducing the excitation wavelength from the near-ultraviolet all the way into the near-infrared. The full set of fundamental vibrational frequencies is also computed here at a high level of theory. The 3306.0 cm-1 and 3225.3 cm-1 hydroxide stretches, for bare trans- and cis-HCOH+ respectively, are in agreement with previous theory but are significantly higher than the frequencies determined from previous experiment utilizing argon-tagging techniques. This shift is likely because the proton-bound complex created with the argon tag reduces the experimental frequencies. Lower-level computations including the argon tag bring the hydroxide stretches much closer to the experimental frequencies, indicating that the predicted frequencies for bare HCOH+ are likely well described.

  5. CT radiation profile width measurement using CR imaging plate raw data

    PubMed Central

    Yang, Chang‐Ying Joseph

    2015-01-01

    This technical note demonstrates computed tomography (CT) radiation profile measurement using computed radiography (CR) imaging plate raw data, showing that it is possible to perform the CT collimation width measurement using a single scan without saturating the imaging plate. Previously described methods require careful adjustments to the CR reader settings in order to avoid signal clipping in the CR processed image. CT radiation profile measurements were taken as part of routine quality control on 14 CT scanners from four vendors. CR cassettes were placed on the CT scanner bed, raised to isocenter, and leveled. Axial scans were taken at all available collimations, advancing the cassette for each scan. The CR plates were processed and raw CR data were analyzed using MATLAB scripts to measure collimation widths. The raw data approach was compared with previously established methodology. The quality control analysis scripts are released as open source under Creative Commons licensing. A log-linear relationship was found between raw pixel value and air kerma, and raw data collimation width measurements were in agreement with CR-processed, bit-reduced data using the previously described methodology. The raw data approach, with intrinsically wider dynamic range, allows improved measurement flexibility and precision. As a result, we demonstrate a methodology for CT collimation width measurements that uses a single CT scan and needs no CR scanning parameter adjustments, which is more convenient for routine quality control work. PACS numbers: 87.57.Q-, 87.59.bd, 87.57.uq PMID:26699559
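
    A minimal sketch of the width measurement itself (not the authors' open-source MATLAB scripts) is shown below: raw CR pixel values are linearized through the reported log-linear response, and the collimation width is read off as the full width at half maximum of the profile. The response constant k and the pixel pitch are hypothetical placeholders.

```python
import numpy as np

# FWHM collimation-width measurement from a raw CR profile, assuming a
# log-linear detector response: air kerma ~ exp(k * raw_value), so raw
# values are linearized before the width is measured.
def fwhm_mm(raw_profile: np.ndarray, pixel_pitch_mm: float, k: float = 1.0) -> float:
    """Full width at half maximum of the linearized profile, in mm."""
    dose = np.exp(k * raw_profile.astype(float))   # undo log-linear response
    dose -= dose.min()                             # remove baseline
    half = dose.max() / 2.0
    above = np.where(dose >= half)[0]              # indices above half maximum
    return (above[-1] - above[0]) * pixel_pitch_mm

# Example with a synthetic Gaussian profile (sigma = 3 mm):
x = np.linspace(-20, 20, 801)                      # 0.05 mm pixel pitch
profile = np.log(1e-3 + np.exp(-x**2 / (2 * 3.0**2)))
print(fwhm_mm(profile, pixel_pitch_mm=0.05))       # ~7.0 (2.355 * sigma = 7.07 mm)
```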

  6. Personal use of work computers: distraction versus destruction.

    PubMed

    Mastrangelo, Paul M; Everton, Wendi; Jolton, Jeffery A

    2006-12-01

    To explore definitions, frequencies, and motivation for personal use of work computers, we analyzed 329 employees' responses to an online survey, which asked participants to self-report frequencies for 41 computer behaviors at work. This sample (65% female, 74% European ethnicity, mean age of 36 years) was formed by soliciting participants through Internet Usenet groups, emails, and listservs. Results support a distinction between computer use that is counterproductive and that which is merely not productive. Nonproductive Computer Use occurred more when employees were younger (r = -0.31, p < 0.01), had Internet access at work longer (r = +0.16, p < 0.01), and had faster Internet connections at work than at home (r = +0.14, p < 0.01). Counterproductive Computer Use occurred more when Internet access was newer (r = -0.16, p < 0.01) and employees knew others who had been warned about misuse (r = +0.11, p < 0.05). While most employees who engaged in computer counterproductivity also engaged in computer nonproductivity, the inverse was uncommon, suggesting the need to distinguish between the two when establishing computer policies and Internet accessibility.

  7. Hyper-Fractal Analysis: A visual tool for estimating the fractal dimension of 4D objects

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Grossu, I.; Felea, D.; Besliu, C.; Jipa, Al.; Esanu, T.; Bordeianu, C. C.; Stan, E.

    2013-04-01

    This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images and 3D objects (Grossu et al. (2010) [1]). The program was extended to work with four-dimensional objects stored in comma-separated-values files. This might be of interest in biomedicine, for analyzing the evolution in time of three-dimensional images.
    New version program summary
    Program title: Hyper-Fractal Analysis (Fractal Analysis v03)
    Catalogue identifier: AEEG_v3_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v3_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 745761
    No. of bytes in distributed program, including test data, etc.: 12544491
    Distribution format: tar.gz
    Programming language: MS Visual Basic 6.0
    Computer: PC
    Operating system: MS Windows 98 or later
    RAM: 100M
    Classification: 14
    Catalogue identifier of previous version: AEEG_v2_0
    Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 831-832
    Does the new version supersede the previous version?: Yes
    Nature of problem: Estimating the fractal dimension of 4D images.
    Solution method: Optimized implementation of the 4D box-counting algorithm.
    Reasons for new version: Inspired by existing applications of 3D fractals in biomedicine [3], we extended the optimized version of the box-counting algorithm [1,2] to the four-dimensional case. This might be of interest in analyzing the evolution in time of 3D images. The box-counting algorithm was extended to support 4D objects stored in comma-separated-values files. A new form was added for generating 2D, 3D, and 4D test data. The application was tested on 4D objects of known dimension, e.g. the Sierpinski hypertetrahedron gasket, Df = ln(5)/ln(2) ≅ 2.32 (Fig. 1). The algorithm could be extended, with minimal effort, to a higher number of dimensions. Other features include easy integration with other applications through the very simple comma-separated-values file format for storing multi-dimensional images, the implementation of a χ² test as a criterion for deciding whether an object is fractal or not, and a user-friendly graphical interface.
    Fig. 1: Hyper-Fractal Analysis test on the Sierpinski hypertetrahedron 4D gasket (Df = ln(5)/ln(2) ≅ 2.32).
    Running time: To a first approximation, the algorithm is linear [2].
    References: [1] I.V. Grossu, D. Felea, C. Besliu, Al. Jipa, C.C. Bordeianu, E. Stan, T. Esanu, Computer Physics Communications 181 (2010) 831-832. [2] I.V. Grossu, C. Besliu, M.V. Rusu, Al. Jipa, C.C. Bordeianu, D. Felea, Computer Physics Communications 180 (2009) 1999-2001. [3] J. Ruiz de Miras, J. Navas, P. Villoslada, F.J. Esteban, Computer Methods and Programs in Biomedicine 104(3) (2011) 452-460.
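
    For context, a box-counting dimension estimate of the kind the program implements can be written compactly. The sketch below is an illustrative Python version for 4D point clouds, not the distributed Visual Basic 6.0 application, and assumes points normalized to the unit hypercube.

```python
import numpy as np

# Box-counting estimate of fractal dimension for a 4D point set: count
# occupied boxes N(s) at several grid scales s and fit the slope of
# log N(s) versus log s.
def box_count_dimension(points: np.ndarray, scales=(2, 4, 8, 16, 32)) -> float:
    """Estimate D_f as the slope of log N(s) versus log s."""
    counts = []
    for s in scales:
        boxes = np.floor(points * s).astype(int)       # box index per point
        boxes = np.clip(boxes, 0, s - 1)               # handle points at 1.0
        counts.append(len({tuple(b) for b in boxes}))  # occupied boxes
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

# Example: a 1D line embedded in 4D should give D_f close to 1.
t = np.random.rand(20000)
line = np.stack([t, t, t, t], axis=1)
print(round(box_count_dimension(line), 2))  # ~1.0
```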

  8. 3D Kirchhoff depth migration algorithm: A new scalable approach for parallelization on multicore CPU based cluster

    NASA Astrophysics Data System (ADS)

    Rastogi, Richa; Londhe, Ashutosh; Srivastava, Abhishek; Sirasala, Kirannmayi M.; Khonde, Kiran

    2017-03-01

    In this article, a new scalable 3D Kirchhoff depth migration algorithm is presented for a state-of-the-art multicore CPU-based cluster. Parallelization of 3D Kirchhoff depth migration is challenging due to its high demand for compute time, memory, storage, and I/O, along with the need for their effective management. The most resource-intensive modules of the algorithm are the traveltime calculation and the migration summation, which exhibit an inherent trade-off between compute time and the other resources. The parallelization strategy of the algorithm largely depends on the storage of calculated traveltimes and the mechanism for feeding them to the migration process. The presented work is an extension of our previous work, in which a 3D Kirchhoff depth migration application for a multicore CPU-based parallel system had been developed. Recently, we have worked on improving the parallel performance of this application by redesigning the parallelization approach. The new algorithm can efficiently migrate both prestack and poststack 3D data. It exhibits flexibility for migrating a large number of traces within the available node memory and with minimal requirements on storage, I/O, and inter-node communication. The resultant application is tested using 3D Overthrust data on PARAM Yuva II, a Xeon E5-2670 based multicore CPU cluster with 16 cores/node and 64 GB shared memory. Parallel performance of the algorithm is studied in different numerical experiments, and the scalability results show a striking improvement over the previous version. An impressive 49.05X speedup with 76.64% efficiency is achieved for 3D prestack data, and a 32.00X speedup with 50.00% efficiency for 3D poststack data, using 64 nodes. The results also demonstrate the effectiveness and robustness of the improved algorithm, with high scalability and efficiency on a multicore CPU cluster.
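
    The migration-summation module at the heart of such an algorithm can be sketched schematically. The toy version below is 2D with a constant velocity for brevity (the paper's application is a parallel 3D implementation with precomputed traveltimes); the geometry and all names are illustrative assumptions.

```python
import numpy as np

# Schematic Kirchhoff diffraction summation: each image point accumulates
# trace amplitude at the total source -> image point -> receiver traveltime.
def kirchhoff_migrate(traces, src_x, rec_x, dt, image_x, image_z, v=2000.0):
    """traces: (n_traces, n_samples) array; returns image of shape (nz, nx)."""
    nz, nx = len(image_z), len(image_x)
    image = np.zeros((nz, nx))
    for tr, sx, rx in zip(traces, src_x, rec_x):
        for iz, z in enumerate(image_z):
            for ix, x in enumerate(image_x):
                t_src = np.hypot(x - sx, z) / v        # source leg traveltime
                t_rec = np.hypot(x - rx, z) / v        # receiver leg traveltime
                it = int(round((t_src + t_rec) / dt))  # two-way time sample
                if it < tr.shape[0]:
                    image[iz, ix] += tr[it]            # diffraction summation
    return image

# Usage: pass recorded traces with their source/receiver x-positions, the
# sample interval dt, and the image grid coordinates; in production codes
# the traveltime tables are precomputed and the loops are parallelized.
```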

  9. The Modulus of Rupture from a Mathematical Point of View

    NASA Astrophysics Data System (ADS)

    Quintela, P.; Sánchez, M. T.

    2007-04-01

    The goal of this work is to present a complete mathematical study of three-point bending experiments and the modulus of rupture of brittle materials. We will present the mathematical model associated with three-point bending experiments and will use the asymptotic expansion method to obtain a new formula for calculating the modulus of rupture. We will compare the modulus of rupture of porcelain obtained with this formula to that obtained using the classic theoretical formula. Finally, we will also present one- and three-dimensional numerical simulations to compute the modulus of rupture.
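
    For reference, the classic theoretical formula referred to here is the standard beam-theory expression for a rectangular cross-section; it is quoted below as textbook background and is not the corrected formula derived in the paper.

```latex
% Classic modulus of rupture for a three-point bending test on a beam of
% rectangular cross-section: F = failure load, L = support span,
% b = specimen width, d = specimen depth.
\sigma_{\mathrm{MOR}} = \frac{3 F L}{2 b d^{2}}
```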

  10. Avalanche and edge-of-chaos criticality do not necessarily co-occur in neural networks.

    PubMed

    Kanders, Karlis; Lorimer, Tom; Stoop, Ruedi

    2017-04-01

    There are indications that for optimizing neural computation, neural networks may operate at criticality. Previous approaches have used distinct fingerprints of criticality, leaving open the question whether the different notions would necessarily reflect different aspects of one and the same instance of criticality, or whether they could potentially refer to distinct instances of criticality. In this work, we choose avalanche criticality and edge-of-chaos criticality and demonstrate for a recurrent spiking neural network that avalanche criticality does not necessarily entrain dynamical edge-of-chaos criticality. This suggests that the different fingerprints may pertain to distinct phenomena.
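
    For context, the avalanche fingerprint is typically extracted as sketched below: network activity is binned in time, avalanches are defined as runs of consecutive non-empty bins, and criticality is assessed from the avalanche-size distribution. This is a generic illustration, not the authors' analysis pipeline.

```python
import numpy as np

# Extract avalanche sizes from binned spiking activity: an avalanche is a
# maximal run of consecutive non-empty time bins; its size is the total
# spike count within the run.
def avalanche_sizes(spike_counts: np.ndarray) -> list:
    """spike_counts: spikes per time bin; returns sizes of avalanches."""
    sizes, current = [], 0
    for c in spike_counts:
        if c > 0:
            current += c                # avalanche continues
        elif current > 0:
            sizes.append(current)       # empty bin terminates the avalanche
            current = 0
    if current > 0:
        sizes.append(current)
    return sizes

# Example with toy Poisson activity:
rng = np.random.default_rng(0)
sizes = avalanche_sizes(rng.poisson(0.7, size=10000))
# At avalanche criticality one expects P(size) ~ size^(-3/2); these toy
# data are not critical, so their histogram falls off faster.
```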

  11. Practical adaptive quantum tomography

    NASA Astrophysics Data System (ADS)

    Granade, Christopher; Ferrie, Christopher; Flammia, Steven T.

    2017-11-01

    We introduce a fast and accurate heuristic for adaptive tomography that addresses many of the limitations of prior methods. Previous approaches were either too computationally intensive or tailored to handle special cases such as single qubits or pure states. By contrast, our approach combines the efficiency of online optimization with generally applicable and well-motivated data-processing techniques. We numerically demonstrate these advantages in several scenarios including mixed states, higher-dimensional systems, and restricted measurements. Complete data and source code for this work are available online (http://cgranade.com) [1] and can be previewed at https://goo.gl/koiWxR.

  12. Mirror neural training induced by virtual reality in brain-computer interfaces may provide a promising approach for the autism therapy.

    PubMed

    Zhu, Huaping; Sun, Yaoru; Zeng, Jinhua; Sun, Hongyu

    2011-05-01

    Previous studies have suggested that dysfunction of the human mirror neuron system (hMNS) plays an important role in autism spectrum disorder (ASD). In this work, we propose a novel training program, drawn from our interdisciplinary research, to improve the mirror neuron functions of autistic individuals by using a BCI system with virtual reality technology. This is a promising approach for individuals with autism to learn and develop social communication in a VR environment. A method for testing this hypothesis is also provided. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nashold, B.; Rosenblatt, D.; Hau, J.

    This summary describes a Supplemental Site Inspection (SSI) conducted by Argonne National Laboratory (ANL) at Air Force Plant 59 (AFP 59) in Johnson City, New York. All required data pertaining to this project were entered by ANL into the Air Force-wide Installation Restoration Program Information System (IRPIMS) computer format and submitted to an appropriate authority. The work was sponsored by the United States Air Force as part of its Installation Restoration Program (IRP). Previous studies had revealed the presence of contaminants at the site and identified several potential contaminant sources. Argonne's study was conducted to answer questions raised by earlier investigations.

  14. Distinguishing computable mixtures of quantum states

    NASA Astrophysics Data System (ADS)

    Grande, Ignacio H. López; Senno, Gabriel; de la Torre, Gonzalo; Larotonda, Miguel A.; Bendersky, Ariel; Figueira, Santiago; Acín, Antonio

    2018-05-01

    In this article we extend results from our previous work [Bendersky et al., Phys. Rev. Lett. 116, 230402 (2016), 10.1103/PhysRevLett.116.230402] by providing a protocol to distinguish in finite time and with arbitrarily high success probability any algorithmic mixture of pure states from the maximally mixed state. Moreover, we include an experimental realization, using a modified quantum key distribution setup, where two different random sequences of pure states are prepared; these sequences are indistinguishable according to quantum mechanics, but they become distinguishable when randomness is replaced with pseudorandomness within the experimental preparation process.

  15. Partial information decomposition as a spatiotemporal filter.

    PubMed

    Flecker, Benjamin; Alford, Wesley; Beggs, John M; Williams, Paul L; Beer, Randall D

    2011-09-01

    Understanding the mechanisms of distributed computation in cellular automata requires techniques for characterizing the emergent structures that underlie information processing in such systems. Recently, techniques from information theory have been brought to bear on this problem. Building on this work, we utilize the new technique of partial information decomposition to show that previous information-theoretic measures can confound distinct sources of information. We then propose a new set of filters and demonstrate that they more cleanly separate out the background domains, particles, and collisions that are typically associated with information storage, transfer, and modification in cellular automata.
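
    As a small worked example of the partial information decomposition in the Williams-Beer I_min formulation that this line of work builds on, the sketch below computes redundancy, unique information, and synergy for the XOR gate, where all of the information is synergistic. This is a hand-rolled illustration, not the authors' cellular-automata filtering code.

```python
import math

# PID (Williams-Beer I_min) for Y = X1 XOR X2 with uniform binary inputs.
# Joint distribution p(x1, x2, y):
p = {(x1, x2, x1 ^ x2): 0.25 for x1 in (0, 1) for x2 in (0, 1)}

def marginal(idx):
    """Marginal distribution over the given tuple positions."""
    out = {}
    for state, prob in p.items():
        key = tuple(state[i] for i in idx)
        out[key] = out.get(key, 0.0) + prob
    return out

def specific_info(y, src_idx):
    """I_spec(Y=y; X_src): information the source carries about outcome y."""
    p_y = marginal((2,))[(y,)]
    p_src, p_src_y = marginal(src_idx), marginal(src_idx + (2,))
    total = 0.0
    for x, p_x in p_src.items():
        p_xy = p_src_y.get(x + (y,), 0.0)
        if p_xy > 0:
            total += (p_xy / p_y) * math.log2((p_xy / p_x) / p_y)
    return total

def mutual_info(src_idx):
    return sum(marginal((2,))[(y,)] * specific_info(y, src_idx) for y in (0, 1))

# Redundancy = expected minimum specific information over the two sources.
redundancy = sum(
    marginal((2,))[(y,)] * min(specific_info(y, (0,)), specific_info(y, (1,)))
    for y in (0, 1)
)
unique1 = mutual_info((0,)) - redundancy
unique2 = mutual_info((1,)) - redundancy
synergy = mutual_info((0, 1)) - redundancy - unique1 - unique2
print(redundancy, unique1, unique2, synergy)  # 0.0 0.0 0.0 1.0 for XOR
```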

  16. Avalanche and edge-of-chaos criticality do not necessarily co-occur in neural networks

    NASA Astrophysics Data System (ADS)

    Kanders, Karlis; Lorimer, Tom; Stoop, Ruedi

    2017-04-01

    There are indications that for optimizing neural computation, neural networks may operate at criticality. Previous approaches have used distinct fingerprints of criticality, leaving open the question whether the different notions would necessarily reflect different aspects of one and the same instance of criticality, or whether they could potentially refer to distinct instances of criticality. In this work, we choose avalanche criticality and edge-of-chaos criticality and demonstrate for a recurrent spiking neural network that avalanche criticality does not necessarily entrain dynamical edge-of-chaos criticality. This suggests that the different fingerprints may pertain to distinct phenomena.

  17. The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava

    2016-08-01

    This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different-order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of the PSC code, which allows support for modular algorithms and data structures. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves, which is shown to lead to major efficiency gains over unbalanced methods and over a previously used, simpler balancing method.
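
    The idea behind space-filling-curve balancing can be conveyed in a few lines: order the patches along a Morton (Z-order) curve, then cut the resulting one-dimensional sequence into contiguous chunks of roughly equal work, so that each rank receives spatially compact patches. The toy version below is not the PSC implementation, and its grid, loads, and rank count are hypothetical.

```python
# Patch-based load balancing along a Morton (Z-order) space-filling curve.

def morton_key(ix: int, iy: int, bits: int = 16) -> int:
    """Interleave the bits of (ix, iy) to get the Z-order curve index."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b)
        key |= ((iy >> b) & 1) << (2 * b + 1)
    return key

def balance(patches, loads, n_ranks):
    """patches: list of (ix, iy); loads: work per patch; returns rank map."""
    order = sorted(range(len(patches)), key=lambda i: morton_key(*patches[i]))
    target = sum(loads) / n_ranks
    assignment, rank, acc = {}, 0, 0.0
    for i in order:
        if acc >= target and rank < n_ranks - 1:
            rank, acc = rank + 1, 0.0   # start filling the next rank
        assignment[patches[i]] = rank
        acc += loads[i]
    return assignment

# Example: 4x4 grid of patches, one hot spot with ten times the work; the
# Morton ordering keeps each rank's patches spatially clustered.
patches = [(ix, iy) for ix in range(4) for iy in range(4)]
loads = [10.0 if patch == (1, 1) else 1.0 for patch in patches]
print(balance(patches, loads, n_ranks=4))
```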

  18. Fully coupled six-dimensional calculations of the water dimer vibration-rotation-tunneling states with split Wigner pseudospectral approach. II. Improvements and tests of additional potentials

    NASA Astrophysics Data System (ADS)

    Fellers, R. S.; Braly, L. B.; Saykally, R. J.; Leforestier, C.

    1999-04-01

    The SWPS method is improved by the addition of H.E.G. contractions for generating a more compact basis. An error in the definition of the internal fragment axis system used in our previous calculation is described and corrected. Fully coupled 6D (rigid monomers) VRT states are computed for several new water dimer potential surfaces and compared with experiment and our earlier SWPS results. This work sets the stage for refinement of such potential surfaces via regression analysis of VRT spectroscopic data.

  19. Computer games: a double-edged sword?

    PubMed

    Sun, De-Lin; Ma, Ning; Bao, Min; Chen, Xang-Chuan; Zhang, Da-Ren

    2008-10-01

    Excessive computer game playing (ECGP) has already become a serious social problem. However, limited data from experimental laboratory studies are available on the negative consequences of ECGP for players' cognitive characteristics. In the present study, we compared three groups of participants (current ECGP participants, previous ECGP participants, and control participants) on a Multiple Object Tracking (MOT) task. The previous ECGP participants performed significantly better than the control participants, which suggests a facilitation effect of computer games on visuospatial abilities. More importantly, the current ECGP participants performed significantly worse than the previous ECGP participants, indicating that ECGP may be related to cognitive deficits. Implications of this study are discussed.

  20. Novel physical constraints on implementation of computational processes

    NASA Astrophysics Data System (ADS)

    Wolpert, David; Kolchinsky, Artemy

    Non-equilibrium statistical physics permits us to analyze computational processes, i.e., ways to drive a physical system such that its coarse-grained dynamics implements some desired map. It is now known how to implement any such desired computation without dissipating work, and what the minimal (dissipationless) work is that such a computation will require (the so-called "generalized Landauer bound"). We consider how these analyses change if we impose realistic constraints on the computational process. First, we analyze how many degrees of freedom of the system must be controlled, in addition to the ones specifying the information-bearing degrees of freedom, in order to avoid dissipating work during a given computation, when local detailed balance holds. We analyze this issue for deterministic computations, deriving a state-space vs. speed trade-off, and use our results to motivate a measure of the complexity of a computation. Second, we consider computations that are implemented with logic circuits, in which only a small number of degrees of freedom are coupled at a time. We show that the way a computation is implemented using circuits affects its minimal work requirements, and we relate these minimal work requirements to information-theoretic measures of complexity.
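
    For context, the generalized Landauer bound referred to here is usually stated as follows (a standard textbook form quoted as background; the abstract itself does not spell out the formula): for a process mapping an input distribution p to an output distribution p' at temperature T, the minimal work that must be dissipated is

```latex
% Generalized Landauer bound: H(.) is Shannon entropy in bits, k_B the
% Boltzmann constant, T the temperature of the heat bath.
W_{\mathrm{diss}} \;\ge\; k_B T \ln 2 \,\bigl[ H(p) - H(p') \bigr]
```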
