Sample records for consequence computer based

  1. A computational model of selection by consequences: log survivor plots.

    PubMed

    Kulubekova, Saule; McDowell, J J

    2008-06-01

    [McDowell, J. J., 2004. A computational model of selection by consequences. J. Exp. Anal. Behav. 81, 297-317] instantiated the principle of selection by consequences in a virtual organism with an evolving repertoire of possible behaviors undergoing selection, reproduction, and mutation over many generations. The process is based on the computational approach, which is non-deterministic and rules-based. The model proposes a causal account for operant behavior. McDowell found that the virtual organism consistently showed a hyperbolic relationship between response and reinforcement rates according to the quantitative law of effect. To continue validation of the computational model, the present study examined its behavior on the molecular level by comparing the virtual organism's interresponse-time (IRT) distributions, in the form of log survivor plots, to findings from live organisms. Log survivor plots did not show the "broken-stick" feature indicative of distinct bouts and pauses in responding, although the bend in slope of the plots became more defined at low reinforcement rates. The shape of the virtual organism's log survivor plots was more consistent with the data on reinforced responding in pigeons. These results suggest that log survivor plot patterns of the virtual organism were generally consistent with the findings from live organisms, providing further support for the computational model of selection by consequences as a viable account of operant behavior.
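
    The record above evaluates the model at the molecular level through log survivor plots of interresponse times (IRTs). Purely as an illustrative sketch of that analysis (not the authors' code, and using hypothetical simulated IRTs), a log survivor function can be computed as follows; a "broken-stick" pattern would show up as two roughly linear segments.

    ```python
    import numpy as np

    def log_survivor(irts, bin_width=0.1):
        """Return (time bins, log10 proportion of IRTs at least as long as each bin edge)."""
        irts = np.asarray(irts, dtype=float)
        edges = np.arange(0.0, irts.max() + bin_width, bin_width)
        survivor = np.array([(irts >= t).mean() for t in edges])
        survivor = np.clip(survivor, 1e-12, 1.0)   # avoid log(0) in empty tail bins
        return edges, np.log10(survivor)

    # Hypothetical IRTs (s): short within-bout gaps mixed with longer between-bout pauses.
    rng = np.random.default_rng(0)
    irts = np.concatenate([rng.exponential(0.5, 800), rng.exponential(5.0, 200)])
    t, log_s = log_survivor(irts)
    print(t[:5], np.round(log_s[:5], 3))
    ```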

  2. Measuring the Computer-Related Self-Concept

    ERIC Educational Resources Information Center

    Langheinrich, Jessica; Schönfelder, Mona; Bogner, Franz X.

    2016-01-01

    A positive self-concept supposedly affects a student's well-being as well as his or her perception of individual competence at school. As computer-based learning is becoming increasingly important in school, a positive computer-related self-concept (CSC) might help to enhance cognitive achievement. Consequently, we focused on establishing a short,…

  3. sTeam--Providing Primary Media Functions for Web-Based Computer-Supported Cooperative Learning.

    ERIC Educational Resources Information Center

    Hampel, Thorsten

    The World Wide Web has developed as the de facto standard for computer-based learning. However, as a server-centered approach, it confines readers and learners to passive nonsequential reading. Authoring and Web-publishing systems aim at supporting the authors' design process. Consequently, learners' activities are confined to selecting and…

  4. Can We Apply TAM in Computer-Based Classes?

    ERIC Educational Resources Information Center

    Williams, David; Williams, Denise

    2013-01-01

    While students may struggle in any classroom and consequently require help beyond the scheduled meeting time and place of the class, computer-based courses pose the additional hurdle of requiring ready access to hardware and software that may be unavailable or inconvenient for students outside of the classroom and its scheduled meeting time. This…

  5. An analysis of computer-related patient safety incidents to inform the development of a classification.

    PubMed

    Magrabi, Farah; Ong, Mei-Sing; Runciman, William; Coiera, Enrico

    2010-01-01

    To analyze patient safety incidents associated with computer use to develop the basis for a classification of problems reported by health professionals. Incidents submitted to a voluntary incident reporting database across one Australian state were retrieved and a subset (25%) was analyzed to identify 'natural categories' for classification. Two coders independently classified the remaining incidents into one or more categories. Free text descriptions were analyzed to identify contributing factors. Where available, medical specialty, time of day and consequences were examined. Measurements comprised descriptive statistics and inter-rater reliability. A search of 42,616 incidents from 2003 to 2005 yielded 123 computer-related incidents. After removing duplicate and unrelated incidents, 99 incidents describing 117 problems remained. A classification with 32 types of computer use problems was developed. Problems were grouped into information input (31%), transfer (20%), output (20%) and general technical (24%). Overall, 55% of problems were machine-related and 45% were attributed to human-computer interaction. Delays in initiating and completing clinical tasks were a major consequence of machine-related problems (70%), whereas rework was a major consequence of human-computer interaction problems (78%). While 38% (n=26) of the incidents were reported to have a noticeable consequence but no harm, 34% (n=23) had no noticeable consequence. Only 0.2% of all incidents reported were computer-related. Further work is required to expand our classification using incident reports and other sources of information about healthcare IT problems. Evidence-based user interface design must focus on the safe entry and retrieval of clinical information and support users in detecting and correcting errors and malfunctions.
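
    The study above reports inter-rater reliability for the two coders who classified the remaining incidents. As a generic illustration only (the paper does not give its computation, and the category labels below are hypothetical), Cohen's kappa for two coders can be calculated like this:

    ```python
    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        """Cohen's kappa for two raters assigning one category per item."""
        n = len(labels_a)
        observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        # Chance agreement: sum over categories of p_rater1(c) * p_rater2(c).
        expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                       for c in set(labels_a) | set(labels_b))
        return (observed - expected) / (1 - expected)

    # Hypothetical codings of ten incidents into the broad problem groups.
    coder1 = ["input", "output", "technical", "input", "transfer",
              "input", "output", "technical", "transfer", "input"]
    coder2 = ["input", "output", "technical", "output", "transfer",
              "input", "output", "technical", "transfer", "transfer"]
    print(round(cohens_kappa(coder1, coder2), 3))
    ```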

  6. An analysis of computer-related patient safety incidents to inform the development of a classification

    PubMed Central

    Ong, Mei-Sing; Runciman, William; Coiera, Enrico

    2010-01-01

    Objective To analyze patient safety incidents associated with computer use to develop the basis for a classification of problems reported by health professionals. Design Incidents submitted to a voluntary incident reporting database across one Australian state were retrieved and a subset (25%) was analyzed to identify ‘natural categories’ for classification. Two coders independently classified the remaining incidents into one or more categories. Free text descriptions were analyzed to identify contributing factors. Where available medical specialty, time of day and consequences were examined. Measurements Descriptive statistics; inter-rater reliability. Results A search of 42 616 incidents from 2003 to 2005 yielded 123 computer related incidents. After removing duplicate and unrelated incidents, 99 incidents describing 117 problems remained. A classification with 32 types of computer use problems was developed. Problems were grouped into information input (31%), transfer (20%), output (20%) and general technical (24%). Overall, 55% of problems were machine related and 45% were attributed to human–computer interaction. Delays in initiating and completing clinical tasks were a major consequence of machine related problems (70%) whereas rework was a major consequence of human–computer interaction problems (78%). While 38% (n=26) of the incidents were reported to have a noticeable consequence but no harm, 34% (n=23) had no noticeable consequence. Conclusion Only 0.2% of all incidents reported were computer related. Further work is required to expand our classification using incident reports and other sources of information about healthcare IT problems. Evidence based user interface design must focus on the safe entry and retrieval of clinical information and support users in detecting and correcting errors and malfunctions. PMID:20962128

  7. Modeling Biodegradation and Reactive Transport: Analytical and Numerical Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Y; Glascoe, L

    The computational modeling of the biodegradation of contaminated groundwater systems accounting for biochemical reactions coupled to contaminant transport is a valuable tool for both the field engineer/planner with limited computational resources and the expert computational researcher less constrained by time and computer power. There exist several analytical and numerical computer models that have been and are being developed to cover the practical needs put forth by users to fulfill this spectrum of computational demands. Generally, analytical models provide rapid and convenient screening tools running on very limited computational power, while numerical models can provide more detailed information with consequent requirements of greater computational time and effort. While these analytical and numerical computer models can provide accurate and adequate information to produce defensible remediation strategies, decisions based on inadequate modeling output or on over-analysis can have costly and risky consequences. In this chapter we consider both analytical and numerical modeling approaches to biodegradation and reactive transport. Both approaches are discussed and analyzed in terms of achieving bioremediation goals, recognizing that there is always a tradeoff between computational cost and the resolution of simulated systems.
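
    As a toy illustration of the analytical-versus-numerical trade-off described above (a sketch under simplified assumptions, not one of the models surveyed in the chapter), first-order biodegradation of a well-mixed plume, dC/dt = -kC, has an exact screening solution that can be checked against a simple explicit time-stepping scheme:

    ```python
    import math

    k = 0.05           # assumed first-order biodegradation rate constant (1/day)
    c0 = 10.0          # assumed initial concentration (mg/L)
    t_end, dt = 100.0, 1.0

    # Analytical screening solution: C(t) = C0 * exp(-k t)
    analytical = c0 * math.exp(-k * t_end)

    # Explicit (Euler) numerical solution of dC/dt = -k C
    c, t = c0, 0.0
    while t < t_end:
        c -= k * c * dt
        t += dt

    print(f"C({t_end:.0f} d): analytical {analytical:.4f} mg/L, numerical {c:.4f} mg/L")
    ```

    A full reactive-transport model would add advection and dispersion terms, which is where the numerical cost discussed above grows.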

  8. Learning with Interactive Computer Graphics in the Undergraduate Neuroscience Classroom

    ERIC Educational Resources Information Center

    Pani, John R.; Chariker, Julia H.; Naaz, Farah; Mattingly, William; Roberts, Joshua; Sephton, Sandra E.

    2014-01-01

    Instruction of neuroanatomy depends on graphical representation and extended self-study. As a consequence, computer-based learning environments that incorporate interactive graphics should facilitate instruction in this area. The present study evaluated such a system in the undergraduate neuroscience classroom. The system used the method of…

  9. [Genotoxic modification of nucleic acid bases and its biological consequences. Review and prospects of experimental and computational investigations]

    NASA Technical Reports Server (NTRS)

    Poltev, V. I.; Bruskov, V. I.; Shuliupina, N. V.; Rein, R.; Shibata, M.; Ornstein, R.; Miller, J.

    1993-01-01

    The review is presented of experimental and computational data on the influence of genotoxic modification of bases (deamination, alkylation, oxidation) on the structure and biological functioning of nucleic acids. Pathways are discussed for the influence of modification on coding properties of bases, on possible errors of nucleic acid biosynthesis, and on configurations of nucleotide mispairs. The atomic structure of nucleic acid fragments with modified bases and the role of base damages in mutagenesis and carcinogenesis are considered.

  10. Creation and Development of an Integrated Model of New Technologies and ESP

    ERIC Educational Resources Information Center

    Garcia Laborda, Jesus

    2004-01-01

    It seems irrefutable that the world is progressing in concert with computer science. Educational applications and projects for first and second language acquisition have not been left behind. However, currently it seems that the reputation of completely computer-based language learning courses has taken a nosedive, and, consequently there has been…

  11. Am I Extravert or Introvert? Considering the Personality Effect toward e-Learning System

    ERIC Educational Resources Information Center

    Al-Dujaily, Amal; Kim, Jieun; Ryu, Hokyoung

    2013-01-01

    A concern of computer-based learning system design is how to accommodate learners' individual differences during learning activities. Previous research suggests that adaptive e-learning systems can effectively address such individual differences and, consequently, they enable more directed tutoring via computer-assisted instruction. In this paper,…

  12. Integrated Computer System of Management in Logistics

    NASA Astrophysics Data System (ADS)

    Chwesiuk, Krzysztof

    2011-06-01

    This paper aims at presenting a concept of an integrated computer system of management in logistics, particularly in supply and distribution chains. Consequently, the paper includes the basic idea of the concept of computer-based management in logistics and components of the system, such as CAM and CIM systems in production processes, and management systems for storage, materials flow, and for managing transport, forwarding and logistics companies. The platform which integrates computer-aided management systems is that of electronic data interchange.

  13. Acoustic and Perceptual Effects of Left-Right Laryngeal Asymmetries Based on Computational Modeling

    ERIC Educational Resources Information Center

    Samlan, Robin A.; Story, Brad H.; Lotto, Andrew J.; Bunton, Kate

    2014-01-01

    Purpose: Computational modeling was used to examine the consequences of 5 different laryngeal asymmetries on acoustic and perceptual measures of vocal function. Method: A kinematic vocal fold model was used to impose 5 laryngeal asymmetries: adduction, edge bulging, nodal point ratio, amplitude of vibration, and starting phase. Thirty /a/ and /?/…

  14. A comparative study of multi-sensor data fusion methods for highly accurate assessment of manufactured parts

    NASA Astrophysics Data System (ADS)

    Hannachi, Ammar; Kohler, Sophie; Lallement, Alex; Hirsch, Ernest

    2015-04-01

    3D modeling of scene contents is of increasing importance for many computer-vision-based applications. In particular, industrial applications of computer vision require efficient tools for computing this 3D information. Stereo-vision is routinely used to obtain the 3D outline of imaged objects from the corresponding 2D images, but because it recovers only outlines it provides a poor and partial description of the scene contents. On the other hand, structured-light-based reconstruction techniques can often compute the 3D surfaces of imaged objects with high accuracy, yet the resulting active range data do not characterize the object edges. Thus, to benefit from the strengths of both acquisition techniques, this paper introduces promising approaches for computing a complete 3D reconstruction based on the cooperation of two complementary acquisition and processing techniques, in our case stereoscopic and structured-light-based methods, providing two 3D data sets that describe, respectively, the outlines and the surfaces of the imaged objects. We present, accordingly, the principles of three fusion techniques and their comparison based on evaluation criteria related to the nature of the workpiece and the type of application addressed. The proposed fusion methods rely on geometric characteristics of the workpiece, which favour the quality of the registration. Further, the results obtained demonstrate that the developed approaches are well suited to 3D modeling of manufactured parts, including free-form surfaces, and consequently to quality control applications using these 3D reconstructions.
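
    The fusion techniques above depend on registering the stereo-derived outline points with the structured-light surface data. A minimal, generic sketch of a rigid least-squares alignment between two corresponding 3D point sets (the Kabsch/SVD approach, not the authors' algorithm; the point arrays are hypothetical):

    ```python
    import numpy as np

    def rigid_align(source, target):
        """Least-squares rotation R and translation t mapping source points onto target points
        (corresponding points, one per row)."""
        src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
        H = (source - src_c).T @ (target - tgt_c)                    # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
        R = Vt.T @ D @ U.T
        return R, tgt_c - R @ src_c

    # Hypothetical correspondences: stereo outline points vs. the structured-light frame.
    rng = np.random.default_rng(1)
    outline = rng.random((20, 3))
    true_R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    surface_frame = outline @ true_R.T + np.array([0.1, 0.2, 0.0])
    R, t = rigid_align(outline, surface_frame)
    print(np.allclose(R, true_R), np.round(t, 3))
    ```

    In the paper's setting, the correspondences would come from the geometric characteristics of the workpiece rather than being given in advance.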

  15. Alcohol Interventions Among Underage Drinkers in the ED: A Randomized Controlled Trial.

    PubMed

    Cunningham, Rebecca M; Chermack, Stephen T; Ehrlich, Peter F; Carter, Patrick M; Booth, Brenda M; Blow, Frederic C; Barry, Kristen L; Walton, Maureen A

    2015-10-01

    This study examined the efficacy of emergency department (ED)-based brief interventions (BIs), delivered by a computer or therapist, with and without a post-ED session, on alcohol consumption and consequences over 12 months. Patients (ages 14-20 years) screening positive for risky drinking were randomized to: computer BI (n = 277), therapist BI (n = 278), or control (n = 281). After the 3-month follow-up, participants were randomized to receive a post-ED BI session or control. Incorporating motivational interviewing, the BIs addressed alcohol consumption and consequences, including driving under the influence (DUI), and alcohol-related injury, as well as other concomitant drug use. The computer BI was an offline, Facebook-styled program. Among 4389 patients screened, 1054 patients reported risky drinking and 836 were enrolled in the randomized controlled trial. Regression models examined the main effects of the intervention conditions (versus control) and the interaction effects (ED condition × post-ED condition) on primary outcomes. The therapist and computer BIs significantly reduced consumption at 3 months, consequences at 3 and 12 months, and prescription drug use at 12 months; the computer BI reduced the frequency of DUI at 12 months; and the therapist BI reduced the frequency of alcohol-related injury at 12 months. The post-ED session reduced alcohol consequences at 6 months, benefiting those who had not received a BI in the ED. A single-session BI, delivered by a computer or therapist in the ED, shows promise for underage drinkers. Findings for the fully automated stand-alone computer BI are particularly appealing given the ease of future implementation. Copyright © 2015 by the American Academy of Pediatrics.

  16. Using E-Exercise Bases in Mathematics: Case Studies at University

    ERIC Educational Resources Information Center

    Cazes, Claire; Gueudet, Ghislaine; Hersant, Magali; Vandebrouck, Fabrice

    2006-01-01

    E-Exercise Bases (EEB) are now used in the teaching of mathematics, especially at university. We discuss here the consequences of their use on the students' activity during computer lab sessions. Results stem from observations of several teaching designs organised in different French universities with three e-exercise bases. The analysis focuses…

  17. A Fully Distributed Approach to the Design of a KBIT/SEC VHF Packet Radio Network,

    DTIC Science & Technology

    1984-02-01

    topological change and consequent out-moded routing data. Algorithm development has been aided by computer simulation using a finite state machine technique to model a realistic network of up to fifty nodes. This is...use of computer-based equipment in weapons systems and their associated sensors and command and control elements and the trend from voice to data

  18. Dual Coding Theory and Computer Education: Some Media Experiments To Examine the Effects of Different Media on Learning.

    ERIC Educational Resources Information Center

    Alty, James L.

    Dual Coding Theory has quite specific predictions about how information in different media is stored, manipulated and recalled. Different combinations of media are expected to have significant effects upon the recall and retention of information. This obviously may have important consequences in the design of computer-based programs. The paper…

  19. Immersive, Interactive, Web-Enabled Computer Simulation as a Trigger for Learning: The next Generation of Problem-Based Learning in Educational Leadership

    ERIC Educational Resources Information Center

    Mann, Dale; Reardon, R. M.; Becker, J. D.; Shakeshaft, C.; Bacon, Nicholas

    2011-01-01

    This paper describes the use of advanced computer technology in an innovative educational leadership program. This program integrates full-motion video scenarios that simulate the leadership challenges typically faced by principals over the course of a full school year. These scenarios require decisions that are then coupled to consequences and…

  20. Evaluation of Item-Based Top-N Recommendation Algorithms

    DTIC Science & Technology

    2000-09-15

    Furthermore, one of the advantages of the item-based algorithm is that it has much smaller computational requirements ... items, utilized by many e-commerce sites, cannot take advantage of pre-computed user-to-user similarities. Consequently, even though the throughput of ... Table 1 (datasets; last column is non-zeros): ecommerce 6667 17491 91222; catalog 50918 39080 435524; ccard 42629 68793 398619; skills 4374 2125 82612; movielens 943 1682 100000.
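
    As a rough sketch of the item-based scheme evaluated in this report (item-item cosine similarities pre-computed offline, then top-N recommendation from a user's basket), assuming a tiny hypothetical user-item matrix:

    ```python
    import numpy as np

    # Hypothetical binary user-item matrix (rows: users, columns: items).
    R = np.array([
        [1, 1, 0, 0, 1],
        [0, 1, 1, 0, 0],
        [1, 0, 1, 1, 0],
        [0, 1, 0, 1, 1],
    ], dtype=float)

    # Pre-compute item-item cosine similarities (the step that avoids online user-to-user work).
    norms = np.linalg.norm(R, axis=0)
    norms[norms == 0] = 1.0
    sim = (R.T @ R) / np.outer(norms, norms)
    np.fill_diagonal(sim, 0.0)

    def top_n(basket, n=2):
        """Recommend the n items most similar to the items already in the basket."""
        scores = sim[:, basket].sum(axis=1)
        scores[basket] = -np.inf            # never recommend what is already owned
        return np.argsort(scores)[::-1][:n]

    print(top_n(basket=[0, 4]))             # items most similar to items 0 and 4
    ```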

  1. Policy Issues in Computer Education. Assessing the Cognitive Consequences of Computer Environments for Learning (ACCCEL).

    ERIC Educational Resources Information Center

    Linn, Marcia

    This paper analyzes the capabilities of the computer learning environment identified by the Assessing the Cognitive Consequences of Computer Environments for Learning (ACCCEL) Project, augments the analysis with experimental work, and discusses how schools can implement policies which provide for the maximum potential of computers. The ACCCEL…

  2. Flexible processing and the design of grammar.

    PubMed

    Sag, Ivan A; Wasow, Thomas

    2015-02-01

    We explore the consequences of letting the incremental and integrative nature of language processing inform the design of competence grammar. What emerges is a view of grammar as a system of local monotonic constraints that provide a direct characterization of the signs (the form-meaning correspondences) of a given language. This "sign-based" conception of grammar has provided precise solutions to the key problems long thought to motivate movement-based analyses, has supported three decades of computational research developing large-scale grammar implementations, and is now beginning to play a role in computational psycholinguistics research that explores the use of underspecification in the incremental computation of partial meanings.

  3. [Problem of bioterrorism under modern conditions].

    PubMed

    Vorob'ev, A A; Boev, B V; Bondarenko, V M; Gintsburg, A L

    2002-01-01

    It is practically impossible to discuss the problem of bioterrorism (BT) and to develop effective programs for decreasing the losses and expenses suffered by society from BT acts without evaluation of the threat and prognosis of consequences based on research and empiric data. The strained international situation following the act of terrorism (the attack on the USA) on September 11, 2001, makes scenarios of bacterial weapon use (the causative agents of plague, smallpox, anthrax, etc.) by international terrorists most probable. In this connection, studies on the analysis and prognostication of the consequences of BT, including mathematical and computer modelling, are necessary. The authors present the results of initiative studies on the analysis and prognostication of the consequences of a hypothetical act of BT with the use of the smallpox causative agent in a city with a population of about 1,000,000 inhabitants. The analytical prognostic studies on the operative analysis and prognostication of the consequences of the BT act with the use of the smallpox causative agent have demonstrated that the mathematical (computer) model of an epidemic outbreak of smallpox is an effective instrument for calculation studies. Prognostic evaluations of the consequences of the act of BT under the conditions of different reactions of public health services (time of detection, interventions) have been obtained with the use of modelling. In addition, the computer model is necessary for training health specialists to react adequately to acts of BT with the use of different kinds of bacteriological weapons.
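
    The record above uses a computer model of a hypothetical smallpox outbreak to compare consequences under different public-health response times. A minimal sketch in that spirit (a generic SEIR difference model with purely illustrative parameter values, not the authors' model):

    ```python
    def seir_outbreak(pop=1_000_000, beta=0.6, sigma=1/12, gamma=1/9,
                      detection_day=30, control_factor=0.15, days=200):
        """Daily SEIR updates; from detection_day on, transmission is cut to control_factor * beta.
        All parameter values are illustrative assumptions."""
        s, e, i, r = pop - 1.0, 0.0, 1.0, 0.0
        total_infected = 1.0
        for day in range(days):
            b = beta * control_factor if day >= detection_day else beta
            new_exposed = b * s * i / pop
            new_infectious = sigma * e
            new_recovered = gamma * i
            s -= new_exposed
            e += new_exposed - new_infectious
            i += new_infectious - new_recovered
            r += new_recovered
            total_infected += new_exposed
        return total_infected

    # Earlier detection, and hence earlier intervention, sharply reduces the outbreak size.
    for d in (20, 30, 45):
        print(d, round(seir_outbreak(detection_day=d)))
    ```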

  4. Anterior Cingulate Cortex Instigates Adaptive Switches in Choice by Integrating Immediate and Delayed Components of Value in Ventromedial Prefrontal Cortex

    PubMed Central

    Guitart-Masip, Marc; Kurth-Nelson, Zeb; Dolan, Raymond J.

    2014-01-01

    Actions can lead to an immediate reward or punishment and a complex set of delayed outcomes. Adaptive choice necessitates that the brain track and integrate both of these potential consequences. Here, we designed a sequential task whereby the decision to exploit or forego an available offer was contingent on comparing immediate value and a state-dependent future cost of expending a limited resource. Crucially, the dynamics of the task demanded frequent switches in policy based on an online computation of changing delayed consequences. We found that human subjects choose on the basis of a near-optimal integration of immediate reward and delayed consequences, with the latter computed in a prefrontal network. Within this network, anterior cingulate cortex (ACC) was dynamically coupled to ventromedial prefrontal cortex (vmPFC) when adaptive switches in choice were required. Our results suggest a choice architecture whereby interactions between ACC and vmPFC underpin an integration of immediate and delayed components of value to support flexible policy switching that accommodates the potential delayed consequences of an action. PMID:24573291

  5. Anterior cingulate cortex instigates adaptive switches in choice by integrating immediate and delayed components of value in ventromedial prefrontal cortex.

    PubMed

    Economides, Marcos; Guitart-Masip, Marc; Kurth-Nelson, Zeb; Dolan, Raymond J

    2014-02-26

    Actions can lead to an immediate reward or punishment and a complex set of delayed outcomes. Adaptive choice necessitates that the brain track and integrate both of these potential consequences. Here, we designed a sequential task whereby the decision to exploit or forego an available offer was contingent on comparing immediate value and a state-dependent future cost of expending a limited resource. Crucially, the dynamics of the task demanded frequent switches in policy based on an online computation of changing delayed consequences. We found that human subjects choose on the basis of a near-optimal integration of immediate reward and delayed consequences, with the latter computed in a prefrontal network. Within this network, anterior cingulate cortex (ACC) was dynamically coupled to ventromedial prefrontal cortex (vmPFC) when adaptive switches in choice were required. Our results suggest a choice architecture whereby interactions between ACC and vmPFC underpin an integration of immediate and delayed components of value to support flexible policy switching that accommodates the potential delayed consequences of an action.

  6. Blocking of Goal-Location Learning Based on Shape

    ERIC Educational Resources Information Center

    Alexander, Tim; Wilson, Stuart P.; Wilson, Paul N.

    2009-01-01

    Using desktop, computer-simulated virtual environments (VEs), the authors conducted 5 experiments to investigate blocking of learning about a goal location based on Shape B as a consequence of preliminary training to locate that goal using Shape A. The shapes were large 2-dimensional horizontal figures on the ground. Blocking of spatial learning…

  7. Evaluating the Cognitive Consequences of Playing "Portal" for a Short Duration

    ERIC Educational Resources Information Center

    Adams, Deanne M.; Pilegard, Celeste; Mayer, Richard E.

    2016-01-01

    Learning physics often requires overcoming common misconceptions based on naïve interpretations of observations in the everyday world. One proposed way to help learners build appropriate physics intuitions is to expose them to computer simulations in which motion is based on Newtonian principles. In addition, playing video games that require…

  8. Understanding and Predicting Student Self-Regulated Learning Strategies in Game-Based Learning Environments

    ERIC Educational Resources Information Center

    Sabourin, Jennifer L.; Shores, Lucy R.; Mott, Bradford W.; Lester, James C.

    2013-01-01

    Self-regulated learning behaviors such as goal setting and monitoring have been found to be crucial to students' success in computer-based learning environments. Consequently, understanding students' self-regulated learning behavior has been the subject of increasing attention. Unfortunately, monitoring these behaviors in real-time has…

  9. Evaluation of the Intel iWarp parallel processor for space flight applications

    NASA Technical Reports Server (NTRS)

    Hine, Butler P., III; Fong, Terrence W.

    1993-01-01

    The potential of a DARPA-sponsored advanced processor, the Intel iWarp, for use in future SSF Data Management Systems (DMS) upgrades is evaluated through integration into the Ames DMS testbed and applications testing. The iWarp is a distributed, parallel computing system well suited for high performance computing applications such as matrix operations and image processing. The system architecture is modular, supports systolic and message-based computation, and is capable of providing massive computational power in a low-cost, low-power package. As a consequence, the iWarp offers significant potential for advanced space-based computing. This research seeks to determine the iWarp's suitability as a processing device for space missions. In particular, the project focuses on evaluating the ease of integrating the iWarp into the SSF DMS baseline architecture and the iWarp's ability to support computationally stressing applications representative of SSF tasks.

  10. CUDA-based real time surgery simulation.

    PubMed

    Liu, Youquan; De, Suvranu

    2008-01-01

    In this paper we present a general software platform that enables real time surgery simulation on the newly available compute unified device architecture (CUDA) from NVIDIA. CUDA-enabled GPUs harness the power of 128 processors, which allow data-parallel computations. Compared to previous GPGPU approaches, CUDA is significantly more flexible, with a C language interface. We report implementation of both collision detection and consequent deformation computation algorithms. Our test results indicate that CUDA enables a twenty-fold speedup for collision detection and about a fifteen-fold speedup for deformation computation on an Intel Core 2 Quad 2.66 GHz machine with a GeForce 8800 GTX.

  11. Lattice surgery on the Raussendorf lattice

    NASA Astrophysics Data System (ADS)

    Herr, Daniel; Paler, Alexandru; Devitt, Simon J.; Nori, Franco

    2018-07-01

    Lattice surgery is a method to perform quantum computation fault-tolerantly by using operations on boundary qubits between different patches of the planar code. This technique allows for universal planar code computation without eliminating the intrinsic two-dimensional nearest-neighbor properties of the surface code that eases physical hardware implementations. Lattice surgery approaches to algorithmic compilation and optimization have been demonstrated to be more resource efficient for resource-intensive components of a fault-tolerant algorithm, and consequently may be preferable over braid-based logic. Lattice surgery can be extended to the Raussendorf lattice, providing a measurement-based approach to the surface code. In this paper we describe how lattice surgery can be performed on the Raussendorf lattice and therefore give a viable alternative to computation using braiding in measurement-based implementations of topological codes.

  12. Bayesian inference and decision theory - A framework for decision making in natural resource management

    USGS Publications Warehouse

    Dorazio, R.M.; Johnson, F.A.

    2003-01-01

    Bayesian inference and decision theory may be used in the solution of relatively complex problems of natural resource management, owing to recent advances in statistical theory and computing. In particular, Markov chain Monte Carlo algorithms provide a computational framework for fitting models of adequate complexity and for evaluating the expected consequences of alternative management actions. We illustrate these features using an example based on management of waterfowl habitat.
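
    To make the MCMC-plus-decision idea above concrete, here is a minimal generic sketch (not the USGS waterfowl example): a Metropolis sampler for a binomial survival probability, followed by a comparison of the expected consequences of two hypothetical management actions under the posterior. The data, utilities, and tuning constants are all assumptions.

    ```python
    import math
    import random

    random.seed(42)
    n, k = 60, 36                       # hypothetical data: 36 of 60 marked birds survived

    def log_posterior(p):
        """Binomial likelihood with a flat prior on (0, 1)."""
        if not 0.0 < p < 1.0:
            return -math.inf
        return k * math.log(p) + (n - k) * math.log(1.0 - p)

    # Random-walk Metropolis sampler.
    samples, p = [], 0.5
    for step in range(20_000):
        proposal = p + random.gauss(0.0, 0.05)
        delta = log_posterior(proposal) - log_posterior(p)
        if delta >= 0 or random.random() < math.exp(delta):
            p = proposal
        if step >= 5_000:               # discard burn-in
            samples.append(p)

    # Expected consequence of two hypothetical actions as a function of survival p.
    def utility_restrictive(p): return 100 * p        # conservative harvest
    def utility_liberal(p):     return 140 * p - 30   # higher yield, higher downside

    eu_r = sum(map(utility_restrictive, samples)) / len(samples)
    eu_l = sum(map(utility_liberal, samples)) / len(samples)
    print(f"posterior mean p = {sum(samples)/len(samples):.3f}; "
          f"E[utility] restrictive = {eu_r:.1f}, liberal = {eu_l:.1f}")
    ```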

  13. A hybrid optical switch architecture to integrate IP into optical networks to provide flexible and intelligent bandwidth on demand for cloud computing

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Hall, Trevor J.

    2013-12-01

    The Internet is entering an era of cloud computing to provide more cost effective, eco-friendly and reliable services to consumer and business users. As a consequence, the nature of the Internet traffic has been fundamentally transformed from a pure packet-based pattern to today's predominantly flow-based pattern. Cloud computing has also brought about an unprecedented growth in the Internet traffic. In this paper, a hybrid optical switch architecture is presented to deal with the flow-based Internet traffic, aiming to offer flexible and intelligent bandwidth on demand to improve fiber capacity utilization. The hybrid optical switch is capable of integrating IP into optical networks for cloud-based traffic with predictable performance, for which the delay performance of the electronic module in the hybrid optical switch architecture is evaluated through simulation.

  14. Flume experimental evaluation of the effect of rill flow path tortuosity on rill roughness based on the Manning–Strickler equation

    USDA-ARS's Scientific Manuscript database

    Numerous soil erosion models compute concentrated flow hydraulics based on the Manning–Strickler equation (v = k_St · R^(2/3) · I^(1/2)) even though the range of the application on rill flow is unclear. Unconfined rill morphologies generate local friction effects and consequently spatially variable rill roughn...
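
    For reference, a small sketch of the Manning–Strickler computation named above, with illustrative (assumed) values for a shallow rill; v is mean flow velocity, k_St the Strickler coefficient, R the hydraulic radius, and I the slope.

    ```python
    def manning_strickler_velocity(k_st, hydraulic_radius_m, slope):
        """Mean flow velocity v = k_St * R**(2/3) * I**(1/2) in SI units (m/s)."""
        return k_st * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

    # Assumed values: k_St ~ 30 m^(1/3)/s for a rough earthen rill, R = 0.01 m, I = 5%.
    print(round(manning_strickler_velocity(30.0, 0.01, 0.05), 3), "m/s")
    ```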

  15. Computing with motile bio-agents

    NASA Astrophysics Data System (ADS)

    Nicolau, Dan V., Jr.; Burrage, Kevin; Nicolau, Dan V.

    2007-12-01

    We describe a model of computation of the parallel type, which we call 'computing with bio-agents', based on the concept that motions of biological objects such as bacteria or protein molecular motors in confined spaces can be regarded as computations. We begin with the observation that the geometric nature of the physical structures in which model biological objects move modulates the motions of the latter. Consequently, by changing the geometry, one can control the characteristic trajectories of the objects; on the basis of this, we argue that such systems are computing devices. We investigate the computing power of mobile bio-agent systems and show that they are computationally universal in the sense that they are capable of computing any Boolean function in parallel. We argue also that using appropriate conditions, bio-agent systems can solve NP-complete problems in probabilistic polynomial time.

  16. Adiabatic quantum computation in open systems.

    PubMed

    Sarandy, M S; Lidar, D A

    2005-12-16

    We analyze the performance of adiabatic quantum computation (AQC) subject to decoherence. To this end, we introduce an inherently open-systems approach, based on a recent generalization of the adiabatic approximation. In contrast to closed systems, we show that a system may initially be in an adiabatic regime, but then undergo a transition to a regime where adiabaticity breaks down. As a consequence, the success of AQC depends sensitively on the competition between various pertinent rates, giving rise to optimality criteria.

  17. Comparison of Uncalibrated Rgbvi with Spectrometer-Based Ndvi Derived from Uav Sensing Systems on Field Scale

    NASA Astrophysics Data System (ADS)

    Bareth, G.; Bolten, A.; Gnyp, M. L.; Reusch, S.; Jasper, J.

    2016-06-01

    The development of UAV-based sensing systems for agronomic applications serves the improvement of crop management. The latter is in the focus of precision agriculture which intends to optimize yield, fertilizer input, and crop protection. Besides, in some cropping systems vehicle-based sensing devices are less suitable because fields cannot be entered from certain growing stages onwards. This is true for rice, maize, sorghum, and many more crops. Consequently, UAV-based sensing approaches fill a niche of very high resolution data acquisition on the field scale in space and time. While mounting RGB digital compact cameras to low-weight UAVs (< 5 kg) is well established, the miniaturization of sensors in the last years also enables hyperspectral data acquisition from those platforms. From both, RGB and hyperspectral data, vegetation indices (VIs) are computed to estimate crop growth parameters. In this contribution, we compare two different sensing approaches from a low-weight UAV platform (< 5 kg) for monitoring a nitrogen field experiment of winter wheat and a corresponding farmers' field in Western Germany. (i) A standard digital compact camera was flown to acquire RGB images which are used to compute the RGBVI and (ii) NDVI is computed from a newly modified version of the Yara N-Sensor. The latter is a well-established tractor-based hyperspectral sensor for crop management and is available on the market since a decade. It was modified for this study to fit the requirements of UAV-based data acquisition. Consequently, we focus on three objectives in this contribution: (1) to evaluate the potential of the uncalibrated RGBVI for monitoring nitrogen status in winter wheat, (2) investigate the UAV-based performance of the modified Yara N-Sensor, and (3) compare the results of the two different UAV-based sensing approaches for winter wheat.
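
    A brief sketch of the two vegetation indices compared above, computed per pixel from band arrays. The RGBVI form used here, (G^2 - B*R) / (G^2 + B*R), follows the definition reported in the related Bendig/Bareth publications and is an assumption, since the abstract does not spell it out; the arrays are hypothetical.

    ```python
    import numpy as np

    def rgbvi(red, green, blue):
        """RGBVI = (G^2 - B*R) / (G^2 + B*R), per pixel."""
        g2 = green.astype(float) ** 2
        br = blue.astype(float) * red.astype(float)
        return (g2 - br) / (g2 + br + 1e-12)

    def ndvi(red, nir):
        """NDVI = (NIR - Red) / (NIR + Red), per pixel."""
        red, nir = red.astype(float), nir.astype(float)
        return (nir - red) / (nir + red + 1e-12)

    # Hypothetical 2x2 reflectance patches.
    red   = np.array([[0.10, 0.30], [0.12, 0.25]])
    green = np.array([[0.20, 0.25], [0.22, 0.24]])
    blue  = np.array([[0.08, 0.20], [0.09, 0.18]])
    nir   = np.array([[0.60, 0.35], [0.55, 0.40]])
    print(np.round(rgbvi(red, green, blue), 2))
    print(np.round(ndvi(red, nir), 2))
    ```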

  18. Kinetics of water loss and the likelihood of intracellular freezing in mouse ova

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazur, P.; Rall, W.F.; Leibo, S.P.

    To avoid intracellular freezing and its usually lethal consequences, cells must lose their freezable water before reaching their ice-nucleation temperature. One major factor determining the rate of water loss is the temperature dependence of water permeability, L_p (hydraulic conductivity). Because of the paucity of water permeability measurements at subzero temperatures, that temperature dependence has usually been extrapolated from above-zero measurements. The extrapolation has often been based on an exponential dependence of L_p on temperature. This paper compares the kinetics of water loss based on that extrapolation with that based on an Arrhenius relation between L_p and temperature, and finds substantial differences below -20 to -25 °C. Since the ice-nucleation temperature of mouse ova in the cryoprotectants DMSO and glycerol is usually below -30 °C, the Arrhenius form of the water-loss equation was used to compute the extent of supercooling in ova cooled at rates between 1 and 8 °C/min and the consequent likelihood of intracellular freezing. The predicted likelihood agrees well with that previously observed. The water-loss equation was also used to compute the volumes of ova as a function of cooling rate and temperature. The computed cell volumes agree qualitatively with previously observed volumes, but differ quantitatively. 25 references, 5 figures, 3 tables.
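
    To illustrate the comparison drawn above between an exponential and an Arrhenius extrapolation of L_p to subzero temperatures (with made-up reference values, not the paper's fitted parameters):

    ```python
    import math

    LP_REF = 1.0        # relative hydraulic conductivity at the 20 C reference (assumed)
    B = 0.07            # exponential temperature coefficient, 1/C (assumed)
    EA = 55_000.0       # Arrhenius activation energy, J/mol (assumed)
    R_GAS = 8.314       # J/(mol K)
    T_REF_K = 293.15

    def lp_exponential(t_celsius):
        return LP_REF * math.exp(B * (t_celsius - 20.0))

    def lp_arrhenius(t_celsius):
        t_k = t_celsius + 273.15
        return LP_REF * math.exp(-EA / R_GAS * (1.0 / t_k - 1.0 / T_REF_K))

    # The two extrapolations diverge increasingly below about -20 C.
    for t in (0, -10, -20, -30):
        print(f"{t:>4} C   exponential {lp_exponential(t):.3f}   Arrhenius {lp_arrhenius(t):.3f}")
    ```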

  19. Early Prediction of Student Self-Regulation Strategies by Combining Multiple Models

    ERIC Educational Resources Information Center

    Sabourin, Jennifer L.; Mott, Bradford W.; Lester, James C.

    2012-01-01

    Self-regulated learning behaviors such as goal setting and monitoring have been found to be crucial to students' success in computer-based learning environments. Consequently, understanding students' self-regulated learning behavior has been the subject of increasing interest. Unfortunately, monitoring these behaviors in real-time has proven…

  20. Assessing the anticipated consequences of Computer-based Provider Order Entry at three community hospitals using an open-ended, semi-structured survey instrument.

    PubMed

    Sittig, Dean F; Ash, Joan S; Guappone, Ken P; Campbell, Emily M; Dykstra, Richard H

    2008-07-01

    To determine what "average" clinicians in organizations that were about to implement Computer-based Provider Order Entry (CPOE) were expecting to occur, we conducted an open-ended, semi-structured survey at three community hospitals. We created an open-ended, semi-structured, interview survey template that we customized for each organization. This interview-based survey was designed to be administered orally to clinicians and take approximately 5 min to complete, although clinicians were allowed to discuss as many advantages or disadvantages of the impending system roll-out as they wanted to. Our survey findings did not reveal any overly negative, critical, problematic, or striking sets of concerns. However, from the standpoint of unintended consequences, we found that clinicians were anticipating only a few of the events, emotions, and process changes that are likely to result from CPOE. The results of such an open-ended survey may prove useful in helping CPOE leaders to understand user perceptions and predictions about CPOE, because it can expose issues about which more communication, or discussion, is needed. Using the survey, implementation strategies and management techniques outlined in this paper, any chief information officer (CIO) or chief medical information officer (CMIO) should be able to adequately assess their organization's CPOE readiness, make the necessary mid-course corrections, and be prepared to deal with the currently identified unintended consequences of CPOE should they occur.

  1. Model-Based and Model-Free Pavlovian Reward Learning: Revaluation, Revision and Revelation

    PubMed Central

    Dayan, Peter; Berridge, Kent C.

    2014-01-01

    Evidence supports at least two methods for learning about reward and punishment and making predictions for guiding actions. One method, called model-free, progressively acquires cached estimates of the long-run values of circumstances and actions from retrospective experience. The other method, called model-based, uses representations of the environment, expectations and prospective calculations to make cognitive predictions of future value. Extensive attention has been paid to both methods in computational analyses of instrumental learning. By contrast, although a full computational analysis has been lacking, Pavlovian learning and prediction has typically been presumed to be solely model-free. Here, we revise that presumption and review compelling evidence from Pavlovian revaluation experiments showing that Pavlovian predictions can involve their own form of model-based evaluation. In model-based Pavlovian evaluation, prevailing states of the body and brain influence value computations, and thereby produce powerful incentive motivations that can sometimes be quite new. We consider the consequences of this revised Pavlovian view for the computational landscape of prediction, response and choice. We also revisit differences between Pavlovian and instrumental learning in the control of incentive motivation. PMID:24647659
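
    As a toy illustration of the distinction reviewed above (a hypothetical one-cue example, not the authors' formal analysis): a model-free agent caches a value for the cue and changes it only through further experience, whereas a model-based agent re-evaluates the cue's predicted outcome at its current worth, so devaluing the outcome changes the model-based prediction immediately.

    ```python
    ALPHA = 0.2                        # learning rate for the cached (model-free) value

    current_food_value = 1.0           # the agent's current valuation of the food outcome
    cached_value = 0.0                 # model-free: one cached number for the cue
    learned_model = {"cue": "food"}    # model-based: the cue's predicted outcome identity

    # Learning phase: repeated cue-food pairings while food is valuable.
    for _ in range(50):
        cached_value += ALPHA * (current_food_value - cached_value)   # TD-style cache update

    def model_based_value():
        # Prospectively evaluate the predicted outcome at its *current* value.
        return current_food_value if learned_model["cue"] == "food" else 0.0

    # Revaluation: the food is devalued (e.g., by satiety) without re-experiencing the cue.
    current_food_value = 0.0
    print("cached (model-free):", round(cached_value, 2),
          "| model-based:", model_based_value())
    ```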

  2. Item Difficulty in the Evaluation of Computer-Based Instruction: An Example from Neuroanatomy

    PubMed Central

    Chariker, Julia H.; Naaz, Farah; Pani, John R.

    2012-01-01

    This article reports large item effects in a study of computer-based learning of neuroanatomy. Outcome measures of the efficiency of learning, transfer of learning, and generalization of knowledge diverged by a wide margin across test items, with certain sets of items emerging as particularly difficult to master. In addition, the outcomes of comparisons between instructional methods changed with the difficulty of the items to be learned. More challenging items better differentiated between instructional methods. This set of results is important for two reasons. First, it suggests that instruction may be more efficient if sets of consistently difficult items are the targets of instructional methods particularly suited to them. Second, there is wide variation in the published literature regarding the outcomes of empirical evaluations of computer-based instruction. As a consequence, many questions arise as to the factors that may affect such evaluations. The present paper demonstrates that the level of challenge in the material that is presented to learners is an important factor to consider in the evaluation of a computer-based instructional system. PMID:22231801

  3. Item difficulty in the evaluation of computer-based instruction: an example from neuroanatomy.

    PubMed

    Chariker, Julia H; Naaz, Farah; Pani, John R

    2012-01-01

    This article reports large item effects in a study of computer-based learning of neuroanatomy. Outcome measures of the efficiency of learning, transfer of learning, and generalization of knowledge diverged by a wide margin across test items, with certain sets of items emerging as particularly difficult to master. In addition, the outcomes of comparisons between instructional methods changed with the difficulty of the items to be learned. More challenging items better differentiated between instructional methods. This set of results is important for two reasons. First, it suggests that instruction may be more efficient if sets of consistently difficult items are the targets of instructional methods particularly suited to them. Second, there is wide variation in the published literature regarding the outcomes of empirical evaluations of computer-based instruction. As a consequence, many questions arise as to the factors that may affect such evaluations. The present article demonstrates that the level of challenge in the material that is presented to learners is an important factor to consider in the evaluation of a computer-based instructional system. Copyright © 2011 American Association of Anatomists.

  4. Model-based and model-free Pavlovian reward learning: revaluation, revision, and revelation.

    PubMed

    Dayan, Peter; Berridge, Kent C

    2014-06-01

    Evidence supports at least two methods for learning about reward and punishment and making predictions for guiding actions. One method, called model-free, progressively acquires cached estimates of the long-run values of circumstances and actions from retrospective experience. The other method, called model-based, uses representations of the environment, expectations, and prospective calculations to make cognitive predictions of future value. Extensive attention has been paid to both methods in computational analyses of instrumental learning. By contrast, although a full computational analysis has been lacking, Pavlovian learning and prediction has typically been presumed to be solely model-free. Here, we revise that presumption and review compelling evidence from Pavlovian revaluation experiments showing that Pavlovian predictions can involve their own form of model-based evaluation. In model-based Pavlovian evaluation, prevailing states of the body and brain influence value computations, and thereby produce powerful incentive motivations that can sometimes be quite new. We consider the consequences of this revised Pavlovian view for the computational landscape of prediction, response, and choice. We also revisit differences between Pavlovian and instrumental learning in the control of incentive motivation.

  5. RISKIND: an enhanced computer code for National Environmental Policy Act transportation consequence analysis

    DOT National Transportation Integrated Search

    1996-01-01

    The RISKIND computer program was developed for the analysis of radiological consequences and health risks to individuals and the collective population from exposures associated with the transportation of spent nuclear fuel (SNF) or other radioactive ...

  6. 76 FR 23854 - Reclassification of Motorcycles (Two and Three Wheeled Vehicles) in the Guide to Reporting...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-28

    ... and the unintended consequences of misclassification. Harley Davidson Motor Company (HDMC) stated that... concerns about the administrative, logistical and financial burdens of providing information based on the... estimated that the cost of updating their computers to process the information included in the new guidance...

  7. The Historical and Situated Nature of Design Experiments--Implications for Data Analysis

    ERIC Educational Resources Information Center

    Krange, I.; Ludvigsen, Sten

    2009-01-01

    This article is a methodological contribution to the use of design experiments in educational research. We will discuss the implications of a historical and situated interpretation to design experiments, the consequences this has for the analysis of the collected data and empirically based suggestions to improve the designs of the computer-based…

  8. Short Project-Based Learning with MATLAB Applications to Support the Learning of Video-Image Processing

    ERIC Educational Resources Information Center

    Gil, Pablo

    2017-01-01

    University courses concerning Computer Vision and Image Processing are generally taught using a traditional methodology that is focused on the teacher rather than on the students. This approach is consequently not effective when teachers seek to attain cognitive objectives involving their students' critical thinking. This manuscript covers the…

  9. Learning with Computer-Based Multimedia: Gender Effects on Efficiency

    ERIC Educational Resources Information Center

    Pohnl, Sabine; Bogner, Franz X.

    2012-01-01

    Up to now, only a few studies in multimedia learning have focused on gender effects. While research has mostly focused on learning success, the effect of gender on instructional efficiency (IE) has not yet been considered. Consequently, we used a quasi-experimental design to examine possible gender differences in the learning success, mental…

  10. Growth Dynamics of Information Search Services.

    ERIC Educational Resources Information Center

    Lindqvist, Mats

    Computer-based information search services (ISSs) of the type that provide on-line literature searches are analyzed from a systems viewpoint using a continuous simulation model. The analysis shows that the observed growth and stagnation of a typical ISS can be explained as a natural consequence of market responses to the service together with a…

  11. A New Model of Sensorimotor Coupling in the Development of Speech

    ERIC Educational Resources Information Center

    Westermann, Gert; Miranda, Eduardo Reck

    2004-01-01

    We present a computational model that learns a coupling between motor parameters and their sensory consequences in vocal production during a babbling phase. Based on the coupling, preferred motor parameters and prototypically perceived sounds develop concurrently. Exposure to an ambient language modifies perception to coincide with the sounds from…

  12. A Nation at Risk: The Economic Consequences of Neglecting Career Development.

    ERIC Educational Resources Information Center

    Jarvis, Phillip S.

    1990-01-01

    Neglect of career development at all levels, K-adult, is costly to employers, taxpayers, and individuals. The information delivered through computer-based career guidance systems is vital, but it must be accompanied by training in critical reasoning skills so that relevant information for decision making can be selected through the insight gained…

  13. Checklists for the Evaluation of Educational Software: Critical Review and Prospects.

    ERIC Educational Resources Information Center

    Tergan, Sigmar-Olaf

    1998-01-01

    Reviews strengths and weaknesses of checklists for the evaluation of computer software and outlines consequences for their practical application. Suggests an approach based on an instructional design model and a comprehensive framework to cope with problems of validity and predictive power of software evaluation. Discusses prospects of the…

  14. Fuels planning: science synthesis and integration; environmental consequences fact sheet 15: The Wildlife Habitat Response Model

    Treesearch

    David Pilliod

    2005-01-01

    The Wildlife Habitat Response Model (WHRM) is a Web-based computer tool for evaluating the potential effects of fuel-reduction projects on terrestrial wildlife habitats. It uses species-habitat associations in ponderosa pine (Pinus ponderosa), dry-type Douglas-fir (Pseudotsuga menziesii), lodgepole pine (Pinus...

  15. Removal of a foreign body from the skull base using a customized computer-designed guide bar.

    PubMed

    Wei, Ran; Xiang-Zhen, Liu; Bing, Guo; Da-Long, Shu; Ze-Ming, Tan

    2010-06-01

    Foreign bodies located at the base of the skull pose a surgical challenge. Here, a customized computer-designed surgical guide bar was designed to facilitate removal of a skull base foreign body. Within 24h of the patient's presentation, a guide bar and mounting platform were designed to remove a foreign body located adjacent to the transverse process of the atlas and pressing against the internal carotid artery. The foreign body was successfully located and removed using the custom designed guide bar and computer operative planning. Ten months postoperatively the patient was free of complaints and lacked any complications such as restricted opening of the mouth or false aneurysm. The inferior alveolar nerve damage noted immediately postoperatively (a consequence of mandibular osteotomy) was slightly reduced at follow-up, but labial numbness persisted. The navigation tools described herein were successfully employed to aid foreign body removal from the skull base. Copyright (c) 2009 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  16. Excessive computer game playing among Norwegian adults: self-reported consequences of playing and association with mental health problems.

    PubMed

    Wenzel, H G; Bakken, I J; Johansson, A; Götestam, K G; Øren, Anita

    2009-12-01

    Computer games are the most advanced form of gaming. For most people, the playing is an uncomplicated leisure activity; however, for a minority the gaming becomes excessive and is associated with negative consequences. The aim of the present study was to investigate computer game-playing behaviour in the general adult Norwegian population, and to explore mental health problems and self-reported consequences of playing. The survey includes 3,405 adults 16 to 74 years old (Norway 2007, response rate 35.3%). Overall, 65.5% of the respondents reported having ever played computer games (16-29 years, 93.9%; 30-39 years, 85.0%; 40-59 years, 56.2%; 60-74 years, 25.7%). Among 2,170 players, 89.8% reported playing less than 1 hr. as a daily average over the last month, 5.0% played 1-2 hr. daily, 3.1% played 2-4 hr. daily, and 2.2% reported playing > 4 hr. daily. The strongest risk factor for playing > 4 hr. daily was being an online player, followed by male gender, and single marital status. Reported negative consequences of computer game playing increased strongly with average daily playing time. Furthermore, prevalence of self-reported sleeping problems, depression, suicide ideations, anxiety, obsessions/ compulsions, and alcohol/substance abuse increased with increasing playing time. This study showed that adult populations should also be included in research on computer game-playing behaviour and its consequences.

  17. Review and Discussion of Children's Conceptions of Computers

    NASA Astrophysics Data System (ADS)

    Rücker, Michael T.; Pinkwart, Niels

    2016-04-01

    Today's children grow up surrounded by computers. They observe them, interact with them and, as a consequence, start forming conceptions of how they work and what they can do. Any constructivist approach to learning requires that we gain an understanding of such preconceived ideas and beliefs in order to use computers as learning tools in an effective and informed manner. In this paper, we present five such conceptions that children reportedly form about computers, based on an interdisciplinary literature review. We then evaluate how persistent these conceptions appear to be over time and in light of new technological developments. Finally, we discuss the relevance and implications of our findings for education in the contexts of conceptual pluralism and conceptual categorisation.

  18. Conifer ovulate cones accumulate pollen principally by simple impaction.

    PubMed

    Cresswell, James E; Henning, Kevin; Pennel, Christophe; Lahoubi, Mohamed; Patrick, Michael A; Young, Phillipe G; Tabor, Gavin R

    2007-11-13

    In many pine species (Family Pinaceae), ovulate cones structurally resemble a turbine, which has been widely interpreted as an adaptation for improving pollination by producing complex aerodynamic effects. We tested the turbine interpretation by quantifying patterns of pollen accumulation on ovulate cones in a wind tunnel and by using simulation models based on computational fluid dynamics. We used computer-aided design and computed tomography to create computational fluid dynamics model cones. We studied three species: Pinus radiata, Pinus sylvestris, and Cedrus libani. Irrespective of the approach or species studied, we found no evidence that turbine-like aerodynamics made a significant contribution to pollen accumulation, which instead occurred primarily by simple impaction. Consequently, we suggest alternative adaptive interpretations for the structure of ovulate cones.

  19. Conifer ovulate cones accumulate pollen principally by simple impaction

    PubMed Central

    Cresswell, James E.; Henning, Kevin; Pennel, Christophe; Lahoubi, Mohamed; Patrick, Michael A.; Young, Phillipe G.; Tabor, Gavin R.

    2007-01-01

    In many pine species (Family Pinaceae), ovulate cones structurally resemble a turbine, which has been widely interpreted as an adaptation for improving pollination by producing complex aerodynamic effects. We tested the turbine interpretation by quantifying patterns of pollen accumulation on ovulate cones in a wind tunnel and by using simulation models based on computational fluid dynamics. We used computer-aided design and computed tomography to create computational fluid dynamics model cones. We studied three species: Pinus radiata, Pinus sylvestris, and Cedrus libani. Irrespective of the approach or species studied, we found no evidence that turbine-like aerodynamics made a significant contribution to pollen accumulation, which instead occurred primarily by simple impaction. Consequently, we suggest alternative adaptive interpretations for the structure of ovulate cones. PMID:17986613

  20. Orbital and maxillofacial computer aided surgery: patient-specific finite element models to predict surgical outcomes.

    PubMed

    Luboz, Vincent; Chabanas, Matthieu; Swider, Pascal; Payan, Yohan

    2005-08-01

    This paper addresses an important issue raised for the clinical relevance of Computer-Assisted Surgical applications, namely the methodology used to automatically build patient-specific finite element (FE) models of anatomical structures. From this perspective, a method is proposed, based on a technique called the mesh-matching method, followed by a process that corrects mesh irregularities. The mesh-matching algorithm generates patient-specific volume meshes from an existing generic model. The mesh regularization process is based on the Jacobian matrix transform related to the FE reference element and the current element. This method for generating patient-specific FE models is first applied to computer-assisted maxillofacial surgery, and more precisely, to the FE elastic modelling of patient facial soft tissues. For each patient, the planned bone osteotomies (mandible, maxilla, chin) are used as boundary conditions to deform the FE face model, in order to predict the aesthetic outcome of the surgery. Seven FE patient-specific models were successfully generated by our method. For one patient, the prediction of the FE model is qualitatively compared with the patient's post-operative appearance, measured from a computed tomography scan. Then, our methodology is applied to computer-assisted orbital surgery. It is, therefore, evaluated for the generation of 11 patient-specific FE poroelastic models of the orbital soft tissues. These models are used to predict the consequences of the surgical decompression of the orbit. More precisely, an average law is extrapolated from the simulations carried out for each patient model. This law links the size of the osteotomy (i.e. the surgical gesture) and the backward displacement of the eyeball (the consequence of the surgical gesture).

  1. Packet spacing : an enabling mechanism for delivering multimedia content in computational grids /

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, A. C.; Feng, W. C.; Belford, Geneva G.

    2001-01-01

    Streaming multimedia with UDP has become increasingly popular over distributed systems like the Internet. Scientific applications that stream multimedia include remote computational steering of visualization data and video-on-demand teleconferencing over the Access Grid. However, UDP does not possess a self-regulating, congestion-control mechanism; and most best-effort traffic is served by congestion-controlled TCP. Consequently, UDP steals bandwidth from TCP such that TCP flows starve for network resources. With the volume of Internet traffic continuing to increase, the perpetuation of UDP-based streaming will cause the Internet to collapse as it did in the mid-1980's due to the use of non-congestion-controlled TCP. To address this problem, we introduce the counterintuitive notion of inter-packet spacing with control feedback to enable UDP-based applications to perform well in the next-generation Internet and computational grids. When compared with traditional UDP-based streaming, we illustrate that our approach can reduce packet loss by over 50% without adversely affecting delivered throughput. Keywords: network protocol, multimedia, packet spacing, streaming, TCP, UDP, rate-adjusting congestion control, computational grid, Access Grid.
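
    The core idea of the abstract, pacing UDP packets at a computed gap and adjusting the sending rate from receiver feedback, can be illustrated with a minimal sketch. The Python fragment below is not the authors' implementation; the loss threshold, back-off factor, probing factor, and starting rate are illustrative assumptions.

```python
# Minimal sketch of rate-adjusting inter-packet spacing (illustrative, not the
# authors' implementation): packets are paced at a gap derived from a target
# rate, and the rate is backed off when receiver feedback reports loss.

def inter_packet_gap(target_rate_bps: float, packet_bytes: int) -> float:
    """Seconds to wait between packet transmissions at the target rate."""
    return (packet_bytes * 8) / target_rate_bps

def adjust_rate(rate_bps: float, loss_fraction: float,
                backoff: float = 0.5, probe: float = 1.05,
                floor_bps: float = 64_000.0) -> float:
    """Back off multiplicatively on reported loss, probe gently upward otherwise."""
    if loss_fraction > 0.01:                  # hypothetical loss threshold
        return max(rate_bps * backoff, floor_bps)
    return rate_bps * probe

if __name__ == "__main__":
    rate = 5_000_000.0                        # illustrative 5 Mb/s starting rate
    for reported_loss in [0.0, 0.0, 0.08, 0.0]:
        gap = inter_packet_gap(rate, packet_bytes=1400)
        print(f"rate={rate/1e6:.2f} Mb/s  gap={gap*1e3:.3f} ms  loss={reported_loss:.2%}")
        rate = adjust_rate(rate, reported_loss)
```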

  2. A Computational Model of Selection by Consequences

    ERIC Educational Resources Information Center

    McDowell, J. J.

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of…

  3. The Use of a Task-Based Online Forum in Language Teaching: Learning Practices and Outcomes

    ERIC Educational Resources Information Center

    Batardière, Marie-Thérèse

    2013-01-01

    This chapter investigates students' reported patterns of use and perceived outcomes of an online intercultural exchange. It is hoped that the study will inform our understanding of the students' language learning process on an online discussion forum and consequently will help us maximise the educational potential of computer-mediated…

  4. Computer-Based Feedback in Linear Algebra: Effects on Transfer Performance and Motivation

    ERIC Educational Resources Information Center

    Corbalan, Gemma; Paas, Fred; Cuypers, Hans

    2010-01-01

    Two studies investigated the effects on students' perceptions (Study 1) and learning and motivation (Study 2) of different levels of feedback in mathematical problems. In these problems, an error made in one step of the problem-solving procedure will carry over to the following steps and consequently to the final solution. Providing immediate…

  5. Web-Mediated Problem-Based Learning and Computer Programming: Effects of Study Approach on Academic Achievement and Attitude

    ERIC Educational Resources Information Center

    Yagci, Mustafa

    2018-01-01

    In the relevant literature, it is often debated whether learning programming requires high-level thinking skills, the lack of which consequently results in the failure of students in programming. The complex nature of programming and individual differences, including study approaches, thinking styles, and the focus of supervision, all have an…

  6. Computing biological functions using BioΨ, a formal description of biological processes based on elementary bricks of actions

    PubMed Central

    Pérès, Sabine; Felicori, Liza; Rialle, Stéphanie; Jobard, Elodie; Molina, Franck

    2010-01-01

    Motivation: In the available databases, biological processes are described from molecular and cellular points of view, but these descriptions are represented with text annotations that make it difficult to handle them for computation. Consequently, there is an obvious need for formal descriptions of biological processes. Results: We present a formalism that uses the BioΨ concepts to model biological processes from molecular details to networks. This computational approach, based on elementary bricks of actions, allows us to perform computations on biological functions (e.g. process comparison, mapping structure–function relationships, etc.). We illustrate its application with two examples: the functional comparison of proteases and the functional description of the glycolysis network. This computational approach is compatible with detailed biological knowledge and can be applied to different kinds of simulation systems. Availability: www.sysdiag.cnrs.fr/publications/supplementary-materials/BioPsi_Manager/ Contact: sabine.peres@sysdiag.cnrs.fr; franck.molina@sysdiag.cnrs.fr Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20448138

  7. The development of computer networks: First results from a microeconomic model

    NASA Astrophysics Data System (ADS)

    Maier, Gunther; Kaufmann, Alexander

    Computer networks like the Internet are gaining importance in social and economic life. The accelerating pace of the adoption of network technologies for business purposes is a rather recent phenomenon. Many applications are still in the early, sometimes even experimental, phase. Nevertheless, it seems to be certain that networks will change the socioeconomic structures we know today. This is the background for our special interest in the development of networks, in the role of spatial factors influencing the formation of networks, in the consequences of networks for spatial structures, and in the role of externalities. This paper discusses a simple economic model - based on a microeconomic calculus - that incorporates the main factors that generate the growth of computer networks. The paper provides analytic results about the generation of computer networks. It discusses (1) under what conditions economic factors will initiate the process of network formation, (2) the relationship between individual and social evaluation, and (3) the efficiency of a network that is generated based on economic mechanisms.

  8. Evidence Theory Based Uncertainty Quantification in Radiological Risk due to Accidental Release of Radioactivity from a Nuclear Power Plant

    NASA Astrophysics Data System (ADS)

    Ingale, S. V.; Datta, D.

    2010-10-01

    The consequence of an accidental release of radioactivity from a nuclear power plant is assessed in terms of exposure or dose to the members of the public. Assessment of risk is routed through this dose computation. Dose computation basically depends on the basic dose assessment model and the exposure pathways. One of the exposure pathways is the ingestion of contaminated food. The aim of the present paper is to compute the uncertainty associated with the risk to the members of the public due to the ingestion of contaminated food. The governing parameters of the ingestion dose assessment model being imprecise, we have applied evidence theory to compute bounds on the risk. The uncertainty is addressed by the belief and plausibility fuzzy measures.
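
    As a rough illustration of the belief/plausibility bounding described above (not the paper's actual dose model), the sketch below assigns basic probability masses to hypothetical dose intervals and computes Dempster-Shafer belief and plausibility for an exceedance event; all numbers are placeholders.

```python
# Minimal Dempster-Shafer sketch (illustrative, not the paper's code): given a
# basic probability assignment over focal intervals of ingestion dose, compute
# belief and plausibility bounds for the event "dose exceeds a limit".

def belief_plausibility(focal_masses, event):
    """focal_masses: list of ((lo, hi), mass) interval focal elements.
    event: (lo, hi) interval of interest. Returns (Bel, Pl)."""
    bel = sum(m for (lo, hi), m in focal_masses
              if lo >= event[0] and hi <= event[1])          # focal element inside event
    pl = sum(m for (lo, hi), m in focal_masses
             if hi >= event[0] and lo <= event[1])           # focal element intersects event
    return bel, pl

if __name__ == "__main__":
    # Hypothetical focal elements for annual ingestion dose (mSv) and their masses
    bpa = [((0.05, 0.20), 0.40), ((0.10, 0.50), 0.35), ((0.30, 1.00), 0.25)]
    bel, pl = belief_plausibility(bpa, event=(0.25, 10.0))   # "dose above 0.25 mSv"
    print(f"Belief={bel:.2f}  Plausibility={pl:.2f}")        # lower/upper risk bounds
```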

  9. Use of the Computer for Research on Instruction and Student Understanding in Physics.

    NASA Astrophysics Data System (ADS)

    Grayson, Diane Jeanette

    This dissertation describes an investigation of how the computer may be utilized to perform research on instruction and on student understanding in physics. The research was conducted within three content areas: kinematics, waves and dynamics. The main focus of the research on instruction was the determination of factors needed for a computer program to be instructionally effective. The emphasis in the research on student understanding was the identification of specific conceptual and reasoning difficulties students encounter with the subject matter. Most of the research was conducted using the computer-based interview, a technique developed during the early part of the work, conducted within the domain of kinematics. In a computer-based interview, a student makes a prediction about how a particular system will behave under given circumstances, observes a simulation of the event on a computer screen, and then is asked by an interviewer to explain any discrepancy between prediction and observation. In the course of the research, a model was developed for producing educational software. The model has three important components: (i) research on student difficulties in the content area to be addressed, (ii) observations of students using the computer program, and (iii) consequent program modification. This model was used to guide the development of an instructional computer program dealing with graphical representations of transverse pulses. Another facet of the research involved the design of a computer program explicitly for the purposes of research. A computer program was written that simulates a modified Atwood's machine. The program was then used in computer-based interviews and proved to be an effective means of probing student understanding of dynamics concepts. In order to ascertain whether or not the student difficulties identified were peculiar to the computer, laboratory-based interviews with real equipment were also conducted. The laboratory-based interviews were designed to parallel the computer-based interviews as closely as possible. The results of both types of interviews are discussed in detail. The dissertation concludes with a discussion of some of the benefits of using the computer in physics instruction and physics education research. Attention is also drawn to some of the limitations of the computer as a research instrument or instructional device.

  10. Unconditionally verifiable blind quantum computation

    NASA Astrophysics Data System (ADS)

    Fitzsimons, Joseph F.; Kashefi, Elham

    2017-07-01

    Blind quantum computing (BQC) allows a client to have a server carry out a quantum computation for them such that the client's input, output, and computation remain private. A desirable property for any BQC protocol is verification, whereby the client can verify with high probability whether the server has followed the instructions of the protocol or if there has been some deviation resulting in a corrupted output state. A verifiable BQC protocol can be viewed as an interactive proof system leading to consequences for complexity theory. We previously proposed [A. Broadbent, J. Fitzsimons, and E. Kashefi, in Proceedings of the 50th Annual Symposium on Foundations of Computer Science, Atlanta, 2009 (IEEE, Piscataway, 2009), p. 517] a universal and unconditionally secure BQC scheme where the client only needs to be able to prepare single qubits in separable states randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. In this paper we extend that protocol with additional functionality allowing blind computational basis measurements, which we use to construct another verifiable BQC protocol based on a different class of resource states. We rigorously prove that the probability of failing to detect an incorrect output is exponentially small in a security parameter, while resource overhead remains polynomial in this parameter. This resource state allows entangling gates to be performed between arbitrary pairs of logical qubits with only constant overhead. This is a significant improvement on the original scheme, which required that all computations to be performed must first be put into a nearest-neighbor form, incurring linear overhead in the number of qubits. Such an improvement has important consequences for efficiency and fault-tolerance thresholds.

  11. Large-scale parallel genome assembler over cloud computing environment.

    PubMed

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure over traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of traditional HPC cluster.
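
    The de Bruijn graph construction that assemblers of this kind build on can be sketched in a few lines. The fragment below only illustrates the k-mer/graph idea on toy reads and is unrelated to the GiGA/Giraph code base.

```python
# Illustrative sketch of the de Bruijn graph idea underlying graph-based
# assemblers (not the GiGA code itself): each read is cut into k-mers, and an
# edge is drawn from each k-mer's (k-1)-prefix to its (k-1)-suffix.

from collections import defaultdict

def de_bruijn_graph(reads, k=4):
    graph = defaultdict(list)                 # (k-1)-mer -> list of successor (k-1)-mers
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

if __name__ == "__main__":
    reads = ["ACGTAC", "CGTACG", "GTACGT"]    # toy reads, not real sequencing data
    for node, successors in de_bruijn_graph(reads, k=4).items():
        print(node, "->", successors)
```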

  12. Social, Organizational, and Contextual Characteristics of Clinical Decision Support Systems for Intensive Insulin Therapy: A Literature Review and Case Study

    PubMed Central

    Campion, Thomas R.; Waitman, Lemuel R.; May, Addison K.; Ozdas, Asli; Lorenzi, Nancy M.; Gadd, Cynthia S.

    2009-01-01

    Introduction: Evaluations of computerized clinical decision support systems (CDSS) typically focus on clinical performance changes and do not include social, organizational, and contextual characteristics explaining use and effectiveness. Studies of CDSS for intensive insulin therapy (IIT) are no exception, and the literature lacks an understanding of effective computer-based IIT implementation and operation. Results: This paper presents (1) a literature review of computer-based IIT evaluations through the lens of institutional theory, a discipline from sociology and organization studies, to demonstrate the inconsistent reporting of workflow and care process execution and (2) a single-site case study to illustrate how computer-based IIT requires substantial organizational change and creates additional complexity with unintended consequences including error. Discussion: Computer-based IIT requires organizational commitment and attention to site-specific technology, workflow, and care processes to achieve intensive insulin therapy goals. The complex interaction between clinicians, blood glucose testing devices, and CDSS may contribute to workflow inefficiency and error. Evaluations rarely focus on the perspective of nurses, the primary users of computer-based IIT whose knowledge can potentially lead to process and care improvements. Conclusion: This paper addresses a gap in the literature concerning the social, organizational, and contextual characteristics of CDSS in general and for intensive insulin therapy specifically. Additionally, this paper identifies areas for future research to define optimal computer-based IIT process execution: the frequency and effect of manual data entry error of blood glucose values, the frequency and effect of nurse overrides of CDSS insulin dosing recommendations, and comprehensive ethnographic study of CDSS for IIT. PMID:19815452

  13. The Impact of the Network Topology on the Viral Prevalence: A Node-Based Approach

    PubMed Central

    Yang, Lu-Xing; Draief, Moez; Yang, Xiaofan

    2015-01-01

    This paper addresses the impact of the structure of the viral propagation network on the viral prevalence. For that purpose, a new epidemic model of computer virus, known as the node-based SLBS model, is proposed. Our analysis shows that the maximum eigenvalue of the underlying network is a key factor determining the viral prevalence. Specifically, the value range of the maximum eigenvalue is partitioned into three subintervals: viruses tend to extinction very quickly or approach extinction or persist depending on into which subinterval the maximum eigenvalue of the propagation network falls. Consequently, computer virus can be contained by adjusting the propagation network so that its maximum eigenvalue falls into the desired subinterval. PMID:26222539
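
    The spectral condition described above can be checked numerically by estimating the propagation network's largest eigenvalue and comparing it with a threshold. The sketch below uses power iteration on a small illustrative adjacency matrix; the threshold value is a hypothetical model parameter, not one taken from the paper.

```python
# Sketch of the spectral check suggested by node-based epidemic models: estimate
# the propagation network's largest eigenvalue by power iteration and compare it
# with a threshold (here a hypothetical cure-rate/infection-rate ratio).

import numpy as np

def max_eigenvalue(adjacency: np.ndarray, iters: int = 200) -> float:
    v = np.ones(adjacency.shape[0])
    for _ in range(iters):
        w = adjacency @ v
        v = w / np.linalg.norm(w)
    return float(v @ adjacency @ v)            # Rayleigh quotient estimate

if __name__ == "__main__":
    # Small undirected propagation network (illustrative adjacency matrix)
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 1],
                  [0, 1, 1, 0]], dtype=float)
    lam = max_eigenvalue(A)
    threshold = 2.0                            # hypothetical delta/beta ratio
    print(f"lambda_max = {lam:.3f}",
          "-> virus persists" if lam > threshold else "-> virus dies out")
```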

  14. Motor prediction in Brain-Computer Interfaces for controlling mobile robots.

    PubMed

    Geng, Tao; Gan, John Q

    2008-01-01

    EEG-based Brain-Computer Interface (BCI) can be regarded as a new channel for motor control except that it does not involve muscles. Normal neuromuscular motor control has two fundamental components: (1) to control the body, and (2) to predict the consequences of the control command, which is called motor prediction. In this study, after training with a specially designed BCI paradigm based on motor imagery, two subjects learnt to predict the time course of some features of the EEG signals. It is shown that, with this newly-obtained motor prediction skill, subjects can use motor imagery of feet to directly control a mobile robot to avoid obstacles and reach a small target in a time-critical scenario.

  15. Test Takers' Beliefs and Experiences of a High-Stakes Computer-Based English Listening and Speaking Test

    ERIC Educational Resources Information Center

    Zhan, Ying; Wan, Zhi Hong

    2016-01-01

    Test takers' beliefs or experiences have been overlooked in most validation studies in language education. Meanwhile, a mutual exclusion has been observed in the literature, with little or no dialogue between validation studies and studies concerning the uses and consequences of testing. To help fill these research gaps, a group of Senior III…

  16. Supporting Students' Learning and Socioscientific Reasoning about Climate Change--The Effect of Computer-Based Concept Mapping Scaffolds

    ERIC Educational Resources Information Center

    Eggert, Sabina; Nitsch, Anne; Boone, William J.; Nückles, Matthias; Bögeholz, Susanne

    2017-01-01

    Climate change is one of the most challenging problems facing today's global society (e.g., IPCC 2013). While climate change is a widely covered topic in the media, and abundant information is made available through the internet, the causes and consequences of climate change in its full complexity are difficult for individuals, especially…

  17. Microcomputers in Academic Departments: The Consequences of Implementing Distributive Desk-Top Computing to Faculty and Staff.

    ERIC Educational Resources Information Center

    Winans, Glen T.

    This paper presents a descriptive review of how the Provost's Office of the College of Letters and Science at the University of California, Santa Barbara (UCSB) implemented 330 microcomputers in the 34 academic departments from July 1984 through June 1986. The decision to implement stand-alone microcomputers was based on four concerns: increasing…

  18. Linking 3D spatial models of fuels and fire: Effects of spatial heterogeneity on fire behavior

    Treesearch

    Russell A. Parsons; William E. Mell; Peter McCauley

    2011-01-01

    Crown fire endangers firefighters and can have severe ecological consequences. Prediction of fire behavior in tree crowns is essential to informed decisions in fire management. Current methods used in fire management do not address variability in crown fuels. New mechanistic physics-based fire models address convective heat transfer with computational fluid dynamics (...

  19. A Population-Based Study of Childhood Sexual Contact in China: Prevalence and Long-Term Consequences

    ERIC Educational Resources Information Center

    Luo, Ye; Parish, William L.; Laumann, Edward O.

    2008-01-01

    Objectives: This study provides national estimates of the prevalence of childhood sexual contact and its association with sexual well-being and psychological distress among adults in China. Method: A national stratified probability sample of 1,519 women and 1,475 men aged 20-64 years in urban China completed a computer-administered survey in…

  20. Near-Field Source Localization by Using Focusing Technique

    NASA Astrophysics Data System (ADS)

    He, Hongyang; Wang, Yide; Saillard, Joseph

    2008-12-01

    We discuss two fast algorithms to localize multiple sources in near field. The symmetry-based method proposed by Zhi and Chia (2007) is first improved by implementing a search-free procedure for the reduction of computation cost. We then present a focusing-based method which does not require a symmetric array configuration. By using the focusing technique, the near-field signal model is transformed into a model possessing the same structure as in the far-field situation, which allows bearing estimation with the well-studied far-field methods. With the estimated bearing, the range estimation of each source is consequently obtained by using the 1D MUSIC method without parameter pairing. The performance of the improved symmetry-based method and the proposed focusing-based method is compared by Monte Carlo simulations and with the Cramér-Rao bound as well. Unlike other near-field algorithms, these two approaches require neither high computation cost nor high-order statistics.
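
    Once focusing has mapped the near-field data onto a far-field model, bearings can be estimated with standard subspace methods. The sketch below is a generic far-field MUSIC example on simulated uniform-linear-array data; the focusing step itself and the subsequent 1D MUSIC range search are not reproduced, and the array size, snapshot count, and source bearings are illustrative.

```python
# Generic far-field MUSIC sketch on simulated uniform-linear-array data
# (illustrative; not the paper's focusing-based near-field algorithm).

import numpy as np

def steering(n_sensors, theta_deg, d_over_lambda=0.5):
    m = np.arange(n_sensors)
    return np.exp(-2j * np.pi * d_over_lambda * m * np.sin(np.deg2rad(theta_deg)))

def music_spectrum(snapshots, n_sources, angles_deg):
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    _, eigvecs = np.linalg.eigh(R)                            # ascending eigenvalues
    En = eigvecs[:, :-n_sources]                              # noise subspace
    spec = []
    for th in angles_deg:
        a = steering(snapshots.shape[0], th)
        spec.append(1.0 / np.abs(a.conj() @ En @ En.conj().T @ a))
    return np.array(spec)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_sensors, n_snap, true_doas = 8, 400, [-20.0, 35.0]      # illustrative setup
    A = np.column_stack([steering(n_sensors, th) for th in true_doas])
    S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
    noise = 0.1 * (rng.standard_normal((n_sensors, n_snap))
                   + 1j * rng.standard_normal((n_sensors, n_snap)))
    X = A @ S + noise
    grid = np.arange(-90.0, 90.5, 0.5)
    spec = music_spectrum(X, n_sources=2, angles_deg=grid)
    peaks = [i for i in range(1, len(spec) - 1)
             if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]]
    top = sorted(peaks, key=lambda i: spec[i], reverse=True)[:2]
    print("estimated bearings (deg):", sorted(grid[i] for i in top))
```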

  1. Quantum protocols within Spekkens' toy model

    NASA Astrophysics Data System (ADS)

    Disilvestro, Leonardo; Markham, Damian

    2017-05-01

    Quantum mechanics is known to provide significant improvements in information processing tasks when compared to classical models. These advantages range from computational speedups to security improvements. A key question is where these advantages come from. The toy model developed by Spekkens [R. W. Spekkens, Phys. Rev. A 75, 032110 (2007), 10.1103/PhysRevA.75.032110] mimics many of the features of quantum mechanics, such as entanglement and no cloning, regarded as being important in this regard, despite being a local hidden variable theory. In this work, we study several protocols within Spekkens' toy model where we see it can also mimic the advantages and limitations shown in the quantum case. We first provide explicit proofs for the impossibility of toy bit commitment and the existence of a toy error correction protocol and consequent k-threshold secret sharing. Then, defining a toy computational model based on the quantum one-way computer, we prove the existence of blind and verified protocols. Importantly, these two last quantum protocols are known to achieve a better-than-classical security. Our results suggest that such quantum improvements need not arise from any Bell-type nonlocality or contextuality, but rather as a consequence of steering correlations.

  2. Gibbs Free Energy of Hydrolytic Water Molecule in Acyl-Enzyme Intermediates of a Serine Protease: A Potential Application for Computer-Aided Discovery of Mechanism-Based Reversible Covalent Inhibitors.

    PubMed

    Masuda, Yosuke; Yamaotsu, Noriyuki; Hirono, Shuichi

    2017-01-01

    In order to predict the potencies of mechanism-based reversible covalent inhibitors, the relationships between calculated Gibbs free energy of hydrolytic water molecule in acyl-trypsin intermediates and experimentally measured catalytic rate constants (kcat) were investigated. After obtaining representative solution structures by molecular dynamics (MD) simulations, hydration thermodynamics analyses using WaterMap™ were conducted. Consequently, we found for the first time that when Gibbs free energy of the hydrolytic water molecule was lower, logarithms of kcat were also lower. The hydrolytic water molecule with favorable Gibbs free energy may hydrolyze acylated serine slowly. Gibbs free energy of hydrolytic water molecule might be a useful descriptor for computer-aided discovery of mechanism-based reversible covalent inhibitors of hydrolytic enzymes.

  3. Economic Consequence Analysis of Disasters: The ECAT Software Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rose, Adam; Prager, Fynn; Chen, Zhenhua

    This study develops a methodology for rapidly obtaining approximate estimates of the economic consequences from numerous natural, man-made and technological threats. This software tool is intended for use by various decision makers and analysts to obtain estimates rapidly. It is programmed in Excel and Visual Basic for Applications (VBA) to facilitate its use. This tool is called E-CAT (Economic Consequence Analysis Tool) and accounts for the cumulative direct and indirect impacts (including resilience and behavioral factors that significantly affect base estimates) on the U.S. economy. E-CAT is intended to be a major step toward advancing the current state of economic consequence analysis (ECA) and also contributing to and developing interest in further research into complex but rapid turnaround approaches. The essence of the methodology involves running numerous simulations in a computable general equilibrium (CGE) model for each threat, yielding synthetic data for the estimation of a single regression equation based on the identification of key explanatory variables (threat characteristics and background conditions). This transforms the results of a complex model, which is beyond the reach of most users, into a "reduced form" model that is readily comprehensible. Functionality has been built into E-CAT so that its users can switch various consequence categories on and off in order to create customized profiles of economic consequences of numerous risk events. E-CAT incorporates uncertainty on both the input and output side in the course of the analysis.

  4. System support software for the Space Ultrareliable Modular Computer (SUMC)

    NASA Technical Reports Server (NTRS)

    Hill, T. E.; Hintze, G. C.; Hodges, B. C.; Austin, F. A.; Buckles, B. P.; Curran, R. T.; Lackey, J. D.; Payne, R. E.

    1974-01-01

    The highly transportable programming system designed and implemented to support the development of software for the Space Ultrareliable Modular Computer (SUMC) is described. The SUMC system support software consists of program modules called processors. The initial set of processors consists of the supervisor, the general purpose assembler for SUMC instruction and microcode input, linkage editors, an instruction level simulator, a microcode grid print processor, and user oriented utility programs. A FORTRAN 4 compiler is undergoing development. The design facilitates the addition of new processors with a minimum effort and provides the user quasi host independence on the ground based operational software development computer. Additional capability is provided to accommodate variations in the SUMC architecture without consequent major modifications in the initial processors.

  5. Decision Making and Reward in Frontal Cortex

    PubMed Central

    Kennerley, Steven W.; Walton, Mark E.

    2011-01-01

    Patients with damage to the prefrontal cortex (PFC)—especially the ventral and medial parts of PFC—often show a marked inability to make choices that meet their needs and goals. These decision-making impairments often reflect both a deficit in learning concerning the consequences of a choice, as well as deficits in the ability to adapt future choices based on experienced value of the current choice. Thus, areas of PFC must support some value computations that are necessary for optimal choice. However, recent frameworks of decision making have highlighted that optimal and adaptive decision making does not simply rest on a single computation, but a number of different value computations may be necessary. Using this framework as a guide, we summarize evidence from both lesion studies and single-neuron physiology for the representation of different value computations across PFC areas. PMID:21534649

  6. Application of fuzzy fault tree analysis based on modified fuzzy AHP and fuzzy TOPSIS for fire and explosion in the process industry.

    PubMed

    Yazdi, Mohammad; Korhan, Orhan; Daneshvar, Sahand

    2018-05-09

    This study aimed at establishing fault tree analysis (FTA) using expert opinion to compute the probability of an event. To find the probability of the top event (TE), all probabilities of the basic events (BEs) should be available when the FTA is drawn. In this case, expert judgment can be used as an alternative to failure data in an awkward situation. The fuzzy analytical hierarchy process, as a standard technique, is used to give a specific weight to each expert, and fuzzy set theory is employed to aggregate the expert opinions. In this regard, the probabilities of the BEs are computed and, consequently, the probability of the TE is obtained using Boolean algebra. Additionally, to reduce the probability of the TE in terms of three parameters (safety consequences, cost and benefit), the importance measurement technique and modified TOPSIS were employed. The effectiveness of the proposed approach is demonstrated with a real-life case study.
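
    Once basic-event probabilities have been obtained from aggregated expert opinion, propagating them to the top event through Boolean gates is mechanical. The sketch below shows that propagation for a hypothetical gate structure with independent basic events; it is not the paper's fire-and-explosion tree, and the probabilities are placeholders.

```python
# Minimal sketch of propagating basic-event probabilities through a fault tree
# (illustrative gate structure): AND gates multiply probabilities, OR gates
# combine as 1 - prod(1 - p), assuming independent basic events.

def and_gate(*probs):
    out = 1.0
    for p in probs:
        out *= p
    return out

def or_gate(*probs):
    out = 1.0
    for p in probs:
        out *= (1.0 - p)
    return 1.0 - out

if __name__ == "__main__":
    # Hypothetical basic-event probabilities obtained from aggregated expert opinion
    p_leak, p_ignition, p_alarm_fail, p_valve_fail = 0.02, 0.1, 0.05, 0.01
    release = or_gate(p_leak, p_valve_fail)            # intermediate event
    top_event = and_gate(release, p_ignition, p_alarm_fail)
    print(f"P(top event) = {top_event:.6f}")
```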

  7. New Unintended Adverse Consequences of Electronic Health Records

    PubMed Central

    Wright, A.; Ash, J.; Singh, H.

    2016-01-01

    Summary Although the health information technology industry has made considerable progress in the design, development, implementation, and use of electronic health records (EHRs), the lofty expectations of the early pioneers have not been met. In 2006, the Provider Order Entry Team at Oregon Health & Science University described a set of unintended adverse consequences (UACs), or unpredictable, emergent problems associated with computer-based provider order entry implementation, use, and maintenance. Many of these originally identified UACs have not been completely addressed or alleviated, some have evolved over time, and some new ones have emerged as EHRs became more widely available. The rapid increase in the adoption of EHRs, coupled with the changes in the types and attitudes of clinical users, has led to several new UACs, specifically: complete clinical information unavailable at the point of care; lack of innovations to improve system usability leading to frustrating user experiences; inadvertent disclosure of large amounts of patient-specific information; increased focus on computer-based quality measurement negatively affecting clinical workflows and patient-provider interactions; information overload from marginally useful computer-generated data; and a decline in the development and use of internally-developed EHRs. While each of these new UACs poses significant challenges to EHR developers and users alike, they also offer many opportunities. The challenge for clinical informatics researchers is to continue to refine our current systems while exploring new methods of overcoming these challenges and developing innovations to improve EHR interoperability, usability, security, functionality, clinical quality measurement, and information summarization and display. PMID:27830226

  8. Breaking Lander-Waterman’s Coverage Bound

    PubMed Central

    Nashta-ali, Damoun; Motahari, Seyed Abolfazl; Hosseinkhalaj, Babak

    2016-01-01

    Lander-Waterman’s coverage bound establishes the total number of reads required to cover the whole genome of size G bases. In fact, their bound is a direct consequence of the well-known solution to the coupon collector’s problem which proves that for such genome, the total number of bases to be sequenced should be O(G ln G). Although the result leads to a tight bound, it is based on a tacit assumption that the set of reads are first collected through a sequencing process and then are processed through a computation process, i.e., there are two different machines: one for sequencing and one for processing. In this paper, we present a significant improvement compared to Lander-Waterman’s result and prove that by combining the sequencing and computing processes, one can re-sequence the whole genome with as low as O(G) sequenced bases in total. Our approach also dramatically reduces the required computational power for the combined process. Simulation results are performed on real genomes with different sequencing error rates. The results support our theory predicting the log G improvement on coverage bound and corresponding reduction in the total number of bases required to be sequenced. PMID:27806058
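
    The classical bound referred to above follows from a coupon-collector argument: with N reads of length L on a genome of G bases, the expected uncovered fraction is roughly exp(-NL/G), so pushing the expected number of uncovered bases below one requires NL on the order of G ln G. The back-of-the-envelope sketch below uses illustrative parameter values only and does not reproduce the paper's combined sequencing/computing scheme.

```python
# Illustrative calculation of the classical coverage bound: the smallest N with
# G * exp(-N*L/G) < 1, i.e. N > (G/L) * ln(G).

import math

def reads_for_full_coverage(genome_size_g: int, read_length_l: int) -> int:
    """Reads needed so the expected number of uncovered bases drops below one."""
    return math.ceil(genome_size_g * math.log(genome_size_g) / read_length_l)

if __name__ == "__main__":
    G, L = 3_000_000_000, 100                      # human-scale genome, short reads
    n = reads_for_full_coverage(G, L)
    print(f"reads ~ {n:,}  total bases ~ {n*L:,}  (~{n*L/G:.1f}x coverage)")
```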

  9. A computational model of selection by consequences.

    PubMed

    McDowell, J J

    2004-05-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied over wide ranges in these experiments, and many of the qualitative features of the model also were varied. The digital organism consistently showed a hyperbolic relation between response and reinforcement rates, and this hyperbolic description of the data was consistently better than the description provided by other, similar, function forms. In addition, the parameters of the hyperbola varied systematically with the quantitative, and some of the qualitative, properties of the model in ways that were consistent with findings from biological organisms. These results suggest that the material events responsible for an organism's responding on RI schedules are computationally equivalent to Darwinian selection by consequences. They also suggest that the computational model developed here is worth pursuing further as a possible dynamic account of behavior.
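
    The hyperbolic relation referred to here is the quantitative law of effect, B = k r / (r + r_e), where B is response rate and r is reinforcement rate. The sketch below only demonstrates the curve-fitting step on points synthesized from the hyperbola itself; it does not reproduce the computational model or any reported data.

```python
# Sketch of fitting the hyperbolic law of effect, B = k*r / (r + r_e), to
# response-rate data (synthetic points generated from the hyperbola plus noise,
# purely to demonstrate the fitting step).

import numpy as np
from scipy.optimize import curve_fit

def hyperbola(r, k, r_e):
    """Response rate B as a function of reinforcement rate r."""
    return k * r / (r + r_e)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    r = np.array([5, 10, 20, 40, 80, 160, 320], dtype=float)   # reinforcers/hour
    b = hyperbola(r, k=100.0, r_e=50.0) + rng.normal(0, 2, r.size)
    (k_hat, re_hat), _ = curve_fit(hyperbola, r, b, p0=[80.0, 30.0])
    print(f"k ~ {k_hat:.1f} responses/hour, r_e ~ {re_hat:.1f} reinforcers/hour")
```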

  10. Elucidating Hyperconjugation from Electronegativity to Predict Drug Conformational Energy in a High Throughput Manner.

    PubMed

    Liu, Zhaomin; Pottel, Joshua; Shahamat, Moeed; Tomberg, Anna; Labute, Paul; Moitessier, Nicolas

    2016-04-25

    Computational chemists use structure-based drug design and molecular dynamics of drug/protein complexes which require an accurate description of the conformational space of drugs. Organic chemists use qualitative chemical principles such as the effect of electronegativity on hyperconjugation, the impact of steric clashes on stereochemical outcome of reactions, and the consequence of resonance on the shape of molecules to rationalize experimental observations. While computational chemists speak about electron densities and molecular orbitals, organic chemists speak about partial charges and localized molecular orbitals. Attempts to reconcile these two parallel approaches such as programs for natural bond orbitals and intrinsic atomic orbitals computing Lewis structures-like orbitals and reaction mechanism have appeared. In the past, we have shown that encoding and quantifying chemistry knowledge and qualitative principles can lead to predictive methods. In the same vein, we thought to understand the conformational behaviors of molecules and to encode this knowledge back into a molecular mechanics tool computing conformational potential energy and to develop an alternative to atom types and training of force fields on large sets of molecules. Herein, we describe a conceptually new approach to model torsion energies based on fundamental chemistry principles. To demonstrate our approach, torsional energy parameters were derived on-the-fly from atomic properties. When the torsional energy terms implemented in GAFF, Parm@Frosst, and MMFF94 were substituted by our method, the accuracy of these force fields to reproduce MP2-derived torsional energy profiles and their transferability to a variety of functional groups and drug fragments were overall improved. In addition, our method did not rely on atom types and consequently did not suffer from poor automated atom type assignments.

  11. Rational design of an enzyme mutant for anti-cocaine therapeutics

    NASA Astrophysics Data System (ADS)

    Zheng, Fang; Zhan, Chang-Guo

    2008-09-01

    (-)-Cocaine is a widely abused drug and there is no available anti-cocaine therapeutic. The disastrous medical and social consequences of cocaine addiction have made the development of an effective pharmacological treatment a high priority. An ideal anti-cocaine medication would accelerate (-)-cocaine metabolism, producing biologically inactive metabolites. The main metabolic pathway of cocaine in the body is hydrolysis at its benzoyl ester group. Reviewed in this article is the state-of-the-art computational design of high-activity mutants of human butyrylcholinesterase (BChE) against (-)-cocaine. The computational design of BChE mutants has been based not only on the structure of the enzyme, but also on the detailed catalytic mechanisms for BChE-catalyzed hydrolysis of (-)-cocaine and (+)-cocaine. Computational studies of the detailed catalytic mechanisms and the structure-and-mechanism-based computational design have been carried out through the combined use of a variety of state-of-the-art techniques of molecular modeling. By using the computational insights into the catalytic mechanisms, a recently developed unique computational design strategy based on the simulation of the rate-determining transition state has been employed to design high-activity mutants of human BChE for hydrolysis of (-)-cocaine, leading to the exciting discovery of BChE mutants with a considerably improved catalytic efficiency against (-)-cocaine. One of the discovered BChE mutants (i.e., A199S/S287G/A328W/Y332G) has a ~456-fold improved catalytic efficiency against (-)-cocaine. The encouraging outcome of the computational design and discovery effort demonstrates that the unique computational design approach based on transition-state simulation is promising for rational enzyme redesign and drug discovery.

  12. Costs and Outcomes over 36 Years of Patients with Phenylketonuria Who Do and Do Not Remain on a Phenylalanine-Restricted Diet

    ERIC Educational Resources Information Center

    Guest, J. F.; Bai, J. J.; Taylor, R. R.; Sladkevicius, E.; Lee, P. J.; Lachmann, R. H.

    2013-01-01

    Background: To quantify the costs and consequences of managing phenylketonuria (PKU) in the UK and to estimate the potential implications to the UK's National Health Service (NHS) of keeping patients on a phenylalanine-restricted diet for life. Methods: A computer-based model was constructed depicting the management of PKU patients over the first…

  13. Examining the Impact of L2 Proficiency and Keyboarding Skills on Scores on TOEFL-iBT Writing Tasks

    ERIC Educational Resources Information Center

    Barkaoui, Khaled

    2014-01-01

    A major concern with computer-based (CB) tests of second-language (L2) writing is that performance on such tests may be influenced by test-taker keyboarding skills. Poor keyboarding skills may force test-takers to focus their attention and cognitive resources on motor activities (i.e., keyboarding) and, consequently, other processes and aspects of…

  14. Social, organizational, and contextual characteristics of clinical decision support systems for intensive insulin therapy: a literature review and case study.

    PubMed

    Campion, Thomas R; Waitman, Lemuel R; May, Addison K; Ozdas, Asli; Lorenzi, Nancy M; Gadd, Cynthia S

    2010-01-01

    Evaluations of computerized clinical decision support systems (CDSS) typically focus on clinical performance changes and do not include social, organizational, and contextual characteristics explaining use and effectiveness. Studies of CDSS for intensive insulin therapy (IIT) are no exception, and the literature lacks an understanding of effective computer-based IIT implementation and operation. This paper presents (1) a literature review of computer-based IIT evaluations through the lens of institutional theory, a discipline from sociology and organization studies, to demonstrate the inconsistent reporting of workflow and care process execution and (2) a single-site case study to illustrate how computer-based IIT requires substantial organizational change and creates additional complexity with unintended consequences including error. Computer-based IIT requires organizational commitment and attention to site-specific technology, workflow, and care processes to achieve intensive insulin therapy goals. The complex interaction between clinicians, blood glucose testing devices, and CDSS may contribute to workflow inefficiency and error. Evaluations rarely focus on the perspective of nurses, the primary users of computer-based IIT whose knowledge can potentially lead to process and care improvements. This paper addresses a gap in the literature concerning the social, organizational, and contextual characteristics of CDSS in general and for intensive insulin therapy specifically. Additionally, this paper identifies areas for future research to define optimal computer-based IIT process execution: the frequency and effect of manual data entry error of blood glucose values, the frequency and effect of nurse overrides of CDSS insulin dosing recommendations, and comprehensive ethnographic study of CDSS for IIT. Copyright (c) 2009. Published by Elsevier Ireland Ltd.

  15. Prototype of a computer method for designing and analyzing heating, ventilating and air conditioning proportional, electronic control systems

    NASA Astrophysics Data System (ADS)

    Barlow, Steven J.

    1986-09-01

    The Air Force needs a better method of designing new and retrofit heating, ventilating and air conditioning (HVAC) control systems. Air Force engineers currently use manual design/predict/verify procedures taught at the Air Force Institute of Technology, School of Civil Engineering, HVAC Control Systems course. These existing manual procedures are iterative and time-consuming. The objectives of this research were to: (1) Locate and, if necessary, modify an existing computer-based method for designing and analyzing HVAC control systems that is compatible with the HVAC Control Systems manual procedures, or (2) Develop a new computer-based method of designing and analyzing HVAC control systems that is compatible with the existing manual procedures. Five existing computer packages were investigated in accordance with the first objective: MODSIM (for modular simulation), HVACSIM (for HVAC simulation), TRNSYS (for transient system simulation), BLAST (for building load and system thermodynamics) and Elite Building Energy Analysis Program. None were found to be compatible or adaptable to the existing manual procedures, and consequently, a prototype of a new computer method was developed in accordance with the second research objective.

  16. Quasicrystals and Quantum Computing

    NASA Astrophysics Data System (ADS)

    Berezin, Alexander A.

    1997-03-01

    In Quantum (Q) Computing qubits form Q-superpositions for macroscopic times. One scheme for ultra-fast (Q) computing can be based on quasicrystals. Ultrafast processing in Q-coherent structures (and the very existence of durable Q-superpositions) may be 'consequence' of presence of entire manifold of integer arithmetic (A0, aleph-naught of Georg Cantor) at any 4-point of space-time, furthermore, at any point of any multidimensional phase space of (any) N-particle Q-system. The latter, apart from quasicrystals, can include dispersed and/or diluted systems (Berezin, 1994). In such systems such alleged centrepieces of Q-Computing as ability for fast factorization of long integers can be processed by sheer virtue of the fact that entire infinite pattern of prime numbers is instantaneously available as 'free lunch' at any instant/point. Infinitely rich pattern of A0 (including pattern of primes and almost primes) acts as 'independent' physical effect which directly generates Q-dynamics (and physical world) 'out of nothing'. Thus Q-nonlocality can be ultimately based on instantaneous interconnectedness through ever-the-same structure of A0 ('Platonic field' of integers).

  17. Elementary EFL Teachers' Computer Phobia and Computer Self-Efficacy in Taiwan

    ERIC Educational Resources Information Center

    Chen, Kate Tzuching

    2012-01-01

    The advent and application of computer and information technology has increased the overall success of EFL teaching; however, such success is hard to assess, and teachers prone to computer avoidance face negative consequences. Two major obstacles are high computer phobia and low computer self-efficacy. However, little research has been carried out…

  18. The Design of Modular Web-Based Collaboration

    NASA Astrophysics Data System (ADS)

    Intapong, Ploypailin; Settapat, Sittapong; Kaewkamnerdpong, Boonserm; Achalakul, Tiranee

    Online collaborative systems are popular communication channels as they allow people from various disciplines to interact and collaborate with ease. The systems provide communication tools and services that can be integrated on the web; consequently, they are more convenient to use and easier to install. Nevertheless, most of the currently available systems are designed according to some specific requirements and cannot be straightforwardly integrated into various applications. This paper provides the design of a new collaborative platform, which is component-based and re-configurable. The platform is called the Modular Web-based Collaboration (MWC). MWC shares the same concept as computer-supported collaborative work (CSCW) and computer-supported collaborative learning (CSCL), but it provides configurable tools for online collaboration. Each tool module can be integrated into users' web applications freely and easily. This makes the collaborative system flexible, adaptable and suitable for online collaboration.

  19. Adjoint shape optimization for fluid-structure interaction of ducted flows

    NASA Astrophysics Data System (ADS)

    Heners, J. P.; Radtke, L.; Hinze, M.; Düster, A.

    2018-03-01

    Based on the coupled problem of time-dependent fluid-structure interaction, equations for an appropriate adjoint problem are derived by the consequent use of the formal Lagrange calculus. Solutions of both primal and adjoint equations are computed in a partitioned fashion and enable the formulation of a surface sensitivity. This sensitivity is used in the context of a steepest descent algorithm for the computation of the required gradient of an appropriate cost functional. The efficiency of the developed optimization approach is demonstrated by minimization of the pressure drop in a simple two-dimensional channel flow and in a three-dimensional ducted flow surrounded by a thin-walled structure.
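
    The optimization loop described above can be pictured as a plain steepest-descent iteration driven by an adjoint-derived gradient. In the sketch below the primal and adjoint FSI solves are replaced by a stand-in cost functional with an analytic gradient; the step size, tolerance, and parameter count are illustrative assumptions.

```python
# Generic steepest-descent loop of the kind used with adjoint-based surface
# sensitivities (illustrative only; the coupled primal/adjoint solves of the
# paper are replaced by a stand-in cost functional and its analytic gradient).

import numpy as np

def cost(shape_params: np.ndarray) -> float:
    """Stand-in for the pressure-drop cost functional J(shape)."""
    return float(np.sum((shape_params - 1.0) ** 2))

def gradient(shape_params: np.ndarray) -> np.ndarray:
    """Stand-in for the surface sensitivity dJ/dshape delivered by the adjoint solve."""
    return 2.0 * (shape_params - 1.0)

def steepest_descent(x0, step=0.1, tol=1e-8, max_iter=500):
    x = np.asarray(x0, dtype=float)
    for it in range(max_iter):
        g = gradient(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - step * g                      # move against the sensitivity
    return x, it, cost(x)

if __name__ == "__main__":
    x_opt, iters, j = steepest_descent(np.zeros(4))
    print(f"converged in {iters} iterations, J = {j:.2e}, shape = {x_opt}")
```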

  20. The application of quantum mechanics in structure-based drug design.

    PubMed

    Mucs, Daniel; Bryce, Richard A

    2013-03-01

    Computational chemistry has become an established and valuable component in structure-based drug design. However the chemical complexity of many ligands and active sites challenges the accuracy of the empirical potentials commonly used to describe these systems. Consequently, there is a growing interest in utilizing electronic structure methods for addressing problems in protein-ligand recognition. In this review, the authors discuss recent progress in the development and application of quantum chemical approaches to modeling protein-ligand interactions. The authors specifically consider the development of quantum mechanics (QM) approaches for studying large molecular systems pertinent to biology, focusing on protein-ligand docking, protein-ligand binding affinities and ligand strain on binding. Although computation of binding energies remains a challenging and evolving area, current QM methods can underpin improved docking approaches and offer detailed insights into ligand strain and into the nature and relative strengths of complex active site interactions. The authors envisage that QM will become an increasingly routine and valued tool of the computational medicinal chemist.

  1. Student conceptions about the DNA structure within a hierarchical organizational level: Improvement by experiment- and computer-based outreach learning.

    PubMed

    Langheinrich, Jessica; Bogner, Franz X

    2015-01-01

    As non-scientific conceptions interfere with learning processes, teachers need both to know about them and to address them in their classrooms. For our study, based on 182 eleventh graders, we analyzed the level of conceptual understanding by implementing the "draw and write" technique during a computer-supported gene technology module. A specific feature of our study was that participants were given the hierarchical organizational level that they had to draw. We introduced two objective category systems for analyzing drawings and inscriptions. Our results indicated a long- as well as a short-term increase in the level of conceptual understanding and in the number of drawn elements and their grades concerning the DNA structure. Consequently, we regard the "draw and write" technique as a tool for a teacher to get to know students' alternative conceptions. Furthermore, our study points to the modification potential of hands-on and computer-supported learning modules. © 2015 The International Union of Biochemistry and Molecular Biology.

  2. Simple prescription for computing the interparticle potential energy for D-dimensional gravity systems

    NASA Astrophysics Data System (ADS)

    Accioly, Antonio; Helayël-Neto, José; Barone, F. E.; Herdy, Wallace

    2015-02-01

    A straightforward prescription for computing the D-dimensional potential energy of gravitational models, which is strongly based on the Feynman path integral, is built up. Using this method, the static potential energy for the interaction of two masses is found in the context of D-dimensional higher-derivative gravity models, and its behavior is analyzed afterwards in both ultraviolet and infrared regimes. As a consequence, two new gravity systems in which the potential energy is finite at the origin, respectively, in D = 5 and D = 6, are found. Since the aforementioned prescription is equivalent to that based on the marriage between quantum mechanics (to leading order, i.e., in the first Born approximation) and the nonrelativistic limit of quantum field theory, and bearing in mind that the latter relies basically on the calculation of the nonrelativistic Feynman amplitude (M_NR), a trivial expression for computing M_NR is obtained from our prescription as an added bonus.
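
    For reference, the first-Born-approximation relation that underlies such prescriptions links the static potential energy to a Fourier transform of the nonrelativistic amplitude. Up to sign and normalization conventions, which vary between references, it can be written as:

```latex
% First-Born-approximation relation assumed here (conventions vary):
E(\mathbf{r}) \;=\; \frac{1}{(2\pi)^{D-1}}
  \int \mathrm{d}^{\,D-1}q \; e^{\,i\,\mathbf{q}\cdot\mathbf{r}}\,
  \mathcal{M}_{\mathrm{NR}}(\mathbf{q})
```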

  3. Study of Fluid Experiment System (FES)/CAST/Holographic Ground System (HGS)

    NASA Technical Reports Server (NTRS)

    Workman, Gary L.; Cummings, Rick; Jones, Brian

    1992-01-01

    Holographic and schlieren optical techniques for studying concentration gradients in solidification processes have been used by several investigators over the years. The HGS facility at MSFC has been a primary resource in researching this capability. Consequently, scientific personnel have been able to utilize these techniques in both ground-based research and in space experiments. Important events in the scientific utilization of the HGS facilities were the TGS Crystal Growth and the casting and solidification technology (CAST) experiments that were flown on the International Microgravity Laboratory (IML) mission in March of this year. The preparation and processing of these space observations are the primary experiments reported in this work. This project provides some ground-based studies to optimize the holographic techniques used to acquire information about the crystal growth processes flown on IML. Since the ground-based studies will be compared with the space-based experimental results, it is necessary to conduct sufficient ground-based studies to best determine how the experiment worked in space. The current capabilities in computer-based systems for image processing and numerical computation have certainly assisted in those efforts. As anticipated, this study has shown that these advanced computing capabilities are helpful in the data analysis of such experiments.

  4. Computational analysis of nonlinearities within dynamics of cable-based driving systems

    NASA Astrophysics Data System (ADS)

    Anghelache, G. D.; Nastac, S.

    2017-08-01

    This paper deals with the computational nonlinear dynamics of mechanical systems containing flexural parts within the actuating scheme, with particular attention to cable-based driving systems. Both functional nonlinearities and the real characteristic of the power supply were assumed, in order to obtain a realistic computer simulation model able to provide feasible results regarding the system dynamics. The transitory and stable regimes during a regular exploitation cycle were taken into account. The authors present a particular case of a lift system, considered representative of the objective of this study. The simulations were based on values of the essential parameters acquired from experimental tests and/or regular practice in the field. The results analysis and the final discussion reveal correlated dynamic aspects of the mechanical parts, the driving system, and the power supply, all of which supply potential sources of particular resonances within some transitory phases of the working cycle that can affect structural and functional dynamics. In addition, the influence of the computational hypotheses on both the quantitative and qualitative behaviour of the system is underlined. The most significant consequence of this theoretical and computational research is the development of a unitary and feasible model, useful for highlighting the nonlinear dynamic effects in systems with cable-based driving schemes, and thereby helping to optimize the exploitation regime, including dynamics control measures.

  5. Control of Transitional and Turbulent Flows Using Plasma-Based Actuators

    DTIC Science & Technology

    2006-06-01

    by means of asymmetric dielectric-barrier-discharge (DBD) actuators is presented. The flow fields are simulated employing an extensively validated...effective use of DBD devices. As a consequence, meaningful computations require the use of three-dimensional large-eddy simulation approaches capable of...counter-flow DBD actuator is shown to provide an effective on-demand tripping device. This property is exploited for the suppression of laminar

  6. A computational model of selection by consequences.

    PubMed Central

    McDowell, J J

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied over wide ranges in these experiments, and many of the qualitative features of the model also were varied. The digital organism consistently showed a hyperbolic relation between response and reinforcement rates, and this hyperbolic description of the data was consistently better than the description provided by other, similar, function forms. In addition, the parameters of the hyperbola varied systematically with the quantitative, and some of the qualitative, properties of the model in ways that were consistent with findings from biological organisms. These results suggest that the material events responsible for an organism's responding on RI schedules are computationally equivalent to Darwinian selection by consequences. They also suggest that the computational model developed here is worth pursuing further as a possible dynamic account of behavior. PMID:15357512

  7. A Taylor Expansion-Based Adaptive Design Strategy for Global Surrogate Modeling With Applications in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; Zhang, Guannan; Ye, Ming; Wu, Jianfeng; Wu, Jichun

    2017-12-01

    Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees that the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
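    To make the adaptive-design idea concrete, the sketch below shows a generic loop that grows a training set by repeatedly adding the highest-scoring candidate. The score used here is a plain max-min-distance (space-filling) placeholder and the stopping rule is a fixed budget, so it stands in for, but does not implement, TEAD's Taylor-expansion-based hybrid score and accuracy-based stopping criterion; all names are illustrative.

      import numpy as np

      def adaptive_design(model, candidates, n_init=10, n_add=30, seed=0):
          """Generic adaptive experimental-design loop (illustrative only).

          The selection score is a max-min-distance criterion used as a placeholder
          for TEAD's hybrid score; the stopping rule is a fixed budget, not TEAD's
          accuracy-based criterion.
          """
          rng = np.random.default_rng(seed)
          idx = rng.choice(len(candidates), n_init, replace=False)
          X = candidates[idx]
          y = np.array([model(x) for x in X])          # expensive model runs
          for _ in range(n_add):
              # distance of every candidate to its nearest training sample
              d = np.min(np.linalg.norm(candidates[:, None, :] - X[None, :, :], axis=2), axis=1)
              best = int(np.argmax(d))                  # most "informative" under this proxy
              X = np.vstack([X, candidates[best]])
              y = np.append(y, model(candidates[best]))
          return X, y                                   # training set for the surrogate

      # Usage on a toy function over [0, 1]^2:
      cand = np.random.default_rng(1).random((2000, 2))
      X, y = adaptive_design(lambda x: np.sin(3 * x[0]) * np.cos(2 * x[1]), cand)
      print("training samples selected:", len(X))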

  8. The Computer Revolution.

    ERIC Educational Resources Information Center

    Berkeley, Edmund C.

    "The Computer Revolution", a part of the "Second Industrial Revolution", is examined with reference to the social consequences of computers. The subject is introduced in an opening section which discusses the revolution in the handling of information and the history, powers, uses, and working s of computers. A second section examines in detail the…

  9. Building Efficient Wireless Infrastructures for Pervasive Computing Environments

    ERIC Educational Resources Information Center

    Sheng, Bo

    2010-01-01

    Pervasive computing is an emerging concept that thoroughly brings computing devices and the consequent technology into people's daily life and activities. Most of these computing devices are very small, sometimes even "invisible", and often embedded into the objects surrounding people. In addition, these devices usually are not isolated, but…

  10. Gold rush - A swarm dynamics in games

    NASA Astrophysics Data System (ADS)

    Zelinka, Ivan; Bukacek, Michal

    2017-07-01

    This paper is focused on swarm intelligence techniques and their practical use in computer games. The aim is to show how swarm dynamics can be generated by a multiplayer game, then recorded, analyzed and eventually controlled. In this paper we also discuss the possibility of using swarm intelligence instead of game players. Based on our previous experiments, two games using swarm algorithms are mentioned briefly here: the strategy game StarCraft: Brood War, and TicTacToe, in which the SOMA algorithm has also taken the role of a player against a human player. The open research reported here has shown the potential benefit of swarm computation in the field of strategy games and of player strategies based on the recording and analysis of swarm behavior. We propose a new game called Gold Rush as an experimental environment for human or artificial swarm behavior and its consequent analysis.

  11. Use of Continuous Integration Tools for Application Performance Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vergara Larrea, Veronica G; Joubert, Wayne; Fuson, Christopher B

    High performance computing systems are becoming increasingly complex, both in node architecture and in the multiple layers of software stack required to compile and run applications. As a consequence, the likelihood is increasing for application performance regressions to occur as a result of routine upgrades of system software components which interact in complex ways. The purpose of this study is to evaluate the effectiveness of continuous integration tools for application performance monitoring on HPC systems. In addition, this paper also describes a prototype system for application performance monitoring based on Jenkins, a Java-based continuous integration tool. The monitoring system described leverages several features in Jenkins to track application performance results over time. Preliminary results and lessons learned from monitoring applications on Cray systems at the Oak Ridge Leadership Computing Facility are presented.
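    As a minimal sketch of the kind of check such a CI job might run (not the OLCF prototype itself), the script below times a hypothetical benchmark command, appends the result to a JSON history file, and exits non-zero when the latest run is noticeably slower than the historical median, which a tool like Jenkins would then report as a failed build. The file name, command, and threshold are assumptions.

      import json, subprocess, sys, time
      from pathlib import Path

      HISTORY = Path("perf_history.json")     # hypothetical per-benchmark timing log
      THRESHOLD = 1.15                        # flag runs more than 15% slower than the median

      def run_benchmark():
          # Hypothetical benchmark command; replace with the real application run.
          t0 = time.perf_counter()
          subprocess.run(["sleep", "1"], check=True)
          return time.perf_counter() - t0

      def main():
          history = json.loads(HISTORY.read_text()) if HISTORY.exists() else []
          elapsed = run_benchmark()
          history.append(elapsed)
          HISTORY.write_text(json.dumps(history))
          if len(history) > 3:
              past = sorted(history[:-1])
              baseline = past[len(past) // 2]            # median of previous runs
              if elapsed > THRESHOLD * baseline:
                  print(f"Performance regression: {elapsed:.2f}s vs baseline {baseline:.2f}s")
                  sys.exit(1)                            # non-zero exit marks the CI build as failed
          print(f"OK: {elapsed:.2f}s")

      if __name__ == "__main__":
          main()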

  12. Hybrid computational and experimental approach for the study and optimization of mechanical components

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    1998-05-01

    Increased demands on the performance and efficiency of mechanical components impose challenges on their engineering design and optimization, especially when new and more demanding applications must be developed in relatively short periods of time while satisfying design objectives, as well as cost and manufacturability. In addition, reliability and durability must be taken into consideration. As a consequence, effective quantitative methodologies, computational and experimental, should be applied in the study and optimization of mechanical components. Computational investigations enable parametric studies and the determination of critical engineering design conditions, while experimental investigations, especially those using optical techniques, provide qualitative and quantitative information on the actual response of the structure of interest to the applied load and boundary conditions. We discuss a hybrid experimental and computational approach for investigation and optimization of mechanical components. The approach is based on analytical, computational, and experimental solution methodologies, in the form of computational tools, noninvasive optical techniques, and fringe prediction analysis tools. Practical application of the hybrid approach is illustrated with representative examples that demonstrate the viability of the approach as an effective engineering tool for analysis and optimization.

  13. A Methodology for Forecasting Damage & Economic Consequences to Floods: Building on the National Flood Interoperability Experiment (NFIE)

    NASA Astrophysics Data System (ADS)

    Tootle, G. A.; Gutenson, J. L.; Zhu, L.; Ernest, A. N. S.; Oubeidillah, A.; Zhang, X.

    2015-12-01

    The National Flood Interoperability Experiment (NFIE), held June 3-July 17, 2015 at the National Water Center (NWC) in Tuscaloosa, Alabama, sought to demonstrate an increase in flood predictive capacity for the coterminous United States (CONUS). Accordingly, NFIE-derived technologies and workflows offer the ability to forecast flood damage and economic consequence estimates that coincide with the hydrologic and hydraulic estimations these physics-based models generate. A model providing an accurate prediction of damage and economic consequences is a valuable asset when allocating funding for disaster response, recovery, and relief. Damage prediction and economic consequence assessment also offer an adaptation planning mechanism for defending particularly valuable or vulnerable structures. The NFIE, held at the NWC on The University of Alabama (UA) campus, led to the development of this large-scale flow and inundation forecasting framework. Currently, the system can produce 15-hour lead-time forecasts for the entire CONUS, a capability which is anticipated to become operational within the NWC as of May 2016. The processing of such a large-scale, fine-resolution model is accomplished in a parallel computing environment using large supercomputing clusters. Traditionally, flood damage and economic consequence assessment is calculated in a desktop computing environment with a mix of meteorology, hydrology, hydraulics, and damage assessment tools. In the United States, a range of flood damage and economic consequence assessment software packages is available to local, state, and federal emergency management agencies. Among the more commonly used and freely accessible models are the Hydrologic Engineering Center's Flood Damage Reduction Analysis (HEC-FDA), Flood Impact Assessment (HEC-FIA), and the Federal Emergency Management Agency's (FEMA's) United States Multi-Hazard (Hazus-MH) model, all of which exist only in a desktop environment. With this, the authors submit an initial framework for estimating damage and economic consequences of floods using flow and inundation products from the NFIE framework. This adaptive system utilizes existing nationwide datasets describing the location and use of structures and can assimilate a range of data resolutions.

  14. A real-time control system for the control of suspended interferometers based on hybrid computing techniques

    NASA Astrophysics Data System (ADS)

    Acernese, Fausto; Barone, Fabrizio; De Rosa, Rosario; Eleuteri, Antonio; Milano, Leopoldo; Pardi, Silvio; Ricciardi, Iolanda; Russo, Guido

    2004-09-01

    One of the main requirements of a digital system for the control of interferometric detectors of gravitational waves is computing power, a direct consequence of the increasing complexity of the digital algorithms needed for control signal generation. For this specific task, many specialized non-standard real-time architectures have been developed, often expensive and difficult to upgrade. On the other hand, such computing power is generally fully available for off-line applications on standard PC-based systems. Therefore, a possible and obvious solution is the integration of both the real-time and off-line architectures, resulting in a hybrid control system architecture based on standard available components that seeks to combine the perfect data synchronization provided by real-time systems with the large computing power available on PC-based systems. Such integration can be provided by implementing the link between the two architectures over the standard Ethernet network, whose data transfer speed has been increasing greatly in recent years, using the TCP/IP, UDP and raw Ethernet protocols. In this paper we describe the architecture of a hybrid Ethernet-based real-time control system prototype we implemented in Napoli, discussing its characteristics and performance. Finally, we discuss a possible application to the real-time control of a suspended mass of the mode cleaner of the 3 m prototype optical interferometer for gravitational wave detection (IDGW-3P) operational in Napoli.
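    The datagram path mentioned above can be illustrated with a minimal UDP sketch, assuming hypothetical host/port values and a simple packed sample format; it shows the low-overhead, sequence-numbered transfer idea, not the Napoli prototype's actual protocol stack.

      import socket, struct, time

      HOST, PORT = "127.0.0.1", 9999          # hypothetical endpoint of the real-time front end

      def send_samples(n=100):
          """Send timestamped correction samples as fixed-size UDP datagrams."""
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          for i in range(n):
              correction = 0.001 * i                       # placeholder control value
              payload = struct.pack("!Id", i, correction)  # sequence number + float64
              sock.sendto(payload, (HOST, PORT))
              time.sleep(0.001)                            # roughly 1 kHz, illustrative only
          sock.close()

      def receive_samples():
          """Receive datagrams and detect gaps via the sequence number."""
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          sock.bind((HOST, PORT))
          expected = 0
          while True:
              data, _ = sock.recvfrom(64)
              seq, value = struct.unpack("!Id", data)
              if seq != expected:
                  print(f"lost {seq - expected} packet(s)")   # UDP gives no delivery guarantee
              expected = seq + 1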

  15. Barometric fluctuations in wells tapping deep unconfined aquifers

    USGS Publications Warehouse

    Weeks, Edwin P.

    1979-01-01

    Water levels in wells screened only below the water table in unconfined aquifers fluctuate in response to atmospheric pressure changes. These fluctuations occur because the materials composing the unsaturated zone resist air movement and have capacity to store air with a change in pressure. Consequently, the translation of any pressure change at land surface is slowed as it moves through the unsaturated zone to the water table, but it reaches the water surface in the well instantaneously. Thus a pressure imbalance is created that results in a water level fluctuation. Barometric effects on water levels in unconfined aquifers can be computed by solution of the differential equation governing the flow of gas in the unsaturated zone subject to the appropriate boundary conditions. Solutions to this equation for two sets of boundary conditions were applied to compute water level response in a well tapping the Ogallala Formation near Lubbock, Texas from simultaneous microbarograph records. One set of computations, based on the step function unit response solution and convolution, resulted in a very good match between computed and measured water levels. A second set of computations, based on analysis of the amplitude ratios of simultaneous cyclic microbarograph and water level fluctuations, gave inconsistent results in terms of the unsaturated zone pneumatic properties but provided useful insights on the nature of unconfined-aquifer water level fluctuations.
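    A minimal sketch of the convolution approach described above is given below, assuming a purely illustrative exponential unit step response in place of the solution of the gas-flow equation; it convolves barometric-pressure increments with that response to obtain an apparent water-level change.

      import numpy as np

      def unit_step_response(t_hours, tau=12.0):
          # Illustrative exponential lag: fraction of a barometric step not yet
          # transmitted through the unsaturated zone to the water table.
          return np.exp(-t_hours / tau)

      def well_response(baro_hpa, dt_hours=1.0, tau=12.0):
          """Apparent water-level change (m) from barometric increments by convolution."""
          dp = np.diff(baro_hpa, prepend=baro_hpa[0])        # pressure increments, hPa
          t = np.arange(len(baro_hpa)) * dt_hours
          h = unit_step_response(t, tau)
          # A rise in barometric pressure initially depresses the well water level,
          # hence the negative sign; 1 hPa is roughly 0.0102 m of water head.
          return -0.0102 * np.convolve(dp, h)[: len(baro_hpa)]

      # Synthetic 10-day hourly barometric record.
      baro = 1013.0 + np.cumsum(np.random.default_rng(0).normal(0.0, 0.3, 240))
      dh = well_response(baro)
      print(f"peak apparent water-level change: {100 * np.abs(dh).max():.1f} cm")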

  16. Computed lateral rate and acceleration power spectral response of conventional and STOL airplanes to atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Lichtenstein, J. H.

    1975-01-01

    Power-spectral-density calculations were made of the lateral responses to atmospheric turbulence for several conventional and short take-off and landing (STOL) airplanes. The turbulence was modeled as three orthogonal velocity components, which were uncorrelated, and each was represented with a one-dimensional power spectrum. Power spectral densities were computed for displacements, rates, and accelerations in roll, yaw, and sideslip. In addition, the power spectral density of the transverse acceleration was computed. Evaluation of ride quality based on a specific ride quality criterion was also made. The results show that the STOL airplanes generally had larger values for the rate and acceleration power spectra (and, consequently, larger corresponding root-mean-square values) than the conventional airplanes. The ride quality criterion gave poorer ratings to the STOL airplanes than to the conventional airplanes.
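    The underlying computation can be illustrated with the standard input-output PSD relation for a linear system, S_out(f) = |H(f)|^2 S_in(f); the gust spectrum and transfer function below are simple placeholders rather than the airplane models of the report, and the root-mean-square value is recovered by integrating the response PSD.

      import numpy as np

      def response_psd(freq_hz, input_psd, H):
          """Output PSD of a linear system driven by random input: S_out = |H|^2 * S_in."""
          return np.abs(H) ** 2 * input_psd

      f = np.linspace(0.01, 10.0, 2000)                    # frequency grid, Hz
      S_gust = 1.0 / (1.0 + (2.0 * np.pi * f) ** 2)        # placeholder one-dimensional gust spectrum
      H_roll = 1.0 / (1.0 + 1j * 2.0 * np.pi * f * 0.5)    # placeholder first-order roll-rate response

      S_roll = response_psd(f, S_gust, H_roll)
      rms_roll = np.sqrt(np.trapz(S_roll, f))              # root-mean-square value from the PSD
      print(f"illustrative rms roll response: {rms_roll:.3f}")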

  17. Study of FES/CAST/HGS

    NASA Technical Reports Server (NTRS)

    Workman, Gary L.; Cummings, Rick; Jones, Brian

    1992-01-01

    The microgravity materials processing program has been instrumental in providing the crystal growth community with an experimental environment to better understand the phenomena associated with the growing of crystals. In many applications one may pursue the growth of large single crystals which cannot be grown on earth due to convection-driven flows. A microgravity environment is characterized by neither convection nor buoyancy; consequently, superior crystals can be grown in space. On the other hand, since neither convection nor buoyancy dominates the fluid flow in a microgravity environment, lesser phenomena such as surface-driven flows or diffusion-limited solidification can affect crystal growth. In the case of experiments that are to be flown in space using the Fluid Experiment System (FES), diffusion-limited growth should be the dominating phenomenon. Holographic and Schlieren optical techniques for studying concentration gradients in solidification processes have been used by several investigators over the years. The Holographic Ground System (HGS) facility at MSFC has been a primary resource in researching this capability. Consequently, scientific personnel have been able to utilize these techniques in both ground-based research and space experiments. An important event in the scientific utilization of the HGS facilities was the TGS (triglycine sulfate) Crystal Growth and the Casting and Solidification Technology (CAST) experiments that were flown on the International Microgravity Lab (IML) mission in March of this year. The preparation and processing of these space observations are the primary experiments reported in this work. This project provides ground-based studies to optimize the holographic techniques used to acquire information about the crystal growth processes flown on IML. Since the ground-based studies will be compared with the space-based experimental results, sufficient ground-based studies must be conducted to best determine how the experiment worked in space. The current capabilities in computer-based systems for image processing and numerical computation have certainly assisted in those efforts. As anticipated, this study has shown that these advanced computing capabilities are helpful in the data analysis of such experiments.

  18. Models for integrated and differential scattering optical properties of encapsulated light absorbing carbon aggregates.

    PubMed

    Kahnert, Michael; Nousiainen, Timo; Lindqvist, Hannakaisa

    2013-04-08

    Optical properties of light absorbing carbon (LAC) aggregates encapsulated in a shell of sulfate are computed for realistic model geometries based on field measurements. Computations are performed for wavelengths from the UV-C to the mid-IR. Both climate- and remote sensing-relevant optical properties are considered. The results are compared to those of commonly used simplified model geometries, none of which gives a realistic representation of the distribution of the LAC mass within the host material; as a consequence, these simplified geometries fail to predict the optical properties accurately. A new core-gray shell model is introduced, which accurately reproduces the size- and wavelength-dependence of the integrated and differential optical properties.

  19. Rhetorical Consequences of the Computer Society: Expert Systems and Human Communication.

    ERIC Educational Resources Information Center

    Skopec, Eric Wm.

    Expert systems are computer programs that solve selected problems by modelling domain-specific behaviors of human experts. These computer programs typically consist of an input/output system that feeds data into the computer and retrieves advice, an inference system using the reasoning and heuristic processes of human experts, and a knowledge…

  20. Older Korean-American Adults' Attitudes toward the Computer

    ERIC Educational Resources Information Center

    Kwon, Hyuckhoon

    2009-01-01

    This study seeks to gain a holistic understanding of how older Korean-American adults' socio-demographic factors affect their attitudes toward the computer. The research was guided by four main questions: (1) What do participants describe as the consequences of their using the computer? (2) What attitudes toward the computer do participants…

  1. Integrating Numerical Computation into the Modeling Instruction Curriculum

    ERIC Educational Resources Information Center

    Caballero, Marcos D.; Burk, John B.; Aiken, John M.; Thoms, Brian D.; Douglas, Scott S.; Scanlon, Erin M.; Schatz, Michael F.

    2014-01-01

    Numerical computation (the use of a computer to solve, simulate, or visualize a physical problem) has fundamentally changed the way scientific research is done. Systems that are too difficult to solve in closed form are probed using computation. Experiments that are impossible to perform in the laboratory are studied numerically. Consequently, in…

  2. An Approach to Experimental Design for the Computer Analysis of Complex Phenomenon

    NASA Technical Reports Server (NTRS)

    Rutherford, Brian

    2000-01-01

    The ability to make credible system assessments, predictions and design decisions related to engineered systems and other complex phenomena is key to a successful program for many large-scale investigations in government and industry. Recently, many of these large-scale analyses have turned to computational simulation to provide much of the required information. Addressing specific goals in the computer analysis of these complex phenomena is often accomplished through the use of performance measures that are based on system response models. The response models are constructed using computer-generated responses together with physical test results where possible. They are often based on probabilistically defined inputs and generally require estimation of a set of response modeling parameters. As a consequence, the performance measures are themselves distributed quantities reflecting these variabilities and uncertainties. Uncertainty in the values of the performance measures leads to uncertainties in predicted performance and can cloud the decisions required of the analysis. A specific goal of this research has been to develop methodology that will reduce this uncertainty in an analysis environment where limited resources and system complexity together restrict the number of simulations that can be performed. An approach has been developed that is based on evaluation of the potential information provided by each "intelligently selected" candidate set of computer runs. Each candidate is evaluated by partitioning the performance measure uncertainty into two components: one component that could be explained through the additional computational simulation runs and a second that would remain uncertain. The portion explained is estimated using a probabilistic evaluation of likely results for the additional computational analyses based on what is currently known about the system. The set of runs indicating the largest potential reduction in uncertainty is then selected and the computational simulations are performed. Examples are provided to demonstrate this approach on small-scale problems; these examples give encouraging results. Directions for further research are indicated.
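    As a loose illustration of partitioning performance-measure uncertainty into an explainable and a residual component, the sketch below applies the law of total variance to a cheap Monte Carlo proxy: for each candidate (here, a subset of parameters that additional runs would resolve), the explainable share is estimated as Var(E[Y | resolved parameters]). The proxy model and candidate structure are assumptions, not the methodology of the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      def proxy_performance(theta):
          """Cheap stand-in for the response model; theta holds three uncertain parameters."""
          return np.sin(theta[:, 0]) + 0.5 * theta[:, 1] ** 2 + 0.1 * theta[:, 2]

      # Monte Carlo sample of the probabilistically defined inputs / model parameters.
      theta = rng.normal(size=(20000, 3))
      y = proxy_performance(theta)
      total_var = y.var()

      # Each "candidate" stands for a set of runs that would resolve a subset of parameters.
      candidates = {"runs resolving theta_0": [0],
                    "runs resolving theta_1": [1],
                    "runs resolving theta_0 and theta_2": [0, 2]}

      for name, subset in candidates.items():
          # Explainable component: Var(E[Y | resolved parameters]), estimated by binning.
          keys = np.floor(theta[:, subset] * 4.0).astype(int)
          _, labels = np.unique(keys, axis=0, return_inverse=True)
          labels = labels.ravel()
          cond_means = np.bincount(labels, weights=y) / np.bincount(labels)
          explained = cond_means[labels].var()
          print(f"{name}: could explain about {explained / total_var:.0%} of the uncertainty")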

  3. Information technology and ethics: An exploratory factor analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conger, S.; Loch, K.D.; Helft, B.L.

    1994-12-31

    Ethical dilemmas are situations in which a decision results in unpleasant consequences. The unpleasant consequences are treated as a zero-sum game in which someone always loses. Introducing information technology (IT) to a situation makes the recognition of a potential loser more abstract and difficult to identify, thus an ethical dilemma may go unrecognized. The computer mediates the human relationship, which causes a lost sense of contact with the person at the other end of the computer connection. In 1986, Richard O. Mason published an essay identifying privacy, accuracy, property, and access (PAPA) as the four main ethical issues of the information age. Anecdotes for each issue describe the injured party's perspective to identify consequences resulting from unethical use of information and information technology. This research sought to validate Mason's social issues empirically, but with distinct differences. Mason defined issues to raise awareness and initiate debate on the need for a social agenda; our focus is on individual computer users and the attitudes they hold about ethical behavior in computer use. This study examined the attitudes of the computer user who experiences the ethical dilemma to determine the extent to which ethical components are recognized, and whether Mason's issues form recognizable constructs.

  4. An integrated computer-based procedure for teamwork in digital nuclear power plants.

    PubMed

    Gao, Qin; Yu, Wenzhu; Jiang, Xiang; Song, Fei; Pan, Jiajie; Li, Zhizhong

    2015-01-01

    Computer-based procedures (CBPs) are expected to improve operator performance in nuclear power plants (NPPs), but they may reduce the openness of interaction between team members and consequently harm teamwork. To support teamwork in the main control room of an NPP, this study proposed a team-level integrated CBP that presents team members' operation status and execution histories to one another. Through a laboratory experiment, we compared the new integrated design and the existing individual CBP design. Sixty participants, randomly divided into twenty teams of three people each, were assigned to the two conditions to perform simulated emergency operating procedures. The results showed that compared with the existing CBP design, the integrated CBP reduced the effort of team communication and improved team transparency. The results suggest that this novel design is effective in optimizing team processes, but its impact on behavioural outcomes may be moderated by other factors, such as task duration. The study proposed and evaluated a team-level integrated computer-based procedure, which presents team members' operation status and execution histories to one another. The experimental results show that compared with the traditional procedure design, the integrated design reduces the effort of team communication and improves team transparency.

  5. Identifying controlling variables for math computation fluency through experimental analysis: the interaction of stimulus control and reinforcing consequences.

    PubMed

    Hofstadter-Duke, Kristi L; Daly, Edward J

    2015-03-01

    This study investigated a method for conducting experimental analyses of academic responding. In the experimental analyses, academic responding (math computation), rather than problem behavior, was reinforced across conditions. Two separate experimental analyses (one with fluent math computation problems and one with non-fluent math computation problems) were conducted with three elementary school children using identical contingencies while math computation rate was measured. Results indicate that the experimental analysis with non-fluent problems produced undifferentiated responding across participants; however, differentiated responding was achieved for all participants in the experimental analysis with fluent problems. A subsequent comparison of the single-most effective condition from the experimental analyses replicated the findings with novel computation problems. Results are discussed in terms of the critical role of stimulus control in identifying controlling consequences for academic deficits, and recommendations for future research refining and extending experimental analysis to academic responding are made. © The Author(s) 2014.

  6. Choices and Consequences.

    ERIC Educational Resources Information Center

    Thorp, Carmany

    1995-01-01

    Describes student use of Hyperstudio computer software to create history adventure games. History came alive while students learned efficient writing skills; learned to understand and manipulate cause, effect, choice, and consequence; and learned to incorporate succinct locational, climatic, and historical detail. (ET)

  7. Evidence-based medicine: what has happened in the past 50 years?

    PubMed

    Mellis, Craig

    2015-01-01

    Although the phrase 'evidence-based medicine' (EBM) was used for the first time in the medical literature less than 25 years ago, the history of EBM goes back for centuries. What is remarkable is how popular and how globally accepted the EBM movement has become in such a short time. Many famous, past clinicians have played major roles in the disciplines that preceded EBM, particularly 'clinical epidemiology'. It soon became clear to the early EBM champions that 'evidence' was only part of the clinical decision-making process. Consequently, both clinical expertise and the patient's values and preferences were rapidly incorporated into the concept we now know as 'EBM'. The current need for high-quality, easily accessible 'evidence-based summaries' for busy clinicians is now apparent, as traditional EBM requires both considerable time and skill. Consequently, there is a progressive move away from the primary literature (such as randomised controlled trials) to systematic reviews and other 'evidence-based summaries'. The future of EBM will almost certainly involve widespread utilisation of 'clinical (computer)-based decision support systems'. © 2014 The Author. Journal of Paediatrics and Child Health © 2014 Paediatrics and Child Health Division (Royal Australasian College of Physicians).

  8. Manipulating attention via mindfulness induction improves P300-based brain-computer interface performance

    NASA Astrophysics Data System (ADS)

    Lakey, Chad E.; Berry, Daniel R.; Sellers, Eric W.

    2011-04-01

    In this study, we examined the effects of a short mindfulness meditation induction (MMI) on the performance of a P300-based brain-computer interface (BCI) task. We expected that MMI would harness present-moment attentional resources, resulting in two positive consequences for P300-based BCI use. Specifically, we believed that MMI would facilitate increases in task accuracy and promote the production of robust P300 amplitudes. Sixteen-channel electroencephalographic data were recorded from 18 subjects using a row/column speller task paradigm. Nine subjects participated in a 6 min MMI and an additional nine subjects served as a control group. Subjects were presented with a 6 × 6 matrix of alphanumeric characters on a computer monitor. Stimuli were flashed at a stimulus onset asynchrony (SOA) of 125 ms. Calibration data were collected on 21 items without providing feedback. These data were used to derive a stepwise linear discriminate analysis classifier that was applied to an additional 14 items to evaluate accuracy. Offline performance analyses revealed that MMI subjects were significantly more accurate than control subjects. Likewise, MMI subjects produced significantly larger P300 amplitudes than control subjects at Cz and PO7. The discussion focuses on the potential attentional benefits of MMI for P300-based BCI performance.
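    The calibrate-then-classify workflow can be sketched as below; note that the study used stepwise linear discriminant analysis (SWLDA), whereas this illustration substitutes ordinary LDA from scikit-learn and synthetic epoch features, so it only mirrors the structure of the analysis.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(0)

      # Hypothetical calibration data: 16 channels x 50 time samples per flash epoch,
      # flattened into feature vectors; label 1 = target (attended) flash, 0 = non-target.
      n_epochs, n_features = 600, 16 * 50
      X = rng.normal(size=(n_epochs, n_features))
      y = rng.integers(0, 2, size=n_epochs)
      X[y == 1] += 0.05                       # crude stand-in for a P300 amplitude difference

      # The study applies stepwise LDA (SWLDA); plain LDA is used here as a substitute.
      clf = LinearDiscriminantAnalysis()
      clf.fit(X[:400], y[:400])               # "calibration" portion

      # Score new flashes: the row/column whose flashes score highest would be selected.
      scores = clf.decision_function(X[400:])
      print("mean target score:", scores[y[400:] == 1].mean())
      print("mean non-target score:", scores[y[400:] == 0].mean())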

  9. A Method for Extracting Suspected Parotid Lesions in CT Images using Feature-based Segmentation and Active Contours based on Stationary Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Wu, T. Y.; Lin, S. F.

    2013-10-01

    Automatic suspected lesion extraction is an important application in computer-aided diagnosis (CAD). In this paper, we propose a method to automatically extract suspected parotid regions for clinical evaluation in head and neck CT images. The suspected lesion tissues in low-contrast tissue regions can be localized with feature-based segmentation (FBS) based on local texture features, and can be delineated accurately by modified active contour models (ACM). First, stationary wavelet transform (SWT) is introduced. The derived wavelet coefficients are applied to derive the local features for FBS and to generate enhanced energy maps for ACM computation. Geometric shape features (GSFs) are proposed to analyze each soft tissue region segmented by FBS; the regions whose GSFs are most similar to those of lesions are extracted, and this information is also applied as the initial condition for fine delineation computation. Consequently, the suspected lesions can be automatically localized and accurately delineated to aid clinical diagnosis. The performance of the proposed method is evaluated by comparing with results outlined by clinical experts. The experiments on 20 pathological CT data sets show that the true-positive (TP) rate in recognizing parotid lesions is about 94%, and the dimensional accuracy of the delineation results can also approach over 93%.
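    A small sketch of the stationary-wavelet step is given below: it computes a local texture-energy map from SWT detail coefficients and thresholds it to localize a textured region. It assumes an image whose sides are divisible by 2**level and is not the paper's full FBS/ACM pipeline; names and thresholds are illustrative.

      import numpy as np
      import pywt

      def swt_energy_map(image, wavelet="haar", level=2):
          """Texture-energy map from stationary wavelet detail coefficients.

          Sketch only: image dimensions must be divisible by 2**level, and the
          energy map is simply the sum of squared detail coefficients over all
          levels and orientations.
          """
          coeffs = pywt.swt2(image, wavelet, level=level)
          energy = np.zeros_like(image, dtype=float)
          for _, (cH, cV, cD) in coeffs:
              energy += cH ** 2 + cV ** 2 + cD ** 2
          return energy

      # Toy "CT slice": smooth background with a small textured patch.
      img = np.zeros((128, 128))
      img[40:60, 50:80] = np.random.default_rng(0).normal(0, 1, (20, 30))
      emap = swt_energy_map(img)
      mask = emap > emap.mean() + 2 * emap.std()      # crude localization of the textured region
      print("suspected-region pixels:", int(mask.sum()))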

  10. Knowledge base about earthquakes as a tool to minimize strong events consequences

    NASA Astrophysics Data System (ADS)

    Frolova, Nina; Bonnin, Jean; Larionov, Valery; Ugarov, Alexander; Kijko, Andrzej

    2017-04-01

    The paper describes the structure and content of a knowledge base on the physical and socio-economic consequences of damaging earthquakes, which may be used for calibration of near-real-time loss assessment systems based on simulation models for shaking intensity, damage to buildings and casualty estimates. Such calibration compensates for some of the factors that influence the reliability of expected damage and loss assessment in "emergency" mode. The knowledge base contains descriptions of the consequences of past earthquakes for the area under study. It also includes the distribution of the built environment and population at the time of event occurrence. Computer simulation of the events recorded in the knowledge base allows the determination of sets of regional calibration coefficients, including the rating of seismological surveys, peculiarities of shaking intensity attenuation, and changes in building stock and population distribution, in order to minimize the error of loss estimates for damaging earthquakes in "emergency" mode. References: 1. Larionov, V., Frolova, N.: Peculiarities of seismic vulnerability estimations. In: Natural Hazards in Russia, volume 6: Natural Risks Assessment and Management, Publishing House "Kruk", Moscow, 120-131, 2003. 2. Frolova, N., Larionov, V., Bonnin, J.: Data Bases Used in Worldwide Systems for Earthquake Loss Estimation in Emergency Mode: Wenchuan Earthquake. In: Proc. TIEMS 2010 Conference, Beijing, China, 2010. 3. Frolova, N. I., Larionov, V. I., Bonnin, J., Sushchev, S. P., Ugarov, A. N., Kozlov, M. A.: Loss Caused by Earthquakes: Rapid Estimates. Natural Hazards (Journal of the International Society for the Prevention and Mitigation of Natural Hazards), vol. 84, ISSN 0921-030, DOI 10.1007/s11069-016-2653

  11. Can Computers be Social?

    NASA Astrophysics Data System (ADS)

    Ekdahl, Bertil

    2002-09-01

    Of main concern in agent-based computing is the conception that software agents can attain socially responsible behavior. This idea has its origin in the need for agents to interact with one another in a cooperative manner. Such interplay between several agents can be seen as a combinatorial game where the rules are fixed and the actors are supposed to analyze the play closely in order to behave rationally. This kind of rationality has successfully been described mathematically. When social behavior is extended beyond rational behavior, mere mathematical analysis falls short. For such behavior, language is decisive for transferring concepts, and language is a holistic entity that cannot be analyzed and defined mathematically. Accordingly, computers cannot be furnished with a language in the sense that meaning can be conveyed, and consequently they lack the properties necessary to be made social. The attempts to ascribe mental properties to computer programs are a misconception that stems from the lack of a true understanding of language, and especially of the relation between a formal system and its semantics.

  12. [The Triumph of "Stupidity": Deep Blue's Victory over Garri Kasparov. The Controversy about its Impact on Artificial Intelligence Research].

    PubMed

    Heßler, Martina

    2017-03-01

    The competition between the chess computer Deep Blue and the former chess world champion Garri Kasparov in 1997 was a spectacle staged for the media. However, the chess game, like other games, was also a test field for artificial intelligence research. On the one hand Deep Blue's victory was called a "milestone" for AI research, on the other hand, a dead end, since the superiority of the chess computer was based on pure computing power and had nothing to do with "real" AI.The article questions the premises of these different interpretations and maps Deep Blue and its way of playing chess into the history of AI. This also requires an analysis of the underlying concepts of thinking. Finally, the essay calls for assuming different "ways of thinking" for man and computer. Instead of fundamental discussions of concepts of thinking, we should ask about the consequences of the human-machine division of labor.

  13. Physically Based Virtual Surgery Planning and Simulation Tools for Personal Health Care Systems

    NASA Astrophysics Data System (ADS)

    Dogan, Firat; Atilgan, Yasemin

    Virtual surgery planning and simulation tools have gained a great deal of importance in the last decade as a consequence of increasing capabilities at the information technology level. Modern hardware architectures, large-scale database systems, grid-based computer networks, agile development processes, better 3D visualization and all the other strong aspects of information technology bring the necessary instruments onto almost every desk. The special software and sophisticated supercomputer environments of the last decade now serve individual needs inside "tiny smart boxes" at reasonable prices. However, resistance to learning new computerized environments, insufficient training and all the other old habits prevent effective utilization of IT resources by the specialists of the health sector. In this paper, all aspects of former and current developments in surgery planning and simulation-related tools are presented, and future directions and expectations are investigated for better electronic health care systems.

  14. Performance Analysis of an Actor-Based Distributed Simulation

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1998-01-01

    Object-oriented design of simulation programs appears to be very attractive because of the natural association of components in the simulated system with objects. There is great potential in distributing the simulation across several computers for the purpose of parallel computation and its consequent handling of larger problems in less elapsed time. One approach to such a design is to use "actors", that is, active objects with their own thread of control. Because these objects execute concurrently, communication is via messages. This is in contrast to an object-oriented design using passive objects where communication between objects is via method calls (direct calls when they are in the same address space and remote procedure calls when they are in different address spaces or different machines). This paper describes a performance analysis program for the evaluation of a design for distributed simulations based upon actors.
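    The actor idea described above (active objects with their own thread of control that communicate only via messages) can be sketched minimally as follows; this is an illustration of the pattern, not the paper's performance-analysis program.

      import threading
      import queue

      class Actor:
          """Active object: owns a thread and a mailbox; communicates only via messages."""
          def __init__(self, name):
              self.name = name
              self.mailbox = queue.Queue()
              self.thread = threading.Thread(target=self._run, daemon=True)
              self.thread.start()

          def send(self, msg):
              self.mailbox.put(msg)               # asynchronous message passing

          def _run(self):
              while True:
                  msg = self.mailbox.get()
                  if msg is None:                 # poison pill terminates the actor
                      break
                  self.handle(msg)

          def handle(self, msg):
              print(f"{self.name} advanced simulation time to {msg}")

      # Two simulation components running as concurrently executing actors.
      a, b = Actor("component-A"), Actor("component-B")
      for t in range(3):
          a.send(t)
          b.send(t)
      a.send(None); b.send(None)
      a.thread.join(); b.thread.join()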

  15. Visualization of decision processes using a cognitive architecture

    NASA Astrophysics Data System (ADS)

    Livingston, Mark A.; Murugesan, Arthi; Brock, Derek; Frost, Wende K.; Perzanowski, Dennis

    2013-01-01

    Cognitive architectures are computational theories of reasoning the human mind engages in as it processes facts and experiences. A cognitive architecture uses declarative and procedural knowledge to represent mental constructs that are involved in decision making. Employing a model of behavioral and perceptual constraints derived from a set of one or more scenarios, the architecture reasons about the most likely consequence(s) of a sequence of events. Reasoning of any complexity and depth involving computational processes, however, is often opaque and challenging to comprehend. Arguably, for decision makers who may need to evaluate or question the results of autonomous reasoning, it would be useful to be able to inspect the steps involved in an interactive, graphical format. When a chain of evidence and constraint-based decision points can be visualized, it becomes easier to explore both how and why a scenario of interest will likely unfold in a particular way. In initial work on a scheme for visualizing cognitively-based decision processes, we focus on generating graphical representations of models run in the Polyscheme cognitive architecture. Our visualization algorithm operates on a modified version of Polyscheme's output, which is accomplished by augmenting models with a simple set of tags. We provide example visualizations and discuss properties of our technique that pose challenges for our representation goals. We conclude with a summary of feedback solicited from domain experts and practitioners in the field of cognitive modeling.

  16. Inverse kinematic-based robot control

    NASA Technical Reports Server (NTRS)

    Wolovich, W. A.; Flueckiger, K. F.

    1987-01-01

    A fundamental problem which must be resolved in virtually all non-trivial robotic operations is the well-known inverse kinematic question. More specifically, most of the tasks which robots are called upon to perform are specified in Cartesian (x, y, z) space, such as simple tracking along one or more straight line paths or following a specified surface with compliant force sensors and/or visual feedback. In all cases, control is actually implemented through coordinated motion of the various links which comprise the manipulator, i.e., in link space. As a consequence, the control computer of every sophisticated anthropomorphic robot must contain provisions for solving the inverse kinematic problem which, in the case of simple, non-redundant position control, involves the determination of the first three link angles, theta_1, theta_2, and theta_3, which produce a desired wrist origin position P_xw, P_yw, and P_zw at the end of link 3 relative to some fixed base frame. Researchers outline a new inverse kinematic solution and demonstrate its potential via some recent computer simulations. They also compare it to current inverse kinematic methods and outline some of the remaining problems which will be addressed in order to render it fully operational. Also discussed are a number of practical consequences of this technique beyond its obvious use in solving the inverse kinematic question.
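    The report's problem involves three joint angles for a wrist position in 3D; the sketch below solves the simpler, standard planar two-link case in closed form, purely to make the inverse kinematic question concrete, and is not the method proposed in the report.

      import math

      def two_link_ik(x, y, l1, l2, elbow_up=True):
          """Closed-form inverse kinematics for a planar two-link arm.

          Returns joint angles (theta1, theta2) placing the end effector at (x, y),
          or raises ValueError if the point is outside the reachable workspace.
          """
          r2 = x * x + y * y
          c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
          if abs(c2) > 1.0:
              raise ValueError("target out of reach")
          s2 = math.sqrt(1.0 - c2 * c2) * (1.0 if elbow_up else -1.0)
          theta2 = math.atan2(s2, c2)
          theta1 = math.atan2(y, x) - math.atan2(l2 * s2, l1 + l2 * c2)
          return theta1, theta2

      # Verify by forward kinematics.
      t1, t2 = two_link_ik(1.2, 0.5, l1=1.0, l2=1.0)
      fx = math.cos(t1) + math.cos(t1 + t2)
      fy = math.sin(t1) + math.sin(t1 + t2)
      print(f"theta1={t1:.3f}, theta2={t2:.3f}, reaches ({fx:.3f}, {fy:.3f})")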

  17. A network-analysis-based comparative study of the throughput behavior of polymer melts in barrier screw geometries

    NASA Astrophysics Data System (ADS)

    Aigner, M.; Köpplmayr, T.; Kneidinger, C.; Miethlinger, J.

    2014-05-01

    Barrier screws are widely used in the plastics industry. Due to the extreme diversity of their geometries, describing the flow behavior is difficult and rarely done in practice. We present a systematic approach based on networks that uses tensor algebra and numerical methods to model and calculate selected barrier screw geometries in terms of pressure, mass flow, and residence time. In addition, we report the results of three-dimensional simulations using the commercially available ANSYS Polyflow software. The major drawbacks of three-dimensional finite-element-method (FEM) simulations are that they require vast computational power and large quantities of memory, and that considerable time is consumed in creating a geometric model with computer-aided design (CAD) and completing a flow calculation. Consequently, a modified 2.5-dimensional finite volume method, termed network analysis, is preferable. The results obtained by network analysis and FEM simulations correlated well. Network analysis provides an efficient alternative to complex FEM software in terms of computing power and memory consumption. Furthermore, typical barrier screw geometries can be parameterized and used for flow calculations without time-consuming CAD constructions.
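    The network idea can be illustrated with a toy nodal calculation, treating channel segments as edges with assumed Newtonian flow conductances and solving a linear system for node pressures, exactly as one would for a resistor network; this is a didactic stand-in, not the authors' 2.5-dimensional finite volume formulation.

      import numpy as np

      # Toy network: nodes 0..3, edges = channel segments with flow conductance k
      # (volume flow = k * pressure difference, a Newtonian, fully developed assumption).
      edges = [(0, 1, 2.0), (1, 2, 1.0), (1, 3, 1.5), (2, 3, 0.5)]
      n = 4
      inflow = np.array([1.0, 0.0, 0.0, -1.0])    # unit throughput enters node 0, leaves node 3

      # Assemble the nodal conductance matrix (same structure as a resistor network).
      K = np.zeros((n, n))
      for i, j, k in edges:
          K[i, i] += k; K[j, j] += k
          K[i, j] -= k; K[j, i] -= k

      # Fix the outlet pressure to zero to remove the singularity, then solve.
      K_red, f_red = K[:-1, :-1], inflow[:-1]
      p = np.append(np.linalg.solve(K_red, f_red), 0.0)

      for i, j, k in edges:
          print(f"segment {i}->{j}: flow = {k * (p[i] - p[j]):+.3f}")
      print("node pressures:", np.round(p, 3))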

  18. Supporting students' learning in the domain of computer science

    NASA Astrophysics Data System (ADS)

    Gasparinatou, Alexandra; Grigoriadou, Maria

    2011-03-01

    Previous studies have shown that students with low knowledge understand and learn better from more cohesive texts, whereas high-knowledge students have been shown to learn better from texts of lower cohesion. This study examines whether high-knowledge readers in computer science benefit from a text of low cohesion. Undergraduate students (n = 65) read one of four versions of a text concerning Local Network Topologies, orthogonally varying local and global cohesion. Participants' comprehension was examined through free-recall measure, text-based, bridging-inference, elaborative-inference, problem-solving questions and a sorting task. The results indicated that high-knowledge readers benefited from the low-cohesion text. The interaction of text cohesion and knowledge was reliable for the sorting activity, for elaborative-inference and for problem-solving questions. Although high-knowledge readers performed better in text-based and in bridging-inference questions with the low-cohesion text, the interaction of text cohesion and knowledge was not reliable. The results suggest a more complex view of when and for whom textual cohesion affects comprehension and consequently learning in computer science.

  19. The challenge of raising ethical awareness: a case-based aiding system for use by computing and ICT students.

    PubMed

    Sherratt, Don; Rogerson, Simon; Ben Fairweather, N

    2005-04-01

    Students, the future Information and Communication Technology (ICT) professionals, are often perceived to have little understanding of the ethical issues associated with the use of ICTs. There is a growing recognition that the moral issues associated with the use of the new technologies should be brought to the attention of students. Furthermore, they should be encouraged to explore and think more deeply about the social and legal consequences of the use of ICTs. This paper describes the development of a tool designed to raise students' awareness of the social impact of ICTs. The tool offers guidance to students undertaking computing and computer-related courses when considering the social, legal and professional implications of the actions of participants in situations of ethical conflict. However, unlike previous work in this field, this tool is not based on an artificial intelligence paradigm. Aspects of the theoretical basis for the design of the tool and the tool's practical development are discussed. Preliminary results from the testing of the tool are also discussed.

  20. The Shortlist Method for fast computation of the Earth Mover's Distance and finding optimal solutions to transportation problems.

    PubMed

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. Especially the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method.
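    To make the underlying optimization concrete, the sketch below solves a small balanced transportation problem with SciPy's general-purpose LP solver; it illustrates the problem that the Shortlist Method accelerates, but it does not implement the Shortlist Method or the revised simplex pivoting strategy.

      import numpy as np
      from scipy.optimize import linprog

      # Small transportation problem: 2 supplies, 3 demands, unit costs cost[i][j].
      supply = np.array([7.0, 5.0])
      demand = np.array([4.0, 5.0, 3.0])
      cost = np.array([[2.0, 4.0, 5.0],
                       [3.0, 1.0, 7.0]])

      m, n = cost.shape
      c = cost.ravel()                                  # decision variables x[i, j], flattened

      # Equality constraints: row sums equal supply, column sums equal demand.
      A_eq = np.zeros((m + n, m * n))
      for i in range(m):
          A_eq[i, i * n:(i + 1) * n] = 1.0
      for j in range(n):
          A_eq[m + j, j::n] = 1.0
      b_eq = np.concatenate([supply, demand])

      res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
      plan = res.x.reshape(m, n)
      print("optimal transport plan:\n", plan)
      print("total cost:", res.fun)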

  1. The Shortlist Method for Fast Computation of the Earth Mover's Distance and Finding Optimal Solutions to Transportation Problems

    PubMed Central

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. Especially the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method. PMID:25310106

  2. Cloud4Psi: cloud computing for 3D protein structure similarity searching.

    PubMed

    Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Kłapciński, Artur

    2014-10-01

    Popular methods for 3D protein structure similarity searching, especially those that generate high-quality alignments such as Combinatorial Extension (CE) and Flexible structure Alignment by Chaining Aligned fragment pairs allowing Twists (FATCAT) are still time consuming. As a consequence, performing similarity searching against large repositories of structural data requires increased computational resources that are not always available. Cloud computing provides huge amounts of computational power that can be provisioned on a pay-as-you-go basis. We have developed the cloud-based system that allows scaling of the similarity searching process vertically and horizontally. Cloud4Psi (Cloud for Protein Similarity) was tested in the Microsoft Azure cloud environment and provided good, almost linearly proportional acceleration when scaled out onto many computational units. Cloud4Psi is available as Software as a Service for testing purposes at: http://cloud4psi.cloudapp.net/. For source code and software availability, please visit the Cloud4Psi project home page at http://zti.polsl.pl/dmrozek/science/cloud4psi.htm. © The Author 2014. Published by Oxford University Press.

  3. PRESAGE: Protecting Structured Address Generation against Soft Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently to soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors for faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that flows an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.

  4. PRESAGE: Protecting Structured Address Generation against Soft Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently to soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors to detect faults during address generation have not been widely researched (especially in the context of indexing large arrays). We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Ensuring the propagation of errors allows one to place detectors at loop exit points and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.

  5. Cloud4Psi: cloud computing for 3D protein structure similarity searching

    PubMed Central

    Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Kłapciński, Artur

    2014-01-01

    Summary: Popular methods for 3D protein structure similarity searching, especially those that generate high-quality alignments such as Combinatorial Extension (CE) and Flexible structure Alignment by Chaining Aligned fragment pairs allowing Twists (FATCAT) are still time consuming. As a consequence, performing similarity searching against large repositories of structural data requires increased computational resources that are not always available. Cloud computing provides huge amounts of computational power that can be provisioned on a pay-as-you-go basis. We have developed the cloud-based system that allows scaling of the similarity searching process vertically and horizontally. Cloud4Psi (Cloud for Protein Similarity) was tested in the Microsoft Azure cloud environment and provided good, almost linearly proportional acceleration when scaled out onto many computational units. Availability and implementation: Cloud4Psi is available as Software as a Service for testing purposes at: http://cloud4psi.cloudapp.net/. For source code and software availability, please visit the Cloud4Psi project home page at http://zti.polsl.pl/dmrozek/science/cloud4psi.htm. Contact: dariusz.mrozek@polsl.pl PMID:24930141

  6. Bio-steps beyond Turing.

    PubMed

    Calude, Cristian S; Păun, Gheorghe

    2004-11-01

    Are there 'biologically computing agents' capable of computing Turing-uncomputable functions? It is perhaps tempting to dismiss this question with a negative answer. Quite the opposite: for the first time in the literature on molecular computing, we contend that the answer is not theoretically negative. Our results are formulated in the language of membrane computing (P systems). Some mathematical results presented here are interesting in themselves. In contrast with most speed-up methods, which are based on non-determinism, our results rest upon universality results proved for deterministic P systems. These results are used for building "accelerated P systems". In contrast with the case of Turing machines, acceleration is a part of the hardware (not a quality of the environment) and is realised either by decreasing the size of "reactors" or by speeding up the communication channels. Consequently, two acceleration postulates of biological inspiration are introduced; each of them poses specific questions to biology. Finally, in a more speculative part of the paper, we deal with Turing non-computable activity of the brain and possible forms of (extraterrestrial) intelligence.

  7. A Taylor Expansion-Based Adaptive Design Strategy for Global Surrogate Modeling With Applications in Groundwater Modeling

    DOE PAGES

    Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; ...

    2017-12-27

    Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost of surrogate construction and consequently to improve the efficiency of the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search for informative samples and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions of different dimensionality and complexity, in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method to two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
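    A minimal numpy sketch of a TEAD-style hybrid score is given below; it is an assumption-laden illustration, not the authors' implementation. The exploration term rewards distance from existing samples, while the exploitation term rewards disagreement between the current surrogate and its first-order Taylor expansion about the nearest sample; the `surrogate` callable, the weighting `w`, and the normalisation are all illustrative choices.

        import numpy as np

        def tead_like_score(candidates, samples, values, grads, surrogate, w=0.5):
            """Rank candidate designs by a hybrid exploration/exploitation score."""
            # Exploration: distance from each candidate to its nearest evaluated sample.
            d2 = ((candidates[:, None, :] - samples[None, :, :]) ** 2).sum(axis=-1)
            nearest = d2.argmin(axis=1)
            dist = np.sqrt(d2[np.arange(len(candidates)), nearest])

            # Exploitation: |surrogate prediction - first-order Taylor expansion|
            # about the nearest sample; large values flag poorly resolved regions.
            delta = candidates - samples[nearest]
            taylor = values[nearest] + (grads[nearest] * delta).sum(axis=1)
            discrepancy = np.abs(surrogate(candidates) - taylor)

            norm = lambda x: x / (x.max() + 1e-12)
            return w * norm(dist) + (1.0 - w) * norm(discrepancy)

        # The next training point would be candidates[score.argmax()]; the adaptive
        # loop stops once the score (or a related error estimate) drops below a tolerance.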

  8. A Taylor Expansion-Based Adaptive Design Strategy for Global Surrogate Modeling With Applications in Groundwater Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing

    Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost of surrogate construction and consequently to improve the efficiency of the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search for informative samples and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions of different dimensionality and complexity, in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method to two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.

  9. Intelligent model-based diagnostics for vehicle health management

    NASA Astrophysics Data System (ADS)

    Luo, Jianhui; Tu, Fang; Azam, Mohammad S.; Pattipati, Krishna R.; Willett, Peter K.; Qiao, Liu; Kawamoto, Masayuki

    2003-08-01

    The recent advances in sensor technology, remote communication and computational capabilities, and standardized hardware/software interfaces are creating a dramatic shift in the way the health of vehicles is monitored and managed. These advances facilitate remote monitoring, diagnosis and condition-based maintenance of automotive systems. With the increased sophistication of electronic control systems in vehicles, there is a concomitant increased difficulty in the identification of the malfunction phenomena. Consequently, the current rule-based diagnostic systems are difficult to develop, validate and maintain. New intelligent model-based diagnostic methodologies that exploit the advances in sensor, telecommunications, computing and software technologies are needed. In this paper, we will investigate hybrid model-based techniques that seamlessly employ quantitative (analytical) models and graph-based dependency models for intelligent diagnosis. Automotive engineers have found quantitative simulation (e.g. MATLAB/SIMULINK) to be a vital tool in the development of advanced control systems. The hybrid method exploits this capability to improve the diagnostic system's accuracy and consistency, utilizes existing validated knowledge on rule-based methods, enables remote diagnosis, and responds to the challenges of increased system complexity. The solution is generic and has the potential for application in a wide range of systems.

  10. Two-boundary grid generation for the solution of the three dimensional compressible Navier-Stokes equations. Ph.D. Thesis - Old Dominion Univ.

    NASA Technical Reports Server (NTRS)

    Smith, R. E.

    1981-01-01

    A grid generation technique called the two boundary technique is developed and applied for the solution of the three dimensional Navier-Stokes equations. The Navier-Stokes equations are transformed from a cartesian coordinate system to a computational coordinate system, and the grid generation technique provides the Jacobian matrix describing the transformation. The two boundary technique is based on algebraically defining two distinct boundaries of a flow domain and the distribution of the grid is achieved by applying functions to the uniform computational grid which redistribute the computational independent variables and consequently concentrate or disperse the grid points in the physical domain. The Navier-Stokes equations are solved using a MacCormack time-split technique. Grids and supersonic laminar flow solutions are obtained for a family of three dimensional corners and two spike-nosed bodies.

  11. "I Am Very Good at Computers": Young Children's Computer Use and Their Computer Self-Esteem

    ERIC Educational Resources Information Center

    Hatzigianni, Maria; Margetts, Kay

    2012-01-01

    Children frequently encounter computers in many aspects of daily life. It is important to consider the consequences not only on children's cognitive development but on their emotional and self-development. This paper reports on research undertaken in Australia with 52 children aged between 44 and 79 months to explore the existence or not of a…

  12. Consequences of bounds on longitudinal emittance growth for the design of recirculating linear accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berg, J. S.

    2015-05-03

    Recirculating linear accelerators (RLAs) are a cost-effective method for the acceleration of muons for a muon collider in energy ranges from a couple of GeV to a few tens of GeV. Muon beams generally have longitudinal emittances that are large for the RF frequency that is used, and it is important to limit the growth of that longitudinal emittance. This has particular consequences for the arc design of the RLAs. I estimate the longitudinal emittance growth in an RLA arising from the RF nonlinearity. Given an emittance growth limitation and other design parameters, one can then compute the maximum momentum compaction in the arcs. I describe how to obtain an approximate arc design satisfying these requirements based on the design in [1]. Longitudinal dynamics also determine the energy spread in the beam, and this has consequences for the transverse phase advance in the linac. This in turn has consequences for the arc design due to the need to match beta functions. I combine these considerations to discuss design parameters for the acceleration of muons for a collider in an RLA from 5 to 63 GeV.

  13. Statistical surrogate models for prediction of high-consequence climate change.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Constantine, Paul; Field, Richard V., Jr.; Boslough, Mark Bruce Elrick

    2011-09-01

    In safety engineering, performance metrics are defined using probabilistic risk assessments focused on the low-probability, high-consequence tail of the distribution of possible events, as opposed to best estimates based on central tendencies. We frame the climate change problem and its associated risks in a similar manner. To properly explore the tails of the distribution requires extensive sampling, which is not possible with existing coupled atmospheric models due to the high computational cost of each simulation. We therefore propose the use of specialized statistical surrogate models (SSMs) for the purpose of exploring the probability law of various climate variables of interest. A SSM is different than a deterministic surrogate model in that it represents each climate variable of interest as a space/time random field. The SSM can be calibrated to available spatial and temporal data from existing climate databases, e.g., the Program for Climate Model Diagnosis and Intercomparison (PCMDI), or to a collection of outputs from a General Circulation Model (GCM), e.g., the Community Earth System Model (CESM) and its predecessors. Because of its reduced size and complexity, the realization of a large number of independent model outputs from a SSM becomes computationally straightforward, so that quantifying the risk associated with low-probability, high-consequence climate events becomes feasible. A Bayesian framework is developed to provide quantitative measures of confidence, via Bayesian credible intervals, in the use of the proposed approach to assess these risks.

  14. Computational Modeling and Simulation of Genital Tubercle Development

    EPA Pesticide Factsheets

    Hypospadias is a developmental defect of urethral tube closure that has a complex etiology involving genetic and environmental factors, including anti-androgenic and estrogenic disrupting chemicals; however, little is known about the morphoregulatory consequences of androgen/estrogen balance during genital tubercle (GT) development. Computer models that predictively model sexual dimorphism of the GT may provide a useful resource to translate chemical-target bipartite networks and their developmental consequences across the human-relevant chemical universe. Here, we describe a multicellular agent-based model of genital tubercle (GT) development that simulates urethrogenesis from the sexually-indifferent urethral plate stage to urethral tube closure. The prototype model, constructed in CompuCell3D, recapitulates key aspects of GT morphogenesis controlled by SHH, FGF10, and androgen pathways through modulation of stochastic cell behaviors, including differential adhesion, motility, proliferation, and apoptosis. Proper urethral tube closure in the model was shown to depend quantitatively on SHH- and FGF10-induced effects on mesenchymal proliferation and epithelial apoptosis, both ultimately linked to androgen signaling. In the absence of androgen, GT development was feminized and with partial androgen deficiency, the model resolved with incomplete urethral tube closure, thereby providing an in silico platform for probabilistic prediction of hypospadias risk across c

  15. A new model of sensorimotor coupling in the development of speech.

    PubMed

    Westermann, Gert; Reck Miranda, Eduardo

    2004-05-01

    We present a computational model that learns a coupling between motor parameters and their sensory consequences in vocal production during a babbling phase. Based on the coupling, preferred motor parameters and prototypically perceived sounds develop concurrently. Exposure to an ambient language modifies perception to coincide with the sounds from the language. The model develops motor mirror neurons that are active when an external sound is perceived. An extension to visual mirror neurons for oral gestures is suggested.

  16. On the Use of Equivalent Linearization for High-Cycle Fatigue Analysis of Geometrically Nonlinear Structures

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.

    2003-01-01

    The use of stress predictions from equivalent linearization analyses in the computation of high-cycle fatigue life is examined. Stresses so obtained differ in behavior from the fully nonlinear analysis in both spectral shape and amplitude. Consequently, fatigue life predictions made using this data will be affected. Comparisons of fatigue life predictions based upon the stress response obtained from equivalent linear and numerical simulation analyses are made to determine the range over which the equivalent linear analysis is applicable.

  17. Computer Competency of Nursing Students at a University in Thailand

    ERIC Educational Resources Information Center

    Niyomkar, Srimana

    2012-01-01

    During the past years, computer and information technology has been rapidly integrated into the education and healthcare fields. In the 21st century, computers are more powerful than ever, and are used in all aspects of nursing, including education, practice, policy, and research. Consequently, student nurses will need to utilize computer…

  18. A Quantitative Investigation of Cloud Computing Adoption in Nigeria: Testing an Enhanced Technology Acceptance Model

    ERIC Educational Resources Information Center

    Ishola, Bashiru Abayomi

    2017-01-01

    Cloud computing has recently emerged as a potential alternative to the traditional on-premise computing that businesses can leverage to achieve operational efficiencies. Consequently, technology managers are often tasked with the responsibilities to analyze the barriers and variables critical to organizational cloud adoption decisions. This…

  19. Review of Research on the Cognitive Effects of Computer-Assisted Learning.

    ERIC Educational Resources Information Center

    Mandinach, E.; And Others

    This review of the research on the cognitive effects of computer-assisted instruction begins with an overview of the ACCCEL (Assessing Cognitive Consequences of Computer Environments for Learning) research program at the University of California at Berkeley, which consists of several interrelated studies examining the acquisition of such higher…

  20. Learning Consequences of Mobile-Computing Technologies: Differential Impacts on Integrative Learning and Skill-Focused Learning

    ERIC Educational Resources Information Center

    Kumi, Richard; Reychav, Iris; Sabherwal, Rajiv

    2016-01-01

    Many educational institutions are integrating mobile-computing technologies (MCT) into the classroom to improve learning outcomes. There is also a growing interest in research to understand how MCT influence learning outcomes. The diversity of results in prior research indicates that computer-mediated learning has different effects on various…

  1. Cognitive Consequences of Participation in a "Fifth Dimension" After-School Computer Club.

    ERIC Educational Resources Information Center

    Mayer, Richard E.; Quilici, Jill; Moreno, Roxana; Duran, Richard; Woodbridge, Scott; Simon, Rebecca; Sanchez, David; Lavezzo, Amy

    1997-01-01

    Children who attended the Fifth Dimension after-school computer club at least 10 times during the 1994-95 school year performed better on word problem comprehension tests than did non-participating children. Results support the hypothesis that experience in using computer software in the Fifth Dimension club produces measurable, resilient, and…

  2. Think globally and solve locally: secondary memory-based network learning for automated multi-species function prediction

    PubMed Central

    2014-01-01

    Background Network-based learning algorithms for automated function prediction (AFP) are negatively affected by the limited coverage of experimental data and limited a priori known functional annotations. As a consequence, their application to model organisms is often restricted to well characterized biological processes and pathways, and their effectiveness with poorly annotated species is relatively limited. A possible solution to this problem might consist in the construction of big networks including multiple species, but this in turn poses challenging computational problems, due to the scalability limitations of existing algorithms and the main memory requirements induced by the construction of big networks. Distributed computation or the usage of big computers could in principle respond to these issues, but raises further algorithmic problems and requires resources not satisfiable with simple off-the-shelf computers. Results We propose a novel framework for scalable network-based learning of multi-species protein functions based on both a local implementation of existing algorithms and the adoption of innovative technologies: we solve the AFP problem "locally", by designing "vertex-centric" implementations of network-based algorithms, but we do not give up thinking "globally" by exploiting the overall topology of the network. This is made possible by the adoption of secondary memory-based technologies that allow the efficient use of the large memory available on disks, thus overcoming the main memory limitations of modern off-the-shelf computers. This approach has been applied to the analysis of a large multi-species network including more than 300 species of bacteria and to a network with more than 200,000 proteins belonging to 13 Eukaryotic species. To our knowledge this is the first work where secondary memory-based network analysis has been applied to multi-species function prediction using biological networks with hundreds of thousands of proteins. Conclusions The combination of these algorithmic and technological approaches makes feasible the analysis of large multi-species networks using ordinary computers with limited speed and primary memory, and, in perspective, could enable the analysis of huge networks (e.g. the whole proteomes available in SwissProt), using well-equipped stand-alone machines. PMID:24843788
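    As a hedged sketch of the "vertex-centric, secondary memory-based" idea (illustrative only; the paper's own algorithms and storage technology are not reproduced here), the adjacency lists can live in an on-disk key-value store while scores are updated one vertex at a time, so only the score vector needs to fit in primary memory:

        import shelve

        def propagate_scores(adj_db_path, scores, damping=0.85):
            """One vertex-centric pass over a disk-resident weighted network.

            adj_db_path : path to a shelve database mapping node id ->
                          list of (neighbour id, edge weight) pairs (assumed layout).
            scores      : dict of current node scores (e.g., functional labels).
            """
            new_scores = {}
            with shelve.open(adj_db_path, flag="r") as adj:   # adjacency stays on disk
                for node, neighbours in adj.items():          # process one vertex at a time
                    total_w = sum(w for _, w in neighbours) or 1.0
                    incoming = sum(scores.get(nb, 0.0) * w for nb, w in neighbours)
                    new_scores[node] = ((1 - damping) * scores.get(node, 0.0)
                                        + damping * incoming / total_w)
            return new_scores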

  3. CERN alerter—RSS based system for information broadcast to all CERN offices

    NASA Astrophysics Data System (ADS)

    Otto, R.

    2008-07-01

    Nearly every large organization uses a tool to broadcast messages and information across the internal campus (messages like alerts announcing interruption in services or just information about upcoming events). These tools typically allow administrators (operators) to send 'targeted' messages which are sent only to specific groups of users or computers, e.g. only those located in a specified building or connected to a particular computing service. CERN has a long history of such tools: CERN VMS's SPM "MESSAGE" command, Zephyr [2] and, most recently, the NICE Alerter based on the NNTP protocol. The NICE Alerter, used on all Windows-based computers, had to be phased out as a consequence of phasing out NNTP at CERN. The new solution to broadcast information messages on the CERN campus continues to provide the service based on cross-platform technologies, hence minimizing custom developments and relying on commercial software as much as possible. The new system, called CERN Alerter, is based on RSS (Really Simple Syndication) [9] for the transport protocol and uses Microsoft SharePoint as the backend for the database and posting interface. The Windows-based client relies on Internet Explorer 7.0 with custom code to trigger the window pop-ups and the notifications for new events. Linux and Mac OS X clients can also rely on any RSS reader to subscribe to targeted notifications. The paper covers the architecture and implementation aspects of the new system.
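    For illustration only (the feed URL and targeting parameter below are hypothetical, and this is not CERN's client code), a minimal RSS polling step of the kind such a client performs can be written in Python with only the standard library:

        import urllib.request
        import xml.etree.ElementTree as ET

        def fetch_new_items(feed_url, seen_guids):
            """Return (title, link) pairs for RSS items not seen before.

            A desktop client would pop up a notification for each new item and
            remember its GUID so it is not shown again.
            """
            with urllib.request.urlopen(feed_url) as resp:
                root = ET.fromstring(resp.read())
            new_items = []
            for item in root.iter("item"):                 # RSS 2.0 <item> elements
                guid = item.findtext("guid") or item.findtext("link")
                if guid and guid not in seen_guids:
                    seen_guids.add(guid)
                    new_items.append((item.findtext("title"), item.findtext("link")))
            return new_items

        # Hypothetical targeted feed:
        # alerts = fetch_new_items("https://alerter.example.org/rss?building=513", set())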

  4. Cloud Computing and Its Applications in GIS

    NASA Astrophysics Data System (ADS)

    Kang, Cao

    2011-12-01

    Cloud computing is a novel computing paradigm that offers highly scalable and highly available distributed computing services. The objectives of this research are to: 1. analyze and understand cloud computing and its potential for GIS; 2. discover the feasibilities of migrating truly spatial GIS algorithms to distributed computing infrastructures; 3. explore a solution to host and serve large volumes of raster GIS data efficiently and speedily. These objectives thus form the basis for three professional articles. The first article is entitled "Cloud Computing and Its Applications in GIS". This paper introduces the concept, structure, and features of cloud computing. Features of cloud computing such as scalability, parallelization, and high availability make it a very capable computing paradigm. Unlike High Performance Computing (HPC), cloud computing uses inexpensive commodity computers. The uniform administration systems in cloud computing make it easier to use than GRID computing. Potential advantages of cloud-based GIS systems such as lower barrier to entry are consequently presented. Three cloud-based GIS system architectures are proposed: public cloud- based GIS systems, private cloud-based GIS systems and hybrid cloud-based GIS systems. Public cloud-based GIS systems provide the lowest entry barriers for users among these three architectures, but their advantages are offset by data security and privacy related issues. Private cloud-based GIS systems provide the best data protection, though they have the highest entry barriers. Hybrid cloud-based GIS systems provide a compromise between these extremes. The second article is entitled "A cloud computing algorithm for the calculation of Euclidian distance for raster GIS". Euclidean distance is a truly spatial GIS algorithm. Classical algorithms such as the pushbroom and growth ring techniques require computational propagation through the entire raster image, which makes it incompatible with the distributed nature of cloud computing. This paper presents a parallel Euclidean distance algorithm that works seamlessly with the distributed nature of cloud computing infrastructures. The mechanism of this algorithm is to subdivide a raster image into sub-images and wrap them with a one pixel deep edge layer of individually computed distance information. Each sub-image is then processed by a separate node, after which the resulting sub-images are reassembled into the final output. It is shown that while any rectangular sub-image shape can be used, those approximating squares are computationally optimal. This study also serves as a demonstration of this subdivide and layer-wrap strategy, which would enable the migration of many truly spatial GIS algorithms to cloud computing infrastructures. However, this research also indicates that certain spatial GIS algorithms such as cost distance cannot be migrated by adopting this mechanism, which presents significant challenges for the development of cloud-based GIS systems. The third article is entitled "A Distributed Storage Schema for Cloud Computing based Raster GIS Systems". This paper proposes a NoSQL Database Management System (NDDBMS) based raster GIS data storage schema. NDDBMS has good scalability and is able to use distributed commodity computers, which make it superior to Relational Database Management Systems (RDBMS) in a cloud computing environment. 
In order to provide optimized data service performance, the proposed storage schema analyzes the nature of commonly used raster GIS data sets. It discriminates two categories of commonly used data sets, and then designs corresponding data storage models for both categories. As a result, the proposed storage schema is capable of hosting and serving enormous volumes of raster GIS data speedily and efficiently on cloud computing infrastructures. In addition, the scheme also takes advantage of the data compression characteristics of Quadtrees, thus promoting efficient data storage. Through this assessment of cloud computing technology, the exploration of the challenges and solutions to the migration of GIS algorithms to cloud computing infrastructures, and the examination of strategies for serving large amounts of GIS data in a cloud computing infrastructure, this dissertation lends support to the feasibility of building a cloud-based GIS system. However, there are still challenges that need to be addressed before a full-scale functional cloud-based GIS system can be successfully implemented. (Abstract shortened by UMI.)
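    The subdivide-and-layer-wrap mechanism described above can be illustrated with a short numpy sketch (an assumption-laden stand-in, not the dissertation's code; here the one-pixel wrap simply replicates neighbouring raster values, whereas the actual scheme wraps each tile with individually computed distance information before dispatching it to a worker node):

        import numpy as np

        def split_with_halo(raster, tile, halo=1):
            """Split `raster` into tile x tile blocks, each padded with a halo border."""
            h, w = raster.shape
            padded = np.pad(raster, halo, mode="edge")
            tiles = {}
            for i in range(0, h, tile):
                for j in range(0, w, tile):
                    # Block (i, j) plus a halo-pixel border from its neighbours.
                    tiles[(i, j)] = padded[i:i + tile + 2 * halo, j:j + tile + 2 * halo]
            return tiles                  # each block goes to a separate worker/node

        def reassemble(tiles, shape, halo=1):
            """Strip the halos and stitch the processed blocks back together."""
            out = np.empty(shape, dtype=float)
            for (i, j), block in tiles.items():
                core = block[halo:-halo, halo:-halo]
                out[i:i + core.shape[0], j:j + core.shape[1]] = core
            return out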

  5. A proposal for a computer-based framework of support for public health in the management of biological incidents: the Czech Republic experience.

    PubMed

    Bures, Vladimír; Otcenásková, Tereza; Cech, Pavel; Antos, Karel

    2012-11-01

    Biological incidents jeopardising public health require decision-making that consists of one dominant feature: complexity. Therefore, public health decision-makers necessitate appropriate support. Based on the analogy with business intelligence (BI) principles, the contextual analysis of the environment and available data resources, and conceptual modelling within systems and knowledge engineering, this paper proposes a general framework for computer-based decision support in the case of a biological incident. At the outset, the analysis of potential inputs to the framework is conducted and several resources such as demographic information, strategic documents, environmental characteristics, agent descriptors and surveillance systems are considered. Consequently, three prototypes were developed, tested and evaluated by a group of experts. Their selection was based on the overall framework scheme. Subsequently, an ontology prototype linked with an inference engine, multi-agent-based model focusing on the simulation of an environment, and expert-system prototypes were created. All prototypes proved to be utilisable support tools for decision-making in the field of public health. Nevertheless, the research revealed further issues and challenges that might be investigated by both public health focused researchers and practitioners.

  6. Work and Technology in Higher Education: The Social Construction of Academic Computing. Technology and Education Series.

    ERIC Educational Resources Information Center

    Shields, Mark A., Ed.

    This volume contributes to the understanding of higher education's catalytic role in shaping the microcomputer revolution. Academic computing is viewed here as a social and cultural phenomenon. An in-depth collection of mainly ethnographic studies of the academic computing revolution--its consequences, meanings, and significance--is presented. The…

  7. Reflections from a Computer Simulations Program on Cell Division in Selected Kenyan Secondary Schools

    ERIC Educational Resources Information Center

    Ndirangu, Mwangi; Kiboss, Joel K.; Wekesa, Eric W.

    2005-01-01

    The application of computer technology in education is a relatively new approach that is trying to justify inclusion in the Kenyan school curriculum. Being abstract, with a dynamic nature that does not manifest itself visibly, the process of cell division has posed difficulties for teachers. Consequently, a computer simulation program, using…

  8. Pre-Service Teachers' Uses of and Barriers from Adopting Computer-Assisted Language Learning (CALL) Programs

    ERIC Educational Resources Information Center

    Samani, Ebrahim; Baki, Roselan; Razali, Abu Bakar

    2014-01-01

    Success in implementation of computer-assisted language learning (CALL) programs depends on the teachers' understanding of the roles of CALL programs in education. Consequently, it is also important to understand the barriers teachers face in the use of computer-assisted language learning (CALL) programs. The current study was conducted on 14…

  9. Factors Affecting Career Choice: Comparison between Students from Computer and Other Disciplines

    ERIC Educational Resources Information Center

    Alexander, P. M.; Holmner, M.; Lotriet, H. H.; Matthee, M. C.; Pieterse, H. V.; Naidoo, S.; Twinomurinzi, H.; Jordaan, D.

    2011-01-01

    The number of student enrolments in computer-related courses remains a serious concern worldwide with far reaching consequences. This paper reports on an extensive survey about career choice and associated motivational factors amongst new students, only some of whom intend to major in computer-related courses, at two South African universities.…

  10. First Order Fire Effects Model: FOFEM 4.0, user's guide

    Treesearch

    Elizabeth D. Reinhardt; Robert E. Keane; James K. Brown

    1997-01-01

    A First Order Fire Effects Model (FOFEM) was developed to predict the direct consequences of prescribed fire and wildfire. FOFEM computes duff and woody fuel consumption, smoke production, and fire-caused tree mortality for most forest and rangeland types in the United States. The model is available as a computer program for PC or Data General computer.

  11. Group-based variant calling leveraging next-generation supercomputing for large-scale whole-genome sequencing studies.

    PubMed

    Standish, Kristopher A; Carland, Tristan M; Lockwood, Glenn K; Pfeiffer, Wayne; Tatineni, Mahidhar; Huang, C Chris; Lamberth, Sarah; Cherkas, Yauheniya; Brodmerkel, Carrie; Jaeger, Ed; Smith, Lance; Rajagopal, Gunaretnam; Curran, Mark E; Schork, Nicholas J

    2015-09-22

    Next-generation sequencing (NGS) technologies have become much more efficient, allowing whole human genomes to be sequenced faster and cheaper than ever before. However, processing the raw sequence reads associated with NGS technologies requires care and sophistication in order to draw compelling inferences about phenotypic consequences of variation in human genomes. It has been shown that different approaches to variant calling from NGS data can lead to different conclusions. Ensuring appropriate accuracy and quality in variant calling can come at a computational cost. We describe our experience implementing and evaluating a group-based approach to calling variants on large numbers of whole human genomes. We explore the influence of many factors that may impact the accuracy and efficiency of group-based variant calling, including group size, the biogeographical backgrounds of the individuals who have been sequenced, and the computing environment used. We make efficient use of the Gordon supercomputer cluster at the San Diego Supercomputer Center by incorporating job-packing and parallelization considerations into our workflow while calling variants on 437 whole human genomes generated as part of a large association study. We ultimately find that our workflow resulted in high-quality variant calls in a computationally efficient manner. We argue that studies like ours should motivate further investigations combining hardware-oriented advances in computing systems with algorithmic developments to tackle emerging 'big data' problems in biomedical research brought on by the expansion of NGS technologies.

  12. A data management system to enable urgent natural disaster computing

    NASA Astrophysics Data System (ADS)

    Leong, Siew Hoon; Kranzlmüller, Dieter; Frank, Anton

    2014-05-01

    Civil protection, in particular natural disaster management, is very important to most nations and civilians in the world. When disasters like flash floods, earthquakes and tsunamis are expected or have taken place, it is of utmost importance to make timely decisions for managing the affected areas and reduce casualties. Computer simulations can generate information and provide predictions to facilitate this decision making process. Getting the data to the required resources is a critical requirement to enable the timely computation of the predictions. An urgent data management system to support natural disaster computing is thus necessary to effectively carry out data activities within a stipulated deadline. Since the trigger of a natural disaster is usually unpredictable, it is not always possible to prepare required resources well in advance. As such, an urgent data management system for natural disaster computing has to be able to work with any type of resources. Additional requirements include the need to manage deadlines and huge volumes of data, fault tolerance, reliability, flexibility to changes, ease of usage, etc. The proposed data management platform includes a service manager to provide a uniform and extensible interface for the supported data protocols, a configuration manager to check and retrieve configurations of available resources, a scheduler manager to ensure that the deadlines can be met, a fault tolerance manager to increase the reliability of the platform and a data manager to initiate and perform the data activities. These managers will enable the selection of the most appropriate resource, transfer protocol, etc. such that the hard deadline of an urgent computation can be met for a particular urgent activity, e.g. data staging or computation. We associate two types of deadlines [2] with an urgent computing system. Soft-hard deadline: missing a soft-hard deadline will render the computation less useful, resulting in a cost that can have severe consequences. Hard deadline: missing a hard deadline renders the computation useless and results in full catastrophic consequences. A prototype of this system has a REST-based service manager. The REST-based implementation provides a uniform interface that is easy to use. New and upcoming file transfer protocols can easily be extended and accessed via the service manager. The service manager interacts with the other four managers to coordinate the data activities so that the fundamental natural disaster urgent computing requirement, i.e. deadline, can be fulfilled in a reliable manner. A data activity can include data staging, data archiving and data storing. Reliability is ensured by the choice of a network of managers organisation model [1], the configuration manager and the fault tolerance manager. With this proposed design, an easy to use, resource-independent data management system that can support and fulfill the computation of a natural disaster prediction within stipulated deadlines can thus be realised. References [1] H. G. Hegering, S. Abeck, and B. Neumair, Integrated management of networked systems - concepts, architectures, and their operational application, Morgan Kaufmann Publishers, 340 Pine Street, Sixth Floor, San Francisco, CA 94104-3205, USA, 1999. [2] H. Kopetz, Real-time systems design principles for distributed embedded applications, second edition, Springer, LLC, 233 Spring Street, New York, NY 10013, USA, 2011. [3] S. H. Leong, A. Frank, and D. 
Kranzlmüller, Leveraging e-infrastructures for urgent computing, Procedia Computer Science 18 (2013), no. 0, 2177 - 2186, 2013 International Conference on Computational Science. [4] N. Trebon, Enabling urgent computing within the existing distributed computing infrastructure, Ph.D. thesis, University of Chicago, August 2011, http://people.cs.uchicago.edu/~ntrebon/docs/dissertation.pdf.
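    A hedged Python sketch of the scheduler manager's deadline check is shown below (the option fields, the cost model and the fallback behaviour are illustrative assumptions, not the platform's actual interface): an option is feasible only if its estimated completion time meets the deadline, and a hard deadline that cannot be met aborts the activity.

        import time

        def choose_resource(options, data_bytes, deadline, hard=True):
            """Pick the transfer option most likely to finish before `deadline`.

            options : list of dicts such as {"name": ..., "bandwidth_Bps": ..., "setup_s": ...}
            deadline: absolute POSIX time by which the data activity must finish.
            """
            now = time.time()
            feasible = []
            for opt in options:
                eta = now + opt["setup_s"] + data_bytes / opt["bandwidth_Bps"]
                if eta <= deadline:
                    feasible.append((eta, opt))
            if not feasible:
                if hard:
                    # Hard deadline: a late result is useless, so abort the activity.
                    raise TimeoutError("hard deadline cannot be met; abort the activity")
                # Soft-hard deadline: a late result is degraded but still has some value.
                return min(options,
                           key=lambda o: o["setup_s"] + data_bytes / o["bandwidth_Bps"])
            # Earliest estimated completion wins; ties could weigh reliability or cost.
            return min(feasible, key=lambda t: t[0])[1]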

  13. Computer-Based Training in Math and Working Memory Improves Cognitive Skills and Academic Achievement in Primary School Children: Behavioral Results

    PubMed Central

    Sánchez-Pérez, Noelia; Castillo, Alejandro; López-López, José A.; Pina, Violeta; Puga, Jorge L.; Campoy, Guillermo; González-Salinas, Carmen; Fuentes, Luis J.

    2018-01-01

    Student academic achievement has been positively related to further development outcomes, such as the attainment of higher educational, employment, and socioeconomic aspirations. Among all the academic competences, mathematics has been identified as an essential skill in the field of international leadership as well as for those seeking positions in disciplines related to science, technology, and engineering. Given its positive consequences, studies have designed trainings to enhance children's mathematical skills. Additionally, the ability to regulate and control actions and cognitions, i.e., executive functions (EF), has been associated with school success, which has resulted in a strong effort to develop EF training programs to improve students' EF and academic achievement. The present study examined the efficacy of a school computer-based training composed of two components, namely, working memory and mathematics tasks. Among the advantages of using a computer-based training program is the ease with which it can be implemented in school settings and the ease by which the difficulty of the tasks can be adapted to fit the child's ability level. To test the effects of the training, children's cognitive skills (EF and IQ) and their school achievement (math and language grades and abilities) were evaluated. The results revealed a significant improvement in cognitive skills, such as non-verbal IQ and inhibition, and better school performance in math and reading among the children who participated in the training compared to those children who did not. Most of the improvements were related to training on WM tasks. These findings confirmed the efficacy of a computer-based training that combined WM and mathematics activities as part of the school routines based on the training's impact on children's academic competences and cognitive skills. PMID:29375442

  14. Computer-Based Training in Math and Working Memory Improves Cognitive Skills and Academic Achievement in Primary School Children: Behavioral Results.

    PubMed

    Sánchez-Pérez, Noelia; Castillo, Alejandro; López-López, José A; Pina, Violeta; Puga, Jorge L; Campoy, Guillermo; González-Salinas, Carmen; Fuentes, Luis J

    2017-01-01

    Student academic achievement has been positively related to further development outcomes, such as the attainment of higher educational, employment, and socioeconomic aspirations. Among all the academic competences, mathematics has been identified as an essential skill in the field of international leadership as well as for those seeking positions in disciplines related to science, technology, and engineering. Given its positive consequences, studies have designed trainings to enhance children's mathematical skills. Additionally, the ability to regulate and control actions and cognitions, i.e., executive functions (EF), has been associated with school success, which has resulted in a strong effort to develop EF training programs to improve students' EF and academic achievement. The present study examined the efficacy of a school computer-based training composed of two components, namely, working memory and mathematics tasks. Among the advantages of using a computer-based training program is the ease with which it can be implemented in school settings and the ease by which the difficulty of the tasks can be adapted to fit the child's ability level. To test the effects of the training, children's cognitive skills (EF and IQ) and their school achievement (math and language grades and abilities) were evaluated. The results revealed a significant improvement in cognitive skills, such as non-verbal IQ and inhibition, and better school performance in math and reading among the children who participated in the training compared to those children who did not. Most of the improvements were related to training on WM tasks. These findings confirmed the efficacy of a computer-based training that combined WM and mathematics activities as part of the school routines based on the training's impact on children's academic competences and cognitive skills.

  15. Information Geometry for Landmark Shape Analysis: Unifying Shape Representation and Deformation

    PubMed Central

    Peter, Adrian M.; Rangarajan, Anand

    2010-01-01

    Shape matching plays a prominent role in the comparison of similar structures. We present a unifying framework for shape matching that uses mixture models to couple both the shape representation and deformation. The theoretical foundation is drawn from information geometry wherein information matrices are used to establish intrinsic distances between parametric densities. When a parameterized probability density function is used to represent a landmark-based shape, the modes of deformation are automatically established through the information matrix of the density. We first show that given two shapes parameterized by Gaussian mixture models (GMMs), the well-known Fisher information matrix of the mixture model is also a Riemannian metric (actually, the Fisher-Rao Riemannian metric) and can therefore be used for computing shape geodesics. The Fisher-Rao metric has the advantage of being an intrinsic metric and invariant to reparameterization. The geodesic—computed using this metric—establishes an intrinsic deformation between the shapes, thus unifying both shape representation and deformation. A fundamental drawback of the Fisher-Rao metric is that it is not available in closed form for the GMM. Consequently, shape comparisons are computationally very expensive. To address this, we develop a new Riemannian metric based on generalized ϕ-entropy measures. In sharp contrast to the Fisher-Rao metric, the new metric is available in closed form. Geodesic computations using the new metric are considerably more efficient. We validate the performance and discriminative capabilities of these new information geometry-based metrics by pairwise matching of corpus callosum shapes. We also study the deformations of fish shapes that have various topological properties. A comprehensive comparative analysis is also provided using other landmark-based distances, including the Hausdorff distance, the Procrustes metric, landmark-based diffeomorphisms, and the bending energies of the thin-plate (TPS) and Wendland splines. PMID:19110497
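    For reference (standard definitions, with notation chosen here rather than taken from the paper), the Fisher information matrix of a parameterized density p(x|θ) and the induced Fisher-Rao shape distance used above are:

        \[
          g_{ij}(\theta) \;=\; \int p(x\,|\,\theta)\,
            \frac{\partial \log p(x\,|\,\theta)}{\partial \theta_i}\,
            \frac{\partial \log p(x\,|\,\theta)}{\partial \theta_j}\, dx ,
        \]
        \[
          d(\theta_1,\theta_2) \;=\; \min_{\theta(t)} \int_0^1
            \sqrt{\dot{\theta}(t)^{\mathsf T}\, g\big(\theta(t)\big)\, \dot{\theta}(t)}\; dt ,
          \qquad \theta(0)=\theta_1,\; \theta(1)=\theta_2 ,
        \]
        % i.e., the geodesic length under the metric g; the closed-form metric based on
        % generalized phi-entropies mentioned above replaces g to avoid its numerical cost.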

  16. Denver RTD's computer aided dispatch/automatic vehicle location system : the human factors consequences

    DOT National Transportation Integrated Search

    1999-09-01

    This report documents what happened to employees' work procedures when their employer installed Computer Aided Dispatch/Automatic Vehicle Locator (CAD/AVL) technology to provide real-time surveillance of vehicles and to upgrade ra...

  17. Perceptually stable regions for arbitrary polygons.

    PubMed

    Rocha, J

    2003-01-01

    Zou and Yan have recently developed a skeletonization algorithm of digital shapes based on a regularity/singularity analysis; they use the polygon whose vertices are the boundary pixels of the image to compute a constrained Delaunay triangulation (CDT) in order to find local symmetries and stable regions. Their method has produced good results but it is slow since its complexity depends on the number of contour pixels. This paper presents an extension of their technique to handle arbitrary polygons, not only polygons of short edges. Consequently, not only can we achieve results as good as theirs for digital images, but we can also compute skeletons of polygons of any number of edges. Since we can handle polygonal approximations of figures, the skeletons are more resilient to noise and faster to process.

  18. Directions in parallel programming: HPF, shared virtual memory and object parallelism in pC++

    NASA Technical Reports Server (NTRS)

    Bodin, Francois; Priol, Thierry; Mehrotra, Piyush; Gannon, Dennis

    1994-01-01

    Fortran and C++ are the dominant programming languages used in scientific computation. Consequently, extensions to these languages are the most popular for programming massively parallel computers. We discuss two such approaches to parallel Fortran and one approach to C++. The High Performance Fortran Forum has designed HPF with the intent of supporting data parallelism on Fortran 90 applications. HPF works by asking the user to help the compiler distribute and align the data structures with the distributed memory modules in the system. Fortran-S takes a different approach in which the data distribution is managed by the operating system and the user provides annotations to indicate parallel control regions. In the case of C++, we look at pC++ which is based on a concurrent aggregate parallel model.
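    To make the data-distribution idea concrete (a hedged illustration written in Python rather than HPF itself), the effect of a BLOCK distribution directive is essentially to assign each processor a contiguous index range, such as the one computed below:

        def block_bounds(n, nprocs, rank):
            """[lo, hi) index range of a length-n array owned by `rank` under a
            block-wise distribution over `nprocs` processors, mirroring what an
            HPF-style DISTRIBUTE (BLOCK) directive asks the compiler to arrange."""
            base, extra = divmod(n, nprocs)
            lo = rank * base + min(rank, extra)
            hi = lo + base + (1 if rank < extra else 0)
            return lo, hi

        # Example: 10 elements over 4 processors -> (0, 3), (3, 6), (6, 8), (8, 10)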

  19. Three-Dimensional Mechanical Model of the Human Spine and the Versatility of its Use

    NASA Astrophysics Data System (ADS)

    Sokol, Milan; Velísková, Petra; Rehák, Ľuboš; Žabka, Martin

    2014-03-01

    The aim of this work is the simulation and modeling of the lumbar and thoracic human spine as a load-bearing 3D system in a computer program (ANSYS). The human spine model includes a determination of the geometry based on X-ray pictures of frontal and lateral projections. For this reason, another computer code, BMPCOORDINATES, was developed as an aid to obtain the most precise and realistic model of the spine. Various positions, deformations, scoliosis, rotation and torsion can be modelled. Once the geometry is done, external loading on different spinal segments is entered; consequently, the response can be analysed. This can contribute greatly to medical practice as a tool for diagnosis and for developing implants or other artificial instruments for fixing the spine.

  20. Frequency-selective near-field radiative heat transfer between photonic crystal slabs: a computational approach for arbitrary geometries and materials.

    PubMed

    Rodriguez, Alejandro W; Ilic, Ognjen; Bermel, Peter; Celanovic, Ivan; Joannopoulos, John D; Soljačić, Marin; Johnson, Steven G

    2011-09-09

    We demonstrate the possibility of achieving enhanced frequency-selective near-field radiative heat transfer between patterned (photonic-crystal) slabs at designable frequencies and separations, exploiting a general numerical approach for computing heat transfer in arbitrary geometries and materials based on the finite-difference time-domain method. Our simulations reveal a tradeoff between selectivity and near-field enhancement as the slab-slab separation decreases, with the patterned heat transfer eventually reducing to the unpatterned result multiplied by a fill factor (described by a standard proximity approximation). We also find that heat transfer can be further enhanced at selective frequencies when the slabs are brought into a glide-symmetric configuration, a consequence of the degeneracies associated with the nonsymmorphic symmetry group.

  1. Towards computational materials design from first principles using alchemical changes and derivatives.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    von Lilienfeld-Toal, Otto Anatole

    2010-11-01

    The design of new materials with specific physical, chemical, or biological properties is a central goal of much research in materials and medicinal sciences. Except for the simplest and most restricted cases, brute-force computational screening of all possible compounds for interesting properties is beyond any current capacity due to the combinatorial nature of chemical compound space (the set of stoichiometries and configurations). Consequently, when it comes to computationally optimizing more complex systems, reliable optimization algorithms must not only trade off sufficient accuracy and computational speed of the models involved, they must also aim for rapid convergence in terms of the number of compounds 'visited'. I will give an overview of recent progress on alchemical first-principles paths and gradients in compound space that appear to be promising ingredients for more efficient property optimizations. Specifically, based on molecular grand canonical density functional theory, an approach will be presented for the construction of high-dimensional yet analytical property gradients in chemical compound space. Thereafter, applications to molecular HOMO eigenvalues, catalyst design, and other problems and systems shall be discussed.

  2. Faculty Technology Adoption and Integration: Motivations and Consequences

    ERIC Educational Resources Information Center

    Mrabet, Khalid

    2009-01-01

    In recent years, technology integration has become one of the top priorities at higher education institutions. Consequently, faculty members found themselves compelled to integrate computers and other technology into their teaching, research, and public service. The purpose of this qualitative study was to gain an understanding of some of the…

  3. Inhalation toxicity of indoor air pollutants in Drosophila melanogaster using integrated transcriptomics and computational behavior analyses

    NASA Astrophysics Data System (ADS)

    Eom, Hyun-Jeong; Liu, Yuedan; Kwak, Gyu-Suk; Heo, Muyoung; Song, Kyung Seuk; Chung, Yun Doo; Chon, Tae-Soo; Choi, Jinhee

    2017-06-01

    We conducted an inhalation toxicity test on the alternative animal model, Drosophila melanogaster, to investigate potential hazards of indoor air pollution. The inhalation toxicity of toluene and formaldehyde was investigated using comprehensive transcriptomics and computational behavior analyses. The ingenuity pathway analysis (IPA) based on microarray data suggests the involvement of pathways related to immune response, stress response, and metabolism in formaldehyde and toluene exposure based on hub molecules. We conducted a toxicity test using mutants of the representative genes in these pathways to explore the toxicological consequences of alterations of these pathways. Furthermore, extensive computational behavior analysis showed that exposure to either toluene or formaldehyde reduced most of the behavioral parameters of both wild-type and mutants. Interestingly, behavioral alteration caused by toluene or formaldehyde exposure was most severe in the p38b mutant, suggesting that the defects in the p38 pathway underlie behavioral alteration. Overall, the results indicate that exposure to toluene and formaldehyde via inhalation causes severe toxicity in Drosophila, by inducing significant alterations in gene expression and behavior, suggesting that Drosophila can be used as a potential alternative model in inhalation toxicity screening.

  4. Regression-based adaptive sparse polynomial dimensional decomposition for sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro; Abgrall, Remi

    2014-11-01

    Polynomial dimensional decomposition (PDD) is employed in this work for global sensitivity analysis and uncertainty quantification of stochastic systems subject to a large number of random input variables. Due to the intimate connection between PDD and analysis of variance, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices than polynomial chaos (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of the standard method unaffordable for real engineering applications. In order to address this curse-of-dimensionality problem, this work proposes a variance-based adaptive strategy aiming to build a cheap meta-model by sparse PDD with the PDD coefficients computed by regression. During this adaptive procedure, the PDD representation of the model contains only a few terms, so the cost of repeatedly solving the linear system of the least-squares regression problem is negligible. The size of the final sparse-PDD representation is much smaller than the full PDD, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.
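    The regression-plus-truncation step lends itself to a short numpy sketch (illustrative assumptions throughout: an orthonormal candidate basis already evaluated in `Phi`, and a variance-fraction rule for deciding which terms are "significant"):

        import numpy as np

        def fit_sparse_expansion(Phi, y, keep_fraction=0.95):
            """Least-squares expansion coefficients, then truncation to dominant terms.

            Phi : (n_samples, n_terms) candidate basis terms evaluated at the samples
                  (for PDD, products of orthonormal univariate polynomials).
            y   : (n_samples,) model outputs.
            """
            coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

            # For an orthonormal basis, each squared coefficient (constant term aside)
            # approximates that term's variance contribution; keep the dominant ones.
            contrib = coef[1:] ** 2
            order = np.argsort(contrib)[::-1]
            cum = np.cumsum(contrib[order]) / (contrib.sum() + 1e-30)
            retained = order[: int(np.searchsorted(cum, keep_fraction)) + 1] + 1

            sparse = np.zeros_like(coef)
            sparse[0] = coef[0]
            sparse[retained] = coef[retained]
            return sparse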

  5. Inhalation toxicity of indoor air pollutants in Drosophila melanogaster using integrated transcriptomics and computational behavior analyses

    PubMed Central

    Eom, Hyun-Jeong; Liu, Yuedan; Kwak, Gyu-Suk; Heo, Muyoung; Song, Kyung Seuk; Chung, Yun Doo; Chon, Tae-Soo; Choi, Jinhee

    2017-01-01

    We conducted an inhalation toxicity test on the alternative animal model, Drosophila melanogaster, to investigate potential hazards of indoor air pollution. The inhalation toxicity of toluene and formaldehyde was investigated using comprehensive transcriptomics and computational behavior analyses. The ingenuity pathway analysis (IPA) based on microarray data suggests the involvement of pathways related to immune response, stress response, and metabolism in formaldehyde and toluene exposure based on hub molecules. We conducted a toxicity test using mutants of the representative genes in these pathways to explore the toxicological consequences of alterations of these pathways. Furthermore, extensive computational behavior analysis showed that exposure to either toluene or formaldehyde reduced most of the behavioral parameters of both wild-type and mutants. Interestingly, behavioral alteration caused by toluene or formaldehyde exposure was most severe in the p38b mutant, suggesting that the defects in the p38 pathway underlie behavioral alteration. Overall, the results indicate that exposure to toluene and formaldehyde via inhalation causes severe toxicity in Drosophila, by inducing significant alterations in gene expression and behavior, suggesting that Drosophila can be used as a potential alternative model in inhalation toxicity screening. PMID:28621308

  6. Case Study: Organotypic human in vitro models of embryonic ...

    EPA Pesticide Factsheets

    Morphogenetic fusion of tissues is a common event in embryonic development and disruption of fusion is associated with birth defects of the eye, heart, neural tube, phallus, palate, and other organ systems. Embryonic tissue fusion requires precise regulation of cell-cell and cell-matrix interactions that drive proliferation, differentiation, and morphogenesis. Chemical low-dose exposures can disrupt morphogenesis across space and time by interfering with key embryonic fusion events. The Morphogenetic Fusion Task uses computer and in vitro models to elucidate consequences of developmental exposures. The Morphogenetic Fusion Task integrates multiple approaches to model responses to chemicals that lead to birth defects, including integrative mining on ToxCast DB, ToxRefDB, and chemical structures, advanced computer agent-based models, and human cell-based cultures that model disruption of cellular and molecular behaviors including mechanisms predicted from integrative data mining and agent-based models. The purpose of the poster is to indicate progress on the CSS 17.02 Virtual Tissue Models Morphogenesis Task 1 products for the Board of Scientific Counselors meeting on Nov 16-17.

  7. Experimental Evaluation of Suitability of Selected Multi-Criteria Decision-Making Methods for Large-Scale Agent-Based Simulations.

    PubMed

    Tučník, Petr; Bureš, Vladimír

    2016-01-01

    Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10,000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting and separate testing of all configurations with the server parameter de/activated, altogether 12,800 data points were collected and consequently analyzed. An illustrative decision-making scenario was used which allows the mutual comparison of all of the selected decision-making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method accomplished the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models.
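    As one concrete example of the methods compared, a compact numpy sketch of TOPSIS ranking is shown below (a generic textbook formulation under standard assumptions, not the study's benchmark code):

        import numpy as np

        def topsis(scores, weights, benefit):
            """Closeness coefficients for TOPSIS; higher means closer to the ideal.

            scores  : (n_alternatives, n_criteria) decision matrix.
            weights : (n_criteria,) criterion weights summing to 1.
            benefit : (n_criteria,) booleans, True where larger values are better.
            """
            # Vector-normalise each criterion, then apply the weights.
            v = scores / np.linalg.norm(scores, axis=0) * weights

            # Ideal and anti-ideal alternatives per criterion.
            ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
            worst = np.where(benefit, v.min(axis=0), v.max(axis=0))

            d_plus = np.linalg.norm(v - ideal, axis=1)
            d_minus = np.linalg.norm(v - worst, axis=1)
            return d_minus / (d_plus + d_minus)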

  8. Power System Information Delivering System Based on Distributed Object

    NASA Astrophysics Data System (ADS)

    Tanaka, Tatsuji; Tsuchiya, Takehiko; Tamura, Setsuo; Seki, Tomomichi; Kubota, Kenji

    In recent years, there has been remarkable improvement in computer performance and in the development of computer network and distributed information processing technologies. Moreover, deregulation is starting and will spread in the electric power industry in Japan. Consequently, power suppliers are required to supply low-cost power with high-quality services to customers. Corresponding to these movements, the authors have proposed the SCOPE (System Configuration Of PowEr control system) architecture for distributed EMS/SCADA (Energy Management Systems / Supervisory Control and Data Acquisition) systems based on distributed object technology, which offers the flexibility and expandability to adapt to those movements. In this paper, the authors introduce a prototype of the power system information delivering system, which was developed based on the SCOPE architecture. This paper describes the architecture and the evaluation results of this prototype system. The power system information delivering system supplies useful power system information, such as electric power failures, to customers using Internet and distributed object technology. This system is a new type of SCADA system which monitors failures of the power transmission and distribution systems in a way that integrates geographic information.

  9. Alignment-independent comparison of binding sites based on DrugScore potential fields encoded by 3D Zernike descriptors.

    PubMed

    Nisius, Britta; Gohlke, Holger

    2012-09-24

    Analyzing protein binding sites provides detailed insights into the biological processes proteins are involved in, e.g., into drug-target interactions, and so is of crucial importance in drug discovery. Herein, we present novel alignment-independent binding site descriptors based on DrugScore potential fields. The potential fields are transformed to a set of information-rich descriptors using a series expansion in 3D Zernike polynomials. The resulting Zernike descriptors show a promising performance in detecting similarities among proteins with low pairwise sequence identities that bind identical ligands, as well as within subfamilies of one target class. Furthermore, the Zernike descriptors are robust against structural variations among protein binding sites. Finally, the Zernike descriptors show a high data compression power, and computing similarities between binding sites based on these descriptors is highly efficient. Consequently, the Zernike descriptors are a useful tool for computational binding site analysis, e.g., to predict the function of novel proteins, off-targets for drug candidates, or novel targets for known drugs.
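
    The paper's efficiency claim rests on comparing binding sites through fixed-length descriptor vectors rather than structural alignments. The sketch below assumes the DrugScore fields have already been encoded into (hypothetical) descriptor vectors and simply ranks a library of sites by Euclidean distance to a query; the field computation and 3D Zernike expansion themselves are not shown.

```python
import numpy as np

def descriptor_distance(d1, d2):
    """Euclidean distance between two precomputed binding-site descriptor
    vectors; smaller means more similar pockets."""
    return np.linalg.norm(np.asarray(d1, float) - np.asarray(d2, float))

def rank_similar_sites(query, library):
    """Rank a library {name: descriptor} by similarity to a query descriptor."""
    dists = {name: descriptor_distance(query, desc) for name, desc in library.items()}
    return sorted(dists.items(), key=lambda kv: kv[1])

# Toy 128-dimensional descriptors standing in for the Zernike-encoded fields.
rng = np.random.default_rng(0)
library = {f"site_{i}": rng.normal(size=128) for i in range(5)}
query = library["site_2"] + 0.05 * rng.normal(size=128)   # a slightly perturbed copy
print(rank_similar_sites(query, library)[:3])             # site_2 should rank first
```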

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brickell, E.F.; Simmons, G.J.

    In the period since 1976, when Diffie and Hellman published the first discussion of two-key cryptography to appear in the open literature, only a handful of two-key cryptoalgorithms have been proposed - two of which are based on the knapsack problem. Consequently there was enormous interest when Shamir announced in early 1982 a cryptanalytic technique that could break many Merkle-Hellman knapsacks. In a rapid sequence of developments, Simmons and Brickell, Adleman, and Lagarias all announced other attacks on knapsack-based cryptosystems that were either computationally much more efficient or else directed at other knapsack schemes such as the Graham-Shamir or iterated systems. This paper analyzes the common features of knapsack-based cryptosystems and presents all of the cryptanalytic attacks made in 1982 from a unified viewpoint.

  11. Numerical Analysis of Base Flowfield for a Four-Engine Clustered Nozzle Configuration

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See

    1995-01-01

    Excessive base heating has been a problem for many launch vehicles. For certain designs such as the direct dump of turbine exhaust inside and at the lip of the nozzle, the potential burning of the turbine exhaust in the base region can be of great concern. Accurate prediction of the base environment at altitudes is therefore very important during the vehicle design phase. Otherwise, undesirable consequences may occur. In this study, the turbulent base flowfield of a cold flow experimental investigation for a four-engine clustered nozzle was numerically benchmarked using a pressure-based computational fluid dynamics (CFD) method. This is a necessary step before the benchmarking of hot flow and combustion flow tests can be considered. Since the medium was unheated air, reasonable prediction of the base pressure distribution at high altitude was the main goal. Several physical phenomena pertaining to the multiengine clustered nozzle base flow physics were deduced from the analysis.

  12. Analysis of radiation safety for Small Modular Reactor (SMR) on PWR-100 MWe type

    NASA Astrophysics Data System (ADS)

    Udiyani, P. M.; Husnayani, I.; Deswandri; Sunaryo, G. R.

    2018-02-01

    Indonesia, an archipelago of large, medium, and small islands, is well suited to the construction of Small Modular Reactors (SMRs). Preliminary technology assessment of various SMRs has been started; SMRs are grouped by technology into Light Water Reactors, Gas Cooled Reactors, and Solid Cooled Reactors, and by site into Land Based and Water Based reactors. The Fukushima accident made people doubt the safety of Nuclear Power Plants (NPPs), which affected public perception of nuclear power plant safety. This paper describes the assessment of safety and on-site radiation consequences for normal operation and for a Design Basis Accident postulation of an SMR based on a PWR-100 MWe on Bangka Island. Radiation consequences of normal operation were simulated for 3 SMR units. The source term was generated from an inventory using the ORIGEN-2 software; the consequences of routine operation were calculated with PC-CREAM and those of the accident with PC Cosyma. The adopted methodology was based on site-specific meteorological and spatial data. According to the calculation with the PC-CREAM 08 computer code, the highest individual dose in the site area for adults is 5.34E-02 mSv/y in the ESE direction within 1 km distance from the stack. The calculated public doses for normal operation are therefore below 1 mSv/y. According to the PC Cosyma calculation, the highest individual dose is 1.92E+00 mSv in the ESE direction within 1 km distance from the stack. The total collective dose (all pathways) is 3.39E-01 manSv, with the dominant contribution from the cloud pathway. The results show that no evacuation countermeasures need to be taken based on the emergency regulations.

  13. Cyber Contingency Analysis version 1.x

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A contingency-analysis-based approach for quantifying and examining the resiliency of a cyber system with respect to confidentiality, integrity, and availability. A graph representing an organization's cyber system and related resources is used for the availability contingency analysis. The mission-critical paths associated with an organization are used to determine the consequences of a potential contingency. A node (or combination of nodes) is removed from the graph to analyze a particular contingency. The value of all mission-critical paths that are disrupted by that contingency is used to quantify its severity. A total severity score can be calculated based on the complete list of all these contingencies. A simple n-1 analysis can be done in which only one node is removed at a time. We can also compute an n-k analysis, where k is the number of nodes to simultaneously remove for analysis. A contingency risk score can also be computed, which takes the probability of the contingencies into account. In addition to availability, we can also quantify confidentiality and integrity scores for the system. These treat user accounts as potential contingencies. The amount (and type) of files that an account can read is used to compute the confidentiality score. The amount (and type) of files that an account can write is used to compute the integrity score. As with the availability analysis, we can use this information to compute total severity scores with regard to confidentiality and integrity. We can also take probability into account to compute the associated risk scores.
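
    A minimal sketch of the availability part of such an analysis is given below: mission-critical paths are listed with values, every k-node contingency is enumerated, and the severity of each contingency is the summed value of the paths it disrupts. The node names, path values, and the simple sum-based scoring are hypothetical illustrations, not the tool's actual data model.

```python
import itertools

# Hypothetical cyber-system: each mission-critical path is a tuple of node
# names with an associated mission value.
critical_paths = {
    ("fw", "web", "db"): 10.0,           # customer portal
    ("fw", "vpn", "fileserver"): 6.0,    # remote file access
    ("fw", "web", "auth", "db"): 8.0,    # authenticated services
}

def contingency_severity(removed, paths):
    """Value of all mission-critical paths disrupted when `removed` nodes fail."""
    removed = set(removed)
    return sum(value for path, value in paths.items() if removed & set(path))

def n_k_analysis(paths, k=1):
    """Severity of every k-node contingency (n-1 when k=1, n-2 when k=2, ...)."""
    nodes = {n for path in paths for n in path}
    return {combo: contingency_severity(combo, paths)
            for combo in itertools.combinations(sorted(nodes), k)}

severities = n_k_analysis(critical_paths, k=1)
total_severity = sum(severities.values())
print(sorted(severities.items(), key=lambda kv: -kv[1])[:3])  # worst single-node failures
print(total_severity)                                         # aggregate availability score
```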

  14. An interactive data management and analysis system for clinical investigators.

    PubMed

    Groner, G F; Hopwood, M D; Palley, N A; Sibley, W L; Baker, W R; Christopher, T G; Thompson, H K

    1978-09-01

    An interactive minicomputer-based system has been developed that enables the clinical research investigator to personally explore and analyze his research data and, as a consequence of these explorations, to acquire more information. This system, which does not require extensive training or computer programming, enables the investigator to describe his data interactively in his own terms, enter data values while having them checked for validity, store time-oriented patient data in a carefully controlled on-line data base, retrieve data by patient, variable, and time, create subsets of patients with common characteristics, perform statistical analyses, and produce tables and graphs. It also permits data to be transferred to and from other computers. The system is well accepted and is being used by a variety of medical specialists at the three clinical research centers where it is operational. Reported benefits include less elapsed and nonproductive time, more thorough analysis of more data, greater and earlier insight into the meaning of research data, and increased publishable results.

  15. Design of a fast computer-based partial discharge diagnostic system

    NASA Technical Reports Server (NTRS)

    Oliva, Jose R.; Karady, G. G.; Domitz, Stan

    1991-01-01

    Partial discharges cause progressive deterioration of insulating materials working in high voltage conditions and may ultimately lead to insulator failure. Experimental findings indicate that deterioration increases with the number of discharges and is consequently proportional to the magnitude and frequency of the applied voltage. In order to obtain a better understanding of the mechanisms of deterioration produced by partial discharges, instrumentation capable of individual pulse resolution is required. A new computer-based partial discharge detection system was designed and constructed to conduct long duration tests on sample capacitors. This system is capable of recording a large number of pulses without dead time and producing valuable information related to the amplitude, polarity, and charge content of the discharges. The operation of the system is automatic and no human supervision is required during the testing stage. Ceramic capacitors were tested at high voltage in long duration tests. The obtained results indicated that the charge content of partial discharges shifts towards higher levels of charge as the level of deterioration in the capacitor increases.

  16. More than just a game: the role of simulation in the teaching of product design and entrepreneurship to mechanical engineering students

    NASA Astrophysics Data System (ADS)

    Costello, Gabriel J.

    2017-11-01

    The purpose of this work is to contribute to the debate on the best pedagogical approach to developing undergraduate mechanical engineering skills to meet the requirements of contemporary complex working environments. The paper provides an example of using student-entrepreneur collaboration in the teaching of modules to Mechanical Engineering final-year students. Problem-based learning (PBL) is one of the most significant recent innovations in the area of education for the professions. This work proposes to make an original contribution by simulating a real-life entrepreneur interaction for the students. The current literature largely confines simulation-based learning to computer applications such as games. However, this paper argues that role playing by students interfacing with technology start-ups can also be regarded as 'simulation' in a wider sense. Consequently, the paper proposes the concept of simulation-action learning as an enhancement of PBL and to distinguish it from computer simulation.

  17. Gun bore flaw image matching based on improved SIFT descriptor

    NASA Astrophysics Data System (ADS)

    Zeng, Luan; Xiong, Wei; Zhai, You

    2013-01-01

    In order to increase the operation speed and matching ability of the SIFT algorithm, the SIFT descriptor and matching strategy are improved. First, a method of constructing the feature descriptor based on sector areas is proposed. By computing the gradient histograms of location bins that are partitioned into 6 sector areas, a descriptor with 48 dimensions is constituted. This reduces the dimension of the feature vector and decreases the complexity of constructing the descriptor. Second, a strategy is introduced that partitions the circular region into 6 identical sector areas starting from the dominant orientation. Consequently, the computational complexity is reduced because no rotation operation is needed for the area. The experimental results indicate that, compared with the OpenCV SIFT implementation, the average matching speed of the new method increases by about 55.86%. Matching accuracy can be increased even under some variation of viewpoint, illumination, rotation, scale, and defocus. The new method achieved satisfactory results in gun bore flaw image matching. Keywords: Metrology, Flaw image matching, Gun bore, Feature descriptor
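
    The sketch below shows how such a 48-dimensional sector descriptor could be assembled: the circular patch around a keypoint is split into 6 sectors measured from the dominant orientation, and each sector accumulates an 8-bin gradient-orientation histogram. The patch size, bin counts, and normalization are illustrative assumptions rather than the authors' exact construction.

```python
import numpy as np

def sector_descriptor(mag, ori, dominant, n_sectors=6, n_bins=8):
    """Sector-area descriptor sketch: the circular patch around a keypoint is
    split into `n_sectors` sectors starting at the dominant orientation, and
    each sector contributes an `n_bins`-bin gradient-orientation histogram,
    giving 6 x 8 = 48 dimensions.

    mag, ori : square arrays of gradient magnitude and orientation (radians)
    dominant : dominant orientation of the keypoint (radians)
    """
    h, w = mag.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - cy, xx - cx)
    inside = radius <= min(cy, cx)                       # keep only the circular region
    angle = (np.arctan2(yy - cy, xx - cx) - dominant) % (2 * np.pi)
    sector = np.minimum((angle / (2 * np.pi / n_sectors)).astype(int), n_sectors - 1)
    rel_ori = (ori - dominant) % (2 * np.pi)             # orientation relative to dominant
    obin = np.minimum((rel_ori / (2 * np.pi / n_bins)).astype(int), n_bins - 1)

    desc = np.zeros((n_sectors, n_bins))
    np.add.at(desc, (sector[inside], obin[inside]), mag[inside])
    desc = desc.ravel()
    return desc / (np.linalg.norm(desc) + 1e-12)         # unit-normalized, 48-D

# Toy patch: random gradients around a keypoint with dominant orientation 0.3 rad.
rng = np.random.default_rng(1)
mag = rng.random((17, 17))
ori = rng.uniform(0, 2 * np.pi, (17, 17))
print(sector_descriptor(mag, ori, dominant=0.3).shape)   # (48,)
```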

  18. Nonlinear ship waves and computational fluid dynamics

    PubMed Central

    MIYATA, Hideaki; ORIHARA, Hideo; SATO, Yohei

    2014-01-01

    Research works undertaken in the first author’s laboratory at the University of Tokyo over the past 30 years are highlighted. Finding of the occurrence of nonlinear waves (named Free-Surface Shock Waves) in the vicinity of a ship advancing at constant speed provided the start-line for the progress of innovative technologies in the ship hull-form design. Based on these findings, a multitude of the Computational Fluid Dynamic (CFD) techniques have been developed over this period, and are highlighted in this paper. The TUMMAC code has been developed for wave problems, based on a rectangular grid system, while the WISDAM code treats both wave and viscous flow problems in the framework of a boundary-fitted grid system. These two techniques are able to cope with almost all fluid dynamical problems relating to ships, including the resistance, ship’s motion and ride-comfort issues. Consequently, the two codes have contributed significantly to the progress in the technology of ship design, and now form an integral part of the ship-designing process. PMID:25311139

  19. Segmentation of DTI based on tensorial morphological gradient

    NASA Astrophysics Data System (ADS)

    Rittner, Leticia; de Alencar Lotufo, Roberto

    2009-02-01

    This paper presents a segmentation technique for diffusion tensor imaging (DTI). This technique is based on a tensorial morphological gradient (TMG), defined as the maximum dissimilarity over the neighborhood. Once this gradient is computed, the tensorial segmentation problem becomes a scalar one, which can be solved by conventional techniques, such as the watershed transform and thresholding. Similarity functions, namely the dot product, the tensorial dot product, the J-divergence and the Frobenius norm, were compared in order to understand their differences regarding the measurement of tensor dissimilarities. The study showed that the dot product and the tensorial dot product turned out to be inappropriate for computation of the TMG, while the Frobenius norm and the J-divergence were both capable of measuring tensor dissimilarities, despite the distortion of the Frobenius norm, since it is not an affine-invariant measure. In order to validate the TMG as a solution for DTI segmentation, its computation was performed using distinct similarity measures and structuring elements. TMG results were also compared to fractional anisotropy. Finally, synthetic and real DTI were used in the method validation. Experiments showed that the TMG enables the segmentation of DTI by the watershed transform or by a simple choice of a threshold. The strength of the proposed segmentation method is its simplicity and robustness, consequences of the TMG computation. It enables the use not only of well-known algorithms and tools from mathematical morphology, but also of any other segmentation method to segment DTI, since the TMG computation transforms tensorial images into scalar ones.
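
    As an illustration of the idea that the TMG turns a tensor field into a scalar volume, the sketch below computes, for each voxel of a toy DTI array, the maximum Frobenius-norm dissimilarity between the centre tensor and its neighbours inside a cubic structuring element. This simplified variant (centre-versus-neighbour comparison, brute-force loops) is for illustration only and is not the paper's exact definition or implementation.

```python
import numpy as np

def frobenius_dissimilarity(t1, t2):
    """Frobenius norm of the tensor difference (one of the measures compared
    in the paper; not affine invariant)."""
    return np.linalg.norm(t1 - t2)

def tensorial_morphological_gradient(field, radius=1):
    """Simplified TMG: for every voxel, the maximum dissimilarity between the
    centre tensor and its neighbours inside a cubic structuring element.
    `field` is a (X, Y, Z, 3, 3) array of diffusion tensors; the result is a
    scalar volume that watershed or thresholding can then segment.
    """
    X, Y, Z = field.shape[:3]
    tmg = np.zeros((X, Y, Z))
    for x in range(X):
        for y in range(Y):
            for z in range(Z):
                centre = field[x, y, z]
                best = 0.0
                for dx in range(-radius, radius + 1):
                    for dy in range(-radius, radius + 1):
                        for dz in range(-radius, radius + 1):
                            nx, ny, nz = x + dx, y + dy, z + dz
                            if 0 <= nx < X and 0 <= ny < Y and 0 <= nz < Z:
                                best = max(best, frobenius_dissimilarity(centre, field[nx, ny, nz]))
                tmg[x, y, z] = best
    return tmg

# Toy DTI volume: two regions with different tensors produce a high TMG ridge at the interface.
vol = np.tile(np.eye(3), (8, 8, 8, 1, 1))
vol[4:] *= 3.0
print(tensorial_morphological_gradient(vol).max())
```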

  20. Increasing the impact of medical image computing using community-based open-access hackathons: The NA-MIC and 3D Slicer experience.

    PubMed

    Kapur, Tina; Pieper, Steve; Fedorov, Andriy; Fillion-Robin, J-C; Halle, Michael; O'Donnell, Lauren; Lasso, Andras; Ungi, Tamas; Pinter, Csaba; Finet, Julien; Pujol, Sonia; Jagadeesan, Jayender; Tokuda, Junichi; Norton, Isaiah; Estepar, Raul San Jose; Gering, David; Aerts, Hugo J W L; Jakab, Marianna; Hata, Nobuhiko; Ibanez, Luiz; Blezek, Daniel; Miller, Jim; Aylward, Stephen; Grimson, W Eric L; Fichtinger, Gabor; Wells, William M; Lorensen, William E; Schroeder, Will; Kikinis, Ron

    2016-10-01

    The National Alliance for Medical Image Computing (NA-MIC) was launched in 2004 with the goal of investigating and developing an open source software infrastructure for the extraction of information and knowledge from medical images using computational methods. Several leading research and engineering groups participated in this effort that was funded by the US National Institutes of Health through a variety of infrastructure grants. This effort transformed 3D Slicer from an internal, Boston-based, academic research software application into a professionally maintained, robust, open source platform with an international leadership and developer and user communities. Critical improvements to the widely used underlying open source libraries and tools (VTK, ITK, CMake, CDash, DCMTK) were an additional consequence of this effort. This project has contributed to close to a thousand peer-reviewed publications and a growing portfolio of US and international funded efforts expanding the use of these tools in new medical computing applications every year. In this editorial, we discuss what we believe are gaps in the way medical image computing is pursued today; how a well-executed research platform can enable discovery, innovation and reproducible science ("Open Science"); and how our quest to build such a software platform has evolved into a productive and rewarding social engineering exercise in building an open-access community with a shared vision. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. The Humanistic Duo: The Park/Recreation Professional and the Computer. (Computer-Can I Use It?).

    ERIC Educational Resources Information Center

    Weiner, Myron E.

    This paper states that there are two fundamental reasons for the comparative absence of computer use for parks and recreation at the present time. These are (1) lack of clear cut cost justification and (2) reluctance on the part of recreation professionals to accept their role as managers and, consequently, to utilize modern management tools. The…

  2. The Computer as an Authority Figure: Some Effects of CAI on Student Perception of Teacher Authority. Technical Report Number 29.

    ERIC Educational Resources Information Center

    Brod, Rodney L.

    A sociological theory of authority was used to investigate some nonintellective, perhaps unintended, consequences of computer-assisted instruction (CAI) upon student's attitudes and orientations toward the organization of the school. An attitudinal questionnaire was used to survey attitudes toward the teacher and the computer in a junior high…

  3. Use of cone beam computed tomography in implant dentistry: current concepts, indications and limitations for clinical practice and research.

    PubMed

    Bornstein, Michael M; Horner, Keith; Jacobs, Reinhilde

    2017-02-01

    Diagnostic radiology is an essential component of treatment planning in the field of implant dentistry. This narrative review will present current concepts for the use of cone beam computed tomography imaging, before and after implant placement, in daily clinical practice and research. Guidelines for the selection of three-dimensional imaging will be discussed, and limitations will be highlighted. Current concepts of radiation dose optimization, including novel imaging modalities using low-dose protocols, will be presented. For preoperative cross-sectional imaging, data are still not available which demonstrate that cone beam computed tomography results in fewer intraoperative complications such as nerve damage or bleeding incidents, or that implants inserted using preoperative cone beam computed tomography data sets for planning purposes will exhibit higher survival or success rates. The use of cone beam computed tomography following the insertion of dental implants should be restricted to specific postoperative complications, such as damage of neurovascular structures or postoperative infections in relation to the maxillary sinus. Regarding peri-implantitis, the diagnosis and severity of the disease should be evaluated primarily based on clinical parameters and on radiological findings based on periapical radiographs (two dimensional). The use of cone beam computed tomography scans in clinical research might not yield any evident beneficial effect for the patient included. As many of the cone beam computed tomography scans performed for research have no direct therapeutic consequence, dose optimization measures should be implemented by using appropriate exposure parameters and by reducing the field of view to the actual region of interest. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  4. Dynamic stability analysis for capillary channel flow: One-dimensional and three-dimensional computations and the equivalent steady state technique

    NASA Astrophysics Data System (ADS)

    Grah, Aleksander; Dreyer, Michael E.

    2010-01-01

    Spacecraft technology provides a series of applications for capillary channel flow. It can serve as a reliable means for positioning and transport of liquids under low gravity conditions. Basically, capillary channels provide liquid paths with one or more free surfaces. A problem may be flow instabilities leading to a collapse of the liquid surfaces. A result is undesired gas ingestion and a two-phase flow, which can in consequence cause several technical problems. The presented capillary channel consists of parallel plates with two free liquid surfaces. The flow rate is established by a pump at the channel outlet, creating a lower pressure within the channel. Owing to the pressure difference between the liquid phase and the ambient gas phase, the free surfaces bend inwards and remain stable as long as they are able to resist the steady and unsteady pressure effects. For the numerical prediction of the flow stability two very different models are used. The one-dimensional unsteady model is mainly based on the Bernoulli equation, the continuity equation, and the Gauss-Laplace equation. For three-dimensional evaluations an open source computational fluid dynamics (CFD) tool is applied. For verification, the numerical results are compared with quasisteady and unsteady data of a sounding rocket experiment. Contrary to previous experiments, this one results in a significantly longer observation sequence. Furthermore, the critical point of the steady flow instability could be approached by a quasisteady technique. As in previous experiments, the comparison to the numerical model evaluation shows a very good agreement for the movement of the liquid surfaces and for the predicted flow instability. The theoretical prediction of the flow instability is related to the speed index, based on characteristic velocities of the capillary channel flow. Stable flow regimes are defined by stability criteria for steady and unsteady flow. The one-dimensional computation of the speed index is based on the technique of the equivalent steady system, which is published for the first time in the present paper. This approach assumes that for every unsteady state an equivalent steady state with a special boundary condition can be formulated. The equivalent steady state technique enables a reformulation of the equation system and an efficient and reliable speed index computation. Furthermore, the existence of the numerical singularity at the critical point of the steady flow instability, postulated in a previous publication, is demonstrated in detail. The numerical singularity is related to the stability criterion for steady flow and represents the numerical consequence of the liquid surface collapse. The evaluation and generation of the pressure diagram is demonstrated in detail with a series of numerical dynamic flow studies. The stability diagram, based on one-dimensional computation, gives a detailed overview of the stable and unstable flow regimes. This prediction is in good agreement with the experimentally observed critical flow conditions and results of three-dimensional CFD computations.

  5. Computed radiography as a gamma ray detector—dose response and applications

    NASA Astrophysics Data System (ADS)

    O'Keeffe, D. S.; McLeod, R. W.

    2004-08-01

    Computed radiography (CR) can be used for imaging the spatial distribution of photon emissions from radionuclides. Its wide dynamic range and good response to medium energy gamma rays reduce the need for long exposure times. Measurements of small doses can be performed without having to pre-sensitize the computed radiography plates via an x-ray exposure, as required with screen-film systems. Cassette-based Agfa MD30 and Kodak GP25 CR plates were used in applications involving the detection of gamma ray emissions from technetium-99m and iodine-131. Cassette entrance doses as small as 1 µGy (140 keV gamma rays) produce noisy images, but the images are suitable for applications such as the detection of breaks in radiation protection barriers. A consequence of the gamma ray sensitivity of CR plates is the possibility that some nuclear medicine patients may fog their x-rays if the x-ray is taken soon after their radiopharmaceutical injection. The investigation showed that such fogging is likely to be diffuse.

  6. On the possibility (or lack thereof) of agreement between experiment and computation of flows over wings at moderate Reynolds number.

    PubMed

    Tank, J; Smith, L; Spedding, G R

    2017-02-06

    The flight of many birds and bats, and their robotic counterparts, occurs over a range of chord-based Reynolds numbers from 1 × 10^4 to 1.5 × 10^5. It is precisely over this range where the aerodynamics of simple, rigid, fixed wings becomes extraordinarily sensitive to small changes in geometry and the environment, with two sets of consequences. The first is that practical lifting devices at this scale will likely not be simple, rigid, fixed wings. The second is that it becomes non-trivial to make baseline comparisons for experiment and computation, when either one can be wrong. Here we examine one ostensibly simple case of the NACA 0012 aerofoil and make careful comparison between the technical literature, and new experiments and computations. The agreement (or lack thereof) will establish one or more baseline results and some sensitivities around them. The idea is that the diagnostic procedures will help to guide comparisons and predictions in subsequent more complex cases.

  7. On the possibility (or lack thereof) of agreement between experiment and computation of flows over wings at moderate Reynolds number

    PubMed Central

    Tank, J.; Smith, L.

    2017-01-01

    The flight of many birds and bats, and their robotic counterparts, occurs over a range of chord-based Reynolds numbers from 1 × 10^4 to 1.5 × 10^5. It is precisely over this range where the aerodynamics of simple, rigid, fixed wings becomes extraordinarily sensitive to small changes in geometry and the environment, with two sets of consequences. The first is that practical lifting devices at this scale will likely not be simple, rigid, fixed wings. The second is that it becomes non-trivial to make baseline comparisons for experiment and computation, when either one can be wrong. Here we examine one ostensibly simple case of the NACA 0012 aerofoil and make careful comparison between the technical literature, and new experiments and computations. The agreement (or lack thereof) will establish one or more baseline results and some sensitivities around them. The idea is that the diagnostic procedures will help to guide comparisons and predictions in subsequent more complex cases. PMID:28163869

  8. QM/QM approach to model energy disorder in amorphous organic semiconductors.

    PubMed

    Friederich, Pascal; Meded, Velimir; Symalla, Franz; Elstner, Marcus; Wenzel, Wolfgang

    2015-02-10

    It is an outstanding challenge to model the electronic properties of organic amorphous materials utilized in organic electronics. Computation of the charge carrier mobility is a challenging problem as it requires integration of morphological and electronic degrees of freedom in a coherent methodology and depends strongly on the distribution of polaron energies in the system. Here we present a QM/QM model to compute the polaron energies, combining density functional methods for molecules in the vicinity of the polaron with computationally efficient density functional based tight binding methods in the rest of the environment. For seven widely used amorphous organic semiconductor materials, we show that the calculations are accelerated by up to an order of magnitude without any loss in accuracy. Considering that the quantum chemical step is the efficiency bottleneck of a workflow to model the carrier mobility, these results are an important step toward accurate and efficient simulations of disordered organic semiconductors, a prerequisite for accelerated materials screening and consequent component optimization in the organic electronics industry.

  9. Vision-Based People Detection System for Heavy Machine Applications

    PubMed Central

    Fremont, Vincent; Bui, Manh Tuan; Boukerroui, Djamal; Letort, Pierrick

    2016-01-01

    This paper presents a vision-based people detection system for improving safety in heavy machines. We propose a perception system composed of a monocular fisheye camera and a LiDAR. Fisheye cameras have the advantage of a wide field-of-view, but the strong distortions that they create must be handled at the detection stage. Since people detection in fisheye images has not been well studied, we focus on investigating and quantifying the impact that strong radial distortions have on the appearance of people, and we propose approaches for handling this specificity, adapted from state-of-the-art people detection approaches. These adaptive approaches nevertheless have the drawback of high computational cost and complexity. Consequently, we also present a framework for harnessing the LiDAR modality in order to enhance the detection algorithm for different camera positions. A sequential LiDAR-based fusion architecture is used, which addresses directly the problem of reducing false detections and computational cost in an exclusively vision-based system. A heavy machine dataset was built, and different experiments were carried out to evaluate the performance of the system. The results are promising, in terms of both processing speed and performance. PMID:26805838

  10. Vision-Based People Detection System for Heavy Machine Applications.

    PubMed

    Fremont, Vincent; Bui, Manh Tuan; Boukerroui, Djamal; Letort, Pierrick

    2016-01-20

    This paper presents a vision-based people detection system for improving safety in heavy machines. We propose a perception system composed of a monocular fisheye camera and a LiDAR. Fisheye cameras have the advantage of a wide field-of-view, but the strong distortions that they create must be handled at the detection stage. Since people detection in fisheye images has not been well studied, we focus on investigating and quantifying the impact that strong radial distortions have on the appearance of people, and we propose approaches for handling this specificity, adapted from state-of-the-art people detection approaches. These adaptive approaches nevertheless have the drawback of high computational cost and complexity. Consequently, we also present a framework for harnessing the LiDAR modality in order to enhance the detection algorithm for different camera positions. A sequential LiDAR-based fusion architecture is used, which addresses directly the problem of reducing false detections and computational cost in an exclusively vision-based system. A heavy machine dataset was built, and different experiments were carried out to evaluate the performance of the system. The results are promising, in terms of both processing speed and performance.

  11. Health decision making: lynchpin of evidence-based practice.

    PubMed

    Spring, Bonnie

    2008-01-01

    Health decision making is both the lynchpin and the least developed aspect of evidence-based practice. The evidence-based practice process requires integrating the evidence with consideration of practical resources and patient preferences and doing so via a process that is genuinely collaborative. Yet, the literature is largely silent about how to accomplish integrative, shared decision making. Implications for evidence-based practice are discussed for 2 theories of clinician decision making (expected utility and fuzzy trace) and 2 theories of patient health decision making (transtheoretical model and reasoned action). Three suggestions are offered. First, it would be advantageous to have theory-based algorithms that weight and integrate the 3 data strands (evidence, resources, preferences) in different decisional contexts. Second, patients, not providers, make the decisions of greatest impact on public health, and those decisions are behavioral. Consequently, theory explicating how provider-patient collaboration can influence patient lifestyle decisions made miles from the provider's office is greatly needed. Third, although the preponderance of data on complex decisions supports a computational approach, such an approach to evidence-based practice is too impractical to be widely applied at present. More troublesomely, until patients come to trust decisions made computationally more than they trust their providers' intuitions, patient adherence will remain problematic. A good theory of integrative, collaborative health decision making remains needed.

  12. Health Decision Making: Lynchpin of Evidence-Based Practice

    PubMed Central

    Spring, Bonnie

    2008-01-01

    Health decision making is both the lynchpin and the least developed aspect of evidence-based practice. The evidence-based practice process requires integrating the evidence with consideration of practical resources and patient preferences and doing so via a process that is genuinely collaborative. Yet, the literature is largely silent about how to accomplish integrative, shared decision making. Implications for evidence-based practice are discussed for 2 theories of clinician decision making (expected utility and fuzzy trace) and 2 theories of patient health decision making (transtheoretical model and reasoned action). Three suggestions are offered. First, it would be advantageous to have theory-based algorithms that weight and integrate the 3 data strands (evidence, resources, preferences) in different decisional contexts. Second, patients, not providers, make the decisions of greatest impact on public health, and those decisions are behavioral. Consequently, theory explicating how provider-patient collaboration can influence patient lifestyle decisions made miles from the provider's office is greatly needed. Third, although the preponderance of data on complex decisions supports a computational approach, such an approach to evidence-based practice is too impractical to be widely applied at present. More troublesomely, until patients come to trust decisions made computationally more than they trust their providers’ intuitions, patient adherence will remain problematic. A good theory of integrative, collaborative health decision making remains needed. PMID:19015288

  13. Artificial Boundary Conditions for Computation of Oscillating External Flows

    NASA Technical Reports Server (NTRS)

    Tsynkov, S. V.

    1996-01-01

    In this paper, we propose a new technique for the numerical treatment of external flow problems with oscillatory behavior of the solution in time. Specifically, we consider the case of unbounded compressible viscous plane flow past a finite body (airfoil). Oscillations of the flow in time may be caused by the time-periodic injection of fluid into the boundary layer, which, in accordance with experimental data, may essentially increase the performance of the airfoil. To conduct the actual computations, we have to somehow restrict the original unbounded domain, that is, to introduce an artificial (external) boundary and to further consider only a finite computational domain. Consequently, we will need to formulate some artificial boundary conditions (ABC's) at the introduced external boundary. The ABC's we are aiming to obtain must meet a fundamental requirement. One should be able to uniquely complement the solution calculated inside the finite computational domain to its infinite exterior so that the original problem is solved within the desired accuracy. Our construction of such ABC's for oscillating flows is based on an essential assumption: the Navier-Stokes equations can be linearized in the far field against the free-stream background. To actually compute the ABC's, we represent the far-field solution as a Fourier series in time and then apply the Difference Potentials Method (DPM) of V. S. Ryaben'kii. This paper contains a general theoretical description of the algorithm for setting the DPM-based ABC's for time-periodic external flows. Based on our experience in implementing analogous ABC's for steady-state problems (a simpler case), we expect that these boundary conditions will become an effective tool for constructing robust numerical methods to calculate oscillatory flows.

  14. Estimating the probability for major gene Alzheimer disease

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrer, L.A.; Cupples, L.A.

    1994-02-01

    Alzheimer disease (AD) is a neuropsychiatric illness caused by multiple etiologies. Prediction of whether AD is genetically based in a given family is problematic because of censoring bias among unaffected relatives as a consequence of the late onset of the disorder, diagnostic uncertainties, heterogeneity, and limited information in a single family. The authors have developed a method based on Bayesian probability to compute values for a continuous variable that ranks AD families as having a major gene form of AD (MGAD). In addition, they have compared the Bayesian method with a maximum-likelihood approach. These methods incorporate sex- and age-adjusted risk estimates and allow for phenocopies and familial clustering of age at onset. Agreement is high between the two approaches for ranking families as MGAD (Spearman rank [r] = .92). When either method is used, the numerical outcomes are sensitive to assumptions of the gene frequency and cumulative incidence of the disease in the population. Consequently, risk estimates should be used cautiously for counseling purposes; however, there are numerous valid applications of these procedures in genetic and epidemiological studies. 41 refs., 4 figs., 3 tabs.
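
    The core Bayesian step, stripped of the sex- and age-adjusted risk estimates and the handling of censoring that the authors describe, reduces to weighing the likelihood of a family's affection pattern under a major-gene model against a sporadic (phenocopy) model. The sketch below shows that step with purely hypothetical penetrance, phenocopy, and prior values; it is not the authors' procedure.

```python
def major_gene_posterior(family, prior_mg=0.1,
                         penetrance_mg=0.5, phenocopy_rate=0.05):
    """Much-simplified Bayesian ranking of a family as 'major-gene AD' (MGAD).

    family : list of booleans, True if a relative is affected.
    The numbers here are hypothetical placeholders, not the sex- and
    age-adjusted risks of the original method.
    """
    like_mg = like_sporadic = 1.0
    for affected in family:
        # Likelihood of each relative's status under the two competing models.
        like_mg *= penetrance_mg if affected else (1.0 - penetrance_mg)
        like_sporadic *= phenocopy_rate if affected else (1.0 - phenocopy_rate)
    numerator = prior_mg * like_mg
    return numerator / (numerator + (1.0 - prior_mg) * like_sporadic)

# Two hypothetical families: heavily loaded vs. a single affected relative.
print(major_gene_posterior([True, True, True, False]))   # high posterior
print(major_gene_posterior([True, False, False, False])) # low posterior
```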

  15. Methods for nuclear air-cleaning-system accident-consequence assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrae, R.W.; Bolstad, J.W.; Gregory, W.S.

    1982-01-01

    This paper describes a multilaboratory research program that is directed toward addressing many questions that analysts face when performing air cleaning accident consequence assessments. The program involves developing analytical tools and supportive experimental data that will be useful in making more realistic assessments of accident source terms within and up to the atmospheric boundaries of nuclear fuel cycle facilities. The types of accidents considered in this study include fires, explosions, spills, tornadoes, criticalities, and equipment failures. The main focus of the program is developing an accident analysis handbook (AAH). We will describe the contents of the AAH, which include descriptions of selected nuclear fuel cycle facilities, process unit operations, source-term development, and accident consequence analyses. Three computer codes designed to predict gas and material propagation through facility air cleaning systems are described. These computer codes address accidents involving fires (FIRAC), explosions (EXPAC), and tornadoes (TORAC). The handbook relies on many illustrative examples to show the analyst how to approach accident consequence assessments. We will use the FIRAC code and a hypothetical fire scenario to illustrate the accident analysis capability.

  16. Westinghouse ICF power plant study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sucov, E. W.

    1980-10-01

    In this study, two different electric power plants for the production of about 1000 MWe which were based on a CO2 laser driver and on a heavy ion driver have been developed and analyzed. The purposes of this study were: (1) to examine in a self consistent way the technological and institutional problems that need to be confronted and solved in order to produce commercially competitive electricity in the 2020 time frame from an inertial fusion reactor, and (2) to compare, on a common basis, the consequences of using two different drivers to initiate the DT fuel pellet explosions. Analytic descriptions of size/performance/cost relationships for each of the subsystems comprising the power plant have been combined into an overall computer code which models the entire plant. This overall model has been used to conduct trade studies which examine the consequences of varying critical design values around the reference point.

  17. Framework and methodology for supply chain lifecycle analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamlet, Jason; Eames, Brandon K.; Kao, Gio K.

    The various technologies presented herein pertain to identifying and mitigating risks and attacks on a supply chain. A computer-implemented representation of a supply chain is generated comprising nodes (locations) and edges (objects, information). Risk of attack and different attack vectors can be defined for the various nodes and edges, and further, based upon the risks and attacks, (difficulty, consequence) pairs can be determined. One or more mitigations can be generated to increase the difficulty of an attack and/or reduce the consequence of an attack. The one or more mitigations can be constrained, e.g., by cost, time, etc., to facilitate determination of how feasible a respective mitigation is to implement with regard to finances available, duration to implement, etc. A context-free grammar can be utilized to identify one or more attacks in the supply chain. Further, the risks can undergo a ranking to enable mitigation priority to be determined.
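
    The sketch below illustrates the flavour of the (difficulty, consequence) bookkeeping and budget-constrained mitigation selection described above. The attack and mitigation entries, the risk formula, and the greedy selection are hypothetical stand-ins; the framework's actual representation and algorithms are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Attack:
    name: str
    difficulty: float    # higher = harder for an adversary
    consequence: float   # impact if successful
    probability: float   # estimated likelihood of being attempted

@dataclass
class Mitigation:
    name: str
    target: str          # attack it addresses
    cost: float
    added_difficulty: float
    reduced_consequence: float

def risk(attack):
    """Illustrative risk score: likelihood-weighted consequence, discounted by difficulty."""
    return attack.probability * attack.consequence / attack.difficulty

def choose_mitigations(attacks, mitigations, budget):
    """Greedy, budget-constrained selection of mitigations by risk reduction per unit cost."""
    by_name = {a.name: a for a in attacks}
    def benefit(m):
        a = by_name[m.target]
        after = Attack(a.name, a.difficulty + m.added_difficulty,
                       max(a.consequence - m.reduced_consequence, 0.0),
                       a.probability)
        return risk(a) - risk(after)
    ranked = sorted(mitigations, key=lambda m: benefit(m) / m.cost, reverse=True)
    chosen, spent = [], 0.0
    for m in ranked:
        if spent + m.cost <= budget:
            chosen.append(m.name)
            spent += m.cost
    return chosen

attacks = [Attack("tampered_firmware", 2.0, 9.0, 0.3),
           Attack("counterfeit_part", 1.0, 4.0, 0.6)]
mitigations = [Mitigation("signed_updates", "tampered_firmware", 5.0, 3.0, 2.0),
               Mitigation("vendor_audit", "counterfeit_part", 3.0, 1.0, 1.0)]
print(choose_mitigations(attacks, mitigations, budget=6.0))
```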

  18. Molecular basis of cyclooxygenase enzymes (COXs) selective inhibition

    PubMed Central

    Limongelli, Vittorio; Bonomi, Massimiliano; Marinelli, Luciana; Gervasio, Francesco Luigi; Cavalli, Andrea; Novellino, Ettore; Parrinello, Michele

    2010-01-01

    The widely used nonsteroidal anti-inflammatory drugs block the cyclooxygenase enzymes (COXs) and are clinically used for the treatment of inflammation, pain, and cancers. A selective inhibition of the different isoforms, particularly COX-2, is desirable, and consequently a deeper understanding of the molecular basis of selective inhibition is of great demand. Using an advanced computational technique we have simulated the full dissociation process of a highly potent and selective inhibitor, SC-558, in both COX-1 and COX-2. We have found a previously unreported alternative binding mode in COX-2 explaining the time-dependent inhibition exhibited by this class of inhibitors and consequently their long residence time inside this isoform. Our metadynamics-based approach allows us to illuminate the highly dynamical character of the ligand/protein recognition process, thus explaining a wealth of experimental data and paving the way to an innovative strategy for designing new COX inhibitors with tuned selectivity. PMID:20215464

  19. Scaling Task Management in Space and Time: Reducing User Overhead in Ubiquitous-Computing Environments

    DTIC Science & Technology

    2005-03-28

    consequently users are torn between taking advantage of increasingly pervasive computing systems, and the price (in attention and skill) that they have to... advantage of the surrounding computing environments; and (c) that it is usable by non-experts. Second, from a software architect’s perspective, we...take full advantage of the computing systems accessible to them, much as they take advantage of the furniture in each physical space. In the example

  20. A simplified simulation model for a HPDC die with conformal cooling channels

    NASA Astrophysics Data System (ADS)

    Frings, Markus; Behr, Marek; Elgeti, Stefanie

    2017-10-01

    In general, the cooling phase of the high-pressure die casting process is based on complex physical phenomena: solidification of molten material; heat exchange between cast part, die and cooling fluid; turbulent flow inside the cooling channels that needs to be considered when computing the heat flux; interdependency of properties and temperature of the cooling liquid. Intuitively understanding and analyzing all of these effects when designing HPDC dies is not feasible. A remedy that has become available is numerical design, based for example on shape optimization methods. However, current computing power is not sufficient to perform optimization while at the same time fully resolving all physical phenomena. But since in HPDC suitable objective functions very often lead to integral values, e.g., average die temperature, this paper identifies possible simplifications in the modeling of the cooling phase. As a consequence, the computational effort is reduced to an acceptable level. A further aspect that arises in the context of shape optimization is the evaluation of shape gradients. The challenge here is to allow for large shape deformations without remeshing. In our approach, the cooling channels are described by their center lines. The flow profile of the cooling fluid is then estimated based on experimental data found in literature for turbulent pipe flows. In combination, the heat flux throughout cavity, die, and cooling channel can be described by one single advection-diffusion equation on a fixed mesh. The parameters in the equation are adjusted based on the position of cavity and cooling channel. Both results contribute towards a computationally efficient, yet accurate method, which can be employed within the frame of shape optimization of cooling channels in HPDC dies.
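
    To make the modelling idea concrete, the sketch below solves a single advection-diffusion equation for temperature on a fixed 1-D mesh, with the advection velocity and diffusivity adjusted by position to mimic a cooling-channel region embedded in the die. The geometry, material values, and boundary temperatures are hypothetical, and the plain explicit finite-difference scheme is an illustration, not the authors' model.

```python
import numpy as np

def advect_diffuse_1d(T, u, alpha, dx, dt, steps):
    """Explicit finite-difference stepping of dT/dt + u dT/dx = alpha d2T/dx2
    on a fixed 1-D mesh.  `u` and `alpha` may vary in space, mimicking the idea
    of adjusting the equation's parameters with position (die vs. channel).
    dt must satisfy the usual advection (CFL) and diffusion stability limits.
    """
    T = T.copy()
    for _ in range(steps):
        dTdx = (np.roll(T, -1) - np.roll(T, 1)) / (2 * dx)        # central advection
        d2Tdx2 = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2  # diffusion
        T += dt * (-u * dTdx + alpha * d2Tdx2)
        T[0], T[-1] = 700.0, 300.0                                 # fixed boundary temperatures
    return T

n = 200
x = np.linspace(0.0, 1.0, n)
T0 = np.full(n, 700.0)                                  # hot die after injection (K)
channel = (x > 0.4) & (x < 0.6)                         # "cooling channel" region
u = np.where(channel, 0.2, 0.0)                         # coolant advection only in channel
alpha = np.where(channel, 1e-3, 1e-4)                   # higher effective diffusivity in channel
T = advect_diffuse_1d(T0, u, alpha, dx=x[1] - x[0], dt=1e-3, steps=2000)
print(T.mean())   # integral-type objective, e.g. average die temperature
```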

  1. A deconvolution extraction method for 2D multi-object fibre spectroscopy based on the regularized least-squares QR-factorization algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Jian; Yin, Qian; Guo, Ping; Luo, A.-li

    2014-09-01

    This paper presents an efficient method for the extraction of astronomical spectra from two-dimensional (2D) multifibre spectrographs based on the regularized least-squares QR-factorization (LSQR) algorithm. We address two issues: we propose a modified Gaussian point spread function (PSF) for modelling the 2D PSF from multi-emission-line gas-discharge lamp images (arc images), and we develop an efficient deconvolution method to extract spectra in real circumstances. The proposed modified 2D Gaussian PSF model can fit various types of 2D PSFs, including different radial distortion angles and ellipticities. We adopt the regularized LSQR algorithm to solve the sparse linear equations constructed from the sparse convolution matrix, which we designate the deconvolution spectrum extraction method. Furthermore, we implement a parallelized LSQR algorithm based on graphics processing unit programming in the Compute Unified Device Architecture to accelerate the computational processing. Experimental results illustrate that the proposed extraction method can greatly reduce the computational cost and memory use of the deconvolution method and, consequently, increase its efficiency and practicability. In addition, the proposed extraction method has a stronger noise tolerance than other methods, such as the boxcar (aperture) extraction and profile extraction methods. Finally, we present an analysis of the sensitivity of the extraction results to the radius and full width at half-maximum of the 2D PSF.
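
    A reduced, 1-D version of the deconvolution extraction idea is sketched below: each spectrum element contributes a Gaussian PSF column to a sparse system matrix, and the fluxes are recovered with SciPy's damped LSQR solver (the damping term playing the role of the regularization). The PSF model, sizes, and noise level are illustrative assumptions; the paper's modified 2-D Gaussian PSF and GPU-parallel LSQR are not reproduced.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr

def gaussian_psf(center, grid, sigma):
    """Toy 1-D Gaussian PSF profile (stand-in for the modified 2-D Gaussian PSF)."""
    return np.exp(-0.5 * ((grid - center) / sigma) ** 2)

# Build a sparse "convolution" matrix A: each column is one spectrum element's
# PSF sampled on the detector pixels, then solve A x = b with damped LSQR.
n_pixels, n_elements, sigma = 400, 80, 1.8
pixels = np.arange(n_pixels, dtype=float)
centers = np.linspace(5, n_pixels - 5, n_elements)

cols = []
for c in centers:
    profile = gaussian_psf(c, pixels, sigma)
    profile[profile < 1e-6] = 0.0            # keep the matrix sparse
    cols.append(sparse.csc_matrix(profile.reshape(-1, 1)))
A = sparse.hstack(cols).tocsr()

rng = np.random.default_rng(2)
true_flux = rng.uniform(1.0, 10.0, n_elements)
b = A @ true_flux + rng.normal(0.0, 0.05, n_pixels)   # noisy detector row

x = lsqr(A, b, damp=1e-2)[0]                 # damp > 0 provides the regularization
print(np.max(np.abs(x - true_flux)))         # recovered fluxes vs. ground truth
```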

  2. Neural correlates of sensory prediction errors in monkeys: evidence for internal models of voluntary self-motion in the cerebellum.

    PubMed

    Cullen, Kathleen E; Brooks, Jessica X

    2015-02-01

    During self-motion, the vestibular system makes essential contributions to postural stability and self-motion perception. To ensure accurate perception and motor control, it is critical to distinguish between vestibular sensory inputs that are the result of externally applied motion (exafference) and that are the result of our own actions (reafference). Indeed, although the vestibular sensors encode vestibular afference and reafference with equal fidelity, neurons at the first central stage of sensory processing selectively encode vestibular exafference. The mechanism underlying this reafferent suppression compares the brain's motor-based expectation of sensory feedback with the actual sensory consequences of voluntary self-motion, effectively computing the sensory prediction error (i.e., exafference). It is generally thought that sensory prediction errors are computed in the cerebellum, yet it has been challenging to explicitly demonstrate this. We have recently addressed this question and found that deep cerebellar nuclei neurons explicitly encode sensory prediction errors during self-motion. Importantly, in everyday life, sensory prediction errors occur in response to changes in the effector or world (muscle strength, load, etc.), as well as in response to externally applied sensory stimulation. Accordingly, we hypothesize that altering the relationship between motor commands and the actual movement parameters will result in the updating in the cerebellum-based computation of exafference. If our hypothesis is correct, under these conditions, neuronal responses should initially be increased--consistent with a sudden increase in the sensory prediction error. Then, over time, as the internal model is updated, response modulation should decrease in parallel with a reduction in sensory prediction error, until vestibular reafference is again suppressed. The finding that the internal model predicting the sensory consequences of motor commands adapts for new relationships would have important implications for understanding how responses to passive stimulation endure despite the cerebellum's ability to learn new relationships between motor commands and sensory feedback.

  3. Hybrid techniques for the digital control of mechanical and optical systems

    NASA Astrophysics Data System (ADS)

    Acernese, Fausto; Barone, Fabrizio; De Rosa, Rosario; Eleuteri, Antonio; Milano, Leopoldo; Pardi, Silvio; Ricciardi, Iolanda; Russo, Guido

    2004-07-01

    One of the main requirements of a digital system for the control of interferometric detectors of gravitational waves is the computing power, which is a direct consequence of the increasing complexity of the digital algorithms necessary for control signal generation. For this specific task many specialised non-standard real-time architectures have been developed, often very expensive and difficult to upgrade. On the other hand, such computing power is generally fully available for off-line applications on standard PC-based systems. Therefore, a possible and obvious solution may be provided by the integration of both the real-time and off-line architectures, resulting in a hybrid control system architecture based on standard available components, which seeks to combine the advantages of the perfect data synchronization provided by real-time systems with the large computing power available on PC-based systems. Such integration may be provided by implementing the link between the two different architectures through the standard Ethernet network, whose data transfer speed has been increasing greatly in recent years, using the TCP/IP and UDP protocols. In this paper we describe the architecture of a hybrid Ethernet-based real-time control system prototype we implemented in Napoli, discussing its characteristics and performances. Finally we discuss a possible application to the real-time control of a suspended mass of the mode cleaner of the 3m prototype optical interferometer for gravitational wave detection (IDGW-3P) operational in Napoli.

  4. Real-time dispatching modelling for trucks with different capacities in open pit mines / Modelowanie w czasie rzeczywistym przewozów ciężarówek o różnej ładowności w kopalni odkrywkowej

    NASA Astrophysics Data System (ADS)

    Ahangaran, Daryoush Kaveh; Yasrebi, Amir Bijan; Wetherelt, Andy; Foster, Patrick

    2012-10-01

    Application of fully automated systems for truck dispatching plays a major role in decreasing transportation costs, which often represent the majority of the costs of open pit mining. Consequently, the application of a truck dispatching system has become fundamentally important in most of the world's open pit mines. Recent experience indicates that, by decreasing trucks' travelling times and the associated waiting times at their shovels, the application of a truck dispatching system considerably improves the rate of production. Computer-based truck dispatching systems using advanced and accurate algorithms and software are examples of these innovations. Developing an algorithm for a computer-based program appropriate to a specific mine's conditions is considered one of the most important activities in connection with computer-based dispatching in open pit mines. In this paper the changing trend of programming and dispatching control algorithms and automation conditions will be discussed. Furthermore, since the transportation fleets of most mines use trucks with different capacities, innovative methods, operational optimisation techniques and the best possible methods for developing the required algorithm for real-time dispatching are selected by conducting research on mathematically based planning methods. Finally, a real-time dispatching model compatible with the requirements of trucks with different capacities is developed by using the two techniques of flow networks and integer programming.
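
    As a toy illustration of the optimisation flavour involved, the sketch below makes a single dispatch decision by assigning idle trucks of different payloads to shovels so that travel plus expected queueing time per tonne hauled is minimised, using a standard linear assignment solver. The travel times, queue estimates, capacities, and cost function are hypothetical and much simpler than the paper's real-time flow-network / integer-programming model.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy snapshot of a dispatch decision: assign idle trucks (rows) to shovels
# (columns) so that expected travel + queueing time per tonne hauled is minimized.
travel_min = np.array([[12.0,  9.0, 15.0],
                       [ 7.0, 11.0, 10.0],
                       [14.0,  8.0,  6.0],
                       [10.0, 13.0,  9.0]])
queue_min = np.array([3.0, 1.0, 5.0])                  # current expected wait per shovel
capacity_t = np.array([100.0, 150.0, 100.0, 220.0])    # truck payloads (tonnes)

# Cost per tonne: (travel + expected wait) divided by what the truck can carry.
cost = (travel_min + queue_min) / capacity_t[:, None]

trucks, shovels = linear_sum_assignment(cost)          # optimal one-truck-per-shovel match
for t, s in zip(trucks, shovels):
    print(f"truck {t} (payload {capacity_t[t]:.0f} t) -> shovel {s}")
```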

  5. Dynamically allocating sets of fine-grained processors to running computations

    NASA Technical Reports Server (NTRS)

    Middleton, David

    1988-01-01

    Researchers explore an approach to using general purpose parallel computers which involves mapping hardware resources onto computations instead of mapping computations onto hardware. Problems such as processor allocation, task scheduling and load balancing, which have traditionally proven to be challenging, change significantly under this approach and may become amenable to new attacks. Researchers describe the implementation of this approach used by the FFP Machine whose computation and communication resources are repeatedly partitioned into disjoint groups that match the needs of available tasks from moment to moment. Several consequences of this system are examined.

  6. Geospace ionosphere research with a MF/HF radio instrument on a cubesat

    NASA Astrophysics Data System (ADS)

    Kallio, E. J.; Aikio, A. T.; Alho, M.; Fontell, M.; van Gijlswijk, R.; Kauristie, K.; Kestilä, A.; Koskimaa, P.; Makela, J. S.; Mäkelä, M.; Turunen, E.; Vanhamäki, H.

    2016-12-01

    Modern technology provides new possibilities to study geospace and its ionosphere using spacecraft and computer simulations. CubeSats, a type of nanosatellite, provide a cost-effective means of making in-situ measurements in the ionosphere. Moreover, combining CubeSat observations with ground-based observations gives a new view on auroras and associated electromagnetic phenomena. In particular, joint and active CubeSat - ground-based observation campaigns enable the study of the 3D structure of the ionosphere. Furthermore, using several CubeSats to form satellite constellations enables much higher temporal resolution. At the same time, increasing computation capacity has made it possible to perform simulations in which properties of the ionosphere, such as the propagation of electromagnetic waves in the medium frequency (MF, 0.3-3 MHz) and high frequency (HF, 3-30 MHz) ranges, are modelled based on a 3D ionospheric model and first-principles modelling. Electromagnetic waves at those frequencies are strongly affected by ionospheric electrons and, consequently, those frequencies can be used for studying the plasma. On the other hand, even if the ionosphere originally enables long-range telecommunication at MF and HF frequencies, the frequent occurrence of spatiotemporal variations in the ionosphere disturbs communication channels, especially at high latitudes. Therefore, the study of MF and HF waves in the ionosphere is of strong scientific and technological interest. We present computational simulation results and measuring principles and techniques to investigate the arctic ionosphere with a polar orbiting CubeSat whose novel AM radio instrument measures HF and MF waves. The CubeSat, which also contains a white-light aurora camera, is planned to be launched in 2017 (http://www.suomi100satelliitti.fi/eng). We have modelled the propagation of the radio waves, both ground-generated man-made waves and space weather related waves formed in space, through the 3D arctic ionosphere with (1) a new 3D ray tracing model and (2) a new 3D full kinetic electromagnetic simulation. These simulations are used to analyse the origin of the radio waves observed by the MF/HF radio instrument and, consequently, to derive information about the 3D ionosphere and its spatial and temporal variations.

  7. An innovative computationally efficient hydromechanical coupling approach for fault reactivation in geological subsurface utilization

    NASA Astrophysics Data System (ADS)

    Adams, M.; Kempka, T.; Chabab, E.; Ziegler, M.

    2018-02-01

    Estimating the efficiency and sustainability of geological subsurface utilization, i.e., Carbon Capture and Storage (CCS), requires an integrated risk assessment approach, considering the coupled processes that occur, among others the potential reactivation of existing faults. In this context, hydraulic and mechanical parameter uncertainties as well as different injection rates have to be considered and quantified to elaborate reliable environmental impact assessments. Consequently, the required sensitivity analyses consume significant computational time due to the high number of realizations that have to be carried out. Due to the high computational costs of two-way coupled simulations in large-scale 3D multiphase fluid flow systems, these are not applicable for the purpose of uncertainty and risk assessments. Hence, an innovative semi-analytical hydromechanical coupling approach for hydraulic fault reactivation is introduced. This approach determines the void ratio evolution in representative fault elements using one preliminary base simulation, considering one model geometry and one set of hydromechanical parameters. The void ratio development is then approximated and related to one reference pressure at the base of the fault. The parametrization of the resulting functions is then directly implemented into a multiphase fluid flow simulator to carry out the semi-analytical coupling for the simulation of hydromechanical processes. Hereby, the iterative parameter exchange between the multiphase and mechanical simulators is omitted, since the update of porosity and permeability is controlled by one reference pore pressure at the fault base. The suggested procedure is capable of reducing the computational time required by coupled hydromechanical simulations of a multitude of injection rates by a factor of up to 15.
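    The sketch below illustrates the coupling shortcut in its simplest possible form: porosity in a fault element is taken from a function of one reference pressure (here a made-up linear fit standing in for the authors' calibrated parametrization), and permeability follows from a Kozeny-Carman-type relation instead of a call to a mechanical simulator. All constants are assumptions for illustration.

      # Semi-analytical update sketch: porosity from a fitted function of the
      # reference pressure at the fault base, permeability from porosity.
      def porosity_from_reference_pressure(p_ref_mpa: float,
                                           phi0: float = 0.10,
                                           slope_per_mpa: float = 1.5e-3) -> float:
          # placeholder for the function fitted to the preliminary base simulation
          return phi0 + slope_per_mpa * p_ref_mpa

      def permeability_update(phi: float, phi0: float = 0.10, k0_m2: float = 1e-15) -> float:
          # Kozeny-Carman style scaling of permeability with porosity
          return k0_m2 * (phi / phi0) ** 3 * ((1.0 - phi0) / (1.0 - phi)) ** 2

      for p_ref in (0.0, 2.0, 5.0):          # reference overpressure in MPa
          phi = porosity_from_reference_pressure(p_ref)
          print(p_ref, round(phi, 4), f"{permeability_update(phi):.3e}")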

  8. Evaluating the effects of dam breach methodologies on Consequence Estimation through Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Kalyanapu, A. J.; Thames, B. A.

    2013-12-01

    Dam breach modeling often includes application of models that are sophisticated, yet computationally intensive, to compute flood propagation at high temporal and spatial resolutions. This results in a significant need for computational capacity that requires development of newer flood models using multi-processor and graphics processing techniques. Recently, a comprehensive benchmark exercise, the 12th Benchmark Workshop on Numerical Analysis of Dams, was organized by the International Commission on Large Dams (ICOLD) to evaluate the performance of the various tools used for dam break risk assessment. The ICOLD workshop is focused on estimating the consequences of failure of a hypothetical dam near a hypothetical populated area with complex demographics and economic activity. The current study uses this hypothetical case, including the topography, dam geometric and construction information, land use/land cover data, and socio-economic and demographic data, and focuses on evaluating the effects of dam breach methodologies on consequence estimation and analysis. The objective of this study is to evaluate the impacts of using four different dam breach methods on the consequence estimates used in the risk assessments. The four methodologies used are: i) Froehlich (1995), ii) MacDonald and Langridge-Monopolis 1984 (MLM), iii) Von Thun and Gillette 1990 (VTG), and iv) Froehlich (2008). To achieve this objective, three different modeling components were used. First, using HEC-RAS v4.1, dam breach discharge hydrographs are developed. These hydrographs are then provided as flow inputs into a two-dimensional flood model named Flood2D-GPU, which leverages the computer's graphics card for much improved computational performance. Lastly, outputs from Flood2D-GPU, including inundated areas, depth grids, velocity grids, and flood wave arrival time grids, are input into HEC-FIA, which provides the consequence assessment for the solution to the problem statement. For the four breach methodologies, a sensitivity analysis of four breach parameters, breach side slope (SS), breach width (Wb), breach invert elevation (Elb), and time of failure (tf), is conducted. Up to 68 simulations are computed to produce breach hydrographs in HEC-RAS for input into Flood2D-GPU. The Flood2D-GPU simulation results were then post-processed in HEC-FIA to evaluate: Total Population at Risk (PAR), 14-yr and Under PAR (PAR14-), 65-yr and Over PAR (PAR65+), Loss of Life (LOL) and Direct Economic Impact (DEI). The MLM approach resulted in wide variability in simulated minimum and maximum values of PAR, PAR65+ and LOL estimates. For PAR14- and DEI, Froehlich (1995) resulted in lower values while MLM resulted in higher estimates. This preliminary study demonstrated the relative performance of four commonly used dam breach methodologies and their impacts on consequence estimation.
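    To make concrete what a "breach methodology" supplies, the sketch below evaluates the Froehlich (1995) regressions in their commonly cited SI form (average breach width in m and failure time in hours from reservoir volume V_w in m^3 and breach height h_b in m). The coefficients are quoted from memory of standard breach-parameter guidance and should be verified against the original reference before any real study; the example reservoir is made up.

      # Hedged sketch of the Froehlich (1995) breach-parameter regressions.
      def froehlich_1995_breach(v_w_m3: float, h_b_m: float, overtopping: bool = True):
          k_o = 1.4 if overtopping else 1.0          # failure-mode factor
          b_avg_m = 0.1803 * k_o * v_w_m3 ** 0.32 * h_b_m ** 0.19
          t_f_hr = 0.00254 * v_w_m3 ** 0.53 * h_b_m ** -0.90
          return b_avg_m, t_f_hr

      # Example: 50 million m^3 reservoir behind a 30 m high dam, overtopping failure.
      width, t_fail = froehlich_1995_breach(50e6, 30.0)
      print(f"average breach width ~{width:.0f} m, failure time ~{t_fail:.2f} h")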

  9. Modular Manufacturing Simulator: Users Manual

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The Modular Manufacturing Simulator (MMS) has been developed for the beginning user of computer simulations. Consequently, the MMS cannot model complex systems that require branching and convergence logic. Once a user becomes more proficient in computer simulation and wants to add more complexity, the user is encouraged to use one of the many available commercial simulation systems. The MMS is based on the SSE5, which was developed in the early 1990s by the University of Alabama in Huntsville (UAH). A recent survey by MSFC indicated that the simulator has been a major contributor to the economic impact of the MSFC technology transfer program. Many manufacturers have requested additional features for the SSE5. Consequently, the following features have been added to the MMS that are not available in the SSE5: it runs under Windows; a print option for both input parameters and output statistics; an operator can be fixed at a station or assigned to a group of stations; and operator movement based on a time limit, part limit, or work-in-process (WIP) limit at the next station. The movement options for a moveable operator are: go to the station with the largest WIP; rabbit chase, where the operator moves in a circular sequence between stations; and push/pull, where the operator moves back and forth between stations. This user's manual contains the necessary information for installing the MMS on a PC, a description of the various MMS commands, and the solutions to a number of sample problems using the MMS. Also included in the beginning of this report is a brief discussion of technology transfer.

  10. Blending an Android Development Course with Software Engineering Concepts

    ERIC Educational Resources Information Center

    Chatzigeorgiou, Alexander; Theodorou, Tryfon L.; Violettas, George E.; Xinogalos, Stelios

    2016-01-01

    The tremendous popularity of mobile computing and Android in particular has attracted millions of developers who see opportunities for building their own start-ups. As a consequence Computer Science students express an increasing interest in the related technology of Java development for Android applications. Android projects are complex by…

  11. Integration of Computational Chemistry into the Undergraduate Organic Chemistry Laboratory Curriculum

    ERIC Educational Resources Information Center

    Esselman, Brian J.; Hill, Nicholas J.

    2016-01-01

    Advances in software and hardware have promoted the use of computational chemistry in all branches of chemical research to probe important chemical concepts and to support experimentation. Consequently, it has become imperative that students in the modern undergraduate curriculum become adept at performing simple calculations using computational…

  12. Designing a Network and Systems Computing Curriculum: The Stakeholders and the Issues

    ERIC Educational Resources Information Center

    Tan, Grace; Venables, Anne

    2010-01-01

    Since 2001, there has been a dramatic decline in Information Technology and Computer Science student enrolments worldwide. As a consequence, many institutions have evaluated their offerings and revamped their programs to include units designed to capture students' interests and increase subsequent enrolment. Likewise, at Victoria University the…

  13. The New Technology in Political Education in West Germany.

    ERIC Educational Resources Information Center

    George, Siegfried

    Debate in West Germany among technicians, economists, politicians, and educators about technological advancement and the use of computers focuses on the need to be informed about the consequences of the technological revolution. Some concerns are that computer use will lead to social isolation, a growing bureaucracy and authoritarian power…

  14. Are You Ready for Mobile Learning?

    ERIC Educational Resources Information Center

    Corbeil, Joseph Rene; Valdes-Corbeil, Maria Elena

    2007-01-01

    Mobile learning is defined as the intersection of mobile computing (the application of small, portable, and wireless computing and communication devices) and e-learning (learning facilitated and supported through the use of information and communications technology). Consequently, it comes as no surprise that sooner or later people would begin to…

  15. Computer Game Development as a Literacy Activity

    ERIC Educational Resources Information Center

    Owston, Ron; Wideman, Herb; Ronda, Natalia Sinitskaya; Brown, Christine

    2009-01-01

    This study examined computer game development as a pedagogical activity to motivate and engage students in curriculum-related literacy activities. We hypothesized that as a consequence, students would improve their traditional reading and writing skills as well as develop new digital literacy skills. Eighteen classes of grade 4 students were…

  16. Generating Scenarios When Data Are Missing

    NASA Technical Reports Server (NTRS)

    Mackey, Ryan

    2007-01-01

    The Hypothetical Scenario Generator (HSG) is being developed in conjunction with other components of artificial-intelligence systems for automated diagnosis and prognosis of faults in spacecraft, aircraft, and other complex engineering systems. The HSG accepts, as input, possibly incomplete data on the current state of a system (see figure). The HSG models a potential fault scenario as an ordered disjunctive tree of conjunctive consequences, wherein the ordering is based upon the likelihood that a particular conjunctive path will be taken for the given set of inputs. The computation of likelihood is based partly on a numerical ranking of the degree of completeness of data with respect to satisfaction of the antecedent conditions of prognostic rules. The results from the HSG are then used by a model-based artificial-intelligence subsystem to predict realistic scenarios and states.
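    A minimal data structure mirroring the description, an ordered disjunctive tree whose branches are conjunctions of consequences kept sorted by likelihood, is sketched below. The class names, example antecedent and scores are hypothetical, not the HSG implementation.

      # Hypothetical scenario node: disjunctive branches, each a conjunction of
      # consequences, ordered by likelihood.
      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Branch:
          consequences: List[str]          # conjunction: all of these follow together
          likelihood: float                # ranking partly from completeness of inputs

      @dataclass
      class ScenarioNode:
          antecedent: str
          branches: List[Branch] = field(default_factory=list)

          def add(self, branch: Branch) -> None:
              self.branches.append(branch)
              self.branches.sort(key=lambda b: b.likelihood, reverse=True)  # keep ordered

          def most_likely(self) -> Branch:
              return self.branches[0]

      node = ScenarioNode("pressure sensor reading missing")
      node.add(Branch(["sensor failed", "telemetry still valid"], 0.6))
      node.add(Branch(["bus fault", "multiple channels degraded"], 0.3))
      print(node.most_likely().consequences)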

  17. Brain Computation Is Organized via Power-of-Two-Based Permutation Logic.

    PubMed

    Xie, Kun; Fox, Grace E; Liu, Jun; Lyu, Cheng; Lee, Jason C; Kuang, Hui; Jacobs, Stephanie; Li, Meng; Liu, Tianming; Song, Sen; Tsien, Joe Z

    2016-01-01

    There is considerable scientific interest in understanding how cell assemblies-the long-presumed computational motif-are organized so that the brain can generate intelligent cognition and flexible behavior. The Theory of Connectivity proposes that the origin of intelligence is rooted in a power-of-two-based permutation logic (N = 2^i - 1), producing specific-to-general cell-assembly architecture capable of generating specific perceptions and memories, as well as generalized knowledge and flexible actions. We show that this power-of-two-based permutation logic is widely used in cortical and subcortical circuits across animal species and is conserved for the processing of a variety of cognitive modalities including appetitive, emotional and social information. However, modulatory neurons, such as dopaminergic (DA) neurons, use a simpler logic despite their distinct subtypes. Interestingly, this specific-to-general permutation logic remained largely intact although NMDA receptors-the synaptic switch for learning and memory-were deleted throughout adulthood, suggesting that the logic is developmentally pre-configured. Moreover, this computational logic is implemented in the cortex via combining a random-connectivity strategy in superficial layers 2/3 with nonrandom organizations in deep layers 5/6. This randomness of layers 2/3 cliques-which preferentially encode specific and low-combinatorial features and project inter-cortically-is ideal for maximizing cross-modality novel pattern-extraction, pattern-discrimination and pattern-categorization using sparse code, consequently explaining why it requires hippocampal offline-consolidation. In contrast, the nonrandomness in layers 5/6-which consists of few specific cliques but a higher portion of more general cliques projecting mostly to subcortical systems-is ideal for feedback-control of motivation, emotion, consciousness and behaviors. These observations suggest that the brain's basic computational algorithm is indeed organized by the power-of-two-based permutation logic. This simple mathematical logic can account for brain computation across the entire evolutionary spectrum, ranging from the simplest neural networks to the most complex.
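    The counting behind N = 2^i - 1 is simply the number of nonempty combinations of i distinct inputs: every subset of inputs defines a potential cell assembly, and excluding the empty set leaves 2^i - 1 of them. The short enumeration below verifies this for a hypothetical set of four cues.

      # Enumerate the nonempty combinations of i distinct inputs (N = 2**i - 1).
      from itertools import combinations

      def assemblies(inputs):
          """Every nonempty combination of the given inputs."""
          out = []
          for k in range(1, len(inputs) + 1):
              out.extend(combinations(inputs, k))
          return out

      inputs = ["A", "B", "C", "D"]                   # i = 4 distinct cues
      groups = assemblies(inputs)
      print(len(groups), "==", 2 ** len(inputs) - 1)  # 15 == 15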

  18. Brain Computation Is Organized via Power-of-Two-Based Permutation Logic

    PubMed Central

    Xie, Kun; Fox, Grace E.; Liu, Jun; Lyu, Cheng; Lee, Jason C.; Kuang, Hui; Jacobs, Stephanie; Li, Meng; Liu, Tianming; Song, Sen; Tsien, Joe Z.

    2016-01-01

    There is considerable scientific interest in understanding how cell assemblies—the long-presumed computational motif—are organized so that the brain can generate intelligent cognition and flexible behavior. The Theory of Connectivity proposes that the origin of intelligence is rooted in a power-of-two-based permutation logic (N = 2i–1), producing specific-to-general cell-assembly architecture capable of generating specific perceptions and memories, as well as generalized knowledge and flexible actions. We show that this power-of-two-based permutation logic is widely used in cortical and subcortical circuits across animal species and is conserved for the processing of a variety of cognitive modalities including appetitive, emotional and social information. However, modulatory neurons, such as dopaminergic (DA) neurons, use a simpler logic despite their distinct subtypes. Interestingly, this specific-to-general permutation logic remained largely intact although NMDA receptors—the synaptic switch for learning and memory—were deleted throughout adulthood, suggesting that the logic is developmentally pre-configured. Moreover, this computational logic is implemented in the cortex via combining a random-connectivity strategy in superficial layers 2/3 with nonrandom organizations in deep layers 5/6. This randomness of layers 2/3 cliques—which preferentially encode specific and low-combinatorial features and project inter-cortically—is ideal for maximizing cross-modality novel pattern-extraction, pattern-discrimination and pattern-categorization using sparse code, consequently explaining why it requires hippocampal offline-consolidation. In contrast, the nonrandomness in layers 5/6—which consists of few specific cliques but a higher portion of more general cliques projecting mostly to subcortical systems—is ideal for feedback-control of motivation, emotion, consciousness and behaviors. These observations suggest that the brain’s basic computational algorithm is indeed organized by the power-of-two-based permutation logic. This simple mathematical logic can account for brain computation across the entire evolutionary spectrum, ranging from the simplest neural networks to the most complex. PMID:27895562

  19. Dynamic Target Match Signals in Perirhinal Cortex Can Be Explained by Instantaneous Computations That Act on Dynamic Input from Inferotemporal Cortex

    PubMed Central

    Pagan, Marino

    2014-01-01

    Finding sought objects requires the brain to combine visual and target signals to determine when a target is in view. To investigate how the brain implements these computations, we recorded neural responses in inferotemporal cortex (IT) and perirhinal cortex (PRH) as macaque monkeys performed a delayed-match-to-sample target search task. Our data suggest that visual and target signals were combined within or before IT in the ventral visual pathway and then passed onto PRH, where they were reformatted into a more explicit target match signal over ∼10–15 ms. Accounting for these dynamics in PRH did not require proposing dynamic computations within PRH itself but, rather, could be attributed to instantaneous PRH computations performed upon an input representation from IT that changed with time. We found that the dynamics of the IT representation arose from two commonly observed features: individual IT neurons whose response preferences were not simply rescaled with time and variable response latencies across the population. Our results demonstrate that these types of time-varying responses have important consequences for downstream computation and suggest that dynamic representations can arise within a feedforward framework as a consequence of instantaneous computations performed upon time-varying inputs. PMID:25122904

  20. Evaluation of Enthalpy Diagrams for NH3-H2O Absorption Refrigerator

    NASA Astrophysics Data System (ADS)

    Takei, Toshitaka; Saito, Kiyoshi; Kawai, Sunao

    The protection of the environment is becoming a grave concern, and the absorption refrigerator, which does not use freon as a refrigerant, is attracting close attention. Among absorption refrigerators, ammonia-water machines are widely used in fields such as refrigeration and ice accumulation, since this type can produce sub-zero temperature products. It is essential to investigate the characteristics of the ammonia-water absorption refrigerator in detail by means of computer simulation in order to realize low-cost, highly efficient operation. Unfortunately, there have been a number of obstacles to conducting such simulations. Firstly, Merkel's enthalpy diagram does not provide relational equations. Secondly, although relational equations have been proposed by Ziegler, simpler equations that can be applied to computer simulation are yet to be proposed. In this research, simpler equations based on Ziegler's equations have been derived to make computer simulation of the performance of the ammonia-water absorption refrigerator possible. Results of computer simulations using the simplified equations and Merkel's enthalpy diagram, respectively, have been compared with experimental data from a single-stage ammonia-water absorption refrigerator. Consequently, it is shown that the results from the Ziegler-based equations agree with the experimental data better than those from Merkel's enthalpy diagram.

  1. Fast parallel tandem mass spectral library searching using GPU hardware acceleration.

    PubMed

    Baumgardner, Lydia Ashleigh; Shanmugam, Avinash Kumar; Lam, Henry; Eng, Jimmy K; Martin, Daniel B

    2011-06-03

    Mass spectrometry-based proteomics is a maturing discipline of biologic research that is experiencing substantial growth. Instrumentation has steadily improved over time with the advent of faster and more sensitive instruments collecting ever larger data files. Consequently, the computational process of matching a peptide fragmentation pattern to its sequence, traditionally accomplished by sequence database searching and more recently also by spectral library searching, has become a bottleneck in many mass spectrometry experiments. In both of these methods, the main rate-limiting step is the comparison of an acquired spectrum with all potential matches from a spectral library or sequence database. This is a highly parallelizable process because the core computational element can be represented as a simple but arithmetically intense multiplication of two vectors. In this paper, we present a proof of concept project taking advantage of the massively parallel computing available on graphics processing units (GPUs) to distribute and accelerate the process of spectral assignment using spectral library searching. This program, which we have named FastPaSS (for Fast Parallelized Spectral Searching), is implemented in CUDA (Compute Unified Device Architecture) from NVIDIA, which allows direct access to the processors in an NVIDIA GPU. Our efforts demonstrate the feasibility of GPU computing for spectral assignment, through implementation of the validated spectral searching algorithm SpectraST in the CUDA environment.
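    The core operation described above, scoring one query spectrum against every library spectrum with a dot product, can be shown with a small numpy stand-in: a single matrix-vector product replaces the per-entry loop, which is exactly the arithmetic a GPU distributes across threads. The data are random placeholders, and this is not the FastPaSS or SpectraST code.

      # Vectorized spectral-library scoring: one dot product per library entry.
      import numpy as np

      rng = np.random.default_rng(0)
      n_library, n_bins = 10_000, 1024                 # binned spectra
      library = rng.random((n_library, n_bins)).astype(np.float32)
      query = rng.random(n_bins).astype(np.float32)

      # normalise so the score behaves like a cosine similarity
      library /= np.linalg.norm(library, axis=1, keepdims=True)
      query /= np.linalg.norm(query)

      scores = library @ query                         # batched dot products
      best = np.argsort(scores)[::-1][:5]
      print("top matches:", best, scores[best])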

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, J.E.; Roussin, R.W.; Gilpin, H.

    A version of the CRAC2 computer code applicable for use in analyses of consequences and risks of reactor accidents in case work for environmental statements has been implemented for use on the Nuclear Regulatory Commission Data General MV/8000 computer system. Input preparation is facilitated through the use of an interactive computer program which operates on an IBM personal computer. The resulting CRAC2 input deck is transmitted to the MV/8000 by using an error-free file transfer mechanism. To facilitate the use of CRAC2 at NRC, relevant background material on input requirements and model descriptions has been extracted from four reports: "Calculations of Reactor Accident Consequences," Version 2, NUREG/CR-2326 (SAND81-1994); "CRAC2 Model Descriptions," NUREG/CR-2552 (SAND82-0342); "CRAC Calculations for Accident Sections of Environmental Statements," NUREG/CR-2901 (SAND82-1693); and "Sensitivity and Uncertainty Studies of the CRAC2 Computer Code," NUREG/CR-4038 (ORNL-6114). When this background information is combined with instructions on the input processor, this report provides a self-contained guide for preparing CRAC2 input data with a specific orientation toward applications on the MV/8000. 8 refs., 11 figs., 10 tabs.

  3. Targeted intervention: Computational approaches to elucidate and predict relapse in alcoholism.

    PubMed

    Heinz, Andreas; Deserno, Lorenz; Zimmermann, Ulrich S; Smolka, Michael N; Beck, Anne; Schlagenhauf, Florian

    2017-05-01

    Alcohol use disorder (AUD) and addiction in general is characterized by failures of choice resulting in repeated drug intake despite severe negative consequences. Behavioral change is hard to accomplish and relapse after detoxification is common and can be promoted by consumption of small amounts of alcohol as well as exposure to alcohol-associated cues or stress. While those environmental factors contributing to relapse have long been identified, the underlying psychological and neurobiological mechanisms on which those factors act are to date incompletely understood. Based on the reinforcing effects of drugs of abuse, animal experiments showed that drug, cue and stress exposure affect Pavlovian and instrumental learning processes, which can increase salience of drug cues and promote habitual drug intake. In humans, computational approaches can help to quantify changes in key learning mechanisms during the development and maintenance of alcohol dependence, e.g. by using sequential decision making in combination with computational modeling to elucidate individual differences in model-free versus more complex, model-based learning strategies and their neurobiological correlates such as prediction error signaling in fronto-striatal circuits. Computational models can also help to explain how alcohol-associated cues trigger relapse: mechanisms such as Pavlovian-to-Instrumental Transfer can quantify to which degree Pavlovian conditioned stimuli can facilitate approach behavior including alcohol seeking and intake. By using generative models of behavioral and neural data, computational approaches can help to quantify individual differences in psychophysiological mechanisms that underlie the development and maintenance of AUD and thus promote targeted intervention. Copyright © 2016 Elsevier Inc. All rights reserved.
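    The model-free learning mechanism and its prediction-error signal referred to above can be illustrated with the textbook temporal-difference (Q-learning) update; this is a generic sketch with made-up states and rewards, not the specific model fitted in the paper.

      # One model-free Q-learning step: move Q(s,a) toward r + gamma * max_a' Q(s',a').
      def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.95):
          delta = reward + gamma * max(q[next_state].values()) - q[state][action]
          q[state][action] += alpha * delta
          return delta                                   # the prediction error itself

      q = {"cue": {"approach": 0.0, "avoid": 0.0},
           "outcome": {"approach": 0.0, "avoid": 0.0}}
      pe = q_update(q, "cue", "approach", reward=1.0, next_state="outcome")
      print("prediction error:", pe, "updated value:", q["cue"]["approach"])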

  4. Approach to solution of coupled heat transfer problem on the surface of hypersonic vehicle of arbitrary shape

    NASA Astrophysics Data System (ADS)

    Bocharov, A. N.; Bityurin, V. A.; Golovin, N. N.; Evstigneev, N. M.; Petrovskiy, V. P.; Ryabkov, O. I.; Teplyakov, I. O.; Shustov, A. A.; Solomonov, Yu S.; Fortov, V. E.

    2016-11-01

    In this paper, an approach to solving conjugate heat- and mass-transfer problems on hypersonic vehicle surfaces of arbitrary shape is considered. The approach under development should satisfy the following demands: (i) the surface of the body of interest may have an arbitrary geometrical shape; (ii) the shape of the body can change during the calculation; (iii) the flight characteristics may vary over a wide range, specifically flight altitude, free-stream Mach number, angle of attack, etc.; and (iv) the approach should be realized using high-performance computing (HPC) technologies. The approach is based on the coupled solution of the 3D unsteady hypersonic flow equations and the 3D unsteady heat conduction problem for the thick wall. An iterative process is applied to account for ablation of the wall material and, consequently, mass injection from the surface and changes in the surface shape. During the iterations, unstructured computational grids both in the flow region and within the wall interior are adapted to the current geometry and flow conditions. The flow computations are done on an HPC platform and are the most time-consuming part of the whole problem, while the heat conduction problem can be solved on many kinds of computers.
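    A schematic version of such a coupling loop is given below: the (expensive) flow solver supplies a wall heat flux, the wall conduction solver returns an updated surface temperature, and ablation feeds back into the surface shape. All three "solvers" here are toy placeholders with invented constants; the real codes are 3D HPC solvers.

      # Conjugate coupling loop sketch with placeholder single-value models.
      def flow_solver(surface_temp, shape):
          # stand-in for the 3D hypersonic flow solve; returns wall heat flux, W/m^2
          return 5.0e5 * (1.0 - surface_temp / 3000.0)

      def wall_conduction(heat_flux, surface_temp, dt=1.0, c_eff=2.0e4):
          # stand-in for the 3D wall conduction solve (lumped wall model)
          return surface_temp + dt * heat_flux / c_eff

      def ablation_rate(surface_temp, t_abl=2500.0):
          # surface recession rate, m/s, once an assumed ablation temperature is exceeded
          return max(0.0, 1.0e-6 * (surface_temp - t_abl))

      surface_temp, shape = 300.0, {"recession_m": 0.0}
      for it in range(20):                              # coupled iterations
          q_wall = flow_solver(surface_temp, shape)
          surface_temp = wall_conduction(q_wall, surface_temp)
          shape["recession_m"] += ablation_rate(surface_temp)
      print(round(surface_temp, 1), shape)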

  5. Spectral Prior Image Constrained Compressed Sensing (Spectral PICCS) for Photon-Counting Computed Tomography

    PubMed Central

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-01-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43-73%) without sacrificing CT number accuracy or spatial resolution. PMID:27551878

  6. RAID v2.0: an updated resource of RNA-associated interactions across organisms

    PubMed Central

    Yi, Ying; Zhao, Yue; Li, Chunhua; Zhang, Lin; Huang, Huiying; Li, Yana; Liu, Lanlan; Hou, Ping; Cui, Tianyu; Tan, Puwen; Hu, Yongfei; Zhang, Ting; Huang, Yan; Li, Xiaobo; Yu, Jia; Wang, Dong

    2017-01-01

    With the development of biotechnologies and computational prediction algorithms, the number of experimental and computationally predicted RNA-associated interactions has grown rapidly in recent years. However, diverse RNA-associated interactions are scattered over a wide variety of resources and organisms, whereas a fully comprehensive view of diverse RNA-associated interactions is still not available for any species. Hence, we have updated the RAID database to version 2.0 (RAID v2.0, www.rna-society.org/raid/) by integrating experimental and computational prediction interactions from manually reading literature and other database resources under one common framework. The new developments in RAID v2.0 include (i) an over 850-fold increase in RNA-associated interactions compared to the previous version; (ii) numerous resources integrated with experimental or computational prediction evidence for each RNA-associated interaction; (iii) a reliability assessment for each RNA-associated interaction based on an integrative confidence score; and (iv) an increase of species coverage to 60. Consequently, RAID v2.0 recruits more than 5.27 million RNA-associated interactions, including more than 4 million RNA–RNA interactions and more than 1.2 million RNA–protein interactions, referring to nearly 130 000 RNA/protein symbols across 60 species. PMID:27899615

  7. Modulation of Posterior Alpha Activity by Spatial Attention Allows for Controlling A Continuous Brain-Computer Interface.

    PubMed

    Horschig, Jörn M; Oosterheert, Wouter; Oostenveld, Robert; Jensen, Ole

    2015-11-01

    Here we report that the modulation of alpha activity by covert attention can be used as a control signal in an online brain-computer interface, that it is reliable, and that it is robust. Subjects were instructed to orient covert visual attention to the left or right hemifield. We decoded the direction of attention from the magnetoencephalogram by a template matching classifier and provided the classification outcome to the subject in real-time using a novel graphical user interface. Training data for the templates were obtained from a Posner-cueing task conducted just before the BCI task. Eleven subjects participated in four sessions each. Eight of the subjects achieved classification rates significantly above chance level. Subjects were able to significantly increase their performance from the first to the second session. Individual patterns of posterior alpha power remained stable throughout the four sessions and did not change with increased performance. We conclude that posterior alpha power can successfully be used as a control signal in brain-computer interfaces. We also discuss several ideas for further improving the setup and propose future research based on solid hypotheses about behavioral consequences of modulating neuronal oscillations by brain computer interfacing.

  8. Spectral prior image constrained compressed sensing (spectral PICCS) for photon-counting computed tomography

    NASA Astrophysics Data System (ADS)

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-09-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43-73%) without sacrificing CT number accuracy or spatial resolution.

  9. A coarse-grid projection method for accelerating incompressible flow computations

    NASA Astrophysics Data System (ADS)

    San, Omer; Staples, Anne

    2011-11-01

    We present a coarse-grid projection (CGP) algorithm for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. Here, we investigate a particular CGP method for the vorticity-stream function formulation that uses the full weighting operation for mapping from fine to coarse grids, the third-order Runge-Kutta method for time stepping, and finite differences for the spatial discretization. After solving the Poisson equation on a coarsened grid, bilinear interpolation is used to obtain the fine data for consequent time stepping on the full grid. We compute several benchmark flows: the Taylor-Green vortex, a vortex pair merging, a double shear layer, decaying turbulence and the Taylor-Green vortex on a distorted grid. In all cases we use either FFT-based or V-cycle multigrid linear-cost Poisson solvers. Reducing the number of degrees of freedom of the Poisson solver by powers of two accelerates these computations while, for the first level of coarsening, retaining the same level of accuracy in the fine resolution vorticity field.
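    The two grid-transfer operators at the heart of the method, full-weighting restriction and bilinear prolongation on a uniform 2D grid, are sketched below; the Poisson solve itself is treated as a black box, as in the paper. The grid sizes are illustrative, and this is not the authors' code.

      # CGP grid-transfer sketch: full weighting (fine -> coarse), bilinear
      # interpolation (coarse -> fine).
      import numpy as np

      def full_weighting(fine):
          """Restrict interior nodes with the 1/16 [1 2 1; 2 4 2; 1 2 1] stencil."""
          c = fine[::2, ::2].copy()
          c[1:-1, 1:-1] = (4 * fine[2:-2:2, 2:-2:2]
                           + 2 * (fine[1:-3:2, 2:-2:2] + fine[3:-1:2, 2:-2:2]
                                  + fine[2:-2:2, 1:-3:2] + fine[2:-2:2, 3:-1:2])
                           + fine[1:-3:2, 1:-3:2] + fine[1:-3:2, 3:-1:2]
                           + fine[3:-1:2, 1:-3:2] + fine[3:-1:2, 3:-1:2]) / 16.0
          return c

      def bilinear_prolongation(coarse):
          """Interpolate coarse values back onto the fine grid."""
          ny, nx = coarse.shape
          fine = np.zeros((2 * ny - 1, 2 * nx - 1))
          fine[::2, ::2] = coarse
          fine[1::2, ::2] = 0.5 * (coarse[:-1, :] + coarse[1:, :])
          fine[::2, 1::2] = 0.5 * (coarse[:, :-1] + coarse[:, 1:])
          fine[1::2, 1::2] = 0.25 * (coarse[:-1, :-1] + coarse[1:, :-1]
                                     + coarse[:-1, 1:] + coarse[1:, 1:])
          return fine

      rhs_fine = np.random.default_rng(1).random((9, 9))   # (2^k + 1) nodes per side
      rhs_coarse = full_weighting(rhs_fine)                # solve Poisson here (black box)
      back = bilinear_prolongation(rhs_coarse)
      print(rhs_coarse.shape, back.shape)                  # (5, 5) (9, 9)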

  10. Remote observing with NASA's Deep Space Network

    NASA Astrophysics Data System (ADS)

    Kuiper, T. B. H.; Majid, W. A.; Martinez, S.; Garcia-Miro, C.; Rizzo, J. R.

    2012-09-01

    The Deep Space Network (DSN) communicates with spacecraft as far away as the boundary between the Solar System and the interstellar medium. To make this possible, large sensitive antennas at Canberra, Australia, Goldstone, California, and Madrid, Spain, provide for constant communication with interplanetary missions. We describe the procedures for radioastronomical observations using this network. Remote access to science monitor and control computers by authorized observers is provided by two-factor authentication through a gateway at the Jet Propulsion Laboratory (JPL) in Pasadena. To make such observations practical, we have devised schemes based on SSH tunnels and distributed computing. At the very minimum, one can use SSH tunnels and VNC (Virtual Network Computing, a remote desktop software suite) to control the science hosts within the DSN Flight Operations network. In this way we have controlled up to three telescopes simultaneously. However, X-window updates can be slow and there are issues involving incompatible screen sizes and multi-screen displays. Consequently, we are now developing SSH tunnel-based schemes in which instrument control and monitoring, and intense data processing, are done on-site by the remote DSN hosts while data manipulation and graphical display are done at the observer's host. We describe our approaches to various challenges, our experience with what worked well and lessons learned, and directions for future development.

  11. Right Size Determining the Staff Necessary to Sustain Simulation and Computing Capabilities for Nuclear Security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nikkel, Daniel J.; Meisner, Robert

    The Advanced Simulation and Computing Campaign, herein referred to as the ASC Program, is a core element of the science-based Stockpile Stewardship Program (SSP), which enables assessment, certification, and maintenance of the safety, security, and reliability of the U.S. nuclear stockpile without the need to resume nuclear testing. The use of advanced parallel computing has transitioned from proof-of-principle to become a critical element for assessing and certifying the stockpile. As the initiative phase of the ASC Program came to an end in the mid-2000s, the National Nuclear Security Administration redirected resources to other urgent priorities, and resulting staff reductions in ASC occurred without the benefit of analysis of the impact on modern stockpile stewardship that is dependent on these new simulation capabilities. Consequently, in mid-2008 the ASC Program management commissioned a study to estimate the essential size and balance needed to sustain advanced simulation as a core component of stockpile stewardship. The ASC Program requires a minimum base staff size of 930 (which includes the number of staff necessary to maintain critical technical disciplines as well as to execute required programmatic tasks) to sustain its essential ongoing role in stockpile stewardship.

  12. Learning with Interactive Computer Graphics in the Undergraduate Neuroscience Classroom

    PubMed Central

    Pani, John R.; Chariker, Julia H.; Naaz, Farah; Mattingly, William; Roberts, Joshua; Sephton, Sandra E.

    2014-01-01

    Instruction of neuroanatomy depends on graphical representation and extended self-study. As a consequence, computer-based learning environments that incorporate interactive graphics should facilitate instruction in this area. The present study evaluated such a system in the undergraduate neuroscience classroom. The system used the method of adaptive exploration, in which exploration in a high fidelity graphical environment is integrated with immediate testing and feedback in repeated cycles of learning. The results of this study were that students considered the graphical learning environment to be superior to typical classroom materials used for learning neuroanatomy. Students managed the frequency and duration of study, test, and feedback in an efficient and adaptive manner. For example, the number of tests taken before reaching a minimum test performance of 90% correct closely approximated the values seen in more regimented experimental studies. There was a wide range of student opinion regarding the choice between a simpler and a more graphically compelling program for learning sectional anatomy. Course outcomes were predicted by individual differences in the use of the software that reflected general work habits of the students, such as the amount of time committed to testing. The results of this introduction into the classroom are highly encouraging for development of computer-based instruction in biomedical disciplines. PMID:24449123

  13. An Open Architecture to Support Social and Health Services in a Smart TV Environment.

    PubMed

    Costa, Carlos Rivas; Anido-Rifon, Luis E; Fernandez-Iglesias, Manuel J

    2017-03-01

    To design, implement, and test a solution that provides social and health services for the elderly at home, based on smart TV technologies, with access to all services through the TV. The architecture proposed is based on an open software platform and standard personal computing hardware. This provides great flexibility to develop new applications over the underlying infrastructure or to integrate new devices, for instance to monitor a broad range of vital signs in those cases where home monitoring is required. An actual system was designed, implemented, and deployed as a proof of concept. Applications range from social network clients to vital signs monitoring; from interactive TV contests to conventional online care applications such as medication reminders or telemedicine. In both cases, the results have been very positive, confirming the initial perception of the TV as a convenient, easy-to-use technology to provide social and health care. The TV set is a much more familiar computing interface for most senior users, and as a consequence, smart TVs become a most convenient solution for the design and implementation of applications and services targeted to this user group. This proposal has been tested in a real setting with 62 senior users in their homes. Users included both individuals with experience using computers and others reluctant to use them.

  14. Brief Motivational Interviewing Intervention for Peer Violence and Alcohol Use in Teens: One-Year Follow-up

    PubMed Central

    Chermack, Stephen T.; Zimmerman, Marc A.; Shope, Jean T.; Bingham, C. Raymond; Blow, Frederic C.; Walton, Maureen A.

    2012-01-01

    BACKGROUND AND OBJECTIVES: Emergency department (ED) visits present an opportunity to deliver brief interventions (BIs) to reduce violence and alcohol misuse among urban adolescents at risk for future injury. Previous analyses demonstrated that a BI resulted in reductions in violence and alcohol consequences up to 6 months. This article describes findings examining the efficacy of BIs on peer violence and alcohol misuse at 12 months. METHODS: Patients (14–18 years of age) at an ED reporting past year alcohol use and aggression were enrolled in the randomized control trial, which included computerized assessment, random assignment to control group or BI delivered by a computer or therapist assisted by a computer. The main outcome measures (at baseline and 12 months) included violence (peer aggression, peer victimization, violence-related consequences) and alcohol (alcohol misuse, binge drinking, alcohol-related consequences). RESULTS: A total of 3338 adolescents were screened (88% participation). Of those, 726 screened positive for violence and alcohol use and were randomly selected; 84% completed 12-month follow-up. In comparison with the control group, the therapist assisted by a computer group showed significant reductions in peer aggression (P < .01) and peer victimization (P < .05) at 12 months. BI and control groups did not differ on alcohol-related variables at 12 months. CONCLUSIONS: Evaluation of the SafERteens intervention 1 year after an ED visit provides support for the efficacy of computer-assisted therapist brief intervention for reducing peer violence. PMID:22614776

  15. Experimental Evaluation of Suitability of Selected Multi-Criteria Decision-Making Methods for Large-Scale Agent-Based Simulations

    PubMed Central

    2016-01-01

    Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting and separated testing of all configurations with the –server parameter de/activated, altogether, 12800 data points were collected and consequently analyzed. An illustrational decision-making scenario was used which allows the mutual comparison of all of the selected decision-making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method accomplished the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models. PMID:27806061
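    For concreteness, a compact implementation of one of the four compared methods (TOPSIS) is sketched below; the decision matrix, weights and criterion directions are made-up illustrations, and this is not the benchmark code used in the study.

      # Compact numpy TOPSIS: rank alternatives (rows) over criteria (columns).
      import numpy as np

      def topsis(matrix, weights, benefit):
          """benefit[j] is True for 'larger is better' criteria."""
          m = matrix / np.linalg.norm(matrix, axis=0)          # vector normalisation
          v = m * weights                                      # weighted normalised matrix
          ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
          anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
          d_plus = np.linalg.norm(v - ideal, axis=1)
          d_minus = np.linalg.norm(v - anti, axis=1)
          closeness = d_minus / (d_plus + d_minus)
          return np.argsort(closeness)[::-1], closeness        # best alternative first

      matrix = np.array([[250., 16., 12.], [200., 20., 8.], [300., 11., 14.]])
      weights = np.array([0.4, 0.4, 0.2])
      benefit = np.array([False, True, True])                  # cost, benefit, benefit
      order, scores = topsis(matrix, weights, benefit)
      print(order, np.round(scores, 3))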

  16. Fully Associative, Nonisothermal, Potential-Based Unified Viscoplastic Model for Titanium-Based Matrices

    NASA Technical Reports Server (NTRS)

    2005-01-01

    A number of titanium matrix composite (TMC) systems are currently being investigated for high-temperature air frame and propulsion system applications. As a result, numerous computational methodologies for predicting both deformation and life for this class of materials are under development. An integral part of these methodologies is an accurate and computationally efficient constitutive model for the metallic matrix constituent. Furthermore, because these systems are designed to operate at elevated temperatures, the required constitutive models must account for both time-dependent and time-independent deformations. To accomplish this, the NASA Lewis Research Center is employing a recently developed, complete, potential-based framework. This framework, which utilizes internal state variables, was put forth for the derivation of reversible and irreversible constitutive equations. The framework, and consequently the resulting constitutive model, is termed complete because the existence of the total (integrated) form of the Gibbs complementary free energy and complementary dissipation potentials are assumed a priori. The specific forms selected here for both the Gibbs and complementary dissipation potentials result in a fully associative, multiaxial, nonisothermal, unified viscoplastic model with nonlinear kinematic hardening. This model constitutes one of many models in the Generalized Viscoplasticity with Potential Structure (GVIPS) class of inelastic constitutive equations.

  17. A Belief-based Trust Model for Dynamic Service Selection

    NASA Astrophysics Data System (ADS)

    Ali, Ali Shaikh; Rana, Omer F.

    Provision of services across institutional boundaries has become an active research area. Many such services encode access to computational and data resources (comprising single machines to computational clusters). Such services can also be informational, and integrate different resources within an institution. Consequently, we envision a service-rich environment in the future, where service consumers can intelligently decide between which services to select. If interaction between service providers/users is automated, it is necessary for these service clients to be able to automatically choose between a set of equivalent (or similar) services. In such a scenario trust serves as a benchmark to differentiate between service providers. One might therefore prioritize potential cooperative partners based on the established trust. Although many approaches exist in the literature about trust between online communities, the exact nature of trust for multi-institutional service sharing remains undefined. Therefore, the concept of trust suffers from an imperfect understanding, a plethora of definitions, and informal use in the literature. We present a formalism for describing trust within multi-institutional service sharing, and provide an implementation of it, enabling an agent to make trust-based decisions. We evaluate our formalism through simulation.
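    As a generic illustration only (the paper defines its own formalism), the sketch below uses a beta-reputation-style trust value, updated from successful and failed interactions, to rank otherwise equivalent services; the service names and counts are hypothetical.

      # Beta-reputation style trust ranking of equivalent services.
      from dataclasses import dataclass

      @dataclass
      class ServiceTrust:
          name: str
          successes: int = 0
          failures: int = 0

          def record(self, ok: bool) -> None:
              if ok:
                  self.successes += 1
              else:
                  self.failures += 1

          @property
          def trust(self) -> float:
              # expected value of a Beta(successes + 1, failures + 1) belief
              return (self.successes + 1) / (self.successes + self.failures + 2)

      services = [ServiceTrust("solverA", 8, 2), ServiceTrust("solverB", 3, 0)]
      best = max(services, key=lambda s: s.trust)
      print(best.name, round(best.trust, 3))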

  18. On the generalized VIP time integral methodology for transient thermal problems

    NASA Technical Reports Server (NTRS)

    Mei, Youping; Chen, Xiaoqin; Tamma, Kumar K.; Sha, Desong

    1993-01-01

    The paper describes the development and applicability of a generalized VIrtual-Pulse (VIP) time integral method of computation for thermal problems. Unlike past approaches for general heat transfer computations, and with the advent of high speed computing technology and the importance of parallel computations for efficient use of computing environments, a major motivation via the developments described in this paper is the need for developing explicit computational procedures with improved accuracy and stability characteristics. As a consequence, a new and effective VIP methodology is described which inherits these improved characteristics. Numerical illustrative examples are provided to demonstrate the developments and validate the results obtained for thermal problems.

  19. English Loanwords in Spanish Computer Language

    ERIC Educational Resources Information Center

    Cabanillas, Isabel de la Cruz; Martinez, Cristina Tejedor; Prados, Mercedes Diez; Redondo, Esperanza Cerda

    2007-01-01

    Contact with the English language, especially from the 20th century onwards, has had as a consequence an increase in the number of words that are borrowed from English into Spanish. This process is particularly noticeable in Spanish for Specific Purposes, and, more specifically, in the case of Spanish computer language. Although sociocultural and…

  20. Logo Burn-In. Microcomputing Working Paper Series.

    ERIC Educational Resources Information Center

    Drexel Univ., Philadelphia, PA. Microcomputing Program.

    This paper describes a hot-stamping operation undertaken at Drexel University in an attempt to prevent computer theft on campus. The program was initiated in response to the University's anticipated receipt of up to 3,000 Macintosh microcomputers per year and the consequent publicity the university was receiving. All clusters of computers (e.g.,…

  1. Fluorescence-based enhanced reality (FLER) for real-time estimation of bowel perfusion in minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Diana, Michele

    2016-03-01

    Pre-anastomotic bowel perfusion is a key factor for a successful healing process. Clinical judgment has limited accuracy to evaluate intestinal microperfusion. Fluorescence videography is a promising tool for image-guided intraoperative assessment of the bowel perfusion at the future anastomotic site in the setting of minimally invasive procedures. The standard configuration for fluorescence videography includes a Near-Infrared endoscope able to detect the signal emitted by a fluorescent dye, most frequently Indocyanine Green (ICG), which is administered by intravenous injection. Fluorescence intensity is proportional to the amount of fluorescent dye diffusing in the tissue and consequently is a surrogate marker of tissue perfusion. However, fluorescence intensity alone remains a subjective approach and an integrated computer-based analysis of the over-time evolution of the fluorescence signal is required to obtain quantitative data. We have developed a solution integrating computer-based analysis for intra-operative evaluation of the optimal resection site, based on the bowel perfusion as determined by the dynamic fluorescence intensity. The software can generate a "virtual perfusion cartography", based on the "fluorescence time-to-peak". The virtual perfusion cartography can be overlapped onto real-time laparoscopic images to obtain the Enhanced Reality effect. We have named this approach FLuorescence-based Enhanced Reality (FLER). This manuscript describes the stepwise development of the FLER concept.
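    A minimal numpy version of a "time-to-peak" map is sketched below: for every pixel of a fluorescence video stack, find when the signal peaks, with shorter times read as better perfused tissue. The synthetic data, frame interval and threshold are illustrative assumptions, not the FLER implementation.

      # Per-pixel time-to-peak map from a fluorescence image stack.
      import numpy as np

      def time_to_peak_map(stack, frame_interval_s):
          """stack has shape (frames, rows, cols); returns seconds to peak per pixel."""
          return stack.argmax(axis=0) * frame_interval_s

      rng = np.random.default_rng(3)
      frames, rows, cols = 60, 32, 32
      t = np.arange(frames)[:, None, None]
      peak_frame = rng.integers(10, 50, size=(rows, cols))      # hypothetical perfusion lag
      stack = np.exp(-0.5 * ((t - peak_frame) / 4.0) ** 2)       # Gaussian uptake curves

      ttp = time_to_peak_map(stack, frame_interval_s=0.5)
      well_perfused = ttp < 10.0                                 # threshold for the overlay
      print(ttp.mean(), well_perfused.mean())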

  2. Hypercompetitive Environments: An Agent-based model approach

    NASA Astrophysics Data System (ADS)

    Dias, Manuel; Araújo, Tanya

    Information technology (IT) environments are characterized by complex changes and rapid evolution. Globalization and the spread of technological innovation have increased the need for new strategic information resources, both from individual firms and management environments. Improvements in multidisciplinary methods and, particularly, the availability of powerful computational tools, are giving researchers an increasing opportunity to investigate management environments in their true complex nature. The adoption of a complex systems approach allows for modeling business strategies from a bottom-up perspective — understood as resulting from repeated and local interaction of economic agents — without disregarding the consequences of the business strategies themselves for the individual behavior of enterprises, the emergence of interaction patterns between firms, and management environments. Agent-based models are the leading approach in this effort.

  3. Extensive computation of albedo contrast between martian dust devil tracks and their neighboring regions

    NASA Astrophysics Data System (ADS)

    Statella, Thiago; Pina, Pedro; da Silva, Erivaldo Antônio

    2015-04-01

    We have developed a method to compute the albedo contrast between dust devil tracks and their surrounding regions on Mars. It is mainly based on Mathematical Morphology operators and uses all the points of the edges of the tracks to compute the values of the albedo contrast. It permits the extraction of more accurate and complete information, when compared to traditional point sampling, not only providing better statistics but also permitting the analysis of local variations along the entirety of the tracks. This measure of contrast, based on relative quantities, is much better suited to establishing comparisons at regional scales and on a multi-temporal basis using imagery acquired under rather different environmental and operational conditions. Also, the substantial increase in the details extracted may permit quantifying differential depositions of dust by computing the local temporal fading of the tracks, with consequences for a better estimation of the thickness of the topmost layer of dust and the minimum value needed to create dust devil tracks. The developed tool is tested on 110 HiRISE images depicting regions in the Aeolis, Argyre, Eridania, Noachis and Hellas quadrangles. As a complementary evaluation, we also performed a temporal analysis of the albedo in a region of Russell crater, where high seasonal dust devil activity was already observed before, comprising the years 2007-2012. The mean albedo of the Russell crater is in this case indicative of dust devil track presence and, therefore, can be used to quantify dust devil activity.
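    The sketch below shows an edge-wise contrast measure in its simplest form: dilate a binary track mask by a few pixels to define a surrounding band, then compare the mean albedo inside the track with the band just outside it. The image is synthetic and the paper's morphological operators are more elaborate; this is only an illustration of the idea.

      # Albedo contrast between a track and its dilated neighbourhood.
      import numpy as np
      from scipy.ndimage import binary_dilation

      rng = np.random.default_rng(4)
      albedo = 0.25 + 0.02 * rng.standard_normal((128, 128))     # bright dusty surface
      track = np.zeros((128, 128), dtype=bool)
      track[60:68, 20:110] = True                                # dark linear track
      albedo[track] -= 0.06

      band = binary_dilation(track, iterations=5) & ~track       # neighbouring region
      inside, outside = albedo[track].mean(), albedo[band].mean()
      contrast = (outside - inside) / outside
      print(round(inside, 3), round(outside, 3), round(contrast, 3))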

  4. A high-rate PCI-based telemetry processor system

    NASA Astrophysics Data System (ADS)

    Turri, R.

    2002-07-01

    The high performance reached by satellite on-board telemetry generation and transmission will, as a consequence, require the design of ground facilities with higher processing capabilities at low cost, to allow wide adoption of these ground stations. The equipment normally used is based on complex, proprietary bus and computing architectures that prevent the systems from exploiting the continuous and rapid increase in computing power available on the market. PCI bus systems now allow processing of high-rate data streams in a standard PC system. At the same time the Windows NT operating system supports multitasking and symmetric multiprocessing, giving the capability to process high data rate signals. In addition, high-speed networking, 64-bit PCI-bus technology and the increase in processor power and software allow the creation of a system based on COTS products (which in the future may be easily and inexpensively upgraded). In the frame of the EUCLID RTP 9.8 project, a specific work element was dedicated to developing the architecture of a system able to acquire telemetry data at up to 600 Mbps. Laben S.p.A., a Finmeccanica company entrusted with this work, has designed a PCI-based telemetry system that makes communication between a satellite down-link and a wide area network possible at the required rate.

  5. Personal Computer Based Controller For Switched Reluctance Motor Drives

    NASA Astrophysics Data System (ADS)

    Mang, X.; Krishnan, R.; Adkar, S.; Chandramouli, G.

    1987-10-01

    The switched reluctance motor (SRM) has recently gained considerable attention in the variable speed drive market. Two important factors that have contributed to this are the simplicity of construction and the possibility of developing low-cost controllers with a minimum number of switching devices in the drive circuits. This is mainly due to the state of the art of present digital circuit technology and the low cost of switching devices. The control of this motor drive is under research. Optimized performance of the SRM drive is very dependent on the integration of the controller, converter and the motor. This research on system integration involves considerable changes in the control algorithms and their implementation. A personal computer (PC) based controller is very appropriate for this purpose. Accordingly, the present paper is concerned with the design of a PC-based controller for an SRM. The PC allows for real-time microprocessor control with the possibility of on-line system parameter modifications. Software reconfiguration of this controller is easier than for a hardware-based controller. User friendliness is a natural consequence of such a system. Considering the low cost of PCs, this controller will offer an excellent cost-effective means of studying the control strategies for the SRM drive in greater detail than in the past.

  6. Electro-encephalogram based brain-computer interface: improved performance by mental practice and concentration skills.

    PubMed

    Mahmoudi, Babak; Erfanian, Abbas

    2006-11-01

    Mental imagery is an essential part of most EEG-based communication systems. Thus, the quality of mental rehearsal, the degree of imagined effort, and mind controllability should have a major effect on the performance of an electro-encephalogram (EEG) based brain-computer interface (BCI). It is now well established that mental practice using motor imagery improves motor skills. The effects of mental practice on motor skill learning are the result of practice on central motor programming. According to this view, it seems logical that mental practice should modify the neuronal activity in the primary sensorimotor areas and consequently change the performance of EEG-based BCI. For developing a practical BCI system, recognizing the resting state with eyes opened and the imagined voluntary movement is important. For this purpose, the mind should be able to focus on a single goal for a period of time, without deviation to another context. In this work, we examine the role of mental practice and concentration skills on EEG control during imagined hand movements. The results show that mental practice and concentration can generally improve the classification accuracy of the EEG patterns. It is found that mental training has a significant effect on the classification accuracy over the primary motor cortex and frontal area.
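
    The abstract does not specify the signal-processing pipeline, but studies of this kind typically separate imagined movement from rest by classifying band-power features of single-trial EEG. The sketch below is a generic illustration of that approach on synthetic data, not the authors' method; the channel count, sampling rate and the mu/beta band are assumptions.

      # Schematic sketch (not the authors' exact pipeline): classify imagined hand
      # movement vs. rest from band-power features of EEG epochs.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      def band_power(epochs, fs, band=(8.0, 30.0)):
          """Mean power of each channel in a frequency band (mu/beta by default)."""
          freqs = np.fft.rfftfreq(epochs.shape[-1], d=1.0 / fs)
          psd = np.abs(np.fft.rfft(epochs, axis=-1)) ** 2
          sel = (freqs >= band[0]) & (freqs <= band[1])
          return psd[..., sel].mean(axis=-1)

      # epochs: (n_trials, n_channels, n_samples); labels: 0 = rest, 1 = imagery
      rng = np.random.default_rng(0)
      epochs = rng.standard_normal((40, 8, 512))   # synthetic placeholder data
      labels = np.repeat([0, 1], 20)
      features = band_power(epochs, fs=256.0)      # (n_trials, n_channels)
      acc = cross_val_score(LinearDiscriminantAnalysis(), features, labels, cv=5)
      print("mean classification accuracy:", acc.mean())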

  7. Rapid Optimization of External Quantum Efficiency of Thin Film Solar Cells Using Surrogate Modeling of Absorptivity.

    PubMed

    Kaya, Mine; Hajimirza, Shima

    2018-05-25

    This paper uses surrogate modeling for very fast design of thin film solar cells with improved solar-to-electricity conversion efficiency. We demonstrate that the wavelength-specific optical absorptivity of a thin film multi-layered amorphous-silicon-based solar cell can be modeled accurately with Neural Networks and can be efficiently approximated as a function of cell geometry and wavelength. Consequently, the external quantum efficiency can be computed by averaging surrogate absorption and carrier recombination contributions over the entire irradiance spectrum in an efficient way. Using this framework, we optimize a multi-layer structure consisting of ITO front coating, metallic back-reflector and oxide layers for achieving maximum efficiency. Our required computation time for an entire model fitting and optimization is 5 to 20 times less than the best previous optimization results based on direct Finite Difference Time Domain (FDTD) simulations, therefore proving the value of surrogate modeling. The resulting optimization solution suggests at least 50% improvement in the external quantum efficiency compared to bare silicon, and 25% improvement compared to a random design.
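
    As a rough illustration of the surrogate workflow described above (not the authors' network or data): a regression model is fitted to absorptivity samples as a function of cell geometry and wavelength, and an effective, spectrum-averaged external quantum efficiency is then obtained by a photon-flux-weighted average of absorptivity times an assumed internal quantum efficiency. The training data, layer parameters, spectrum and IQE model below are all placeholders.

      # Hedged sketch: fit a neural-network surrogate for spectral absorptivity
      # A(geometry, wavelength), then estimate an effective external quantum
      # efficiency by a photon-flux-weighted spectral average.  All data below are
      # placeholders, not the authors' FDTD samples or recombination model.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      # X: [3 geometry parameters, wavelength]; y: absorptivity from a full-wave solver
      X_train = np.random.rand(2000, 4)
      y_train = np.random.rand(2000)
      surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X_train, y_train)

      def effective_eqe(geometry, wavelengths, photon_flux, iqe):
          """Flux-weighted average of surrogate absorptivity times an assumed IQE."""
          X = np.column_stack([np.tile(geometry, (len(wavelengths), 1)), wavelengths])
          absorptivity = np.clip(surrogate.predict(X), 0.0, 1.0)
          weights = photon_flux / np.trapz(photon_flux, wavelengths)
          return np.trapz(absorptivity * iqe(wavelengths) * weights, wavelengths)

      wl = np.linspace(0.0, 1.0, 200)          # normalized wavelength grid (placeholder)
      flux = np.ones_like(wl)                  # placeholder irradiance spectrum
      print(effective_eqe(np.array([0.3, 0.5, 0.2]), wl, flux, lambda w: 0.9 * np.ones_like(w)))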

  8. Preliminary design of a high speed civil transport: The Opus 0-001

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Based on research into the technology and issues surrounding the design, development, and operation of a second generation High Speed Civil Transport, HSCT, the Opus 0-001 team completed the preliminary design of a sixty passenger, three engine aircraft. The design of this aircraft was performed using a computer program which the team wrote. This program automatically computed the geometric, aerodynamic, and performance characteristics of an aircraft whose preliminary geometry was specified. The Opus 0-001 aircraft was designed for a cruise Mach number of 2.2 and a range of 4,700 nautical miles, and its design was based on current or very near-term technology. Its small size was a consequence of an emphasis on a profitable, low cost program, capable of delivering tomorrow's passengers in style and comfort at prices that make it an attractive competitor to both current and future subsonic transport aircraft. Several hundred thousand combinations of cruise Mach number, aircraft size and cost breakdown were investigated to obtain the costs and revenues from which profit was calculated. The projected unit flyaway cost was $92.0 million per aircraft.

  9. Increasing the applicability of density functional theory. IV. Consequences of ionization-potential improved exchange-correlation potentials.

    PubMed

    Verma, Prakash; Bartlett, Rodney J

    2014-05-14

    This paper's objective is to create a "consistent" mean-field based Kohn-Sham (KS) density functional theory (DFT), meaning the functional should not only provide good total energy properties, but the corresponding KS eigenvalues should also be accurate approximations to the vertical ionization potentials (VIPs) of the molecule, as the latter condition attests to the viability of the exchange-correlation potential (VXC). None of the prominently used DFT approaches show these properties, but the optimized effective potential VXC based ab initio DFT does. A local, range-separated hybrid potential, CAM-QTP-00, is introduced as the basis for a "consistent" KS DFT approach. The computed VIPs, obtained as the negative of the KS eigenvalues, have a mean absolute error of 0.8 eV for an extensive set of molecular electron ionizations, including core ionizations. Barrier heights, equilibrium geometries, and magnetic properties obtained from the potential are in good agreement with experiment. A similar accuracy with less computational effort can be achieved by using a non-variational global hybrid variant of the QTP-00 approach.

  10. Biomaterials and computation: a strategic alliance to investigate emergent responses of neural cells.

    PubMed

    Sergi, Pier Nicola; Cavalcanti-Adam, Elisabetta Ada

    2017-03-28

    Topographical and chemical cues drive migration, outgrowth and regeneration of neurons in different and crucial biological conditions. In the natural extracellular matrix, their influences are so closely coupled that they result in complex cellular responses. As a consequence, engineered biomaterials are widely used to simplify in vitro conditions, disentangling intricate in vivo behaviours and narrowing the investigation to particular emergent responses. Nevertheless, how topographical and chemical cues affect the emergent response of neural cells is still unclear, thus in silico models are used as additional tools to reproduce and investigate the interactions between cells and engineered biomaterials. This work aims at presenting the synergistic use of biomaterials-based experiments and computation as a strategic way to promote the discovery of complex neural responses as well as to allow the interactions between cells and biomaterials to be quantitatively investigated, fostering a rational design of experiments.

  11. Automatic detection and severity measurement of eczema using image processing.

    PubMed

    Alam, Md Nafiul; Munia, Tamanna Tabassum Khan; Tavakolian, Kouhyar; Vasefi, Fartash; MacKinnon, Nick; Fazel-Rezai, Reza

    2016-08-01

    Chronic skin diseases like eczema may lead to severe health and financial consequences for patients if not detected and controlled early. Early measurement of disease severity, combined with a recommendation for skin protection and use of appropriate medication, can prevent the disease from worsening. Current diagnosis can be costly and time-consuming. In this paper, an automatic eczema detection and severity measurement model is presented using modern image processing and computer algorithms. The system can successfully detect regions of eczema and classify the identified region as mild or severe based on image color and texture features. The model then automatically measures the skin parameters used in the most common assessment tool, the Eczema Area and Severity Index (EASI), by computing the eczema-affected area score, the eczema intensity score, and the body region score, allowing both patients and physicians to accurately assess the affected skin.
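
    For reference, the EASI arithmetic that such image-derived scores feed into is simple: each of four body regions contributes an intensity score times an area score times a fixed regional weight. The sketch below assumes the standard adult weights and is only an illustration of the scoring formula, not of the paper's image-processing pipeline; the example scores are hypothetical.

      # Minimal sketch of the EASI arithmetic (standard adult formulation assumed):
      # EASI = sum over 4 body regions of (intensity 0-12) x (area score 0-6) x weight.
      REGION_WEIGHTS = {            # adult multipliers; children use different values
          "head_neck": 0.1,
          "upper_limbs": 0.2,
          "trunk": 0.3,
          "lower_limbs": 0.4,
      }

      def easi(region_scores):
          """region_scores maps region -> (intensity_sum, area_score)."""
          total = 0.0
          for region, (intensity, area) in region_scores.items():
              assert 0 <= intensity <= 12 and 0 <= area <= 6
              total += REGION_WEIGHTS[region] * intensity * area
          return total              # ranges from 0 (clear) to 72 (most severe)

      # Example: hypothetical image-derived scores for each region.
      print(easi({"head_neck": (4, 2), "upper_limbs": (6, 3),
                  "trunk": (3, 1), "lower_limbs": (5, 2)}))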

  12. Integrative Systems Models of Cardiac Excitation Contraction Coupling

    PubMed Central

    Greenstein, Joseph L.; Winslow, Raimond L.

    2010-01-01

    Excitation-contraction coupling in the cardiac myocyte is mediated by a number of highly integrated mechanisms of intracellular Ca2+ transport. The complexity and integrative nature of heart cell electrophysiology and Ca2+-cycling has led to an evolution of computational models that have played a crucial role in shaping our understanding of heart function. An important emerging theme in systems biology is that the detailed nature of local signaling events, such as those that occur in the cardiac dyad, has important consequences at higher biological scales. Multi-scale modeling techniques have revealed many mechanistic links between micro-scale events, such as Ca2+ binding to a channel protein, and macro-scale phenomena, such as excitation-contraction coupling gain. Here we review experimentally based multi-scale computational models of excitation-contraction coupling and the insights that have been gained through their application. PMID:21212390

  13. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1987-01-01

    The results of ongoing research directed at developing a graph theoretical model for describing data and control flow associated with the execution of large grained algorithms in a spatially distributed computing environment are presented. This model is identified by the acronym ATAMM (Algorithm/Architecture Mapping Model). The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM based architecture is to optimize computational concurrency in the multiprocessor environment and to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.
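
    To make the data-flow idea concrete, the toy sketch below applies the generic firing rule used by such models (a node may execute as soon as all of its inputs are available), so that concurrency follows from the graph structure rather than from an explicit schedule. It is only an illustration of the principle, not the ATAMM specification; the graph and node functions are invented for the example.

      # Illustrative sketch of a generic data-flow firing rule (not the ATAMM model):
      # a node runs once every input token it depends on has been produced.
      def run_dataflow(graph, sources):
          """graph: node -> (inputs, function); sources: node -> initial value."""
          values = dict(sources)
          pending = set(graph)
          while pending:
              ready = [n for n in pending if all(i in values for i in graph[n][0])]
              if not ready:
                  raise RuntimeError("deadlock: no node has all its inputs")
              for node in ready:          # these nodes could run on separate processors
                  inputs, func = graph[node]
                  values[node] = func(*[values[i] for i in inputs])
                  pending.discard(node)
          return values

      # y = (a + b) * (a - b); the two inner nodes are independent and could run concurrently.
      graph = {
          "sum":  (("a", "b"), lambda a, b: a + b),
          "diff": (("a", "b"), lambda a, b: a - b),
          "y":    (("sum", "diff"), lambda s, d: s * d),
      }
      print(run_dataflow(graph, {"a": 5.0, "b": 3.0}))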

  14. Blurriness in Live Forensics: An Introduction

    NASA Astrophysics Data System (ADS)

    Savoldi, Antonio; Gubian, Paolo

    The Live Forensics discipline aims at answering basic questions related to a digital crime, which usually involves a computer-based system. The investigation should be carried out with the goal of establishing which processes were running, when they were started and by whom, what specific activities those processes were doing and the state of active network connections. Moreover, a set of tools needs to be launched on the running system, which, as a consequence of Locard's exchange principle [2], alters the system's memory. All the methodologies for the live forensics field proposed until now have a basic, albeit important, weakness: the inability to quantify the perturbation, or blurriness, of the system's memory of the investigated computer. This is precisely the goal of this paper: to provide a set of guidelines which can be effectively used for measuring the uncertainty of the collected volatile memory on a live system being investigated.

  15. Tunable Coarse Graining for Monte Carlo Simulations of Proteins via Smoothed Energy Tables: Direct and Exchange Simulations

    PubMed Central

    2015-01-01

    Many commonly used coarse-grained models for proteins are based on simplified interaction sites and consequently may suffer from significant limitations, such as the inability to properly model protein secondary structure without the addition of restraints. Recent work on a benzene fluid (Lettieri, S.; Zuckerman, D. M. J. Comput. Chem. 2012, 33, 268-275) suggested an alternative strategy of tabulating and smoothing fully atomistic orientation-dependent interactions among rigid molecules or fragments. Here we report our initial efforts to apply this approach to the polar and covalent interactions intrinsic to polypeptides. We divide proteins into nearly rigid fragments, construct distance and orientation-dependent tables of the atomistic interaction energies between those fragments, and apply potential energy smoothing techniques to those tables. The amount of smoothing can be adjusted to give coarse-grained models that range from the underlying atomistic force field all the way to a bead-like coarse-grained model. For a moderate amount of smoothing, the method is able to preserve about 70–90% of the α-helical structure while providing a factor of 3–10 improvement in sampling per unit computation time (depending on how sampling is measured). For a greater amount of smoothing, multiple folding–unfolding transitions of the peptide were observed, along with a factor of 10–100 improvement in sampling per unit computation time, although the time spent in the unfolded state was increased compared with less smoothed simulations. For a β hairpin, secondary structure is also preserved, albeit for a narrower range of the smoothing parameter and, consequently, for a more modest improvement in sampling. We have also applied the new method in a "resolution exchange" setting, in which each replica runs a Monte Carlo simulation with a different degree of smoothing. We obtain exchange rates that compare favorably to our previous efforts at resolution exchange (Lyman, E.; Zuckerman, D. M. J. Chem. Theory Comput. 2006, 2, 656-666). PMID:25400525
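
    The tabulate-then-smooth idea can be illustrated in one dimension. The sketch below builds a distance table for a simple pair potential and applies a Gaussian smoothing kernel whose width plays the role of the tunable coarse-graining parameter; the published method tabulates full distance- and orientation-dependent atomistic energies between rigid fragments, so this is only a schematic stand-in, and the potential and smoothing widths are placeholders.

      # Illustrative sketch only: tabulate a pair interaction energy on a distance
      # grid and smooth it with a Gaussian kernel.  A 1D Lennard-Jones table stands
      # in for the orientation-dependent fragment-fragment tables of the real method.
      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      r = np.linspace(0.9, 3.0, 500)                 # distance grid (reduced units)
      lj = 4.0 * (r**-12 - r**-6)                    # underlying "atomistic" energy

      def smoothed_table(sigma_in_bins):
          """Larger sigma -> smoother table -> softer, more coarse-grained model."""
          return gaussian_filter1d(lj, sigma=sigma_in_bins, mode="nearest")

      def table_energy(table, distance):
          return np.interp(distance, r, table)       # lookup used inside Monte Carlo moves

      for sigma in (0.0, 5.0, 25.0):
          table = lj if sigma == 0.0 else smoothed_table(sigma)
          print(sigma, table_energy(table, 1.12))    # energy near the LJ minimum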

  16. High performance computing for deformable image registration: towards a new paradigm in adaptive radiotherapy.

    PubMed

    Samant, Sanjiv S; Xia, Junyi; Muyan-Ozcelik, Pinar; Owens, John D

    2008-08-01

    The advent of readily available temporal imaging or time series volumetric (4D) imaging has become an indispensable component of treatment planning and adaptive radiotherapy (ART) at many radiotherapy centers. Deformable image registration (DIR) is also used in other areas of medical imaging, including motion corrected image reconstruction. Due to long computation times, clinical applications of DIR in radiation therapy and elsewhere have been limited and consequently relegated to offline analysis. With the recent advances in hardware and software, graphics processing unit (GPU) based computing is an emerging technology for general purpose computation, including DIR, and is suitable for highly parallelized computing. However, traditional general purpose computation on the GPU is limited because of the constraints of the available programming platforms. In addition, compared to CPU programming, the GPU currently has reduced dedicated processor memory, which can limit the useful working data set for parallelized processing. We present an implementation of the demons algorithm using the NVIDIA 8800 GTX GPU and the new CUDA programming language. The GPU performance is compared with single threading and multithreading CPU implementations on an Intel dual core 2.4 GHz CPU using the C programming language. CUDA provides a C-like language programming interface and allows for direct access to the highly parallel compute units in the GPU. Comparisons for volumetric clinical lung images acquired using 4DCT were carried out. Computation times for 100 iterations in the range of 1.8-13.5 s were observed for the GPU, with image sizes ranging from 2.0 x 10^6 to 14.2 x 10^6 pixels. The GPU registration was 55-61 times faster than the CPU for the single threading implementation, and 34-39 times faster for the multithreading implementation. For CPU-based computing, the computational time generally has a linear dependence on image size for medical imaging data. Computational efficiency is characterized in terms of time per megapixel per iteration (TPMI), with units of seconds per megapixel per iteration (spmi). For the demons algorithm, our CPU implementation yielded largely invariant values of TPMI. The mean TPMIs were 0.527 spmi and 0.335 spmi for the single threading and multithreading cases, respectively, with <2% variation over the considered image data range. For GPU computing, we achieved TPMI = 0.00916 spmi with 3.7% variation, indicating optimized memory handling under CUDA. The paradigm of GPU based real-time DIR opens up a host of clinical applications for medical imaging.
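
    The TPMI metric quoted above reduces to a one-line computation, shown below with the GPU figures reported in the abstract and a hypothetical CPU timing for comparison (the CPU time here is illustrative, not taken from the paper).

      # Time per megapixel per iteration (spmi) from total time, pixel count, iterations.
      def tpmi(total_time_s, n_pixels, n_iterations):
          return total_time_s / (n_pixels / 1.0e6) / n_iterations

      # e.g. 13.5 s for 100 demons iterations on a 14.2-megapixel 4DCT volume (GPU figure above)
      gpu = tpmi(13.5, 14.2e6, 100)
      cpu_single = tpmi(13.5 * 58, 14.2e6, 100)   # hypothetical ~58x slower single-thread CPU run
      print(f"GPU: {gpu:.5f} spmi, CPU: {cpu_single:.3f} spmi, speedup: {cpu_single / gpu:.0f}x")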

  17. Quantifying the predictive consequences of model error with linear subspace analysis

    USGS Publications Warehouse

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.

  18. Partitioning-based mechanisms under personalized differential privacy.

    PubMed

    Li, Haoran; Xiong, Li; Ji, Zhanglong; Jiang, Xiaoqian

    2017-05-01

    Differential privacy has recently emerged in private statistical aggregate analysis as one of the strongest privacy guarantees. A limitation of the model is that it provides the same privacy protection for all individuals in the database. However, it is common that data owners may have different privacy preferences for their data. Consequently, a global differential privacy parameter may provide excessive privacy protection for some users, while insufficient for others. In this paper, we propose two partitioning-based mechanisms, privacy-aware and utility-based partitioning, to handle personalized differential privacy parameters for each individual in a dataset while maximizing utility of the differentially private computation. The privacy-aware partitioning is to minimize the privacy budget waste, while utility-based partitioning is to maximize the utility for a given aggregate analysis. We also develop a t-round partitioning to take full advantage of remaining privacy budgets. Extensive experiments using real datasets show the effectiveness of our partitioning mechanisms.
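
    The following sketch illustrates the general shape of partition-then-perturb approaches to personalized privacy budgets for a simple count query; it is a hedged illustration only, not either of the paper's two mechanisms, and the threshold used to form partitions is a placeholder.

      # Hedged sketch of the general idea (not the paper's mechanisms): records carry
      # personal privacy budgets; each partition is answered with Laplace noise
      # calibrated to the smallest budget inside it, so grouping users with similar
      # budgets wastes less of the larger budgets than a single global parameter.
      import numpy as np

      def noisy_count(values, epsilons, thresholds):
          """Partition records by personal epsilon and sum per-partition noisy counts."""
          rng = np.random.default_rng(0)
          values, epsilons = np.asarray(values), np.asarray(epsilons)
          edges = [0.0] + sorted(thresholds) + [np.inf]
          total = 0.0
          for lo, hi in zip(edges[:-1], edges[1:]):
              members = (epsilons > lo) & (epsilons <= hi)
              if not members.any():
                  continue
              eps = epsilons[members].min()      # strictest requirement within the partition
              total += values[members].sum() + rng.laplace(scale=1.0 / eps)  # sensitivity 1
          return total

      # Users with epsilon 0.1 (strict) vs. 1.0 (relaxed); a single global parameter
      # would have to use 0.1 for everyone.
      print(noisy_count(values=[1] * 100, epsilons=[0.1] * 50 + [1.0] * 50, thresholds=[0.5]))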

  19. Partitioning-based mechanisms under personalized differential privacy

    PubMed Central

    Li, Haoran; Xiong, Li; Ji, Zhanglong; Jiang, Xiaoqian

    2017-01-01

    Differential privacy has recently emerged in private statistical aggregate analysis as one of the strongest privacy guarantees. A limitation of the model is that it provides the same privacy protection for all individuals in the database. However, it is common that data owners may have different privacy preferences for their data. Consequently, a global differential privacy parameter may provide excessive privacy protection for some users, while insufficient for others. In this paper, we propose two partitioning-based mechanisms, privacy-aware and utility-based partitioning, to handle personalized differential privacy parameters for each individual in a dataset while maximizing utility of the differentially private computation. The privacy-aware partitioning is to minimize the privacy budget waste, while utility-based partitioning is to maximize the utility for a given aggregate analysis. We also develop a t-round partitioning to take full advantage of remaining privacy budgets. Extensive experiments using real datasets show the effectiveness of our partitioning mechanisms. PMID:28932827

  20. Emerging association between addictive gaming and attention-deficit/hyperactivity disorder.

    PubMed

    Weinstein, Aviv; Weizman, Abraham

    2012-10-01

    Children's and adolescents' use of computer games and videogames is becoming highly popular and has increased dramatically over the last decade. There is growing evidence of a high prevalence of addiction to computer games and videogames among children, which is causing concern because of its harmful consequences. There is also emerging evidence of an association between computer game and videogame addiction and attention deficit/hyperactivity disorder (ADHD). This is indicated by the occurrence of gaming addiction as a co-morbid disorder of ADHD, common physiological and pharmacological mechanisms, and a potential genetic association between the two disorders. A proper understanding of the psychological and neurotransmitter mechanisms underlying both disorders is important for appropriate diagnostic classification of both disorders. Furthermore, it is important for the development of potential pharmacological treatments for both disorders. Relatively few studies have investigated the common mechanisms of both disorders. This paper reviews new findings, trends, and developments in the field. The paper is based on a literature search, in Medline and PubMed, using the keywords addictive gaming and ADHD, of articles published between 2000 and 2012.

  1. Anharmonic Infrared Spectroscopy through the Fourier Transform of Time Correlation Function Formalism in ONETEP.

    PubMed

    Vitale, Valerio; Dziedzic, Jacek; Dubois, Simon M-M; Fangohr, Hans; Skylaris, Chris-Kriton

    2015-07-14

    Density functional theory molecular dynamics (DFT-MD) provides an efficient framework for accurately computing several types of spectra. The major benefit of DFT-MD approaches lies in the ability to naturally take into account the effects of temperature and anharmonicity, without having to introduce any ad hoc or a posteriori corrections. Consequently, computational spectroscopy based on DFT-MD approaches plays a pivotal role in the understanding and assignment of experimental peaks and bands at finite temperature, particularly in the case of floppy molecules. Linear-scaling DFT methods can be used to study large and complex systems, such as peptides, DNA strands, amorphous solids, and molecules in solution. Here, we present the implementation of DFT-MD IR spectroscopy in the ONETEP linear-scaling code. In addition, two methods for partitioning the dipole moment within the ONETEP framework are presented. Dipole moment partitioning allows us to compute spectra of molecules in solution, which fully include the effects of the solvent, while at the same time removing the solvent contribution from the spectra.
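
    The post-processing step behind such spectra is generic: the IR lineshape is obtained (up to prefactors and quantum correction factors) from the Fourier transform of the autocorrelation of the total dipole moment along the trajectory. The sketch below shows that step on a synthetic dipole time series; it is not the ONETEP implementation and ignores the dipole-partitioning scheme discussed above.

      # Minimal sketch of the time-correlation-function route to an IR spectrum
      # (generic post-processing, not the ONETEP code): Fourier transform of the
      # dipole autocorrelation function accumulated along the MD trajectory.
      import numpy as np

      def ir_spectrum(dipole, dt):
          """dipole: (n_steps, 3) total dipole along the trajectory; dt: time step in s."""
          dipole = dipole - dipole.mean(axis=0)          # remove the static dipole
          n = dipole.shape[0]
          acf = np.zeros(n)
          for k in range(3):                             # autocorrelation per component
              full = np.correlate(dipole[:, k], dipole[:, k], mode="full")[n - 1:]
              acf += full / np.arange(n, 0, -1)          # unbiased normalization
          spectrum = np.abs(np.fft.rfft(acf * np.hanning(n)))
          freqs = np.fft.rfftfreq(n, d=dt)
          return freqs, spectrum                         # intensities up to prefactors

      # Synthetic example: a single oscillating dipole component gives one peak.
      t = np.arange(0, 4096) * 0.5e-15                   # 0.5 fs time step
      mu = np.column_stack([np.cos(2 * np.pi * 5.0e13 * t), np.zeros_like(t), np.zeros_like(t)])
      freqs, spec = ir_spectrum(mu, dt=0.5e-15)
      print("peak frequency (Hz):", freqs[np.argmax(spec[1:]) + 1])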

  2. Tree decomposition based fast search of RNA structures including pseudoknots in genomes.

    PubMed

    Song, Yinglei; Liu, Chunmei; Malmberg, Russell; Pan, Fangfang; Cai, Liming

    2005-01-01

    Searching genomes for RNA secondary structure with computational methods has become an important approach to the annotation of non-coding RNAs. However, due to the lack of efficient algorithms for accurate RNA structure-sequence alignment, computer programs capable of fast and effective searches of genomes for RNA secondary structures have not been available. In this paper, a novel RNA structure profiling model is introduced based on the notion of a conformational graph to specify the consensus structure of an RNA family. Tree decomposition yields a small tree width t for such conformational graphs (e.g., t = 2 for stem loops and only a slight increase for pseudoknots). Within this modelling framework, the optimal alignment of a sequence to the structure model corresponds to finding a maximum valued isomorphic subgraph and consequently can be accomplished through dynamic programming on the tree decomposition of the conformational graph in time O(k^t N^2), where k is a small parameter and N is the size of the profiled RNA structure. Experiments show that the application of the alignment algorithm to search in genomes yields the same search accuracy as methods based on a covariance model, with a significant reduction in computation time. In particular, very accurate searches for tmRNAs in bacterial genomes and for telomerase RNAs in yeast genomes can be accomplished in days, as opposed to the months required by other methods. The tree decomposition based searching tool is free upon request and can be downloaded at our site http://w.uga.edu/RNA-informatics/software/index.php.

  3. Methods for improving simulations of biological systems: systemic computation and fractal proteins

    PubMed Central

    Bentley, Peter J.

    2009-01-01

    Modelling and simulation are becoming essential for new fields such as synthetic biology. Perhaps the most important aspect of modelling is to follow a clear design methodology that will help to highlight unwanted deficiencies. The use of tools designed to aid the modelling process can be of benefit in many situations. In this paper, the modelling approach called systemic computation (SC) is introduced. SC is an interaction-based language, which enables individual-based expression and modelling of biological systems, and the interactions between them. SC permits a precise description of a hypothetical mechanism to be written using an intuitive graph-based or a calculus-based notation. The same description can then be directly run as a simulation, merging the hypothetical mechanism and the simulation into the same entity. However, even when using well-designed modelling tools to produce good models, the best model is not always the most accurate one. Frequently, computational constraints or lack of data make it infeasible to model an aspect of biology. Simplification may provide one way forward, but with inevitable consequences of decreased accuracy. Instead of attempting to replace an element with a simpler approximation, it is sometimes possible to substitute the element with a different but functionally similar component. In the second part of this paper, this modelling approach is described and its advantages are summarized using an exemplar: the fractal protein model. Finally, the paper ends with a discussion of good biological modelling practice by presenting lessons learned from the use of SC and the fractal protein model. PMID:19324681

  4. Estimation of the net acid load of the diet of ancestral preagricultural Homo sapiens and their hominid ancestors.

    PubMed

    Sebastian, Anthony; Frassetto, Lynda A; Sellmeyer, Deborah E; Merriam, Renée L; Morris, R Curtis

    2002-12-01

    Natural selection has had < 1% of hominid evolutionary time to eliminate the inevitable maladaptations consequent to the profound transformation of the human diet resulting from the inventions of agriculture and animal husbandry. The objective was to estimate the net systemic load of acid (net endogenous acid production; NEAP) from retrojected ancestral preagricultural diets and to compare it with that of contemporary diets, which are characterized by an imbalance of nutrient precursors of hydrogen and bicarbonate ions that induces a lifelong, low-grade, pathogenically significant systemic metabolic acidosis. Using established computational methods, we computed NEAP for a large number of retrojected ancestral preagricultural diets and compared them with computed and measured values for typical American diets. The mean (+/- SD) NEAP for 159 retrojected preagricultural diets was -88 +/- 82 mEq/d; 87% were net base-producing. The computational model predicted NEAP for the average American diet (as recorded in the third National Health and Nutrition Examination Survey) as 48 mEq/d, within a few percentage points of published measured values for free-living Americans; the model, therefore, was not biased toward generating negative NEAP values. The historical shift from negative to positive NEAP was accounted for by the displacement of high-bicarbonate-yielding plant foods in the ancestral diet by cereal grains and energy-dense, nutrient-poor foods in the contemporary diet, neither of which is net base-producing. The findings suggest that diet-induced metabolic acidosis and its sequelae in humans eating contemporary diets reflect a mismatch between the nutrient composition of the diet and genetically determined nutritional requirements for optimal systemic acid-base status.
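
    As a rough illustration of how such a net endogenous acid production (NEAP) estimate behaves, the sketch below uses a simple published estimation equation associated with this research group, in which NEAP scales with the dietary protein-to-potassium ratio. The coefficients shown are an assumption here, and the paper's full computational model is more detailed, so treat this strictly as a schematic; the intake values are illustrative only.

      # Hedged illustration: a simple estimation equation relating NEAP to the dietary
      # protein-to-potassium ratio (assumed coefficients; the paper's full model is
      # more detailed).  Positive NEAP = net acid-producing, negative = net base-producing.
      def estimated_neap(protein_g_per_day, potassium_meq_per_day):
          """NEAP (mEq/day) ~ 54.5 * protein / potassium - 10.2 (assumed form)."""
          return 54.5 * protein_g_per_day / potassium_meq_per_day - 10.2

      # Illustrative contemporary-style intake (moderate protein, low potassium).
      print(estimated_neap(protein_g_per_day=78, potassium_meq_per_day=67))
      # Illustrative potassium-rich, plant-heavy intake.
      print(estimated_neap(protein_g_per_day=60, potassium_meq_per_day=400))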

  5. Partnership in Computational Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huray, Paul G.

    1999-02-24

    This is the final report for the "Partnership in Computational Science" (PICS) award in the amount of $500,000 for the period January 1, 1993 through December 31, 1993. A copy of the proposal with its budget is attached as Appendix A. This report first describes the consequent significance of the DOE award in building high-performance computing infrastructure in the Southeast and then describes the work accomplished under this grant and lists the publications resulting from it.

  6. Spider World: A Robot Language for Learning to Program. Assessing the Cognitive Consequences of Computer Environments for Learning (ACCCEL).

    ERIC Educational Resources Information Center

    Dalbey, John; Linn, Marcia

    Spider World is an interactive program designed to help individuals with no previous computer experience to learn the fundamentals of programming. The program emphasizes cognitive tasks which are central to programming and provides significant problem-solving opportunities. In Spider World, the user commands a hypothetical robot (called the…

  7. The Computer-Mediated Communication (CMC) Classroom: A Challenge of Medium, Presence, Interaction, Identity, and Relationship

    ERIC Educational Resources Information Center

    Sherblom, John C.

    2010-01-01

    There is a "prevalence of computer-mediated communication (CMC) in education," and a concern for its negative psychosocial consequences and lack of effectiveness as an instructional tool. This essay identifies five variables in the CMC research literature and shows their moderating effect on the psychosocial, instructional expevrience of the CMC…

  8. Structural Consequences of Retention Policies: The Use of Computer Models To Inform Policy.

    ERIC Educational Resources Information Center

    Morris, Don R.

    This paper reports on a longitudinal study of the structural effects of grade retention on dropout rate and percent of graduates qualified. The study drew on computer simulation to explore the effects of retention and how this practice affects dropout rate, the percent of graduates who meet required standards, and enrollment itself. The computer…

  9. Exploring the Benefits and Challenges of Using Laptop Computers in Higher Education Classrooms: A Formative Analysis

    ERIC Educational Resources Information Center

    Kay, Robin H.; Lauricella, Sharon

    2011-01-01

    Because of decreased prices, increased convenience, and wireless access, an increasing number of college and university students are using laptop computers in their classrooms. This recent trend has forced instructors to address the educational consequences of using these mobile devices. The purpose of the current study was to analyze and assess…

  10. An Evaluation of the Conditions, Processes, and Consequences of Laptop Computing in K-12 Classrooms

    ERIC Educational Resources Information Center

    Cavanaugh, Cathy; Dawson, Kara; Ritzhaupt, Albert

    2011-01-01

    This article examines how laptop computing technology, teacher professional development, and systematic support resulted in changed teaching practices and increased student achievement in 47 K-12 schools in 11 Florida school districts. The overview of a large-scale study documents the type and magnitude of change in student-centered teaching,…

  11. Ubiquitous computing technology for just-in-time motivation of behavior change.

    PubMed

    Intille, Stephen S

    2004-01-01

    This paper describes a vision of health care where "just-in-time" user interfaces are used to transform people from passive to active consumers of health care. Systems that use computational pattern recognition to detect points of decision, behavior, or consequences automatically can present motivational messages to encourage healthy behavior at just the right time. Further, new ubiquitous computing and mobile computing devices permit information to be conveyed to users at just the right place. In combination, computer systems that present messages at the right time and place can be developed to motivate physical activity and healthy eating. Computational sensing technologies can also be used to measure the impact of the motivational technology on behavior.

  12. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing

    PubMed Central

    Fang, Ye; Ding, Yun; Feinstein, Wei P.; Koppelman, David M.; Moreno, Juana; Jarrell, Mark; Ramanujam, J.; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249. PMID:27420300

  13. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing.

    PubMed

    Fang, Ye; Ding, Yun; Feinstein, Wei P; Koppelman, David M; Moreno, Juana; Jarrell, Mark; Ramanujam, J; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249.

  14. Experiments in Computing: A Survey

    PubMed Central

    Moisseinen, Nella

    2014-01-01

    Experiments play a central role in science. The role of experiments in computing is, however, unclear. Questions about the relevance of experiments in computing attracted little attention until the 1980s. As the discipline then saw a push towards experimental computer science, a variety of technically, theoretically, and empirically oriented views on experiments emerged. As a consequence of those debates, today's computing fields use experiments and experiment terminology in a variety of ways. This paper analyzes experimentation debates in computing. It presents five ways in which debaters have conceptualized experiments in computing: feasibility experiment, trial experiment, field experiment, comparison experiment, and controlled experiment. This paper has three aims: to clarify experiment terminology in computing; to contribute to disciplinary self-understanding of computing; and, due to computing's centrality in other fields, to promote understanding of experiments in modern science in general. PMID:24688404

  15. Experiments in computing: a survey.

    PubMed

    Tedre, Matti; Moisseinen, Nella

    2014-01-01

    Experiments play a central role in science. The role of experiments in computing is, however, unclear. Questions about the relevance of experiments in computing attracted little attention until the 1980s. As the discipline then saw a push towards experimental computer science, a variety of technically, theoretically, and empirically oriented views on experiments emerged. As a consequence of those debates, today's computing fields use experiments and experiment terminology in a variety of ways. This paper analyzes experimentation debates in computing. It presents five ways in which debaters have conceptualized experiments in computing: feasibility experiment, trial experiment, field experiment, comparison experiment, and controlled experiment. This paper has three aims: to clarify experiment terminology in computing; to contribute to disciplinary self-understanding of computing; and, due to computing's centrality in other fields, to promote understanding of experiments in modern science in general.

  16. Combining operational models and data into a dynamic vessel risk assessment tool for coastal regions

    NASA Astrophysics Data System (ADS)

    Fernandes, R.; Braunschweig, F.; Lourenço, F.; Neves, R.

    2015-07-01

    The technological evolution in terms of computational capacity, data acquisition systems, numerical modelling and operational oceanography is providing opportunities for designing and building holistic approaches and complex tools for newer and more efficient management (planning, prevention and response) of coastal water pollution risk events. A combined methodology to dynamically estimate time- and space-variable shoreline risk levels from ships has been developed, integrating numerical metocean forecasts and oil spill simulations with vessel tracking from automatic identification systems (AIS). The risk rating combines the likelihood of an oil spill occurring from a vessel navigating in a study area - the Portuguese continental shelf - with the assessed consequences to the shoreline. The spill likelihood is based on dynamic marine weather conditions and statistical information from previous accidents. The shoreline consequences reflect the virtual spilled oil amount reaching the shoreline and its environmental and socio-economic vulnerabilities. The oil reaching the shoreline is quantified with an oil spill fate and behaviour model running multiple virtual spills from vessels along time. Shoreline risks can be computed in real time or from previously obtained data. Results show the ability of the proposed methodology to estimate risk in a way that is properly sensitive to dynamic metocean conditions and to oil transport behaviour. The integration of meteo-oceanic and oil spill models with coastal vulnerability and AIS data in the quantification of risk enhances maritime situational awareness and decision support, providing a more realistic approach to the assessment of shoreline impacts. Risk assessment from historical data can help find typical risk patterns and "hot spots" or support sensitivity analyses for specific conditions, whereas real-time risk levels can be used in the prioritization of individual ships, geographical areas, strategic tug positioning and the implementation of dynamic risk-based vessel traffic monitoring.
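
    Structurally, the rating described above is a likelihood-times-consequence product evaluated per shoreline segment and per time step. The sketch below shows that skeleton with placeholder scales and weights; the operational tool's actual likelihood statistics, consequence quantification and vulnerability indices are not reproduced here.

      # Hedged sketch of the risk-rating structure (placeholder scales, not the tool's values):
      # shoreline risk combines a time-varying spill likelihood with the modelled
      # consequence of virtual spills reaching vulnerable coastline.
      def shoreline_risk(spill_probability, oil_ashore_tonnes, vulnerability, max_oil_tonnes=100.0):
          """Return a 0-1 risk rating for one shoreline segment at one time step."""
          consequence = min(oil_ashore_tonnes / max_oil_tonnes, 1.0) * vulnerability
          return spill_probability * consequence

      # Example: a vessel in rough weather near an environmentally sensitive segment.
      print(shoreline_risk(spill_probability=0.02,      # from accident statistics + metocean state
                           oil_ashore_tonnes=35.0,      # from the oil spill fate model
                           vulnerability=0.8))          # environmental/socio-economic index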

  17. Ash production and dispersal from sustained low-intensity Mono-Inyo eruptions

    NASA Astrophysics Data System (ADS)

    Black, Benjamin A.; Manga, Michael; Andrews, Benjamin

    2016-08-01

    Recent rhyolitic volcanism has demonstrated that prolonged low-intensity ash venting may accompany effusive dome formation. We examine the possibility and some consequences of episodes of extended, weak ash venting at the rhyolitic Mono-Inyo chain in Eastern California. We describe ash-filled cracks within one of the youngest domes, Panum Crater, which provide a textural record of ash venting during dome effusion. We use synchrotron-based X-ray computed tomography to characterize the particles in these tuffisites. Particle sizes in well-sorted tuffisite layers agree well with grain size distributions observed during weak ash venting at Soufrière Hills Volcano, Montserrat, and yield approximate upper and lower bounds on gas velocity and mass flux during the formation of those layers. We simulate ash dispersal with Ash3d to assess the consequences of long-lived Mono-Inyo ash venting for ash deposition and the accompanying volcanic hazards. Our results highlight the sensitivity of large-scale outcomes of volcanic eruptions to small-scale processes.

  18. Physical consequences of the mitochondrial targeting of single-walled carbon nanotubes probed computationally

    NASA Astrophysics Data System (ADS)

    Chistyakov, V. A.; Zolotukhin, P. V.; Prazdnova, E. V.; Alperovich, I.; Soldatov, A. V.

    2015-06-01

    Experiments by F. Zhou and coworkers (2010) [16] showed that mitochondria are the main target of the cellular accumulation of single-walled carbon nanotubes (SWCNTs). Our in silico experiments, based on geometrical optimization of the system consisting of a SWCNT and a proton within Density Functional Theory, revealed that protons can bind to the outer side of a SWCNT, thereby generating a positive charge. The calculation results allow one to propose the following mechanism of SWCNT mitochondrial targeting. SWCNTs enter the space between the inner and outer membranes of mitochondria, where an excess of protons has been formed by diffusion. In this compartment SWCNTs are loaded with protons and acquire positive charges distributed over their surface. Protonation of hydrophobic SWCNTs can also occur within the mitochondrial membrane through interaction with protonated ubiquinone. Such "charge loaded" particles can be transferred as "Skulachev ions" through the inner membrane of the mitochondria due to the potential difference generated by the inner membrane. Physiological consequences of the described mechanism are discussed.

  19. Robust and real-time control of magnetic bearings for space engines

    NASA Technical Reports Server (NTRS)

    Sinha, Alok; Wang, Kon-Well; Mease, K.; Lewis, S.

    1991-01-01

    Currently, NASA Lewis Research Center is developing magnetic bearings for Space Shuttle Main Engine (SSME) turbopumps. The control algorithms which have been used are based on either the proportional-integral-derivative (PID) control approach or the linear quadratic (LQ) state space approach. These approaches lead to acceptable performance only when the system model is accurately known, which is seldom true in practice. For example, the rotor eccentricity, which is a major source of vibration at high speeds, cannot be predicted accurately. Furthermore, the dynamics of a rotor shaft, which must be treated as a flexible system to model the elastic rotor shaft, are infinite dimensional in theory, and the controller can only be developed on the basis of a finite number of modes. Therefore, the development of the control system is further complicated by the possibility of closed loop system instability because of residual or uncontrolled modes, the so-called spillover problem. Consequently, novel control algorithms for magnetic bearings are being developed to be robust to inevitable parametric uncertainties, external disturbances, the spillover phenomenon and noise. Also, as pointed out earlier, magnetic bearings must exhibit good performance at speeds over 30,000 rpm. This implies that the sampling period available for the design of a digital control system has to be of the order of 0.5 milliseconds. Therefore, feedback coefficients and other required controller parameters have to be computed off-line so that the on-line computational burden is extremely small. The development of the robust and real-time control algorithms is based on sliding mode control theory. In this method, a dynamic system is made to move along a manifold of sliding hyperplanes to the origin of the state space. The number of sliding hyperplanes equals that of the actuators. The sliding mode controller has two parts: linear state feedback and nonlinear terms. The nonlinear terms guarantee that the system will reach the intersection of all sliding hyperplanes and remain on it when bounds on the errors in the system parameters and external disturbances are known. The linear part of the control drives the system to the origin of state space. Another important feature is that the controller parameters can be computed off-line. Consequently, the on-line computational burden is small.
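
    The two-part structure described above (linear state feedback plus a nonlinear switching term defined on a sliding surface) can be illustrated on a simple double-integrator model, as in the sketch below. The gains, the boundary-layer saturation used in place of a pure sign function, and the plant itself are illustrative assumptions, not the SSME magnetic-bearing design.

      # Minimal sketch of a sliding mode controller of the form described above,
      # applied to a double integrator; this shows the structure only.
      import numpy as np

      def sliding_mode_step(x, dt, lam=5.0, k=2.0, eta=1.0, phi=0.05):
          """x = [position, velocity]; returns updated state and control."""
          s = lam * x[0] + x[1]                      # sliding surface s = lam*e + e_dot
          u_linear = -lam * x[1] - k * s             # linear part: drives the state along the surface
          u_switch = -eta * np.clip(s / phi, -1, 1)  # saturated sign(s): robustness term
          u = u_linear + u_switch
          x_dot = np.array([x[1], u])                # double-integrator dynamics
          return x + dt * x_dot, u

      x = np.array([1.0, 0.0])                       # initial displacement of 1 unit
      for step in range(4000):                       # 0.5 ms sampling, as discussed above
          x, u = sliding_mode_step(x, dt=0.0005)
      print("final state:", x)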

  20. Computer-Based Auditory Training Programs for Children with Hearing Impairment - A Scoping Review.

    PubMed

    Nanjundaswamy, Manohar; Prabhu, Prashanth; Rajanna, Revathi Kittur; Ningegowda, Raghavendra Gulaganji; Sharma, Madhuri

    2018-01-01

    Introduction  Communication breakdown, a consequence of hearing impairment (HI), has been fought by fitting amplification devices and providing auditory training since the inception of audiology. The advances in both audiology and rehabilitation programs have led to the advent of computer-based auditory training programs (CBATPs). Objective  To review the existing literature documenting the evidence-based CBATPs for children with HIs. Since there was only one such article, we also chose to review the commercially available CBATPs for children with HI. The strengths and weaknesses of the existing literature were reviewed in order to improve further research. Data Synthesis  Google Scholar and PubMed databases were searched using various combinations of keywords. The participant, intervention, control, outcome and study design (PICOS) criteria were used for the inclusion of articles. Out of 124 article abstracts reviewed, 5 studies were shortlisted for detailed reading. One among them satisfied all the criteria, and was taken for review. The commercially available programs were chosen based on an extensive search in Google. The reviewed article was well-structured, with appropriate outcomes. The commercially available programs cover many aspects of auditory training through a wide range of stimuli and activities. Conclusions  There is a dire need for extensive research in the field of CBATPs to establish their efficacy and to establish them as evidence-based practices.

  1. Computer-Based Auditory Training Programs for Children with Hearing Impairment – A Scoping Review

    PubMed Central

    Nanjundaswamy, Manohar; Prabhu, Prashanth; Rajanna, Revathi Kittur; Ningegowda, Raghavendra Gulaganji; Sharma, Madhuri

    2018-01-01

    Introduction  Communication breakdown, a consequence of hearing impairment (HI), has been fought by fitting amplification devices and providing auditory training since the inception of audiology. The advances in both audiology and rehabilitation programs have led to the advent of computer-based auditory training programs (CBATPs). Objective  To review the existing literature documenting the evidence-based CBATPs for children with HIs. Since there was only one such article, we also chose to review the commercially available CBATPs for children with HI. The strengths and weaknesses of the existing literature were reviewed in order to improve further research. Data Synthesis  Google Scholar and PubMed databases were searched using various combinations of keywords. The participant, intervention, control, outcome and study design (PICOS) criteria were used for the inclusion of articles. Out of 124 article abstracts reviewed, 5 studies were shortlisted for detailed reading. One among them satisfied all the criteria, and was taken for review. The commercially available programs were chosen based on an extensive search in Google. The reviewed article was well-structured, with appropriate outcomes. The commercially available programs cover many aspects of auditory training through a wide range of stimuli and activities. Conclusions  There is a dire need for extensive research in the field of CBATPs to establish their efficacy and to establish them as evidence-based practices. PMID:29371904

  2. GREEN SUPERCOMPUTING IN A DESKTOP BOX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HSU, CHUNG-HSING; FENG, WU-CHUN; CHING, AVERY

    2007-01-17

    The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. However, by the mid-1990s, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated computer cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured, with respect to cluster hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren - a shared (and power-hungry) datacenter resource that must reside in a machine-cooled room in order to operate properly. Consequently, the above observations, when coupled with the ever-increasing performance gap between the PC and cluster supercomputer, provide the motivation for a 'green' desktop supercomputer - a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 'pizza box' workstation. In this paper, they present the hardware and software architecture of such a solution as well as its prowess as a developmental platform for parallel codes. In short, imagine a 12-node personal desktop supercomputer that achieves 14 Gflops on Linpack but sips only 185 watts of power at load, resulting in a performance-power ratio that is over 300% better than their reference SMP platform.

  3. Identifying the relationship between feedback provided in computer-assisted instructional modules, science self-efficacy, and academic achievement

    NASA Astrophysics Data System (ADS)

    Mazingo, Diann Etsuko

    Feedback has been identified as a key variable in developing academic self-efficacy. The types of feedback can vary from a traditional, objectivist approach that focuses on minimizing learner errors to a more constructivist approach focusing on facilitating understanding. The influx of computer-based courses, whether online or through a series of computer-assisted instruction (CAI) modules, requires that the current research on effective feedback techniques in the classroom be extended to computer environments in order to impact their instructional design. In this study, exposure to different types of feedback during a chemistry CAI module was studied in relation to science self-efficacy (SSE) and performance on an objective-driven assessment (ODA) of the chemistry concepts covered in the unit. The quantitative analysis consisted of two separate ANCOVAs on the dependent variables, using the pretest as the covariate and group as the fixed factor. No significant differences were found for either variable between the three groups on adjusted posttest means for the ODA and SSE measures (F(2, 106) = 1.311, p = 0.274 and F(2, 106) = 1.080, p = 0.344, respectively). However, a mixed methods approach yielded valuable qualitative insights into why only one overall quantitative effect was observed. These findings are discussed in relation to the need to further refine the instruments and methods used in order to more fully explore the possibility that type of feedback might play a role in developing SSE and, consequently, improve academic performance in science. Future research building on this study may reveal significance that could impact instructional design practices for developing online and computer-based instruction.

  4. A parallel algorithm for the initial screening of space debris collisions prediction using the SGP4/SDP4 models and GPU acceleration

    NASA Astrophysics Data System (ADS)

    Lin, Mingpei; Xu, Ming; Fu, Xiaoyu

    2017-05-01

    Currently, a tremendous amount of space debris in Earth's orbit imperils operational spacecraft. It is essential to undertake risk assessments of collisions and predict dangerous encounters in space. However, collision predictions for an enormous amount of space debris give rise to large-scale computations. In this paper, a parallel algorithm is established on the Compute Unified Device Architecture (CUDA) platform of NVIDIA Corporation for collision prediction. According to the parallel structure of NVIDIA graphics processors, a block decomposition strategy is adopted in the algorithm. Space debris is divided into batches, and the computation and data transfer operations of adjacent batches overlap. As a consequence, the latency to access shared memory during the entire computing process is significantly reduced, and a higher computing speed is reached. Theoretically, a simulation of collision prediction for any amount of space debris and for any time span can be executed. To verify this algorithm, a simulation example including 1382 pieces of debris, whose operational time scales vary from 1 min to 3 days, is conducted on an NVIDIA Tesla C2075. The simulation results demonstrate that with the same computational accuracy as that of a CPU, the computing speed of the parallel algorithm on a GPU is 30 times that on a CPU. Based on this algorithm, collision prediction for over 150 Chinese spacecraft for a time span of 3 days can be completed in less than 3 h on a single computer, which meets the timeliness requirement of the initial screening task. Furthermore, the algorithm can be adapted for multiple tasks, including particle filtration, constellation design, and Monte Carlo simulation of an orbital computation.
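
    The block decomposition can be pictured independently of the GPU details: the catalogue is split into batches, each batch is screened for close approaches against the protected asset, and on the device the kernel for one batch overlaps with the transfer of the next (e.g., via CUDA streams). The sketch below shows only the batch-and-screen skeleton on synthetic positions, with propagation by SGP4/SDP4 assumed to happen elsewhere; it is not the paper's implementation, and the threshold and batch size are placeholders.

      # Conceptual sketch of the batching idea only: screen debris batches against a
      # spacecraft ephemeris; NumPy stands in for the per-batch GPU kernel.
      import numpy as np

      def screen_batches(debris_pos, spacecraft_pos, batch_size=256, threshold_km=10.0):
          """debris_pos: (n_debris, n_times, 3); spacecraft_pos: (n_times, 3), both in km."""
          hits = []
          for start in range(0, debris_pos.shape[0], batch_size):
              batch = debris_pos[start:start + batch_size]                     # "transfer" one batch
              d = np.linalg.norm(batch - spacecraft_pos[None, :, :], axis=-1)  # "kernel": distances
              idx, t_idx = np.nonzero(d < threshold_km)
              hits += [(start + i, t) for i, t in zip(idx, t_idx)]
          return hits                                                          # (debris index, time index)

      rng = np.random.default_rng(1)
      debris = rng.uniform(-7000, 7000, size=(1382, 200, 3))   # placeholder ephemerides
      sc = rng.uniform(-7000, 7000, size=(200, 3))
      print(len(screen_batches(debris, sc)))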

  5. The spinal posture of computing adolescents in a real-life setting

    PubMed Central

    2014-01-01

    Background It is assumed that good postural alignment is associated with a lower likelihood of musculoskeletal pain symptoms. However, interventions encouraging good sitting posture have not produced consequent reductions in musculoskeletal pain in school-based populations, possibly because of a lack of clear understanding of what constitutes good posture. This paper therefore describes the variability of postural angles in a cohort of asymptomatic high-school students while they worked on desktop computers in a school computer classroom, and reports on the relationship between the postural angles and age, gender, height, weight and computer use. Methods The baseline data from a 12-month longitudinal study are reported. The study was conducted in South African school computer classrooms. 194 Grade 10 high-school students from randomly selected high schools, aged 15–17 years, enrolled in Computer Application Technology for the first time, asymptomatic during the preceding month, and from whom written informed consent was obtained, participated in the study. The 3D Posture Analysis Tool captured five postural angles (head flexion, neck flexion, cranio-cervical angle, trunk flexion and head lateral bend) while the students were working on desktop computers. Height, weight and computer use were also measured. Individual and combined postural angles were analysed. Results 944 students were screened for eligibility, of whom the data of 194 students are reported. Trunk flexion was the most variable angle. Increased neck flexion and the combination of increased head flexion, neck flexion and trunk flexion were significantly associated with increased weight and BMI (p = 0.0001). Conclusions High-school students sit with greater ranges of trunk flexion (leaning forward or reclining) when using the classroom computer. Increased weight is significantly associated with increased sagittal-plane postural angles. PMID:24950887

  6. Fast parallel tandem mass spectral library searching using GPU hardware acceleration

    PubMed Central

    Baumgardner, Lydia Ashleigh; Shanmugam, Avinash Kumar; Lam, Henry; Eng, Jimmy K.; Martin, Daniel B.

    2011-01-01

    Mass spectrometry-based proteomics is a maturing discipline of biologic research that is experiencing substantial growth. Instrumentation has steadily improved over time with the advent of faster and more sensitive instruments collecting ever larger data files. Consequently, the computational process of matching a peptide fragmentation pattern to its sequence, traditionally accomplished by sequence database searching and more recently also by spectral library searching, has become a bottleneck in many mass spectrometry experiments. In both of these methods, the main rate-limiting step is the comparison of an acquired spectrum with all potential matches from a spectral library or sequence database. This is a highly parallelizable process because the core computational element can be represented as a simple but arithmetically intense multiplication of two vectors. In this paper we present a proof-of-concept project taking advantage of the massively parallel computing available on graphics processing units (GPUs) to distribute and accelerate the process of spectral assignment using spectral library searching. This program, which we have named FastPaSS (for Fast Parallelized Spectral Searching), is implemented in CUDA (Compute Unified Device Architecture) from NVIDIA, which allows direct access to the processors in an NVIDIA GPU. Our efforts demonstrate the feasibility of GPU computing for spectral assignment, through implementation of the validated spectral searching algorithm SpectraST in the CUDA environment. PMID:21545112
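
    The core arithmetic the abstract refers to, scoring a query spectrum against every library entry with a dot product, can be written compactly on the CPU; the sketch below only illustrates that operation (the part FastPaSS parallelizes on the GPU), and the binning and normalization choices are assumptions.

```python
# Illustrative CPU version of the arithmetic a GPU spectral-library search
# parallelizes: each spectrum is binned into a fixed-length intensity vector
# and the query is scored against the whole library with one matrix product.
import numpy as np

def bin_spectrum(mz, intensity, bin_width=1.0, max_mz=2000.0):
    """Convert a peak list into a fixed-length, L2-normalized vector."""
    n_bins = int(max_mz / bin_width)
    vec = np.zeros(n_bins)
    idx = np.clip((np.asarray(mz) / bin_width).astype(int), 0, n_bins - 1)
    np.add.at(vec, idx, intensity)
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def search(query_vec, library_matrix, top_k=5):
    """Dot-product similarity of the query against every library spectrum."""
    scores = library_matrix @ query_vec
    order = np.argsort(scores)[::-1][:top_k]
    return list(zip(order.tolist(), scores[order].tolist()))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    library = np.stack([bin_spectrum(rng.uniform(100, 2000, 50), rng.random(50))
                        for _ in range(1000)])
    query = library[42] + 0.05 * rng.random(library.shape[1])
    print(search(query / np.linalg.norm(query), library))
```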

  7. The 'partial resonance' of the ring in the NLO crystal melaminium formate: study using vibrational spectra, DFT, HOMO-LUMO and MESP mapping.

    PubMed

    Binoy, J; Marchewka, M K; Jayakumar, V S

    2013-03-01

    The molecular geometry and vibrational spectra of melaminium formate, a material known for its toxicity and NLO activity, have been investigated. The FT-IR and FT-Raman spectral analysis of melaminium formate was performed with the aid of computed spectra of melaminium formate, triazine, melamine, the melaminium ion and the formate ion, along with bond orders and the PED, computed using the density functional method (B3LYP) with the 6-31G(d) basis set and XRD data. The analysis reveals intermolecular interactions of the amino groups with neighbouring formula units in the crystal, intramolecular H⋯H repulsion between amino-group hydrogen and the protonating hydrogen, a consequent loss of resonance in the melaminium ring, restriction of resonance to the N(3)C(1)N(1) moiety leading to a special 'partial' resonance of the ring, and the resonance structure of the CO(2) group of the formate ion. The 3D matrix of hyperpolarizability tensor components has been computed to quantify the NLO activity of melamine, melaminium and melaminium formate, and the hyperpolarizability enhancement is analysed using computed plots of the HOMO and LUMO orbitals. A new mechanism of proton transfer responsible for NLO activity has been suggested, based on anomalous IR spectral bands in the high-wavenumber region. The computed MEP contour maps have been used to analyse the interaction of melaminium and formate ions in the crystal. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. The `partial resonance' of the ring in the NLO crystal melaminium formate: Study using vibrational spectra, DFT, HOMO-LUMO and MESP mapping

    NASA Astrophysics Data System (ADS)

    Binoy, J.; Marchewka, M. K.; Jayakumar, V. S.

    2013-03-01

    The molecular geometry and vibrational spectra of melaminium formate, a material known for its toxicity and NLO activity, have been investigated. The FT-IR and FT-Raman spectral analysis of melaminium formate was performed with the aid of computed spectra of melaminium formate, triazine, melamine, the melaminium ion and the formate ion, along with bond orders and the PED, computed using the density functional method (B3LYP) with the 6-31G(d) basis set and XRD data. The analysis reveals intermolecular interactions of the amino groups with neighbouring formula units in the crystal, intramolecular H⋯H repulsion between amino-group hydrogen and the protonating hydrogen, a consequent loss of resonance in the melaminium ring, restriction of resonance to the N3C1N1 moiety leading to a special 'partial' resonance of the ring, and the resonance structure of the CO2 group of the formate ion. The 3D matrix of hyperpolarizability tensor components has been computed to quantify the NLO activity of melamine, melaminium and melaminium formate, and the hyperpolarizability enhancement is analysed using computed plots of the HOMO and LUMO orbitals. A new mechanism of proton transfer responsible for NLO activity has been suggested, based on anomalous IR spectral bands in the high-wavenumber region. The computed MEP contour maps have been used to analyse the interaction of melaminium and formate ions in the crystal.

  9. Inference in fuzzy rule bases with conflicting evidence

    NASA Technical Reports Server (NTRS)

    Koczy, Laszlo T.

    1992-01-01

    Inference based on fuzzy 'If ... then' rules has played a very important role since Zadeh proposed the Compositional Rule of Inference (CRI) and, especially, since the first successful application presented by Mamdani. From the mid-1980s, when the 'fuzzy boom' started in Japan, numerous industrial applications appeared, all using simplified techniques because of the high computational complexity involved. Another common feature is that the antecedents of the rules are distributed densely in the input space, so the conclusion can be calculated as a weighted combination of the consequents of the matching (fired) rules. The CRI works in the following way: if R is a rule and A* is an observation, the conclusion is computed as B* = R o A* (where o stands for the max-min composition). Algorithms implementing this idea directly have exponential time complexity (the problem may be NP-hard), as the rules are relations in X x Y, a (k1 + k2)-dimensional space if X is k1-dimensional and Y is k2-dimensional. The simplified techniques usually decompose the relation into k1 projections onto the X(sub i) and measure in some way the degree of similarity between observation and antecedent by some parameter of their overlap. These parameters are aggregated to a single value in (0,1), which is applied as the resulting weight for the given rule. The projections of the rules onto the dimensions Y(sub i) are weighted by these aggregated values and then combined to obtain a conclusion separately in every dimension. This method is inapplicable to sparse rule bases, as there is no guarantee that an arbitrary observation matches any of the antecedents; in that case the degree of similarity is 0 and all consequents are weighted by 0. Some considerations for such a situation are summarized in the next sections.
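
    The compositional rule of inference as stated above, B* = R o A* under the max-min composition, can be reproduced for discretized fuzzy sets in a few lines; this is a minimal illustration of the formula, not of the simplified industrial techniques discussed in the abstract.

```python
# Minimal sketch of the Compositional Rule of Inference on discretized
# universes: the conclusion is B* = R o A* under the max-min composition.
import numpy as np

def rule_relation(antecedent, consequent):
    """Mamdani-style relation R(x, y) = min(A(x), B(y))."""
    return np.minimum.outer(antecedent, consequent)

def cri(relation, observation):
    """B*(y) = max_x min(A*(x), R(x, y))."""
    return np.max(np.minimum(observation[:, None], relation), axis=0)

if __name__ == "__main__":
    A = np.array([0.0, 0.5, 1.0, 0.5, 0.0])       # antecedent fuzzy set on X
    B = np.array([0.0, 1.0, 0.0])                 # consequent fuzzy set on Y
    R = rule_relation(A, B)
    A_star = np.array([0.0, 0.0, 0.5, 1.0, 0.5])  # shifted observation
    print(cri(R, A_star))                         # inferred conclusion B*
```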

  10. Gap models and their individual-based relatives in the assessment of the consequences of global change

    NASA Astrophysics Data System (ADS)

    Shugart, Herman H.; Wang, Bin; Fischer, Rico; Ma, Jianyong; Fang, Jing; Yan, Xiaodong; Huth, Andreas; Armstrong, Amanda H.

    2018-03-01

    Individual-based models (IBMs) of complex systems emerged in the 1960s and early 1970s across diverse disciplines from astronomy to zoology. Ecological IBMs arose with seemingly independent origins out of the tradition of understanding the dynamics of ecosystems from a ‘bottom-up’ accounting of the interactions of the parts. Individual trees are principal among the parts of forests. Because these models are computationally demanding, they have prospered as the power of digital computers has increased exponentially over the decades following the 1970s. This review focuses on a class of forest IBMs called gap models. Gap models simulate the changes in forests by simulating the birth, growth and death of each individual tree on a small plot of land; the collection of these plots comprises a forest (or a set of sample plots on a forested landscape or region). Other, more aggregated forest IBMs have been used in global applications, including cohort-based models, ecosystem demography models, etc., and gap models have been used to provide the parameters for these bulk models. Gap models have now grown from local-scale to continental-scale and even global-scale applications to assess the potential consequences of climate change on natural forests. Modifications to the models have enabled simulation of disturbances including fire, insect outbreak and harvest. Our objective in this review is to provide the reader with an overview of the history, motivation and applications, including theoretical applications, of these models. In a time of concern over global changes, gap models are essential tools for understanding forest responses to climate change, modified disturbance regimes and other change agents. Development of forest surveys to provide the starting points for simulations, and better estimates of the behavior of the diversity of tree species in response to the environment, are continuing needs for these and other IBMs.
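
    The birth-growth-death cycle that defines a gap model can be caricatured in a few lines; the sketch below is a toy with made-up growth and mortality rules, intended only to show the structure of the simulation loop, not the behavior of any published gap model.

```python
# Toy gap-model cycle: each year every tree on a small plot grows, may die,
# and new saplings may establish. Growth and mortality rules are illustrative
# placeholders, not parameters from any published gap model.
import random

def simulate_plot(years=200, seed=0):
    rng = random.Random(seed)
    trees = [{"dbh": rng.uniform(1.0, 30.0)} for _ in range(20)]   # diameters, cm
    for _ in range(years):
        basal_area = sum(3.1416 * (t["dbh"] / 200.0) ** 2 for t in trees)  # m^2
        crowding = min(1.0, basal_area / 3.0)            # crude competition index
        survivors = []
        for t in trees:
            t["dbh"] += 0.5 * (1.0 - crowding)           # growth slows with crowding
            if rng.random() > 0.02 + 0.03 * crowding:    # background + stress mortality
                survivors.append(t)
        trees = survivors
        for _ in range(rng.randint(0, 3)):               # birth: new saplings
            if crowding < 0.9:
                trees.append({"dbh": 1.0})
    return trees

if __name__ == "__main__":
    final = simulate_plot()
    print(len(final), "trees; mean dbh =",
          round(sum(t["dbh"] for t in final) / max(len(final), 1), 1), "cm")
```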

  11. Combining operational models and data into a dynamic vessel risk assessment tool for coastal regions

    NASA Astrophysics Data System (ADS)

    Fernandes, R.; Braunschweig, F.; Lourenço, F.; Neves, R.

    2016-02-01

    The technological evolution in computational capacity, data acquisition systems, numerical modelling and operational oceanography is creating opportunities for designing and building holistic approaches and complex tools for newer and more efficient management (planning, prevention and response) of coastal water pollution risk events. A combined methodology has been developed to dynamically estimate time- and space-variable individual vessel accident risk levels and shoreline contamination risk from ships, integrating numerical metocean forecasts and oil spill simulations with vessel tracking from automatic identification systems (AIS). The risk rating combines the likelihood of an oil spill occurring from a vessel navigating in a study area - the Portuguese continental shelf - with the assessed consequences to the shoreline. The spill likelihood is based on dynamic marine weather conditions and statistical information from previous accidents. The shoreline consequences reflect the virtual spilled oil amount reaching the shoreline and its environmental and socio-economic vulnerabilities. The oil reaching the shoreline is quantified with an oil spill fate and behaviour model running multiple virtual spills from vessels over time or, as an alternative, with a correction factor based on vessel distance from the coast. Shoreline risks can be computed in real time or from previously obtained data. Results show the ability of the proposed methodology to estimate risk with proper sensitivity to dynamic metocean conditions and to oil transport behaviour. The integration of metocean and oil spill models with coastal vulnerability and AIS data in the quantification of risk enhances maritime situational awareness and decision support, providing a more realistic approach to the assessment of shoreline impacts. Risk assessment from historical data can help identify typical risk patterns ("hot spots") or support sensitivity analyses for specific conditions, whereas real-time risk levels can be used to prioritize individual ships and geographical areas, position tugs strategically and implement dynamic risk-based vessel traffic monitoring.
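
    The risk rating described above, a spill likelihood modulated by dynamic sea state combined with a shoreline consequence score, can be sketched as follows; the categories, weights and the distance-decay correction are illustrative assumptions, not the parameters of the operational tool.

```python
# Hedged sketch of the risk rating idea: likelihood of a spill from a vessel
# (modulated by dynamic sea state) combined with a shoreline consequence score.
# Categories, weights and the distance correction are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Vessel:
    name: str
    accident_rate: float        # historical accidents per ship-year (assumed)
    cargo_tonnes: float         # potential spill volume proxy
    distance_to_coast_km: float

def spill_likelihood(v: Vessel, wave_height_m: float) -> float:
    """Base accident rate scaled up in rough seas (assumed scaling)."""
    weather_factor = 1.0 + 0.5 * max(0.0, wave_height_m - 2.0)
    return v.accident_rate * weather_factor

def shoreline_consequence(v: Vessel, shoreline_vulnerability: float) -> float:
    """Oil reaching shore approximated by a distance-decay correction factor."""
    reaching_fraction = max(0.0, 1.0 - v.distance_to_coast_km / 50.0)
    return v.cargo_tonnes * reaching_fraction * shoreline_vulnerability

def risk(v: Vessel, wave_height_m: float, shoreline_vulnerability: float) -> float:
    return spill_likelihood(v, wave_height_m) * shoreline_consequence(v, shoreline_vulnerability)

if __name__ == "__main__":
    tanker = Vessel("tanker_A", accident_rate=2e-3, cargo_tonnes=80_000, distance_to_coast_km=12)
    ferry = Vessel("ferry_B", accident_rate=1e-3, cargo_tonnes=500, distance_to_coast_km=5)
    for ship in (tanker, ferry):
        print(ship.name, round(risk(ship, wave_height_m=4.0, shoreline_vulnerability=0.8), 3))
```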

  12. A note on AB INITIO semiconductor band structures

    NASA Astrophysics Data System (ADS)

    Fiorentini, Vincenzo

    1992-09-01

    We point out that only the internal features of the DFT ab initio theoretical picture of a crystal should be used in a consistent ab initio calculation of the band structure. As a consequence, we show that ground-state band structure calculations should be performed for the system in equilibrium at zero pressure, i.e. at the computed equilibrium cell volume ω_th. Examples of the consequences of this approach are considered.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ritchie, L.T.; Johnson, J.D.; Blond, R.M.

    The CRAC2 computer code is a revision of the Calculation of Reactor Accident Consequences computer code, CRAC, developed for the Reactor Safety Study. CRAC2 incorporates significant modeling improvements in the areas of weather-sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is intended to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems.

  14. Computational Psychometrics for Modeling System Dynamics during Stressful Disasters.

    PubMed

    Cipresso, Pietro; Bessi, Alessandro; Colombo, Desirée; Pedroli, Elisa; Riva, Giuseppe

    2017-01-01

    Disasters can be very stressful events. However, computational models of stress require data that might be very difficult to collect during disasters. Moreover, personal experiences are not repeatable, so it is not possible to collect bottom-up information when building a coherent model. To overcome these problems, we propose integrating computational models and virtual reality to recreate disaster situations, while examining possible dynamics in order to understand human behavior and the related consequences. By providing realistic parameters associated with disaster situations, computational scientists can work more closely with emergency responders to improve the quality of interventions in the future.

  15. Personal Decision Factors Considered by Information Technology Executives: Their Impacts on Business Intentions and Consequent Cloud Computing Services Adoption Rates

    ERIC Educational Resources Information Center

    Smith, Marcus L., Jr.

    2016-01-01

    During its infancy, the cloud computing industry was the province largely of small and medium-sized business customers. Despite their size, these companies required a professionally run, yet economical information technology (IT) operation. These customers used a total value strategy whereby they avoided paying for essential, yet underutilized,…

  16. Teaching Web Application Development: A Case Study in a Computer Science Course

    ERIC Educational Resources Information Center

    Del Fabro, Marcos Didonet; de Alimeda, Eduardo Cunha; Sluzarski, Fabiano

    2012-01-01

    Teaching web development in Computer Science undergraduate courses is a difficult task. Often, there is a gap between the students' experiences and the reality in the industry. As a consequence, the students are not always well-prepared once they get the degree. This gap is due to several reasons, such as the complexity of the assignments, the…

  17. Clarifying the "A" in CAI for Learners of Different Abilities. Assessing the Cognitive consequences of Computer Environments for Learning (ACCCEL).

    ERIC Educational Resources Information Center

    Mandinach, Ellen B.

    This study investigated the degree to which 48 seventh and eighth grade students of different abilities acquired strategic planning knowledge from an intellectual computer game ("Wumpus"). Relationships between ability and student performance with two versions of the game were also investigated. The two versions differed in the structure…

  18. Multi-axis control based on movement control cards in NC systems

    NASA Astrophysics Data System (ADS)

    Jiang, Tingbiao; Wei, Yunquan

    2005-12-01

    Today most motion control cards need special control software on the host computer and are only suitable for fixed-axis control; consequently, the number of axes that can be controlled is limited. Advanced manufacturing technology is developing at a very high speed, and that development brings forth new requirements for motion control in mechanical and electronic systems. This paper introduces a fifth-generation motion control card, the PMAC 2A-PC/104, made by the Delta Tau Company in the USA. Based on an analysis of the PMAC 2A-PC/104, the paper first describes two relevant aspects: the hardware structure of the motion control card and the associated host-computer software. Then, two methods are presented for solving the problems above. The first method is to set limit switches on the motion control card, all of which can be used to control each moving axis. The second method is to program application software with an existing programming language (for example, VC++, Visual Basic, Delphi, and so forth); such a program is much easier for users to operate and extend. By using limit switches, users can select different axes on the motion control card, and by changing parameters in the host-computer control software they can realize control of different axes. Combining these two methods proves to be a convenient way to realize multi-axis control in numerical control systems.

  19. On the Impact of Execution Models: A Case Study in Computational Chemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Halappanavar, Mahantesh; Krishnamoorthy, Sriram

    2015-05-25

    Efficient utilization of high-performance computing (HPC) platforms is an important and complex problem. Execution models, abstract descriptions of the dynamic runtime behavior of the execution stack, have significant impact on the utilization of HPC systems. Using a computational chemistry kernel as a case study and a wide variety of execution models combined with load balancing techniques, we explore the impact of execution models on the utilization of an HPC system. We demonstrate a 50 percent improvement in performance by using work stealing relative to a more traditional static scheduling approach. We also use a novel semi-matching technique for load balancing that has comparable performance to a traditional hypergraph-based partitioning implementation, which is computationally expensive. Using this study, we found that execution model design choices and assumptions can limit critical optimizations such as global, dynamic load balancing and finding the correct balance between available work units and different system and runtime overheads. With the emergence of multi- and many-core architectures and the consequent growth in the complexity of HPC platforms, we believe that these lessons will be beneficial to researchers tuning diverse applications on modern HPC platforms, especially on emerging dynamic platforms with energy-induced performance variability.
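
    The contrast between static scheduling and work stealing can be illustrated with a minimal thread-based sketch: each worker owns a deque seeded by a static partition and steals from a victim's tail when its own work runs out. This shows only the scheduling idea and makes no claim about the authors' runtime.

```python
# Minimal work-stealing sketch: each worker owns a deque of tasks and steals
# from the tail of a victim's deque when its own runs dry. This only
# illustrates the scheduling idea discussed above, not the authors' runtime.
import threading, collections, random, time

def run_work_stealing(task_costs, n_workers=4):
    deques = [collections.deque() for _ in range(n_workers)]
    for i, cost in enumerate(task_costs):                 # static initial partition
        deques[i % n_workers].append(cost)
    lock = threading.Lock()
    done = [0] * n_workers

    def worker(wid):
        while True:
            with lock:
                if deques[wid]:
                    cost = deques[wid].popleft()          # take from own head
                else:
                    victims = [d for d in deques if d]
                    if not victims:
                        return
                    cost = random.choice(victims).pop()   # steal from a tail
            time.sleep(cost)                              # "execute" the task
            done[wid] += 1

    threads = [threading.Thread(target=worker, args=(w,)) for w in range(n_workers)]
    for t in threads: t.start()
    for t in threads: t.join()
    return done

if __name__ == "__main__":
    costs = [0.001] * 40 + [0.02] * 4                     # imbalanced workload
    print("tasks completed per worker:", run_work_stealing(costs))
```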

  20. RAID v2.0: an updated resource of RNA-associated interactions across organisms.

    PubMed

    Yi, Ying; Zhao, Yue; Li, Chunhua; Zhang, Lin; Huang, Huiying; Li, Yana; Liu, Lanlan; Hou, Ping; Cui, Tianyu; Tan, Puwen; Hu, Yongfei; Zhang, Ting; Huang, Yan; Li, Xiaobo; Yu, Jia; Wang, Dong

    2017-01-04

    With the development of biotechnologies and computational prediction algorithms, the number of experimentally determined and computationally predicted RNA-associated interactions has grown rapidly in recent years. However, these diverse RNA-associated interactions are scattered over a wide variety of resources and organisms, and a fully comprehensive view of diverse RNA-associated interactions is still not available for any species. Hence, we have updated the RAID database to version 2.0 (RAID v2.0, www.rna-society.org/raid/) by integrating experimental and computationally predicted interactions from manual literature curation and other database resources under one common framework. The new developments in RAID v2.0 include (i) an over 850-fold increase in RNA-associated interactions compared with the previous version; (ii) numerous integrated resources with experimental or computational prediction evidence for each RNA-associated interaction; (iii) a reliability assessment for each RNA-associated interaction based on an integrative confidence score; and (iv) an increase of species coverage to 60. Consequently, RAID v2.0 contains more than 5.27 million RNA-associated interactions, including more than 4 million RNA-RNA interactions and more than 1.2 million RNA-protein interactions, referring to nearly 130 000 RNA/protein symbols across 60 species. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  1. VEDA: a web-based virtual environment for dynamic atomic force microscopy.

    PubMed

    Melcher, John; Hu, Shuiqing; Raman, Arvind

    2008-06-01

    We describe here the theory and applications of the virtual environment for dynamic atomic force microscopy (VEDA), a suite of state-of-the-art simulation tools deployed on nanoHUB (www.nanohub.org) for the accurate simulation of tip motion in dynamic atomic force microscopy (dAFM) over organic and inorganic samples. VEDA takes advantage of nanoHUB's cyberinfrastructure to run high-fidelity dAFM tip dynamics computations on local clusters and the TeraGrid. Consequently, these tools are freely accessible and the dAFM simulations are run from standard web browsers without requiring additional software. A wide range of issues in dAFM, ranging from optimal probe choice, probe stability, tip-sample interaction forces and power dissipation to material property extraction and scanning dynamics over heterogeneous samples, can be addressed.

  2. Invited Article: VEDA: A web-based virtual environment for dynamic atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Melcher, John; Hu, Shuiqing; Raman, Arvind

    2008-06-01

    We describe here the theory and applications of the virtual environment for dynamic atomic force microscopy (VEDA), a suite of state-of-the-art simulation tools deployed on nanoHUB (www.nanohub.org) for the accurate simulation of tip motion in dynamic atomic force microscopy (dAFM) over organic and inorganic samples. VEDA takes advantage of nanoHUB's cyberinfrastructure to run high-fidelity dAFM tip dynamics computations on local clusters and the TeraGrid. Consequently, these tools are freely accessible and the dAFM simulations are run from standard web browsers without requiring additional software. A wide range of issues in dAFM, ranging from optimal probe choice, probe stability, tip-sample interaction forces and power dissipation to material property extraction and scanning dynamics over heterogeneous samples, can be addressed.

  3. A neural network model of metaphor understanding with dynamic interaction based on a statistical language analysis: targeting a human-like model.

    PubMed

    Terai, Asuka; Nakagawa, Masanori

    2007-08-01

    The purpose of this paper is to construct a model that represents the human process of understanding metaphors, focusing specifically on similes of the form 'an A like B'. Generally speaking, human beings are able to generate and understand many sorts of metaphors. This study constructs the model on the basis of a probabilistic knowledge structure for concepts, computed from a statistical analysis of a large-scale corpus. Consequently, the model is able to cover the many kinds of metaphors that human beings can generate. Moreover, the model implements the dynamic process of metaphor understanding by using a neural network with dynamic interactions. Finally, the validity of the model is confirmed by comparing model simulations with the results of a psychological experiment.

  4. Scalable Energy Efficiency with Resilience for High Performance Computing Systems: A Quantitative Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Li; Chen, Zizhong; Song, Shuaiwen

    2016-01-18

    Energy efficiency and resilience are two crucial challenges for HPC systems to reach exascale. While energy efficiency and resilience issues have been extensively studied individually, little has been done to understand the interplay between energy efficiency and resilience for HPC systems. Decreasing the supply voltage associated with a given operating frequency for processors and other CMOS-based components can significantly reduce power consumption. However, this often raises system failure rates and consequently increases application execution time. In this work, we present an energy saving undervolting approach that leverages the mainstream resilience techniques to tolerate the increased failures caused by undervolting.
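
    The trade-off described here, lower power from undervolting versus longer runtime from extra failures, can be made concrete with a back-of-the-envelope energy model; all numbers below are assumptions chosen only to illustrate the reasoning.

```python
# Back-of-the-envelope model of the undervolting trade-off described above:
# lower voltage cuts power roughly quadratically, but a higher failure rate
# adds recovery time, so net energy depends on both. All numbers are assumed.
def energy(base_time_s, base_power_w, voltage_scale, failure_rate_per_h, recovery_s):
    power = base_power_w * voltage_scale ** 2                # P ~ V^2 (rough CMOS scaling)
    expected_failures = failure_rate_per_h * base_time_s / 3600.0
    runtime = base_time_s + expected_failures * recovery_s   # checkpoint/restart overhead
    return power * runtime, runtime

if __name__ == "__main__":
    nominal, t_nom = energy(3600, 200, 1.00, failure_rate_per_h=0.01, recovery_s=120)
    undervolt, t_uv = energy(3600, 200, 0.90, failure_rate_per_h=0.10, recovery_s=120)
    print(f"nominal:     {nominal / 3.6e6:.2f} kWh over {t_nom:.0f} s")
    print(f"undervolted: {undervolt / 3.6e6:.2f} kWh over {t_uv:.0f} s")
```

    With these assumed numbers the undervolted run takes slightly longer yet uses less energy, which is the balance the resilience techniques are meant to preserve.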

  5. Adaptable Iterative and Recursive Kalman Filter Schemes

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato

    2014-01-01

    Nonlinear filters are often very computationally expensive and usually not suitable for real-time applications. Real-time navigation algorithms are typically based on linear estimators, such as the extended Kalman filter (EKF) and, to a much lesser extent, the unscented Kalman filter. The iterated Kalman filter (IKF) and the recursive update filter (RUF) are two algorithms that reduce the consequences of the linearization assumption of the EKF by performing N updates for each new measurement, where N, the number of recursions, is a tuning parameter. This paper introduces an adaptable RUF algorithm to calculate N on the fly; a similar technique can be used for the IKF as well.
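
    The idea of spending N smaller updates on a single measurement can be shown with a scalar example: the measurement noise is inflated by N so that N re-linearized updates replace one large EKF update. The nonlinear measurement model h(x) = x^2 and all numbers are illustrative choices, not the RUF as specified in the paper.

```python
# Scalar sketch of the "N partial updates per measurement" idea: the
# measurement noise is inflated by N so that N small, re-linearized updates
# replace one large EKF update. h(x) = x**2 is an illustrative nonlinearity.
def partial_updates(x, P, z, R, n_recursions=1):
    for _ in range(n_recursions):
        H = 2.0 * x                          # Jacobian of h(x) = x^2, re-linearized each pass
        R_split = R * n_recursions           # split the measurement into N pieces
        S = H * P * H + R_split
        K = P * H / S
        x = x + K * (z - x * x)
        P = (1.0 - K * H) * P
    return x, P

if __name__ == "__main__":
    truth = 3.0
    z = truth ** 2 + 0.1                     # one noisy measurement of x^2
    for N in (1, 3, 10):
        x, P = partial_updates(x=2.0, P=4.0, z=z, R=0.5, n_recursions=N)
        print(f"N={N:2d}: estimate={x:.4f}, error={abs(x - truth):.4f}")
```

    Increasing N lets the linearization point track the updated estimate, which is why the error shrinks as N grows in this toy case.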

  6. High-Fidelity Micromechanics Model Developed for the Response of Multiphase Materials

    NASA Technical Reports Server (NTRS)

    Aboudi, Jacob; Pindera, Marek-Jerzy; Arnold, Steven M.

    2002-01-01

    A new high-fidelity micromechanics model has been developed under funding from the NASA Glenn Research Center for predicting the response of multiphase materials with arbitrary periodic microstructures. The model's analytical framework is based on the homogenization technique, but the method of solution for the local displacement and stress fields borrows concepts previously employed in constructing the higher order theory for functionally graded materials. The resulting closed-form macroscopic and microscopic constitutive equations, valid for both uniaxial and multiaxial loading of periodic materials with elastic and inelastic constitutive phases, can be incorporated into a structural analysis computer code. Consequently, this model now provides an alternative, accurate method.

  7. Ensuring critical event sequences in high consequence computer based systems as inspired by path expressions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kidd, M.E.C.

    1997-02-01

    The goal of our work is to provide a high level of confidence that critical software driven event sequences are maintained in the face of hardware failures, malevolent attacks and harsh or unstable operating environments. This will be accomplished by providing dynamic fault management measures directly to the software developer and to their varied development environments. The methodology employed here is inspired by previous work in path expressions. This paper discusses the perceived problems, a brief overview of path expressions, the proposed methods, and a discussion of the differences between the proposed methods and traditional path expression usage and implementation.
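
    A minimal runtime monitor in the spirit of a path expression can enforce that a critical sequence of events occurs in order and that nothing out of order is accepted; the sketch below illustrates the idea only and is not the mechanism proposed in the paper.

```python
# Minimal runtime monitor in the spirit of a path expression: only the event
# sequence arm -> authorize -> fire is accepted; anything out of order is
# rejected and the monitor resets. A sketch of the idea, not the paper's design.
class SequenceMonitor:
    def __init__(self, required_sequence):
        self.required = list(required_sequence)
        self.position = 0

    def event(self, name):
        if self.position < len(self.required) and name == self.required[self.position]:
            self.position += 1
            return True                      # event accepted, sequence advances
        self.position = 0                    # violation: reset to a safe state
        return False

    @property
    def completed(self):
        return self.position == len(self.required)

if __name__ == "__main__":
    monitor = SequenceMonitor(["arm", "authorize", "fire"])
    for ev in ["arm", "fire"]:               # out-of-order attempt is rejected
        print(ev, "->", monitor.event(ev))
    for ev in ["arm", "authorize", "fire"]:  # correct critical sequence
        print(ev, "->", monitor.event(ev))
    print("sequence completed:", monitor.completed)
```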

  8. Investigating the Interplay between Energy Efficiency and Resilience in High Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Li; Song, Shuaiwen; Wu, Panruo

    2015-05-29

    Energy efficiency and resilience are two crucial challenges for HPC systems to reach exascale. While energy efficiency and resilience issues have been extensively studied individually, little has been done to understand the interplay between energy efficiency and resilience for HPC systems. Decreasing the supply voltage associated with a given operating frequency for processors and other CMOS-based components can significantly reduce power consumption. However, this often raises system failure rates and consequently increases application execution time. In this work, we present an energy saving undervolting approach that leverages the mainstream resilience techniques to tolerate the increased failures caused by undervolting.

  9. Scalable Energy Efficiency with Resilience for High Performance Computing Systems: A Quantitative Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Li; Chen, Zizhong; Song, Shuaiwen Leon

    2015-11-16

    Energy efficiency and resilience are two crucial challenges for HPC systems to reach exascale. While energy efficiency and resilience issues have been extensively studied individually, little has been done to understand the interplay between energy efficiency and resilience for HPC systems. Decreasing the supply voltage associated with a given operating frequency for processors and other CMOS-based components can significantly reduce power consumption. However, this often raises system failure rates and consequently increases application execution time. In this work, we present an energy saving undervolting approach that leverages the mainstream resilience techniques to tolerate the increased failures caused by undervolting.

  10. From computing with numbers to computing with words. From manipulation of measurements to manipulation of perceptions.

    PubMed

    Zadeh, L A

    2001-04-01

    Interest in issues relating to consciousness has grown markedly during the last several years. And yet, nobody can claim that consciousness is a well-understood concept that lends itself to precise analysis. It may be argued that, as a concept, consciousness is much too complex to fit into the conceptual structure of existing theories based on Aristotelian logic and probability theory. An approach suggested in this paper links consciousness to perceptions and perceptions to their descriptors in a natural language. In this way, those aspects of consciousness which relate to reasoning and concept formation are linked to what is referred to as the methodology of computing with words (CW). Computing, in its usual sense, is centered on manipulation of numbers and symbols. In contrast, computing with words, or CW for short, is a methodology in which the objects of computation are words and propositions drawn from a natural language (e.g., small, large, far, heavy, not very likely, the price of gas is low and declining, Berkeley is near San Francisco, it is very unlikely that there will be a significant increase in the price of oil in the near future, etc.). Computing with words is inspired by the remarkable human capability to perform a wide variety of physical and mental tasks without any measurements and any computations. Familiar examples of such tasks are parking a car, driving in heavy traffic, playing golf, riding a bicycle, understanding speech, and summarizing a story. Underlying this remarkable capability is the brain's crucial ability to manipulate perceptions--perceptions of distance, size, weight, color, speed, time, direction, force, number, truth, likelihood, and other characteristics of physical and mental objects. Manipulation of perceptions plays a key role in human recognition, decision and execution processes. As a methodology, computing with words provides a foundation for a computational theory of perceptions: a theory which may have an important bearing on how humans make--and machines might make--perception-based rational decisions in an environment of imprecision, uncertainty, and partial truth. A basic difference between perceptions and measurements is that, in general, measurements are crisp, whereas perceptions are fuzzy. One of the fundamental aims of science has been and continues to be that of progressing from perceptions to measurements. Pursuit of this aim has led to brilliant successes. We have sent men to the moon; we can build computers that are capable of performing billions of computations per second; we have constructed telescopes that can explore the far reaches of the universe; and we can date the age of rocks that are millions of years old. But alongside the brilliant successes stand conspicuous underachievements and outright failures. We cannot build robots that can move with the agility of animals or humans; we cannot automate driving in heavy traffic; we cannot translate from one language to another at the level of a human interpreter; we cannot create programs that can summarize non-trivial stories; our ability to model the behavior of economic systems leaves much to be desired; and we cannot build machines that can compete with children in the performance of a wide variety of physical and cognitive tasks. It may be argued that underlying the underachievements and failures is the unavailability of a methodology for reasoning and computing with perceptions rather than measurements. 
An outline of such a methodology--referred to as a computational theory of perceptions--is presented in this paper. The computational theory of perceptions (CTP) is based on the methodology of CW. In CTP, words play the role of labels of perceptions, and, more generally, perceptions are expressed as propositions in a natural language. CW-based techniques are employed to translate propositions expressed in a natural language into what is called the Generalized Constraint Language (GCL). In this language, the meaning of a proposition is expressed as a generalized constraint, X isr R, where X is the constrained variable, R is the constraining relation, and isr is a variable copula in which r is an indexing variable whose value defines the way in which R constrains X. Among the basic types of constraints are possibilistic, veristic, probabilistic, random set, Pawlak set, fuzzy graph, and usuality. The wide variety of constraints in GCL makes GCL a much more expressive language than the language of predicate logic. In CW, the initial and terminal data sets, IDS and TDS, are assumed to consist of propositions expressed in a natural language. These propositions are translated, respectively, into antecedent and consequent constraints. Consequent constraints are derived from antecedent constraints through the use of rules of constraint propagation. The principal constraint propagation rule is the generalized extension principle. (ABSTRACT TRUNCATED)

  11. A Primer on High-Throughput Computing for Genomic Selection

    PubMed Central

    Wu, Xiao-Lin; Beissinger, Timothy M.; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J. M.; Weigel, Kent A.; Gatti, Natalia de Leon; Gianola, Daniel

    2011-01-01

    High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful for devising pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massively parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin–Madison, which can be leveraged for genomic selection, in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet increasing computing demands posed by the unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized genetic gain). Eventually, HTC may change our view of data analysis as well as decision-making in the post-genomic era of selection programs in animals and plants, or in the study of complex diseases in humans. PMID:22303303
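
    The simplest form of the batch processing described above is to farm independent trait evaluations out to worker processes; in the sketch below the "model" is a placeholder dot product standing in for a real genomic prediction, and the sizes are arbitrary.

```python
# Small batch-processing sketch of the HTC idea: candidates are scored for
# several traits in parallel worker processes. The "model" is a placeholder
# dot product standing in for a real genomic prediction model.
import numpy as np
from multiprocessing import Pool

def predict_trait(args):
    trait_name, marker_effects, genotypes = args
    gebv = genotypes @ marker_effects                 # genomic estimated breeding values
    return trait_name, gebv.round(3)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    genotypes = rng.integers(0, 3, size=(100, 5000)).astype(float)   # 100 candidates
    jobs = [(f"trait_{i}", rng.normal(0, 0.01, 5000), genotypes) for i in range(4)]
    with Pool(processes=4) as pool:
        for trait, values in pool.map(predict_trait, jobs):
            print(trait, "first five GEBVs:", values[:5])
```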

  12. A method for semi-automatic segmentation and evaluation of intracranial aneurysms in bone-subtraction computed tomography angiography (BSCTA) images

    NASA Astrophysics Data System (ADS)

    Krämer, Susanne; Ditt, Hendrik; Biermann, Christina; Lell, Michael; Keller, Jörg

    2009-02-01

    The rupture of an intracranial aneurysm has dramatic consequences for the patient, so early detection of unruptured aneurysms is of paramount importance. Bone-subtraction computed tomography angiography (BSCTA) has proven to be a powerful tool for the detection of aneurysms, in particular those located close to the skull base. Most aneurysms, though, are chance findings in BSCTA scans performed for other reasons. It is therefore highly desirable to have techniques operating on standard BSCTA scans that assist radiologists and surgeons in the evaluation of intracranial aneurysms. In this paper we present a semi-automatic method for segmentation and assessment of intracranial aneurysms. The only user interaction required is the placement of a marker in the vascular malformation. Termination ensues automatically as soon as the segmentation reaches the vessels which feed the aneurysm. The algorithm is derived from an adaptive region-growing which employs a growth gradient as its termination criterion. Based on this segmentation, values of high clinical and prognostic significance, such as volume, minimum and maximum diameter, and surface of the aneurysm, are calculated automatically. The segmentation itself as well as the calculated diameters are visualised. Further segmentation of the adjoining vessels provides the means for visualisation of the topographical situation of vascular structures associated with the aneurysm. A stereolithographic mesh (STL) can be derived from the surface of the segmented volume. The STL, together with parameters like the resiliency of vascular wall tissue, provides an accurate wall model of the aneurysm and its associated vascular structures. Consequently the haemodynamic situation in and close to the aneurysm can be assessed by flow modelling. Significant haemodynamic values such as pressure on the vascular wall, wall shear stress or pathlines of the blood flow can be computed, and a dynamic flow model can be generated. Thus the presented method supports a better understanding of the clinical situation and assists the evaluation of therapeutic options. Furthermore it contributes to future research addressing intervention planning and prognostic assessment of intracranial aneurysms.
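
    The adaptive region-growing step can be illustrated with a 2-D toy: starting from the user-placed seed, neighbouring pixels are added while their intensity stays close to the running region mean. The stopping rule here is a simple stand-in for the growth-gradient criterion described above, not a reimplementation of it.

```python
# 2-D toy of seeded region growing with an adaptive intensity criterion.
# The stopping rule (intensity within a band around the running region mean)
# is a simple stand-in for the growth-gradient termination described above.
import numpy as np
from collections import deque

def region_grow(image, seed, tolerance=0.15):
    grown = np.zeros(image.shape, dtype=bool)
    queue = deque([seed])
    grown[seed] = True
    region_sum, region_n = float(image[seed]), 1
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < image.shape[0] and 0 <= nc < image.shape[1] and not grown[nr, nc]:
                if abs(image[nr, nc] - region_sum / region_n) < tolerance:
                    grown[nr, nc] = True
                    queue.append((nr, nc))
                    region_sum += float(image[nr, nc])
                    region_n += 1
    return grown

if __name__ == "__main__":
    img = np.zeros((64, 64)) + 0.1
    img[20:40, 20:40] = 0.9                      # bright "aneurysm" blob
    mask = region_grow(img, seed=(30, 30))
    print("segmented pixels:", int(mask.sum()), "of", img.size)
```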

  13. Computerized patient identification for the EMBRACA clinical trial using real-time data from the PRAEGNANT network for metastatic breast cancer patients.

    PubMed

    Hein, Alexander; Gass, Paul; Walter, Christina Barbara; Taran, Florin-Andrei; Hartkopf, Andreas; Overkamp, Friedrich; Kolberg, Hans-Christian; Hadji, Peyman; Tesch, Hans; Ettl, Johannes; Wuerstlein, Rachel; Lounsbury, Debra; Lux, Michael P; Lüftner, Diana; Wallwiener, Markus; Müller, Volkmar; Belleville, Erik; Janni, Wolfgang; Fehm, Tanja N; Wallwiener, Diethelm; Ganslandt, Thomas; Ruebner, Matthias; Beckmann, Matthias W; Schneeweiss, Andreas; Fasching, Peter A; Brucker, Sara Y

    2016-07-01

    As breast cancer is a diverse disease, clinical trials are becoming increasingly diversified and are consequently being conducted in very small subgroups of patients, making study recruitment increasingly difficult. The aim of this study was to assess the use of data from a remote data entry system that serves a large national registry for metastatic breast cancer. The PRAEGNANT network is a real-time registry with an integrated biomaterials bank that was designed as a scientific study and as a means of identifying patients who are eligible for clinical trials, based on clinical and molecular information. Here, we report on the automated use of the clinical data documented to identify patients for a clinical trial (EMBRACA) for patients with metastatic breast cancer. The patients' charts were assessed by two independent physicians involved in the clinical trial and also by a computer program that tested patients for eligibility using a structured query language script. In all, 326 patients from two study sites in the PRAEGNANT network were included in the analysis. Using expert assessment, 120 of the 326 patients (37 %) appeared to be eligible for inclusion in the EMBRACA study; with the computer algorithm assessment, a total of 129 appeared to be eligible. The sensitivity of the computer algorithm was 0.87 and its specificity was 0.88. Using computer-based identification of patients for clinical trials appears feasible. With the instrument's high specificity, its application in a large cohort of patients appears to be feasible, and the workload for reassessing the patients is limited.
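
    The reported sensitivity and specificity follow directly from the confusion matrix of algorithm decisions against expert assessment. The counts below are an approximate reconstruction consistent with the reported totals (120 expert-eligible, 129 algorithm-eligible, n = 326), not figures taken from the paper.

```python
# Sensitivity and specificity from a confusion matrix of algorithm decisions
# against expert assessment. The counts are an approximate reconstruction from
# the reported totals, not figures taken from the paper.
def sens_spec(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

if __name__ == "__main__":
    tp, fp, fn, tn = 104, 25, 16, 181      # 104+25 = 129 flagged, 104+16 = 120 truly eligible
    sens, spec = sens_spec(tp, fp, fn, tn)
    print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")   # ~0.87 and ~0.88
```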

  14. What Can Pictures Tell Us About Web Pages? Improving Document Search Using Images.

    PubMed

    Rodriguez-Vaamonde, Sergio; Torresani, Lorenzo; Fitzgibbon, Andrew W

    2015-06-01

    Traditional Web search engines do not use the images in the HTML pages to find relevant documents for a given query. Instead, they typically operate by computing a measure of agreement between the keywords provided by the user and only the text portion of each page. In this paper we study whether the content of the pictures appearing in a Web page can be used to enrich the semantic description of an HTML document and consequently boost the performance of a keyword-based search engine. We present a Web-scalable system that exploits a pure text-based search engine to find an initial set of candidate documents for a given query. Then, the candidate set is reranked using visual information extracted from the images contained in the pages. The resulting system retains the computational efficiency of traditional text-based search engines with only a small additional storage cost needed to encode the visual information. We test our approach on one of the TREC Million Query Track benchmarks where we show that the exploitation of visual content yields improvement in accuracies for two distinct text-based search engines, including the system with the best reported performance on this benchmark. We further validate our approach by collecting document relevance judgements on our search results using Amazon Mechanical Turk. The results of this experiment confirm the improvement in accuracy produced by our image-based reranker over a pure text-based system.
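
    The reranking step can be reduced to a one-line blend of a text score and a visual score per candidate page; the scores and the mixing weight in the sketch below are illustrative assumptions, not the learned model used in the paper.

```python
# Sketch of the reranking idea: a text engine returns candidate pages with
# text scores, and a visual score computed from the images on each page is
# blended in to rerank the candidates. Scores and weight are illustrative.
def rerank(candidates, alpha=0.7):
    """candidates: list of (doc_id, text_score, visual_score), scores in [0, 1]."""
    blended = [(doc, alpha * t + (1.0 - alpha) * v) for doc, t, v in candidates]
    return sorted(blended, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    initial = [                        # from a pure text-based engine, best first
        ("page_A", 0.92, 0.10),
        ("page_B", 0.90, 0.85),
        ("page_C", 0.75, 0.95),
    ]
    for doc, score in rerank(initial):
        print(doc, round(score, 3))    # page_B overtakes page_A once images count
```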

  15. Health information regarding diabetes mellitus reduces misconceptions and underestimation of consequences in the general population.

    PubMed

    Dorner, Thomas E; Lackinger, Christian; Schindler, Karin; Stein, K Viktoria; Rieder, Anita; Ludvik, Bernhard

    2013-11-01

    To evaluate self-assessed knowledge about diabetes mellitus, to assess determinants of health knowledge and to evaluate consequences of health knowledge on appraisal about consequences of the disease. Population-based computer-assisted web interview survey, supplemented with a paper-and-pencil survey via post. Representative sample of the general Austrian population aged 15 years and older. Men (n 1935) and women (n 2065) with and without diabetes mellitus. Some 20.5% of men and 17.7% of women with diabetes, and 46.2% of men and 36.7% of women without diabetes, rated their knowledge about diabetes mellitus to be ‘very bad’ or ‘rather bad’. Individuals with diabetes and individuals with a family member with diabetes rated their information level more often as ‘very good’ or ‘rather good’, with adjusted OR (95% CI) of 1.7 (1.1, 2.8) and 2.1 (1.6, 2.7), respectively, in men and 2.7 (1.5, 4.8) and 2.7 (2.1, 3.5), respectively, in women. Additional significant influencing factors on diabetes knowledge were age and educational level in both sexes, and city size in men. Independent of personal diabetes status, diabetes knowledge was associated with a lower perception of restrictions on daily life of diabetes patients and with a lower probability of underestimating health consequences of diabetes. Health knowledge is associated with fewer misconceptions and less underestimation of health consequences in individuals both with and without diabetes mellitus. Thus health information about diabetes is important on the individual level towards disease management as well as on the public health level towards disease prevention.

  16. Computational approach to estimating the effects of blood properties on changes in intra-stent flow.

    PubMed

    Benard, Nicolas; Perrault, Robert; Coisne, Damien

    2006-08-01

    In this study, various blood rheological assumptions are numerically investigated with respect to the hemodynamic properties of intra-stent flow. Non-Newtonian blood properties have not previously been implemented in investigations of stented coronary flow, although their effects appear essential for a correct estimation of the magnitude and distribution of the wall shear stress (WSS) exerted by the fluid on the internal vessel surface. Our numerical model is based on a full 3D stent mesh, with rigid walls and stationary inflow conditions. Newtonian behavior, a non-Newtonian model based on the Carreau-Yasuda relation, and a characteristic Newtonian viscosity defined from flow-representative parameters are compared in this work. Non-Newtonian flow alters near-wall viscosity values compared with the Newtonian case. Maximal WSS values are located in the central part of the stent pattern structure, and minimal values are concentrated on the proximal stent wire surface. An increase in flow rate amplifies flow perturbations and raises WSS everywhere except in the interstrut area. Nevertheless, a local quantitative analysis reveals that modelling blood as a Newtonian fluid underestimates WSS, with the clinical consequence of overestimating the area at risk of restenosis. Introducing a characteristic viscosity appears to be a useful option compared with rheological modelling based on experimental data, saving computation time while yielding relevant results for quantitative and qualitative WSS determination.
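
    For reference, the Carreau-Yasuda relation mentioned above expresses apparent viscosity as a function of shear rate; the parameter values below are commonly cited fits for blood and are given only as plausible assumptions, not as the values used by the authors.

```python
# Carreau-Yasuda viscosity model referenced above. The parameter values are
# commonly cited fits for blood and are given here only as plausible
# assumptions, not the values used by the authors.
def carreau_yasuda(shear_rate, eta0=0.056, eta_inf=0.00345, lam=3.313, a=2.0, n=0.3568):
    """Apparent viscosity in Pa*s as a function of shear rate in 1/s."""
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * shear_rate) ** a) ** ((n - 1.0) / a)

if __name__ == "__main__":
    for gamma_dot in (0.1, 1.0, 10.0, 100.0, 1000.0):
        print(f"shear rate {gamma_dot:7.1f} 1/s -> viscosity {carreau_yasuda(gamma_dot):.5f} Pa*s")
```

    At high shear rates the model approaches the constant infinite-shear viscosity, which is why a single characteristic Newtonian value can be a reasonable approximation in fast-moving regions of the stent.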

  17. Persons with multiple disabilities select environmental stimuli through a smile response monitored via camera-based technology.

    PubMed

    Lancioni, Giulio E; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N; O'reilly, Mark F; Lang, Russell; Didden, Robert; Bosco, Andrea

    2011-01-01

    To assess whether two persons with multiple disabilities could use smile expressions and new camera-based microswitch technology to select environmental stimuli. Within each session, a computer system provided samples/reminders of preferred and non-preferred stimuli. The camera-based microswitch determined whether the participants produced smile expressions in relation to those samples. If they did, stimuli matching the specific samples to which they responded were presented for 20 seconds. The smile expression could be used profitably by the participants, who achieved mean selection rates of approximately 70% and 75% of the preferred stimulus opportunities made available by the environment while avoiding almost all of the non-preferred stimulus opportunities. Smile expressions (a) might be an effective and rapid means of selecting preferred stimulation and (b) might develop into cognitively more elaborate forms of responding through the learning experience (i.e. their consistent association with positive/reinforcing consequences).

  18. TMS for Instantiating a Knowledge Base With Incomplete Data

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    A computer program that belongs to the class known among software experts as output truth-maintenance-systems (output TMSs) has been devised as one of a number of software tools for reducing the size of the knowledge base that must be searched during execution of artificial- intelligence software of the rule-based inference-engine type in a case in which data are missing. This program determines whether the consequences of activation of two or more rules can be combined without causing a logical inconsistency. For example, in a case involving hypothetical scenarios that could lead to turning a given device on or off, the program determines whether a scenario involving a given combination of rules could lead to turning the device both on and off at the same time, in which case that combination of rules would not be included in the scenario.
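
    The consistency test the program performs can be sketched as a check that two rule consequence sets do not assert contradictory literals (for example, the device both on and off); the rule representation below is an illustrative assumption.

```python
# Minimal sketch of the consistency test described above: two rule consequence
# sets can be combined only if they do not assert contradictory literals
# (e.g. the device both on and off). The rule format is an assumption.
def consistent(consequences_a, consequences_b):
    """Each consequence set contains (proposition, truth_value) pairs."""
    combined = {}
    for prop, value in list(consequences_a) + list(consequences_b):
        if prop in combined and combined[prop] != value:
            return False                  # e.g. ("device_on", True) vs ("device_on", False)
        combined[prop] = value
    return True

if __name__ == "__main__":
    rule1 = {("device_on", True), ("valve_open", True)}
    rule2 = {("device_on", False)}
    rule3 = {("pressure_ok", True)}
    print("rule1 + rule2 consistent:", consistent(rule1, rule2))   # False
    print("rule1 + rule3 consistent:", consistent(rule1, rule3))   # True
```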

  19. Near-Infrared Spectroscopy – Electroencephalography-Based Brain-State-Dependent Electrotherapy: A Computational Approach Based on Excitation–Inhibition Balance Hypothesis

    PubMed Central

    Dagar, Snigdha; Chowdhury, Shubhajit Roy; Bapi, Raju Surampudi; Dutta, Anirban; Roy, Dipanjan

    2016-01-01

    Stroke is the leading cause of severe chronic disability and the second cause of death worldwide with 15 million new cases and 50 million stroke survivors. The poststroke chronic disability may be ameliorated with early neuro rehabilitation where non-invasive brain stimulation (NIBS) techniques can be used as an adjuvant treatment to hasten the effects. However, the heterogeneity in the lesioned brain will require individualized NIBS intervention where innovative neuroimaging technologies of portable electroencephalography (EEG) and functional-near-infrared spectroscopy (fNIRS) can be leveraged for Brain State Dependent Electrotherapy (BSDE). In this hypothesis and theory article, we propose a computational approach based on excitation–inhibition (E–I) balance hypothesis to objectively quantify the poststroke individual brain state using online fNIRS–EEG joint imaging. One of the key events that occurs following Stroke is the imbalance in local E–I (that is the ratio of Glutamate/GABA), which may be targeted with NIBS using a computational pipeline that includes individual “forward models” to predict current flow patterns through the lesioned brain or brain target region. The current flow will polarize the neurons, which can be captured with E–I-based brain models. Furthermore, E–I balance hypothesis can be used to find the consequences of cellular polarization on neuronal information processing, which can then be implicated in changes in function. We first review the evidence that shows how this local imbalance between E–I leading to functional dysfunction can be restored in targeted sites with NIBS (motor cortex and somatosensory cortex) resulting in large-scale plastic reorganization over the cortex, and probably facilitating recovery of functions. Second, we show evidence how BSDE based on E–I balance hypothesis may target a specific brain site or network as an adjuvant treatment. Hence, computational neural mass model-based integration of neurostimulation with online neuroimaging systems may provide less ambiguous, robust optimization of NIBS, and its application in neurological conditions and disorders across individual patients. PMID:27551273

  20. Recruitment of Foreigners in the Market for Computer Scientists in the United States

    PubMed Central

    Bound, John; Braga, Breno; Golden, Joseph M.

    2016-01-01

    We present and calibrate a dynamic model that characterizes the labor market for computer scientists. In our model, firms can recruit computer scientists from recently graduated college students, from STEM workers working in other occupations, or from a pool of foreign talent. Counterfactual simulations suggest that wages for computer scientists would have been 2.8–3.8% higher, and the number of Americans employed as computer scientists would have been 7.0–13.6% higher in 2004, if firms could not have hired more foreigners than they could in 1994. In contrast, total CS employment would have been 3.8–9.0% lower, and consequently output smaller. PMID:27170827

  1. Vienna Fortran - A Language Specification. Version 1.1

    DTIC Science & Technology

    1992-03-01

    ... other computer architectures is the fact that the memory is physically distributed among the processors; the time required to access a non-local ... datum may be an order of magnitude higher than the time taken to access locally stored data. This has important consequences for program efficiency. In ... machine in many aspects. It is tedious, time-consuming and error prone. It has led to particularly slow software development cycles and, in consequence ...

  2. A new computational account of cognitive control over reinforcement-based decision-making: Modeling of a probabilistic learning task.

    PubMed

    Zendehrouh, Sareh

    2015-11-01

    Recent work in the field of decision-making offers an account of a dual-system theory of the decision-making process. This theory holds that the process is conducted by two main controllers: a goal-directed system and a habitual system. In the reinforcement learning (RL) domain, habitual behaviors are connected with model-free methods, in which appropriate actions are learned through trial-and-error experience. Goal-directed behaviors, however, are associated with model-based methods of RL, in which actions are selected using a model of the environment. Studies on cognitive control also suggest that during processes like decision-making, some cortical and subcortical structures work in concert to monitor the consequences of decisions and to adjust control according to current task demands. Here a computational model is presented based on dual-system theory and the cognitive-control perspective of decision-making. The proposed model is used to simulate human performance on a variant of a probabilistic learning task. The basic proposal is that the brain implements a dual controller, while an accompanying monitoring system detects certain kinds of conflict, including a hypothetical cost-conflict. The simulation results address existing theories about two event-related potentials, namely error-related negativity (ERN) and feedback-related negativity (FRN), and explore the best account of them. Based on the results, some testable predictions are also presented. Copyright © 2015 Elsevier Ltd. All rights reserved.
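
    The two controllers can be contrasted in a few lines: a model-free learner updates action values from trial and error, while a model-based controller evaluates actions directly from a known task model. The sketch below is purely illustrative and is not the paper's simulation of the probabilistic learning task.

```python
# Tiny contrast between the two controllers discussed above: a model-free
# Q-learning update learns action values from trial and error, while a
# model-based controller evaluates actions by looking ahead through a known
# (here, hand-coded) model of the task. Purely illustrative.
import random

REWARD_PROB = {"left": 0.8, "right": 0.2}        # probabilistic two-choice task

def model_free(trials=500, alpha=0.1, seed=0):
    rng = random.Random(seed)
    q = {"left": 0.0, "right": 0.0}
    for _ in range(trials):
        action = max(q, key=q.get) if rng.random() > 0.1 else rng.choice(list(q))
        reward = 1.0 if rng.random() < REWARD_PROB[action] else 0.0
        q[action] += alpha * (reward - q[action])         # trial-and-error update
    return q

def model_based():
    # With an explicit model, expected values are computed directly.
    return {a: p * 1.0 for a, p in REWARD_PROB.items()}

if __name__ == "__main__":
    print("model-free Q values :", {a: round(v, 2) for a, v in model_free().items()})
    print("model-based values  :", model_based())
```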

  3. Illustration of Some Consequences of the Indistinguishability of Electrons

    ERIC Educational Resources Information Center

    Moore, John W.; Davies, William G.

    1976-01-01

    Discusses how color-coded overhead transparencies of computer-generated dot-density diagrams can be used to illustrate hybrid orbitals and the principle of the indistinguishability of electrons. (MLH)

  4. CMOL/CMOS hardware architectures and performance/price for Bayesian memory - The building block of intelligent systems

    NASA Astrophysics Data System (ADS)

    Zaveri, Mazad Shaheriar

    The semiconductor/computer industry has been following Moore's law for several decades and has reaped the benefits in speed and density of the resultant scaling. Transistor density has reached almost one billion per chip, and transistor delays are in picoseconds. However, scaling has slowed down, and the semiconductor industry is now facing several challenges. Hybrid CMOS/nano technologies, such as CMOL, are considered as an interim solution to some of the challenges. Another potential architectural solution includes specialized architectures for applications/models in the intelligent computing domain, one aspect of which includes abstract computational models inspired from the neuro/cognitive sciences. Consequently in this dissertation, we focus on the hardware implementations of Bayesian Memory (BM), which is a (Bayesian) Biologically Inspired Computational Model (BICM). This model is a simplified version of George and Hawkins' model of the visual cortex, which includes an inference framework based on Judea Pearl's belief propagation. We then present a "hardware design space exploration" methodology for implementing and analyzing the (digital and mixed-signal) hardware for the BM. This particular methodology involves: analyzing the computational/operational cost and the related micro-architecture, exploring candidate hardware components, proposing various custom hardware architectures using both traditional CMOS and hybrid nanotechnology - CMOL, and investigating the baseline performance/price of these architectures. The results suggest that CMOL is a promising candidate for implementing a BM. Such implementations can utilize the very high density storage/computation benefits of these new nano-scale technologies much more efficiently; for example, the throughput per 858 mm2 (TPM) obtained for CMOL based architectures is 32 to 40 times better than the TPM for a CMOS based multiprocessor/multi-FPGA system, and almost 2000 times better than the TPM for a PC implementation. We later use this methodology to investigate the hardware implementations of cortex-scale spiking neural system, which is an approximate neural equivalent of BICM based cortex-scale system. The results of this investigation also suggest that CMOL is a promising candidate to implement such large-scale neuromorphic systems. In general, the assessment of such hypothetical baseline hardware architectures provides the prospects for building large-scale (mammalian cortex-scale) implementations of neuromorphic/Bayesian/intelligent systems using state-of-the-art and beyond state-of-the-art silicon structures.

  5. How Children Can Support Their Learning to Write and Read by Computer in the Early Years of School

    ERIC Educational Resources Information Center

    Nurmilaakso, Marja

    2015-01-01

    Over the last decades the nature and form of what children can choose to read has changed radically, partly as a consequence of rapid technological advances and the increasing dominance of the image. The research questions were: (1) "How do children learn to read and write by computer?"; (2) "How can one support children's learning…

  6. Computer Aided Design in FE. Some Suggestions on the Inclusion of CAD Topics in Mechanical Engineering Courses. An Occasional Paper.

    ERIC Educational Resources Information Center

    Ingham, P. C.

    This report investigates the feasibility of including computer aided design (CAD) materials in engineering courses. Section 1 briefly discusses the inevitability of CAD being adopted widely by British industry and the consequent need for its inclusion in engineering syllabi at all levels. A short description of what is meant by CAD follows in…

  7. The Range Safety Debris Catalog Analysis in Preparation for the Pad Abort One Flight Test

    NASA Technical Reports Server (NTRS)

    Kutty, Prasad; Pratt, William

    2010-01-01

    With each flight test a Range Safety Data Package is assembled to understand the potential consequences of various failure scenarios. The debris catalog analysis considers an overpressure failure of the Abort Motor and the debris field it creates, and proceeds in three steps: (1) characterize the debris fragments generated by the failure (weight, shape, and area); (2) compute fragment ballistic coefficients; and (3) compute fragment ejection velocities.
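    A minimal sketch of step (2) follows, using the standard definition of the ballistic coefficient as weight divided by drag coefficient times reference area; the fragment values are hypothetical and the function name is mine, not something taken from the Range Safety Data Package.

```python
def ballistic_coefficient(weight_lbf, drag_coefficient, area_ft2):
    """Ballistic coefficient in lbf/ft^2: beta = W / (Cd * A)."""
    return weight_lbf / (drag_coefficient * area_ft2)

# Hypothetical fragment: 12 lbf, Cd ~ 1.0 (tumbling plate), 0.5 ft^2 reference area
print(ballistic_coefficient(12.0, 1.0, 0.5))  # -> 24.0 lbf/ft^2
```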

  8. Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing

    PubMed Central

    Liu, Jiayin; Tang, Zhenmin; Cui, Ying; Wu, Guoxing

    2017-01-01

    Remote sensing technologies have been widely applied in the monitoring, synthesis and modeling of urban environments. By incorporating spatial information over perceptually coherent regions, superpixel-based approaches can effectively eliminate the “salt and pepper” phenomenon that is common in pixel-wise approaches. Compared with fixed-size windows, superpixels have adaptive sizes and shapes for different spatial structures. Moreover, superpixel-based algorithms can significantly improve computational efficiency owing to the greatly reduced number of image primitives. Hence, the superpixel algorithm, as a preprocessing technique, is increasingly used in remote sensing and many other fields. In this paper, we propose a superpixel segmentation algorithm called Superpixel Segmentation with Local Competition (SSLC), which utilizes a local competition mechanism to construct energy terms and label pixels. The local competition mechanism makes the energy terms local and relative, and thus the proposed algorithm is less sensitive to the diversity of image content and scene layout. Consequently, SSLC can achieve consistent performance in different image regions. In addition, the Probability Density Function (PDF), estimated by Kernel Density Estimation (KDE) with a Gaussian kernel, is introduced to describe the color distribution of superpixels as a more sophisticated and accurate measure. To reduce computational complexity, a boundary optimization framework is introduced that handles only boundary pixels instead of the whole image. We conduct experiments to benchmark the proposed algorithm against other state-of-the-art algorithms on the Berkeley Segmentation Dataset (BSD) and remote sensing images. Results demonstrate that the SSLC algorithm yields the best overall performance while remaining competitive in computation time. PMID:28604641
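    The KDE ingredient of the abstract can be illustrated in a few lines: estimate a Gaussian-kernel density over the colour values of the pixels assigned to one superpixel and evaluate it on an intensity grid. The data here are synthetic stand-ins, and the snippet covers only the density estimate, not the local competition energy or the boundary optimization framework.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Synthetic stand-in for one channel of the pixels inside a single superpixel
pixel_values = rng.normal(loc=120.0, scale=10.0, size=400)

# Gaussian-kernel KDE describing the superpixel's colour distribution
kde = gaussian_kde(pixel_values)

# Evaluate the estimated PDF on an intensity grid; such densities can then be
# compared between a pixel and neighbouring superpixels during label competition.
grid = np.linspace(0, 255, 256)
pdf = kde(grid)
print("Mode of the estimated colour distribution:", grid[np.argmax(pdf)])
```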

  9. Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing.

    PubMed

    Liu, Jiayin; Tang, Zhenmin; Cui, Ying; Wu, Guoxing

    2017-06-12

    Remote sensing technologies have been widely applied in the monitoring, synthesis and modeling of urban environments. By incorporating spatial information over perceptually coherent regions, superpixel-based approaches can effectively eliminate the "salt and pepper" phenomenon that is common in pixel-wise approaches. Compared with fixed-size windows, superpixels have adaptive sizes and shapes for different spatial structures. Moreover, superpixel-based algorithms can significantly improve computational efficiency owing to the greatly reduced number of image primitives. Hence, the superpixel algorithm, as a preprocessing technique, is increasingly used in remote sensing and many other fields. In this paper, we propose a superpixel segmentation algorithm called Superpixel Segmentation with Local Competition (SSLC), which utilizes a local competition mechanism to construct energy terms and label pixels. The local competition mechanism makes the energy terms local and relative, and thus the proposed algorithm is less sensitive to the diversity of image content and scene layout. Consequently, SSLC can achieve consistent performance in different image regions. In addition, the Probability Density Function (PDF), estimated by Kernel Density Estimation (KDE) with a Gaussian kernel, is introduced to describe the color distribution of superpixels as a more sophisticated and accurate measure. To reduce computational complexity, a boundary optimization framework is introduced that handles only boundary pixels instead of the whole image. We conduct experiments to benchmark the proposed algorithm against other state-of-the-art algorithms on the Berkeley Segmentation Dataset (BSD) and remote sensing images. Results demonstrate that the SSLC algorithm yields the best overall performance while remaining competitive in computation time.

  10. Protein tyrosine nitration in plants: Present knowledge, computational prediction and future perspectives.

    PubMed

    Kolbert, Zsuzsanna; Feigl, Gábor; Bordé, Ádám; Molnár, Árpád; Erdei, László

    2017-04-01

    Nitric oxide (NO) and related molecules (reactive nitrogen species) regulate diverse physiological processes mainly through posttranslational modifications such as protein tyrosine nitration (PTN). PTN is a covalent and specific modification of tyrosine (Tyr) residues resulting in altered protein structure and function. In the last decade, great efforts have been made to reveal candidate proteins, target Tyr residues and functional consequences of nitration in plants. This review intends to evaluate the accumulated knowledge about the biochemical mechanism, the structural and functional consequences and the selectivity of protein nitration in plants, and also about the decomposition or conversion of nitrated proteins. At the same time, this review emphasizes yet unanswered or uncertain questions such as the reversibility/irreversibility of tyrosine nitration, the involvement of proteasomes in the removal of nitrated proteins or the effect of nitration on Tyr phosphorylation. The different NO-producing systems of algae and higher plants raise the possibility of diversely regulated protein nitration. Therefore, studying PTN from an evolutionary point of view would enrich our present understanding with novel aspects. Plant proteomic research can be promoted by the application of computational prediction tools such as the GPS-YNO2 and iNitro-Tyr software. Using the reference Arabidopsis proteome, the authors performed an in silico analysis of tyrosine nitration in order to characterize the plant tyrosine nitroproteome. Nevertheless, based on the common results of the present prediction and previous experiments, the most likely nitrated proteins were selected, thus recommending candidates for detailed future research. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  11. The connection between cellular mechanoregulation and tissue patterns during bone healing.

    PubMed

    Repp, Felix; Vetter, Andreas; Duda, Georg N; Weinkamer, Richard

    2015-09-01

    The formation of different tissues in the callus during secondary bone healing is at least partly influenced by mechanical stimuli. We use computer simulations to test the consequences of different hypotheses about mechanoregulation at the cellular level for the patterns of tissues formed during healing. The computational study is based on an experiment on sheep in which, after a tibial osteotomy, histological sections were harvested at different time points. In the simulations, we used a recently proposed basic phenomenological model, which allows bone to form via either endochondral or intramembranous ossification but otherwise tries to employ a minimal number of simulation parameters. The model was extended to also consider the possibility of bone resorption, consequently allowing a description of the full healing progression until the restoration of the cortex. Specifically, we investigated how three changes in the mechanoregulation influence the resulting tissue patterns: (1) a time delay between stimulation of the cell and the formation of the tissue, (2) a variable mechanosensitivity of the cells, and (3) soft tissue maturation proceeding independently of the mechanical stimulus over long time intervals. For all three scenarios, our simulations do not show qualitative differences in the time development of the tissue patterns. The largest differences were observed in the intermediate phases of healing, in the amount and location of cartilage. Interestingly, the course of healing was virtually unaltered in scenario (3), where tissue maturation proceeded independently of mechanical stimulation.

  12. Characterization of a human tooth with carious lesions using conventional and synchrotron radiation-based micro computed tomography

    NASA Astrophysics Data System (ADS)

    Dziadowiec, Iwona; Beckmann, Felix; Schulz, Georg; Deyhle, Hans; Müller, Bert

    2014-09-01

    In dental offices, X-rays of teeth within the oral cavity are obtained every day. Caries induces a mineral loss and therefore becomes visible through reduced X-ray absorption. The detailed spatial distribution of the mineral loss, however, is inaccessible in conventional dental radiology, since the dose for such studies is intolerable. As a consequence, such measurements can only be performed after tooth extraction. We have taken advantage of synchrotron radiation-based micro computed tomography to characterize a human tooth with a rather small natural caries lesion and an artificially induced lesion provoked by acidic etching. Both halves of the tooth were separately visualized from 2400 radiographs recorded at the beamline P07 / PETRA III (HASYLAB at DESY, Hamburg, Germany) with an asymmetric rotation axis at a photon energy of 45 keV. Because of this setup, an energy shift arises in the horizontal plane and must be corrected. After appropriate three-dimensional registration of the data with those of the same crown acquired on the more accessible phoenix nanotom® m of General Electric, Wunstorf, Germany, one can determine the joint histogram, which enables calibration of the system against the conventional X-ray source.

  13. A hybrid model for combining case-control and cohort studies in systematic reviews of diagnostic tests

    PubMed Central

    Chen, Yong; Liu, Yulun; Ning, Jing; Cormier, Janice; Chu, Haitao

    2014-01-01

    Systematic reviews of diagnostic tests often involve a mixture of case-control and cohort studies. The standard methods for evaluating diagnostic accuracy focus only on sensitivity and specificity and ignore the information on disease prevalence contained in cohort studies. Consequently, such methods cannot provide estimates of measures related to disease prevalence, such as population-averaged or overall positive and negative predictive values, which reflect the clinical utility of a diagnostic test. In this paper, we propose a hybrid approach that jointly models the disease prevalence along with the diagnostic test sensitivity and specificity in cohort studies, and the sensitivity and specificity in case-control studies. In order to overcome the potential computational difficulties in standard full likelihood inference for the proposed hybrid model, we propose an alternative inference procedure based on the composite likelihood. Such composite-likelihood-based inference does not suffer from these computational problems and maintains high relative efficiency. In addition, it is more robust to model mis-specification than standard full likelihood inference. We apply our approach to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma. PMID:25897179

  14. Time-averaged current analysis of a thunderstorm using ground-based measurements

    NASA Astrophysics Data System (ADS)

    Driscoll, Kevin T.; Blakeslee, Richard J.; Koshak, William J.

    1994-05-01

    The amount of upward current provided to the ionosphere by a thunderstorm that appeared over the Kennedy Space Center (KSC) on July 11, 1978, is reexamined using an analytic equation that describes a bipolar thunderstorm's current contribution to the global circuit in terms of its generator current, lightning currents, the altitudes of its charge centers, and the conductivity profile of the atmosphere. Ground-based measurements, which were obtained from a network of electric field mills positioned at various distances from the thunderstorm, were used to characterize the electrical activity inside the thundercloud. The location of the lightning discharges, the type of lightning, and the amount of charge neutralized during this thunderstorm were computed through a least squares inversion of the measured changes in the electric fields following each lightning discharge. These measurements provided the information necessary to implement the analytic equation, and consequently, a time-averaged estimate of this thunderstorm's current contribution to the global circuit was calculated. From these results the amount of conduction current supplied to the ionosphere by this small thunderstorm was computed to be less than 25% of the time-averaged generator current that flowed between the two vertically displaced charge centers.
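    As an aside on the inversion step described above, a point-charge-over-conducting-ground (image-charge) model makes the measured field changes linear in the neutralized charge, so a least-squares fit recovers it. The sketch below uses invented field-mill positions, charge altitude and noise levels purely for illustration; it is not the actual KSC network geometry, nor the full retrieval, which also solves for the location and type of each discharge.

```python
import numpy as np

# Field-mill positions (km) and an assumed charge-centre location/altitude (km);
# all numbers are illustrative, not the 1978 KSC configuration.
mills = np.array([[0.0, 0.0], [3.0, 1.0], [-2.0, 4.0], [5.0, -3.0]])
x0, y0, z0 = 1.0, 1.0, 7.0

def geometry(mills, x0, y0, z0):
    """Vertical field at the ground per unit charge for a point charge above a
    perfectly conducting ground plane (image-charge model), omitting the common
    constant 1/(4*pi*eps0), which cancels in the fit."""
    dx = mills[:, 0] - x0
    dy = mills[:, 1] - y0
    d3 = (dx ** 2 + dy ** 2 + z0 ** 2) ** 1.5
    return 2.0 * z0 / d3

g = geometry(mills, x0, y0, z0)

# Simulated field changes for a 20 C charge neutralisation, plus measurement noise
rng = np.random.default_rng(0)
dE_measured = g * 20.0 + rng.normal(scale=0.01 * np.abs(g * 20.0))

# Least-squares estimate of the neutralised charge from the measured field changes
dq, *_ = np.linalg.lstsq(g[:, None], dE_measured, rcond=None)
print("Estimated charge change:", dq[0])
```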

  15. Remedial Action Assessment System (RAAS): A computer-based methodology for conducting feasibility studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buelt, J.L.; Stottlemyre, J.A.; White, M.K.

    1991-09-01

    Because of the great complexity and number of potential waste sites facing the US Department of Energy (DOE) for potential cleanup, the DOE is supporting the development of a computer-based methodology to streamline the remedial investigation/feasibility study process required for DOE operable units. DOE operable units are generally more complex in nature because of the existence of multiple waste sites within many of the operable units and the presence of mixed radioactive and hazardous chemical wastes. Consequently, Pacific Northwest Laboratory (PNL) is developing the Remedial Action Assessment System (RAAS), which is aimed at screening, linking, and evaluating established technology process options in support of conducting feasibility studies under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA). It is also intended to do the same in support of corrective measures studies required by the Resource Conservation and Recovery Act (RCRA). This paper presents the characteristics of two RAAS prototypes currently being developed. These include the RAAS Technology Information System, which accesses information on technologies in a graphical and tabular manner, and the main RAAS methodology, which screens, links, and evaluates remedial technologies. 4 refs., 3 figs., 1 tab.

  16. Remedial Action Assessment System (RAAS): A computer-based methodology for conducting feasibility studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buelt, J.L.; Stottlemyre, J.A.; White, M.K.

    1991-02-01

    Because of the great complexity and number of potential waste sites facing the US Department of Energy (DOE) for potential cleanup, the DOE is supporting the development of a computer-based methodology to streamline the remedial investigation/feasibility study process required for DOE operable units. DOE operable units are generally more complex in nature because of the existence of multiple waste sites within many of the operable units and the presence of mixed radioactive and hazardous chemical wastes. Consequently, Pacific Northwest Laboratory (PNL) is developing the Remedial Action Assessment System (RAAS), which is aimed at screening, linking, and evaluating established technology process options in support of conducting feasibility studies under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA). It is also intended to do the same in support of corrective measures studies required by the Resource Conservation and Recovery Act (RCRA). This paper presents the characteristics of two RAAS prototypes currently being developed. These include the RAAS Technology Information System, which accesses information on technologies in a graphical and tabular manner, and the main RAAS methodology, which screens, links, and evaluates remedial technologies. 4 refs., 3 figs., 1 tab.

  17. Plant Identification Based on Leaf Midrib Cross-Section Images Using Fractal Descriptors.

    PubMed

    da Silva, Núbia Rosa; Florindo, João Batista; Gómez, María Cecilia; Rossatto, Davi Rodrigo; Kolb, Rosana Marta; Bruno, Odemir Martinez

    2015-01-01

    The correct identification of plants is a common necessity not only for researchers but also for the lay public. Recently, computational methods have been employed to facilitate this task; however, there are few such studies relative to the wide diversity of plants occurring in the world. This study proposes to analyse images obtained from cross-sections of the leaf midrib using fractal descriptors. These descriptors are obtained from the fractal dimension of the object computed over a range of scales. In this way, they provide rich information regarding the spatial distribution of the analysed structure and, as a consequence, they measure the multiscale morphology of the object of interest. In biology, such morphology is of great importance because it is related to evolutionary aspects and is successfully employed to characterize and discriminate among different biological structures. Here, the fractal descriptors are used to identify the species of plants based on the image of their leaves. A large number of samples is examined: 606 leaf samples from 50 species of the Brazilian flora. The results are compared to other imaging methods in the literature and demonstrate that fractal descriptors are precise and reliable in the taxonomic process of plant species identification.
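    The core quantity behind such descriptors, a box-counting estimate of the fractal dimension across scales, can be sketched briefly. The toy binary image and box sizes below are illustrative; the paper's descriptors are built from the multiscale measurements themselves (and from segmented midrib cross-sections), not just from the fitted slope shown here.

```python
import numpy as np

def box_counting_dimension(binary_image, box_sizes):
    """Estimate the box-counting (fractal) dimension of a binary image by
    counting, for each box size, how many boxes contain foreground pixels and
    fitting log(count) against log(1/size)."""
    counts = []
    h, w = binary_image.shape
    for size in box_sizes:
        n = 0
        for i in range(0, h, size):
            for j in range(0, w, size):
                if binary_image[i:i + size, j:j + size].any():
                    n += 1
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Toy example: a filled square has dimension close to 2; a real analysis would
# use segmented leaf-midrib cross-section images instead.
img = np.zeros((256, 256), dtype=bool)
img[64:192, 64:192] = True
print(box_counting_dimension(img, [2, 4, 8, 16, 32]))
```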

  18. Computational Psychometrics for Modeling System Dynamics during Stressful Disasters

    PubMed Central

    Cipresso, Pietro; Bessi, Alessandro; Colombo, Desirée; Pedroli, Elisa; Riva, Giuseppe

    2017-01-01

    Disasters can be very stressful events. However, computational models of stress require data that might be very difficult to collect during disasters. Moreover, personal experiences are not repeatable, so it is not possible to collect bottom-up information when building a coherent model. To overcome these problems, we propose the use of computational models and virtual reality integration to recreate disaster situations, while examining possible dynamics in order to understand human behavior and relative consequences. By providing realistic parameters associated with disaster situations, computational scientists can work more closely with emergency responders to improve the quality of interventions in the future. PMID:28861026

  19. People and computers--some recent highlights.

    PubMed

    Shackel, B

    2000-12-01

    This paper aims to review selectively a fair proportion of the literature on human-computer interaction (HCI) over the three years since Shackel (J. Am. Soc. Inform. Sci. 48 (11) (1997) 970-986). After a brief note of history I discuss traditional input, output and workplace aspects, the web and 'E-topics', web-related aspects, virtual reality, safety-critical systems, and the need to move from HCI to human-system integration (HSI). Finally I suggest, and consider briefly, some future possibilities and issues including web consequences, embedded ubiquitous computing, and 'back to systems ergonomics?'.

  20. Dendritic Properties Control Energy Efficiency of Action Potentials in Cortical Pyramidal Cells

    PubMed Central

    Yi, Guosheng; Wang, Jiang; Wei, Xile; Deng, Bin

    2017-01-01

    Neural computation is performed by transforming input signals into sequences of action potentials (APs), which is metabolically expensive and limited by the energy available to the brain. The metabolic efficiency of a single AP has important consequences for the computational power of the cell and is determined by its biophysical properties and morphology. Here we adopt biophysically based two-compartment models to investigate how dendrites affect the energy efficiency of APs in cortical pyramidal neurons. We measure the Na+ entry during the spike and examine how efficiently it is used for generating the AP depolarization. We show that increasing the proportion of dendritic area or the coupling conductance between the two chambers decreases the Na+ entry efficiency of the somatic AP. Activating inward Ca2+ current in dendrites results in a dendritic spike, which increases AP efficiency. Activating Ca2+-activated outward K+ current in dendrites, however, decreases Na+ entry efficiency. We demonstrate that active and passive dendrites exert their effects by altering the overlap between the Na+ influx and the internal current flowing from soma to dendrite. We explain a fundamental link between dendritic properties and AP efficiency, which is essential for interpreting how neural computation consumes metabolic energy and how biophysics and morphology contribute to such consumption. PMID:28919852
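    The Na+ entry efficiency discussed above is commonly expressed as the ratio between the theoretical minimum charge needed to depolarize the membrane capacitance by the AP amplitude and the total Na+ charge that actually enters during the spike. The sketch below computes that ratio for a synthetic Gaussian-shaped Na+ current; the capacitance, amplitude and current waveform are invented for illustration and do not come from the paper's two-compartment model.

```python
import numpy as np

# Illustrative numbers only: membrane capacitance, AP amplitude and a synthetic
# Na+ current waveform standing in for the model's conductance-based current.
C_m = 100e-12           # membrane capacitance (F)
delta_V = 100e-3        # AP depolarisation amplitude (V)

t = np.linspace(0.0, 2e-3, 2001)                              # 2 ms at 1 us steps
I_Na = 100e-9 * np.exp(-((t - 0.5e-3) / 0.2e-3) ** 2)         # Na+ current (A)

q_na = np.sum(I_Na) * (t[1] - t[0])   # total Na+ charge entering during the spike
q_min = C_m * delta_V                 # theoretical minimum charge for the depolarisation

print("Na+ entry efficiency:", q_min / q_na)   # 1.0 would be perfectly efficient
```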

  1. Dendritic Properties Control Energy Efficiency of Action Potentials in Cortical Pyramidal Cells.

    PubMed

    Yi, Guosheng; Wang, Jiang; Wei, Xile; Deng, Bin

    2017-01-01

    Neural computation is performed by transforming input signals into sequences of action potentials (APs), which is metabolically expensive and limited by the energy available to the brain. The metabolic efficiency of a single AP has important consequences for the computational power of the cell and is determined by its biophysical properties and morphology. Here we adopt biophysically based two-compartment models to investigate how dendrites affect the energy efficiency of APs in cortical pyramidal neurons. We measure the Na+ entry during the spike and examine how efficiently it is used for generating the AP depolarization. We show that increasing the proportion of dendritic area or the coupling conductance between the two chambers decreases the Na+ entry efficiency of the somatic AP. Activating inward Ca2+ current in dendrites results in a dendritic spike, which increases AP efficiency. Activating Ca2+-activated outward K+ current in dendrites, however, decreases Na+ entry efficiency. We demonstrate that active and passive dendrites exert their effects by altering the overlap between the Na+ influx and the internal current flowing from soma to dendrite. We explain a fundamental link between dendritic properties and AP efficiency, which is essential for interpreting how neural computation consumes metabolic energy and how biophysics and morphology contribute to such consumption.

  2. Nonlinear dynamic systems identification using recurrent interval type-2 TSK fuzzy neural network - A novel structure.

    PubMed

    El-Nagar, Ahmad M

    2018-01-01

    In this study, a novel structure of a recurrent interval type-2 Takagi-Sugeno-Kang (TSK) fuzzy neural network (FNN) is introduced for the identification of nonlinear dynamic and time-varying systems. It combines type-2 fuzzy sets (T2FSs) and a recurrent FNN to handle data uncertainties. The fuzzy firing strengths in the proposed structure are fed back to the network input as internal variables. Interval type-2 fuzzy sets (IT2FSs) are used to describe the antecedent part of each rule, while the consequent part is of TSK type, a linear function of the internal variables and the external inputs with interval weights. All the type-2 fuzzy rules for the proposed RIT2TSKFNN are learned online through structure and parameter learning, which are performed using type-2 fuzzy clustering. The antecedent and consequent parameters of the proposed RIT2TSKFNN are updated based on a Lyapunov function to ensure network stability. The obtained results indicate that the proposed network achieves a smaller root mean square error (RMSE) and integral of square error (ISE) with fewer rules and less computation time than other type-2 FNNs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Basic Microbiologic and Infection Control Information to Reduce the Potential Transmission of Pathogens to Patients via Computer Hardware

    PubMed Central

    Neely, Alice N.; Sittig, Dean F.

    2002-01-01

    Computer technology from the management of individual patient medical records to the tracking of epidemiologic trends has become an essential part of all aspects of modern medicine. Consequently, computers, including bedside components, point-of-care testing equipment, and handheld computer devices, are increasingly present in patients’ rooms. Recent articles have indicated that computer hardware, just as other medical equipment, may act as a reservoir for microorganisms and contribute to the transfer of pathogens to patients. This article presents basic microbiological concepts relative to infection, reviews the present literature concerning possible links between computer contamination and nosocomial colonizations and infections, discusses basic principles for the control of contamination, and provides guidelines for reducing the risk of transfer of microorganisms to susceptible patient populations. PMID:12223502

  4. An evaluation of Computational Fluid dynamics model for flood risk analysis

    NASA Astrophysics Data System (ADS)

    Di Francesco, Silvia; Biscarini, Chiara; Montesarchio, Valeria

    2014-05-01

    This work presents an analysis of the hydrological and hydraulic engineering requisites for risk evaluation and efficient flood damage reduction plans. Most research efforts have been dedicated to the scientific and technical aspects of risk assessment, providing estimates of possible alternatives and of the associated risk. In the decision-making process for a mitigation plan, the contribution of scientists is crucial, because risk-damage analysis is based on the evaluation of the flow field and of the hydraulic risk, as well as on economic and societal considerations. The present paper focuses on the first part of the process, the mathematical modelling of flood events, which is the basis for all further considerations. The evaluation of potential catastrophic damage consequent to a flood event, and in particular to a dam failure, requires modelling the flood with sufficient detail to capture the spatial and temporal evolution of the event, as well as the velocity field. Thus, the selection of an appropriate mathematical model to correctly simulate flood routing is an essential step. In this work we present the application of two 3D computational fluid dynamics models to a synthetic and a real case study in order to evaluate the evolution of the flow field and the associated flood risk. The first model is based on an open-source CFD platform called OpenFOAM. Water flow is schematized with a classical continuum approach based on the Navier-Stokes equations, coupled with the Volume of Fluid (VOF) method to take into account the multiphase character of river bottom-water-air systems. The second model is based on the lattice Boltzmann method, an innovative numerical fluid dynamics scheme based on Boltzmann's kinetic equation that represents the flow dynamics at the macroscopic level by incorporating a microscopic kinetic approach: the fluid is seen as composed of particles that can move and collide with one another. Simulation results from both models are promising and congruent with experimental results available in the literature, though the LBM model requires less computational effort than the NS one.

  5. Technique Developed for Optimizing Traveling-Wave Tubes

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.

    1999-01-01

    A traveling-wave tube (TWT) is an electron beam device that is used to amplify electromagnetic communication waves at radio and microwave frequencies. TWTs are critical components in deep-space probes, geosynchronous communication satellites, and high-power radar systems. Power efficiency is of paramount importance for TWTs employed in deep-space probes and communications satellites. Consequently, increasing the power efficiency of TWTs has been the primary goal of the TWT group at the NASA Lewis Research Center over the last 25 years. An in-house effort produced a technique (ref. 1) to design TWTs for optimized power efficiency. This technique is based on simulated annealing, which has an advantage over conventional optimization techniques in that it enables the best possible solution to be obtained (ref. 2). A simulated annealing algorithm was created and integrated into the NASA TWT computer model (ref. 3). The new technique almost doubled the computed conversion power efficiency of a TWT from 7.1 to 13.5 percent (ref. 1).
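    The optimization idea named above can be shown with a generic simulated annealing loop. The objective function below is a toy stand-in for the negative of TWT conversion efficiency as a function of a single design parameter, and the cooling schedule and step size are arbitrary choices; the actual NASA TWT model and its parameterization are not reproduced here.

```python
import math
import random

random.seed(0)

def objective(x):
    """Toy stand-in for the negative of conversion efficiency vs. one design
    parameter; it has shallow local minima to make annealing worthwhile."""
    return (x - 1.3) ** 2 + 0.1 * math.sin(20.0 * x)

x = 0.0
best = x
temperature = 1.0
for step in range(5000):
    candidate = x + random.uniform(-0.1, 0.1)
    delta = objective(candidate) - objective(x)
    # Always accept improvements; accept worse moves with a Boltzmann probability,
    # which lets the search escape local minima early on.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
        if objective(x) < objective(best):
            best = x
    temperature *= 0.999   # geometric cooling schedule

print("Optimised parameter:", best, "objective:", objective(best))
```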

  6. A molecular orbital study of the energy spectrum, exchange interaction and gate crosstalk of a four-quantum-dot system

    NASA Astrophysics Data System (ADS)

    Yang, Xu-Chen; Wang, Xin

    The manipulation of coupled quantum dot devices is crucial to scalable, fault-tolerant quantum computation. We present a theoretical study of a four-electron four-quantum-dot system based on molecular orbital methods, which realizes a pair of singlet-triplet (S-T) qubits. We find that while the two S-T qubits are coupled by the capacitive interaction when they are sufficiently far apart, the admixture of wave functions undergoes a substantial change as the two S-T qubits get closer. We find that in a certain parameter regime the exchange interaction may only be defined in an effective sense, when the computational basis states no longer dominate the eigenstates. We further discuss the gate crosstalk that arises as a consequence of this wave-function mixing. This work was supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (No. CityU 21300116) and the National Natural Science Foundation of China (No. 11604277).

  7. Role of visual and non-visual cues in constructing a rotation-invariant representation of heading in parietal cortex

    PubMed Central

    Sunkara, Adhira

    2015-01-01

    As we navigate through the world, eye and head movements add rotational velocity patterns to the retinal image. When such rotations accompany observer translation, the rotational velocity patterns must be discounted to accurately perceive heading. The conventional view holds that this computation requires efference copies of self-generated eye/head movements. Here we demonstrate that the brain implements an alternative solution in which retinal velocity patterns are themselves used to dissociate translations from rotations. These results reveal a novel role for visual cues in achieving a rotation-invariant representation of heading in the macaque ventral intraparietal area. Specifically, we show that the visual system utilizes both local motion parallax cues and global perspective distortions to estimate heading in the presence of rotations. These findings further suggest that the brain is capable of performing complex computations to infer eye movements and discount their sensory consequences based solely on visual cues. DOI: http://dx.doi.org/10.7554/eLife.04693.001 PMID:25693417

  8. Automatic summary generating technology of vegetable traceability for information sharing

    NASA Astrophysics Data System (ADS)

    Zhenxuan, Zhang; Minjing, Peng

    2017-06-01

    In order to solve the problems of excessive data entries and the consequent high costs of data collection for farmers in vegetable traceability applications, an automatic summary generating technology for vegetable traceability information sharing is proposed. The proposed technology is an effective way for farmers to share real-time vegetable planting information on social networking platforms to enhance their brands and obtain more customers. In this research, the factors influencing vegetable traceability for customers were analyzed to establish the sub-indicators and target indicators and to propose a computing model based on the collected parameter values of the planted vegetables and the legal standards on food safety. The proposed standard parameter model involves five steps: accessing the database, establishing target indicators, establishing sub-indicators, establishing the standard reference model and computing the scores of the indicators. On the basis of establishing and optimizing food safety standards and the traceability system, the proposed technology could be accepted by more and more farmers and customers.

  9. Controllability and observability of Boolean networks arising from biology

    NASA Astrophysics Data System (ADS)

    Li, Rui; Yang, Meng; Chu, Tianguang

    2015-02-01

    Boolean networks are currently receiving considerable attention as a computational scheme for system level analysis and modeling of biological systems. Studying control-related problems in Boolean networks may reveal new insights into the intrinsic control in complex biological systems and enable us to develop strategies for manipulating biological systems using exogenous inputs. This paper considers controllability and observability of Boolean biological networks. We propose a new approach, which draws from the rich theory of symbolic computation, to solve the problems. Consequently, simple necessary and sufficient conditions for reachability, controllability, and observability are obtained, and algorithmic tests for controllability and observability which are based on the Gröbner basis method are presented. As practical applications, we apply the proposed approach to several different biological systems, namely, the mammalian cell-cycle network, the T-cell activation network, the large granular lymphocyte survival signaling network, and the Drosophila segment polarity network, gaining novel insights into the control and/or monitoring of the specific biological systems.
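    A brute-force check of reachability, one of the notions studied above, is easy to sketch for a toy network: enumerate the Boolean state space and search the transition graph under all control inputs. The three-node update rules and control input below are hypothetical, and this exhaustive approach only scales to small networks, whereas the paper's symbolic (Gröbner basis) machinery avoids explicit enumeration.

```python
from collections import deque

# A toy 3-node Boolean network with one Boolean control input u;
# the update rules are illustrative, not one of the biological networks in the paper.
def step(state, u):
    x1, x2, x3 = state
    return (x2 and not x3, x1 or u, x1 and x2)

def reachable(x0):
    """All states reachable from x0 under some control sequence (BFS on the state graph)."""
    seen = {x0}
    queue = deque([x0])
    while queue:
        s = queue.popleft()
        for u in (False, True):
            nxt = step(s, u)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

x0 = (True, False, False)
reach = reachable(x0)
# The network is controllable from x0 only if every one of the 2**3 states is reachable
print(len(reach) == 2 ** 3, sorted(reach))
```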

  10. Development of Semi-Automatic Lathe by using Intelligent Soft Computing Technique

    NASA Astrophysics Data System (ADS)

    Sakthi, S.; Niresh, J.; Vignesh, K.; Anand Raj, G.

    2018-03-01

    This paper discusses the enhancement of a conventional lathe machine into a semi-automated lathe machine by implementing a soft computing method. In the present scenario, the lathe machine plays a vital role in the engineering division of the manufacturing industry. While manual lathe machines are economical, their accuracy and efficiency are not up to the mark. On the other hand, CNC machines provide the desired accuracy and efficiency but require a huge capital investment. In order to overcome this situation, a semi-automated approach to the conventional lathe machine is developed by fitting stepper motors to the horizontal and vertical drives, controlled by an Arduino UNO microcontroller. Based on the input parameters of the lathe operation, the Arduino code is generated and transferred to the UNO board. Thus, upgrading from manual to semi-automatic lathe machines can significantly increase accuracy and efficiency while, at the same time, keeping a check on investment cost and consequently providing a much-needed escalation to the manufacturing industry.

  11. Computational Tools for Allosteric Drug Discovery: Site Identification and Focus Library Design.

    PubMed

    Huang, Wenkang; Nussinov, Ruth; Zhang, Jian

    2017-01-01

    Allostery is an intrinsic phenomenon of biological macromolecules involving regulation and/or signal transduction induced by a ligand binding to an allosteric site distinct from a molecule's active site. Allosteric drugs are currently receiving increased attention in drug discovery because drugs that target allosteric sites can provide important advantages over the corresponding orthosteric drugs including specific subtype selectivity within receptor families. Consequently, targeting allosteric sites, instead of orthosteric sites, can reduce drug-related side effects and toxicity. On the down side, allosteric drug discovery can be more challenging than traditional orthosteric drug discovery due to difficulties associated with determining the locations of allosteric sites and designing drugs based on these sites and the need for the allosteric effects to propagate through the structure, reach the ligand binding site and elicit a conformational change. In this study, we present computational tools ranging from the identification of potential allosteric sites to the design of "allosteric-like" modulator libraries. These tools may be particularly useful for allosteric drug discovery.

  12. Heuristic algorithms for the minmax regret flow-shop problem with interval processing times.

    PubMed

    Ćwik, Michał; Józefczyk, Jerzy

    2018-01-01

    An uncertain version of the permutation flow-shop problem with unlimited buffers and the makespan as the criterion is considered. The investigated parametric uncertainty is represented by given interval-valued processing times. The maximum regret is used for the evaluation of uncertainty. Consequently, the minmax regret discrete optimization problem is solved. Due to its high complexity, two relaxations are applied to simplify the optimization procedure. First of all, a greedy procedure is used for calculating the criterion's value, as such a calculation is an NP-hard problem itself. Moreover, the lower bound is used instead of solving the internal deterministic flow-shop problem. A constructive heuristic algorithm is applied for the relaxed optimization problem. The algorithm is compared with other previously developed heuristic algorithms based on the evolutionary and middle-interval approaches. The computational experiments showed the advantage of the constructive heuristic algorithm with regard to both the criterion value and the computation time. The Wilcoxon paired-rank statistical test confirmed this conclusion.
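    The deterministic core of the problem, evaluating the makespan of a job permutation, is easy to state in code. The sketch below implements the standard flow-shop recurrence and evaluates two permutations on the mid-point scenario of some invented interval processing times; the paper's regret evaluation, greedy procedure and lower bound are not reproduced.

```python
import numpy as np

def makespan(permutation, proc_times):
    """Makespan of a permutation flow-shop with unlimited buffers.
    proc_times[j, m] is the processing time of job j on machine m."""
    n_machines = proc_times.shape[1]
    completion = np.zeros(n_machines)
    for job in permutation:
        for m in range(n_machines):
            start = max(completion[m], completion[m - 1] if m > 0 else 0.0)
            completion[m] = start + proc_times[job, m]
    return completion[-1]

# Interval processing times [lower, upper]; one simple relaxation evaluates
# candidate permutations on the mid-point scenario instead of the exact maximum regret.
lower = np.array([[2, 4, 3], [3, 2, 5], [4, 3, 2]], dtype=float)
upper = np.array([[4, 6, 5], [5, 4, 7], [6, 5, 3]], dtype=float)
mid = (lower + upper) / 2.0

print(makespan([0, 1, 2], mid), makespan([2, 0, 1], mid))
```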

  13. Visual behavior characterization for intrusion and misuse detection

    NASA Astrophysics Data System (ADS)

    Erbacher, Robert F.; Frincke, Deborah

    2001-05-01

    As computer and network intrusions become more and more of a concern, the need for better capabilities to assist in the detection and analysis of intrusions also increases. System administrators typically rely on log files to analyze usage and detect misuse. However, as a consequence of the amount of data collected by each machine, multiplied by the tens or hundreds of machines under the system administrator's auspices, the entirety of the available data is neither collected nor analyzed. This is compounded by the need to analyze network traffic data as well. We propose a methodology for visually analyzing network and computer log information based on the analysis of user behavior. Each user's behavior is the key to determining their intent and overriding activity, whether they attempt to hide their actions or not. Proficient hackers will attempt to hide their ultimate activities, which hinders the reliability of log file analysis. Visually analyzing users' behavior, however, is much more adaptable and difficult to counteract.

  14. Charged-particle motion in multidimensional magnetic-field turbulence

    NASA Technical Reports Server (NTRS)

    Giacalone, J.; Jokipii, J. R.

    1994-01-01

    We present a new analysis of the fundamental physics of charged-particle motion in a turbulent magnetic field using a numerical simulation. The magnetic field fluctuations are taken to be static and to have a power spectrum which is Kolmogorov. The charged particles are treated as test particles. It is shown that when the field turbulence is independent of one coordinate (i.e., k lies in a plane), the motion of these particles across the magnetic field is essentially zero, as required by theory. Consequently, the only motion across the average magnetic field direction that is allowed is that due to field-line random walk. On the other hand, when a fully three-dimensional realization of the turbulence is considered, the particles readily cross the field. Transport coefficients both along and across the ambient magnetic field are computed. This scheme provides a direct computation of the Fokker-Planck coefficients based on the motions of individual particles, and allows for comparison with analytic theory.

  15. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    PubMed

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
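    The first two stages of the processing chain described above can be written as a compact CPU reference, which is what a GPU port would parallelize. In this sketch the spatial filter is a common average reference and the spectral estimate is a simple periodogram standing in for the auto-regressive method; channel counts, window length and the selected frequency band are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n_channels, n_samples = 64, 300          # e.g. a short ECoG window (illustrative sizes)
raw = rng.standard_normal((n_channels, n_samples))

# Step 1: spatial filtering as a matrix-matrix multiplication.
# A common average reference is used here as a simple example of a filter matrix.
W = np.eye(n_channels) - np.full((n_channels, n_channels), 1.0 / n_channels)
filtered = W @ raw

# Step 2: per-channel spectral power (periodogram in place of the AR estimator).
spectrum = np.abs(np.fft.rfft(filtered, axis=1)) ** 2 / n_samples

# Step 3 (not shown): band-power features feed a classifier that produces the
# control signal.  Both dense steps above map naturally onto parallel GPU kernels,
# which is where the reported speed-up comes from.
band_power = spectrum[:, 10:20].mean(axis=1)
print(band_power.shape)
```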

  16. Evaluation of the uncertainty of a temporal life cycle analysis model of electricity production and consumption in the context of data centre management

    NASA Astrophysics Data System (ADS)

    Vallee Schmitter, Constant

    As part of a research program focused on finding ways to decrease the environmental impacts of data centres, the CIRAIG developed two models for selecting the province with the cleanest electricity. Although both models use the life cycle analysis (LCA) methodology, they differ in their approach: the first model is based on attributional LCA and the second one on consequential LCA. However, the last step of an LCA, as recommended by ISO, is to evaluate the uncertainty of the results. This step was left aside in the previous studies and is the main subject of this research. The goal of this research is to improve trust in those models by performing uncertainty analyses on the results they produce. The analysis was split into four parts: 1) compute the distributions of the grid mixes used by the two studies; 2) compute the consequences of those distributions for the decisions; 3) quantify the differences between the data sources and evaluate their consequences for the decisions; 4) identify and quantify the power plants not included in the data sources and evaluate their contribution to the grid mixes. To fulfil those goals, scripts were written to run Monte Carlo simulations of the environmental impacts of the multiple grid mixes used in the models for the three provinces. Data about electricity production were collected to identify previously unaccounted-for power plants. Comparisons of the data sources used in the original studies were carried out to evaluate the significance of the disparities. Finally, a model of the electric grid of Ontario was implemented in power system simulation software to show the importance of some of the physical constraints inside the network. The results of this study show that the uncertainty in the results has little to no consequence for the decision process in the studied provinces. The two new models, implemented to take into account the effect of the temporal aspect of electricity consumption on the environmental impacts, are a real improvement over the previous static models.
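    The Monte Carlo step of such an uncertainty analysis can be sketched by propagating emission-factor uncertainty through an hourly grid mix. Every number below (shares, medians, geometric standard deviations) is invented for illustration and is not taken from the CIRAIG models or from any provincial inventory.

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws = 10_000

# Illustrative grid mix (generation shares) and per-source greenhouse-gas emission
# factors in g CO2-eq/kWh with lognormal uncertainty.
shares = {"hydro": 0.60, "gas": 0.25, "wind": 0.15}
factor_median = {"hydro": 10.0, "gas": 450.0, "wind": 12.0}
factor_gsd = {"hydro": 1.5, "gas": 1.2, "wind": 1.4}   # geometric standard deviations

total = np.zeros(n_draws)
for source, share in shares.items():
    draws = rng.lognormal(mean=np.log(factor_median[source]),
                          sigma=np.log(factor_gsd[source]),
                          size=n_draws)
    total += share * draws

# The spread of the simulated impact distribution is what the analysis compares
# across provinces before deciding whether the ranking of grid mixes is robust.
print(np.percentile(total, [2.5, 50.0, 97.5]))
```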

  17. Insights into Parkinson's disease from computational models of the basal ganglia.

    PubMed

    Humphries, Mark D; Obeso, Jose Angel; Dreyer, Jakob Kisbye

    2018-04-17

    Movement disorders arise from the complex interplay of multiple changes to neural circuits. Successful treatments for these disorders could interact with these complex changes in myriad ways, and as a consequence their mechanisms of action and their amelioration of symptoms are incompletely understood. Using Parkinson's disease as a case study, we review here how computational models are a crucial tool for taming this complexity, across causative mechanisms, consequent neural dynamics and treatments. For mechanisms, we review models that capture the effects of losing dopamine on basal ganglia function; for dynamics, we discuss models that have transformed our understanding of how beta-band (15-30 Hz) oscillations arise in the parkinsonian basal ganglia. For treatments, we touch on the breadth of computational modelling work trying to understand the therapeutic actions of deep brain stimulation. Collectively, models from across all levels of description are providing a compelling account of the causes, symptoms and treatments for Parkinson's disease. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  18. FitzPatrick Lecture: King George III and the porphyria myth - causes, consequences and re-evaluation of his mental illness with computer diagnostics.

    PubMed

    Peters, Timothy

    2015-04-01

    Recent studies have shown that the claim that King George III suffered from acute porphyria is seriously at fault. This article explores some of the causes of this misdiagnosis and the consequences of the misleading claims, also reporting on the nature of the king's recurrent mental illness according to computer diagnostics. In addition, techniques of cognitive archaeology are used to investigate the nature of the king's final decade of mental illness, which resulted in the appointment of the Prince of Wales as Prince Regent. The results of this analysis confirm that the king suffered from bipolar disorder type I, with a final decade of dementia, due, in part, to the neurotoxicity of his recurrent episodes of acute mania. © 2015 Royal College of Physicians.

  19. High-speed DNA-based rolling motors powered by RNase H

    PubMed Central

    Yehl, Kevin; Mugler, Andrew; Vivek, Skanda; Liu, Yang; Zhang, Yun; Fan, Mengzhen; Weeks, Eric R.

    2016-01-01

    DNA-based machines that walk by converting chemical energy into controlled motion could be of use in applications such as next generation sensors, drug delivery platforms, and biological computing. Despite their exquisite programmability, DNA-based walkers are, however, challenging to work with due to their low fidelity and slow rates (~1 nm/min). Here, we report DNA-based machines that roll rather than walk, and consequently have a maximum speed and processivity that is three-orders of magnitude greater than conventional DNA motors. The motors are made from DNA-coated spherical particles that hybridise to a surface modified with complementary RNA; motion is achieved through the addition of RNase H, which selectively hydrolyses hybridised RNA. Spherical motors move in a self-avoiding manner, whereas anisotropic particles, such as dimerised particles or rod-shaped particles travel linearly without a track or external force. Finally, we demonstrate detection of single nucleotide polymorphism by measuring particle displacement using a smartphone camera. PMID:26619152

  20. A cognitive computational model inspired by the immune system response.

    PubMed

    Abdo Abd Al-Hady, Mohamed; Badr, Amr Ahmed; Mostafa, Mostafa Abd Al-Azim

    2014-01-01

    The immune system has a cognitive ability to differentiate between healthy and unhealthy cells. The immune system response (ISR) is stimulated by a disorder in a temporary fuzzy state that oscillates between the healthy and unhealthy states. However, modeling the immune system is an enormous challenge; the paper introduces an extensive summary of how the immune system response functions, as an overview of a complex topic, in order to present the immune system as a cognitive intelligent agent. The homogeneity and perfection of the natural immune system have always stood out as the sought-after model that we attempted to imitate while building our proposed model of cognitive architecture. The paper divides the ISR into four logical phases, setting a computational architectural diagram for each phase and proceeding from functional perspectives (input, process, and output) and their consequences. The proposed architecture components are defined by matching biological operations with computational functions and hence with the framework of the paper. On the other hand, the architecture focuses on the interoperability of the main theoretical immunological perspectives (classic, cognitive, and danger theory) as related to computer science terminology. The paper presents a descriptive model of the immune system in order to clarify the nature of the response, which is deemed essential for building a hybrid computational model based on a cognitive intelligent agent perspective and inspired by natural biology. To that end, this paper highlights the ISR phases as applied to a case study on the hepatitis C virus, while illustrating our proposed architecture perspective.

  1. A Cognitive Computational Model Inspired by the Immune System Response

    PubMed Central

    Abdo Abd Al-Hady, Mohamed; Badr, Amr Ahmed; Mostafa, Mostafa Abd Al-Azim

    2014-01-01

    The immune system has a cognitive ability to differentiate between healthy and unhealthy cells. The immune system response (ISR) is stimulated by a disorder in a temporary fuzzy state that oscillates between the healthy and unhealthy states. However, modeling the immune system is an enormous challenge; the paper introduces an extensive summary of how the immune system response functions, as an overview of a complex topic, in order to present the immune system as a cognitive intelligent agent. The homogeneity and perfection of the natural immune system have always stood out as the sought-after model that we attempted to imitate while building our proposed model of cognitive architecture. The paper divides the ISR into four logical phases, setting a computational architectural diagram for each phase and proceeding from functional perspectives (input, process, and output) and their consequences. The proposed architecture components are defined by matching biological operations with computational functions and hence with the framework of the paper. On the other hand, the architecture focuses on the interoperability of the main theoretical immunological perspectives (classic, cognitive, and danger theory) as related to computer science terminology. The paper presents a descriptive model of the immune system in order to clarify the nature of the response, which is deemed essential for building a hybrid computational model based on a cognitive intelligent agent perspective and inspired by natural biology. To that end, this paper highlights the ISR phases as applied to a case study on the hepatitis C virus, while illustrating our proposed architecture perspective. PMID:25003131

  2. Nonunitary quantum computation in the ground space of local Hamiltonians

    NASA Astrophysics Data System (ADS)

    Usher, Naïri; Hoban, Matty J.; Browne, Dan E.

    2017-09-01

    A central result in the study of quantum Hamiltonian complexity is that the k-local Hamiltonian problem is quantum-Merlin-Arthur (QMA)-complete. In that problem, we must decide whether the lowest eigenvalue of a Hamiltonian is bounded below some value or above another, promised that one of these is true. Given the ground state of the Hamiltonian, a quantum computer can decide this question, even if the ground state itself may not be efficiently quantum preparable. Kitaev's proof of QMA-completeness encodes a unitary quantum circuit in QMA into the ground space of a Hamiltonian. However, we now have quantum computing models based on measurement instead of unitary evolution; furthermore, we can use postselected measurement as an additional computational tool. In this work, we generalize Kitaev's construction to allow for nonunitary evolution, including postselection. Furthermore, we consider a type of postselection under which the construction is consistent, which we call tame postselection. We consider the computational complexity consequences of this construction and then consider how the probability of the event upon which we postselect affects the gap between the ground-state energy and the energy of the first excited state of the corresponding Hamiltonian. We provide numerical evidence that the two are not immediately related by giving a family of circuits where the probability of the event upon which we postselect is exponentially small, but the gap in the energy levels of the Hamiltonian decreases only polynomially.

  3. Analysis of a highly birefringent asymmetric photonic crystal fibre based on a surface plasmon resonance sensor

    NASA Astrophysics Data System (ADS)

    Liu, Chao; Wang, Famei; Zheng, Shijie; Sun, Tao; Lv, Jingwei; Liu, Qiang; Yang, Lin; Mu, Haiwei; Chu, Paul K.

    2016-07-01

    A highly birefringent photonic crystal fibre is proposed and characterized based on a surface plasmon resonance sensor. The birefringence of the sensor is numerically analyzed by the finite-element method. In the numerical simulation, the resonance wavelength can be located directly at the point where the birefringence changes abruptly, and the depth of that abrupt change reflects the intensity of the excited surface plasmon. Consequently, the approach can accurately locate the resonance peak of the system without analyzing the loss spectrum. The simulated average sensitivity is as high as 1131 nm/RIU, corresponding to a resolution of 1 × 10^-4 RIU for this sensor. The approach therefore not only offers polarization independence and lower noble-metal consumption, but also better performance in terms of accuracy and computational efficiency.
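
    As a reading aid, the quoted sensitivity and resolution are related by the standard wavelength-interrogation formulas below, where Δλ_res is the resonance-wavelength shift caused by an analyte refractive-index change Δn_a and Δλ_min is the instrument wavelength resolution (the customary assumption Δλ_min = 0.1 nm is ours, not stated in the abstract):

      S_\lambda = \frac{\Delta\lambda_{\mathrm{res}}}{\Delta n_a}\ [\mathrm{nm/RIU}],
      \qquad
      R = \frac{\Delta n_a\,\Delta\lambda_{\min}}{\Delta\lambda_{\mathrm{res}}}
        = \frac{\Delta\lambda_{\min}}{S_\lambda}\ [\mathrm{RIU}].

    With S_\lambda = 1131 nm/RIU and Δλ_min = 0.1 nm, R ≈ 0.1/1131 ≈ 9 × 10^-5, consistent with the quoted 1 × 10^-4 RIU.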

  4. Engineering models for catastrophe risk and their application to insurance

    NASA Astrophysics Data System (ADS)

    Dong, Weimin

    2002-06-01

    Internationally, earthquake insurance, like other lines of insurance (fire, auto), has historically followed an actuarial approach, in which insurance rates are determined from historical loss experience. Because earthquakes are rare events with severe consequences, irrationally determined premium rates and a poor understanding of the scale of potential losses left many insurance companies insolvent after the Northridge earthquake in 1994. With recent advances in earth science, computer science and engineering, computerized loss estimation methodologies based on first principles have been developed to the point that losses from destructive earthquakes can be quantified with reasonable accuracy using scientific modeling techniques. This paper introduces how engineering models can help quantify earthquake risk and how the insurance industry can use this information to manage its risk in the United States and abroad.

  5. Architecture Design and Experimental Platform Demonstration of Optical Network based on OpenFlow Protocol

    NASA Astrophysics Data System (ADS)

    Xing, Fangyuan; Wang, Honghuan; Yin, Hongxi; Li, Ming; Luo, Shenzi; Wu, Chenguang

    2016-02-01

    With the extensive application of cloud computing and data centres, as well as constantly emerging services, bursty big-data traffic has brought huge challenges to optical networks. Consequently, the software-defined optical network (SDON), which combines optical networks with software-defined networking (SDN), has attracted much attention. In this paper, an OpenFlow-enabled optical node for use in optical cross-connects (OXCs) and reconfigurable optical add/drop multiplexers (ROADMs) is proposed. An open-source OpenFlow controller is extended with routing strategies. In addition, an experimental platform based on the OpenFlow protocol for software-defined optical networks is designed. The feasibility and availability of the OpenFlow-enabled optical nodes and the extended OpenFlow controller are validated by connectivity, protection switching and load balancing experiments on this test platform.

  6. Guidelines for computer security in general practice.

    PubMed

    Schattner, Peter; Pleteshner, Catherine; Bhend, Heinz; Brouns, Johan

    2007-01-01

    As general practice becomes increasingly computerised, data security becomes increasingly important for both patient health and the efficient operation of the practice. To develop guidelines for computer security in general practice based on a literature review, an analysis of available information on current practice and a series of key stakeholder interviews. While the guideline was produced in the context of Australian general practice, we have developed a template that is also relevant for other countries. Current data on computer security measures was sought from Australian divisions of general practice. Semi-structured interviews were conducted with general practitioners (GPs), the medical software industry, senior managers within government responsible for health IT (information technology) initiatives, technical IT experts, divisions of general practice and a member of a health information consumer group. The respondents were asked to assess both the likelihood and the consequences of potential risks in computer security being breached. The study suggested that the most important computer security issues in general practice were: the need for a nominated IT security coordinator; having written IT policies, including a practice disaster recovery plan; controlling access to different levels of electronic data; doing and testing backups; protecting against viruses and other malicious codes; installing firewalls; undertaking routine maintenance of hardware and software; and securing electronic communication, for example via encryption. This information led to the production of computer security guidelines, including a one-page summary checklist, which were subsequently distributed to all GPs in Australia. This paper maps out a process for developing computer security guidelines for general practice. The specific content will vary in different countries according to their levels of adoption of IT, and cultural, technical and other health service factors. Making these guidelines relevant to local contexts should help maximise their uptake.

  7. Blading Design for Axial Turbomachines

    DTIC Science & Technology

    1989-05-01

    Three-dimensional, viscous computation systems appear to have a long development period ahead, in which fluid shear stress modeling and computation time ... and n directions and T is the shear stress. As a consequence, the solution time is longer than for integral methods, dependent largely on the accuracy of ... distributions over airfoils is an adaptation of thin plate deflection theory from stress analysis. At the same time, it minimizes designer effort.

  8. Uncertainty quantification for environmental models

    USGS Publications Warehouse

    Hill, Mary C.; Lu, Dan; Kavetski, Dmitri; Clark, Martyn P.; Ye, Ming

    2012-01-01

    Environmental models are used to evaluate the fate of fertilizers in agricultural settings (including soil denitrification), the degradation of hydrocarbons at spill sites, and water supply for people and ecosystems in small to large basins and cities—to mention but a few applications of these models. They also play a role in understanding and diagnosing potential environmental impacts of global climate change. The models are typically mildly to extremely nonlinear. The persistent demand for enhanced dynamics and resolution to improve model realism [17] means that lengthy individual model execution times will remain common, notwithstanding continued enhancements in computer power. In addition, high-dimensional parameter spaces are often defined, which increases the number of model runs required to quantify uncertainty [2]. Some environmental modeling projects have access to extensive funding and computational resources; many do not. The many recent studies of uncertainty quantification in environmental model predictions have focused on uncertainties related to data error and sparsity of data, expert judgment expressed mathematically through prior information, poorly known parameter values, and model structure (see, for example, [1,7,9,10,13,18]). Approaches for quantifying uncertainty include frequentist (potentially with prior information [7,9]), Bayesian [13,18,19], and likelihood-based. A few of the numerous methods, including some sensitivity and inverse methods with consequences for understanding and quantifying uncertainty, are as follows: Bayesian hierarchical modeling and Bayesian model averaging; single-objective optimization with error-based weighting [7] and multi-objective optimization [3]; methods based on local derivatives [2,7,10]; screening methods like OAT (one at a time) and the method of Morris [14]; FAST (Fourier amplitude sensitivity testing) [14]; the Sobol' method [14]; randomized maximum likelihood [10]; Markov chain Monte Carlo (MCMC) [10]. There are also bootstrapping and cross-validation approaches. Sometimes analyses are conducted using surrogate models [12]. The availability of so many options can be confusing. Categorizing methods based on fundamental questions assists in communicating the essential results of uncertainty analyses to stakeholders. Such questions can focus on model adequacy (e.g., How well does the model reproduce observed system characteristics and dynamics?) and sensitivity analysis (e.g., What parameters can be estimated with available data? What observations are important to parameters and predictions? What parameters are important to predictions?), as well as on the uncertainty quantification (e.g., How accurate and precise are the predictions?). The methods can also be classified by the number of model runs required: few (10s to 1000s) or many (10,000s to 1,000,000s). Of the methods listed above, the most computationally frugal are generally those based on local derivatives; MCMC methods tend to be among the most computationally demanding. Surrogate models (emulators) do not necessarily produce computational frugality because many runs of the full model are generally needed to create a meaningful surrogate model. With this categorization, we can, in general, address all the fundamental questions mentioned above using either computationally frugal or demanding methods.
    Model development and analysis can thus be conducted consistently using either computationally frugal or demanding methods; alternatively, different fundamental questions can be addressed using methods that require different levels of effort. Based on this perspective, we pose the question: Can computationally frugal methods be useful companions to computationally demanding methods? The reliability of computationally frugal methods generally depends on the model being reasonably linear, which usually means smooth nonlinearities and the assumption of Gaussian errors; both tend to be more valid with more linear
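
    As an illustration of the computationally frugal end of that spectrum, the sketch below estimates local (derivative-based) parameter sensitivities by one-sided finite differences; the toy model, parameter values, and step size are all placeholders, not taken from the record.

      import numpy as np

      def local_sensitivities(model, params, rel_step=1e-2):
          """One-sided finite-difference sensitivities d(output)/d(parameter).

          `model` maps a parameter vector to a vector of predictions; only
          len(params) + 1 model runs are needed, which is what makes
          derivative-based methods computationally frugal.
          """
          params = np.asarray(params, dtype=float)
          base = np.asarray(model(params))
          sens = np.zeros((base.size, params.size))
          for j, p in enumerate(params):
              step = rel_step * max(abs(p), 1e-12)
              perturbed = params.copy()
              perturbed[j] += step
              sens[:, j] = (np.asarray(model(perturbed)) - base) / step
          return sens

      # toy model: two predictions depending nonlinearly on two parameters
      toy = lambda p: np.array([p[0] ** 2 + p[1], np.exp(0.1 * p[1])])
      print(local_sensitivities(toy, [2.0, 5.0]))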

  9. Information processing in echo state networks at the edge of chaos.

    PubMed

    Boedecker, Joschka; Obst, Oliver; Lizier, Joseph T; Mayer, N Michael; Asada, Minoru

    2012-09-01

    We investigate information processing in randomly connected recurrent neural networks. It has been shown previously that the computational capabilities of these networks are maximized when the recurrent layer is close to the border between a stable and an unstable dynamics regime, the so-called edge of chaos. The reasons, however, for this maximized performance are not completely understood. We adopt an information-theoretical framework and are for the first time able to quantify the computational capabilities between elements of these networks directly as they undergo the phase transition to chaos. Specifically, we present evidence that both information transfer and storage in the recurrent layer are maximized close to this phase transition, providing an explanation for why guiding the recurrent layer toward the edge of chaos is computationally useful. As a consequence, our study suggests self-organized ways of improving performance in recurrent neural networks, driven by input data. Moreover, the networks we study share important features with biological systems such as feedback connections and online computation on input streams. A key example is the cerebral cortex, which was shown to also operate close to the edge of chaos. Consequently, the behavior of model systems as studied here is likely to shed light on reasons why biological systems are tuned into this specific regime.
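
    A minimal echo state network sketch along these lines is shown below; scaling the recurrent weight matrix so that its spectral radius approaches 1 is the usual recipe for pushing the reservoir toward the edge of chaos. All sizes and constants here are illustrative and not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      def make_reservoir(n_units=200, spectral_radius=0.95, density=0.1):
          """Random sparse recurrent weight matrix rescaled to a target spectral radius."""
          W = rng.standard_normal((n_units, n_units))
          W *= rng.random((n_units, n_units)) < density          # sparsify
          W *= spectral_radius / max(abs(np.linalg.eigvals(W)))  # rescale
          return W

      def run_reservoir(W, W_in, inputs, leak=1.0):
          """Drive the reservoir with an input sequence; return all states."""
          x = np.zeros(W.shape[0])
          states = []
          for u in inputs:
              x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ u)
              states.append(x.copy())
          return np.array(states)

      W = make_reservoir()
      W_in = rng.standard_normal((W.shape[0], 1)) * 0.1
      u = rng.standard_normal((500, 1))          # toy input stream
      states = run_reservoir(W, W_in, u)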

  10. A computational perspective on autism

    PubMed Central

    Rosenberg, Ari; Patterson, Jaclyn Sky; Angelaki, Dora E.

    2015-01-01

    Autism is a neurodevelopmental disorder that manifests as a heterogeneous set of social, cognitive, motor, and perceptual symptoms. This system-wide pervasiveness suggests that, rather than narrowly impacting individual systems such as affection or vision, autism may broadly alter neural computation. Here, we propose that alterations in nonlinear, canonical computations occurring throughout the brain may underlie the behavioral characteristics of autism. One such computation, called divisive normalization, balances a neuron’s net excitation with inhibition reflecting the overall activity of the neuronal population. Through neural network simulations, we investigate how alterations in divisive normalization may give rise to autism symptomatology. Our findings show that a reduction in the amount of inhibition that occurs through divisive normalization can account for perceptual consequences of autism, consistent with the hypothesis of an increased ratio of neural excitation to inhibition (E/I) in the disorder. These results thus establish a bridge between an E/I imbalance and behavioral data on autism that is currently absent. Interestingly, our findings implicate the context-dependent, neuronal milieu as a key factor in autism symptomatology, with autism reflecting a less “social” neuronal population. Through a broader discussion of perceptual data, we further examine how altered divisive normalization may contribute to a wide array of the disorder’s behavioral consequences. These analyses show how a computational framework can provide insights into the neural basis of autism and facilitate the generation of falsifiable hypotheses. A computational perspective on autism may help resolve debates within the field and aid in identifying physiological pathways to target in the treatment of the disorder. PMID:26170299
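
    The canonical computation named above has a compact form, shown in the sketch below: each unit's driving input is divided by a pooled measure of population activity. Weakening the pool term is a toy stand-in for the reduced inhibition the authors discuss; the parameter names and values are illustrative, not the paper's.

      import numpy as np

      def divisive_normalization(drive, sigma=1.0, n=2.0, pool_weight=1.0):
          """Canonical normalization: each response divided by pooled population activity.

          pool_weight < 1 weakens the normalization pool, a toy analogue of the
          reduced inhibition discussed in the abstract (illustrative parameter).
          """
          drive = np.asarray(drive, dtype=float) ** n
          pool = pool_weight * drive.sum()
          return drive / (sigma ** n + pool)

      typical = divisive_normalization([1.0, 2.0, 4.0], pool_weight=1.0)
      weakened = divisive_normalization([1.0, 2.0, 4.0], pool_weight=0.25)
      print(typical, weakened)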

  11. Decision making under uncertainty: a quasimetric approach.

    PubMed

    N'Guyen, Steve; Moulin-Frier, Clément; Droulez, Jacques

    2013-01-01

    We propose a new approach for solving a class of discrete decision-making problems under uncertainty with positive cost. This issue concerns multiple and diverse fields such as engineering, economics, artificial intelligence, cognitive science and many others. Basically, an agent has to choose a single action or a series of actions from a set of options without knowing their consequences for sure. Schematically, two main approaches have been followed: either the agent learns which option is the correct one to choose in a given situation by trial and error, or the agent already has some knowledge of the possible consequences of its decisions, this knowledge generally being expressed as a conditional probability distribution. In the latter case, several optimal or suboptimal methods have been proposed to exploit this uncertain knowledge in various contexts. In this work, we propose a different approach based on the geometric intuition of distance. More precisely, we define a goal-independent quasimetric structure on the state space, taking into account both the cost function and the transition probability. We then compare precision and computation time with classical approaches.
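
    As a rough illustration of a goal-independent quasimetric on a state space, the sketch below computes directed shortest-path costs with Dijkstra's algorithm; such costs satisfy the quasimetric axioms (non-negativity, zero self-distance, triangle inequality) without requiring symmetry. The paper's construction additionally folds transition probabilities into the cost, which this sketch does not attempt; the toy graph is invented.

      import heapq

      def quasimetric_distances(graph, source):
          """Dijkstra over a directed, positively weighted state graph.

          Directed shortest-path costs form a quasimetric: d(x, y) need not
          equal d(y, x), yet the triangle inequality still holds.
          """
          dist = {source: 0.0}
          heap = [(0.0, source)]
          while heap:
              d, u = heapq.heappop(heap)
              if d > dist.get(u, float("inf")):
                  continue
              for v, cost in graph.get(u, []):
                  nd = d + cost
                  if nd < dist.get(v, float("inf")):
                      dist[v] = nd
                      heapq.heappush(heap, (nd, v))
          return dist

      # toy state space: note the asymmetry between 'a' and 'b'
      graph = {"a": [("b", 1.0)], "b": [("a", 3.0), ("c", 2.0)], "c": []}
      print(quasimetric_distances(graph, "a"))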

  12. Designing automation for human use: empirical studies and quantitative models.

    PubMed

    Parasuraman, R

    2000-07-01

    An emerging knowledge base of human performance research can provide guidelines for designing automation that can be used effectively by human operators of complex systems. Which functions should be automated and to what extent in a given system? A model for types and levels of automation that provides a framework and an objective basis for making such choices is described. The human performance consequences of particular types and levels of automation constitute primary evaluative criteria for automation design when using the model. Four human performance areas are considered: mental workload, situation awareness, complacency and skill degradation. Secondary evaluative criteria include such factors as automation reliability, the risks of decision/action consequences and the ease of systems integration. In addition to this qualitative approach, quantitative models can inform design. Several computational and formal models of human interaction with automation that have been proposed by various researchers are reviewed. An important future research need is the integration of qualitative and quantitative approaches. Application of these models provides an objective basis for designing automation for effective human use.

  13. First stereo video dataset with ground truth for remote car pose estimation using satellite markers

    NASA Astrophysics Data System (ADS)

    Gil, Gustavo; Savino, Giovanni; Pierini, Marco

    2018-04-01

    Leading causes of PTW (powered two-wheeler) crashes and near misses in urban areas are failures or delays in predicting the changing trajectories of other vehicles. Regrettably, misperception by both car drivers and motorcycle riders results in fatal or serious consequences for riders. Intelligent vehicles could provide early warning about possible collisions, helping to avoid the crash. There is evidence that stereo cameras can be used for estimating the heading angle of other vehicles, which is key to anticipating their imminent location, but there is limited heading ground truth data available in the public domain. Consequently, we employed a marker-based technique for creating ground truth of car pose and created a dataset for computer vision benchmarking purposes. This dataset of a moving vehicle collected from a statically mounted stereo camera is a simplification of a complex and dynamic reality, which serves as a test bed for car pose estimation algorithms. The dataset contains the accurate pose of the moving obstacle and realistic imagery, including texture-less and non-Lambertian surfaces (e.g. reflectance and transparency).

  14. Project-based teaching and other methods to make learning more attractive

    NASA Astrophysics Data System (ADS)

    Švecová, Libuše; Vlková, Iva

    2017-01-01

    This contribution presents the results of research carried out at secondary schools in the Moravian-Silesian Region. The research involved a total of 120 pupils and focused on project teaching, with an emphasis on pupil inquiry activity and the connection of their knowledge in the fields of physics and biology. To verify pupil inquiry activity, the tasks on the worksheets were designed specifically to measure physical quantities on the human body using computer-aided measuring processes. To support pupil inquiry activity, group work was selected as the organizational method of teaching. Audio recordings and pedagogical observations were used as the research tools for the assessment and subsequent evaluation of the acquired data.

  15. Simulation of polymer translocation through protein channels

    PubMed Central

    Muthukumar, M.; Kong, C. Y.

    2006-01-01

    A modeling algorithm is presented to compute simultaneously polymer conformations and ionic current, as single polymer molecules undergo translocation through protein channels. The method is based on a combination of Langevin dynamics for coarse-grained models of polymers and the Poisson–Nernst–Planck formalism for ionic current. For the illustrative example of ssDNA passing through the α-hemolysin pore, vivid details of conformational fluctuations of the polymer inside the vestibule and β-barrel compartments of the protein pore, and their consequent effects on the translocation time and extent of blocked ionic current are presented. In addition to yielding insights into several experimentally reported puzzles, our simulations offer experimental strategies to sequence polymers more efficiently. PMID:16567657

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lue Xing; Sun Kun; Wang Pan

    In the framework of Bell-polynomial manipulations, under investigation here are three single-field bilinearizable equations: the (1+1)-dimensional shallow water wave model, the Boiti-Leon-Manna-Pempinelli model, and the (2+1)-dimensional Sawada-Kotera model. Based on the concept of scale invariance, a direct and unifying Bell-polynomial scheme is employed to obtain the Baecklund transformations and Lax pairs associated with these three soliton equations. Note that the Bell-polynomial expressions and Bell-polynomial-typed Baecklund transformations for these three soliton equations can be, respectively, cast into bilinear equations and bilinear Baecklund transformations with symbolic computation. Consequently, it is also shown that the Bell-polynomial-typed Baecklund transformations can be linearized into the corresponding Lax pairs.

  17. Creativity, information, and consciousness: The information dynamics of thinking.

    PubMed

    Wiggins, Geraint A

    2018-05-07

    This paper presents a theory of the basic operation of mind, Information Dynamics of Thinking, which is intended for computational implementation and thence empirical testing. It is based on the information theory of Shannon, and treats the mind/brain as an information processing organ that aims to be information-efficient, in that it predicts its world, so as to use information efficiently, and regularly re-represents it, so as to store information efficiently. The theory is presented in context of a background review of various research areas that impinge upon its development. Consequences of the theory and testable hypotheses arising from it are discussed. Copyright © 2018. Published by Elsevier B.V.

  18. ConGEMs: Condensed Gene Co-Expression Module Discovery Through Rule-Based Clustering and Its Application to Carcinogenesis.

    PubMed

    Mallik, Saurav; Zhao, Zhongming

    2017-12-28

    For transcriptomic analysis, there are numerous microarray-based genomic data sets, especially those generated for cancer research. The typical analysis measures the difference between a cancer sample group and a matched control group for each transcript or gene. Association rule mining is used to discover interesting item sets through a rule-based methodology, and thus has advantages for finding causal-effect relationships between transcripts. In this work, we introduce two new rule-based similarity measures, the weighted rank-based Jaccard and Cosine measures, and then propose a novel computational framework to detect condensed gene co-expression modules (ConGEMs) through an association rule-based learning system and the weighted similarity scores. In practice, the list of evolved condensed markers, which consists of both singular and complex markers, depends on the corresponding condensed gene sets in either the antecedent or the consequent of the rules of the resultant modules. In our evaluation, these markers could be supported by literature evidence, KEGG (Kyoto Encyclopedia of Genes and Genomes) pathways and Gene Ontology annotations. Specifically, we preliminarily identified differentially expressed genes using an empirical Bayes test. A recently developed algorithm, RANWAR, was then utilized to determine the association rules from these genes. Based on that, we computed the integrated similarity scores of these rule-based similarity measures between each rule pair, and the resultant scores were used for clustering to identify the co-expressed rule modules. We applied our method to a gene expression dataset for lung squamous cell carcinoma and a genome methylation dataset for uterine cervical carcinogenesis. Our proposed module discovery method produced better results than traditional gene-module discovery measures. In summary, our proposed rule-based method is useful for exploring biomarker modules from transcriptomic data.
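
    To make the flavour of such rule-pair similarity scores concrete, the sketch below computes a generic weighted Jaccard and a cosine similarity over sparse gene-weight vectors. The paper's specific rank-based weighting is not reproduced here, and the gene names are invented.

      import numpy as np

      def weighted_jaccard(weights_a, weights_b):
          """Weighted Jaccard similarity between two {gene: weight} dictionaries.

          Generic form: sum of element-wise minima over sum of maxima.
          """
          genes = set(weights_a) | set(weights_b)
          mins = sum(min(weights_a.get(g, 0.0), weights_b.get(g, 0.0)) for g in genes)
          maxs = sum(max(weights_a.get(g, 0.0), weights_b.get(g, 0.0)) for g in genes)
          return mins / maxs if maxs > 0 else 0.0

      def cosine_similarity(weights_a, weights_b):
          """Cosine similarity on the same sparse gene-weight representation."""
          genes = sorted(set(weights_a) | set(weights_b))
          a = np.array([weights_a.get(g, 0.0) for g in genes])
          b = np.array([weights_b.get(g, 0.0) for g in genes])
          denom = np.linalg.norm(a) * np.linalg.norm(b)
          return float(a @ b / denom) if denom > 0 else 0.0

      rule1 = {"EGFR": 0.9, "KRAS": 0.7, "TP53": 0.4}   # hypothetical rule gene sets
      rule2 = {"KRAS": 0.8, "TP53": 0.5, "MYC": 0.3}
      print(weighted_jaccard(rule1, rule2), cosine_similarity(rule1, rule2))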

  19. Impaired Flexible Reward-Based Decision-Making in Binge Eating Disorder: Evidence from Computational Modeling and Functional Neuroimaging.

    PubMed

    Reiter, Andrea M F; Heinze, Hans-Jochen; Schlagenhauf, Florian; Deserno, Lorenz

    2017-02-01

    Despite its clinical relevance and the recent recognition as a diagnostic category in the DSM-5, binge eating disorder (BED) has rarely been investigated from a cognitive neuroscientific perspective targeting a more precise neurocognitive profiling of the disorder. BED patients suffer from a lack of behavioral control during recurrent binge eating episodes and thus fail to adapt their behavior in the face of negative consequences, e.g., high risk for obesity. To examine impairments in flexible reward-based decision-making, we exposed BED patients (n=22) and matched healthy individuals (n=22) to a reward-guided decision-making task during functional magnetic resonance imaging (fMRI). Performing fMRI analysis informed via computational modeling of choice behavior, we were able to identify specific signatures of altered decision-making in BED. On the behavioral level, we observed impaired behavioral adaptation in BED, which was due to enhanced switching behavior, a putative deficit in striking a balance between exploration and exploitation appropriately. This was accompanied by diminished activation related to exploratory decisions in the anterior insula/ventro-lateral prefrontal cortex. Moreover, although so-called model-free reward prediction errors remained intact, representation of ventro-medial prefrontal learning signatures, incorporating inference on unchosen options, was reduced in BED, which was associated with successful decision-making in the task. On the basis of a computational psychiatry account, the presented findings contribute to defining a neurocognitive phenotype of BED.
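
    For readers unfamiliar with this style of computational modeling, the sketch below simulates a generic model-free learner (delta-rule value updates with a softmax choice rule). It is not the authors' model; the learning rate, inverse temperature, and reward schedule are arbitrary.

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate_choices(rewards, alpha=0.3, beta=3.0):
          """Toy model-free learner: delta-rule value updates with softmax choice.

          alpha is the learning rate, beta the inverse temperature controlling
          the exploration/exploitation balance (illustrative parameters).
          """
          n_trials, n_options = rewards.shape
          q = np.zeros(n_options)
          choices, prediction_errors = [], []
          for t in range(n_trials):
              p = np.exp(beta * q) / np.exp(beta * q).sum()   # softmax policy
              c = rng.choice(n_options, p=p)
              delta = rewards[t, c] - q[c]                    # reward prediction error
              q[c] += alpha * delta
              choices.append(c)
              prediction_errors.append(delta)
          return np.array(choices), np.array(prediction_errors)

      # two-armed bandit with reward probabilities 0.7 and 0.3
      rewards = rng.binomial(1, [0.7, 0.3], size=(200, 2)).astype(float)
      choices, pes = simulate_choices(rewards)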

  20. Impaired Flexible Reward-Based Decision-Making in Binge Eating Disorder: Evidence from Computational Modeling and Functional Neuroimaging

    PubMed Central

    Reiter, Andrea M F; Heinze, Hans-Jochen; Schlagenhauf, Florian; Deserno, Lorenz

    2017-01-01

    Despite its clinical relevance and the recent recognition as a diagnostic category in the DSM-5, binge eating disorder (BED) has rarely been investigated from a cognitive neuroscientific perspective targeting a more precise neurocognitive profiling of the disorder. BED patients suffer from a lack of behavioral control during recurrent binge eating episodes and thus fail to adapt their behavior in the face of negative consequences, e.g., high risk for obesity. To examine impairments in flexible reward-based decision-making, we exposed BED patients (n=22) and matched healthy individuals (n=22) to a reward-guided decision-making task during functional magnetic resonance imaging (fMRI). Performing fMRI analysis informed via computational modeling of choice behavior, we were able to identify specific signatures of altered decision-making in BED. On the behavioral level, we observed impaired behavioral adaptation in BED, which was due to enhanced switching behavior, a putative deficit in striking a balance between exploration and exploitation appropriately. This was accompanied by diminished activation related to exploratory decisions in the anterior insula/ventro-lateral prefrontal cortex. Moreover, although so-called model-free reward prediction errors remained intact, representation of ventro-medial prefrontal learning signatures, incorporating inference on unchosen options, was reduced in BED, which was associated with successful decision-making in the task. On the basis of a computational psychiatry account, the presented findings contribute to defining a neurocognitive phenotype of BED. PMID:27301429

  1. The impact of 14-nm photomask uncertainties on computational lithography solutions

    NASA Astrophysics Data System (ADS)

    Sturtevant, John; Tejnil, Edita; Lin, Tim; Schultze, Steffen; Buck, Peter; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian

    2013-04-01

    Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models, which must balance accuracy demands with simulation runtime boundary conditions, rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. While certain system input variables, such as scanner numerical aperture, can be empirically tuned to wafer CD data over a small range around the presumed set point, it can be dangerous to do so since CD errors can alias across multiple input variables. Therefore, many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine with a simulation sensitivity study the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to represent accurately in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, while changes in the other variables are postulated, highlighting the need for improved metrology and awareness.

  2. A pilot evaluation of a computer-based psychometric test battery designed to detect impairment in patients with cirrhosis.

    PubMed

    Cook, Nicola A; Kim, Jin Un; Pasha, Yasmin; Crossey, Mary Me; Schembri, Adrian J; Harel, Brian T; Kimhofer, Torben; Taylor-Robinson, Simon D

    2017-01-01

    Psychometric testing is used to identify patients with cirrhosis who have developed hepatic encephalopathy (HE). Most batteries consist of a series of paper-and-pencil tests, which are cumbersome for most clinicians. A modern, easy-to-use, computer-based battery would be a helpful clinical tool, given that in its minimal form, HE has an impact on both patients' quality of life and the ability to drive and operate machinery (with societal consequences). We compared the Cogstate™ computer battery testing with the Psychometric Hepatic Encephalopathy Score (PHES) tests, with a view to simplify the diagnosis. This was a prospective study of 27 patients with histologically proven cirrhosis. An analysis of psychometric testing was performed using accuracy of task performance and speed of completion as primary variables to create a correlation matrix. A stepwise linear regression analysis was performed with backward elimination, using analysis of variance. Strong correlations were found between the international shopping list, international shopping list delayed recall of Cogstate and the PHES digit symbol test. The Shopping List Tasks were the only tasks that consistently had P values of <0.05 in the linear regression analysis. Subtests of the Cogstate battery correlated very strongly with the digit symbol component of PHES in discriminating severity of HE. These findings would indicate that components of the current PHES battery with the international shopping list tasks of Cogstate would be discriminant and have the potential to be used easily in clinical practice.

  3. Building a base map with AutoCAD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flarity, S.J.

    1989-12-01

    The fundamental step in the exploration process is building a base map. Consequently, any serious computer exploration program should be capable of providing base maps. Data used in constructing base maps are available from commercial sources such as Tobin and Petroleum Information. These data sets include line and well data, the line data being latitude-longitude vectors and the well data any identifying text information for wells and their locations. AutoCAD is a commercial program useful in building base maps. Its features include infinite zoom and pan capability, layering, block definition, text dialog boxes, and a command language, AutoLisp. AutoLisp provides more power by allowing the geologist to modify the way the program works. Three AutoLisp routines presented here allow geologists to construct a geologic base map from raw Tobin data. The first program, WELLS.LSP, sets up the map environment for the subsequent programs, WELLADD.LSP and LINEADD.LSP. WELLADD.LSP reads the Tobin data and spots the well symbols and the identifying information. LINEADD.LSP performs the same task on the line and textual information contained within the data set.

  4. On thinning of chains in MCMC

    USGS Publications Warehouse

    Link, William A.; Eaton, Mitchell J.

    2012-01-01

    We discuss the background and prevalence of thinning, illustrate its consequences, discuss circumstances when it might be regarded as a reasonable option, and recommend against routine thinning of chains unless necessitated by computer memory limitations.
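
    A small numerical illustration of the point (using an assumed AR(1) autocorrelation structure, not data from the paper): thinning shortens the stored chain but also discards draws, so the Monte Carlo error of posterior summaries can only stay the same or grow.

      import numpy as np

      rng = np.random.default_rng(2)

      def ar1_chain(n, rho=0.9):
          """Toy autocorrelated 'MCMC' chain (AR(1) process)."""
          x = np.zeros(n)
          for t in range(1, n):
              x[t] = rho * x[t - 1] + rng.standard_normal()
          return x

      def batch_means_se(x, n_batches=50):
          """Crude Monte Carlo standard error of the mean via batch means."""
          batches = np.array_split(x, n_batches)
          return np.std([b.mean() for b in batches], ddof=1) / np.sqrt(n_batches)

      chain = ar1_chain(50_000)
      thinned = chain[::10]          # keep every 10th draw

      print("full   :", chain.mean(), "+/-", batch_means_se(chain), "n =", chain.size)
      print("thinned:", thinned.mean(), "+/-", batch_means_se(thinned), "n =", thinned.size)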

  5. New 3D model for dynamics modeling

    NASA Astrophysics Data System (ADS)

    Perez, Alain

    1994-05-01

    The wrist articulation is one of the most complex mechanical systems of the human body. It is composed of eight bones rolling and sliding along their surfaces and along the faces of the five metacarpals of the hand and the two bones of the arm. Wrist dynamics are nevertheless fundamental to hand movement, but the system is so complex that it remains incompletely explored. This work is part of a new concept of computer-assisted surgery, which consists of developing computer models to improve surgical procedures by predicting their consequences. The modeling of wrist dynamics is based first on a static three-dimensional model of the bones. This 3D model must optimise the collision detection procedure, which is the necessary step for estimating the physical contact constraints. Because other available computer vision models do not fit this problem with enough precision, a new 3D model has been developed based on the medial axis of the digital distance map of the reconstructed bone volumes. The collision detection procedure is then simplified, since contacts are detected between spheres. Experiments with this original 3D dynamic model produce realistic computer animation images of solids in contact. It is now necessary to detect ligaments in digital medical images and to model them in order to complete the wrist model.
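
    A minimal sketch of the simplification described above, assuming each bone has already been reduced to a set of medial-axis spheres; all coordinates and radii below are random placeholders.

      import numpy as np

      def sphere_set_collision(centers_a, radii_a, centers_b, radii_b):
          """Detect contact between two bones represented as sets of spheres.

          Two bones are considered in contact if any pair of spheres overlaps,
          i.e. the centre distance is smaller than the sum of the radii.
          """
          # pairwise centre distances, shape (len(a), len(b))
          d = np.linalg.norm(centers_a[:, None, :] - centers_b[None, :, :], axis=-1)
          overlap = d < (radii_a[:, None] + radii_b[None, :])
          return np.argwhere(overlap)          # indices of overlapping sphere pairs

      rng = np.random.default_rng(6)
      bone1 = rng.random((50, 3)), rng.uniform(0.05, 0.1, 50)
      bone2 = rng.random((50, 3)) + 0.5, rng.uniform(0.05, 0.1, 50)
      print(sphere_set_collision(bone1[0], bone1[1], bone2[0], bone2[1]))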

  6. Identifying local structural states in atomic imaging by computer vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laanait, Nouamane; Ziatdinov, Maxim; He, Qian

    The availability of atomically resolved imaging modalities enables an unprecedented view into the local structural states of materials, which manifest themselves by deviations from the fundamental assumptions of periodicity and symmetry. Consequently, approaches that aim to extract these local structural states from atomic imaging data with minimal assumptions regarding the average crystallographic configuration of a material are indispensable to advances in structural and chemical investigations of materials. Here, we present an approach to identify and classify local structural states that is rooted in computer vision. This approach introduces a definition of a structural state that is composed of both local and non-local information extracted from atomically resolved images, and is wholly untethered from the familiar concepts of symmetry and periodicity. Instead, this approach relies on computer vision techniques such as feature detection, and concepts such as scale-invariance. We present the fundamental aspects of local structural state extraction and classification by application to simulated scanning transmission electron microscopy images, and analyze the robustness of this approach in the presence of common instrumental factors such as noise, limited spatial resolution, and weak contrast. Finally, we apply this computer vision-based approach for the unsupervised detection and classification of local structural states in an experimental electron micrograph of a complex oxides interface, and a scanning tunneling micrograph of a defect engineered multilayer graphene surface.

  7. Identifying local structural states in atomic imaging by computer vision

    DOE PAGES

    Laanait, Nouamane; Ziatdinov, Maxim; He, Qian; ...

    2016-11-02

    The availability of atomically resolved imaging modalities enables an unprecedented view into the local structural states of materials, which manifest themselves by deviations from the fundamental assumptions of periodicity and symmetry. Consequently, approaches that aim to extract these local structural states from atomic imaging data with minimal assumptions regarding the average crystallographic configuration of a material are indispensable to advances in structural and chemical investigations of materials. Here, we present an approach to identify and classify local structural states that is rooted in computer vision. This approach introduces a definition of a structural state that is composed of both local and non-local information extracted from atomically resolved images, and is wholly untethered from the familiar concepts of symmetry and periodicity. Instead, this approach relies on computer vision techniques such as feature detection, and concepts such as scale-invariance. We present the fundamental aspects of local structural state extraction and classification by application to simulated scanning transmission electron microscopy images, and analyze the robustness of this approach in the presence of common instrumental factors such as noise, limited spatial resolution, and weak contrast. Finally, we apply this computer vision-based approach for the unsupervised detection and classification of local structural states in an experimental electron micrograph of a complex oxides interface, and a scanning tunneling micrograph of a defect engineered multilayer graphene surface.

  8. Characterizing the heterogeneity of tumor tissues from spatially resolved molecular measures

    PubMed Central

    Zavodszky, Maria I.

    2017-01-01

    Background Tumor heterogeneity can manifest itself by sub-populations of cells having distinct phenotypic profiles expressed as diverse molecular, morphological and spatial distributions. This inherent heterogeneity poses challenges in terms of diagnosis, prognosis and efficient treatment. Consequently, tools and techniques are being developed to properly characterize and quantify tumor heterogeneity. Multiplexed immunofluorescence (MxIF) is one such technology that offers molecular insight into both inter-individual and intratumor heterogeneity. It enables the quantification of both the concentration and spatial distribution of 60+ proteins across a tissue section. Upon bioimage processing, protein expression data can be generated for each cell from a tissue field of view. Results The Multi-Omics Heterogeneity Analysis (MOHA) tool was developed to compute tissue heterogeneity metrics from MxIF spatially resolved tissue imaging data. This technique computes the molecular state of each cell in a sample based on a pathway or gene set. Spatial states are then computed based on the spatial arrangements of the cells as distinguished by their respective molecular states. MOHA computes tissue heterogeneity metrics from the distributions of these molecular and spatially defined states. A colorectal cancer cohort of approximately 700 subjects with MxIF data is presented to demonstrate the MOHA methodology. Within this dataset, statistically significant correlations were found between the intratumor AKT pathway state diversity and cancer stage and histological tumor grade. Furthermore, intratumor spatial diversity metrics were found to correlate with cancer recurrence. Conclusions MOHA provides a simple and robust approach to characterize molecular and spatial heterogeneity of tissues. Research projects that generate spatially resolved tissue imaging data can take full advantage of this useful technique. The MOHA algorithm is implemented as a freely available R script (see supplementary information). PMID:29190747
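
    One common way to turn a distribution of per-cell states into a single heterogeneity number is a Shannon diversity index, sketched below; this is illustrative and not necessarily the exact metric implemented in MOHA, and the state labels are invented.

      import numpy as np
      from collections import Counter

      def state_diversity(cell_states):
          """Shannon diversity of per-cell state labels (higher = more heterogeneous)."""
          counts = np.array(list(Counter(cell_states).values()), dtype=float)
          p = counts / counts.sum()
          return float(-(p * np.log(p)).sum())

      # toy example: hypothetical AKT-pathway state per cell in one tissue section
      cells = ["high"] * 60 + ["intermediate"] * 30 + ["low"] * 10
      print(state_diversity(cells))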

  9. Online Self-Administered Cognitive Testing Using the Amsterdam Cognition Scan: Establishing Psychometric Properties and Normative Data.

    PubMed

    Feenstra, Heleen Em; Vermeulen, Ivar E; Murre, Jaap Mj; Schagen, Sanne B

    2018-05-30

    Online tests enable efficient self-administered assessments and consequently facilitate large-scale data collection for many fields of research. The Amsterdam Cognition Scan is a new online neuropsychological test battery that measures a broad variety of cognitive functions. The aims of this study were to evaluate the psychometric properties of the Amsterdam Cognition Scan and to establish regression-based normative data. The Amsterdam Cognition Scan was self-administered twice from home, with an interval of 6 weeks, by 248 healthy Dutch-speaking adults aged 18 to 81 years. Test-retest reliability was moderate to high and comparable with that of equivalent traditional tests (intraclass correlation coefficients: .45 to .80; .83 for the Amsterdam Cognition Scan total score). Multiple regression analyses indicated that (1) participants' age negatively influenced all (12) cognitive measures, (2) gender was associated with performance on six measures, and (3) education level was positively associated with performance on four measures. In addition, we observed influences of tested computer skills and of self-reported amount of computer use on cognitive performance. Demographic characteristics that proved to influence Amsterdam Cognition Scan test performance were included in regression-based predictive formulas to establish demographically adjusted normative data. Initial results from a healthy adult sample indicate that the Amsterdam Cognition Scan has high usability and can give reliable measures of various generic cognitive ability areas. For future use, the influence of computer skills and experience should be further studied, and for repeated measurements, computer configuration should be consistent. The reported normative data allow for initial interpretation of Amsterdam Cognition Scan performances. ©Heleen EM Feenstra, Ivar E Vermeulen, Jaap MJ Murre, Sanne B Schagen. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 30.05.2018.
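
    Regression-based normative data of the kind described above typically work as follows: fit test scores on demographics in the normative sample, then express a new examinee's score as a deviation from the demographically predicted value in residual standard deviation units. The sketch below uses ordinary least squares on invented toy data; it is not the authors' published formula.

      import numpy as np

      def fit_norms(X, scores):
          """Fit a linear model of test score on demographics (OLS via lstsq)."""
          X1 = np.column_stack([np.ones(len(X)), X])          # add intercept
          beta, *_ = np.linalg.lstsq(X1, scores, rcond=None)
          resid_sd = np.std(scores - X1 @ beta, ddof=X1.shape[1])
          return beta, resid_sd

      def demographically_adjusted_z(x, score, beta, resid_sd):
          """Observed minus demographically predicted score, in residual SD units."""
          predicted = beta[0] + np.dot(beta[1:], x)
          return (score - predicted) / resid_sd

      # toy normative sample: columns are age, gender (0/1), education level
      rng = np.random.default_rng(3)
      X = np.column_stack([rng.uniform(18, 81, 250),
                           rng.integers(0, 2, 250),
                           rng.integers(1, 8, 250)])
      scores = 50 - 0.2 * X[:, 0] + 1.5 * X[:, 2] + rng.normal(0, 5, 250)
      beta, sd = fit_norms(X, scores)
      print(demographically_adjusted_z([65, 1, 4], 35.0, beta, sd))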

  10. Consequences of Base Time for Redundant Signals Experiments

    PubMed Central

    Townsend, James T.; Honey, Christopher

    2007-01-01

    We report analytical and computational investigations into the effects of base time on the diagnosticity of two popular theoretical tools in the redundant signals literature: (1) the race model inequality and (2) the capacity coefficient. We show analytically and without distributional assumptions that the presence of base time decreases the sensitivity of both of these measures to model violations. We further use simulations to investigate the statistical power of model selection tools based on the race model inequality, both with and without base time. Base time decreases statistical power, and biases the race model test toward conservatism. The magnitude of this biasing effect increases as we increase the proportion of total reaction time variance contributed by base time. We marshal empirical evidence to suggest that the proportion of reaction time variance contributed by base time is relatively small, and that the effects of base time on the diagnosticity of our model-selection tools are therefore likely to be minor. However, uncertainty remains concerning the magnitude and even the definition of base time. Experimentalists should continue to be alert to situations in which base time may contribute a large proportion of the total reaction time variance. PMID:18670591
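
    For reference, the two measures discussed are commonly written as follows in this literature, with F the response-time distribution function and H(t) = -\ln[1 - F(t)] the cumulative hazard; observed response time is modeled as a decision component plus an additive base time component:

      F_{AB}(t) \;\le\; F_{A}(t) + F_{B}(t) \quad \text{for all } t
        \qquad \text{(race model inequality)},

      C(t) \;=\; \frac{H_{AB}(t)}{H_{A}(t) + H_{B}(t)}
        \qquad \text{(capacity coefficient)},

      RT \;=\; D + T_{B}
        \qquad \text{(decision time } D \text{ plus base time } T_{B}\text{)}.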

  11. Office ergonomics: deficiencies in computer workstation design.

    PubMed

    Shikdar, Ashraf A; Al-Kindi, Mahmoud A

    2007-01-01

    The objective of this research was to study and identify ergonomic deficiencies in computer workstation design in typical offices. Physical measurements and a questionnaire were used to study 40 workstations. Major ergonomic deficiencies were found in physical design and layout of the workstations, employee postures, work practices, and training. The consequences in terms of user health and other problems were significant. Forty-five percent of the employees used nonadjustable chairs, 48% of computers faced windows, 90% of the employees used computers more than 4 hrs/day, 45% of the employees adopted bent and unsupported back postures, and 20% used office tables for computers. Major problems reported were eyestrain (58%), shoulder pain (45%), back pain (43%), arm pain (35%), wrist pain (30%), and neck pain (30%). These results indicated serious ergonomic deficiencies in office computer workstation design, layout, and usage. Strategies to reduce or eliminate ergonomic deficiencies in computer workstation design were suggested.

  12. A boundary integral method for numerical computation of radar cross section of 3D targets using hybrid BEM/FEM with edge elements

    NASA Astrophysics Data System (ADS)

    Dodig, H.

    2017-11-01

    This contribution presents a boundary integral formulation for the numerical computation of the time-harmonic radar cross section of 3D targets. The method relies on a hybrid edge-element BEM/FEM to compute near-field edge element coefficients that are associated with the near electric and magnetic fields at the boundary of the computational domain. A special boundary integral formulation is presented that computes the radar cross section directly from these edge element coefficients. Consequently, there is no need for the near-to-far-field transformation (NTFFT), which is a common step in RCS computations. By the end of the paper it is demonstrated that the formulation yields accurate results for canonical models such as spheres, cubes, cones and pyramids. The method demonstrates accuracy even in the case of a dielectrically coated PEC sphere at an interior resonance frequency, which is a common problem for computational electromagnetics codes.

  13. Economic consequences of aviation system disruptions: A reduced-form computable general equilibrium analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Zhenhua; Rose, Adam Z.; Prager, Fynnwin

    The state-of-the-art approach to economic consequence analysis (ECA) is computable general equilibrium (CGE) modeling. However, such models contain thousands of equations and cannot readily be incorporated into the computerized systems used by policy analysts to yield estimates of the economic impacts of various types of transportation system failures due to natural hazards, human-related attacks or technological accidents. This paper presents a reduced-form approach to simplify the analytical content of CGE models to make them more transparent and enhance their utilization potential. The reduced-form CGE analysis is conducted by first running simulations one hundred times, varying key parameters, such as the magnitude of the initial shock, duration, location, remediation, and resilience, according to a Latin Hypercube sampling procedure. Statistical analysis is then applied to the “synthetic data” results in the form of both ordinary least squares and quantile regression. The analysis yields linear equations that are incorporated into a computerized system and utilized along with Monte Carlo simulation methods for propagating uncertainties in economic consequences. Although our demonstration and discussion focuses on aviation system disruptions caused by terrorist attacks, the approach can be applied to a broad range of threat scenarios.
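
    The workflow described above can be mimicked end-to-end in a few lines: draw parameter settings with a Latin Hypercube design, run the consequence model on each draw, and regress the synthetic losses on the inputs to obtain a reduced-form equation. Everything below (the stand-in loss function, sample size, and parameter names) is illustrative rather than the authors' model.

      import numpy as np

      rng = np.random.default_rng(4)

      def latin_hypercube(n_samples, n_params):
          """Simple Latin Hypercube sample on [0, 1]^n_params."""
          cut = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
          return np.column_stack([rng.permutation(cut) for _ in range(n_params)])

      # Stand-in for one full CGE model run (the real model has thousands of
      # equations); hypothetical inputs: shock magnitude, duration, resilience.
      def cge_loss(shock, duration, resilience):
          return 100 * shock * duration * (1.2 - resilience) + rng.normal(0, 1, size=np.shape(shock))

      U = latin_hypercube(100, 3)
      shock, duration, resilience = U.T
      losses = cge_loss(shock, duration, resilience)

      # Reduced form: ordinary least squares on the synthetic data
      X = np.column_stack([np.ones(len(U)), shock, duration, resilience])
      coef, *_ = np.linalg.lstsq(X, losses, rcond=None)
      print("reduced-form coefficients:", coef)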

  14. Metal Artifact Reduction in X-ray Computed Tomography Using Computer-Aided Design Data of Implants as Prior Information.

    PubMed

    Ruth, Veikko; Kolditz, Daniel; Steiding, Christian; Kalender, Willi A

    2017-06-01

    The performance of metal artifact reduction (MAR) methods in x-ray computed tomography (CT) suffers from incorrect identification of metallic implants in the artifact-affected volumetric images. The aim of this study was to investigate potential improvements of state-of-the-art MAR methods by using prior information on geometry and material of the implant. The influence of a novel prior knowledge-based segmentation (PS) compared with threshold-based segmentation (TS) on 2 MAR methods (linear interpolation [LI] and normalized-MAR [NORMAR]) was investigated. The segmentation is the initial step of both MAR methods. Prior knowledge-based segmentation uses 3-dimensional registered computer-aided design (CAD) data as prior knowledge to estimate the correct position and orientation of the metallic objects. Threshold-based segmentation uses an adaptive threshold to identify metal. Subsequently, for LI and NORMAR, the selected voxels are projected into the raw data domain to mark metal areas. Attenuation values in these areas are replaced by different interpolation schemes followed by a second reconstruction. Finally, the previously selected metal voxels are replaced by the metal voxels determined by PS or TS in the initial reconstruction. First, we investigated in an elaborate phantom study if the knowledge of the exact implant shape extracted from the CAD data provided by the manufacturer of the implant can improve the MAR result. Second, the leg of a human cadaver was scanned using a clinical CT system before and after the implantation of an artificial knee joint. The results were compared regarding segmentation accuracy, CT number accuracy, and the restoration of distorted structures. The use of PS improved the efficacy of LI and NORMAR compared with TS. Artifacts caused by insufficient segmentation were reduced, and additional information was made available within the projection data. The estimation of the implant shape was more exact and not dependent on a threshold value. Consequently, the visibility of structures was improved when comparing the new approach to the standard method. This was further confirmed by improved CT value accuracy and reduced image noise. The PS approach based on prior implant information provides image quality which is superior to TS-based MAR, especially when the shape of the metallic implant is complex. The new approach can be useful for improving MAR methods and dose calculations within radiation therapy based on the MAR corrected CT images.

  15. Evaluation of flexural strength and surface properties of prepolymerized CAD/CAM PMMA-based polymers used for digital 3D complete dentures.

    PubMed

    Arslan, Mustafa; Murat, Sema; Alp, Gulce; Zaimoglu, Ali

    2018-01-01

    The objectives of this in vitro study were to evaluate the flexural strength (FS), surface roughness (Ra), and hydrophobicity of polymethylmethacrylate (PMMA)-based computer-aided design/computer-aided manufacturing (CAD/CAM) polymers and to compare the properties of different CAD/CAM PMMA-based polymers with conventional heat-polymerized PMMA following thermal cycling. Twenty rectangular-shaped specimens (64 × 10 × 3.3 mm) were fabricated from three CAD/CAM PMMA-based polymers (M-PM Disc [M], AvaDent Puck Disc [A], and Pink CAD/CAM Disc Polident [P], and one conventional heat-polymerized PMMA (Promolux [C]), according to ISO 20795-1:2013 standards. The specimens were divided into two subgroups (n = 10), a control and a thermocycled group. The specimens in the thermocycled group were subjected to 5000 thermal cycling procedures (5 to 55°C; 30 s dwell times). The Ra value was measured using a profilometer. Contact angle (CA) was assessed using the sessile drop method to evaluate surface hydrophobicity. In addition, the FS of the specimens was tested in a universal testing machine at a crosshead speed of 1.0 mm/min. Surface texture of the materials was assessed using scanning electron microscope (SEM). The data were analyzed using two-way analysis of variance (ANOVA), followed by Tukey's HSD post-hoc test (α < 0.05). CAD/CAM PMMA-based polymers showed significantly higher FS than conventional heat-polymerized PMMA for each group (P < 0.001). CAD/CAM PMMA-based polymer [P] showed the highest FS, whereas conventional PMMA [C] showed the lowest FS before and after thermal cycling (P < 0.001). There were no significant differences among the Ra values of the tested denture base polymers in the control group (P > 0.05). In the thermocycled group, the lowest Ra value was observed for CAD/CAM PMMA-based polymer [M] (P < 0.001), whereas CAD/CAM PMMA-based polymers [A] and [P], and conventional PMMA [C] had similar Ra values (P > 0.05). Conventional PMMA [C] had a significantly lower CA and consequently lower hydrophobicity compared to the CAD/CAM polymers in the control group (P < 0.001). In the thermocycled group, CAD/CAM PMMA-based polymer [A] and conventional PMMA [C] had significantly higher CA, and consequently higher hydrophobicity when compared to CAD/CAM polymers [M] and [P] (P < 0.001). However, no significant differences were found among the other materials (P > 0.05). The FS and hydrophobicity of the CAD/CAM PMMA-based polymers were higher than the conventional heat-polymerized PMMA, whereas the CAD/CAM PMMA-based polymers had similar Ra values to the conventional PMMA. Thermocycling had a significant effect on FS and hydrophobicity except for the Ra of denture base materials.

  16. Spin-wave utilization in a quantum computer

    NASA Astrophysics Data System (ADS)

    Khitun, A.; Ostroumov, R.; Wang, K. L.

    2001-12-01

    We propose a quantum computer scheme using spin waves for quantum-information exchange. We demonstrate that spin waves in an antiferromagnetic layer grown on silicon may be used to perform single-qubit unitary transformations together with two-qubit operations during the cycle of computation. The most attractive feature of the proposed scheme is the possibility of random access to any qubit and, consequently, the ability to realize two-qubit gates between any two distant qubits. Also, spin waves allow us to eliminate the use of a strong external magnetic field and microwave pulses. By estimate, the proposed scheme has a ratio as high as 10^4 between the quantum-system coherence time and the time of a single computational step.

  17. Methods of parallel computation applied on granular simulations

    NASA Astrophysics Data System (ADS)

    Martins, Gustavo H. B.; Atman, Allbens P. F.

    2017-06-01

    Every year, parallel computing becomes cheaper and more accessible, and as a consequence its applications are spreading over all research areas. Granular materials are a promising area for parallel computing. To support this statement we study the impact of parallel computing on simulations of the BNE (Brazil Nut Effect). This effect is the remarkable rise of an intruder confined in a granular medium when it is vertically shaken against gravity. By means of DEM (Discrete Element Method) simulations, we study the code performance, testing different methods to reduce clock time. A comparison between serial and parallel algorithms, using OpenMP®, is also shown. The best improvement was obtained by optimizing the function that finds contacts using Verlet's cells.
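
    In this setting the contact search is the usual hot spot. For orientation, a serial Python sketch of a cell-list ("Verlet cell") search is shown below; the paper's implementation uses OpenMP, and each cell's particle list could be processed in parallel, which is where such a speedup would come from. All sizes here are arbitrary.

      import numpy as np
      from collections import defaultdict

      def find_contacts(positions, radius):
          """Cell-list contact search for equal-radius disks.

          Particles are binned into square cells of side 2*radius, so contact
          candidates only need to be checked in the 3x3 neighbourhood of cells.
          """
          cell_size = 2.0 * radius
          cells = defaultdict(list)
          keys = np.floor(positions / cell_size).astype(int)
          for i, key in enumerate(map(tuple, keys)):
              cells[key].append(i)

          contacts = []
          for (cx, cy), members in cells.items():
              for dx in (-1, 0, 1):
                  for dy in (-1, 0, 1):
                      for j in cells.get((cx + dx, cy + dy), []):
                          for i in members:
                              # i < j ensures each pair is counted exactly once
                              if i < j and np.linalg.norm(positions[i] - positions[j]) < 2.0 * radius:
                                  contacts.append((i, j))
          return contacts

      rng = np.random.default_rng(5)
      pos = rng.random((500, 2)) * 10.0     # 500 disks in a 10x10 box
      print(len(find_contacts(pos, radius=0.1)))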

  18. Transuranic Computational Chemistry.

    PubMed

    Kaltsoyannis, Nikolas

    2018-02-26

    Recent developments in the chemistry of the transuranic elements are surveyed, with particular emphasis on computational contributions. Examples are drawn from molecular coordination and organometallic chemistry, and from the study of extended solid systems. The role of the metal valence orbitals in covalent bonding is a particular focus, especially the consequences of the stabilization of the 5f orbitals as the actinide series is traversed. The fledgling chemistry of transuranic elements in the +II oxidation state is highlighted. Throughout, the symbiotic interplay of experimental and computational studies is emphasized; the extraordinary challenges of experimental transuranic chemistry afford computational chemistry a particularly valuable role at the frontier of the periodic table. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Big Computing in Astronomy: Perspectives and Challenges

    NASA Astrophysics Data System (ADS)

    Pankratius, Victor

    2014-06-01

    Hardware progress in recent years has led to astronomical instruments gathering large volumes of data. In radio astronomy for instance, the current generation of antenna arrays produces data at Tbits per second, and forthcoming instruments will expand these rates much further. As instruments are increasingly becoming software-based, astronomers will get more exposed to computer science. This talk therefore outlines key challenges that arise at the intersection of computer science and astronomy and presents perspectives on how both communities can collaborate to overcome these challenges. Major problems are emerging due to increases in data rates that are much larger than in storage and transmission capacity, as well as humans being cognitively overwhelmed when attempting to opportunistically scan through Big Data. As a consequence, the generation of scientific insight will become more dependent on automation and algorithmic instrument control. Intelligent data reduction will have to be considered across the entire acquisition pipeline. In this context, the presentation will outline the enabling role of machine learning and parallel computing. Bio: Victor Pankratius is a computer scientist who joined MIT Haystack Observatory following his passion for astronomy. He is currently leading efforts to advance astronomy through cutting-edge computer science and parallel computing. Victor is also involved in projects such as ALMA Phasing to enhance the ALMA Observatory with Very-Long Baseline Interferometry capabilities, the Event Horizon Telescope, as well as in the Radio Array of Portable Interferometric Detectors (RAPID) to create an analysis environment using parallel computing in the cloud. He has an extensive track record of research in parallel multicore systems and software engineering, with contributions to auto-tuning, debugging, and empirical experiments studying programmers. Victor has worked with major industry partners such as Intel, Sun Labs, and Oracle. He holds a distinguished doctorate and a Habilitation degree in Computer Science from the University of Karlsruhe. Contact him at pankrat@mit.edu, victorpankratius.com, or Twitter @vpankratius.

  20. Rule-based modeling: a computational approach for studying biomolecular site dynamics in cell signaling systems

    PubMed Central

    Chylek, Lily A.; Harris, Leonard A.; Tung, Chang-Shung; Faeder, James R.; Lopez, Carlos F.

    2013-01-01

    Rule-based modeling was developed to address the limitations of traditional approaches for modeling chemical kinetics in cell signaling systems. These systems consist of multiple interacting biomolecules (e.g., proteins), which themselves consist of multiple parts (e.g., domains, linear motifs, and sites of phosphorylation). Consequently, biomolecules that mediate information processing generally have the potential to interact in multiple ways, with the number of possible complexes and post-translational modification states tending to grow exponentially with the number of binary interactions considered. As a result, only large reaction networks capture all possible consequences of the molecular interactions that occur in a cell signaling system, which is problematic because traditional modeling approaches for chemical kinetics (e.g., ordinary differential equations) require explicit network specification. This problem is circumvented through representation of interactions in terms of local rules. With this approach, network specification is implicit and model specification is concise. Concise representation results in a coarse graining of chemical kinetics, which is introduced because all reactions implied by a rule inherit the rate law associated with that rule. Coarse graining can be appropriate if interactions are modular, and the coarseness of a model can be adjusted as needed. Rules can be specified using specialized model-specification languages, and recently developed tools designed for specification of rule-based models allow one to leverage powerful software engineering capabilities. A rule-based model comprises a set of rules, which can be processed by general-purpose simulation and analysis tools to achieve different objectives (e.g., to perform either a deterministic or stochastic simulation). PMID:24123887
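    As a toy illustration of the implicit network specification described above, the sketch below applies one local rule to every matching molecular state and lets each generated reaction inherit the rule's rate law; the molecule and site names are invented, and the output format is not the syntax of any actual rule-specification language.

```python
# Toy illustration of the rule-based idea: one local rule ("a kinase can
# phosphorylate any unphosphorylated site on any substrate") implicitly defines
# a reaction for every matching molecular state, each inheriting the same rate law.
from itertools import product

substrates = ["S1", "S2"]
sites = ["Y100", "Y200"]          # phosphorylation sites per substrate
states = ["u", "p"]               # unphosphorylated / phosphorylated

def render(name, site_states):
    inner = ", ".join(f"{site}~{state}" for site, state in site_states.items())
    return f"{name}({inner})"

# Enumerate every substrate state ("species")
species = [(s, dict(zip(sites, combo)))
           for s in substrates for combo in product(states, repeat=len(sites))]

# Apply the single rule to every matching species
rate_law = "k_cat"
reactions = []
for name, site_states in species:
    for site, state in site_states.items():
        if state == "u":
            product_states = dict(site_states, **{site: "p"})
            reactions.append(
                f"Kinase + {render(name, site_states)} -> "
                f"Kinase + {render(name, product_states)}  [{rate_law}]"
            )

print(f"1 rule -> {len(reactions)} reactions")
for r in reactions:
    print(r)
```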

  1. Computer Simulation For Design Of TWT's

    NASA Technical Reports Server (NTRS)

    Bartos, Karen F.; Fite, E. Brian; Shalkhauser, Kurt A.; Sharp, G. Richard

    1992-01-01

    A three-dimensional finite-element analytical technique facilitates design and fabrication of traveling-wave-tube (TWT) slow-wave structures. Used to perform thermal and mechanical analyses of TWT designed with variety of configurations, geometries, and materials. Using three-dimensional computer analysis, designer able to simulate building and testing of TWT, with consequent substantial saving of time and money. Technique enables detailed look into operation of traveling-wave tubes to help improve performance for future communications systems.

  2. The National Shipbuilding Research Program. Photogrammetric Dimensioning of Distributive Systems Models. Phase 1

    DTIC Science & Technology

    1978-08-01

    ...accepts piping geometry as one of its basic inputs; whether this geometry comes from arrangement drawings or models is of no real consequence. ... computer. Geometric data is taken from the catalogue and automatically merged with the piping geometry data. Also, fitting orientation is automatically... systems require a number of data manipulation routines to convert raw digitized data into logical pipe geometry acceptable to a computer-aided piping design

  3. Fault and Defect Tolerant Computer Architectures: Reliable Computing with Unreliable Devices

    DTIC Science & Technology

    2006-08-31

    supply voltage, the delay of the inverter increases parabolically. 2.2.2.5 High Field Effects. A consequence of maintaining a higher Vdd than...be explained by disproportionate scaling of QCRIT with respect to collector efficiency. Technology trends, then, indicate a moderate increase in...using clustered defects, a compounding procedure is used. Compounding considers λ as a random variable rather than a constant. Let l be this defect

  4. Management of geminated maxillary lateral incisor using cone beam computed tomography as a diagnostic tool.

    PubMed

    James, Elizabeth Prabha; Johns, Dexton Antony; Johnson, Ki; Maroli, Ramesh Kumar

    2014-05-01

    Geminated teeth are the consequence of a developmental anomaly that leads to joined elements, due to the incomplete attempt of one tooth germ to divide into two. This case report describes the successful endodontic treatment of an unaesthetic geminated permanent maxillary lateral incisor and its esthetic rehabilitation using all-ceramic crowns. A newer imaging technique, cone beam computed tomography, was used for a better understanding of the complicated root canal morphology.

  5. Risk-based maintenance of ethylene oxide production facilities.

    PubMed

    Khan, Faisal I; Haddara, Mahmoud R

    2004-05-20

    This paper discusses a methodology for the design of an optimum inspection and maintenance program. The methodology, called risk-based maintenance (RBM), is based on integrating a reliability approach and a risk assessment strategy to obtain an optimum maintenance schedule. First, the likely equipment failure scenarios are formulated. Out of the many likely failure scenarios, the most probable ones are subjected to a detailed study. A detailed consequence analysis is done for the selected scenarios. Subsequently, these failure scenarios are subjected to a fault tree analysis to determine their probabilities. Finally, risk is computed by combining the results of the consequence and the probability analyses. The calculated risk is compared against known acceptance criteria. The frequencies of the maintenance tasks are obtained by minimizing the estimated risk. A case study involving an ethylene oxide production facility is presented. Out of the five most hazardous units considered, the pipeline used for the transportation of ethylene is found to have the highest risk. Using available failure data and a lognormal reliability distribution function, human health risk factors are calculated. Both the societal and the individual risk factors exceeded the acceptable risk criteria. To determine an optimal maintenance interval, a reverse fault tree analysis was used. The maintenance interval was determined such that the original high risk is brought down to an acceptable level. A sensitivity analysis was also undertaken to study the impact of changing the distribution of the reliability model, as well as of errors in the distribution parameters, on the maintenance interval.
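    The core of the RBM loop can be pictured with a small, hedged sketch: estimate the risk of an inspection interval as the probability of failure within the interval times the consequence, and pick the longest interval that stays below the acceptance criterion. The lognormal parameters, consequence value and criterion below are invented for illustration and are not the case-study figures.

```python
# Hedged sketch of the risk-based maintenance idea: choose the longest inspection
# interval whose estimated risk stays below an acceptance criterion.
import numpy as np
from scipy.stats import lognorm

# Illustrative failure model: lognormal time-to-failure (shape s, scale in years)
ttf = lognorm(s=0.8, scale=12.0)

consequence = 2.0e6        # assumed consequence of a failure (e.g., cost)
acceptable_risk = 1.0e5    # assumed acceptance criterion per inspection interval

def risk(interval_years):
    # probability the unit fails before the next inspection, times consequence
    return ttf.cdf(interval_years) * consequence

candidates = np.arange(0.25, 10.25, 0.25)
feasible = [t for t in candidates if risk(t) <= acceptable_risk]
print(f"recommended inspection interval: {max(feasible):.2f} years" if feasible
      else "no feasible interval; redesign or continuous monitoring needed")
```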

  6. Nature of the C2-methylation effect on the properties of imidazolium ionic liquids.

    PubMed

    Rodrigues, Ana S M C; Lima, Carlos F R A C; Coutinho, João A P; Santos, Luís M N B F

    2017-02-15

    Methylation at the C2 position of 1,3-disubstituted imidazolium-based ionic liquids (ILs) is one of the structural features that has gained attention due to its drastic impact on thermophysical and transport properties. Several hypotheses have been proposed to explain this effect, but there is still much disagreement. Aiming at the rationalization of the effects of these structural features on the properties of imidazolium ILs, we present a thermodynamic and computational study of two ILs methylated at the C2 position of imidazolium, [¹C₄²C₁³C₁im][NTf₂] and [¹C₃²C₁³C₁im][NTf₂]. The phase behaviour (glass transition and vaporization equilibrium) and computational studies of the anion rotation around the cation and of the ion pair interaction energies for both ILs were explored. The results have shown that C2-methylation has no impact on the enthalpy of vaporization. However, it decreases the entropy of vaporization, which is a consequence of the change in the ion pair dynamics that affects both the liquid and gas phases. In addition, the more hindered dynamics of the ion pair are also reflected in the increase in the glass transition temperature, Tg. The entropic contribution of anion-around-cation rotation in the imidazolium [NTf₂] ILs was quantified experimentally by the comparative analysis of the entropy of vaporization, and computationally by the calculation of the entropies of hindered internal rotation. The global results exclude the existence of significant H-bonding in the C2-protonated (non-methylated) ILs and explain the C2-methylation effect in terms of the reduced entropy of the ion pair in the liquid and gas phases. In light of these results, the C2-methylation effect is intrinsically entropic and originates from the more hindered anion-around-cation rotation as a consequence of the substitution of the -H with a bulkier -CH₃ group.

  7. NAS: The first year

    NASA Technical Reports Server (NTRS)

    Bailey, F. R.; Kutler, Paul

    1988-01-01

    Discussed are the capabilities of NASA's Numerical Aerodynamic Simulation (NAS) Program and its application as an advanced supercomputing system for computational fluid dynamics (CFD) research. First, the paper describes the NAS computational system, called the NAS Processing System Network, and the advanced computational capabilities it offers as a consequence of carrying out the NAS pathfinder objective. Second, it presents examples of pioneering CFD research accomplished during NAS's first operational year. Examples are included which illustrate CFD applications for predicting fluid phenomena, complementing and supplementing experimentation, and aiding in design. Finally, pacing elements and future directions for CFD and NAS are discussed.

  8. A visual programming environment for the Navier-Stokes computer

    NASA Technical Reports Server (NTRS)

    Tomboulian, Sherryl; Crockett, Thomas W.; Middleton, David

    1988-01-01

    The Navier-Stokes computer is a high-performance, reconfigurable, pipelined machine designed to solve large computational fluid dynamics problems. Due to the complexity of the architecture, development of effective, high-level language compilers for the system appears to be a very difficult task. Consequently, a visual programming methodology has been developed which allows users to program the system at an architectural level by constructing diagrams of the pipeline configuration. These schematic program representations can then be checked for validity and automatically translated into machine code. The visual environment is illustrated by using a prototype graphical editor to program an example problem.

  9. Confidence assignment for mass spectrometry based peptide identifications via the extreme value distribution.

    PubMed

    Alves, Gelio; Yu, Yi-Kuo

    2016-09-01

    There is a growing trend for biomedical researchers to extract evidence and draw conclusions from mass spectrometry based proteomics experiments, the cornerstone of which is peptide identification. Inaccurate assignments of peptide identification confidence thus may have far-reaching and adverse consequences. Although some peptide identification methods report accurate statistics, they have been limited to certain types of scoring function. The extreme value statistics based method, while more general in the scoring functions it allows, demands accurate parameter estimates and requires, at least in its original design, excessive computational resources. Improving the parameter estimate accuracy and reducing the computational cost for this method has two advantages: it provides another feasible route to accurate significance assessment, and it could provide reliable statistics for scoring functions yet to be developed. We have formulated and implemented an efficient algorithm for calculating the extreme value statistics for peptide identification applicable to various scoring functions, bypassing the need for searching large random databases. The source code, implemented in C++ on a Linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit. Contact: yyu@ncbi.nlm.nih.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.
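    A minimal sketch of extreme-value significance assignment: once location and scale parameters of a Gumbel null (random-match) score distribution are known, a peptide score maps to a p-value through the Gumbel tail. The parameter values below are illustrative assumptions, not output of the paper's estimation algorithm.

```python
# Sketch of Gumbel-tail p-value assignment for a peptide identification score.
# mu and beta are assumed to have been estimated for the null score distribution.
import numpy as np

def gumbel_pvalue(score, mu, beta):
    """P(best random-match score >= score) under a Gumbel(mu, beta) null model."""
    return 1.0 - np.exp(-np.exp(-(score - mu) / beta))

# Illustrative numbers: a small p-value suggests a confident identification.
print(gumbel_pvalue(score=74.2, mu=35.0, beta=6.5))
```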

  10. Patterning control strategies for minimum edge placement error in logic devices

    NASA Astrophysics Data System (ADS)

    Mulkens, Jan; Hanna, Michael; Slachter, Bram; Tel, Wim; Kubis, Michael; Maslow, Mark; Spence, Chris; Timoshkov, Vadim

    2017-03-01

    In this paper we discuss the edge placement error (EPE) for multi-patterning semiconductor manufacturing. In a multi-patterning scheme the creation of the final pattern is the result of a sequence of lithography and etching steps, and consequently the contour of the final pattern contains error sources of the different process steps. We describe the fidelity of the final pattern in terms of EPE, which is defined as the relative displacement of the edges of two features from their intended target position. We discuss our holistic patterning optimization approach to understand and minimize the EPE of the final pattern. As an experimental test vehicle we use the 7-nm logic device patterning process flow as developed by IMEC. This patterning process is based on Self-Aligned-Quadruple-Patterning (SAQP) using ArF lithography, combined with line cut exposures using EUV lithography. The computational metrology method to determine EPE is explained. It will be shown that ArF to EUV overlay, CDU from the individual process steps, and local CD and placement of the individual pattern features, are the important contributors. Based on the error budget, we developed an optimization strategy for each individual step and for the final pattern. Solutions include overlay and CD metrology based on angle resolved scatterometry, scanner actuator control to enable high order overlay corrections and computational lithography optimization to minimize imaging induced pattern placement errors of devices and metrology targets.
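    As a generic illustration of how the contributors listed above (overlay, CDU of the individual steps, local CD and placement) can be stacked into an edge-placement estimate, the sketch below combines hypothetical contributor values by root-sum-square; it is not the specific EPE budget model or the 7-nm node numbers used in the paper.

```python
# Generic error-budget sketch: combine independent edge-placement contributors
# by adding variances (root-sum-square). All numbers are illustrative assumptions.
import math

contributors_nm = {
    "ArF-to-EUV overlay": 2.5,
    "CDU of individual steps (edge share)": 1.2,
    "local CD / stochastic placement": 1.8,
}

epe_rss_nm = math.sqrt(sum(v ** 2 for v in contributors_nm.values()))
print(f"RSS edge placement error ≈ {epe_rss_nm:.2f} nm")
```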

  11. Closed Forms for 4-Parameter Families of Integrals

    ERIC Educational Resources Information Center

    Dana-Picard, Thierry; Zeitoun, David G.

    2009-01-01

    We compute closed forms for two multiparameter families of definite integrals, thus obtaining combinatorial formulas. As a consequence, a surprising formula is derived between a definite integral and an improper integral for the same parametric function.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carpentier, J.L.; Di Bono, P.J.; Tournebise, P.J.

    The efficient bounding method for DC contingency analysis is improved using reciprocity properties. Knowing the consequences of the outage of a branch, these properties provide the consequences on that branch of various kinds of outages. This is used in order to reduce computation times and to get rid of some difficulties, such as those occurring when a branch flow is close to its limit before the outage. Compensation, sparse vector, sparse inverse and bounding techniques are also used. A program has been implemented for single branch outages and tested on an actual French EHV 650-bus network. Computation times are 60% of those of the efficient bounding method. The relevant algorithm is described in detail in the first part of this paper. In the second part, reciprocity properties and bounding formulas are extended to multiple branch outages and to multiple generator or load outages. An algorithm is proposed in order to handle all these cases simultaneously.

  13. Does a better model yield a better argument? An info-gap analysis

    NASA Astrophysics Data System (ADS)

    Ben-Haim, Yakov

    2017-04-01

    Theories, models and computations underlie reasoned argumentation in many areas. The possibility of error in these arguments, though of low probability, may be highly significant when the argument is used in predicting the probability of rare high-consequence events. This implies that the choice of a theory, model or computational method for predicting rare high-consequence events must account for the probability of error in these components. However, error may result from lack of knowledge or surprises of various sorts, and predicting the probability of error is highly uncertain. We show that the putatively best, most innovative and sophisticated argument may not actually have the lowest probability of error. Innovative arguments may entail greater uncertainty than more standard but less sophisticated methods, creating an innovation dilemma in formulating the argument. We employ info-gap decision theory to characterize and support the resolution of this problem and present several examples.

  14. Influence of system size on the properties of a fluid adsorbed in a nanopore: Physical manifestations and methodological consequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Puibasset, Joël, E-mail: puibasset@cnrs-orleans.fr; Kierlik, Edouard, E-mail: edouard.kierlik@upmc.fr; Tarjus, Gilles, E-mail: tarjus@lptl.jussieu.fr

    Hysteresis and discontinuities in the isotherms of a fluid adsorbed in a nanopore in general hamper the determination of equilibrium thermodynamic properties, even in computer simulations. A way around this has been to consider both a reservoir of small size and a pore of small extent in order to restrict the fluctuations of density and approach a classical van der Waals loop. We assess this suggestion by thoroughly studying, through Monte Carlo simulations and density functional theory, the influence of system size on the equilibrium configurations of the adsorbed fluid and on the resulting isotherms. We stress the importance of pore-symmetry-breaking states that even for modest pore sizes lead to discontinuous isotherms, and we discuss the physical relevance of these states and the methodological consequences for computing thermodynamic quantities.

  15. A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents

    PubMed Central

    Goldschmidt, Dennis; Manoonpong, Poramate; Dasgupta, Sakyasingha

    2017-01-01

    Despite their small size, insect brains are able to produce robust and efficient navigation in complex environments. Specifically in social insects, such as ants and bees, these navigational capabilities are guided by orientation directing vectors generated by a process called path integration. During this process, they integrate compass and odometric cues to estimate their current location as a vector, called the home vector for guiding them back home on a straight path. They further acquire and retrieve path integration-based vector memories globally to the nest or based on visual landmarks. Although existing computational models reproduced similar behaviors, a neurocomputational model of vector navigation including the acquisition of vector representations has not been described before. Here we present a model of neural mechanisms in a modular closed-loop control—enabling vector navigation in artificial agents. The model consists of a path integration mechanism, reward-modulated global learning, random search, and action selection. The path integration mechanism integrates compass and odometric cues to compute a vectorial representation of the agent's current location as neural activity patterns in circular arrays. A reward-modulated learning rule enables the acquisition of vector memories by associating the local food reward with the path integration state. A motor output is computed based on the combination of vector memories and random exploration. In simulation, we show that the neural mechanisms enable robust homing and localization, even in the presence of external sensory noise. The proposed learning rules lead to goal-directed navigation and route formation performed under realistic conditions. Consequently, we provide a novel approach for vector learning and navigation in a simulated, situated agent linking behavioral observations to their possible underlying neural substrates. PMID:28446872
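    A minimal sketch of path integration on a circular array of heading-tuned units, in the spirit of the mechanism described above: each step's speed is projected onto every unit's preferred direction and accumulated, and a population-vector readout recovers the home vector. The unit count, inputs and decoding step are illustrative; the paper's learning rules and action selection are not reproduced here.

```python
# Sketch of path integration on a circular array of heading-tuned units.
import numpy as np

n_units = 16
preferred = np.linspace(0.0, 2.0 * np.pi, n_units, endpoint=False)
activity = np.zeros(n_units)            # accumulated population code of travel

def step(heading_rad, speed):
    global activity
    activity += speed * np.cos(preferred - heading_rad)   # cosine projection onto each unit

def decode_home_vector():
    # population-vector readout: travel direction + pi points back home
    x = np.dot(activity, np.cos(preferred))
    y = np.dot(activity, np.sin(preferred))
    return np.arctan2(y, x) + np.pi, np.hypot(x, y) * 2.0 / n_units

# Outbound path: 10 steps east, then 5 steps north
for _ in range(10):
    step(0.0, 1.0)
for _ in range(5):
    step(np.pi / 2.0, 1.0)

angle, length = decode_home_vector()
print(np.degrees(angle), length)   # ~206.6 degrees (south-west), length ~11.2
```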

  17. Improving zero-training brain-computer interfaces by mixing model estimators

    NASA Astrophysics Data System (ADS)

    Verhoeven, T.; Hübner, D.; Tangermann, M.; Müller, K. R.; Dambre, J.; Kindermans, P. J.

    2017-06-01

    Objective. Brain-computer interfaces (BCI) based on event-related potentials (ERP) incorporate a decoder to classify recorded brain signals and subsequently select a control signal that drives a computer application. Standard supervised BCI decoders require a tedious calibration procedure prior to every session. Several unsupervised classification methods have been proposed that tune the decoder during actual use and as such omit this calibration. Each of these methods has its own strengths and weaknesses. Our aim is to improve overall accuracy of ERP-based BCIs without calibration. Approach. We consider two approaches for unsupervised classification of ERP signals. Learning from label proportions (LLP) was recently shown to be guaranteed to converge to a supervised decoder when enough data is available. In contrast, the formerly proposed expectation maximization (EM) based decoding for ERP-BCI does not have this guarantee. However, while this decoder has high variance due to random initialization of its parameters, it obtains a higher accuracy faster than LLP when the initialization is good. We introduce a method to optimally combine these two unsupervised decoding methods, letting one method’s strengths compensate for the weaknesses of the other and vice versa. The new method is compared to the aforementioned methods in a resimulation of an experiment with a visual speller. Main results. Analysis of the experimental results shows that the new method exceeds the performance of the previous unsupervised classification approaches in terms of ERP classification accuracy and symbol selection accuracy during the spelling experiment. Furthermore, the method shows less dependency on random initialization of model parameters and is consequently more reliable. Significance. Improving the accuracy and subsequent reliability of calibrationless BCIs makes these systems more appealing for frequent use.

  18. A Computational Model Predicting Disruption of Blood Vessel Development

    PubMed Central

    Kleinstreuer, Nicole; Dix, David; Rountree, Michael; Baker, Nancy; Sipes, Nisha; Reif, David; Spencer, Richard; Knudsen, Thomas

    2013-01-01

    Vascular development is a complex process regulated by dynamic biological networks that vary in topology and state across different tissues and developmental stages. Signals regulating de novo blood vessel formation (vasculogenesis) and remodeling (angiogenesis) come from a variety of biological pathways linked to endothelial cell (EC) behavior, extracellular matrix (ECM) remodeling and the local generation of chemokines and growth factors. Simulating these interactions at a systems level requires sufficient biological detail about the relevant molecular pathways and associated cellular behaviors, and tractable computational models that offset mathematical and biological complexity. Here, we describe a novel multicellular agent-based model of vasculogenesis using the CompuCell3D (http://www.compucell3d.org/) modeling environment supplemented with semi-automatic knowledgebase creation. The model incorporates vascular endothelial growth factor signals, pro- and anti-angiogenic inflammatory chemokine signals, and the plasminogen activating system of enzymes and proteases linked to ECM interactions, to simulate nascent EC organization, growth and remodeling. The model was shown to recapitulate stereotypical capillary plexus formation and structural emergence of non-coded cellular behaviors, such as a heterologous bridging phenomenon linking endothelial tip cells together during formation of polygonal endothelial cords. Molecular targets in the computational model were mapped to signatures of vascular disruption derived from in vitro chemical profiling using the EPA's ToxCast high-throughput screening (HTS) dataset. Simulating the HTS data with the cell-agent based model of vascular development predicted adverse effects of a reference anti-angiogenic thalidomide analog, 5HPP-33, on in vitro angiogenesis with respect to both concentration-response and morphological consequences. These findings support the utility of cell agent-based models for simulating a morphogenetic series of events and for the first time demonstrate the applicability of these models for predictive toxicology. PMID:23592958

  19. Dynamic emulation modelling for the optimal operation of water systems: an overview

    NASA Astrophysics Data System (ADS)

    Castelletti, A.; Galelli, S.; Giuliani, M.

    2014-12-01

    Despite sustained increases in computing power over recent decades, computational limitations remain a major barrier to the effective and systematic use of large-scale, process-based simulation models in rational environmental decision-making. Whereas complex models may provide clear advantages when the goal of the modelling exercise is to enhance our understanding of the natural processes, they introduce problems of model identifiability caused by over-parameterization and suffer from a high computational burden when used in management and planning problems. As a result, increasing attention is now being devoted to emulation modelling (or model reduction) as a way of overcoming these limitations. An emulation model, or emulator, is a low-order approximation of the process-based model that can be substituted for it in order to solve highly resource-demanding problems. In this talk, an overview of emulation modelling within the context of the optimal operation of water systems will be provided. Particular emphasis will be given to Dynamic Emulation Modelling (DEMo), a special type of model complexity reduction in which the dynamic nature of the original process-based model is preserved, with consequent advantages in a wide range of problems, particularly feedback control problems. This will be contrasted with traditional non-dynamic emulators (e.g. response surface and surrogate models) that have been studied extensively in recent years and are mainly used for planning purposes. A number of real-world numerical experiences will be used to support the discussion, ranging from multi-outlet water quality control in water reservoirs through erosion/sedimentation rebalancing in the operation of run-of-river power plants to salinity control in lakes and reservoirs.

  20. Consequence assessment of large rock slope failures in Norway

    NASA Astrophysics Data System (ADS)

    Oppikofer, Thierry; Hermanns, Reginald L.; Horton, Pascal; Sandøy, Gro; Roberts, Nicholas J.; Jaboyedoff, Michel; Böhme, Martina; Yugsi Molina, Freddy X.

    2014-05-01

    Steep glacially carved valleys and fjords in Norway are prone to many landslide types, including large rockslides, rockfalls, and debris flows. Large rockslides and their secondary effects (rockslide-triggered displacement waves, inundation behind landslide dams and outburst floods from failure of landslide dams) pose a significant hazard to the population living in the valleys and along the fjord shorelines. The Geological Survey of Norway performs systematic mapping of unstable rock slopes in Norway and has detected more than 230 unstable slopes with significant postglacial deformation. This large number necessitates prioritisation of follow-up activities, such as more detailed investigations, periodic displacement measurements, continuous monitoring and early-warning systems. Prioritisation is achieved through a hazard and risk classification system, which has been developed by a panel of international and Norwegian experts (www.ngu.no/en-gb/hm/Publications/Reports/2012/2012-029). The risk classification system combines a qualitative hazard assessment with a consequence assessment focusing on potential loss of life. The hazard assessment is based on a series of nine geomorphological, engineering geological and structural criteria, as well as displacement rates, past events and other signs of activity. We present a method for consequence assessment comprising four main steps: 1. computation of the volume of the unstable rock slope; 2. run-out assessment based on the volume-dependent angle of reach (Fahrböschung) or detailed numerical run-out modelling; 3. assessment of possible displacement wave propagation and run-up based on empirical relations or modelling in 2D or 3D; and 4. estimation of the number of persons exposed to rock avalanches or displacement waves. Volume computation of an unstable rock slope is based on the sloping local base level technique, which uses a digital elevation model to create a second-order curved surface spanning the mapped extent of the unstable rock slope. This surface represents the possible basal sliding surface of the unstable rock slope, and the elevation difference between this surface and the topographic surface provides an estimate of the volume of the unstable rock slope. A tool has been developed for the present study to adapt the curvature parameters of the computed surface to local geological and structural conditions. The obtained volume is then used to define the angle of reach of a possible rock avalanche from the unstable rock slope by using empirically derived angle-of-reach vs. volume relations. The run-out area is calculated using FlowR; the software is widely used for run-out assessment of debris flows and is adapted here for the assessment of rock avalanches, including their potential to ascend opposing slopes. Under certain conditions, more sophisticated and complex numerical run-out models are also used. For rock avalanches with the potential to reach a fjord or a lake, the propagation and run-up area of triggered displacement waves is assessed. Empirical relations of wave run-up height as a function of rock avalanche volume and distance from the impact location are derived from a national and international inventory of landslide-triggered displacement waves. These empirical relations are used in a first-level hazard assessment and, where necessary, followed by 2D or 3D displacement wave modelling. The population exposed in the rock avalanche run-out area and in the run-up area of a possible displacement wave is then assessed taking into account different population groups: inhabitants, persons in critical infrastructure (hospitals and other emergency services), persons in schools and kindergartens, persons at work or in shops, tourists, persons on ferries and so on. Exposure levels are defined for each population group and vulnerability values are set for the rock avalanche run-out area (100%) and the run-up area of a possible displacement wave (70%). Finally, the total number of persons within the hazard area is calculated taking into account exposure and vulnerability. The method for consequence assessment is currently being tested in several case studies in Norway and will thereafter be applied to all unstable rock slopes in the country to assess their risk level. Follow-up activities (detailed investigations, periodic displacement measurements or continuous monitoring and early-warning systems) can then be prioritized based on the risk level, using a standard approach for the whole of Norway.
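    A hedged sketch of the final consequence step described above: the expected number of persons affected is the sum over population groups of persons times exposure times vulnerability, with vulnerability 1.0 in the rock-avalanche run-out area and 0.7 in the displacement-wave run-up area. The population figures and exposure fractions are invented for illustration.

```python
# Sketch of the exposure/vulnerability step of the consequence assessment.
# Population counts and exposure fractions below are illustrative assumptions.
groups_runout = [                      # (group, persons present, fraction of time exposed)
    ("inhabitants", 40, 1.0),
    ("workers and shops", 25, 0.4),
    ("tourists", 10, 0.2),
]
groups_wave_runup = [
    ("inhabitants along shoreline", 120, 1.0),
    ("ferry passengers", 60, 0.1),
]

def expected_persons(groups, vulnerability):
    return sum(n * exposure * vulnerability for _, n, exposure in groups)

total = expected_persons(groups_runout, 1.0) + expected_persons(groups_wave_runup, 0.7)
print(f"expected number of persons affected: {total:.1f}")
```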

  1. Decision-making based on emotional images.

    PubMed

    Katahira, Kentaro; Fujimura, Tomomi; Okanoya, Kazuo; Okada, Masato

    2011-01-01

    The emotional outcome of a choice affects subsequent decision making. Although the relationship between decision making and emotion has attracted attention, studies of emotion and of decision making have largely developed independently. In this study, we investigated how the emotional valence of pictures, which was stochastically contingent on participants' choices, influenced subsequent decision making. In contrast to traditional value-based decision-making studies that use money or food as rewards, here the "reward value" of the decision outcome, which guides the update of the value of each choice, is unknown beforehand. To estimate the reward value of emotional pictures from participants' choice data, we used reinforcement learning models that have successfully been used in previous studies for modeling value-based decision making. Consequently, we found that the estimated reward value was asymmetric between positive and negative pictures: the negative reward value of negative pictures (relative to neutral pictures) was larger in magnitude than the positive reward value of positive pictures. This asymmetry was not observed in the valence ratings of individual pictures, which participants provided regarding the emotion experienced upon viewing them. These results suggest that there may be a difference between experienced emotion and the effect of the experienced emotion on subsequent behavior. Our experimental and computational paradigm provides a novel way of quantifying how and what aspects of emotional events affect human behavior. The present study is a first step toward relating the large body of knowledge in emotion science to computational approaches to value-based decision making.
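    A minimal sketch of the class of reinforcement learning model referred to above: a delta-rule update of each option's value combined with a softmax choice rule. In the study, the reward values of positive and negative pictures are free parameters fitted to participants' choices; here they are fixed to illustrative numbers simply to show the simulation loop.

```python
# Sketch of a delta-rule + softmax reinforcement learning model with asymmetric
# "reward values" for positive and negative pictures (all numbers illustrative).
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 0.2, 3.0                  # learning rate, inverse temperature
r_pos, r_neg = 0.5, -1.0                # assumed reward values of picture valence
p_pos = [0.7, 0.3]                      # chance each option yields a positive picture
Q = np.zeros(2)

for trial in range(200):
    p_choice = np.exp(beta * Q) / np.exp(beta * Q).sum()    # softmax choice rule
    choice = rng.choice(2, p=p_choice)
    outcome = r_pos if rng.random() < p_pos[choice] else r_neg
    Q[choice] += alpha * (outcome - Q[choice])              # delta-rule value update

print(Q)   # option 0 (mostly positive pictures) ends with the higher value
```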

  2. Probabilistic Reward- and Punishment-based Learning in Opioid Addiction: Experimental and Computational Data

    PubMed Central

    Myers, Catherine E.; Sheynin, Jony; Baldson, Tarryn; Luzardo, Andre; Beck, Kevin D.; Hogarth, Lee; Haber, Paul; Moustafa, Ahmed A.

    2016-01-01

    Addiction is the continuation of a habit in spite of negative consequences. A vast literature gives evidence that this poor decision-making behavior in individuals addicted to drugs also generalizes to laboratory decision making tasks, suggesting that the impairment in decision-making is not limited to decisions about taking drugs. In the current experiment, opioid-addicted individuals and matched controls with no history of illicit drug use were administered a probabilistic classification task that embeds both reward-based and punishment-based learning trials, and a computational model of decision making was applied to understand the mechanisms describing individuals' performance on the task. Although behavioral results showed that opioid-addicted individuals performed as well as controls on both reward- and punishment-based learning, the modeling results suggested subtle differences in how decisions were made between the two groups. Specifically, the opioid-addicted group showed a decreased tendency to repeat prior responses, meaning that they were more likely to "chase reward" when expectancies were violated, whereas controls were more likely to stick with a previously successful response rule, despite occasional expectancy violations. This tendency to chase short-term reward, potentially at the expense of developing rules that maximize reward over the long term, may be a contributing factor to opioid addiction. Further work is indicated to better understand whether this tendency arises as a result of brain changes in the wake of continued opioid use/abuse, or might be a pre-existing factor that may contribute to risk for addiction. PMID:26381438

  3. Efficient stabilization and acceleration of numerical simulation of fluid flows by residual recombination

    NASA Astrophysics Data System (ADS)

    Citro, V.; Luchini, P.; Giannetti, F.; Auteri, F.

    2017-09-01

    The study of the stability of a dynamical system described by a set of partial differential equations (PDEs) requires the computation of unstable states as the control parameter exceeds its critical threshold. Unfortunately, the discretization of the governing equations, especially for fluid dynamic applications, often leads to very large discrete systems. As a consequence, matrix based methods, like for example the Newton-Raphson algorithm coupled with a direct inversion of the Jacobian matrix, lead to computational costs too large in terms of both memory and execution time. We present a novel iterative algorithm, inspired by Krylov-subspace methods, which is able to compute unstable steady states and/or accelerate the convergence to stable configurations. Our new algorithm is based on the minimization of the residual norm at each iteration step with a projection basis updated at each iteration rather than at periodic restarts like in the classical GMRES method. The algorithm is able to stabilize any dynamical system without increasing the computational time of the original numerical procedure used to solve the governing equations. Moreover, it can be easily inserted into a pre-existing relaxation (integration) procedure with a call to a single black-box subroutine. The procedure is discussed for problems of different sizes, ranging from a small two-dimensional system to a large three-dimensional problem involving the Navier-Stokes equations. We show that the proposed algorithm is able to improve the convergence of existing iterative schemes. In particular, the procedure is applied to the subcritical flow inside a lid-driven cavity. We also discuss the application of Boostconv to compute the unstable steady flow past a fixed circular cylinder (2D) and boundary-layer flow over a hemispherical roughness element (3D) for supercritical values of the Reynolds number. We show that Boostconv can be used effectively with any spatial discretization, be it a finite-difference, finite-volume, finite-element or spectral method.

  4. Development traumatic brain injury computer user interface for disaster area in Indonesia supported by emergency broadband access network.

    PubMed

    Sutiono, Agung Budi; Suwa, Hirohiko; Ohta, Toshizumi; Arifin, Muh Zafrullah; Kitamura, Yohei; Yoshida, Kazunari; Merdika, Daduk; Qiantori, Andri; Iskandar

    2012-12-01

    Disasters bring negative consequences for the environment and for human life. One of the common causes of critical conditions is traumatic brain injury (TBI), namely epidural (EDH) and subdural hematoma (SDH), caused by falling hard objects during an earthquake. We proposed and analyzed the responses of users, namely neurosurgeons, general doctors/surgeons and nurses, when they interacted with a TBI computer interface. The communication system was supported by a TBI web-based application over an emergency broadband access network with a tethered balloon, and was simulated in a field trial to evaluate the coverage area. The interface consisted of demography data and multiple tabs for anamnesis, treatment, follow-up and teleconference. The interface allows neurosurgeons, surgeons/general doctors and nurses to enter EDH and SDH patients' data while referring them in the emergency simulation, and was evaluated based on the time needed and the users' understanding. The average time needed for data entry on a Lenovo T500 notebook using a mouse was 8-10 min for neurosurgeons, 12-15 min for surgeons/general doctors and 15-19 min for nurses. Using a ThinkPad X201 Tablet, the time needed was 5-7 min for neurosurgeons, 7-10 min for surgeons/general doctors and 12-16 min for nurses. We observed that the time difference depended on the computer type and on the users' computer literacy, as well as on their understanding of traumatic brain injury, particularly for the nurses. In conclusion, a simple TBI GUI comprises five data classes, namely 1) demography, 2) specific anamnesis for EDH and SDH, 3) TBI treatment and medication, 4) follow-up data display and 5) teleneurosurgery for streaming video consultation. The tablet PC was more convenient and faster for data entry than the notebook with a mouse/touchpad. An emergency broadband access network using a tethered balloon can be employed to provide communications coverage in a disaster area.

  5. Quantum analogue computing.

    PubMed

    Kendon, Vivien M; Nemoto, Kae; Munro, William J

    2010-08-13

    We briefly review what a quantum computer is, what it promises to do for us and why it is so hard to build one. Among the first applications anticipated to bear fruit is the quantum simulation of quantum systems. While most quantum computation is an extension of classical digital computation, quantum simulation differs fundamentally in how the data are encoded in the quantum computer. To perform a quantum simulation, the Hilbert space of the system to be simulated is mapped directly onto the Hilbert space of the (logical) qubits in the quantum computer. This type of direct correspondence is how data are encoded in a classical analogue computer. There is no binary encoding, and increasing precision becomes exponentially costly: an extra bit of precision doubles the size of the computer. This has important consequences for both the precision and error-correction requirements of quantum simulation, and significant open questions remain about its practicality. It also means that the quantum version of analogue computers, continuous-variable quantum computers, becomes an equally efficient architecture for quantum simulation. Lessons from past use of classical analogue computers can help us to build better quantum simulators in future.

  6. Macroscopic elastic properties of textured ZrN-AlN polycrystalline aggregates: From ab initio calculations to grain-scale interactions

    NASA Astrophysics Data System (ADS)

    Holec, D.; Tasnádi, F.; Wagner, P.; Friák, M.; Neugebauer, J.; Mayrhofer, P. H.; Keckes, J.

    2014-11-01

    Despite the fast development of computational material modeling, the theoretical description of the macroscopic elastic properties of textured polycrystalline aggregates starting from basic principles remains a challenging task. In this study we use a supercell-based approach to obtain the elastic properties of the random solid solution cubic Zr1-xAlxN system as a function of the metallic sublattice composition and texture descriptors. The employed special quasirandom structures are optimized not only with respect to short-range-order parameters, but also to make the three cubic directions [100], [010], and [001] as similar as possible. In this way, only a small spread of the elastic constant tensor components is achieved and an optimum trade-off between modeling of chemical disorder and computational limits regarding the supercell size and calculation time is proposed. The single-crystal elastic constants are shown to vary smoothly with composition, yielding at x ≈ 0.5 an alloy constitution with an almost isotropic response. Consequently, polycrystals with this composition are suggested to have a Young's modulus independent of the actual microstructure. This is indeed confirmed by explicit calculations of polycrystal elastic properties, both within the isotropic aggregate limit and for fiber textures with various orientations and sharpness. It turns out that for low AlN mole fractions, the spread of the possible Young's modulus data caused by the texture variation can be larger than 100 GPa. Consequently, our discussion of the Young's modulus data of cubic Zr1-xAlxN also contains the evaluation of the texture typical for thin films.

  7. Computational Modeling Approach in Probing the Effects of Cytosine Methylation on the Transcription Factor Binding to DNA.

    PubMed

    Tenayuca, John; Cousins, Kimberley; Yang, Shumei; Zhang, Lubo

    2017-01-01

    Cytosine methylation at CpG dinucleotides is a chief mechanism in the epigenetic modification of gene expression patterns. Previous studies demonstrated that increased CpG methylation of the Sp1 sites at -268 and -346 of the protein kinase Cε promoter repressed the gene's expression. The present study investigated the impact of CpG methylation on Sp1 binding via molecular modeling and electrophoretic mobility shift assay. Each of the Sp1 sites contains two CpGs. Methylation of either CpG lowered the binding affinity of Sp1, whereas methylation of both CpGs produced a greater decrease in the binding affinity. Computation of the van der Waals (VDW) energy of Sp1 in complex with the Sp1 sites demonstrated increased VDW values going from one to two sites of CpG methylation. Molecular modeling indicated that single CpG methylation caused underwinding of the DNA fragment, with the phosphate groups at C1, C4 and C5 reoriented from their original positions. Methylation of both CpGs pinched the minor groove and increased the helical twist concomitant with a shallow, hydrophobic major groove. Additionally, double methylation eliminated hydrogen bonds on recognition helix residues located at positions -1 and 1, which are essential for interaction with O6/N7 of G-bases. Bonding from the linker residues Arg565, Lys595 and Lys596 was also reduced. Methylation of one or both CpGs significantly affected hydrogen bonding from all three Sp1 DNA binding domains, demonstrating that the consequences of cytosine modification extend beyond the neighboring nucleotides. The results indicate that cytosine methylation causes subtle structural alterations in Sp1 binding sites, consequently inhibiting side chain interactions critical for specific base recognition and reducing the binding affinity of Sp1. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  8. Neural Network Based Sensory Fusion for Landmark Detection

    NASA Technical Reports Server (NTRS)

    Kumbla, Kishan -K.; Akbarzadeh, Mohammad R.

    1997-01-01

    NASA is planning to send numerous unmanned planetary missions to explore space. This requires autonomous robotic vehicles which can navigate in an unstructured, unknown, and uncertain environment. Landmark-based navigation is a new area of research which differs from traditional goal-oriented navigation, where a mobile robot starts from an initial point and reaches a destination in accordance with a pre-planned path. Landmark-based navigation has the advantage of allowing the robot to find its way without communication with the mission control station and without exact knowledge of its coordinates. Current algorithms based on landmark navigation, however, pose several constraints. First, they require large memories to store the images. Second, the task of comparing the images using traditional methods is computationally intensive and consequently real-time implementation is difficult. The method proposed here consists of three stages. The first stage utilizes a heuristic-based algorithm to identify significant objects. The second stage utilizes a neural network (NN) to efficiently classify images of the identified objects. The third stage combines distance information with the classification results of the neural networks for efficient and intelligent navigation.

  9. Community Cloud Computing

    NASA Astrophysics Data System (ADS)

    Marinos, Alexandros; Briscoe, Gerard

    Cloud Computing is rising fast, with its data centres growing at an unprecedented rate. However, this has come with concerns over privacy, efficiency at the expense of resilience, and environmental sustainability, because of the dependence on Cloud vendors such as Google, Amazon and Microsoft. Our response is an alternative model for the Cloud conceptualisation, providing a paradigm for Clouds in the community, utilising networked personal computers for liberation from the centralised vendor model. Community Cloud Computing (C3) offers an alternative architecture, created by combining the Cloud with paradigms from Grid Computing, principles from Digital Ecosystems, and sustainability from Green Computing, while remaining true to the original vision of the Internet. It is more technically challenging than Cloud Computing, having to deal with distributed computing issues, including heterogeneous nodes, varying quality of service, and additional security constraints. However, these are not insurmountable challenges, and with the need to retain control over our digital lives and the potential environmental consequences, it is a challenge we must pursue.

  10. Application of infrared thermography in computer aided diagnosis

    NASA Astrophysics Data System (ADS)

    Faust, Oliver; Rajendra Acharya, U.; Ng, E. Y. K.; Hong, Tan Jen; Yu, Wenwei

    2014-09-01

    The invention of thermography, in the 1950s, posed a formidable problem to the research community: What is the relationship between disease and heat radiation captured with Infrared (IR) cameras? The research community responded with a continuous effort to find this crucial relationship. This effort was aided by advances in processing techniques, improved sensitivity and spatial resolution of thermal sensors. However, despite this progress fundamental issues with this imaging modality still remain. The main problem is that the link between disease and heat radiation is complex and in many cases even non-linear. Furthermore, the change in heat radiation as well as the change in radiation pattern, which indicate disease, is minute. On a technical level, this poses high requirements on image capturing and processing. On a more abstract level, these problems lead to inter-observer variability and on an even more abstract level they lead to a lack of trust in this imaging modality. In this review, we adopt the position that these problems can only be solved through a strict application of scientific principles and objective performance assessment. Computing machinery is inherently objective; this helps us to apply scientific principles in a transparent way and to assess the performance results. As a consequence, we aim to promote thermography based Computer-Aided Diagnosis (CAD) systems. Another benefit of CAD systems comes from the fact that the diagnostic accuracy is linked to the capability of the computing machinery and, in general, computers become ever more potent. We predict that a pervasive application of computers and networking technology in medicine will help us to overcome the shortcomings of any single imaging modality and this will pave the way for integrated health care systems which maximize the quality of patient care.

  11. Integration of Gravitational Torques in Cerebellar Pathways Allows for the Dynamic Inverse Computation of Vertical Pointing Movements of a Robot Arm

    PubMed Central

    Gentili, Rodolphe J.; Papaxanthis, Charalambos; Ebadzadeh, Mehdi; Eskiizmirliler, Selim; Ouanezar, Sofiane; Darlot, Christian

    2009-01-01

    Background: Several authors have suggested that gravitational forces are centrally represented in the brain for planning, control and sensorimotor prediction of movements. Furthermore, some studies proposed that the cerebellum computes the inverse dynamics (internal inverse model), whereas others suggested that it computes sensorimotor predictions (internal forward model). Methodology/Principal Findings: This study proposes a model of cerebellar pathways deduced from both biological and physical constraints. The model learns the dynamic inverse computation of the effect of gravitational torques from its sensorimotor predictions, without calculating an explicit inverse computation. By using supervised learning, this model learns to control an anthropomorphic robot arm actuated by two antagonist McKibben artificial muscles. This was achieved by using internal parallel feedback loops containing neural networks which anticipate the sensorimotor consequences of the neural commands. The artificial neural network architecture was similar to the large-scale connectivity of the cerebellar cortex. Movements in the sagittal plane were performed during three sessions combining different initial positions, amplitudes and directions of movement to vary the effects of the gravitational torques applied to the robotic arm. The results show that this model acquired an internal representation of the gravitational effects during vertical arm pointing movements. Conclusions/Significance: This is consistent with the proposal that the cerebellar cortex contains an internal representation of gravitational torques which is encoded through a learning process. Furthermore, this model suggests that the cerebellum performs the inverse dynamics computation based on sensorimotor predictions. This highlights the importance of sensorimotor predictions of gravitational torques acting on upper limb movements performed in the gravitational field. PMID:19384420

  12. The role of strategies in motor learning

    PubMed Central

    Taylor, Jordan A.; Ivry, Richard B.

    2015-01-01

    There has been renewed interest in the role of strategies in sensorimotor learning. The combination of new behavioral methods and computational methods has begun to unravel the interaction between processes related to strategic control and processes related to motor adaptation. These processes may operate on very different error signals. Strategy learning is sensitive to goal-based performance error. In contrast, adaptation is sensitive to prediction errors between the desired and actual consequences of a planned movement. The former guides what the desired movement should be, whereas the latter guides how to implement the desired movement. Whereas traditional approaches have favored serial models in which an initial strategy-based phase gives way to more automatized forms of control, it now seems that strategic and adaptive processes operate with considerable independence throughout learning, although the relative weight given the two processes will shift with changes in performance. As such, skill acquisition involves the synergistic engagement of strategic and adaptive processes. PMID:22329960

  13. Variance decomposition in stochastic simulators.

    PubMed

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
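
    A minimal sketch of the variance-attribution idea (not the authors' algorithm): simulate a birth-death process with one random stream per reaction channel, then estimate a first-order Sobol index per channel with a pick-freeze estimator. Rate constants, horizon and sample size below are illustrative assumptions.

```python
import numpy as np

def birth_death(T, x0, k_birth, k_death, rng_b, rng_d):
    """Simulate a birth-death process up to time T, drawing each channel's
    waiting time from its own random stream (next-reaction style)."""
    t, x = 0.0, x0
    while True:
        a_b, a_d = k_birth, k_death * x
        tau_b = rng_b.exponential(1.0 / a_b) if a_b > 0 else np.inf
        tau_d = rng_d.exponential(1.0 / a_d) if a_d > 0 else np.inf
        tau = min(tau_b, tau_d)
        if t + tau > T:
            return x
        t += tau
        x += 1 if tau_b < tau_d else -1

def first_order_index(freeze="birth", n=2000, T=5.0, x0=20, kb=10.0, kd=0.5):
    """Pick-freeze estimate of the Sobol index of one channel's noise source."""
    seeds = np.random.SeedSequence(0).spawn(3 * n)
    y, y_frozen = np.empty(n), np.empty(n)
    for i in range(n):
        ss_b, ss_d, ss_new = seeds[3 * i: 3 * i + 3]
        y[i] = birth_death(T, x0, kb, kd,
                           np.random.default_rng(ss_b), np.random.default_rng(ss_d))
        if freeze == "birth":   # keep the birth stream, redraw the death stream
            y_frozen[i] = birth_death(T, x0, kb, kd,
                                      np.random.default_rng(ss_b), np.random.default_rng(ss_new))
        else:                   # keep the death stream, redraw the birth stream
            y_frozen[i] = birth_death(T, x0, kb, kd,
                                      np.random.default_rng(ss_new), np.random.default_rng(ss_d))
    return (np.mean(y * y_frozen) - y.mean() * y_frozen.mean()) / y.var()

print("S_birth ~", first_order_index("birth"))
print("S_death ~", first_order_index("death"))
```

    The two indices sum to at most one; the remainder reflects channel interactions, which the paper quantifies through the full Sobol-Hoeffding decomposition.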

  14. Usefulness of a Regional Health Care Information System in primary care: a case study.

    PubMed

    Maass, Marianne C; Asikainen, Paula; Mäenpää, Tiina; Wanne, Olli; Suominen, Tarja

    2008-08-01

    The goal of this paper is to describe some benefits and possible cost consequences of computer-based access to specialised health care information. A before-after activity analysis regarding 20 diabetic patients' clinical appointments was performed in a Health Centre in the Satakunta region in Finland. Cost data, an interview, time-and-motion studies, and flow charts based on modelling were applied. Access to up-to-date diagnostic information reduced redundant clinical re-appointments, repeated tests, and mail orders for missing data. Timely access to diagnostic information brought about several benefits regarding workflow, patient care, and disease management. These benefits resulted in theoretical net cost savings. The study results indicated that Regional Information Systems may be useful tools to support performance and improve efficiency. However, further studies are required in order to verify how the monetary savings would impact the performance of Health Care Units.

  15. Nonequilibrium Green's function theory for nonadiabatic effects in quantum electron transport

    NASA Astrophysics Data System (ADS)

    Kershaw, Vincent F.; Kosov, Daniel S.

    2017-12-01

    We develop nonequilibrium Green's function-based transport theory, which includes effects of nonadiabatic nuclear motion in the calculation of the electric current in molecular junctions. Our approach is based on the separation of slow and fast time scales in the equations of motion for Green's functions by means of the Wigner representation. Time derivatives with respect to central time serve as a small parameter in the perturbative expansion enabling the computation of nonadiabatic corrections to molecular Green's functions. Consequently, we produce a series of analytic expressions for non-adiabatic electronic Green's functions (up to the second order in the central time derivatives), which depend not solely on the instantaneous molecular geometry but likewise on nuclear velocities and accelerations. An extended formula for electric current is derived which accounts for the non-adiabatic corrections. This theory is concisely illustrated by the calculations on a model molecular junction.
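
    For orientation, the following standard identities (written in a generic notation that may differ from the paper's conventions, including signs) show what separating time scales in the Wigner representation means in practice:

```latex
% Wigner representation: central time T = (t_1 + t_2)/2, relative time \tau = t_1 - t_2.
A(T,\omega) = \int d\tau \; e^{i\omega\tau}\, A\!\left(T + \tfrac{\tau}{2},\; T - \tfrac{\tau}{2}\right).
% A two-time convolution C(t_1,t_2) = \int dt_3\, A(t_1,t_3)\,B(t_3,t_2) becomes,
% to first order in the gradient (central-time derivative) expansion,
C(T,\omega) \;\simeq\; A(T,\omega)\,B(T,\omega)
  \;+\; \frac{i}{2}\Bigl(\partial_T A\,\partial_\omega B \;-\; \partial_\omega A\,\partial_T B\Bigr) \;+\; \cdots
```

    Truncating such a series at second order in central-time derivatives is what yields Green's functions that depend on nuclear velocities and accelerations as well as on the instantaneous geometry.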

  16. Nonequilibrium Green's function theory for nonadiabatic effects in quantum electron transport.

    PubMed

    Kershaw, Vincent F; Kosov, Daniel S

    2017-12-14

    We develop nonequilibrium Green's function-based transport theory, which includes effects of nonadiabatic nuclear motion in the calculation of the electric current in molecular junctions. Our approach is based on the separation of slow and fast time scales in the equations of motion for Green's functions by means of the Wigner representation. Time derivatives with respect to central time serve as a small parameter in the perturbative expansion enabling the computation of nonadiabatic corrections to molecular Green's functions. Consequently, we produce a series of analytic expressions for non-adiabatic electronic Green's functions (up to the second order in the central time derivatives), which depend not solely on the instantaneous molecular geometry but likewise on nuclear velocities and accelerations. An extended formula for electric current is derived which accounts for the non-adiabatic corrections. This theory is concisely illustrated by the calculations on a model molecular junction.

  17. An experimentally based analytical model for the shear capacity of FRP-strengthened reinforced concrete beams

    NASA Astrophysics Data System (ADS)

    Pellegrino, C.; Modena, C.

    2008-05-01

    This paper deals with the shear strengthening of Reinforced Concrete (RC) flexural members with externally bonded Fiber-Reinforced Polymers (FRPs). The interaction between an external FRP and an internal transverse steel reinforcement is not considered in current code recommendations, but it strongly influences the efficiency of the shear strengthening rehabilitation technique and, as a consequence, the computation of the interacting contributions to the nominal shear strength of beams. This circumstance is also discussed on the basis of the results of an experimental investigation of rectangular RC beams strengthened in shear with "U-jacketed" carbon FRP sheets. Based on experimental results of the present and other investigations, a new analytical model for describing the shear capacity of RC beams strengthened according to the most common schemes (side-bonded and "U-jacketed"), taking into account the interaction between the steel and FRP shear strength contributions, is proposed.
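
    For orientation only (the expressions below are illustrative and are not the model proposed in the paper), design codes typically superpose the contributions additively, whereas an interaction-aware model introduces a reduction of the combined transverse-reinforcement terms:

```latex
% Additive format common in design codes: concrete, steel stirrups, FRP.
V_{Rd} \;=\; V_c + V_s + V_f
% Illustrative interaction-aware form: a coefficient \eta \le 1 that depends on the
% transverse steel ratio \rho_s and the FRP ratio \rho_f (hypothetical notation).
V_{Rd} \;=\; V_c + \eta(\rho_s,\rho_f)\,\bigl(V_s + V_f\bigr)
```

    The motivation for a form of this kind is the observation, discussed in the paper, that the superposed steel and FRP contributions interact rather than acting independently.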

  18. Reconciliation of ontology and terminology to cope with linguistics.

    PubMed

    Baud, Robert H; Ceusters, Werner; Ruch, Patrick; Rassinoux, Anne-Marie; Lovis, Christian; Geissbühler, Antoine

    2007-01-01

    To discuss the relationships between ontologies, terminologies and language in the context of Natural Language Processing (NLP) applications in order to show the negative consequences of confusing them. The viewpoints of the terminologist and (computational) linguist are developed separately, and then compared, leading to a reconciliation of these points of view, with consideration of the role of the ontologist. In order to encourage appropriate usage of terminologies, guidelines are presented advocating the simultaneous publication of pragmatic vocabularies supported by terminological material based on adequate ontological analysis. Ontologies, terminologies and natural languages each have their own purpose. Ontologies support machine understanding, natural languages support human communication, and terminologies should form the bridge between them. Therefore, future terminology standards should be based on sound ontology and do justice to the diversities in natural languages. Moreover, they should support local vocabularies, in order to be easily adaptable to local needs and practices.

  19. Computer assisted diagnostic system in tumor radiography.

    PubMed

    Faisal, Ahmed; Parveen, Sharmin; Badsha, Shahriar; Sarwar, Hasan; Reza, Ahmed Wasif

    2013-06-01

    An improved and efficient method is presented in this paper to achieve a better trade-off between noise removal and edge preservation, thereby detecting the tumor region of MRI brain images automatically. A compass operator has been used in the fourth-order Partial Differential Equation (PDE) based denoising technique to preserve the anatomically significant information at the edges. A new morphological technique is also introduced for stripping the skull region from the brain images, which consequently leads to more accurate tumor detection. Finally, automatic seeded region growing segmentation based on an improved single seed point selection algorithm is applied to detect the tumor. The method is tested on publicly available MRI brain images and gives an average PSNR (Peak Signal to Noise Ratio) of 36.49. The obtained results also show a detection accuracy of 99.46%, which is a significant improvement over existing results.
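
    A minimal sketch (ours, not the paper's implementation) of the two quantitative ingredients mentioned above: the PSNR figure of merit and a basic seeded region-growing pass driven by a single seed and an intensity tolerance.

```python
import numpy as np
from collections import deque

def psnr(reference, processed, peak=255.0):
    """Peak Signal-to-Noise Ratio (in dB) between a reference and a processed image."""
    mse = np.mean((reference.astype(float) - processed.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def region_grow(image, seed, tol=15.0):
    """Grow a region from one seed pixel: a 4-neighbour joins if its intensity
    differs from the running region mean by less than `tol`."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    region_sum, region_n = float(image[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc]:
                if abs(float(image[rr, cc]) - region_sum / region_n) < tol:
                    mask[rr, cc] = True
                    region_sum += float(image[rr, cc])
                    region_n += 1
                    queue.append((rr, cc))
    return mask

# Toy usage: a bright blob on a noisy background, seed placed inside the blob.
rng = np.random.default_rng(1)
img = rng.normal(60, 5, (128, 128))
img[40:80, 40:80] += 120
print("segmented pixels:", region_grow(img, seed=(60, 60), tol=20.0).sum())
```

    The paper's contribution lies in selecting the seed automatically and in the PDE-based pre-processing; the sketch only shows the growth criterion itself.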

  20. OVERSEER: An Expert System Monitor for the Psychiatric Hospital

    PubMed Central

    Bronzino, Joseph D.; Morelli, Ralph A.; Goethe, John W.

    1988-01-01

    In order to improve patient care, comply with regulatory guidelines and decrease potential liability, psychiatric hospitals and clinics have been searching for computer systems to monitor the management and treatment of patients. This paper describes OVERSEER: a knowledge-based system that monitors the treatment of psychiatric patients in real time. Based on procedures and protocols developed in the psychiatric setting, OVERSEER monitors the clinical database and issues alerts when standard clinical practices are not followed or when laboratory results or other clinical indicators are abnormal. Written in PROLOG, OVERSEER is designed to interface directly with the hospital's database and thereby utilizes all available pharmacy and laboratory data. Moreover, unlike the interactive expert systems developed for the psychiatric clinic, OVERSEER does not require extensive data entry by the clinician. Consequently, the chief benefit of OVERSEER's monitoring approach is the unobtrusive manner in which it evaluates treatment and patient responses and provides information regarding patient management.

  1. An extended Lagrangian method for subsonic flows

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Loh, Ching Y.

    1992-01-01

    It is well known that fluid motion can be specified by either the Eulerian or Lagrangian description. Most Computational Fluid Dynamics (CFD) developments over the last three decades have been based on the Eulerian description, and considerable progress has been made. In particular, the upwind methods, inspired and guided by the work of Godunov, have met with many successes in dealing with complex flows, especially where discontinuities exist. However, this shock-capturing property has proven to be accurate only when the discontinuity is aligned with one of the grid lines, since most upwind methods are strictly formulated in a 1-D framework and only formally extended to multiple dimensions. Consequently, the attractive property of crisp resolution of these discontinuities is lost, and research on genuinely multi-dimensional approaches has only recently been undertaken by several leading researchers. Nevertheless, these approaches are still based on the Eulerian description.

  2. Variance decomposition in stochastic simulators

    NASA Astrophysics Data System (ADS)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  3. Propagation Velocity of Solid Earth Tides

    NASA Astrophysics Data System (ADS)

    Pathak, S.

    2017-12-01

    One of the significant considerations in most geodetic investigations is to account for the effect of Solid Earth tides on station positions and its consequent impact on coordinate time series. In this research work, the propagation velocity of Solid Earth tides between Indian stations is computed. Mean daily coordinates for the stations were computed by applying the static precise point positioning technique for one day. The computed coordinates were used as input for computing the tidal displacements at the stations by the Gravity method, along three directions, at 1-minute intervals for 24 hours. The baseline distances between four Indian stations were then computed. The propagation velocity of Solid Earth tides can be obtained by studying their concurrent effect at stations separated by a known baseline distance, together with the time taken by the tides to travel from one station to the other. The propagation velocity makes it possible to estimate the effect at any station once the effect at a known station for a specific time period is known. Thus, with knowledge of the propagation velocity, the spatial and temporal effects of Solid Earth tides can be estimated with respect to a known station. Since the tides are generated by the positions of celestial bodies moving relative to the Earth, it is of interest to examine how the propagation velocity correlates with the Earth's rotation speed. The propagation velocity of Solid Earth tides comes out to be in the range of 440-470 m/s, in good agreement with the Earth's rotation speed.
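
    As a rough illustration of the velocity computation (synthetic numbers, not the study's data): predict the tidal displacement series at two stations, estimate the arrival-time difference from the peak of their cross-correlation, and divide the baseline length by that lag.

```python
import numpy as np

dt = 60.0                            # sampling interval [s], matching the 1-minute series
t = np.arange(0, 24 * 3600, dt)      # one day
period = 12.42 * 3600                # M2 (principal lunar semidiurnal) period [s]
baseline = 1.5e6                     # assumed inter-station baseline [m]
true_lag = 3300.0                    # assumed arrival-time difference [s]

station_a = np.sin(2 * np.pi * t / period)               # stand-in for Gravity-method output
station_b = np.sin(2 * np.pi * (t - true_lag) / period)  # same signal, delayed

# Estimate the lag from the peak of the cross-correlation, then the velocity.
xcorr = np.correlate(station_b - station_b.mean(), station_a - station_a.mean(), mode="full")
lags = (np.arange(xcorr.size) - (t.size - 1)) * dt
est_lag = lags[np.argmax(xcorr)]
print(f"lag ~ {est_lag:.0f} s, propagation velocity ~ {baseline / est_lag:.0f} m/s")
```

    With the assumed numbers this yields roughly 455 m/s, i.e. within the 440-470 m/s range reported above and comparable to the Earth's equatorial rotation speed of about 465 m/s.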

  4. Cloud Computing-based Platform for Drought Decision-Making using Remote Sensing and Modeling Products: Preliminary Results for Brazil

    NASA Astrophysics Data System (ADS)

    Vivoni, E.; Mascaro, G.; Shupe, J. W.; Hiatt, C.; Potter, C. S.; Miller, R. L.; Stanley, J.; Abraham, T.; Castilla-Rubio, J.

    2012-12-01

    Droughts and their hydrological consequences are a major threat to food security throughout the world. In arid and semiarid regions dependent on irrigated agriculture, prolonged droughts lead to significant and recurring economic and social losses. In this contribution, we present preliminary results on integrating a set of multi-resolution drought indices into a cloud computing-based visualization platform. We focused our initial efforts on Brazil due to a severe, on-going drought in a large agricultural area in the northeastern part of the country. The online platform includes drought products developed from: (1) a MODIS-based water stress index (WSI) based on inferences from normalized difference vegetation index and land surface temperature fields, (2) a volumetric water content (VWC) index obtained from application of the NASA CASA model, and (3) a set of AVHRR-based vegetation health indices obtained from NOAA/NESDIS. The drought indices are also presented in terms of anomalies with respect to a baseline period. Since our main objective is to engage stakeholders and decision-makers in Brazil, we incorporated other relevant geospatial data into the platform, including irrigation areas, dams and reservoirs, administrative units and annual climate information. We will also present a set of use cases developed to help stakeholders explore, query and provide feedback that allowed fine-tuning of the drought product delivery, presentation and analysis tools. Finally, we discuss potential next steps in development of the online platform, including applications at finer resolutions in specific basins and at a coarser global scale.

  5. Cryptanalysis and improvement of a biometrics-based authentication and key agreement scheme for multi-server environments.

    PubMed

    Yang, Li; Zheng, Zhiming

    2018-01-01

    With advancements in wireless technologies, the study of biometrics-based multi-server authenticated key agreement schemes has gained considerable momentum. Recently, Wang et al. presented a three-factor authentication protocol with key agreement and claimed that their scheme was resistant to several prominent attacks. Unfortunately, this paper indicates that their protocol is still vulnerable to the user impersonation attack, privileged insider attack and server spoofing attack. Furthermore, their protocol cannot provide perfect forward secrecy. As a remedy to these problems, we propose a biometrics-based authentication and key agreement scheme for multi-server environments. Compared with various related schemes, our protocol achieves stronger security and provides more functionality properties. Besides, the proposed protocol shows satisfactory performance with respect to storage requirement, communication overhead and computational cost. Thus, our protocol is suitable for expert systems and other multi-server architectures. Consequently, the proposed protocol is more appropriate for distributed networks.

  6. Alcohol marketing, drunkenness, and problem drinking among Zambian youth: findings from the 2004 Global School-Based Student Health Survey.

    PubMed

    Swahn, Monica H; Ali, Bina; Palmier, Jane B; Sikazwe, George; Mayeya, John

    2011-01-01

    This study examines the associations between alcohol marketing strategies, alcohol education including knowledge about dangers of alcohol and refusal of alcohol, and drinking prevalence, problem drinking, and drunkenness. Analyses are based on the Global School-Based Student Health Survey (GSHS) conducted in Zambia (2004) of students primarily 11 to 16 years of age (N = 2257). Four statistical models were computed to test the associations between alcohol marketing and education and alcohol use, while controlling for possible confounding factors. Alcohol marketing, specifically through providing free alcohol through a company representative, was associated with drunkenness (AOR = 1.49; 95% CI: 1.09-2.02) and problem drinking (AOR = 1.41; 95% CI: 1.06-1.87) among youth after controlling for demographic characteristics, risky behaviors, and alcohol education. However, alcohol education was not associated with drunkenness or problem drinking. These findings underscore the importance of restricting alcohol marketing practices as an important policy strategy for reducing alcohol use and its dire consequences among vulnerable youth.

  7. Trustworthy data collection from implantable medical devices via high-speed security implementation based on IEEE 1363.

    PubMed

    Hu, Fei; Hao, Qi; Lukowiak, Marcin; Sun, Qingquan; Wilhelm, Kyle; Radziszowski, Stanisław; Wu, Yao

    2010-11-01

    Implantable medical devices (IMDs) have played an important role in many medical fields. Any failure in IMD operations could have serious consequences, and it is important to protect IMDs from unauthenticated access. This study investigates secure IMD data collection within a telehealthcare [mobile health (m-health)] network. We use medical sensors carried by patients to securely access IMD data and perform secure sensor-to-sensor communications between patients to relay the IMD data to a remote doctor's server. To meet the requirements on low computational complexity, we choose N-th degree truncated polynomial ring (NTRU)-based encryption/decryption to secure IMD-sensor and sensor-sensor communications. An extended matryoshkas model is developed to estimate direct/indirect trust relationships among sensors. An NTRU hardware implementation in the very-high-speed integrated circuit hardware description language (VHDL) is studied, based on the industry standard IEEE 1363, to increase the speed of key generation. The performance analysis results demonstrate the security robustness of the proposed IMD data access trust model.

  8. The AI Bus architecture for distributed knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Schultz, Roger D.; Stobie, Iain

    1991-01-01

    The AI Bus architecture is a layered, distributed, object-oriented framework developed to support the requirements of advanced technology programs for an order-of-magnitude improvement in software costs. The consequent need for highly autonomous computer systems, adaptable to new technology advances over a long lifespan, led to the design of an open architecture and toolbox for building large-scale, robust, production-quality systems. The AI Bus accommodates a mix of knowledge-based and conventional components, running in heterogeneous, distributed real-world and testbed environments. The concepts and design of the AI Bus architecture are described, together with its current implementation status as a Unix C++ library of reusable objects. Each high-level semiautonomous agent process consists of a number of knowledge sources together with interagent communication mechanisms based on shared blackboards and message-passing acquaintances. Standard interfaces and protocols are followed for combining and validating subsystems. Dynamic probes, or demons, provide an event-driven means of giving active objects shared access to resources and to each other without violating their security.

  9. A Tribute to Charlie Chaplin: Induced Positive Affect Improves Reward-Based Decision-Learning in Parkinson’s Disease

    PubMed Central

    Ridderinkhof, K. Richard; van Wouwe, Nelleke C.; Band, Guido P. H.; Wylie, Scott A.; Van der Stigchel, Stefan; van Hees, Pieter; Buitenweg, Jessika; van de Vijver, Irene; van den Wildenberg, Wery P. M.

    2012-01-01

    Reward-based decision-learning refers to the process of learning to select those actions that lead to rewards while avoiding actions that lead to punishments. This process, known to rely on dopaminergic activity in striatal brain regions, is compromised in Parkinson’s disease (PD). We hypothesized that such decision-learning deficits are alleviated by induced positive affect, which is thought to incur transient boosts in midbrain and striatal dopaminergic activity. Computational measures of probabilistic reward-based decision-learning were determined for 51 patients diagnosed with PD. Previous work has shown these measures to rely on the nucleus caudatus (outcome evaluation during the early phases of learning) and the putamen (reward prediction during later phases of learning). We observed that induced positive affect facilitated learning, through its effects on reward prediction rather than outcome evaluation. Viewing a few minutes of comedy clips served to remedy dopamine-related problems associated with frontostriatal circuitry and, consequently, learning to predict which actions will yield reward. PMID:22707944

  10. Cryptanalysis and improvement of a biometrics-based authentication and key agreement scheme for multi-server environments

    PubMed Central

    Zheng, Zhiming

    2018-01-01

    With advancements in wireless technologies, the study of biometrics-based multi-server authenticated key agreement schemes has gained considerable momentum. Recently, Wang et al. presented a three-factor authentication protocol with key agreement and claimed that their scheme was resistant to several prominent attacks. Unfortunately, this paper indicates that their protocol is still vulnerable to the user impersonation attack, privileged insider attack and server spoofing attack. Furthermore, their protocol cannot provide perfect forward secrecy. As a remedy to these problems, we propose a biometrics-based authentication and key agreement scheme for multi-server environments. Compared with various related schemes, our protocol achieves stronger security and provides more functionality properties. Besides, the proposed protocol shows satisfactory performance with respect to storage requirement, communication overhead and computational cost. Thus, our protocol is suitable for expert systems and other multi-server architectures. Consequently, the proposed protocol is more appropriate for distributed networks. PMID:29534085

  11. Thermodynamic consequences of hydrogen combustion within a containment of pressurized water reactor

    NASA Astrophysics Data System (ADS)

    Bury, Tomasz

    2011-12-01

    Gaseous hydrogen may be generated in a nuclear reactor system as an effect of core overheating. This creates a risk of its uncontrolled combustion, which may have destructive consequences, as could be observed during the Fukushima nuclear power plant accident. Favorable conditions for hydrogen production occur during severe loss-of-coolant accidents. The author used his own lumped-parameter computer code, called HEPCAL, to carry out a set of simulations of large-scale loss-of-coolant accident scenarios within the containment of a second-generation pressurized water reactor. Some simulations resulted in high pressure peaks that seemed unrealistic. A more detailed analysis and a comparison with the consequences of the Three Mile Island and Fukushima accidents allowed interesting conclusions to be drawn.

  12. Home Media and Children’s Achievement and Behavior

    PubMed Central

    Hofferth, Sandra L.

    2010-01-01

    This study provides a national picture of the time American 6–12 year olds spent playing video games, using the computer, and watching television at home in 1997 and 2003 and the association of early use with their achievement and behavior as adolescents. Girls benefited from computers more than boys and Black children’s achievement benefited more from greater computer use than did that of White children. Greater computer use in middle childhood was associated with increased achievement for White and Black girls and Black boys, but not White boys. Greater computer play was also associated with a lower risk of becoming socially isolated among girls. Computer use does not crowd out positive learning-related activities, whereas video game playing does. Consequently, increased video game play had both positive and negative associations with the achievement of girls but not boys. For boys, increased video game play was linked to increased aggressive behavior problems. PMID:20840243

  13. Computer vision syndrome (CVS) - Thermographic Analysis

    NASA Astrophysics Data System (ADS)

    Llamosa-Rincón, L. E.; Jaime-Díaz, J. M.; Ruiz-Cardona, D. F.

    2017-01-01

    The use of computers has grown exponentially in the last decades; the possibility of carrying out several tasks for both professional and leisure purposes has contributed to their wide acceptance by users. The consequences and impact of uninterrupted work with computer screens or displays on visual health have grabbed researchers' attention. When spending long periods of time in front of a computer screen, human eyes are subjected to great effort, which in turn triggers a set of symptoms known as Computer Vision Syndrome (CVS). The most common of them are blurred vision, visual fatigue and Dry Eye Syndrome (DES), due to inadequate lubrication of the ocular surface when blinking decreases. An experimental protocol was designed and implemented to perform thermographic studies on healthy human eyes during exposure to computer displays, with the main purpose of comparing the differences in temperature variations of healthy ocular surfaces.

  14. Computational Investigation of the Performance and Back-Pressure Limits of a Hypersonic Inlet

    NASA Technical Reports Server (NTRS)

    Smart, Michael K.; White, Jeffery A.

    2002-01-01

    A computational analysis of Mach 6.2 operation of a hypersonic inlet with rectangular-to-elliptical shape transition has been performed. The results of the computations are compared with experimental data for cases with and without a manually imposed back-pressure. While the no-back-pressure numerical solutions match the general trends of the data, certain features observed in the experiments did not appear in the computational solutions. The reasons for these discrepancies are discussed and possible remedies are suggested. Most importantly, however, the computational analysis increased the understanding of the consequences of certain aspects of the inlet design. This will enable the performance of future inlets of this class to be improved. Computational solutions with back-pressure underestimated the back-pressure limit observed in the experiments, but did supply significant insight into the character of highly back-pressured inlet flows.

  15. [Computer game addiction: a psychopathological symptom complex in adolescence].

    PubMed

    Wölfling, Klaus; Thalemann, Ralf; Grüsser-Sinopoli, Sabine M

    2008-07-01

    Cases of excessive computer gaming are increasingly reported by practitioners in the psychiatric field. Since there is no standardized definition of this symptom complex, the aim of this study is to assess excessive computer gaming in German adolescents as an addictive disorder and its potential negative consequences. Psychopathological computer gaming behavior was diagnosed by applying the adapted diagnostic criteria of substance-related addictions as defined by the ICD-10. At the same time, demographic variables, state of clinical anxiety and underlying cognitive mechanisms were analyzed. 6.3% of the 221 participating pupils - mostly boys with a low educational background - fulfilled the diagnostic criteria of a behavioral addiction. Clinically diagnosed adolescents exhibited limited cognitive flexibility and were identified as utilizing computer gaming as a mood management strategy. These results can be interpreted as a first hint towards a prevalence estimate of psychopathological computer gaming in German adolescents.

  16. The use of cone beam computed tomography in the diagnosis and management of internal root resorption associated with chronic apical periodontitis: a case report.

    PubMed

    Perlea, Paula; Nistor, Cristina Coralia; Iliescu, Mihaela Georgiana; Iliescu, Alexandru Andrei

    2015-01-01

    Internal root resorption is a consequence of chronic pulp inflammation. Later, pulp necrosis develops, followed by chronic apical periodontitis; hence, in clinical practice, both lesions usually have to be managed simultaneously. A conventional periapical radiograph is mandatory for diagnosis. For improving the diagnosis and management of both lesions, cone beam computed tomography proves to be more reliable than conventional radiography.

  17. Computer Analysis Of High-Speed Roller Bearings

    NASA Technical Reports Server (NTRS)

    Coe, H.

    1988-01-01

    The high-speed cylindrical roller-bearing analysis program (CYBEAN) was developed to compute the behavior of cylindrical rolling-element bearings at high speeds and with misaligned shafts. With the program, accurate assessment of geometry-induced roller preload is possible for a variety of outer-ring and housing configurations and loading conditions. It enables detailed examination of bearing performance and permits exploration of the causes and consequences of bearing skew. It provides a general capability for assessment of designs of bearings supporting the main shafts of engines. Written in FORTRAN IV.

  18. Advanced Optical Burst Switched Network Concepts

    NASA Astrophysics Data System (ADS)

    Nejabati, Reza; Aracil, Javier; Castoldi, Piero; de Leenheer, Marc; Simeonidou, Dimitra; Valcarenghi, Luca; Zervas, Georgios; Wu, Jian

    In recent years, as the bandwidth and the speed of networks have increased significantly, a new generation of network-based applications using the concept of distributed computing and collaborative services is emerging (e.g., Grid computing applications). The use of the available fiber and DWDM infrastructure for these applications is a logical choice, offering huge amounts of cheap bandwidth and ensuring global reach of computing resources [230]. Currently, there is a great deal of interest in deploying optical circuit (wavelength) switched network infrastructure for distributed computing applications that require long-lived wavelength paths and address the specific needs of a small number of well-known users. Typical users are particle physicists who, due to their international collaborations and experiments, generate enormous amounts of data (Petabytes per year). These users require a network infrastructure that can support processing and analysis of large datasets through globally distributed computing resources [230]. However, providing wavelength-granularity bandwidth services is not an efficient and scalable solution for applications and services that address a wider base of user communities with different traffic profiles and connectivity requirements. Examples of such applications may be: scientific collaboration on a smaller scale (e.g., bioinformatics, environmental research), distributed virtual laboratories (e.g., remote instrumentation), e-health, national security and defense, personalized learning environments and digital libraries, and evolving broadband user services (i.e., high-resolution home video editing, real-time rendering, high-definition interactive TV). As a specific example, in e-health services, and in particular mammography applications, the size and quantity of images produced by remote mammography impose stringent network requirements. Initial calculations have shown that for 100 patients to be screened remotely, the network would have to securely transport 1.2 GB of data every 30 s [230]. It is clear from the above that these types of applications need a new network infrastructure and transport technology that make large amounts of bandwidth at subwavelength granularity, together with storage, computation, and visualization resources, potentially available to a wide user base for specified time durations. As these types of collaborative and network-based applications evolve to address a wide range and large number of users, it is infeasible to build dedicated networks for each application type or category. Consequently, there should be an adaptive network infrastructure able to support all application types, each with its own access, network, and resource usage patterns. This infrastructure should offer flexible and intelligent network elements and control mechanisms able to deploy new applications quickly and efficiently.
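
    The mammography figure quoted above translates into a sustained per-site rate of roughly 320 Mbit/s, which is the kind of sub-wavelength demand that the text argues a dedicated circuit per user cannot serve efficiently; the arithmetic is simply:

```python
# 1.2 GB of screening data to be delivered securely every 30 s (figure quoted above).
data_gb, window_s = 1.2, 30.0
print(f"sustained rate ~ {data_gb * 8 / window_s * 1000:.0f} Mbit/s")   # ~320 Mbit/s
```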

  19. Gain Modulation in the Central Nervous System: Where Behavior, Neurophysiology, and Computation Meet

    PubMed Central

    SALINAS, EMILIO; SEJNOWSKI, TERRENCE J.

    2010-01-01

    Gain modulation is a nonlinear way in which neurons combine information from two (or more) sources, which may be of sensory, motor, or cognitive origin. Gain modulation is revealed when one input, the modulatory one, affects the gain or the sensitivity of the neuron to the other input, without modifying its selectivity or receptive field properties. This type of modulatory interaction is important for two reasons. First, it is an extremely widespread integration mechanism; it is found in a plethora of cortical areas and in some subcortical structures as well, and as a consequence it seems to play an important role in a striking variety of functions, including eye and limb movements, navigation, spatial perception, attentional processing, and object recognition. Second, there is a theoretical foundation indicating that gain-modulated neurons may serve as a basis for a general class of computations, namely, coordinate transformations and the generation of invariant responses, which indeed may underlie all the brain functions just mentioned. This article describes the relationships between computational models, the physiological properties of a variety of gain-modulated neurons, and some of the behavioral consequences of damage to gain-modulated neural representations. PMID:11597102
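
    Schematically (in our notation, not necessarily the authors'), gain modulation is the multiplicative combination

```latex
r(x, y) \;=\; g(y)\, f(x), \qquad g(y) > 0,
```

    where f is the tuning curve for the driving input x and the modulatory input y rescales the response amplitude without shifting the preferred value of x; this is the property that makes populations of such neurons suitable for computing coordinate transformations.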

  20. Risk prediction and aversion by anterior cingulate cortex.

    PubMed

    Brown, Joshua W; Braver, Todd S

    2007-12-01

    The recently proposed error-likelihood hypothesis suggests that anterior cingulate cortex (ACC) and surrounding areas will become active in proportion to the perceived likelihood of an error. The hypothesis was originally derived from a computational model prediction. The same computational model now makes a further prediction that ACC will be sensitive not only to predicted error likelihood, but also to the predicted magnitude of the consequences, should an error occur. The product of error likelihood and predicted error consequence magnitude collectively defines the general "expected risk" of a given behavior in a manner analogous but orthogonal to subjective expected utility theory. New fMRI results from an incentive change signal task now replicate the error-likelihood effect, validate the further predictions of the computational model, and suggest why some segments of the population may fail to show an error-likelihood effect. In particular, error-likelihood effects and expected risk effects in general indicate greater sensitivity to earlier predictors of errors and are seen in risk-averse but not risk-tolerant individuals. Taken together, the results are consistent with an expected risk model of ACC and suggest that ACC may generally contribute to cognitive control by recruiting brain activity to avoid risk.
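
    The quantity proposed to drive ACC activity can be written schematically (our notation) as

```latex
\mathrm{Risk}(a) \;=\; \underbrace{P(\text{error} \mid a)}_{\text{error likelihood}}
\;\times\; \underbrace{\bigl|C(\text{error} \mid a)\bigr|}_{\text{predicted consequence magnitude}},
\qquad\text{cf.}\qquad
\mathrm{SEU}(a) \;=\; \sum_i p_i(a)\, u_i(a),
```

    i.e. an expectation over aversive outcomes only, analogous in form but orthogonal in content to subjective expected utility.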

  1. Educating Executive Function

    PubMed Central

    Blair, Clancy

    2016-01-01

    Executive functions are thinking skills that assist with reasoning, planning, problem solving, and managing one’s life. The brain areas that underlie these skills are interconnected with and influenced by activity in many different brain areas, some of which are associated with emotion and stress. One consequence of the stress-specific connections is that executive functions, which help us to organize our thinking, tend to be disrupted when stimulation is too high and we are stressed out, or too low when we are bored and lethargic. Given their central role in reasoning and also in managing stress and emotion, scientists have conducted studies, primarily with adults, to determine whether executive functions can be improved by training. By and large, results have shown that they can be, in part through computer-based videogame-like activities. Evidence of wider, more general benefits from such computer-based training, however, is mixed. Accordingly, scientists have reasoned that training will have wider benefits if it is implemented early, with very young children as the neural circuitry of executive functions is developing, and that it will be most effective if embedded in children’s everyday activities. Evidence produced by this research, however, is also mixed. In sum, much remains to be learned about executive function training. Without question, however, continued research on this important topic will yield valuable information about cognitive development. PMID:27906522

  2. Comparing Macroscale and Microscale Simulations of Porous Battery Electrodes

    DOE PAGES

    Higa, Kenneth; Wu, Shao-Ling; Parkinson, Dilworth Y.; ...

    2017-06-22

    This article describes a vertically-integrated exploration of NMC electrode rate limitations, combining experiments with corresponding macroscale (macro-homogeneous) and microscale models. Parameters common to both models were obtained from experiments or based on published results. Positive electrode tortuosity was the sole fitting parameter used in the macroscale model, while the microscale model used no fitting parameters, instead relying on microstructural domains generated from X-ray microtomography of pristine electrode material held under compression while immersed in electrolyte solution (additionally providing novel observations of electrode wetting). Macroscale simulations showed that the capacity decrease observed at higher rates resulted primarily from solution-phase diffusion resistance. This ability to provide such qualitative insights at low computational costs is a strength of macroscale models, made possible by neglecting electrode spatial details. To explore the consequences of such simplification, the corresponding, computationally-expensive microscale model was constructed. This was found to have limitations preventing quantitatively accurate predictions, for reasons that are discussed in the hope of guiding future work. Nevertheless, the microscale simulation results complement those of the macroscale model by providing a reality-check based on microstructural information; in particular, this novel comparison of the two approaches suggests a reexamination of salt diffusivity measurements.

  3. Distributed Finite Element Analysis Using a Transputer Network

    NASA Technical Reports Server (NTRS)

    Watson, James; Favenesi, James; Danial, Albert; Tombrello, Joseph; Yang, Dabby; Reynolds, Brian; Turrentine, Ronald; Shephard, Mark; Baehmann, Peggy

    1989-01-01

    The principal objective of this research effort was to demonstrate the extraordinarily cost-effective acceleration of finite element structural analysis problems using a transputer-based parallel processing network. This objective was accomplished in the form of a commercially viable parallel processing workstation. The workstation is a desktop-size, low-maintenance computing unit capable of supercomputer performance yet costing two orders of magnitude less. To achieve the principal research objective, a transputer-based structural analysis workstation termed XPFEM was implemented with linear static structural analysis capabilities resembling commercially available NASTRAN. Finite element model files, generated using the on-line preprocessing module or external preprocessing packages, are downloaded to a network of 32 transputers for accelerated solution. The system currently executes at about one third Cray X-MP24 speed but additional acceleration appears likely. For the NASA-selected demonstration problem of a Space Shuttle main engine turbine blade model with about 1500 nodes and 4500 independent degrees of freedom, the Cray X-MP24 required 23.9 seconds to obtain a solution while the transputer network, operated from an IBM PC-AT compatible host computer, required 71.7 seconds. Consequently, the $80,000 transputer network demonstrated a cost-performance ratio about 60 times better than the $15,000,000 Cray X-MP24 system.
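
    The quoted cost-performance ratio can be checked directly from the figures in the abstract (performance taken as solutions per second per dollar):

```python
cray_cost, cray_time = 15_000_000, 23.9      # dollars, seconds per solution
xpfem_cost, xpfem_time = 80_000, 71.7

ratio = (1 / (xpfem_time * xpfem_cost)) / (1 / (cray_time * cray_cost))
print(f"cost-performance advantage ~ {ratio:.0f}x")   # roughly 60x, as stated
```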

  4. Keep your opponents close: social context affects EEG and fEMG linkage in a turn-based computer game.

    PubMed

    Spapé, Michiel M; Kivikangas, J Matias; Järvelä, Simo; Kosunen, Ilkka; Jacucci, Giulio; Ravaja, Niklas

    2013-01-01

    In daily life, we often copy the gestures and expressions of those we communicate with, but recent evidence shows that such mimicry has a physiological counterpart: interaction elicits linkage, which is a concordance between the biological signals of those involved. To find out how the type of social interaction affects linkage, pairs of participants played a turn-based computer game in which the level of competition was systematically varied between cooperation and competition. Linkage in the beta and gamma frequency bands was observed in the EEG, especially when the participants played directly against each other. Emotional expression, measured using facial EMG, reflected this pattern, with the most competitive condition showing enhanced linkage over the facial muscle-regions involved in smiling. These effects were found to be related to self-reported social presence: linkage in positive emotional expression was associated with self-reported shared negative feelings. The observed effects confirmed the hypothesis that the social context affected the degree to which participants had similar reactions to their environment and consequently showed similar patterns of brain activity. We discuss the functional resemblance between linkage, as an indicator of a shared physiology and affect, and the well-known mirror neuron system, and how they relate to social functions like empathy.

  5. Big data and high-performance analytics in structural health monitoring for bridge management

    NASA Astrophysics Data System (ADS)

    Alampalli, Sharada; Alampalli, Sandeep; Ettouney, Mohammed

    2016-04-01

    Structural Health Monitoring (SHM) can be a vital tool for effective bridge management. Combining large data sets from multiple sources to create a data-driven decision-making framework is crucial for the success of SHM. This paper presents a big data analytics framework that combines multiple data sets correlated with functional relatedness to convert data into actionable information that empowers risk-based decision-making. The integrated data environment incorporates near real-time streams of semi-structured data from remote sensors, historical visual inspection data, and observations from structural analysis models to monitor, assess, and manage risks associated with aging bridge inventories. Accelerated processing of the data sets is made possible by four technologies: cloud computing, relational database processing, support from a NoSQL database, and in-memory analytics. The framework is being validated on a railroad corridor that can be subjected to multiple hazards. The framework enables the computation of reliability indices for critical bridge components and individual bridge spans. In addition, the framework includes a risk-based decision-making process that enumerates the costs and consequences of poor bridge performance at span and network levels when rail networks are exposed to natural hazard events such as floods and earthquakes. Big data and high-performance analytics enable insights that assist bridge owners in addressing problems faster.
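
    A toy sketch of the span-level computation described above (our own illustration; the distributions, limit state and consequence costs are hypothetical placeholders): a second-moment reliability index per span and a risk figure obtained by weighting the failure probability by a consequence cost.

```python
from math import erf, sqrt
import numpy as np

def std_normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Hypothetical spans: resistance R and load effect S as independent normals, plus a cost.
spans = {
    "span_1": {"mu_R": 1200.0, "sig_R": 120.0, "mu_S": 800.0, "sig_S": 100.0, "cost": 2.0e6},
    "span_2": {"mu_R": 950.0,  "sig_R": 90.0,  "mu_S": 820.0, "sig_S": 110.0, "cost": 3.5e6},
}

for name, s in spans.items():
    beta = (s["mu_R"] - s["mu_S"]) / np.hypot(s["sig_R"], s["sig_S"])  # index for R - S > 0
    p_fail = std_normal_cdf(-beta)
    risk = p_fail * s["cost"]          # expected consequence cost for this span
    print(f"{name}: beta = {beta:.2f}, P_f = {p_fail:.2e}, risk = ${risk:,.0f}")
```

    Network-level risk then aggregates such span-level terms under the hazard scenarios (flood, earthquake) considered in the framework.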

  6. Pore-scale micro-computed-tomography imaging: Nonwetting-phase cluster-size distribution during drainage and imbibition

    NASA Astrophysics Data System (ADS)

    Georgiadis, A.; Berg, S.; Makurat, A.; Maitland, G.; Ott, H.

    2013-09-01

    We investigated the cluster-size distribution of the residual nonwetting phase in a sintered glass-bead porous medium at two-phase flow conditions, by means of micro-computed-tomography (μCT) imaging with pore-scale resolution. Cluster-size distribution functions and cluster volumes were obtained by image analysis for a range of injected pore volumes under both imbibition and drainage conditions; the field of view was larger than the porosity-based representative elementary volume (REV). We did not attempt to make a definition for a two-phase REV but used the nonwetting-phase cluster-size distribution as an indicator. Most of the nonwetting-phase total volume was found to be contained in clusters that were one to two orders of magnitude larger than the porosity-based REV. The largest observed clusters in fact ranged in volume from 65% to 99% of the entire nonwetting phase in the field of view. As a consequence, the largest clusters observed were statistically not represented and were found to be smaller than the estimated maximum cluster length. The results indicate that the two-phase REV is larger than the field of view attainable by μCT scanning, at a resolution which allows for the accurate determination of cluster connectivity.
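
    The cluster bookkeeping itself is straightforward once the phases are segmented; a minimal sketch (synthetic data standing in for the segmented μCT volume) using 3D connected-component labelling:

```python
import numpy as np
from scipy import ndimage

# Synthetic stand-in for a segmented micro-CT volume: True marks nonwetting-phase voxels.
rng = np.random.default_rng(0)
phase = rng.random((128, 128, 128)) < 0.12          # ~12% nonwetting-phase saturation

labels, n_clusters = ndimage.label(phase)           # 6-connected clusters in 3D
volumes = np.sort(np.bincount(labels.ravel())[1:])  # voxel volume per cluster (drop background)

total = volumes.sum()
print(f"{n_clusters} clusters; largest holds {volumes[-1] / total:.1%} of the phase volume")

# Cluster-size distribution: fraction of the phase contained in clusters of size >= s.
sizes = np.unique(volumes)
frac_in_large_clusters = [volumes[volumes >= s].sum() / total for s in sizes]
```

    In the experiments the same bookkeeping, applied to real segmented volumes, showed the largest cluster alone holding 65% to 99% of the nonwetting phase in the field of view.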

  7. A Comparative Survey of Methods for Remote Heart Rate Detection From Frontal Face Videos

    PubMed Central

    Wang, Chen; Pun, Thierry; Chanel, Guillaume

    2018-01-01

    Remotely measuring physiological activity can provide substantial benefits for both medical and affective computing applications. Recent research has proposed different methodologies for the unobtrusive detection of heart rate (HR) using human face recordings. These methods are based on subtle color changes or motions of the face due to cardiovascular activity, which are invisible to human eyes but can be captured by digital cameras. Several approaches have been proposed, including signal processing and machine learning methods. However, these methods are compared on different datasets, and there is consequently no consensus on method performance. In this article, we describe and evaluate several methods defined in the literature, from 2008 until the present day, for the remote detection of HR using human face recordings. The general HR processing pipeline is divided into three stages: face video processing, face blood volume pulse (BVP) signal extraction, and HR computation. Approaches presented in the paper are classified and grouped according to each stage. At each stage, algorithms are analyzed and compared based on their performance using the public database MAHNOB-HCI. The results reported in this article are limited to the MAHNOB-HCI dataset. Results show that the extracted facial skin area contains more BVP information. Blind source separation and peak detection methods are more robust to head motions when estimating HR. PMID:29765940
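
    A minimal sketch of the final pipeline stage only (HR computation from an already-extracted BVP trace; the filter band, camera rate and synthetic signal are illustrative assumptions, not choices made in the survey):

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def heart_rate_from_bvp(bvp, fs):
    """Estimate heart rate (bpm) from a BVP trace: band-pass to a plausible
    HR band (0.7-4 Hz, i.e. 42-240 bpm), then count systolic peaks."""
    b, a = butter(3, [0.7, 4.0], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, bvp)
    peaks, _ = find_peaks(filtered, distance=int(0.25 * fs))  # >= 0.25 s between beats
    return 60.0 * len(peaks) / (len(bvp) / fs)

# Toy usage: a synthetic 72-bpm pulse plus noise at a 30 fps camera frame rate.
fs = 30.0
t = np.arange(0, 30, 1 / fs)
bvp = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
print(f"estimated HR: {heart_rate_from_bvp(bvp, fs):.0f} bpm")
```

    The surveyed methods differ mainly in the two earlier stages (face/skin tracking and BVP extraction, e.g. by blind source separation), which this sketch takes as given.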

  8. Computer use and stress, sleep disturbances, and symptoms of depression among young adults – a prospective cohort study

    PubMed Central

    2012-01-01

    Background We have previously studied prospective associations between computer use and mental health symptoms in a selected young adult population. The purpose of this study was to investigate if high computer use is a prospective risk factor for developing mental health symptoms in a population-based sample of young adults. Methods The study group was a cohort of young adults (n = 4163), 20–24 years old, who responded to a questionnaire at baseline and 1-year follow-up. Exposure variables included time spent on computer use (CU) in general, email/chat use, computer gaming, CU without breaks, and CU at night causing lost sleep. Mental health outcomes included perceived stress, sleep disturbances, symptoms of depression, and reduced performance due to stress, depressed mood, or tiredness. Prevalence ratios (PRs) were calculated for prospective associations between exposure variables at baseline and mental health outcomes (new cases) at 1-year follow-up for the men and women separately. Results Both high and medium computer use compared to low computer use at baseline were associated with sleep disturbances in the men at follow-up. High email/chat use was negatively associated with perceived stress, but positively associated with reported sleep disturbances for the men. For the women, high email/chat use was (positively) associated with several mental health outcomes, while medium computer gaming was associated with symptoms of depression, and CU without breaks with most mental health outcomes. CU causing lost sleep was associated with mental health outcomes for both men and women. Conclusions Time spent on general computer use was prospectively associated with sleep disturbances and reduced performance for the men. For the women, using the computer without breaks was a risk factor for several mental health outcomes. Some associations were enhanced in interaction with mobile phone use. Using the computer at night and consequently losing sleep was associated with most mental health outcomes for both men and women. Further studies should focus on mechanisms relating information and communication technology (ICT) use to sleep disturbances. PMID:23088719

  9. Computer use and stress, sleep disturbances, and symptoms of depression among young adults--a prospective cohort study.

    PubMed

    Thomée, Sara; Härenstam, Annika; Hagberg, Mats

    2012-10-22

    We have previously studied prospective associations between computer use and mental health symptoms in a selected young adult population. The purpose of this study was to investigate if high computer use is a prospective risk factor for developing mental health symptoms in a population-based sample of young adults. The study group was a cohort of young adults (n = 4163), 20-24 years old, who responded to a questionnaire at baseline and 1-year follow-up. Exposure variables included time spent on computer use (CU) in general, email/chat use, computer gaming, CU without breaks, and CU at night causing lost sleep. Mental health outcomes included perceived stress, sleep disturbances, symptoms of depression, and reduced performance due to stress, depressed mood, or tiredness. Prevalence ratios (PRs) were calculated for prospective associations between exposure variables at baseline and mental health outcomes (new cases) at 1-year follow-up for the men and women separately. Both high and medium computer use compared to low computer use at baseline were associated with sleep disturbances in the men at follow-up. High email/chat use was negatively associated with perceived stress, but positively associated with reported sleep disturbances for the men. For the women, high email/chat use was (positively) associated with several mental health outcomes, while medium computer gaming was associated with symptoms of depression, and CU without breaks with most mental health outcomes. CU causing lost sleep was associated with mental health outcomes for both men and women. Time spent on general computer use was prospectively associated with sleep disturbances and reduced performance for the men. For the women, using the computer without breaks was a risk factor for several mental health outcomes. Some associations were enhanced in interaction with mobile phone use. Using the computer at night and consequently losing sleep was associated with most mental health outcomes for both men and women. Further studies should focus on mechanisms relating information and communication technology (ICT) use to sleep disturbances.

  10. An Interval Type-2 Neural Fuzzy System for Online System Identification and Feature Elimination.

    PubMed

    Lin, Chin-Teng; Pal, Nikhil R; Wu, Shang-Lin; Liu, Yu-Ting; Lin, Yang-Yin

    2015-07-01

    We propose an integrated mechanism for discarding derogatory features and extraction of fuzzy rules based on an interval type-2 neural fuzzy system (NFS)-in fact, it is a more general scheme that can discard bad features, irrelevant antecedent clauses, and even irrelevant rules. High-dimensional input variables and a large number of rules not only increase the computational complexity of NFSs but also reduce their interpretability. Therefore, a mechanism for simultaneous extraction of fuzzy rules and reducing the impact of (or eliminating) the inferior features is necessary. The proposed approach, namely an interval type-2 Neural Fuzzy System for online System Identification and Feature Elimination (IT2NFS-SIFE), uses type-2 fuzzy sets to model uncertainties associated with information and data in designing the knowledge base. The consequent part of the IT2NFS-SIFE is of Takagi-Sugeno-Kang type with interval weights. The IT2NFS-SIFE possesses a self-evolving property that can automatically generate fuzzy rules. The poor features can be discarded through the concept of a membership modulator. The antecedent and modulator weights are learned using a gradient descent algorithm. The consequent part weights are tuned via the rule-ordered Kalman filter algorithm to enhance learning effectiveness. Simulation results show that IT2NFS-SIFE not only simplifies the system architecture by eliminating derogatory/irrelevant antecedent clauses, rules, and features but also maintains excellent performance.

  11. 3D Printing technology over a drug delivery for tissue engineering.

    PubMed

    Lee, Jin Woo; Cho, Dong-Woo

    2015-01-01

    Many researchers have attempted to use computer-aided design (CAD) and computer-aided manufacturing (CAM) to realize a scaffold that provides a three-dimensional (3D) environment for regeneration of tissues and organs. As a result, several 3D printing technologies, including stereolithography, deposition modeling, inkjet-based printing, and selective laser sintering, have been developed. Because these 3D printing technologies use computers for design and fabrication and can fabricate 3D scaffolds as designed, the scaffolds can be standardized. Growth of target tissues and organs requires the presence of appropriate growth factors, so fabrication of 3D scaffold systems that release these biomolecules has been explored. A drug delivery system (DDS) that administers a pharmaceutical compound to achieve a therapeutic effect in cells, animals, and humans is a key technology for delivering biomolecules without the side effects caused by excessive doses. 3D printing technologies and DDSs have been combined successfully, suggesting new possibilities for improved tissue regeneration. If the interaction between cells and scaffold systems with biomolecules can be understood and controlled, and if an optimal 3D tissue-regenerating environment is realized, 3D printing technologies will become an important aspect of tissue engineering research in the near future.

  12. Phase-Space Detection of Cyber Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez Jimenez, Jarilyn M; Ferber, Aaron E; Prowell, Stacy J

    Energy Delivery Systems (EDS) are a network of processes that produce, transfer, and distribute energy. EDS are increasingly dependent on networked computing assets, as are many Industrial Control Systems. Consequently, cyber-attacks pose a real and pertinent threat, as evidenced by Stuxnet, Shamoon, and Dragonfly. Hence, there is a critical need for novel methods to detect, prevent, and mitigate the effects of such attacks. To detect cyber-attacks in EDS, we developed a framework for gathering and analyzing timing data that involves establishing a baseline execution profile and then capturing the effect of perturbations in the state from injecting various malware. The data analysis was based on nonlinear dynamics and graph theory to improve detection of anomalous events in cyber applications. The goal was the extraction of changing dynamics or anomalous activity in the underlying computer system. Takens' theorem in nonlinear dynamics allows reconstruction of topologically invariant, time-delay-embedding states from the computer data in a sufficiently high-dimensional space. The resultant dynamical states were nodes, and the state-to-state transitions were links in a mathematical graph. Alternatively, sequential tabulation of executing instructions provides the nodes with corresponding instruction-to-instruction links. Graph theorems guarantee graph-invariant measures to quantify the dynamical changes in the running applications. Results showed successful detection of cyber events.
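
    A minimal Python sketch of the embedding-plus-graph idea follows; the embedding dimension, delay, quantization step, and the synthetic baseline series are illustrative assumptions, not the framework's actual parameters:

    import numpy as np
    from collections import defaultdict

    def delay_embed(series, dim=3, delay=2):
        # Time-delay embedding (Takens): each row is
        # [x[t], x[t+delay], ..., x[t+(dim-1)*delay]].
        n = len(series) - (dim - 1) * delay
        return np.array([series[i:i + (dim - 1) * delay + 1:delay] for i in range(n)])

    def transition_graph(states, step=0.5):
        # Quantize the embedded states into discrete symbols (graph nodes)
        # and count state-to-state transitions (graph links).
        labels = [tuple(np.floor(s / step).astype(int)) for s in states]
        edges = defaultdict(int)
        for a, b in zip(labels[:-1], labels[1:]):
            edges[(a, b)] += 1
        return set(labels), edges

    # Graph-invariant summaries (node and link counts) for a synthetic
    # baseline timing profile; an anomalous run would shift these measures.
    rng = np.random.default_rng(0)
    baseline = np.sin(np.linspace(0, 20, 500)) + 0.05 * rng.standard_normal(500)
    nodes, edges = transition_graph(delay_embed(baseline))
    print(len(nodes), len(edges))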

  13. Post-processing of seismic parameter data based on valid seismic event determination

    DOEpatents

    McEvilly, Thomas V.

    1985-01-01

    An automated seismic processing system and method are disclosed, including an array of CMOS microprocessors for unattended battery-powered processing of a multi-station network. According to a characterizing feature of the invention, each channel of the network is independently operable to automatically detect events, measure times and amplitudes, and compute and fit Fast Fourier transforms (FFTs) for both P- and S-waves on analog seismic data after it has been sampled at a given rate. The measured parameter data from each channel are then reviewed for event validity by a central controlling microprocessor and, if determined by preset criteria to constitute a valid event, the parameter data are passed to an analysis computer for calculation of hypocenter location, running b-values, source parameters, event count, P-wave polarities, moment-tensor inversion, and Vp/Vs ratios. The in-field real-time analysis of data maximizes the efficiency of microearthquake surveys, allowing flexibility in experimental procedures with a minimum of traditional labor-intensive post-processing. A unique consequence of the system is that none of the original data (i.e., the sensor analog output signals) are necessarily saved after computation; rather, the numerical parameters generated by the automatic analysis are the sole output of the automated seismic processor.
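
    The per-channel spectral step and the central validity check can be sketched as follows; the simple amplitude trigger, the sampling rate, and the three-channel criterion are illustrative assumptions rather than the patented logic:

    import numpy as np

    FS = 100.0  # assumed sampling rate in Hz

    def channel_parameters(samples):
        # Per-channel processing: a crude amplitude trigger plus an FFT
        # amplitude spectrum from which a dominant frequency is taken.
        spectrum = np.abs(np.fft.rfft(samples))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / FS)
        triggered = np.max(np.abs(samples)) > 5.0 * np.median(np.abs(samples))
        return {"triggered": bool(triggered),
                "peak_freq_hz": float(freqs[np.argmax(spectrum[1:]) + 1]),
                "peak_amp": float(np.max(np.abs(samples)))}

    def valid_event(channel_results, min_channels=3):
        # Central check: pass parameters to the analysis computer only if
        # enough channels report a trigger.
        return sum(r["triggered"] for r in channel_results) >= min_channels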

  14. Vector computer memory bank contention

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.

    1985-01-01

    A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.
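
    A toy Monte Carlo version of the contention analysis can be written in a few lines of Python; the bank count, reservation time, and random-access pattern are illustrative assumptions, not the configurations studied in the report:

    import random

    BANKS = 16          # number of independent memory banks
    RESERVE = 6         # bank reservation time, in CPU ticks
    ACCESSES = 100_000  # vector element fetches to simulate

    def contention_fraction(banks=BANKS, reserve=RESERVE, accesses=ACCESSES):
        # Each fetch reserves its bank for `reserve` ticks; the CPU stalls
        # whenever the chosen bank is still busy from an earlier fetch.
        busy_until = [0] * banks
        tick = 0
        stalled = 0
        for _ in range(accesses):
            bank = random.randrange(banks)
            if busy_until[bank] > tick:
                stalled += busy_until[bank] - tick
                tick = busy_until[bank]
            busy_until[bank] = tick + reserve
            tick += 1  # one fetch issues per tick when there is no conflict
        return stalled / tick  # fraction of elapsed ticks lost to contention

    random.seed(1)
    print(f"cycles lost to bank conflicts: {contention_fraction():.1%}")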

  15. Usefulness of Tc99m-mebrofenin Hepatobiliary Scintigraphy and Single Photon Emission Computed Tomography/Computed Tomography in the Diagnosis of Bronchobiliary Fistula.

    PubMed

    Parghane, Rahul Vithalrao; Phulsunga, Rohit Kumar; Gupta, Rajesh; Basher, Rajender Kumar; Bhattacharya, Anish; Mittal, Bhagwant Rai

    2017-01-01

    Bronchobiliary fistula (BBF), a rare complication of liver disease, is an abnormal communication between the biliary tract and the bronchial tree. BBF may occur as a consequence of local liver infections such as hydatid or amebic disease, pyogenic liver abscess or trauma to the liver, obstruction of the biliary tract, and tumor. As such, management of liver disease with BBF is very difficult and is often associated with a high rate of morbidity and mortality. Therefore, timely diagnosis of BBF is imperative. Hepatobiliary scintigraphy, along with hybrid single photon emission computed tomography/computed tomography using Tc99m-mebrofenin, is a very useful noninvasive imaging modality in the diagnosis of BBF.

  16. Usefulness of Tc99m-mebrofenin Hepatobiliary Scintigraphy and Single Photon Emission Computed Tomography/Computed Tomography in the Diagnosis of Bronchobiliary Fistula

    PubMed Central

    Parghane, Rahul Vithalrao; Phulsunga, Rohit Kumar; Gupta, Rajesh; Basher, Rajender Kumar; Bhattacharya, Anish; Mittal, Bhagwant Rai

    2017-01-01

    Bronchobiliary fistula (BBF), a rare complication of liver disease, is an abnormal communication between the biliary tract and the bronchial tree. BBF may occur as a consequence of local liver infections such as hydatid or amebic disease, pyogenic liver abscess or trauma to the liver, obstruction of the biliary tract, and tumor. As such, management of liver disease with BBF is very difficult and is often associated with a high rate of morbidity and mortality. Therefore, timely diagnosis of BBF is imperative. Hepatobiliary scintigraphy, along with hybrid single photon emission computed tomography/computed tomography using Tc99m-mebrofenin, is a very useful noninvasive imaging modality in the diagnosis of BBF. PMID:29033682

  17. Vector computer memory bank contention

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1987-01-01

    A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.

  18. Effect of Continued Support of Midwifery Students in Labour on the Childbirth and Labour Consequences: A Randomized Controlled Clinical Trial

    PubMed Central

    Bolbol-Haghighi, Nahid; Masoumi, Seyedeh Zahra

    2016-01-01

    Introduction: The childbirth experience is a process that stays with women throughout their lives and is the most important consequence of labour. Support is the key factor in having a positive experience of childbirth. To reduce stress and anxiety levels in women during labour and to help them cope with childbirth pain, the emotional, physical, and educational support of doulas can be used. Aim: This study aimed to evaluate the effect of continued support by midwifery students during labour on childbirth and labour consequences. Materials and Methods: The present study was conducted using a randomized controlled clinical trial design on 100 pregnant women referred to the maternity ward at Fatemieh Hospital, Shahroud, Iran. Prior to the start of the study, participants were assigned to the supportive or non-supportive group based on an allocation sequence using a randomized block design and a table of computer-generated random numbers. Supportive care was provided by trained midwifery students. Childbirth and labour consequences were analysed with the chi-square test, Fisher's exact test, the independent t-test, and the Mann-Whitney U-test using SPSS-21 software. Results: The results showed a significantly shorter duration of the first stage of labour in the supportive group compared to the non-supportive group (p < 0.001). Moreover, Apgar scores in the supportive group were significantly higher than in the non-supportive group at minutes 1 and 5 (p < 0.001 and p = 0.04, respectively). Conclusion: The findings of this study show that supportive care provided by midwifery students shortens the duration of the first stage of labour and improves Apgar scores at the first and fifth minutes. PMID:27790526
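
    The allocation scheme described above can be sketched in Python; the block size of 4 and the seed are assumptions for illustration, not the trial's actual parameters:

    import random

    def block_randomization(n_participants, block_size=4, seed=42):
        # 1:1 allocation in shuffled blocks so group sizes stay balanced
        # throughout recruitment.
        rng = random.Random(seed)
        allocation = []
        while len(allocation) < n_participants:
            block = (["supportive"] * (block_size // 2)
                     + ["non-supportive"] * (block_size // 2))
            rng.shuffle(block)
            allocation.extend(block)
        return allocation[:n_participants]

    print(block_randomization(8))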

  19. CAD/CAM and scientific data management at Dassault

    NASA Technical Reports Server (NTRS)

    Bohn, P.

    1984-01-01

    The history of CAD/CAM and scientific data management at Dassault is presented. Emphasis is placed on the targets of the now commercially available software CATIA. The links with scientific computations such as aerodynamics and structural analysis are presented. Comments are made on the principles followed within the company. The consequences of the approximative nature of scientific data are examined. The main consequence of the new history function is its protection against copying or alteration. Future plans at Dassault for scientific data appear to run in the opposite direction to some general tendencies.

  20. 2006 - 2016: Ten Years Of Tsunami In French Polynesia

    NASA Astrophysics Data System (ADS)

    Reymond, D.; Jamelot, A.; Hyvernaud, O.

    2016-12-01

    Located in the south-central Pacific and despite its far-field situation, French Polynesia is strongly affected by tsunamis generated along the major subduction zones around the Pacific. At the time of writing, 10 tsunamis have been generated in the Pacific Ocean since 2006; all of these events were recorded in French Polynesia and produced different levels of warning, ranging from a simple seismic warning with an information bulletin up to an effective tsunami warning with evacuation of the coastal zone. These tsunamigenic events represent an invaluable opportunity to evolve and test the tsunami warning system developed in French Polynesia: during the last ten years, the warning rules have evolved from a simple magnitude criterion to the computation of the main seismic source parameters (location, slowness determinant (Newman & Okal, 1998), and focal geometry) using two independent methods: the first inverts W-phases (Kanamori & Rivera, 2012) and the second inverts long-period surface waves (Clément & Reymond, 2014). The source parameters thus estimated allow the expected distributions of tsunami heights to be computed in near real time (with the help of a supercomputer and parallelized numerical simulation codes). Furthermore, two kinds of numerical modeling are used: a very rapid one (about 5 minutes of computation time) based on Green's law (Jamelot & Reymond, 2015), and a more detailed and precise one that uses classical numerical simulations through nested grids (about 45 minutes of computation time). Consequently, the tsunami warning criteria are now based on the expected tsunami heights in the different archipelagos and islands of French Polynesia. This major evolution makes it possible to differentiate and apply different levels of warning for the different archipelagos, working in tandem with the Civil Defense. We present a comparison of the historically observed tsunami heights (instrumental records, including deep-ocean measurements provided by DART buoys and measured tsunami run-ups) with the computed ones. In addition, the sites known for their amplification and resonance effects are well reproduced by the numerical simulations.
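
    The rapid estimate rests on Green's-law shoaling, under which tsunami amplitude grows as the fourth root of the depth ratio. A short Python sketch with illustrative depths and an illustrative deep-ocean amplitude (not values from the events discussed):

    def greens_law_amplitude(amp_deep_m, depth_deep_m, depth_coast_m):
        # Green's law: amplitude scales as (h_deep / h_coast) ** 0.25 as the
        # wave shoals from deep water toward the coast.
        return amp_deep_m * (depth_deep_m / depth_coast_m) ** 0.25

    # 5 cm recorded by a DART buoy over 4000 m of water, extrapolated to 10 m depth:
    print(f"{greens_law_amplitude(0.05, 4000.0, 10.0):.2f} m")  # about 0.22 m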
