Sample records for reasonable computing time

  1. 16 CFR 1115.14 - Time computations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION CONSUMER PRODUCT SAFETY ACT REGULATIONS SUBSTANTIAL... spend a reasonable time for investigation and evaluation. (See § 1115.14(d).) (d) Time for investigation and evaluation. A subject firm may conduct a reasonably expeditious investigation in order to evaluate...

  2. Accessing Computers in Education, One Byte at a Time.

    ERIC Educational Resources Information Center

    Manzo, Anthony V.

    This paper discusses computers and their potential role in education. The term "byte" is first explained, to emphasize the idea that the use of computers should be implemented one "byte" or step at a time. The reasons for this approach are then outlined. Potential applications in computer usage in educational administration are suggested, computer…

  3. Assessing Clinical Reasoning (ASCLIRE): Instrument Development and Validation

    ERIC Educational Resources Information Center

    Kunina-Habenicht, Olga; Hautz, Wolf E.; Knigge, Michel; Spies, Claudia; Ahlers, Olaf

    2015-01-01

    Clinical reasoning is an essential competency in medical education. This study aimed at developing and validating a test to assess diagnostic accuracy, collected information, and diagnostic decision time in clinical reasoning. A norm-referenced computer-based test for the assessment of clinical reasoning (ASCLIRE) was developed, integrating the…

  4. 37 CFR 350.5 - Time.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Time. 350.5 Section 350.5... RULES AND PROCEDURES GENERAL ADMINISTRATIVE PROVISIONS § 350.5 Time. (a) Computation. To compute the due... reasons why there is good cause for the delay; (5) The justification for the amount of additional time...

  5. 37 CFR 350.5 - Time.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2011-07-01 2011-07-01 false Time. 350.5 Section 350.5... RULES AND PROCEDURES GENERAL ADMINISTRATIVE PROVISIONS § 350.5 Time. (a) Computation. To compute the due... reasons why there is good cause for the delay; (5) The justification for the amount of additional time...

  6. 37 CFR 350.5 - Time.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2014-07-01 2014-07-01 false Time. 350.5 Section 350.5... RULES AND PROCEDURES GENERAL ADMINISTRATIVE PROVISIONS § 350.5 Time. (a) Computation. To compute the due... reasons why there is good cause for the delay; (5) The justification for the amount of additional time...

  7. 37 CFR 350.5 - Time.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2013-07-01 2013-07-01 false Time. 350.5 Section 350.5... RULES AND PROCEDURES GENERAL ADMINISTRATIVE PROVISIONS § 350.5 Time. (a) Computation. To compute the due... reasons why there is good cause for the delay; (5) The justification for the amount of additional time...

  8. 37 CFR 350.5 - Time.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2012-07-01 2012-07-01 false Time. 350.5 Section 350.5... RULES AND PROCEDURES GENERAL ADMINISTRATIVE PROVISIONS § 350.5 Time. (a) Computation. To compute the due... reasons why there is good cause for the delay; (5) The justification for the amount of additional time...

  9. Parametric Study of a YAV-8B Harrier in Ground Effect Using Time-Dependent Navier-Stokes Computations

    NASA Technical Reports Server (NTRS)

    Shishir, Pandya; Chaderjian, Neal; Ahmad, Jsaim; Kwak, Dochan (Technical Monitor)

    2001-01-01

Flow simulations using the time-dependent Navier-Stokes equations remain a challenge for several reasons. Principal among them are the difficulty of accurately modeling complex flows and the time needed to perform the computations. A parametric study of such complex problems is not considered practical due to the large cost associated with computing many time-dependent solutions. The computation time for each solution must be reduced in order to make a parametric study possible. With successful reduction of computation time, the issues of accuracy and appropriateness of turbulence models will become more tractable.

  10. Retention in a Computer-based Outreach Intervention For Chronically Ill Rural Women

    PubMed Central

    Weinert, Clarann; Cudney, Shirley; Hill, Wade G.

    2009-01-01

The study's purpose was to examine retention factors in a computer intervention with 158 chronically ill rural women. After a 22-week intervention, 18.9 percent of the women had dropped out. A Cox regression survival analysis was performed to assess the effects of selected covariates on retention. Reasons for dropping were tallied and categorized. Major reasons for dropping were lack of time, decline in health status, and non-participation in study activities. Four covariates predicted survival time: level of computer skills, marital status, work outside of home, and impact of social events on participants' lives. Retention-enhancing strategies are suggested for implementation. PMID:18226760

  11. Some People Should Be Afraid of Computers.

    ERIC Educational Resources Information Center

    Rubin, Charles

    1983-01-01

Discusses the "computerphobia" phenomenon, separating the valid reasons for some individuals' anxiety about computers from their irrational fears. Among the factors examined are fear of breaking the computer, use of unclear documentation, lack of time for learning how to use the computer, and lack of computer knowledge. (JN)

  12. Children Can Solve Bayesian Problems: The Role of Representation in Mental Computation

    ERIC Educational Resources Information Center

    Zhu, Liqi; Gigerenzer, Gerd

    2006-01-01

    Can children reason the Bayesian way? We argue that the answer to this question depends on how numbers are represented, because a representation can do part of the computation. We test, for the first time, whether Bayesian reasoning can be elicited in children by means of natural frequencies. We show that when information was presented to fourth,…
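The representational point can be illustrated with a small sketch (the numbers below are hypothetical, not taken from the study): in natural-frequency format, Bayes' rule reduces to comparing two counts, so the representation does part of the computation.

```python
# Hypothetical scenario for illustration: base rate 10%, hit rate 80%,
# false-alarm rate 10%, in a population of 100 people.

def bayes_probability(base_rate, hit_rate, false_alarm_rate):
    """Posterior via Bayes' rule in probability format."""
    true_pos = base_rate * hit_rate
    false_pos = (1 - base_rate) * false_alarm_rate
    return true_pos / (true_pos + false_pos)

def bayes_natural_frequency(population, base_rate, hit_rate, false_alarm_rate):
    """Same posterior via natural frequencies: just counting cases.
    At the end only two counts need to be compared."""
    sick = round(population * base_rate)                          # 10 of 100
    sick_pos = round(sick * hit_rate)                             # 8 test positive
    healthy_pos = round((population - sick) * false_alarm_rate)   # 9 false positives
    return sick_pos / (sick_pos + healthy_pos)

p_prob = bayes_probability(0.10, 0.80, 0.10)
p_freq = bayes_natural_frequency(100, 0.10, 0.80, 0.10)
```

Both routes give the same posterior (8 out of 17 positives are true positives), but the frequency route needs no multiplication of probabilities.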

  13. Exhaustive Versus Randomized Searchers for Nonlinear Optimization in 21st Century Computing: Solar Application

    NASA Technical Reports Server (NTRS)

    Sen, Syamal K.; AliShaykhian, Gholam

    2010-01-01

We present a simple multi-dimensional exhaustive search method to obtain, in a reasonable time, the optimal solution of a nonlinear programming problem. It is more relevant in the present-day non-mainframe computing scenario, where an estimated 95% of computing resources remain unutilized and computing speed touches petaflops; processor speed is doubling every 18 months, bandwidth every 12 months, and hard disk space every 9 months. A randomized search algorithm or, equivalently, an evolutionary search method is often used instead of an exhaustive search algorithm. The reason is that a randomized approach is usually polynomial-time, i.e., fast, while an exhaustive search method is exponential-time, i.e., slow. We discuss the increasing importance of exhaustive search in optimization with the steady increase of computing power for solving many real-world problems of reasonable size. We also discuss the computational error and complexity of the search algorithm, focusing on the fact that no measuring device can usually measure a quantity with an accuracy greater than 0.005%. We stress that the quality of solution of the exhaustive search - a deterministic method - is better than that of randomized search. In the 21st century computing environment, exhaustive search cannot be set aside as untouchable, and it is not always exponential. We also describe a possible application of these algorithms in improving the efficiency of solar cells - a real hot topic - in the current energy crisis. These algorithms could be excellent tools in the hands of experimentalists and could not only save a large amount of time needed for experiments but also validate the theory against experimental results quickly.
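As a rough illustration of the kind of method the abstract describes (not the authors' implementation), a multi-dimensional exhaustive search evaluates the objective on every point of a uniform grid and keeps the best; it is deterministic, and its cost grows as steps**dimensions.

```python
import itertools

def exhaustive_search(f, bounds, steps):
    """Multi-dimensional exhaustive (grid) search: evaluate f at every
    grid point within the given bounds and return the best point found.
    Deterministic, but cost grows as steps ** len(bounds)."""
    axes = [
        [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
        for lo, hi in bounds
    ]
    best_x, best_val = None, float("inf")
    for x in itertools.product(*axes):
        val = f(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Minimize a simple nonlinear objective on [-2, 2] x [-2, 2];
# the grid spacing (0.1) bounds how close we can get to the optimum.
best_x, best_val = exhaustive_search(
    lambda p: (p[0] - 1) ** 2 + (p[1] + 0.5) ** 2,
    bounds=[(-2, 2), (-2, 2)], steps=41)
```

A finer grid (larger `steps`) tightens the answer at exponential cost, which is exactly the trade-off the paper argues is becoming affordable.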

  14. The Reasoning behind the Scene: Why Do Early Childhood Educators Use Computers in Their Classrooms?

    ERIC Educational Resources Information Center

    Edwards, Suzy

    2005-01-01

    In recent times discussion surrounding the use of computers in early childhood education has emphasised the role computers play in children's everyday lives. This realisation has replaced early debate regarding the appropriateness or otherwise of computer use for young children in early childhood education. An important component of computer use…

  15. Incorporating time and spatial-temporal reasoning into situation management

    NASA Astrophysics Data System (ADS)

    Jakobson, Gabriel

    2010-04-01

Spatio-temporal reasoning plays a significant role in situation management that is performed by intelligent agents (human or machine) by affecting how the situations are recognized, interpreted, acted upon or predicted. Many definitions and formalisms for the notion of spatio-temporal reasoning have emerged in various research fields including psychology, economics and computer science (computational linguistics, data management, control theory, artificial intelligence and others). In this paper we examine the role of spatio-temporal reasoning in situation management, particularly how to resolve situations that are described by using spatio-temporal relations among events and situations. We discuss a model for describing context-sensitive temporal relations and show how the model can be extended for spatial relations.

  16. 14 CFR 389.14 - Locating and copying records and documents.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Records Service (NARS) of the General Services Administration or by computer service bureaus. (1) The..., will furnish the tapes for a reasonable length of time to a computer service bureau chosen by the applicant subject to the Director's approval. The computer service bureau shall assume the liability for the...

  17. 14 CFR 389.14 - Locating and copying records and documents.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Records Service (NARS) of the General Services Administration or by computer service bureaus. (1) The..., will furnish the tapes for a reasonable length of time to a computer service bureau chosen by the applicant subject to the Director's approval. The computer service bureau shall assume the liability for the...

  18. 14 CFR 389.14 - Locating and copying records and documents.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Records Service (NARS) of the General Services Administration or by computer service bureaus. (1) The..., will furnish the tapes for a reasonable length of time to a computer service bureau chosen by the applicant subject to the Director's approval. The computer service bureau shall assume the liability for the...

  19. Improving real-time efficiency of case-based reasoning for medical diagnosis.

    PubMed

    Park, Yoon-Joo

    2014-01-01

Conventional case-based reasoning (CBR) does not perform efficiently on high-volume datasets because of case-retrieval time. Some previous studies have overcome this problem by clustering a case base into several small groups and retrieving neighbors within the group corresponding to a target case. However, this approach generally produces less accurate predictive performance than the conventional CBR. This paper suggests a new case-based reasoning method called Clustering-Merging CBR (CM-CBR), which produces a similar level of predictive performance to the conventional CBR while incurring significantly lower computational cost.
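The abstract does not spell out the CM-CBR algorithm itself, but the cluster-then-retrieve idea it builds on can be sketched as follows (a deterministic farthest-point seeding stands in for whatever clustering the paper actually uses):

```python
import math

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def k_center_seeds(cases, k):
    """Greedy farthest-point seeding: a deterministic way to pick
    k cluster centers from the case base."""
    seeds = [cases[0]]
    while len(seeds) < k:
        seeds.append(max(cases, key=lambda c: min(dist(c, s) for s in seeds)))
    return seeds

def cluster(cases, seeds):
    """Partition the case base: each case joins its nearest seed's group."""
    groups = [[] for _ in seeds]
    for c in cases:
        groups[min(range(len(seeds)), key=lambda j: dist(c, seeds[j]))].append(c)
    return groups

def retrieve(target, seeds, groups, n=3):
    """Retrieve the n nearest cases, searching only the target's cluster,
    so per-query cost drops from O(|case base|) to O(|cluster|)."""
    i = min(range(len(seeds)), key=lambda j: dist(target, seeds[j]))
    return sorted(groups[i], key=lambda c: dist(target, c))[:n]

# Two well-separated blobs of cases; a query near the second blob
# is answered by scanning only that blob.
cases = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1)]
seeds = k_center_seeds(cases, 2)
groups = cluster(cases, seeds)
neighbors = retrieve((4.9, 4.9), seeds, groups, n=2)
```

The accuracy risk the abstract mentions shows up when a query falls near a cluster boundary and its true nearest neighbors sit in an adjacent group.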

  20. Scratch: Multimedia Programming Environment for Young Gifted Learners

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2011-01-01

    Despite the educational benefits, computer programming has not been adopted in the current K-12 education as much as it could have been. One of the reasons for the low adoption of computer programming in K-12 education is the time it takes for (especially young) students to learn computer programming using a text-based programming language, which…

  1. A Computer for Low Context-Switch Time

    DTIC Science & Technology

    1990-03-01

Results To find out how an implementation performs, we use a set of programs that make up a simulation system. These programs compile C language programs ...have worse relative context-switch performance: the time needed to switch contexts has not decreased as much as the time to run programs. Much of...this study is: How seriously is throughput performance impaired by this approach to computer architecture? Reasonable estimates are possible only

  2. An Upgrade of the Aeroheating Software ''MINIVER''

    NASA Technical Reports Server (NTRS)

    Louderback, Pierce

    2013-01-01

Detailed computational modeling: CFD often used to create and execute computational domains. Increasing complexity when moving from 2D to 3D geometries. Computational time increases as finer grids are used (accuracy). Strong tool, but takes time to set up and run. MINIVER: Uses theoretical and empirical correlations. Orders of magnitude faster to set up and run. Not as accurate as CFD, but gives reasonable estimations. MINIVER's Drawbacks: Rigid command-line interface. Lackluster, unorganized documentation. No central control; multiple versions exist and have diverged.

  3. Monitoring real-time navigation processes using the automated reasoning tool (ART)

    NASA Technical Reports Server (NTRS)

    Maletz, M. C.; Culbert, C. J.

    1985-01-01

    An expert system is described for monitoring and controlling navigation processes in real-time. The ART-based system features data-driven computation, accommodation of synchronous and asynchronous data, temporal modeling for individual time intervals and chains of time intervals, and hypothetical reasoning capabilities that consider alternative interpretations of the state of navigation processes. The concept is illustrated in terms of the NAVEX system for monitoring and controlling the high speed ground navigation console for Mission Control at Johnson Space Center. The reasoning processes are outlined, including techniques used to consider alternative data interpretations. Installation of the system has permitted using a single operator, instead of three, to monitor the ascent and entry phases of a Shuttle mission.

  4. The Humanistic Duo: The Park/Recreation Professional and the Computer. (Computer-Can I Use It?).

    ERIC Educational Resources Information Center

    Weiner, Myron E.

This paper states that there are two fundamental reasons for the comparative absence of computer use for parks and recreation at the present time. These are (1) lack of clear-cut cost justification and (2) reluctance on the part of recreation professionals to accept their role as managers and, consequently, to utilize modern management tools. The…

  5. Concepts and Relations in Neurally Inspired In Situ Concept-Based Computing

    PubMed Central

    van der Velde, Frank

    2016-01-01

In situ concept-based computing is based on the notion that conceptual representations in the human brain are “in situ.” In this way, they are grounded in perception and action. Examples are neuronal assemblies, whose connection structures develop over time and are distributed over different brain areas. In situ concept representations cannot be copied or duplicated because that will disrupt their connection structure, and thus the meaning of these concepts. Higher-level cognitive processes, as found in language and reasoning, can be performed with in situ concepts by embedding them in specialized neurally inspired “blackboards.” The interactions between the in situ concepts and the blackboards form the basis for in situ concept computing architectures. In these architectures, memory (concepts) and processing are interwoven, in contrast with the separation between memory and processing found in Von Neumann architectures. Because the further development of Von Neumann computing (more, faster, yet power limited) is questionable, in situ concept computing might be an alternative for concept-based computing. In situ concept computing will be illustrated with a recently developed BABI reasoning task. Neurorobotics can play an important role in the development of in situ concept computing because of the development of in situ concept representations derived in scenarios as needed for reasoning tasks. Neurorobotics would also benefit from power limited and in situ concept computing. PMID:27242504

  6. Concepts and Relations in Neurally Inspired In Situ Concept-Based Computing.

    PubMed

    van der Velde, Frank

    2016-01-01

In situ concept-based computing is based on the notion that conceptual representations in the human brain are "in situ." In this way, they are grounded in perception and action. Examples are neuronal assemblies, whose connection structures develop over time and are distributed over different brain areas. In situ concept representations cannot be copied or duplicated because that will disrupt their connection structure, and thus the meaning of these concepts. Higher-level cognitive processes, as found in language and reasoning, can be performed with in situ concepts by embedding them in specialized neurally inspired "blackboards." The interactions between the in situ concepts and the blackboards form the basis for in situ concept computing architectures. In these architectures, memory (concepts) and processing are interwoven, in contrast with the separation between memory and processing found in Von Neumann architectures. Because the further development of Von Neumann computing (more, faster, yet power limited) is questionable, in situ concept computing might be an alternative for concept-based computing. In situ concept computing will be illustrated with a recently developed BABI reasoning task. Neurorobotics can play an important role in the development of in situ concept computing because of the development of in situ concept representations derived in scenarios as needed for reasoning tasks. Neurorobotics would also benefit from power limited and in situ concept computing.

  7. Factors Influencing Trainee Participation in Computer Software Applications Training.

    ERIC Educational Resources Information Center

    Alexander, Melody Webler

    1993-01-01

    Participants (n=130) who had completed training in WordPerfect, Lotus 1-2-3, and dBase III+ completed a questionnaire related to demographic characteristics and factors that influence training participation. Trainees are participating in computer training for personal reasons, seeking convenient time, location, and length. Child care or…

  8. Computer measurement of particle sizes in electron microscope images

    NASA Technical Reports Server (NTRS)

    Hall, E. L.; Thompson, W. B.; Varsi, G.; Gauldin, R.

    1976-01-01

    Computer image processing techniques have been applied to particle counting and sizing in electron microscope images. Distributions of particle sizes were computed for several images and compared to manually computed distributions. The results of these experiments indicate that automatic particle counting within a reasonable error and computer processing time is feasible. The significance of the results is that the tedious task of manually counting a large number of particles can be eliminated while still providing the scientist with accurate results.
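The core counting step of such a system can be sketched with a simple connected-component pass over a binarized image (an illustrative stand-in, not the authors' code): each 4-connected foreground region is one particle, and its pixel count is a proxy for its size.

```python
def count_particles(grid):
    """Count 4-connected foreground regions ("particles") in a binary
    image and return each region's size in pixels (simple flood fill)."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                stack, size = [(r, c)], 0
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and grid[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                sizes.append(size)
    return sizes

# A tiny binarized "micrograph" with three particles of sizes 3, 4, and 1.
image = [
    [1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 1, 0, 0, 0],
]
particle_sizes = count_particles(image)
```

A real pipeline would precede this with thresholding of the electron microscope image and follow it with a size-distribution histogram, but the counting itself is this cheap.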

  9. Reasoning about real-time systems with temporal interval logic constraints on multi-state automata

    NASA Technical Reports Server (NTRS)

    Gabrielian, Armen

    1991-01-01

Models of real-time systems using a single paradigm often turn out to be inadequate, whether the paradigm is based on states, rules, event sequences, or logic. A model-based approach to reasoning about real-time systems is presented in which a temporal interval logic called TIL is employed to define constraints on a new type of high level automata. The combination, called hierarchical multi-state (HMS) machines, can be used to model formally a real-time system, a dynamic set of requirements, the environment, heuristic knowledge about planning-related problem solving, and the computational states of the reasoning mechanism. In this framework, mathematical techniques were developed for: (1) proving the correctness of a representation; (2) planning of concurrent tasks to achieve goals; and (3) scheduling of plans to satisfy complex temporal constraints. HMS machines allow reasoning about a real-time system from a model of how truth arises instead of merely depending on what is true in a system.

  10. 14 CFR 27.801 - Ditching.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... shown that, under reasonably probable water conditions, the flotation time and trim of the rotorcraft... compliance with this provision is shown by buoyancy and trim computations, appropriate allowances must be...

  11. Experiences of Computer Science Curriculum Design: A Phenomenological Study

    ERIC Educational Resources Information Center

    Sloan, Arthur; Bowe, Brian

    2015-01-01

    This paper presents a qualitative study of 12 computer science lecturers' experiences of curriculum design of several degree programmes during a time of transition from year-long to semesterised courses, due to institutional policy change. The background to the study is outlined, as are the reasons for choosing the research methodology. The main…

  12. Image Analysis via Soft Computing: Prototype Applications at NASA KSC and Product Commercialization

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A.; Klinko, Steve

    2011-01-01

This slide presentation reviews the use of "soft computing," which differs from "hard computing" in that it is more tolerant of imprecision, partial truth, uncertainty, and approximation, and its use in image analysis. Soft computing provides flexible information processing to handle real-life ambiguous situations and achieve tractability, robustness, low solution cost, and a closer resemblance to human decision making. Several systems are or have been developed: Fuzzy Reasoning Edge Detection (FRED), Fuzzy Reasoning Adaptive Thresholding (FRAT), image enhancement techniques, and visual/pattern recognition. These systems are compared with examples that show the effectiveness of each. NASA applications that are reviewed are: Real-Time (RT) Anomaly Detection, Real-Time (RT) Moving Debris Detection, and the Columbia Investigation. The RT anomaly detection reviewed the case of a damaged cable for the emergency egress system. The use of these techniques is further illustrated in the Columbia investigation with the location and detection of foam debris. There are several applications in commercial usage: image enhancement, human screening and privacy protection, visual inspection, 3D heart visualization, tumor detection, and X-ray image enhancement.

  13. 29 CFR 4000.30 - What if I need to resend my filing or issuance for technical reasons?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false What if I need to resend my filing or issuance for... GUARANTY CORPORATION GENERAL FILING, ISSUANCE, COMPUTATION OF TIME, AND RECORD RETENTION Determining Filing and Issuance Dates § 4000.30 What if I need to resend my filing or issuance for technical reasons? (a...

  14. 14 CFR 25.801 - Ditching.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., under reasonably probable water conditions, the flotation time and trim of the airplane will allow the... provision is shown by buoyancy and trim computations, appropriate allowances must be made for probable...

  15. 14 CFR 29.801 - Ditching.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

... must be shown that, under reasonably probable water conditions, the flotation time and trim of the... compliance with this provision is shown by buoyancy and trim computations, appropriate allowances must be...

  16. Computer Game Design Classes: The Students' and Professionals' Perspectives

    ERIC Educational Resources Information Center

    Swacha, Jakub; Skrzyszewski, Adam; Syslo, Wojciech A.

    2010-01-01

    There are multiple reasons that justify teaching computer game design. Its multi-aspectual nature creates opportunity to develop, at the same time, creativity, technical skills and ability to work in team. Thinking of game design classes, one needs direction on what to focus on so that the students could benefit the most. In this paper, we present…

  17. Processing Conversational Implicatures: Alternatives and Counterfactual Reasoning.

    PubMed

    van Tiel, Bob; Schaeken, Walter

    2017-05-01

In a series of experiments, Bott and Noveck (2004) found that the computation of scalar inferences, a variety of conversational implicature, caused a delay in response times. In order to determine which aspect of the inferential process underlying scalar inferences caused this delay, we extended their paradigm to three other kinds of inferences: free choice inferences, conditional perfection, and exhaustivity in "it"-clefts. In contrast to scalar inferences, the computation of these three kinds of inferences facilitated response times. Following a suggestion made by Chemla and Bott (2014), we propose that the time it takes to compute a conversational implicature depends on the structural characteristics of the required alternatives. Copyright © 2016 Cognitive Science Society, Inc.

  18. Using Computer Simulations for Promoting Model-based Reasoning. Epistemological and Educational Dimensions

    NASA Astrophysics Data System (ADS)

    Develaki, Maria

    2017-11-01

    Scientific reasoning is particularly pertinent to science education since it is closely related to the content and methodologies of science and contributes to scientific literacy. Much of the research in science education investigates the appropriate framework and teaching methods and tools needed to promote students' ability to reason and evaluate in a scientific way. This paper aims (a) to contribute to an extended understanding of the nature and pedagogical importance of model-based reasoning and (b) to exemplify how using computer simulations can support students' model-based reasoning. We provide first a background for both scientific reasoning and computer simulations, based on the relevant philosophical views and the related educational discussion. This background suggests that the model-based framework provides an epistemologically valid and pedagogically appropriate basis for teaching scientific reasoning and for helping students develop sounder reasoning and decision-taking abilities and explains how using computer simulations can foster these abilities. We then provide some examples illustrating the use of computer simulations to support model-based reasoning and evaluation activities in the classroom. The examples reflect the procedure and criteria for evaluating models in science and demonstrate the educational advantages of their application in classroom reasoning activities.

  19. A Systematic Approach to Simulating Metabolism in Computational Toxicology. I. The Times Heuristic Modeling Framework

    EPA Science Inventory

    This paper presents a new system for automated 2D-3D migration of chemicals in large databases with conformer multiplication. The main advantages of this system are its straightforward performance, reasonable execution time, simplicity, and applicability to building large 3D che...

  20. 7 CFR 47.25 - Filing; extensions of time; effective date of filing; computations of time; official notice.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... transmittal), if, in the judgment of the examiner or the Secretary, as the case may be, there is good reason... the regulations in this part to be filed shall be deemed to be filed when it reaches the Department in...

  1. Approximation algorithms for planning and control

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; Dean, Thomas

    1989-01-01

    A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.
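A minimal sketch of the anytime idea (illustrative only, not the authors' architecture): an iterative routine that always has some answer ready and whose answer improves with the time allotted. Here Newton iteration for a square root plays the role of the successive-approximation algorithm.

```python
import time

def anytime_sqrt(x, deadline):
    """Anytime Newton iteration for sqrt(x): return the best estimate
    available when the time budget expires. A crude answer exists
    immediately; more time yields a better one."""
    estimate = x if x > 1 else 1.0   # crude initial answer, ready at once
    while time.monotonic() < deadline:
        improved = 0.5 * (estimate + x / estimate)
        if improved == estimate:     # converged; extra time adds nothing
            break
        estimate = improved
    return estimate

# With a generous budget the answer converges to machine precision;
# with no budget at all we still get the crude initial estimate.
good = anytime_sqrt(2.0, time.monotonic() + 0.05)
crude = anytime_sqrt(2.0, time.monotonic())
```

A scheduler of the kind the abstract describes would choose each routine's `deadline` based on the expected value of improving its answer versus spending the time elsewhere.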

  2. [APPLICATION OF COMPUTER-ASSISTED TECHNOLOGY IN ANALYSIS OF REVISION REASON OF UNICOMPARTMENTAL KNEE ARTHROPLASTY].

    PubMed

    Jia, Di; Li, Yanlin; Wang, Guoliang; Gao, Huanyu; Yu, Yang

    2016-01-01

To summarize the reasons for revision of unicompartmental knee arthroplasty (UKA) identified using computer-assisted technology, so as to provide a reference for reducing the incidence of revision and improving surgical technique and rehabilitation. The relevant literature of recent years on analyzing UKA revision reasons using computer-assisted technology was extensively reviewed. The revision reasons identified by computer-assisted technology are fracture of the medial tibial plateau, progressive osteoarthritis of the preserved compartment, dislocation of the mobile bearing, prosthesis loosening, polyethylene wear, and unexplained persistent pain. Computer-assisted technology can be used to analyze the reasons for UKA revision and to guide the choice of operating method and rehabilitation scheme by simulating the operative process and knee joint activities.

  3. Investigating College and Graduate Students' Multivariable Reasoning in Computational Modeling

    ERIC Educational Resources Information Center

    Wu, Hsin-Kai; Wu, Pai-Hsing; Zhang, Wen-Xin; Hsu, Ying-Shao

    2013-01-01

    Drawing upon the literature in computational modeling, multivariable reasoning, and causal attribution, this study aims at characterizing multivariable reasoning practices in computational modeling and revealing the nature of understanding about multivariable causality. We recruited two freshmen, two sophomores, two juniors, two seniors, four…

  4. gpuSPHASE-A shared memory caching implementation for 2D SPH using CUDA

    NASA Astrophysics Data System (ADS)

    Winkler, Daniel; Meister, Michael; Rezavand, Massoud; Rauch, Wolfgang

    2017-04-01

Smoothed particle hydrodynamics (SPH) is a meshless Lagrangian method that has been successfully applied to computational fluid dynamics (CFD), solid mechanics, and many other multi-physics problems. Using the method to solve transport phenomena in process engineering requires the simulation of several days to weeks of physical time. Given the high computational demand of CFD, such simulations in 3D would need a computation time of years, so that a reduction to a 2D domain is inevitable. In this paper gpuSPHASE, a new open-source 2D SPH solver implementation for graphics devices, is developed. It is optimized for simulations that must be executed with thousands of frames per second to be computed in reasonable time. A novel caching algorithm for Compute Unified Device Architecture (CUDA) shared memory is proposed and implemented. The software is validated and the performance is evaluated for the well-established dambreak test case.

  5. Vehicle Integrated Prognostic Reasoner (VIPR) Metric Report

    NASA Technical Reports Server (NTRS)

    Cornhill, Dennis; Bharadwaj, Raj; Mylaraswamy, Dinkar

    2013-01-01

    This document outlines a set of metrics for evaluating the diagnostic and prognostic schemes developed for the Vehicle Integrated Prognostic Reasoner (VIPR), a system-level reasoner that encompasses the multiple levels of large, complex systems such as those for aircraft and spacecraft. VIPR health managers are organized hierarchically and operate together to derive diagnostic and prognostic inferences from symptoms and conditions reported by a set of diagnostic and prognostic monitors. For layered reasoners such as VIPR, the overall performance cannot be evaluated by metrics solely directed toward timely detection and accuracy of estimation of the faults in individual components. Among other factors, overall vehicle reasoner performance is governed by the effectiveness of the communication schemes between monitors and reasoners in the architecture, and the ability to propagate and fuse relevant information to make accurate, consistent, and timely predictions at different levels of the reasoner hierarchy. We outline an extended set of diagnostic and prognostics metrics that can be broadly categorized as evaluation measures for diagnostic coverage, prognostic coverage, accuracy of inferences, latency in making inferences, computational cost, and sensitivity to different fault and degradation conditions. We report metrics from Monte Carlo experiments using two variations of an aircraft reference model that supported both flat and hierarchical reasoning.

  6. Dual Trajectories of Reactive and Proactive Aggression from Mid-childhood to Early Adolescence: Relations to Sensation Seeking, Risk Taking, and Moral Reasoning.

    PubMed

    Cui, Lixian; Colasante, Tyler; Malti, Tina; Ribeaud, Denis; Eisner, Manuel P

    2016-05-01

    We examined the roles of sensation seeking, risk taking, and moral reasoning in the development of reactive and proactive aggression. Data were drawn from a multiethnic, longitudinal study of children from Switzerland (N = 1571; 52 % male; assessed annually over 6 years; 7-years-old at Time 1). At all 6 time points, teachers reported children's reactive and proactive aggression via questionnaire. Children's sensation seeking (at Time 1) and risk taking (at Time 2) were assessed with two interactive computer tasks and their moral reasoning was assessed at Time 2 in response to four hypothetical vignettes depicting moral transgressions. Parallel process Latent Class Growth Analysis (PP-LCGA) identified six dual trajectories of reactive and proactive aggression. Children with either childhood-limited or adolescent-onset aggression showed high sensation seeking. Children with persistent, high levels of both reactive and proactive aggression across time showed high levels of sensation seeking and risk taking, as well as low levels of moral reasoning. Children with only high risk taking were more likely to display moderate levels of aggression across time. These findings highlight the shared and differential roles of sensation seeking, risk taking, and moral reasoning in the dual development of reactive and proactive aggression from mid-childhood to early adolescence. We discuss implications for common and tailored strategies to combat these aggression subtypes.

  7. Hypersonic Research Vehicle (HRV) real-time flight test support feasibility and requirements study. Part 2: Remote computation support for flight systems functions

    NASA Technical Reports Server (NTRS)

    Rediess, Herman A.; Hewett, M. D.

    1991-01-01

    The requirements are assessed for the use of remote computation to support HRV flight testing. First, remote computational requirements were developed to support functions that will eventually be performed onboard operational vehicles of this type. These are functions that either cannot be performed onboard in the time frame of initial HRV flight test programs, because airborne computer technology will not be sufficiently advanced to support the required computational loads, or that are undesirable to perform onboard in the flight test program for other reasons. Second, remote computational support either required or highly desirable for conducting the flight testing itself was addressed. The use of an Automated Flight Management System, which is described in conceptual detail, is proposed. Third, autonomous operations are discussed, and finally, unmanned operations.

  8. Onward to Petaflops Computing

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    With programs such as the US High Performance Computing and Communications Program (HPCCP), the attention of scientists and engineers worldwide has been focused on the potential of very high performance scientific computing, namely systems that are hundreds or thousands of times more powerful than those typically available in desktop systems at any given point in time. Extending the frontiers of computing in this manner has resulted in remarkable advances, both in computing technology itself and in the various scientific and engineering disciplines that utilize these systems. Within the next month or two, a sustained rate of 1 Tflop/s (also written 1 teraflops, or 10(exp 12) floating-point operations per second) is likely to be achieved by the 'ASCI Red' system at Sandia National Laboratories in New Mexico. With this objective in sight, it is reasonable to ask what lies ahead for high-end computing.

  9. Exponential Methods for the Time Integration of Schroedinger Equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cano, B.; Gonzalez-Pachon, A.

    2010-09-30

    We consider exponential methods of second order in time to integrate the cubic nonlinear Schroedinger equation. We are interested in taking advantage of the special structure of this equation. Therefore, we look at symmetry, symplecticity and approximation of invariants of the proposed methods, which allows integration over long times with reasonable accuracy. Computational efficiency is also our aim. We therefore perform numerical computations to compare the methods considered, and conclude that explicit Lawson schemes projected on the norm of the solution are an efficient tool to integrate this equation.
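A minimal sketch of the kind of scheme the abstract recommends: an explicit second-order Lawson method for the cubic NLS, with the solution projected back onto the initial L2 norm after each step. The grid size, time step and plane-wave test below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Cubic (defocusing) NLS: u_t = i u_xx - i |u|^2 u on a periodic domain.
n, L = 128, 2 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)      # angular wavenumbers

def expL(u, dt):
    """Exact flow of the linear part u_t = i u_xx, applied in Fourier space."""
    return np.fft.ifft(np.exp(-1j * k**2 * dt) * np.fft.fft(u))

def N(u):
    """Cubic nonlinearity."""
    return -1j * np.abs(u)**2 * u

def lawson_rk2_step(u, dt):
    a = N(u)
    u_mid = expL(u + dt * a, dt)                # Lawson-Euler predictor
    return expL(u + 0.5 * dt * a, dt) + 0.5 * dt * N(u_mid)

def solve(u0, dt, steps):
    norm0 = np.linalg.norm(u0)
    u = u0.copy()
    for _ in range(steps):
        u = lawson_rk2_step(u, dt)
        u *= norm0 / np.linalg.norm(u)          # projection on the L2 norm
    return u

# Plane-wave check: u = A e^{i(mx - (m^2 + A^2) t)} solves the PDE exactly.
A, m, T, dt = 1.0, 1.0, 0.1, 1e-3
u0 = A * np.exp(1j * m * x)
u_num = solve(u0, dt, int(T / dt))
u_exact = A * np.exp(1j * (m * x - (m**2 + A**2) * T))
print(np.max(np.abs(u_num - u_exact)))          # small for a 2nd-order scheme
```

The projection step enforces conservation of the L2 norm (an invariant of NLS) exactly, which is what keeps long-time integrations well behaved.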

  10. 25 CFR 700.163 - Expenses in searching for replacement location-nonresidential moves.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., including— (a) Transportation computed at prevailing federal per diem and mileage allowance schedules; meals and lodging away from home; (b) Time spent searching, based on reasonable earnings; (c) Fees paid to a...

  11. 25 CFR 700.163 - Expenses in searching for replacement location-nonresidential moves.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ..., including— (a) Transportation computed at prevailing federal per diem and mileage allowance schedules; meals and lodging away from home; (b) Time spent searching, based on reasonable earnings; (c) Fees paid to a...

  12. Sedentary patterns and media availability in European adolescents: The HELENA study.

    PubMed

    Rey-López, Juan P; Vicente-Rodriguez, Germán; Ortega, Francisco B; Ruiz, Jonatan R; Martinez-Gómez, David; De Henauw, Stefan; Manios, Yannis; Molnar, Denes; Polito, Angela; Verloigne, Maite; Castillo, Manuel J; Sjöström, Michael; De Bourdeaudhuij, Ilse; Moreno, Luis A

    2010-07-01

    To describe sedentary behaviors in adolescents and to examine the influence of media availability on TV viewing. The study assessed 3278 adolescents (1537 males and 1741 females, 12.5-17.5 years) involved in the HELENA study (2007). Adolescents reported hours of TV viewing, computer games, console games, internet use for study and non-study reasons, and time spent studying, as well as the availability of TVs, computers and consoles. Time spent in sedentary behaviors was higher during weekends (all p<0.001). Males spent more hours on TV viewing (for > or = 15 years at weekends) and playing computer and console games, while females spent more time studying and surfing for non-study reasons. During weekdays, one third of adolescents exceeded the screen time guidelines (>2 h/day) based solely on TV viewing, whereas around 60% exceeded them at weekends. Having a TV or a console in the bedroom was associated with higher TV viewing (OR=2.66; 95% CI 2.23-3.18; and OR=1.92; 95% CI 1.61-2.28, respectively), whereas the presence of a computer reduced it (OR=0.57; 95% CI 0.48-0.68). Adolescents living in Europe are not meeting media recommendations, especially during weekends. The absence of a TV in the adolescents' bedroom might reduce TV viewing. Further studies are needed to confirm or contrast our findings. Copyright 2010 Elsevier Inc. All rights reserved.

  13. Time Requirements for the Different Item Types Proposed for Use in the Revised SAT®. Research Report No. 2007-3. ETS RR-07-35

    ERIC Educational Resources Information Center

    Bridgeman, Brent; Laitusis, Cara Cahalan; Cline, Frederick

    2007-01-01

    The current study used three data sources to estimate time requirements for different item types on the now current SAT Reasoning Test™. First, we estimated times from a computer-adaptive version of the SAT® (SAT CAT) that automatically recorded item times. Second, we observed students as they answered SAT questions under strict time limits and…

  14. Gust Acoustics Computation with a Space-Time CE/SE Parallel 3D Solver

    NASA Technical Reports Server (NTRS)

    Wang, X. Y.; Himansu, A.; Chang, S. C.; Jorgenson, P. C. E.; Reddy, D. R. (Technical Monitor)

    2002-01-01

    The benchmark Problem 2 in Category 3 of the Third Computational Aero-Acoustics (CAA) Workshop is solved using the space-time conservation element and solution element (CE/SE) method. This problem concerns the unsteady response of an isolated finite-span swept flat-plate airfoil bounded by two parallel walls to an incident gust. The acoustic field generated by the interaction of the gust with the flat-plate airfoil is computed by solving the 3D (three-dimensional) Euler equations in the time domain using a parallel version of a 3D CE/SE solver. The effect of the gust orientation on the far-field directivity is studied. Numerical solutions are presented and compared with analytical solutions, showing a reasonable agreement.

  15. Elucidating Reaction Mechanisms on Quantum Computers

    NASA Astrophysics Data System (ADS)

    Wiebe, Nathan; Reiher, Markus; Svore, Krysta; Wecker, Dave; Troyer, Matthias

    We show how a quantum computer can be employed to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical-computer simulations for such problems, to significantly increase their accuracy and enable hitherto intractable simulations. Detailed resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. This demonstrates that quantum computers will realistically be able to tackle important problems in chemistry that are both scientifically and economically significant.

  16. Computer considerations for real time simulation of a generalized rotor model

    NASA Technical Reports Server (NTRS)

    Howe, R. M.; Fogarty, L. E.

    1977-01-01

    Scaled equations were developed to meet requirements for real time computer simulation of the rotor system research aircraft. These equations form the basis for consideration of both digital and hybrid mechanization for real time simulation. For all-digital simulation, estimates of the required speed in terms of equivalent operations per second are developed based on the complexity of the equations and the required integration frame rates. For both conventional hybrid simulation and hybrid simulation using time-shared analog elements, the amount of required equipment is estimated along with a consideration of the dynamic errors. Conventional hybrid mechanization using analog simulation of those rotor equations which involve rotor-spin frequencies (these constitute the bulk of the equations) requires too much analog equipment. Hybrid simulation using time-sharing techniques for the analog elements appears possible with a reasonable amount of analog equipment. All-digital simulation with affordable general-purpose computers is not possible because of speed limitations, but specially configured digital computers do have the required speed and constitute the recommended approach.

  17. TRAC posttest calculations of Semiscale Test S-06-3. [PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ireland, J.R.; Bleiweis, P.B.

    A comparison of Transient Reactor Analysis Code (TRAC) steady-state and transient results with Semiscale Test S-06-3 (US Standard Problem 8) experimental data is discussed. The TRAC model used employs fewer mesh cells than normal data comparison models so that TRAC's ability to obtain reasonable results with less computer time can be assessed. In general, the TRAC results are in good agreement with the data and the major phenomena found in the experiment are reproduced by the code with a substantial reduction in computing times.

  18. A new computational method for reacting hypersonic flows

    NASA Astrophysics Data System (ADS)

    Niculescu, M. L.; Cojocaru, M. G.; Pricop, M. V.; Fadgyas, M. C.; Pepelea, D.; Stoican, M. G.

    2017-07-01

    Hypersonic gas dynamics computations are challenging due to the difficulty of obtaining reliable and robust chemistry models, which are usually added to the Navier-Stokes equations. From the numerical point of view, it is very difficult to integrate the Navier-Stokes equations together with the chemistry model equations because these partial differential equations have very different characteristic time scales. For these reasons, almost all known finite volume methods fail to solve this second order partial differential system. Unfortunately, the heating of Earth reentry vehicles such as space shuttles and capsules is closely linked to endothermic chemical reactions. A better prediction of wall heat flux leads to a smaller safety coefficient for the thermal shield of a space reentry vehicle; therefore, the size of the thermal shield decreases and the payload increases. For these reasons, the present paper proposes a new computational method based on chemical equilibrium, which gives accurate predictions of hypersonic heating in order to support Earth reentry capsule design.

  19. Assessing clinical reasoning (ASCLIRE): Instrument development and validation.

    PubMed

    Kunina-Habenicht, Olga; Hautz, Wolf E; Knigge, Michel; Spies, Claudia; Ahlers, Olaf

    2015-12-01

    Clinical reasoning is an essential competency in medical education. This study aimed at developing and validating a test to assess diagnostic accuracy, collected information, and diagnostic decision time in clinical reasoning. A norm-referenced computer-based test for the assessment of clinical reasoning (ASCLIRE) was developed, integrating the entire clinical decision process. In a cross-sectional study, participants were asked to choose as many diagnostic measures as they deemed necessary to diagnose the underlying disease of six different cases with acute or sub-acute dyspnea and to provide a diagnosis. 283 students and 20 content experts participated. In addition to diagnostic accuracy, the respective decision time and number of relevant diagnostic measures used were documented as distinct performance indicators. The empirical structure of the test was investigated using a structural equation modeling approach. Experts showed higher accuracy rates and lower decision times than students. In a cross-sectional comparison, the diagnostic accuracy of students improved with the year of study. Wrong diagnoses provided by our sample were comparable to wrong diagnoses in practice. We found an excellent fit for a model with three latent factors (diagnostic accuracy, decision time, and choice of relevant diagnostic information), with diagnostic accuracy showing no significant correlation with decision time. ASCLIRE considers decision time as an important performance indicator in addition to diagnostic accuracy and provides evidence that clinical reasoning is a complex ability comprising diagnostic accuracy, decision time, and choice of relevant diagnostic information as three partly correlated but still distinct aspects.

  20. Covering Resilience: A Recent Development for Binomial Checkpointing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walther, Andrea; Narayanan, Sri Hari Krishna

    In terms of computing time, adjoint methods offer a very attractive alternative to compute gradient information, required, e.g., for optimization purposes. However, together with this very favorable temporal complexity result comes a memory requirement that is in essence proportional with the operation count of the underlying function, e.g., if algorithmic differentiation is used to provide the adjoints. For this reason, checkpointing approaches in many variants have become popular. This paper analyzes an extension of the so-called binomial approach to cover also possible failures of the computing systems. Such a measure of precaution is of special interest for massive parallel simulations and adjoint calculations where the mean time between failure of the large scale computing system is smaller than the time needed to complete the calculation of the adjoint information. We describe the extensions of standard checkpointing approaches required for such resilience, provide a corresponding implementation and discuss first numerical results.
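The binomial approach extended in this paper rests on Griewank's classical result: with s checkpoints and at most r repeated forward sweeps per step, the longest reversible step chain is the binomial coefficient C(s+r, s). A minimal sketch of that bound (the paper's resilience extension is not reproduced here):

```python
from math import comb

def max_steps(snaps: int, reps: int) -> int:
    """Longest chain of forward steps whose adjoint can be reversed with
    `snaps` checkpoints when each step is recomputed at most `reps` times
    (Griewank's binomial result behind the 'revolve' schedule)."""
    return comb(snaps + reps, snaps)

def min_reps(steps: int, snaps: int) -> int:
    """Smallest repetition count that covers `steps` with `snaps` checkpoints."""
    r = 0
    while max_steps(snaps, r) < steps:
        r += 1
    return r

print(max_steps(3, 3))      # 20: 3 checkpoints and 3 sweeps cover 20 steps
print(min_reps(1000, 10))   # 4 extra sweeps suffice for 1000 steps
```

The trade-off is visible directly: memory (checkpoints) and recomputation (sweeps) jointly grow the reachable chain length binomially, which is why modest checkpoint counts suffice for very long simulations.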

  1. Kalman filter estimation of human pilot-model parameters

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Roland, V. R.

    1975-01-01

    The parameters of a human pilot-model transfer function are estimated by applying the extended Kalman filter to the corresponding retarded differential-difference equations in the time domain. Use of computer-generated data indicates that most of the parameters, including the implicit time delay, may be reasonably estimated in this way. When applied to two sets of experimental data obtained from a closed-loop tracking task performed by a human, the Kalman filter generated diverging residuals for one of the measurement types, apparently because of model assumption errors. Application of a modified adaptive technique was found to overcome the divergence and to produce reasonable estimates of most of the parameters.
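The general technique, estimating parameters by augmenting the state vector and running an extended Kalman filter over it, can be sketched on a toy first-order system. This is an illustrative model with hypothetical values, not the pilot-model transfer function or the retarded differential-difference equations of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical truth: x_{k+1} = a*x_k + u_k, measured as y_k = x_k + noise.
a_true, steps = 0.8, 400
u = np.sin(0.3 * np.arange(steps))             # persistently exciting input
x, ys = 0.0, []
for k in range(steps):
    x = a_true * x + u[k]
    ys.append(x + 0.05 * rng.standard_normal())

# EKF on the augmented state s = [x, a], with a modeled as constant.
s = np.array([0.0, 0.3])                       # deliberately wrong initial a
P = np.diag([1.0, 1.0])
Q = np.diag([1e-4, 1e-6])                      # small process noise
R = 0.05**2
H = np.array([[1.0, 0.0]])
for k in range(steps):
    F = np.array([[s[1], s[0]], [0.0, 1.0]])   # Jacobian of the transition
    s = np.array([s[1] * s[0] + u[k], s[1]])   # predict
    P = F @ P @ F.T + Q
    K = P @ H.T / (H @ P @ H.T + R)            # Kalman gain
    s = s + (K * (ys[k] - s[0])).ravel()       # measurement update
    P = (np.eye(2) - K @ H) @ P

print(s[1])   # estimate of the unknown parameter a_true = 0.8
```

Because the parameter enters the transition multiplicatively, the augmented system is nonlinear even though the plant is linear, which is exactly why the extended (rather than plain) Kalman filter is needed.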

  2. Computing the universe: how large-scale simulations illuminate galaxies and dark energy

    NASA Astrophysics Data System (ADS)

    O'Shea, Brian

    2015-04-01

    High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web, because these structures operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and their complexity and nonlinearity often defy analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.

  3. A European Flagship Programme on Extreme Computing and Climate

    NASA Astrophysics Data System (ADS)

    Palmer, Tim

    2017-04-01

    In 2016, an outline proposal co-authored by a number of leading climate modelling scientists from around Europe for a (c. 1 billion euro) flagship project on exascale computing and high-resolution global climate modelling was sent to the EU via its Future and Emerging Flagship Technologies Programme. The project is formally entitled "A Flagship European Programme on Extreme Computing and Climate (EPECC)". In this talk I will outline the reasons why I believe such a project is needed and describe the current status of the project. I will leave time for some discussion.

  4. A New Application of the Channel Packet Method for Low Energy 1-D Elastic Scattering

    DTIC Science & Technology

    2006-09-01

    matter. On a cosmic scale, we wonder if a collision between an asteroid and Earth led to the extinction of the dinosaurs. Collisions are important...in Figure 12. In an effort to have the computation time reasonable was chosen to be for this simulation. In order to represent the intermediate...linear regions joined by the two labeled points. However, based on Figure 13 the two potential functions are reasonably close and so one would not

  5. A Large number of fast cosmological simulations

    NASA Astrophysics Data System (ADS)

    Koda, Jun; Kazin, E.; Blake, C.

    2014-01-01

    Mock galaxy catalogs are essential tools to analyze large-scale structure data. Many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We perform 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain new, improved Baryon Acoustic Oscillation (BAO) cosmic distance measurements using the density field "reconstruction" technique. We use 1296^3 particles in a periodic box of 600/h Mpc on a side, which is the minimum requirement from the survey volume and observed galaxies. In order to perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulation is more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes with 216 computing cores. We have completed the 3600 simulations with a reasonable computation time of 200k core hours. We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.

  6. An on-line reactivity and power monitor for a TRIGA reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Binney, Stephen E.; Bakir, Alia J.

    1988-07-01

    As personal computers (PCs) become more and more of a significant influence on modern technology, it is reasonable that at some point they would be used to interface with TRIGA reactors. A personal computer with a special interface board has been used to monitor key parameters during operation of the Oregon State University TRIGA Reactor (OSTR). A description of the apparatus used and sample results are included.

  7. Developing Computer Model-Based Assessment of Chemical Reasoning: A Feasibility Study

    ERIC Educational Resources Information Center

    Liu, Xiufeng; Waight, Noemi; Gregorius, Roberto; Smith, Erica; Park, Mihwa

    2012-01-01

    This paper reports a feasibility study on developing computer model-based assessments of chemical reasoning at the high school level. Computer models are flash and NetLogo environments to make simultaneously available three domains in chemistry: macroscopic, submicroscopic, and symbolic. Students interact with computer models to answer assessment…

  8. Solving probability reasoning based on DNA strand displacement and probability modules.

    PubMed

    Zhang, Qiang; Wang, Xiaobiao; Wang, Xiaojun; Zhou, Changjun

    2017-12-01

    In computational biology, DNA strand displacement technology is used to simulate the computation process and has shown strong computing ability. Most researchers use it to solve logic problems, but it is only rarely used in probabilistic reasoning. To process probabilistic reasoning, a conditional probability derivation model and a total probability model based on DNA strand displacement were established in this paper. The models were assessed through the game "read your mind." They have been shown to enable the application of probabilistic reasoning in genetic diagnosis. Copyright © 2017 Elsevier Ltd. All rights reserved.
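For reference, the two probability modules realized chemically in the paper correspond to standard identities: the law of total probability and Bayes' conditional inversion. A plain-Python sketch of the arithmetic (the disease/test numbers below are hypothetical):

```python
def total_probability(priors, likelihoods):
    """Law of total probability: P(B) = sum_i P(A_i) * P(B | A_i)."""
    return sum(p * l for p, l in zip(priors, likelihoods))

def posterior(priors, likelihoods, i):
    """Conditional probability via Bayes' rule, using the module above."""
    return priors[i] * likelihoods[i] / total_probability(priors, likelihoods)

# Hypothetical diagnosis: 1% prevalence, 99% sensitivity, 5% false positives.
priors = [0.01, 0.99]          # P(disease), P(healthy)
likelihoods = [0.99, 0.05]     # P(positive test | each hypothesis)
print(total_probability(priors, likelihoods))   # 0.0594
print(posterior(priors, likelihoods, 0))        # about 0.167
```

The DNA implementation encodes these same products and sums as strand-displacement cascades; the arithmetic above is what the cascades compute.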

  9. Computer-aided design of antenna structures and components

    NASA Technical Reports Server (NTRS)

    Levy, R.

    1976-01-01

    This paper discusses computer-aided design procedures for antenna reflector structures and related components. The primary design aid is a computer program that establishes cross sectional sizes of the structural members by an optimality criterion. Alternative types of deflection-dependent objectives can be selected for designs subject to constraints on structure weight. The computer program has a special-purpose formulation to design structures of the type frequently used for antenna construction. These structures, in common with many in other areas of application, are represented by analytical models that employ only the three translational degrees of freedom at each node. The special-purpose construction of the program, however, permits coding and data management simplifications that provide advantages in problem size and execution speed. Size and speed are essentially governed by the requirements of structural analysis and are relatively unaffected by the added requirements of design. Computation times to execute several design/analysis cycles are comparable to the times required by general-purpose programs for a single analysis cycle. Examples in the paper illustrate effective design improvement for structures with several thousand degrees of freedom and within reasonable computing times.

  10. Improved teaching-learning-based and JAYA optimization algorithms for solving flexible flow shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Buddala, Raviteja; Mahapatra, Siba Sankar

    2017-11-01

    Flexible flow shop (or a hybrid flow shop) scheduling problem is an extension of classical flow shop scheduling problem. In a simple flow shop configuration, a job having `g' operations is performed on `g' operation centres (stages) with each stage having only one machine. If any stage contains more than one machine for providing alternate processing facility, then the problem becomes a flexible flow shop problem (FFSP). FFSP which contains all the complexities involved in a simple flow shop and parallel machine scheduling problems is a well-known NP-hard (Non-deterministic polynomial time) problem. Owing to high computational complexity involved in solving these problems, it is not always possible to obtain an optimal solution in a reasonable computation time. To obtain near-optimal solutions in a reasonable computation time, a large variety of meta-heuristics have been proposed in the past. However, tuning algorithm-specific parameters for solving FFSP is rather tricky and time consuming. To address this limitation, teaching-learning-based optimization (TLBO) and JAYA algorithm are chosen for the study because these are not only recent meta-heuristics but they do not require tuning of algorithm-specific parameters. Although these algorithms seem to be elegant, they lose solution diversity after few iterations and get trapped at the local optima. To alleviate such drawback, a new local search procedure is proposed in this paper to improve the solution quality. Further, mutation strategy (inspired from genetic algorithm) is incorporated in the basic algorithm to maintain solution diversity in the population. Computational experiments have been conducted on standard benchmark problems to calculate makespan and computational time. It is found that the rate of convergence of TLBO is superior to JAYA. From the results, it is found that TLBO and JAYA outperform many algorithms reported in the literature and can be treated as efficient methods for solving the FFSP.
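As a concrete illustration of why JAYA needs no algorithm-specific tuning, here is a minimal continuous-variable sketch of its update rule on the sphere test function. The FFSP makespan encoding and the paper's local-search and mutation extensions are omitted; population size and iteration count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def jaya(f, dim=5, pop=20, iters=200, lo=-5.0, hi=5.0):
    """Minimal JAYA: each candidate moves toward the best solution and
    away from the worst; no algorithm-specific parameters to tune."""
    X = rng.uniform(lo, hi, (pop, dim))
    for _ in range(iters):
        fit = np.apply_along_axis(f, 1, X)
        best, worst = X[fit.argmin()], X[fit.argmax()]
        r1 = rng.random((pop, dim))
        r2 = rng.random((pop, dim))
        X_new = np.clip(X + r1 * (best - np.abs(X))
                          - r2 * (worst - np.abs(X)), lo, hi)
        fit_new = np.apply_along_axis(f, 1, X_new)
        improved = fit_new < fit
        X[improved] = X_new[improved]          # greedy acceptance
    fit = np.apply_along_axis(f, 1, X)
    return X[fit.argmin()], float(fit.min())

sphere = lambda x: float(np.sum(x * x))        # optimum 0 at the origin
x_best, f_best = jaya(sphere)
print(f_best)   # small: the population contracts toward the optimum
```

Only the common controls (population size, iteration budget) appear; this absence of tunable coefficients is the property the abstract highlights.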

  11. Shortest path problem on a grid network with unordered intermediate points

    NASA Astrophysics Data System (ADS)

    Saw, Veekeong; Rahman, Amirah; Eng Ong, Wen

    2017-10-01

    We consider a shortest path problem with a single cost factor on a grid network with unordered intermediate points. A two-stage heuristic algorithm is proposed to find a feasible solution path within a reasonable amount of time. To evaluate the performance of the proposed algorithm, computational experiments are performed on grid maps of varying size and number of intermediate points. Preliminary results for the problem are reported. Numerical comparisons against brute force show that the proposed algorithm consistently yields solutions that are within 10% of the optimal solution and uses significantly less computation time.
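The two-stage idea can be illustrated with a toy version: greedily order the unordered intermediate points by grid distance (stage 1), then stitch consecutive legs together with breadth-first search (stage 2). This is a sketch under assumed 4-neighbour movement and unit step costs, not the authors' algorithm:

```python
from collections import deque
from itertools import permutations

def bfs_dist(grid, a, b):
    """Shortest 4-neighbour path length between free cells a and b."""
    rows, cols = len(grid), len(grid[0])
    seen, q = {a}, deque([(a, 0)])
    while q:
        (r, c), d = q.popleft()
        if (r, c) == b:
            return d
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                q.append(((nr, nc), d + 1))
    return float("inf")

def path_len(grid, pts):
    return sum(bfs_dist(grid, pts[i], pts[i + 1]) for i in range(len(pts) - 1))

def two_stage(grid, start, goal, mids):
    """Stage 1: order intermediate points greedily (nearest neighbour);
    stage 2: stitch consecutive legs together with BFS."""
    order, cur, rest = [], start, list(mids)
    while rest:
        nxt = min(rest, key=lambda p: bfs_dist(grid, cur, p))
        order.append(nxt)
        rest.remove(nxt)
        cur = nxt
    return path_len(grid, [start] + order + [goal])

def brute_force(grid, start, goal, mids):
    return min(path_len(grid, [start] + list(p) + [goal])
               for p in permutations(mids))

grid = [[0] * 6 for _ in range(6)]
grid[2][1:5] = [1, 1, 1, 1]                    # an obstacle to route around
start, goal, mids = (0, 0), (5, 5), [(0, 5), (5, 0), (3, 3)]
print(two_stage(grid, start, goal, mids), brute_force(grid, start, goal, mids))
```

Brute force over orderings is factorial in the number of intermediate points, while the greedy stage is quadratic, which is where the reported computation-time savings come from.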

  12. Application of computational aero-acoustics to real world problems

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.

    1996-01-01

    The application of computational aeroacoustics (CAA) to real problems is discussed in relation to the analysis performed with the aim of assessing the application of the various techniques. It is considered that the applications are limited by the inability of the computational resources to resolve the large range of scales involved in high Reynolds number flows. Possible simplifications are discussed. It is considered that problems remain to be solved in relation to the efficient use of the power of parallel computers and in the development of turbulent modeling schemes. The goal of CAA is stated as being the implementation of acoustic design studies on a computer terminal with reasonable run times.

  13. The University of Wisconsin OAO operating system

    NASA Technical Reports Server (NTRS)

    Heacox, H. C.; Mcnall, J. F.

    1972-01-01

    The Wisconsin OAO operating system is presented, which consists of two parts: a computer program called HARUSPEX, which makes possible reasonably efficient and convenient operation of the package, and ground operations equipment, which provides real-time status monitoring, commanding, and a quick look at the data.

  14. Computerized optimization of radioimmunoassays for hCG and estradiol: an experimental evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yanagishita, M.; Rodbard, D.

    1978-07-15

    The mathematical and statistical theory of radioimmunoassays (RIAs) has been used to develop a series of computer programs to optimize sensitivity or precision at any desired dose level for either equilibrium or nonequilibrium assays. These computer programs provide for the calculation of the equilibrium constants of association and binding capacities for antisera (parameters of Scatchard plots), the association and dissociation rate constants, and prediction of optimum concentration of labeled ligand and antibody and optimum incubation times for the assay. This paper presents an experimental evaluation of the use of these computer programs applied to RIAs for human chorionic gonadotropin (hCG) and estradiol. The experimental results are in reasonable semiquantitative agreement with the predictions of the computer simulations (usually within a factor of two) and thus partially validate the use of computer techniques to optimize RIAs that are reasonably well behaved, as in the case of the hCG and estradiol RIAs. Further, these programs can provide insights into the nature of the RIA system, e.g., the general nature of the sensitivity and precision surfaces. This facilitates empirical optimization of conditions.
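The Scatchard-plot parameters mentioned above come from the linearization bound/free = Ka * (Bmax - bound): the slope of a straight-line fit gives -Ka and the x-intercept gives the binding capacity Bmax. A sketch with hypothetical, noise-free values (units of L/nmol and nmol/L are assumed for illustration):

```python
import numpy as np

# Scatchard linearisation for a single class of binding sites:
#     bound/free = Ka * (Bmax - bound)
Ka, Bmax = 2.0, 0.15                           # hypothetical antiserum
bound = np.linspace(0.1, 0.9, 9) * Bmax        # synthetic, noise-free data
ratio = Ka * (Bmax - bound)                    # "measured" bound/free

slope, intercept = np.polyfit(bound, ratio, 1)
Ka_fit = -slope
Bmax_fit = intercept / Ka_fit
print(Ka_fit, Bmax_fit)   # recovers Ka = 2.0 and Bmax = 0.15
```

With real assay data the points scatter around the line, and weighted regression is usually preferred; the noise-free case just shows how the two parameters fall out of the fit.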

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiller, Mauritius M.; Veinot, Kenneth G.; Easterly, Clay E.

    In this study, methods are addressed to reduce the computational time required to compute organ-dose rate coefficients using Monte Carlo techniques. Several variance reduction techniques are compared, including the reciprocity method, importance sampling, weight windows and the use of the ADVANTG software package. For low-energy photons, the runtime was reduced by a factor of 10^5 when using the reciprocity method for kerma computation for immersion of a phantom in contaminated water. This is particularly significant since impractically long simulation times are required to achieve reasonable statistical uncertainties in organ dose for low-energy photons in this source medium and geometry. Although the MCNP Monte Carlo code is used in this paper, the reciprocity technique can be used equally well with other Monte Carlo codes.
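Of the variance reduction techniques compared, importance sampling is the easiest to illustrate outside a transport code: sample from a biased density and reweight each score by the likelihood ratio. The rare-event example below is generic (not MCNP, and not the reciprocity method itself):

```python
import math
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Rare-event example: P(X > 4) for X ~ N(0,1), exact value about 3.2e-5.
# Naive sampling sees almost no hits; importance sampling draws from the
# shifted density N(4,1) and reweights by the likelihood ratio.
naive = np.mean(rng.standard_normal(n) > 4)

y = rng.standard_normal(n) + 4.0               # samples from N(4,1)
w = np.exp(-4.0 * y + 8.0)                     # N(0,1) pdf / N(4,1) pdf
is_est = np.mean((y > 4) * w)

exact = 0.5 * math.erfc(4 / math.sqrt(2))
print(naive, is_est, exact)
```

With the same sample budget, the importance-sampled estimate lands within about a percent of the exact tail probability, while the naive estimate is dominated by the near-total absence of hits; this gap is the same effect that makes biased sampling indispensable for low-energy photon transport.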

  16. Approaching Gender Parity: Women in Computer Science at Afghanistan's Kabul University

    ERIC Educational Resources Information Center

    Plane, Jandelyn

    2010-01-01

    This study explores the representation of women in computer science at the tertiary level through data collected about undergraduate computer science education at Kabul University in Afghanistan. Previous studies have theorized reasons for underrepresentation of women in computer science, and while many of these reasons are indeed present in…

  17. It Depends on When You Ask: Motives for Using Marijuana Assessed Before versus After a Marijuana Use Event

    PubMed Central

    Shrier, Lydia A.; Scherer, Emily Blood

    2014-01-01

    Marijuana use motives are typically evaluated retrospectively using measures that summarize or generalize across episodes of use, which may compromise validity. Using Ecological Momentary Assessment data, we examined the main reason for a specific marijuana use event measured both prospectively and retrospectively. We then determined reason types, event characteristics, and user characteristics that predicted change in reason. Thirty-six medical outpatients age 15 to 24 years who used marijuana two times a week or more used a handheld computer to select their main reason for use from the five categories of the Marijuana Motives Measure (Simons, Correia, & Carey, 1998) just before and after each time they used marijuana over two weeks (n = 263 events with before/after reason). Reasons were examined individually and according to dimensions identified in motivational models of substance use (positive/negative, internal/external). Reason assessed before use changed to a different reason after use for 20% of events: 10% of events for pleasure; 21%, to cope; 35%, to be more social; 55%, to expand my mind; and 100%, to conform. In the multivariable model, external and expansion reasons each predicted change in reason for use (p < 0.0001 and p = 0.001, respectively). Youth were also more likely to change their reason if older (p = 0.04), if male (p = 0.02), and with weekend use (p = 0.002). Retrospective assessments of event-specific motives for marijuana use may be unreliable and therefore invalid for a substantial minority of events, particularly if use is for external or expansion reasons. PMID:25123342

  18. Integrated software health management for aerospace guidance, navigation, and control systems: A probabilistic reasoning approach

    NASA Astrophysics Data System (ADS)

    Mbaya, Timmy

    Embedded Aerospace Systems have to perform safety and mission critical operations in a real-time environment where timing and functional correctness are extremely important. Guidance, Navigation, and Control (GN&C) systems substantially rely on complex software interfacing with hardware in real-time; any faults in software or hardware, or their interaction could result in fatal consequences. Integrated Software Health Management (ISWHM) provides an approach for detection and diagnosis of software failures while the software is in operation. The ISWHM approach is based on probabilistic modeling of software and hardware sensors using a Bayesian network. To meet memory and timing constraints of real-time embedded execution, the Bayesian network is compiled into an Arithmetic Circuit, which is used for on-line monitoring. This type of system monitoring, using an ISWHM, provides automated reasoning capabilities that compute diagnoses in a timely manner when failures occur. This reasoning capability enables time-critical mitigating decisions and relieves the human agent from the time-consuming and arduous task of foraging through a multitude of isolated---and often contradictory---diagnosis data. For the purpose of demonstrating the relevance of ISWHM, modeling and reasoning is performed on a simple simulated aerospace system running on a real-time operating system emulator, the OSEK/Trampoline platform. Models for a small satellite and an F-16 fighter jet GN&C (Guidance, Navigation, and Control) system have been implemented. Analysis of the ISWHM is then performed by injecting faults and analyzing the ISWHM's diagnoses.
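As a minimal, hypothetical illustration of the probabilistic reasoning an ISWHM performs (not the paper's actual Bayesian network, and with made-up probabilities), consider a single hidden software fault observed through one unreliable timing sensor; the diagnosis follows from Bayes' rule by direct enumeration:

```python
# Toy two-node Bayesian network relating a hidden software fault to an
# observable timing sensor. All probabilities are invented for illustration.

P_fault = 0.01                 # prior probability of a software fault
P_alarm_given_fault = 0.95     # sensor fires when the fault is present
P_alarm_given_ok = 0.02        # false-alarm rate

def posterior_fault(alarm_observed):
    """Posterior P(fault | sensor reading) by Bayes' rule."""
    if alarm_observed:
        num = P_alarm_given_fault * P_fault
        den = num + P_alarm_given_ok * (1 - P_fault)
    else:
        num = (1 - P_alarm_given_fault) * P_fault
        den = num + (1 - P_alarm_given_ok) * (1 - P_fault)
    return num / den

p = posterior_fault(True)   # a single alarm raises, but does not prove, a fault
```

A real ISWHM compiles a much larger network into an arithmetic circuit so that this kind of posterior can be evaluated within real-time deadlines.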

  19. A prognostic model for temporal courses that combines temporal abstraction and case-based reasoning.

    PubMed

    Schmidt, Rainer; Gierl, Lothar

    2005-03-01

    Since clinical management of patients and clinical research are essentially time-oriented endeavours, reasoning about time has become a hot topic in medical informatics. Here we present a method for prognosis of temporal courses, which combines temporal abstractions with case-based reasoning. It is useful for application domains where neither well-known standards, nor known periodicity, nor a complete domain theory exist. We have used our method in two prognostic applications. The first one deals with prognosis of the kidney function for intensive care patients. The idea is to elicit impairments on time, especially to warn against threatening kidney failures. Our second application deals with a completely different domain, namely geographical medicine. Its intention is to compute early warnings against approaching infectious diseases, which are characterised by irregular cyclic occurrences. So far, we have applied our program on influenza and bronchitis. In this paper, we focus on influenza forecast and show first experimental results.
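A minimal sketch of the temporal-abstraction step such prognostic systems rely on: a numeric course is abstracted into qualitative trend states, and a new course is compared against a stored case by its trend sequence. The thresholds and the similarity measure below are illustrative assumptions, not the authors' method:

```python
# Abstract a time series into qualitative trends, then compare two courses
# by the fraction of matching trend states (a crude case-retrieval measure).

def abstract_trends(values, eps=0.05):
    """Map consecutive differences to 'up' / 'steady' / 'down' states."""
    states = []
    for prev, cur in zip(values, values[1:]):
        if cur - prev > eps:
            states.append("up")
        elif prev - cur > eps:
            states.append("down")
        else:
            states.append("steady")
    return states

def similarity(course_a, course_b):
    a, b = abstract_trends(course_a), abstract_trends(course_b)
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), len(b))

stored_case = [1.0, 1.2, 1.5, 1.4, 1.0]   # e.g. a past activity course
new_course  = [0.9, 1.1, 1.4, 1.3, 0.9]
s = similarity(stored_case, new_course)   # identical trend shape despite offset
```

The point of the abstraction is exactly this invariance: two courses at different absolute levels can still retrieve each other as similar cases.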

  20. System matrix computation vs storage on GPU: A comparative study in cone beam CT.

    PubMed

    Matenine, Dmitri; Côté, Geoffroi; Mascolo-Fortin, Julia; Goussard, Yves; Després, Philippe

    2018-02-01

    Iterative reconstruction algorithms in computed tomography (CT) require a fast method for computing the intersection distances between the trajectories of photons and the object, also called ray tracing or system matrix computation. This work, focused on the thin-ray model, aimed at comparing different system matrix handling strategies using graphical processing units (GPUs). In this work, the system matrix is modeled by thin rays intersecting a regular grid of box-shaped voxels, known to be an accurate representation of the forward projection operator in CT. However, an uncompressed system matrix exceeds the random access memory (RAM) capacities of typical computers by one order of magnitude or more. Considering the RAM limitations of GPU hardware, several system matrix handling methods were compared: full storage of a compressed system matrix, on-the-fly computation of its coefficients, and partial storage of the system matrix with partial on-the-fly computation. These methods were tested on geometries mimicking a cone beam CT (CBCT) acquisition of a human head. Execution times of three routines of interest were compared: forward projection, backprojection, and ordered-subsets convex (OSC) iteration. A fully stored system matrix yielded the shortest backprojection and OSC iteration times, with a 1.52× acceleration for OSC when compared to the on-the-fly approach. Nevertheless, the maximum problem size was bound by the available GPU RAM and geometrical symmetries. On-the-fly coefficient computation did not require symmetries and was shown to be the fastest for forward projection. It also offered reasonable execution times of about 176.4 ms per view per OSC iteration for a detector of 512 × 448 pixels and a volume of 384³ voxels, using commodity GPU hardware. Partial system matrix storage, while still relying on symmetries, was shown to yield the lowest relative performance. On-the-fly ray tracing was shown to be the most flexible method, yielding reasonable execution times. A fully stored system matrix allowed for the lowest backprojection and OSC iteration times and may be of interest for certain performance-oriented applications. © 2017 American Association of Physicists in Medicine.
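The storage-versus-recomputation tradeoff discussed above can be sketched in miniature (pure Python, toy geometry). The weight function here is a stand-in for real ray/voxel intersection lengths; the point is only that both strategies compute the same forward projection, trading memory for repeated arithmetic:

```python
# Toy comparison of a fully stored (sparse) system matrix versus on-the-fly
# computation of its coefficients during forward projection.

def weight(ray, voxel):
    """Placeholder for the intersection length of `ray` with `voxel`."""
    return 1.0 / (1 + abs(ray - voxel))

RAYS, VOXELS = range(4), range(6)

# Strategy 1: precompute and store the system matrix once (memory-bound).
stored = {(r, v): weight(r, v) for r in RAYS for v in VOXELS}

def forward_stored(image):
    return [sum(stored[r, v] * image[v] for v in VOXELS) for r in RAYS]

# Strategy 2: recompute each coefficient on demand (compute-bound, no storage).
def forward_on_the_fly(image):
    return [sum(weight(r, v) * image[v] for v in VOXELS) for r in RAYS]

image = [0.0, 1.0, 2.0, 1.0, 0.0, 3.0]
p1, p2 = forward_stored(image), forward_on_the_fly(image)   # identical results
```

On a GPU the same choice appears at much larger scale: the stored matrix saturates RAM while the on-the-fly version saturates arithmetic units.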

  1. A Transitional Model for the Introduction of Technology.

    ERIC Educational Resources Information Center

    Terrell, Steven R.; And Others

    There are many reasons for teachers' resistance to innovation in the classroom through computer use. This resistance can be categorized into one of several broad-based themes: resistance to organizational change; resistance to outside intervention; time management problems; lack of support from the administration; teachers' perceptions; or…

  2. The Kenya rangeland ecological monitoring unit

    NASA Technical Reports Server (NTRS)

    Stevens, W. E. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. Methodology for aerial surveys and ground truth studies was developed, tested, and revised several times to produce reasonably firm methods of procedure. Computer programs were adapted or developed to analyze, store, and recall data from the ground and air monitoring surveys.

  3. Performance assessment of KORAT-3D on the ANL IBM-SP computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexeyev, A.V.; Zvenigorodskaya, O.A.; Shagaliev, R.M.

    1999-09-01

    The TENAR code is currently being developed at the Russian Federal Nuclear Center (VNIIEF) as a coupled dynamics code for the simulation of transients in VVER and RBMK systems and other nuclear systems. The neutronic module in this code system is KORAT-3D. This module is also one of the most computationally intensive components of the code system. A parallel version of KORAT-3D has been implemented to achieve the goal of obtaining transient solutions in reasonable computational time, particularly for RBMK calculations that involve the application of >100,000 nodes. An evaluation of the KORAT-3D code performance was recently undertaken on the Argonne National Laboratory (ANL) IBM ScalablePower (SP) parallel computer located in the Mathematics and Computer Science Division of ANL. At the time of the study, the ANL IBM-SP computer had 80 processors. This study was conducted under the auspices of a technical staff exchange program sponsored by the International Nuclear Safety Center (INSC).

  4. Diagnostics in the Extendable Integrated Support Environment (EISE)

    NASA Technical Reports Server (NTRS)

    Brink, James R.; Storey, Paul

    1988-01-01

    Extendable Integrated Support Environment (EISE) is a real-time computer network consisting of commercially available hardware and software components to support systems level integration, modifications, and enhancement to weapons systems. The EISE approach offers substantial potential savings by eliminating unique support environments in favor of sharing common modules for the support of operational weapon systems. An expert system is being developed that will help support diagnosing faults in this network. This is a multi-level, multi-expert diagnostic system that uses experiential knowledge relating symptoms to faults and also reasons from structural and functional models of the underlying physical model when experiential reasoning is inadequate. The individual expert systems are orchestrated by a supervisory reasoning controller, a meta-level reasoner which plans the sequence of reasoning steps to solve the given specific problem. The overall system, termed the Diagnostic Executive, accesses systems level performance checks and error reports, and issues remote test procedures to formulate and confirm fault hypotheses.

  5. Adiabatic approximation with exponential accuracy for many-body systems and quantum computation

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Rezakhani, Ali T.; Hamma, Alioscia

    2009-10-01

    We derive a version of the adiabatic theorem that is especially suited for applications in adiabatic quantum computation, where it is reasonable to assume that the adiabatic interpolation between the initial and final Hamiltonians is controllable. Assuming that the Hamiltonian is analytic in a finite strip around the real-time axis, that some number of its time derivatives vanish at the initial and final times, and that the target adiabatic eigenstate is nondegenerate and separated by a gap from the rest of the spectrum, we show that one can obtain an error between the final adiabatic eigenstate and the actual time-evolved state which is exponentially small in the evolution time, where this time itself scales as the square of the norm of the time derivative of the Hamiltonian divided by the cube of the minimal gap.
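The scaling stated in the abstract can be written schematically (constants suppressed; the symbols below are illustrative, not the paper's exact notation):

```latex
% Distance between the time-evolved state and the final adiabatic eigenstate
% is exponentially small in the total evolution time T,
\[
  \bigl\| \, |\psi(T)\rangle - |\phi_{\mathrm{ad}}(T)\rangle \, \bigr\|
  \;\le\; c\, e^{-\alpha T},
  \qquad
  T \;\sim\; \frac{\max_s \|\partial_s H(s)\|^{2}}{\Delta_{\min}^{3}},
\]
% where \Delta_{\min} is the minimal spectral gap along the interpolation
% and H(s) is the interpolating Hamiltonian in rescaled time s = t/T.
```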

  6. [Electronic medical records: Evolution of physician-patient relationship in the Primary Care clinic].

    PubMed

    Pérez-Santonja, T; Gómez-Paredes, L; Álvarez-Montero, S; Cabello-Ballesteros, L; Mombiela-Muruzabal, M T

    2017-04-01

    The introduction of electronic medical records and computer media in clinics has influenced the physician-patient relationship. These changes have many advantages, but there is concern that the computer has become too important, going from a working tool to the centre of attention during the clinical interview, decreasing the doctor's interaction with the patient. The objective of the study was to estimate the percentage of time that family physicians spend on computer media compared to interpersonal communication with the patient, and whether this time varies with different variables such as the doctor's age or the reason for the consultation. An observational and descriptive study was conducted over 10 weeks in 2 healthcare centres. The researchers attended all doctor-patient interviews, recording the time each patient entered and left the consultation. Each time the doctor fixed his or her gaze on computer media, the time was clocked. A total of 436 consultations were collected. The doctors looked at computer media for a median of 38.33% of the total duration of an interview. Doctors aged 45 years and older spent more time fixing their eyes on computer media (P<.05). Family physicians spent almost 40% of the consultation time looking at computer media, a proportion that depended on the age of the physician, the number of queries, and the number of medical appointments. Copyright © 2016 Sociedad Española de Médicos de Atención Primaria (SEMERGEN). Publicado por Elsevier España, S.L.U. All rights reserved.

  7. Computation of turbulent boundary layers employing the defect wall-function method. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Brown, Douglas L.

    1994-01-01

    In order to decrease the overall computational time requirements of a spatially-marching parabolized Navier-Stokes finite-difference computer code when applied to turbulent fluid flow, a wall-function methodology, originally proposed by R. Barnwell, was implemented. This numerical effort increases computational speed and calculates reasonably accurate wall shear stress spatial distributions and boundary-layer profiles. Since the wall shear stress is analytically determined from the wall-function model, the computational grid near the wall is not required to spatially resolve the laminar-viscous sublayer. Consequently, a substantially increased computational integration step size is achieved, resulting in a considerable decrease in net computational time. This wall-function technique is demonstrated for adiabatic flat plate test cases from Mach 2 to Mach 8. These test cases are analytically verified employing: (1) Eckert reference method solutions, (2) experimental turbulent boundary-layer data of Mabey, and (3) finite-difference computational code solutions with fully resolved laminar-viscous sublayers. Additionally, results have been obtained for two pressure-gradient cases: (1) an adiabatic expansion corner and (2) an adiabatic compression corner.
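Wall-function methods of this kind rest on the logarithmic law of the wall, which supplies the wall shear stress analytically instead of resolving the viscous sublayer. Schematically (incompressible form; the standard constants shown are typical values, and the cited compressible method adds further corrections):

```latex
% Log-law of the wall, with \kappa \approx 0.41 and B \approx 5.0 for
% smooth walls:
\[
  u^{+} = \frac{1}{\kappa}\,\ln y^{+} + B,
  \qquad
  u^{+} = \frac{u}{u_\tau}, \quad
  y^{+} = \frac{\rho\, u_\tau\, y}{\mu}, \quad
  u_\tau = \sqrt{\tau_w/\rho},
\]
% so matching the first grid-point velocity to the log layer determines
% u_\tau, and hence the wall shear stress \tau_w, without sublayer resolution.
```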

  8. Mask CD relationship to temperature at the time backscatter is received

    NASA Astrophysics Data System (ADS)

    Zable, Harold; Kronmiller, Tom; Pearman, Ryan; Guthrie, Bill; Shirali, Nagesh; Masuda, Yukihiro; Kamikubo, Takashi; Nakayamada, Noriaki; Fujimura, Aki

    2017-07-01

    Mask writers need to be able to write sub-50nm features accurately. Nano-imprint lithography (NIL) masters need to create sub-20nm line and space (L:S) patterns reliably. Increasingly, slower resists are deployed, but mask write times need to remain reasonable. The leading edge EBM-9500 offers 1200A/cm2 current density to shoot variable shaped beam (VSB) to write the masks. Last year, thermal effect correction (TEC) was introduced by NuFlare in the EBM-9500. It is a GPU-accelerated inline correction for the effect that the temperature of the resist has on CD. For example, a 100nm CD may print at 102nm where that area was at a comparably high temperature at the time of the shot. Since thermal effect is a temporal effect, the simulated temperature of the surface of the mask is dynamically updated for the effect of each shot in order to accurately predict the cumulative effect that is the temperature at the location of the shot at the time of the shot, and therefore its impact on CD. The shot dose is changed to reverse the effects of the temperature change. This paper for the first time reveals an enhancement to this thermal model and a simulator for it. It turns out that the temperature at the time each location receives backscatter from other shots also makes a difference to the CD. The effect is secondary, but still measurable for some resists and substrates. Results of a test-chip study will be presented. The computation required for the backscatter effect is substantial. It has been demonstrated that this calculation can be performed fast enough to be inline with the EBM-9500 with a reasonable-sized computing platform. Run-time results and the computing architecture will be presented.

  9. Examining the role of self-regulation and emotion in clinical reasoning: Implications for developing expertise.

    PubMed

    Lajoie, Susanne P; Zheng, Juan; Li, Shan

    2018-06-27

    This paper explores the role that self-regulation and emotions play in establishing a clinical diagnosis in the context of solving a clinical case in BioWorld, a computer supported learning environment designed for medical students to practice clinical reasoning. Group differences between high and low performers were explored. The results revealed no group differences in overall measures of self-regulated learning (SRL), but high performers spent more time than low performers in a subcategory of the reflection phase (reflecting on prioritized evidence and results). A reciprocal role of emotions was demonstrated for clinical reasoning, and emotions predicted students' diagnostic performance. High performers showed fewer negative activating emotions than low performers.

  10. Evaluation of subgrid-scale turbulence models using a fully simulated turbulent flow

    NASA Technical Reports Server (NTRS)

    Clark, R. A.; Ferziger, J. H.; Reynolds, W. C.

    1977-01-01

    An exact turbulent flow field was calculated on a three-dimensional grid with 64 points on a side. The flow simulates grid-generated turbulence from wind tunnel experiments. In this simulation, the grid spacing is small enough to include essentially all of the viscous energy dissipation, and the box is large enough to contain the largest eddy in the flow. The method is limited to low-turbulence Reynolds numbers, in our case R_λ = 36.6. To complete the calculation using a reasonable amount of computer time with reasonable accuracy, a third-order time-integration scheme was developed which runs at about the same speed as a simple first-order scheme. It obtains this accuracy by saving the velocity field and its first time derivative at each time step. Fourth-order accurate space-differencing is used.
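The idea of buying order of accuracy by storing derivative history, at roughly the cost of one derivative evaluation per step, can be illustrated with the classical third-order Adams-Bashforth scheme (a standard multistep method, not necessarily the authors' exact scheme), applied here to dy/dt = −y:

```python
import math

# Third-order Adams-Bashforth: one new derivative evaluation per step, with
# two older derivatives reused from storage, yields O(h^3) global accuracy.

def f(y):
    return -y          # dy/dt = -y, exact solution y(t) = e^{-t}

h, steps = 0.01, 100
# Seed the multistep history with exact values (a common starting strategy).
ys = [math.exp(0.0), math.exp(-h), math.exp(-2 * h)]
fs = [f(y) for y in ys]

for n in range(2, steps):
    y_next = ys[-1] + h / 12.0 * (23 * fs[-1] - 16 * fs[-2] + 5 * fs[-3])
    ys.append(y_next)
    fs.append(f(y_next))      # the only new derivative evaluation this step

error = abs(ys[-1] - math.exp(-steps * h))   # compare with exact e^{-1}
```

A first-order Euler step at the same h would leave an error roughly four orders of magnitude larger, for the same number of derivative evaluations.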

  11. Elucidating reaction mechanisms on quantum computers.

    PubMed

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M; Wecker, Dave; Troyer, Matthias

    2017-07-18

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources.

  12. Elucidating reaction mechanisms on quantum computers

    PubMed Central

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M.; Wecker, Dave; Troyer, Matthias

    2017-01-01

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources. PMID:28674011

  13. Elucidating reaction mechanisms on quantum computers

    NASA Astrophysics Data System (ADS)

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M.; Wecker, Dave; Troyer, Matthias

    2017-07-01

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources.

  14. Knowledge Reasoning with Semantic Data for Real-Time Data Processing in Smart Factory

    PubMed Central

    Wang, Shiyong; Li, Di; Liu, Chengliang

    2018-01-01

    The application of high-bandwidth networks and cloud computing in manufacturing systems will be followed by mass data. Industrial data analysis plays important roles in condition monitoring, performance optimization, flexibility, and transparency of the manufacturing system. However, the currently existing architectures are mainly for offline data analysis, not suitable for real-time data processing. In this paper, we first define the smart factory as a cloud-assisted and self-organized manufacturing system in which physical entities such as machines, conveyors, and products organize production through intelligent negotiation and the cloud supervises this self-organized process for fault detection and troubleshooting based on data analysis. Then, we propose a scheme to integrate knowledge reasoning and semantic data where the reasoning engine processes the ontology model with real-time semantic data coming from the production process. Based on these ideas, we build a benchmarking system for a smart candy packing application that supports direct consumer customization and flexible hybrid production, and the data are collected and processed in real time for fault diagnosis and statistical analysis. PMID:29415444

  15. Design of an EEG-based brain-computer interface (BCI) from standard components running in real-time under Windows.

    PubMed

    Guger, C; Schlögl, A; Walterspacher, D; Pfurtscheller, G

    1999-01-01

    An EEG-based brain-computer interface (BCI) is a direct connection between the human brain and the computer. Such a communication system is needed by patients with severe motor impairments (e.g. late stage of Amyotrophic Lateral Sclerosis) and has to operate in real-time. This paper describes the selection of the appropriate components to construct such a BCI and focuses also on the selection of a suitable programming language and operating system. The multichannel system runs under Windows 95, equipped with a real-time Kernel expansion to obtain reasonable real-time operations on a standard PC. Matlab controls the data acquisition and the presentation of the experimental paradigm, while Simulink is used to calculate the recursive least square (RLS) algorithm that describes the current state of the EEG in real-time. First results of the new low-cost BCI show that the accuracy of differentiating imagination of left and right hand movement is around 95%.
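The recursive least squares update at the heart of such real-time EEG trackers can be sketched as follows. This is the generic RLS algorithm with a forgetting factor, not the exact parameterization of the cited system, and the two-weight model and data are invented for illustration:

```python
import random

# One RLS update: w is the weight vector, P the inverse correlation matrix.
# Gain k = Px / (lam + x'Px); error e = d - w'x; w += k*e; P = (P - k x'P)/lam.

def rls_step(w, P, x, d, lam=0.99):
    Px = [P[0][0] * x[0] + P[0][1] * x[1], P[1][0] * x[0] + P[1][1] * x[1]]
    xPx = x[0] * Px[0] + x[1] * Px[1]
    k = [Px[0] / (lam + xPx), Px[1] / (lam + xPx)]   # gain vector
    e = d - (w[0] * x[0] + w[1] * x[1])              # a-priori error
    w = [w[0] + k[0] * e, w[1] + k[1] * e]
    # P symmetric, so x'P equals (Px)', giving (k x'P)_{ij} = k_i * Px_j.
    P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(2)] for i in range(2)]
    return w, P

random.seed(0)
true_w = [0.7, -0.3]                     # unknown model to be tracked
w, P = [0.0, 0.0], [[100.0, 0.0], [0.0, 100.0]]
for _ in range(500):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    d = true_w[0] * x[0] + true_w[1] * x[1]
    w, P = rls_step(w, P, x, d)          # w converges toward true_w
```

Each update costs a fixed, small number of operations, which is what makes RLS attractive for sample-by-sample real-time operation on a standard PC.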

  16. Extending the Binomial Checkpointing Technique for Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walther, Andrea; Narayanan, Sri Hari Krishna

    In terms of computing time, adjoint methods offer a very attractive alternative to compute gradient information, required, e.g., for optimization purposes. However, together with this very favorable temporal complexity result comes a memory requirement that is in essence proportional to the operation count of the underlying function, e.g., if algorithmic differentiation is used to provide the adjoints. For this reason, checkpointing approaches in many variants have become popular. This paper analyzes an extension of the so-called binomial approach to cover also possible failures of the computing systems. Such a measure of precaution is of special interest for massively parallel simulations and adjoint calculations where the mean time between failures of the large-scale computing system is smaller than the time needed to complete the calculation of the adjoint information. We describe the extensions of standard checkpointing approaches required for such resilience, provide a corresponding implementation and discuss numerical results.
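The counting result behind binomial checkpointing (Griewank's "revolve" schedule) is that with c checkpoints and at most r repeated forward sweeps, the longest adjoinable range of time steps is the binomial coefficient C(c + r, c). A sketch of that capacity calculation (the schedule construction itself is omitted):

```python
from math import comb

# Binomial checkpointing capacity: with c checkpoints and r repetitions,
# at most C(c + r, c) forward steps can be reversed for the adjoint sweep.

def max_steps(c, r):
    """Maximal number of steps reversible with c checkpoints, r repetitions."""
    return comb(c + r, c)

def min_repetitions(steps, c):
    """Smallest r such that `steps` forward steps fit with c checkpoints."""
    r = 0
    while max_steps(c, r) < steps:
        r += 1
    return r

capacity = max_steps(3, 4)          # C(7, 3) = 35 steps
reps = min_repetitions(1000, 10)    # 1000 steps with 10 checkpoints
```

The capacity grows combinatorially, which is why a handful of checkpoints and a few recomputation sweeps suffice for very long time integrations.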

  17. Structural performance analysis and redesign

    NASA Technical Reports Server (NTRS)

    Whetstone, W. D.

    1978-01-01

    Program performs stress, buckling, and vibrational analysis of large, linear, finite-element systems in excess of 50,000 degrees of freedom. Cost, execution time, and storage requirements are kept reasonable through the use of sparse matrix solution techniques and other computational and data management procedures designed for problems of very large size.

  18. User's guide to program FLEXSTAB. [aerodynamics

    NASA Technical Reports Server (NTRS)

    Cavin, R. K., III; Colunga, D.

    1975-01-01

    A manual is presented for correctly submitting program runs in aerodynamics on the UNIVAC 1108 computer system. All major program modules are included. Control cards are documented for the user's convenience, and card parameters are included in order to provide some idea as to reasonable time estimates for the program modules.

  19. Computational Study of Axisymmetric Off-Design Nozzle Flows

    NASA Technical Reports Server (NTRS)

    DalBello, Teryn; Georgiadis, Nicholas; Yoder, Dennis; Keith, Theo

    2003-01-01

    Computational Fluid Dynamics (CFD) analyses of axisymmetric circular-arc boattail nozzles operating off-design at transonic Mach numbers have been completed. These computations span the very difficult transonic flight regime with shock-induced separations and strong adverse pressure gradients. External afterbody and internal nozzle pressure distributions computed with the Wind code are compared with experimental data. A range of turbulence models were examined, including the Explicit Algebraic Stress model. Computations have been completed at freestream Mach numbers of 0.9 and 1.2, and nozzle pressure ratios (NPR) of 4 and 6. Calculations completed with variable time-stepping (steady-state) did not converge to a true steady-state solution. Calculations obtained using constant time-stepping (time-accurate) indicate less variation in flow properties compared with steady-state solutions. This failure to converge to a steady-state solution was the result of using variable time-stepping with large-scale separations present in the flow. Nevertheless, time-averaged boattail surface pressure coefficient and internal nozzle pressures show reasonable agreement with experimental data. The SST turbulence model demonstrates the best overall agreement with experimental data.

  20. On computations of variance, covariance and correlation for interval data

    NASA Astrophysics Data System (ADS)

    Kishida, Masako

    2017-02-01

    In many practical situations, the data on which statistical analysis is to be performed is only known with interval uncertainty. Different combinations of values from the interval data usually lead to different values of variance, covariance, and correlation. Hence, it is desirable to compute the endpoints of possible values of these statistics. This problem is, however, NP-hard in general. This paper shows that the problem of computing the endpoints of possible values of these statistics can be rewritten as the problem of computing skewed structured singular values ν, for which there exist feasible (polynomial-time) algorithms that compute reasonably tight bounds in most practical cases. This allows one to find tight intervals of the aforementioned statistics for interval data.
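For small instances, the endpoints can be approached directly. The variance is a convex quadratic in the data vector, so its maximum over the box of possible values is attained at a vertex and can be found by enumerating the 2ⁿ endpoint combinations; the minimum is approximated below by a one-dimensional scan over a common clipping level (exact polynomial-time algorithms exist for both endpoints, as the abstract notes):

```python
from itertools import product
from statistics import pvariance

# Endpoints of possible variance values for interval data (toy instance).
intervals = [(1.0, 2.0), (1.5, 3.0), (0.5, 1.0)]

# Upper endpoint: variance is convex, so its maximum over the box is at a
# vertex; enumerate all 2^n endpoint combinations.
var_max = max(pvariance(v) for v in product(*intervals))

# Lower endpoint (approximate): clip every x_i toward a common level m,
# which clusters the data as tightly as the intervals allow.
def clipped_variance(m):
    xs = [min(max(m, a), b) for a, b in intervals]
    return pvariance(xs)

lo = min(a for a, _ in intervals)
hi = max(b for _, b in intervals)
var_min = min(clipped_variance(lo + (hi - lo) * k / 1000) for k in range(1001))
```

Vertex enumeration is exponential in n, which is precisely why the NP-hardness result matters and why the paper's polynomial-time bounding via skewed structured singular values is of interest.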

  1. Electronic questionnaires for measuring parent satisfaction and as a basis for quality improvement.

    PubMed

    Ammentorp, Jette; Rasmussen, Anne Mette; Nørgaard, Betty; Kirketerp, Edel; Kofoed, Poul-Erik

    2007-04-01

    Using paper questionnaires to measure quality of care from the perspective of the patient is a time-consuming procedure resulting in very slow feedback. Response rates are low, and patients who cannot read the local language are usually excluded. To investigate the applicability of an electronic questionnaire by evaluating the response rate. To study whether computer-based continuous monitoring could elucidate reasons for parents being less satisfied with care and treatment, and to compare parent satisfaction with the results of a study performed in 2003. Parents were asked to assess the quality of care and treatment by answering questions on a touch-screen computer. The questions, which were translated into seven languages, corresponded to the indicators selected by the department for monitoring parents' satisfaction. The system was developed in cooperation with a software company. A total of 780 parents answered (69%). Of these, 2% answered in a foreign language. The main reasons for being less satisfied were perceived difficulties in getting in contact with the staff, having experienced unnecessarily long waiting times, and having the impression that the nurses did not have enough time. Significant improvements in satisfaction had occurred as compared to the study from 2003. By using electronic questionnaires, it was possible to focus on the small percentage of parents who were not satisfied, to identify reasons for being less satisfied, and to respond immediately to the feedback from the parents. Electronic surveys produce satisfactory response rates.

  2. A Computational Account of Children's Analogical Reasoning: Balancing Inhibitory Control in Working Memory and Relational Representation

    ERIC Educational Resources Information Center

    Morrison, Robert G.; Doumas, Leonidas A. A.; Richland, Lindsey E.

    2011-01-01

    Theories accounting for the development of analogical reasoning tend to emphasize either the centrality of relational knowledge accretion or changes in information processing capability. Simulations in LISA (Hummel & Holyoak, 1997, 2003), a neurally inspired computer model of analogical reasoning, allow us to explore how these factors may…

  3. Highly fault-tolerant parallel computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spielman, D.A.

    We re-introduce the coded model of fault-tolerant computation in which the input and output of a computational device are treated as words in an error-correcting code. A computational device correctly computes a function in the coded model if its input and output, once decoded, are a valid input and output of the function. In the coded model, it is reasonable to hope to simulate all computational devices by devices whose size is greater by a constant factor but which are exponentially reliable even if each of their components can fail with some constant probability. We consider fine-grained parallel computations in which each processor has a constant probability of producing the wrong output at each time step. We show that any parallel computation that runs for time t on w processors can be performed reliably on a faulty machine in the coded model using w log^O(1) w processors and time t log^O(1) w. The failure probability of the computation will be at most t * exp(-w^(1/4)). The codes used to communicate with our fault-tolerant machines are generalized Reed-Solomon codes and can thus be encoded and decoded in O(n log^O(1) n) sequential time and are independent of the machine they are used to communicate with. We also show how coded computation can be used to self-correct many linear functions in parallel with arbitrarily small overhead.

  4. Debugging embedded computer programs. [tactical missile computers

    NASA Technical Reports Server (NTRS)

    Kemp, G. H.

    1980-01-01

    Every embedded computer program must complete its debugging cycle using some system that allows real-time debugging. Many of the common items addressed during debugging are listed. Seven approaches to debugging are analyzed to evaluate how well they treat those items. Cost evaluations are also included in the comparison. The results indicate that the best collection of capabilities covering the common items in the debugging task occurs in the approach where a minicomputer handles the environment simulation, with an emulation of some kind representing the embedded computer. This approach can be taken at a reasonable cost. The case study chosen is an embedded computer in a tactical missile. Several choices of computer for the environment simulation are discussed, as well as different approaches to the embedded emulator.

  5. Scaling predictive modeling in drug development with cloud computing.

    PubMed

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets and the increased time needed for analysis are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.

  6. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berres, Anne Sabine; Adhinarayanan, Vignesh; Turton, Terece

    2017-05-12

    Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance the energy consumption and the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using regular sampling as an example. The reasons for this choice are two-fold: using a simple example reduces unnecessary complexity, as we know what to expect from the results, and it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we ran, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results produced through sampling.

  7. An efficient two-stage approach for image-based FSI analysis of atherosclerotic arteries

    PubMed Central

    Rayz, Vitaliy L.; Mofrad, Mohammad R. K.; Saloner, David

    2010-01-01

    Patient-specific biomechanical modeling of atherosclerotic arteries has the potential to aid clinicians in characterizing lesions and determining optimal treatment plans. To attain high levels of accuracy, recent models use medical imaging data to determine plaque component boundaries in three dimensions, and fluid–structure interaction is used to capture mechanical loading of the diseased vessel. As the plaque components and vessel wall are often highly complex in shape, constructing a suitable structured computational mesh is very challenging and can require a great deal of time. Models based on unstructured computational meshes require relatively less time to construct and are capable of accurately representing plaque components in three dimensions. These models unfortunately require additional computational resources and computing time for accurate and meaningful results. A two-stage modeling strategy based on unstructured computational meshes is proposed to achieve a reasonable balance between meshing difficulty and computational resource and time demand. In this method, a coarse-grained simulation of the full arterial domain is used to guide and constrain a fine-scale simulation of a smaller region of interest within the full domain. Results for a patient-specific carotid bifurcation model demonstrate that the two-stage approach can afford a large savings in both time for mesh generation and time and resources needed for computation. The effects of solid and fluid domain truncation were explored, and were shown to minimally affect accuracy of the stress fields predicted with the two-stage approach. PMID:19756798

  8. A Massively Parallel Bayesian Approach to Planetary Protection Trajectory Analysis and Design

    NASA Technical Reports Server (NTRS)

    Wallace, Mark S.

    2015-01-01

    The NASA Planetary Protection Office has levied a requirement that the upper stage of future planetary launches have a less than 10^-4 chance of impacting Mars within 50 years after launch. A brute-force approach requires a decade of computer time to demonstrate compliance. By using a Bayesian approach and taking advantage of the demonstrated reliability of the upper stage, the required number of fifty-year propagations can be massively reduced. By spreading the remaining embarrassingly parallel Monte Carlo simulations across multiple computers, compliance can be demonstrated in a reasonable time frame. The method used is described here.
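The parallelization pattern is easy to sketch. In the toy Python below, propagate_50yr is a hypothetical stand-in for a full 50-year propagation, and the bound uses the simple "rule of three" (with zero impacts in n trials, 3/n approximates a 95% upper confidence bound) rather than the report's actual Bayesian treatment:

```python
from concurrent.futures import ThreadPoolExecutor

def propagate_50yr(seed):
    # Hypothetical stand-in: a real trial would numerically integrate the
    # upper stage's trajectory for 50 years and return True on a Mars impact.
    return False

def impact_probability_bound(n_trials, workers=8):
    # Embarrassingly parallel: trials are independent, so they can be
    # spread freely across threads, processes, or machines.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        impacts = sum(pool.map(propagate_50yr, range(n_trials)))
    # "Rule of three": with 0 impacts observed in n trials, 3/n is an
    # approximate 95% upper confidence bound on the impact probability.
    bound = 3.0 / n_trials if impacts == 0 else float("nan")
    return impacts, bound

impacts, bound = impact_probability_bound(30000)  # bound = 3/30000 = 1e-4
```

With 30,000 clean trials this bound just reaches the 10^-4 requirement; a Bayesian prior over the upper stage's demonstrated reliability reduces the number of propagations needed still further.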

  9. ASPEN Version 3.0

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg; Chien, Steve; Knight, Russell; Schaffer, Steven; Tran, Daniel; Cichy, Benjamin; Sherwood, Robert

    2006-01-01

    The Automated Scheduling and Planning Environment (ASPEN) computer program has been updated to version 3.0. ASPEN is a modular, reconfigurable, application software framework for solving batch problems that involve reasoning about time, activities, states, and resources. Applications of ASPEN can include planning spacecraft missions, scheduling of personnel, and managing supply chains, inventories, and production lines. ASPEN 3.0 can be customized for a wide range of applications and for a variety of computing environments that include various central processing units and random access memories.

  10. Toward Scalable Trustworthy Computing Using the Human-Physiology-Immunity Metaphor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hively, Lee M; Sheldon, Frederick T

    The cybersecurity landscape consists of an ad hoc patchwork of solutions. Optimal cybersecurity is difficult for various reasons: complexity, immense data and processing requirements, resource-agnostic cloud computing, practical time-space-energy constraints, inherent flaws in 'Maginot Line' defenses, and the growing number and sophistication of cyberattacks. This article defines the high-priority problems and examines the potential solution space. In that space, achieving scalable trustworthy computing and communications is possible through real-time knowledge-based decisions about cyber trust. This vision is based on the human-physiology-immunity metaphor and the human brain's ability to extract knowledge from data and information. The article outlines future steps toward scalable trustworthy systems requiring a long-term commitment to solve the well-known challenges.

  11. Reducing statistical uncertainties in simulated organ doses of phantoms immersed in water

    DOE PAGES

    Hiller, Mauritius M.; Veinot, Kenneth G.; Easterly, Clay E.; ...

    2016-08-13

    In this study, methods are addressed to reduce the computational time needed to compute organ-dose rate coefficients using Monte Carlo techniques. Several variance reduction techniques are compared, including the reciprocity method, importance sampling, weight windows, and the use of the ADVANTG software package. For low-energy photons, the runtime was reduced by a factor of 10^5 when using the reciprocity method for kerma computation for immersion of a phantom in contaminated water. This is particularly significant since impractically long simulation times are required to achieve reasonable statistical uncertainties in organ dose for low-energy photons in this source medium and geometry. Although the MCNP Monte Carlo code is used in this paper, the reciprocity technique can be used equally well with other Monte Carlo codes.

  12. Computer Program for the Design and Off-Design Performance of Turbojet and Turbofan Engine Cycles

    NASA Technical Reports Server (NTRS)

    Morris, S. J.

    1978-01-01

    This rapid computer program is designed to be run in a stand-alone mode or operated within a larger program. The computation is based on a simplified one-dimensional gas turbine cycle. Each component in the engine is modeled thermodynamically. The component efficiencies used in the thermodynamic modeling are scaled for the off-design conditions from input design point values using empirical trends which are included in the computer code. The engine cycle program is capable of producing reasonable engine performance predictions with a minimum of computer execution time. The current computer execution time on the IBM 360/67 for one Mach number, one altitude, and one power setting is about 0.1 seconds. The principal assumption used in the calculation is that the compressor is operated along a line of maximum adiabatic efficiency on the compressor map. The fluid properties are computed for the combustion mixture, but dissociation is not included. The procedure included in the program is only for the combustion of JP-4, methane, or hydrogen.

  13. An experimental study of an explosively driven flat plate launcher

    NASA Astrophysics Data System (ADS)

    Rae, Philip; Haroz, Erik; Armstrong, Chris; Perry, Lee; M Division Team

    2017-06-01

    For some upcoming experiments it is desired to impact a large explosive assembly with one or more moderate-diameter flat metal plates traveling at high velocity (2-3 km/s). The times of arrival of these plates will need to be carefully controlled and delayed (i.e., the time(s) of arrival known to approximately a microsecond). For this reason, producing a flyer plate from more traditional gun assemblies is not possible. Previous researchers have demonstrated the ability to throw reasonably flat metal flyers from the so-called Forest flyer geometry. The defining characteristics of this design are a carefully controlled reduction in explosive area from a larger explosive plane-wave lens and booster pad to a smaller flyer plate, to improve the planarity of the available drive, and an air gap between the explosive booster and the plate, to reduce the peak tensile stresses generated in the plate and thereby suppress spalling. This experimental series comprised a number of different design variants and plate and explosive drive materials. The aim was to calibrate a predictive computational modeling capability on this kind of system in preparation for later, more radical design ideas best tested in a computer before undertaking the expensive business of construction.

  14. Probabilistic Reasoning for Robustness in Automated Planning

    NASA Technical Reports Server (NTRS)

    Schaffer, Steven; Clement, Bradley; Chien, Steve

    2007-01-01

    A general-purpose computer program for planning the actions of a spacecraft or other complex system has been augmented by incorporating a subprogram that reasons about uncertainties in such continuous variables as times taken to perform tasks and amounts of resources to be consumed. This subprogram computes parametric probability distributions for time and resource variables on the basis of user-supplied models of actions and the resources that they consume. The current system accepts bounded Gaussian distributions over action duration and resource use. The distributions are then combined during planning to determine the net probability distribution of each resource at any time point. In addition to a full combinatoric approach, several approximations for arriving at these combined distributions are available, including maximum-likelihood and pessimistic algorithms. Each such probability distribution can then be integrated to obtain a probability that execution of the plan under consideration would violate any constraints on the resource. The key idea is to use these probabilities of conflict to score potential plans and drive a search toward planning low-risk actions. An output plan provides a balance between the user's specified aversion to risk and other measures of optimality.
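The distribution-combination step can be sketched under a simplifying assumption: independent, unbounded Gaussians instead of the bounded Gaussians the system actually accepts, with illustrative names and numbers:

```python
import math

def combine(actions):
    # For independent Gaussian resource draws, means and variances add.
    mu = sum(m for m, _ in actions)
    var = sum(s * s for _, s in actions)
    return mu, math.sqrt(var)

def conflict_probability(actions, capacity):
    # Probability that total usage exceeds capacity under the combined normal.
    mu, sigma = combine(actions)
    z = (capacity - mu) / sigma
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

# Three actions drawing power, each N(mean, sd), against a 100 W budget.
p = conflict_probability([(30.0, 5.0), (40.0, 4.0), (20.0, 3.0)], 100.0)
# p ≈ 0.079: this conflict risk would be used to score the candidate plan.
```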

  15. Students' inductive reasoning skills and the relevance of prior knowledge: an exploratory study with a computer-based training course on the topic of acne vulgaris.

    PubMed

    Horn-Ritzinger, Sabine; Bernhardt, Johannes; Horn, Michael; Smolle, Josef

    2011-04-01

    The importance of inductive instruction in medical education is growing. Little is known about the relevance of prior knowledge to students' inductive reasoning abilities. The purpose is to evaluate this inductive teaching method as a means of fostering higher levels of learning and to explore how individual differences in prior knowledge (high [HPK] vs. low [LPK]) contribute to students' inductive reasoning skills. Twenty-six LPK and 18 HPK students could train twice with an interactive computer-based training object to discover the underlying concept before taking the final comprehension check. Students had a median of 76.9% correct answers in the first training, 90.9% in the second, and answered 92% of the final assessment questions correctly. More importantly, 86% of all students succeeded with inductive learning, among them 83% of the HPK students and 89% of the LPK students. Prior knowledge did not predict performance on overall comprehension. This inductive instructional strategy fostered students' deep approaches to learning in a time-effective way.

  16. Abstract-Reasoning Software for Coordinating Multiple Agents

    NASA Technical Reports Server (NTRS)

    Clement, Bradley; Barrett, Anthony; Rabideau, Gregg; Knight, Russell

    2003-01-01

    A computer program for scheduling the activities of multiple agents that share limited resources has been incorporated into the Automated Scheduling and Planning Environment (ASPEN) software system, aspects of which have been reported in several previous NASA Tech Briefs articles. In the original intended application, the agents would be multiple spacecraft and/or robotic vehicles engaged in scientific exploration of distant planets. The program could also be used on Earth in such diverse settings as production lines and military maneuvers. This program includes a planning/scheduling subprogram of the iterative repair type that reasons about the activities of multiple agents at abstract levels in order to greatly improve the scheduling of their use of shared resources. The program summarizes the information about the constraints on, and resource requirements of, abstract activities on the basis of the constraints and requirements that pertain to their potential refinements (decomposition into less-abstract and ultimately to primitive activities). The advantage of reasoning about summary information is that time needed to find consistent schedules is exponentially smaller than the time that would be needed for reasoning about the same tasks at the primitive level.

  17. Comparison of two computer programs by predicting turbulent mixing of helium in a ducted supersonic airstream

    NASA Technical Reports Server (NTRS)

    Pan, Y. S.; Drummond, J. P.; Mcclinton, C. R.

    1978-01-01

    Two parabolic flow computer programs, SHIP (a finite-difference program) and COMOC (a finite-element program), are used for predicting three-dimensional turbulent reacting flow fields in supersonic combustors. The theoretical foundations of the two computer programs are described, and then the programs are applied to a three-dimensional turbulent mixing experiment. The cold (nonreacting) flow experiment was performed to study the mixing of helium jets with a supersonic airstream in a rectangular duct. Surveys of the flow field at an upstream station were used as the initial data by the programs; surveys at a downstream station provided comparisons to assess program accuracy. Both computer programs predicted the experimental results and data trends reasonably well. However, the comparison between the computations from the two programs indicated that SHIP was more accurate and more efficient in both computer storage and computing time than COMOC.

  18. Viking Afterbody Heating Computations and Comparisons to Flight Data

    NASA Technical Reports Server (NTRS)

    Edquist, Karl T.; Wright, Michael J.; Allen, Gary A., Jr.

    2006-01-01

    Computational fluid dynamics predictions of Viking Lander 1 entry vehicle afterbody heating are compared to flight data. The analysis includes a derivation of heat flux from temperature data at two base cover locations, as well as a discussion of available reconstructed entry trajectories. Based on the raw temperature-time history data, convective heat flux is derived to be 0.63-1.10 W/cm2 for the aluminum base cover at the time of thermocouple failure. Peak heat flux at the fiberglass base cover thermocouple is estimated to be 0.54-0.76 W/cm2, occurring 16 seconds after peak stagnation point heat flux. Navier-Stokes computational solutions are obtained with two separate codes using an 8-species Mars gas model in chemical and thermal non-equilibrium. Flowfield solutions using local time-stepping did not result in converged heating at either thermocouple location. A global time-stepping approach improved the computational stability, but steady state heat flux was not reached for either base cover location. Both thermocouple locations lie within a separated flow region of the base cover that is likely unsteady. Heat flux computations averaged over the solution history are generally below the flight data and do not vary smoothly over time for both base cover locations. Possible reasons for the mismatch between flight data and flowfield solutions include underestimated conduction effects and limitations of the computational methods.

  20. Psychological Trauma as a Reason for Computer Game Addiction among Adolescents

    ERIC Educational Resources Information Center

    Oskenbay, Fariza; Tolegenova, Aliya; Kalymbetova, Elmira; Chung, Man Cheung; Faizullina, Aida; Jakupov, Maksat

    2016-01-01

    This study explores psychological trauma as a reason for computer game addiction among adolescents. The findings of this study show that there is a connection between psychological trauma and computer game addiction. Some psychologists note that the main cause of any type of addiction derives from psychological trauma, and that finding such…

  1. Does Computer Use Matter? The Influence of Computer Usage on Eighth-Grade Students' Mathematics Reasoning

    ERIC Educational Resources Information Center

    Ayieko, Rachel A.; Gokbel, Elif N.; Nelson, Bryan

    2017-01-01

    This study uses the 2011 Trends in International Mathematics and Science Study to investigate the relationships among students' and teachers' computer use, and eighth-grade students' mathematical reasoning in three high-achieving nations: Finland, Chinese Taipei, and Singapore. The study found a significant negative relationship in all three…

  2. Computer model to simulate testing at the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.; Owens, Lewis R., Jr.; Wahls, Richard A.; Hannon, Judith A.

    1995-01-01

    A computer model has been developed to simulate the processes involved in the operation of the National Transonic Facility (NTF), a large cryogenic wind tunnel at the Langley Research Center. The simulation was verified by comparing the simulated results with previously acquired data from three experimental wind tunnel test programs in the NTF. The comparisons suggest that the computer model simulates reasonably well the processes that determine the liquid nitrogen (LN2) consumption, electrical consumption, fan-on time, and the test time required to complete a test plan at the NTF. From these limited comparisons, it appears that the results from the simulation model are generally within about 10 percent of the actual NTF test results. The use of actual data acquisition times in the simulation produced better estimates of the LN2 usage, as expected. Additional comparisons are needed to refine the model constants. The model will typically produce optimistic results since the times and rates included in the model are typically the optimum values. Any deviation from the optimum values will lead to longer times or increased LN2 and electrical consumption for the proposed test plan. Computer code operating instructions and listings of sample input and output files have been included.

  3. 7 CFR 993.58 - Deferment of time for withholding.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., and shall be in an amount computed by multiplying the pounds of natural condition prunes for which... shall be used by the committee to purchase from handlers a quantity of natural condition prunes, up to..., with reserve pool funds for distribution to equity holders. (3) If for any reason the committee is...

  4. 7 CFR 993.58 - Deferment of time for withholding.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., and shall be in an amount computed by multiplying the pounds of natural condition prunes for which... shall be used by the committee to purchase from handlers a quantity of natural condition prunes, up to..., with reserve pool funds for distribution to equity holders. (3) If for any reason the committee is...

  5. 32 CFR 750.36 - Time limitations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... United States under the FTCA must be presented in writing within 2 years after the claim accrues. 28 U.S... or reasonably should have discovered the existence of the act giving rise to the claim. In computing...(c). (c) Suits. A civil action is barred unless suit is filed against the United States not later...

  6. Processing Conversational Implicatures: Alternatives and Counterfactual Reasoning

    ERIC Educational Resources Information Center

    Tiel, Bob; Schaeken, Walter

    2017-01-01

    In a series of experiments, Bott and Noveck (2004) found that the computation of scalar inferences, a variety of conversational implicature, caused a delay in response times. In order to determine what aspect of the inferential process that underlies scalar inferences caused this delay, we extended their paradigm to three other kinds of…

  7. Computational aerodynamics and artificial intelligence

    NASA Technical Reports Server (NTRS)

    Mehta, U. B.; Kutler, P.

    1984-01-01

    The general principles of artificial intelligence are reviewed and speculations are made concerning how knowledge based systems can accelerate the process of acquiring new knowledge in aerodynamics, how computational fluid dynamics may use expert systems, and how expert systems may speed the design and development process. In addition, the anatomy of an idealized expert system called AERODYNAMICIST is discussed. Resource requirements for using artificial intelligence in computational fluid dynamics and aerodynamics are examined. Three main conclusions are presented. First, there are two related aspects of computational aerodynamics: reasoning and calculating. Second, a substantial portion of reasoning can be achieved with artificial intelligence. It offers the opportunity of using computers as reasoning machines to set the stage for efficient calculating. Third, expert systems are likely to be new assets of institutions involved in aeronautics for various tasks of computational aerodynamics.

  8. Tool for Analysis and Reduction of Scientific Data

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    The Automated Scheduling and Planning Environment (ASPEN) computer program has been updated to version 3.0. ASPEN as a whole (up to version 2.0) has been summarized, and selected aspects of ASPEN have been discussed in several previous NASA Tech Briefs articles. Restated briefly, ASPEN is a modular, reconfigurable, application software framework for solving batch problems that involve reasoning about time, activities, states, and resources. Applications of ASPEN can include planning spacecraft missions, scheduling of personnel, and managing supply chains, inventories, and production lines. ASPEN 3.0 can be customized for a wide range of applications and for a variety of computing environments that include various central processing units and random-access memories. Domain-specific reasoning modules (e.g., modules for determining orbits for spacecraft) can easily be plugged into ASPEN 3.0. Improvements over other, similar software that have been incorporated into ASPEN 3.0 include a provision for more expressive time-line values, new parsing capabilities afforded by an ASPEN language based on Extensible Markup Language, improved search capabilities, and improved interfaces to other, utility-type software (notably including MATLAB).

  9. Great Computational Intelligence in the Formal Sciences via Analogical Reasoning

    DTIC Science & Technology

    2017-05-08

    Final performance report AFRL-AFOSR-VA-TR-2017-0099 (Selmer Bringsjord, Rensselaer Polytechnic Institute; reporting period 15 Oct 2011 to 31 Dec 2016). The computational harnessing of traditional mathematical statistics (as covered, e.g., in Hogg, Craig & McKean 2005) is used to power statistical learning techniques...

  10. Toward a Global Bundle Adjustment of SPOT 5 - HRS Images

    NASA Astrophysics Data System (ADS)

    Massera, S.; Favé, P.; Gachet, R.; Orsoni, A.

    2012-07-01

    The HRS (High Resolution Stereoscopic) instrument carried on SPOT 5 enables quasi-simultaneous acquisition of stereoscopic images on wide segments - 120 km wide - with two forward and backward-looking telescopes observing the Earth with an angle of 20° ahead and behind the vertical. For 8 years IGN (Institut Géographique National) has been developing techniques to achieve spatiotriangulation of these images. During this time the capacities of bundle adjustment of SPOT 5 - HRS spatial images have largely improved. Today a global single block composed of about 20,000 images can be computed in reasonable calculation time. The progression was achieved step by step: first computed blocks were only composed of 40 images, then bigger blocks were computed. Finally only one global block is now computed. In the same time calculation tools have improved: for example the adjustment of 2,000 images of North Africa takes about 2 minutes whereas 8 hours were needed two years ago. To reach such a result a new independent software was developed to compute fast and efficient bundle adjustments. In the same time equipment - GCPs (Ground Control Points) and tie points - and techniques have also evolved over the last 10 years. Studies were made to get recommendations about the equipment in order to make an accurate single block. Tie points can now be quickly and automatically computed with SURF (Speeded Up Robust Features) techniques. Today the updated equipment is composed of about 500 GCPs and studies show that the ideal configuration is around 100 tie points by square degree. With such an equipment, the location of the global HRS block becomes a few meters accurate whereas non adjusted images are only 15 m accurate. This paper will describe the methods used in IGN Espace to compute a global single block composed of almost 20,000 HRS images, 500 GCPs and several million of tie points in reasonable calculation time. Many advantages can be found to use such a block. 
Because the global block is unique, it becomes easier to manage the history and successive evolutions of the computations (new images, new GCPs or tie points). The location is now unique and consequently coherent all around the world, avoiding steps and artifacts on the borders of DSMs (Digital Surface Models) and OrthoImages that were historically calculated from different blocks. Extrapolation far from GCPs at the limits of images is no longer necessary. Using the global block as a reference will allow new images from other sources to be easily located against this reference.
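The role the GCPs and tie points play in such an adjustment can be illustrated with a toy least-squares problem. The sketch below is a 1-D analogue with made-up numbers, not IGN's software: each image carries a single unknown location bias, GCP observations constrain biases absolutely, and tie points constrain differences between images.

```python
# Toy 1-D analogue of a bundle-block adjustment (illustrative only; the real
# adjustment solves for full sensor orientation, not scalar biases).
# Each image i has an unknown location bias b[i].  A GCP observed in image i
# constrains b[i] directly; a tie point between images i and j constrains
# b[i] - b[j].  We solve the least-squares normal equations N b = t.

def gauss_solve(A, b):
    # Plain Gaussian elimination with partial pivoting.
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def adjust(n_images, gcp_obs, tie_obs):
    # Accumulate the normal equations for the n_images unknown biases.
    N = [[0.0] * n_images for _ in range(n_images)]
    t = [0.0] * n_images
    for i, residual in gcp_obs:          # absolute constraint: b[i] = residual
        N[i][i] += 1.0
        t[i] += residual
    for i, j, diff in tie_obs:           # relative constraint: b[i] - b[j] = diff
        N[i][i] += 1.0
        N[j][j] += 1.0
        N[i][j] -= 1.0
        N[j][i] -= 1.0
        t[i] += diff
        t[j] -= diff
    return gauss_solve(N, t)

# Image 0 sees a GCP with a 10 m residual; tie points say images 1 and 2
# sit 2 m and 3 m beyond image 0, respectively (all numbers hypothetical).
biases = adjust(3, gcp_obs=[(0, 10.0)], tie_obs=[(0, 1, -2.0), (0, 2, -3.0)])
```

The single GCP pins the absolute level while the tie points propagate it to the other images, which is why a few hundred GCPs can control a block of thousands of images.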

  11. An efficient method for computing unsteady transonic aerodynamics of swept wings with control surfaces

    NASA Technical Reports Server (NTRS)

    Liu, D. D.; Kao, Y. F.; Fung, K. Y.

    1989-01-01

A transonic equivalent strip (TES) method was further developed for unsteady flow computations of arbitrary wing planforms. The TES method consists of two consecutive correction steps applied to a given nonlinear code such as LTRAN2: a chordwise mean flow correction and a spanwise phase correction. The computation procedure requires direct pressure input from other computed or measured data; otherwise, it requires neither the airfoil shape nor grid generation for given planforms. To validate the computed results, four swept wings of various aspect ratios, including some with control surfaces, were selected as computational examples. Overall trends in unsteady pressures are established by comparison with those obtained from the XTRAN3S codes, Isogai's full potential code, and measured data from NLR and RAE. In comparison with these methods, TES achieves considerable savings in computer time with reasonable accuracy, which suggests immediate industrial applications.

  12. Temporal Constraint Reasoning With Preferences

    NASA Technical Reports Server (NTRS)

    Khatib, Lina; Morris, Paul; Morris, Robert; Rossi, Francesca

    2001-01-01

A number of reasoning problems involving the manipulation of temporal information can naturally be viewed as implicitly inducing an ordering of potential local decisions involving time (specifically, associated with durations or orderings of events) on the basis of preferences. For example, a pair of events might be constrained to occur in a certain order, and, in addition, it might be preferable that the delay between them be as large, or as small, as possible. This paper explores problems in which a set of temporal constraints is specified, where each constraint is associated with preference criteria for making local decisions about the events involved in the constraint, and a reasoner must infer a complete solution to the problem such that, to the extent possible, these local preferences are met in the best way. A constraint framework for reasoning about time is generalized to allow for preferences over event distances and durations, and we study the complexity of solving problems in the resulting formalism. It is shown that while in general such problems are NP-hard, some restrictions on the shape of the preference functions, and on the structure of the preference set, can be enforced to achieve tractability. In these cases, a simple generalization of a single-source shortest path algorithm can be used to compute a globally preferred solution in polynomial time.
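The preference-free core of this machinery is the classical Simple Temporal Problem, where single-source shortest paths already compute consistent event times. The sketch below shows only that crisp case with hypothetical constraints; the paper's contribution generalizes exactly this relaxation step to carry preference values.

```python
# Minimal Simple Temporal Problem solver (no preferences): each constraint
# t_j - t_i <= w becomes a weighted edge i -> j, and Bellman-Ford shortest
# paths from a reference event give the latest consistent times.  A negative
# cycle means the constraint set is inconsistent.

def stp_shortest_paths(n, edges, source=0):
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0.0
    for _ in range(n - 1):                 # Bellman-Ford relaxation passes
        for i, j, w in edges:
            if dist[i] + w < dist[j]:
                dist[j] = dist[i] + w
    for i, j, w in edges:                  # extra pass detects negative cycles
        if dist[i] + w < dist[j]:
            return None                    # inconsistent temporal network
    return dist

# Events: 0 = start, 1 = A, 2 = B.  A occurs 1..3 time units after start,
# B occurs 2..4 units after A (hypothetical constraints).
edges = [(0, 1, 3), (1, 0, -1),            # 1 <= tA - t0 <= 3
         (1, 2, 4), (2, 1, -2)]            # 2 <= tB - tA <= 4
dist = stp_shortest_paths(3, edges)        # latest consistent times
```

In the paper's formalism each edge additionally carries a preference function over the allowed interval, and the relaxation combines preference values instead of plain sums.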

  13. A distributed version of the NASA Engine Performance Program

    NASA Technical Reports Server (NTRS)

    Cours, Jeffrey T.; Curlett, Brian P.

    1993-01-01

Distributed NEPP, a version of the NASA Engine Performance Program, uses the original NEPP code but executes it in a distributed computing environment. Multiple workstations connected by a network increase the program's speed and, more importantly, the complexity of the cases it can handle in a reasonable time. Distributed NEPP uses the public-domain software package Parallel Virtual Machine (PVM), allowing it to execute on clusters of machines containing many different architectures. It includes the capability to link with other computers, allowing them to process NEPP jobs in parallel. This paper discusses the design issues and granularity considerations that entered into programming Distributed NEPP and presents the results of timing runs.
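The master/worker pattern that PVM enables can be sketched with the Python standard library as a stand-in. `run_case` below is a hypothetical placeholder: the real system farms the unmodified NEPP code out over the network, not a Python function.

```python
# Master/worker sketch in the spirit of Distributed NEPP's PVM setup, using
# a thread pool from the Python standard library as a stand-in for a cluster.
from concurrent.futures import ThreadPoolExecutor

def run_case(case_id):
    # Placeholder "analysis" for one engine-performance case; in the real
    # system each worker machine runs the original NEPP code on its case.
    return case_id, case_id ** 2

cases = range(8)
with ThreadPoolExecutor(max_workers=4) as pool:
    # The master hands independent cases to workers and collects results;
    # granularity (cases per dispatch) is the key tuning knob, as the paper
    # discusses.
    results = dict(pool.map(run_case, cases))
```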

  14. Numerical Simulation of Flow Through an Artificial Heart

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; Kutler, Paul; Kwak, Dochan; Kiris, Cetin

    1989-01-01

A solution procedure was developed that solves the unsteady, incompressible Navier-Stokes equations, and was used to numerically simulate viscous incompressible flow through a model of the Pennsylvania State artificial heart. The solution algorithm is based on the artificial compressibility method, and uses flux-difference splitting to upwind the convective terms; a line-relaxation scheme is used to solve the equations. The time-accuracy of the method is obtained by iteratively solving the equations at each physical time step. The artificial heart geometry involves a piston-type action with a moving solid wall. A single H-grid is fit inside the heart chamber. The grid is continuously compressed and expanded with a constant number of grid points to accommodate the moving piston. The computational domain ends at the valve openings, where nonreflective boundary conditions based on the method of characteristics are applied. Although a number of simplifying assumptions were made regarding the geometry, the computational results agreed reasonably well with an experimental picture. The computer time requirements for this flow simulation, however, are quite extensive. Computational study of this type of geometry would benefit greatly from improvements in computer hardware speed and algorithm efficiency enhancements.
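For reference, the artificial compressibility formulation mentioned here can be written in its standard textbook form (not necessarily the exact variant used in the paper): the continuity equation is augmented with a pseudo-time pressure derivative,

```latex
\frac{1}{\beta}\,\frac{\partial p}{\partial \tau} + \nabla \cdot \mathbf{u} = 0,
\qquad
\frac{\partial \mathbf{u}}{\partial \tau}
  + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
  = -\nabla p + \nu\,\nabla^{2}\mathbf{u},
```

where \(\tau\) is pseudo-time and \(\beta\) the artificial compressibility parameter. Iterating in \(\tau\) at each physical time step drives \(\partial p/\partial\tau \to 0\), recovering a divergence-free velocity field; this is the iterative subiteration procedure the abstract describes for obtaining time accuracy.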

  15. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems

    PubMed Central

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-01-01

Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above-mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values.
This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems. PMID:17081289
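The overall shape of scatter search can be sketched in a few lines. The skeleton below is a generic textbook version with a toy two-parameter objective, not the authors' implementation (which adds local search and problem-specific improvement steps); all numbers are hypothetical.

```python
# Generic scatter-search skeleton (illustrative only).  Minimizes a toy
# two-parameter "calibration" objective with optimum at (2, -1).
import random

random.seed(1)

def objective(x):
    # Hypothetical model-calibration error surface.
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

def scatter_search(dim=2, lo=-5.0, hi=5.0, ref_size=6, iters=40):
    # 1. Diversification: random initial population; best members form the
    #    reference set (RefSet).
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(30)]
    ref = sorted(pop, key=objective)[:ref_size]
    for _ in range(iters):
        # 2. Combination: midpoints of RefSet pairs (a simple linear combiner).
        trials = [[(a[k] + b[k]) / 2.0 for k in range(dim)]
                  for i, a in enumerate(ref) for b in ref[i + 1:]]
        # 3. RefSet update: keep the best ref_size solutions seen so far,
        #    so the incumbent never gets worse.
        ref = sorted(ref + trials, key=objective)[:ref_size]
    return ref[0]

best = scatter_search()
```

Real scatter-search variants also maintain diverse (not just high-quality) RefSet members and restart the diversification step, which is what keeps the method global rather than purely contractive.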

  16. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems.

    PubMed

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-11-02

We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above-mentioned) successful methods. Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems.

  17. Probabilistic Reasoning for Plan Robustness

    NASA Technical Reports Server (NTRS)

    Schaffer, Steve R.; Clement, Bradley J.; Chien, Steve A.

    2005-01-01

A planning system must reason about the uncertainty of continuous variables in order to accurately project the possible system state over time. A method is devised for directly reasoning about the uncertainty in continuous activity duration and resource usage for planning problems. By representing random variables as parametric distributions, computing the projected system state can be simplified in some cases. Common approximation methods and novel methods are compared for an iterative repair planner on over-constrained and lightly constrained domains. Results show improvements in robustness over the conventional non-probabilistic representation, reducing the number of constraint violations witnessed during execution. The improvement is more significant for larger problems and problems with higher resource subscription levels, but diminishes as the system is allowed to accept higher risk levels.
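One reason parametric distributions simplify projection: for independent Gaussian activity durations, the end time of a sequence is itself Gaussian with summed means and variances, so a violation probability can be read from the normal CDF instead of being sampled. The sketch below uses hypothetical numbers and is not the paper's planner.

```python
# Parametric projection sketch: Gaussian durations compose analytically.
from math import erf, sqrt

def seq_end(durations):
    # durations: list of (mean, variance) pairs for activities in sequence.
    mu = sum(m for m, v in durations)
    var = sum(v for m, v in durations)
    return mu, var

def prob_exceeds(mu, var, deadline):
    # P(end time > deadline) for a Gaussian end-time distribution.
    z = (deadline - mu) / sqrt(var)
    return 1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Two activities in sequence (hypothetical means/variances), deadline at 20.
mu, var = seq_end([(10.0, 4.0), (5.0, 1.0)])
risk = prob_exceeds(mu, var, deadline=20.0)
```

A repair planner can then compare `risk` against an accepted risk level when deciding whether a schedule needs further repair.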

  18. The Comparison of Inductive Reasoning under Risk Conditions between Chinese and Japanese Based on Computational Models: Toward the Application to CAE for Foreign Language

    ERIC Educational Resources Information Center

    Zhang, Yujie; Terai, Asuka; Nakagawa, Masanori

    2013-01-01

    Inductive reasoning under risk conditions is an important thinking process not only for sciences but also in our daily life. From this viewpoint, it is very useful for language learning to construct computational models of inductive reasoning which realize the CAE for foreign languages. This study proposes the comparison of inductive reasoning…

  19. Knowledge Representation and Ontologies

    NASA Astrophysics Data System (ADS)

    Grimm, Stephan

Knowledge representation and reasoning aims at designing computer systems that reason about a machine-interpretable representation of the world. Knowledge-based systems have a computational model of some domain of interest in which symbols serve as surrogates for real world domain artefacts, such as physical objects, events, relationships, etc. [1]. The domain of interest can cover any part of the real world or any hypothetical system about which one desires to represent knowledge for computational purposes. A knowledge-based system maintains a knowledge base, which stores the symbols of the computational model in the form of statements about the domain, and it performs reasoning by manipulating these symbols. Applications can base their decisions on answers to domain-relevant questions posed to a knowledge base.

  20. COLA: Optimizing Stream Processing Applications via Graph Partitioning

    NASA Astrophysics Data System (ADS)

    Khandekar, Rohit; Hildrum, Kirsten; Parekh, Sujay; Rajan, Deepak; Wolf, Joel; Wu, Kun-Lung; Andrade, Henrique; Gedik, Buğra

    In this paper, we describe an optimization scheme for fusing compile-time operators into reasonably-sized run-time software units called processing elements (PEs). Such PEs are the basic deployable units in System S, a highly scalable distributed stream processing middleware system. Finding a high quality fusion significantly benefits the performance of streaming jobs. In order to maximize throughput, our solution approach attempts to minimize the processing cost associated with inter-PE stream traffic while simultaneously balancing load across the processing hosts. Our algorithm computes a hierarchical partitioning of the operator graph based on a minimum-ratio cut subroutine. We also incorporate several fusion constraints in order to support real-world System S jobs. We experimentally compare our algorithm with several other reasonable alternative schemes, highlighting the effectiveness of our approach.
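The minimum-ratio-cut subroutine can be illustrated on a toy operator graph. The brute-force sketch below uses one common ratio-cut definition (cut weight over the product of side sizes) and hypothetical stream rates; COLA itself uses a scalable hierarchical heuristic and a cost model tied to processing costs, not enumeration.

```python
# Brute-force minimum-ratio cut on a tiny operator graph (illustrative only).
from itertools import combinations

edges = {                       # hypothetical operator pairs -> stream rate
    ("src", "parse"): 10.0,
    ("parse", "filter"): 8.0,
    ("filter", "join"): 1.0,
    ("join", "sink"): 9.0,
}
nodes = sorted({u for e in edges for u in e})

def ratio_cut(side):
    # Cut weight divided by the product of the two side sizes: cheap cuts
    # that also balance the partition score best.
    side = set(side)
    cut = sum(w for (u, v), w in edges.items() if (u in side) != (v in side))
    return cut / (len(side) * (len(nodes) - len(side)))

best = min((frozenset(s) for r in range(1, len(nodes))
            for s in combinations(nodes, r)), key=ratio_cut)
```

The best cut isolates the low-traffic `filter -> join` edge, mirroring the paper's goal of keeping heavy streams inside a PE and cutting only light inter-PE traffic.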

  1. Soft Computing Techniques for the Protein Folding Problem on High Performance Computing Architectures.

    PubMed

    Llanes, Antonio; Muñoz, Andrés; Bueno-Crespo, Andrés; García-Valverde, Teresa; Sánchez, Antonia; Arcas-Túnez, Francisco; Pérez-Sánchez, Horacio; Cecilia, José M

    2016-01-01

The protein-folding problem has been extensively studied during the last fifty years. Understanding the dynamics of the global shape of a protein and its influence on biological function can help us to discover new and more effective drugs for diseases of pharmacological relevance. Different computational approaches have been developed by different researchers to predict the three-dimensional arrangement of the atoms of proteins from their sequences. However, the computational complexity of this problem makes the search for new models, novel algorithmic strategies, and hardware platforms that provide solutions in a reasonable time frame mandatory. In this review we present past and current trends in protein folding simulations from both perspectives: hardware and software. Of particular interest to us are the use of inexact solutions to this computationally hard problem as well as the hardware platforms that have been used for running this kind of soft computing techniques.

  2. Data Processing Aspects of MEDLARS

    PubMed Central

    Austin, Charles J.

    1964-01-01

    The speed and volume requirements of MEDLARS necessitate the use of high-speed data processing equipment, including paper-tape typewriters, a digital computer, and a special device for producing photo-composed output. Input to the system is of three types: variable source data, including citations from the literature and search requests; changes to such master files as the medical subject headings list and the journal record file; and operating instructions such as computer programs and procedures for machine operators. MEDLARS builds two major stores of data on magnetic tape. The Processed Citation File includes bibliographic citations in expanded form for high-quality printing at periodic intervals. The Compressed Citation File is a coded, time-sequential citation store which is used for high-speed searching against demand request input. Major design considerations include converting variable-length, alphanumeric data to mechanical form quickly and accurately; serial searching by the computer within a reasonable period of time; high-speed printing that must be of graphic quality; and efficient maintenance of various complex computer files. PMID:14119287

  3. DATA PROCESSING ASPECTS OF MEDLARS.

    PubMed

    AUSTIN, C J

    1964-01-01

    The speed and volume requirements of MEDLARS necessitate the use of high-speed data processing equipment, including paper-tape typewriters, a digital computer, and a special device for producing photo-composed output. Input to the system is of three types: variable source data, including citations from the literature and search requests; changes to such master files as the medical subject headings list and the journal record file; and operating instructions such as computer programs and procedures for machine operators. MEDLARS builds two major stores of data on magnetic tape. The Processed Citation File includes bibliographic citations in expanded form for high-quality printing at periodic intervals. The Compressed Citation File is a coded, time-sequential citation store which is used for high-speed searching against demand request input. Major design considerations include converting variable-length, alphanumeric data to mechanical form quickly and accurately; serial searching by the computer within a reasonable period of time; high-speed printing that must be of graphic quality; and efficient maintenance of various complex computer files.

  4. Use of a database for managing qualitative research data.

    PubMed

    Ross, B A

    1994-01-01

    In this article, a process for handling text data in qualitative research projects by using existing word-processing and database programs is described. When qualitative data are managed using this method, the information is more readily available and the coding and organization of the data are enhanced. Furthermore, the narrative always remains intact regardless of how it is arranged or re-arranged, and there is a concomitant time savings and increased accuracy. The author hopes that this article will inspire some readers to explore additional methods and processes for computer-aided, nonstatistical data management. The study referred to in this article (Ross, 1991) was a qualitative research project which sought to find out how teaching faculty in nursing and education used computers in their professional work. Ajzen and Fishbein's (1980) Theory of Reasoned Action formed the theoretical basis for this work. This theory proposes that behavior, in this study the use of computers, is the result of intentions and that intentions are the result of attitudes and social norms. The study found that although computer use was sometimes the result of attitudes, more often it seemed to be the result of subjective (perceived) norms or intervening variables. Teaching faculty apparently did not initially make reasoned judgments about the computers or the programs they used, but chose to use whatever was required or available.

  5. Water resources of Borrego Valley and vicinity, San Diego County, California; Phase 2, Development of a ground-water flow model

    USGS Publications Warehouse

    Mitten, H.T.; Lines, G.C.; Berenbrock, Charles; Durbin, T.J.

    1988-01-01

Because of the imbalance between recharge and pumpage, groundwater levels declined by as much as 100 ft in some areas of Borrego Valley, California, during 1945-80. As an aid to analyzing the effects of pumping on the groundwater system, a three-dimensional finite-element groundwater flow model was developed. The model was calibrated for both steady-state (1945) and transient-state (1946-79) conditions. For the steady-state calibration, hydraulic conductivities of the three aquifers were varied within reasonable limits to obtain an acceptable match between measured and computed hydraulic heads. Recharge from streamflow infiltration (4,800 acre-ft/yr) was balanced by computed evapotranspiration (3,900 acre-ft/yr) and computed subsurface outflow from the model area (930 acre-ft/yr). For the transient-state calibration, the volumes and distribution of net groundwater pumpage were estimated from land-use data and estimates of consumptive use for irrigated crops. The pumpage was assigned to the appropriate nodes in the model for each of seventeen 2-year time steps representing the period 1946-79. The specific yields of the three aquifers were varied within reasonable limits to obtain an acceptable match between measured and computed hydraulic heads. Groundwater pumpage input to the model was compensated by declines in both the computed evapotranspiration and the amount of groundwater in storage. (USGS)

  6. A computer program for estimating instream travel times and concentrations of a potential contaminant in the Yellowstone River, Montana

    USGS Publications Warehouse

    McCarthy, Peter M.

    2006-01-01

    The Yellowstone River is very important in a variety of ways to the residents of southeastern Montana; however, it is especially vulnerable to spilled contaminants. In 2004, the U.S. Geological Survey, in cooperation with Montana Department of Environmental Quality, initiated a study to develop a computer program to rapidly estimate instream travel times and concentrations of a potential contaminant in the Yellowstone River using regression equations developed in 1999 by the U.S. Geological Survey. The purpose of this report is to describe these equations and their limitations, describe the development of a computer program to apply the equations to the Yellowstone River, and provide detailed instructions on how to use the program. This program is available online at [http://pubs.water.usgs.gov/sir2006-5057/includes/ytot.xls]. The regression equations provide estimates of instream travel times and concentrations in rivers where little or no contaminant-transport data are available. Equations were developed and presented for the most probable flow velocity and the maximum probable flow velocity. These velocity estimates can then be used to calculate instream travel times and concentrations of a potential contaminant. The computer program was developed so estimation equations for instream travel times and concentrations can be solved quickly for sites along the Yellowstone River between Corwin Springs and Sidney, Montana. The basic types of data needed to run the program are spill data, streamflow data, and data for locations of interest along the Yellowstone River. Data output from the program includes spill location, river mileage at specified locations, instantaneous discharge, mean-annual discharge, drainage area, and channel slope. Travel times and concentrations are provided for estimates of the most probable velocity of the peak concentration and the maximum probable velocity of the peak concentration. 
Verification of estimates of instream travel times and concentrations for the Yellowstone River requires information about the flow velocity throughout the 520 mi of river in the study area. Dye-tracer studies would provide the best data about flow velocities and the best verification of instream travel times and concentrations estimated from this computer program; however, data from such studies do not currently (2006) exist, and new studies would be expensive and time-consuming. An alternative approach used in this study for verification of instream travel times is based on flood-wave velocities determined from recorded streamflow hydrographs at selected mainstem streamflow-gaging stations along the Yellowstone River. The ratios of flood-wave velocity to the most probable velocity estimated from the computer program for base flow are within the accepted range of 2.5 to 4.0, indicating that the flow velocities estimated from the computer program are reasonable for the Yellowstone River. The ratios of flood-wave velocity to the maximum probable velocity are within a range of 1.9 to 2.8, indicating that the maximum probable flow velocities estimated from the computer program, which correspond to the shortest travel times and maximum probable concentrations, are conservative and reasonable for the Yellowstone River.
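The verification idea above reduces to simple arithmetic: estimate a travel time from a velocity, and check that the flood-wave-to-flow-velocity ratio falls in the accepted band. The numbers below are hypothetical, not values from the Yellowstone River program, and the real program uses USGS regression equations not reproduced here.

```python
# Back-of-envelope travel-time and ratio check (hypothetical values).
MI_TO_FT = 5280.0

def travel_time_hours(distance_mi, velocity_ft_per_s):
    # Time for the peak to traverse a reach at a constant velocity.
    return distance_mi * MI_TO_FT / velocity_ft_per_s / 3600.0

v_most_probable = 2.0       # ft/s, hypothetical estimate for a reach
v_flood_wave = 6.0          # ft/s, from gaging-station hydrographs
ratio = v_flood_wave / v_most_probable
plausible = 2.5 <= ratio <= 4.0    # the report's accepted range

t = travel_time_hours(52.0, v_most_probable)   # hours for a 52-mile reach
```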

  7. Computations on Wings With Full-Span Oscillating Control Surfaces Using Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2013-01-01

A dual-level parallel procedure is presented for computing large databases to support aerospace vehicle design. The procedure has been developed as a single Unix script within the Parallel Batch Submission environment, utilizes MPIexec, and runs MPI-based analysis software. It gives aerospace designers a process for generating data for large numbers of cases with the highest possible fidelity in reasonable wall-clock time. A single job-submission environment avoids the need to keep track of multiple jobs and the associated system administration overhead. The process has been demonstrated by computing large databases for the design of typical aerospace configurations: a launch vehicle and a rotorcraft.

  8. Computer versus paper--does it make any difference in test performance?

    PubMed

    Karay, Yassin; Schauber, Stefan K; Stosch, Christoph; Schüttpelz-Brauns, Katrin

    2015-01-01

CONSTRUCT: In this study, we examine the differences in test performance between the paper-based and the computer-based version of the Berlin formative Progress Test. In this context it is the first study that allows controlling for students' prior performance. Computer-based tests permit a more efficient examination procedure for test administration and review. Although university staff will benefit greatly from computer-based tests, the question arises whether computer-based tests influence students' test performance. A total of 266 German students from the 9th and 10th semester of medicine (comparable with the 4th-year North American medical school schedule) participated in the study (paper = 132, computer = 134). The allocation of the test format was conducted as a randomized matched-pair design in which students were first sorted according to their prior test results. The organizational procedure, the examination conditions, the room and seating arrangements, as well as the order of questions and answers, were identical in both groups. The sociodemographic variables and pretest scores of both groups were comparable. The test results from the paper and computer versions did not differ. Both groups remained within the allotted time, but students using the computer version (particularly the high performers) needed significantly less time to complete the test. In addition, we found significant differences in guessing behavior: low performers using the computer version guessed significantly more than low-performing students using the paper-pencil version. Participants in computer-based tests are not at a disadvantage in terms of their test results. The computer-based test required less processing time. The longer processing time for the paper-pencil version might be due to the time needed to write the answer down and to check that it was transferred correctly.
It is still not known why students using the computer version (particularly low-performing students) guess at a higher rate. Further studies are necessary to understand this finding.

  9. Visual perception can account for the close relation between numerosity processing and computational fluency.

    PubMed

    Zhou, Xinlin; Wei, Wei; Zhang, Yiyun; Cui, Jiaxin; Chen, Chuansheng

    2015-01-01

Studies have shown that numerosity processing (e.g., comparison of numbers of dots in two dot arrays) is significantly correlated with arithmetic performance. Researchers have attributed this association to the fact that both tasks share magnitude processing. The current investigation tested an alternative hypothesis, which states that visual perceptual ability (as measured by a figure-matching task) can account for the close relation between numerosity processing and arithmetic performance (computational fluency). Four hundred and twenty-four third- to fifth-grade children (220 boys and 204 girls, 8.0-11.0 years old; 120 third graders, 146 fourth graders, and 158 fifth graders) were recruited from two schools (one urban and one suburban) in Beijing, China. Six classes were randomly selected from each school, and all students in each selected class participated in the study. All children were given a series of cognitive and mathematical tests, including numerosity comparison, figure matching, forward verbal working memory, visual tracing, non-verbal matrices reasoning, mental rotation, choice reaction time, arithmetic tests and a curriculum-based mathematical achievement test. Results showed that figure-matching ability had higher correlations with numerosity processing and computational fluency than did other cognitive factors (e.g., forward verbal working memory, visual tracing, non-verbal matrix reasoning, mental rotation, and choice reaction time). More importantly, hierarchical multiple regression showed that figure-matching ability accounted for the well-established association between numerosity processing and computational fluency. In support of the visual perception hypothesis, the results suggest that visual perceptual ability, rather than magnitude processing, may be the shared component of numerosity processing and arithmetic performance.
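The statistical logic of "accounting for" an association can be illustrated with a partial correlation: if the numerosity-fluency correlation shrinks toward zero once figure matching is controlled for, the shared variance runs through figure matching. The correlation values below are hypothetical, not the study's estimates.

```python
# Partial-correlation sketch of the "accounted for" claim (hypothetical r's).
import math

def partial_corr(r_xy, r_xz, r_zy):
    # Correlation of x and y after removing the variance each shares with z.
    return (r_xy - r_xz * r_zy) / math.sqrt((1 - r_xz ** 2) * (1 - r_zy ** 2))

r_numerosity_fluency = 0.40   # zero-order association (hypothetical)
r_numerosity_figure = 0.60    # numerosity ~ figure matching (hypothetical)
r_figure_fluency = 0.65       # figure matching ~ fluency (hypothetical)

r_partial = partial_corr(r_numerosity_fluency,
                         r_numerosity_figure,
                         r_figure_fluency)
```

With these illustrative values the partial correlation collapses to near zero, the pattern that hierarchical regression formalizes by showing no significant R-squared increment for numerosity after figure matching is entered.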

  10. Visual perception can account for the close relation between numerosity processing and computational fluency

    PubMed Central

    Zhou, Xinlin; Wei, Wei; Zhang, Yiyun; Cui, Jiaxin; Chen, Chuansheng

    2015-01-01

Studies have shown that numerosity processing (e.g., comparison of numbers of dots in two dot arrays) is significantly correlated with arithmetic performance. Researchers have attributed this association to the fact that both tasks share magnitude processing. The current investigation tested an alternative hypothesis, which states that visual perceptual ability (as measured by a figure-matching task) can account for the close relation between numerosity processing and arithmetic performance (computational fluency). Four hundred and twenty-four third- to fifth-grade children (220 boys and 204 girls, 8.0–11.0 years old; 120 third graders, 146 fourth graders, and 158 fifth graders) were recruited from two schools (one urban and one suburban) in Beijing, China. Six classes were randomly selected from each school, and all students in each selected class participated in the study. All children were given a series of cognitive and mathematical tests, including numerosity comparison, figure matching, forward verbal working memory, visual tracing, non-verbal matrices reasoning, mental rotation, choice reaction time, arithmetic tests and a curriculum-based mathematical achievement test. Results showed that figure-matching ability had higher correlations with numerosity processing and computational fluency than did other cognitive factors (e.g., forward verbal working memory, visual tracing, non-verbal matrix reasoning, mental rotation, and choice reaction time). More importantly, hierarchical multiple regression showed that figure-matching ability accounted for the well-established association between numerosity processing and computational fluency. In support of the visual perception hypothesis, the results suggest that visual perceptual ability, rather than magnitude processing, may be the shared component of numerosity processing and arithmetic performance. PMID:26441740

  11. Effect of element size on the solution accuracies of finite-element heat transfer and thermal stress analyses of space shuttle orbiter

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Olona, Timothy

    1987-01-01

    The effect of element size on the solution accuracies of finite-element heat transfer and thermal stress analyses of space shuttle orbiter was investigated. Several structural performance and resizing (SPAR) thermal models and NASA structural analysis (NASTRAN) structural models were set up for the orbiter wing midspan bay 3. The thermal model was found to be the one that determines the limit of finite-element fineness because of the limitation of computational core space required for the radiation view factor calculations. The thermal stresses were found to be extremely sensitive to a slight variation of structural temperature distributions. The minimum degree of element fineness required for the thermal model to yield reasonably accurate solutions was established. The radiation view factor computation time was found to be insignificant compared with the total computer time required for the SPAR transient heat transfer analysis.

  12. An acceleration framework for synthetic aperture radar algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youngsoo; Gloster, Clay S.; Alexander, Winser E.

    2017-04-01

    Algorithms for radar signal processing, such as Synthetic Aperture Radar (SAR), are computationally intensive and require considerable execution time on a general purpose processor. Reconfigurable logic can be used to off-load the primary computational kernel onto a custom computing machine in order to reduce execution time by an order of magnitude as compared to kernel execution on a general purpose processor. Specifically, Field Programmable Gate Arrays (FPGAs) can be used to accelerate these kernels using hardware-based custom logic implementations. In this paper, we demonstrate a framework for algorithm acceleration. We used SAR as a case study to illustrate the potential for algorithm acceleration offered by FPGAs. Initially, we profiled the SAR algorithm and implemented a homomorphic filter using a hardware implementation of the natural logarithm. Experimental results show a linear speedup from adding reasonably small processing elements in the FPGA as opposed to using a software implementation running on a typical general purpose processor.
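    The abstract accelerates the logarithm stage of a homomorphic filter in hardware. As a minimal sketch of the signal flow only (log to make multiplicative components additive, filter, exponentiate back), here is a pure-Python version; the one-pole high-pass coefficient `alpha` is an illustrative assumption, not taken from the paper:

```python
import math

def homomorphic_filter(signal, alpha=0.7):
    """Illustrative homomorphic filter: log -> high-pass -> exp.
    The paper's FPGA design accelerates the natural-log stage in
    hardware; this sketch only shows the overall signal flow."""
    logged = [math.log(x) for x in signal]       # multiplicative -> additive
    # one-pole high-pass filter in the log domain (alpha is assumed)
    filtered, prev_in, prev_out = [], logged[0], 0.0
    for x in logged:
        y = alpha * (prev_out + x - prev_in)
        filtered.append(y)
        prev_in, prev_out = x, y
    return [math.exp(y) for y in filtered]       # back to the linear domain

out = homomorphic_filter([1.0, 2.0, 4.0, 8.0])
```

In an FPGA realization, the `math.log` call is the stage replaced by a hardware logarithm unit; the rest of the pipeline is unchanged.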

  13. DualSPHysics: A numerical tool to simulate real breakwaters

    NASA Astrophysics Data System (ADS)

    Zhang, Feng; Crespo, Alejandro; Altomare, Corrado; Domínguez, José; Marzeddu, Andrea; Shang, Shao-ping; Gómez-Gesteira, Moncho

    2018-02-01

    The open-source code DualSPHysics is used in this work to compute the wave run-up on an existing dike on the Chinese coast using realistic dimensions, bathymetry and wave conditions. The GPU computing power of DualSPHysics allows simulating real engineering problems that involve complex geometries with a high resolution in a reasonable computational time. The code is first validated by comparing the numerical free-surface elevation, the wave orbital velocities and the time series of the run-up with physical data in a wave flume. Those experiments include a smooth dike and an armored dike with two layers of cubic blocks. After validation, the code is applied to a real case to obtain the wave run-up under different incident wave conditions. In order to simulate the real open sea, the spurious reflections from the wavemaker are removed by using an active wave absorption technique.

  14. A new system of computer-assisted navigation leading to reduction in operating time in uncemented total hip replacement in a matched population.

    PubMed

    Chaudhry, Fouad A; Ismail, Sanaa Z; Davis, Edward T

    2018-05-01

    Computer-assisted navigation techniques are used to optimise component placement and alignment in total hip replacement. The technology has developed over the last 10 years, but despite its advantages only 0.3% of all total hip replacements in England and Wales are done using computer navigation. One of the reasons for this is that computer-assisted technology increases operative time. A new method of pelvic registration has been developed without the need to register the anterior pelvic plane (BrainLab hip 6.0), which has been shown to improve the accuracy of THR. The purpose of this study was to find out if the new method reduces the operating time. This was a retrospective analysis comparing operating time in computer navigated primary uncemented total hip replacement using two methods of registration. Group 1 included 128 cases that were performed using BrainLab versions 2.1-5.1. This version relied on the acquisition of the anterior pelvic plane for registration. Group 2 included 128 cases that were performed using the newest navigation software, BrainLab hip 6.0 (registration possible with the patient in the lateral decubitus position). The operating time was 65.79 (40-98) minutes using the old method of registration and 50.87 (33-74) minutes using the new method of registration. This difference was statistically significant. The body mass index (BMI) was comparable in both groups. The study supports the use of the new method of registration in improving the operating time in computer navigated primary uncemented total hip replacements.

  15. Information processing in echo state networks at the edge of chaos.

    PubMed

    Boedecker, Joschka; Obst, Oliver; Lizier, Joseph T; Mayer, N Michael; Asada, Minoru

    2012-09-01

    We investigate information processing in randomly connected recurrent neural networks. It has been shown previously that the computational capabilities of these networks are maximized when the recurrent layer is close to the border between a stable and an unstable dynamics regime, the so called edge of chaos. The reasons, however, for this maximized performance are not completely understood. We adopt an information-theoretical framework and are for the first time able to quantify the computational capabilities between elements of these networks directly as they undergo the phase transition to chaos. Specifically, we present evidence that both information transfer and storage in the recurrent layer are maximized close to this phase transition, providing an explanation for why guiding the recurrent layer toward the edge of chaos is computationally useful. As a consequence, our study suggests self-organized ways of improving performance in recurrent neural networks, driven by input data. Moreover, the networks we study share important features with biological systems such as feedback connections and online computation on input streams. A key example is the cerebral cortex, which was shown to also operate close to the edge of chaos. Consequently, the behavior of model systems as studied here is likely to shed light on reasons why biological systems are tuned into this specific regime.
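    The "edge of chaos" in echo state networks is usually approached by rescaling the recurrent weight matrix so that its spectral radius sits just below 1. As a hedged sketch of that standard construction (network sizes, seeds, and input scaling here are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n=100, spectral_radius=0.95):
    """Random recurrent weight matrix rescaled so that its largest
    eigenvalue magnitude equals `spectral_radius`; values near 1.0
    place the reservoir close to the stability boundary ("edge of
    chaos") discussed in the abstract."""
    W = rng.standard_normal((n, n)) / np.sqrt(n)
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W

def run_reservoir(W, inputs, w_in_scale=0.5):
    """Drive the reservoir with a 1-D input stream using tanh units
    and collect the state trajectory."""
    n = W.shape[0]
    w_in = w_in_scale * rng.standard_normal(n)
    x = np.zeros(n)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + w_in * u)
        states.append(x.copy())
    return np.array(states)
```

The information-transfer and storage measures studied in the paper would then be computed on the collected `states` as the spectral radius is swept toward 1.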

  16. Efficient mapping algorithms for scheduling robot inverse dynamics computation on a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Chen, C. L.

    1989-01-01

    Two efficient mapping algorithms for scheduling the robot inverse dynamics computation consisting of m computational modules with precedence relationship to be executed on a multiprocessor system consisting of p identical homogeneous processors with processor and communication costs to achieve minimum computation time are presented. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. The minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and the scheduling problems; both have been known to be NP-complete. Thus, to speed up the searching for a solution, two heuristic algorithms were proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by the heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and the validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
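    The second of the paper's heuristics uses simulated annealing to search module-to-processor assignments. A stripped-down sketch of that idea, minimizing only the maximum processor finishing time (it omits the precedence constraints and interprocessor communication costs the paper's algorithms handle; all parameter values are illustrative):

```python
import math
import random

def anneal_mapping(durations, n_procs, steps=2000, t0=1.0, seed=1):
    """Simulated-annealing sketch for mapping task modules onto
    processors so as to minimize the makespan (maximum processor
    finishing time). Precedence and communication costs omitted."""
    rnd = random.Random(seed)
    assign = [rnd.randrange(n_procs) for _ in durations]

    def makespan(a):
        loads = [0.0] * n_procs
        for task, proc in enumerate(a):
            loads[proc] += durations[task]
        return max(loads)

    cost = makespan(assign)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9     # linear cooling schedule
        task = rnd.randrange(len(durations))
        old = assign[task]
        assign[task] = rnd.randrange(n_procs)  # propose a random move
        new_cost = makespan(assign)
        # accept improvements always, uphill moves with Boltzmann prob.
        if new_cost <= cost or rnd.random() < math.exp(-(new_cost - cost) / t):
            cost = new_cost
        else:
            assign[task] = old                 # reject: undo the move
    return assign, cost
```

The paper's first heuristic would replace the random initial assignment with a priority list ordered by module level and communication intensity.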

  17. A transient response analysis of the space shuttle vehicle during liftoff

    NASA Technical Reports Server (NTRS)

    Brunty, J. A.

    1990-01-01

    A proposed transient response method is formulated for the liftoff analysis of the space shuttle vehicles. It uses a power series approximation with unknown coefficients for the interface forces between the space shuttle and mobile launch platform. This allows the equations of motion of the two structures to be solved separately with the unknown coefficients at the end of each step. These coefficients are obtained by enforcing the interface compatibility conditions between the two structures. Once the unknown coefficients are determined, the total response is computed for that time step. The method is validated by a numerical example of a cantilevered beam and by the liftoff analysis of the space shuttle vehicles. The proposed method is compared to an iterative transient response analysis method used by Martin Marietta for their space shuttle liftoff analysis. It is shown that the proposed method uses less computer time than the iterative method and does not require as small a time step for integration. The space shuttle vehicle model is reduced using two different types of component mode synthesis (CMS) methods, the Lanczos method and the Craig and Bampton CMS method. By varying the cutoff frequency in the Craig and Bampton method, it was shown that the space shuttle interface loads can be computed with reasonable accuracy. Both the Lanczos CMS method and the Craig and Bampton CMS method give similar results. A substantial amount of computer time is saved using the Lanczos CMS method over the Craig and Bampton method. However, when trying to compute a large number of Lanczos vectors, input/output time increased, which increased the overall computer time. The application of several liftoff release mechanisms that can be adapted to the proposed method is discussed.

  18. The engine design engine. A clustered computer platform for the aerodynamic inverse design and analysis of a full engine

    NASA Technical Reports Server (NTRS)

    Sanz, J.; Pischel, K.; Hubler, D.

    1992-01-01

    An application for parallel computation on a combined cluster of powerful workstations and supercomputers was developed. Parallel Virtual Machine (PVM) is used as the message-passing layer in a macro-tasking parallelization of the Aerodynamic Inverse Design and Analysis for a Full Engine computer code. The heterogeneous nature of the cluster is handled entirely by the controlling host machine. Communication is established via Ethernet with the TCP/IP protocol over an open network. A reasonable overhead is imposed for internode communication, rendering an efficient utilization of the engaged processors. Perhaps one of the most interesting features of the system is its versatility, which permits use of whichever available computational resources are least utilized at a given point in time.

  19. An M-estimator for reduced-rank system identification.

    PubMed

    Chen, Shaojie; Liu, Kai; Yang, Yuguang; Xu, Yuting; Lee, Seonjoo; Lindquist, Martin; Caffo, Brian S; Vogelstein, Joshua T

    2017-01-15

    High-dimensional time-series data from a wide variety of domains, such as neuroscience, are being generated every day. Fitting statistical models to such data, to enable parameter estimation and time-series prediction, is an important computational primitive. Existing methods, however, are unable to cope with the high-dimensional nature of these data, due to both computational and statistical reasons. We mitigate both kinds of issues by proposing an M-estimator for Reduced-rank System IDentification (MR. SID). A combination of low-rank approximations, ℓ1 and ℓ2 penalties, and some numerical linear algebra tricks, yields an estimator that is computationally efficient and numerically stable. Simulations and real data examples demonstrate the usefulness of this approach in a variety of problems. In particular, we demonstrate that MR. SID can accurately estimate spatial filters, connectivity graphs, and time-courses from native resolution functional magnetic resonance imaging data. MR. SID therefore enables big time-series data to be analyzed using standard methods, readying the field for further generalizations including non-linear and non-Gaussian state-space models.
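    The core of a reduced-rank system-identification estimator can be illustrated with a much simpler stand-in: fit the linear dynamics x_{t+1} ≈ A x_t by least squares, then truncate A to low rank via SVD. This sketch omits the ℓ1/ℓ2 penalties and the full state-space structure of MR. SID; the function name and shapes are illustrative assumptions:

```python
import numpy as np

def reduced_rank_fit(X, rank):
    """Low-rank linear-dynamics fit, a simplified stand-in for the
    reduced-rank idea: estimate A in x_{t+1} ~ A x_t by least
    squares, then keep only the top `rank` singular modes of A.
    X has one column per time point."""
    past, future = X[:, :-1], X[:, 1:]
    # least-squares solve of past.T @ A.T = future.T
    A, *_ = np.linalg.lstsq(past.T, future.T, rcond=None)
    A = A.T
    U, s, Vt = np.linalg.svd(A)
    s[rank:] = 0.0                      # truncate to the requested rank
    return U @ np.diag(s) @ Vt
```

The low-rank constraint is what makes the estimate statistically and computationally tractable when the state dimension is large relative to the number of time points.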

  20. An M-estimator for reduced-rank system identification

    PubMed Central

    Chen, Shaojie; Liu, Kai; Yang, Yuguang; Xu, Yuting; Lee, Seonjoo; Lindquist, Martin; Caffo, Brian S.; Vogelstein, Joshua T.

    2018-01-01

    High-dimensional time-series data from a wide variety of domains, such as neuroscience, are being generated every day. Fitting statistical models to such data, to enable parameter estimation and time-series prediction, is an important computational primitive. Existing methods, however, are unable to cope with the high-dimensional nature of these data, due to both computational and statistical reasons. We mitigate both kinds of issues by proposing an M-estimator for Reduced-rank System IDentification (MR. SID). A combination of low-rank approximations, ℓ1 and ℓ2 penalties, and some numerical linear algebra tricks, yields an estimator that is computationally efficient and numerically stable. Simulations and real data examples demonstrate the usefulness of this approach in a variety of problems. In particular, we demonstrate that MR. SID can accurately estimate spatial filters, connectivity graphs, and time-courses from native resolution functional magnetic resonance imaging data. MR. SID therefore enables big time-series data to be analyzed using standard methods, readying the field for further generalizations including non-linear and non-Gaussian state-space models. PMID:29391659

  1. Activity in the fronto-parietal network indicates numerical inductive reasoning beyond calculation: An fMRI study combined with a cognitive model

    PubMed Central

    Liang, Peipeng; Jia, Xiuqin; Taatgen, Niels A.; Borst, Jelmer P.; Li, Kuncheng

    2016-01-01

    Numerical inductive reasoning refers to the process of identifying and extrapolating the rule involved in numeric materials. It is associated with calculation, and shares the common activation of the fronto-parietal regions with calculation, which suggests that numerical inductive reasoning may correspond to a general calculation process. However, compared with calculation, rule identification is critical and unique to reasoning. Previous studies have established the central role of the fronto-parietal network for relational integration during rule identification in numerical inductive reasoning. The current question of interest is whether numerical inductive reasoning exclusively corresponds to calculation or operates beyond calculation, and whether it is possible to distinguish between them based on the activity pattern in the fronto-parietal network. To directly address this issue, three types of problems were created: numerical inductive reasoning, calculation, and perceptual judgment. Our results showed that the fronto-parietal network was more active in numerical inductive reasoning which requires more exchanges between intermediate representations and long-term declarative knowledge during rule identification. These results survived even after controlling for the covariates of response time and error rate. A computational cognitive model was developed using the cognitive architecture ACT-R to account for the behavioral results and brain activity in the fronto-parietal network. PMID:27193284

  2. Activity in the fronto-parietal network indicates numerical inductive reasoning beyond calculation: An fMRI study combined with a cognitive model.

    PubMed

    Liang, Peipeng; Jia, Xiuqin; Taatgen, Niels A; Borst, Jelmer P; Li, Kuncheng

    2016-05-19

    Numerical inductive reasoning refers to the process of identifying and extrapolating the rule involved in numeric materials. It is associated with calculation, and shares the common activation of the fronto-parietal regions with calculation, which suggests that numerical inductive reasoning may correspond to a general calculation process. However, compared with calculation, rule identification is critical and unique to reasoning. Previous studies have established the central role of the fronto-parietal network for relational integration during rule identification in numerical inductive reasoning. The current question of interest is whether numerical inductive reasoning exclusively corresponds to calculation or operates beyond calculation, and whether it is possible to distinguish between them based on the activity pattern in the fronto-parietal network. To directly address this issue, three types of problems were created: numerical inductive reasoning, calculation, and perceptual judgment. Our results showed that the fronto-parietal network was more active in numerical inductive reasoning which requires more exchanges between intermediate representations and long-term declarative knowledge during rule identification. These results survived even after controlling for the covariates of response time and error rate. A computational cognitive model was developed using the cognitive architecture ACT-R to account for the behavioral results and brain activity in the fronto-parietal network.

  3. Comparison of storm-time changes of geomagnetic field at ground and at MAGSAT altitudes

    NASA Technical Reports Server (NTRS)

    Kane, R. P.; Trivedi, N. B.

    1981-01-01

    Computations concerning variations of the geomagnetic field at MAGSAT altitudes were investigated. Using MAGSAT data for the X, Y, and Z components of the geomagnetic field, a computer conversion to yield the H component was performed. Two methods of determining delta H normalized to a constant geocentric distance R sub 0 = 6800 were investigated, and the utility of delta H at times of magnetic storms was considered. Delta H at a geographical latitude of 0 at dawn and dusk, the standard Dst, and K sub p histograms were plotted and compared. Magnetic anomalies are considered. Examination of data from the majority of the 400 passes of MAGSAT considered shows a reasonable delta H versus latitude variation. Discrepancies in values are discussed.
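    The conversion from the X and Y components to the horizontal intensity H is standard; the normalization step is sketched here under an explicitly assumed dipole 1/r^3 falloff, since the abstract does not state which scaling law it used:

```python
import math

def h_component(x, y):
    """Horizontal geomagnetic field intensity from the northward (X)
    and eastward (Y) components: H = sqrt(X^2 + Y^2)."""
    return math.hypot(x, y)

def normalize_to_r0(h, r, r0=6800.0):
    """Rescale a field value measured at geocentric distance r (in
    the same units as r0) to the reference distance r0 = 6800,
    ASSUMING a dipole 1/r^3 falloff; the abstract does not specify
    its normalization, so this scaling law is an assumption."""
    return h * (r / r0) ** 3
```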

  4. Causal Reasoning in Medicine: Analysis of a Protocol.

    ERIC Educational Resources Information Center

    Kuipers, Benjamin; Kassirer, Jerome P.

    1984-01-01

    Describes the construction of a knowledge representation from the identification of the problem (nephrotic syndrome) to a running computer simulation of causal reasoning to provide a vertical slice of the construction of a cognitive model. Interactions between textbook knowledge, observations of human experts, and computational requirements are…

  5. Faultfinder: A diagnostic expert system with graceful degradation for onboard aircraft applications

    NASA Technical Reports Server (NTRS)

    Abbott, Kathy H.; Schutte, Paul C.; Palmer, Michael T.; Ricks, Wendell R.

    1988-01-01

    A research effort was conducted to explore the application of artificial intelligence technology to automation of fault monitoring and diagnosis as an aid to the flight crew. Human diagnostic reasoning was analyzed and actual accident and incident cases were reconstructed. Based on this analysis and reconstruction, diagnostic concepts were conceived and implemented for an aircraft's engine and hydraulic subsystems. These concepts are embedded within a multistage approach to diagnosis that reasons about time-based, causal, and qualitative information, and enables a certain amount of graceful degradation. The diagnostic concepts are implemented in a computer program called Faultfinder that serves as a research prototype.

  6. A Video Lecture and Lab-Based Approach for Learning of Image Processing Concepts

    ERIC Educational Resources Information Center

    Chiu, Chiung-Fang; Lee, Greg C.

    2009-01-01

    The current practice of traditional in-class lecture for learning computer science (CS) in the high schools of Taiwan is in need of revamping. Teachers instruct on the use of commercial software instead of teaching CS concepts to students. The lack of more suitable teaching materials and limited classroom time are the main reasons for the…

  7. Intrusive and Non-Intrusive Instruction in Dynamic Skill Training.

    DTIC Science & Technology

    1981-10-01

    less sensitive to the processing load imposed by the dynamic task together with instructional feedback processing than were the decision-making and... between computer-based instruction of knowledge systems and computer-based instruction of dynamic skills. There is reason to expect that the findings of research on knowledge systems

  8. CHAMPION: Intelligent Hierarchical Reasoning Agents for Enhanced Decision Support

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hohimer, Ryan E.; Greitzer, Frank L.; Noonan, Christine F.

    2011-11-15

    We describe the design and development of an advanced reasoning framework employing semantic technologies, organized within a hierarchy of computational reasoning agents that interpret domain specific information. Designed based on an inspirational metaphor of the pattern recognition functions performed by the human neocortex, the CHAMPION reasoning framework represents a new computational modeling approach that derives invariant knowledge representations through memory-prediction belief propagation processes that are driven by formal ontological language specification and semantic technologies. The CHAMPION framework shows promise for enhancing complex decision making in diverse problem domains including cyber security, nonproliferation and energy consumption analysis.

  9. Designing computer learning environments for engineering and computer science: The scaffolded knowledge integration framework

    NASA Astrophysics Data System (ADS)

    Linn, Marcia C.

    1995-06-01

    Designing effective curricula for complex topics and incorporating technological tools is an evolving process. One important way to foster effective design is to synthesize successful practices. This paper describes a framework called scaffolded knowledge integration and illustrates how it guided the design of two successful course enhancements in the field of computer science and engineering. One course enhancement, the LISP Knowledge Integration Environment, improved learning and resulted in more gender-equitable outcomes. The second course enhancement, the spatial reasoning environment, addressed spatial reasoning in an introductory engineering course. This enhancement minimized the importance of prior knowledge of spatial reasoning and helped students develop a more comprehensive repertoire of spatial reasoning strategies. Taken together, the instructional research programs reinforce the value of the scaffolded knowledge integration framework and suggest directions for future curriculum reformers.

  10. Heart Fibrillation and Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Kogan, B. Y.; Karplus, W. J.; Chudin, E. E.

    1997-01-01

    The Luo and Rudy 3 cardiac cell mathematical model is implemented on the parallel supercomputer CRAY T3D. The splitting algorithm combined with a variable time step and an explicit method of integration provides reasonable solution times and almost perfect scaling for rectilinear wave propagation. The computer simulation makes it possible to observe new phenomena: the break-up of spiral waves caused by intracellular calcium dynamics and the non-uniformity of the calcium distribution in space during the onset of the spiral wave.
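    The combination the abstract credits for its speed (an explicit integrator with a variable time step) can be illustrated on a scalar test equation with a step-doubling error estimate; the tolerance, growth, and shrink factors here are illustrative choices, not the paper's:

```python
def adaptive_euler(f, y0, t_end, dt0=0.1, tol=1e-4):
    """Explicit Euler with a variable time step chosen by step
    doubling: compare one full step against two half steps and
    shrink dt when they disagree. A sketch of the general explicit
    variable-step idea, not the paper's cardiac integrator."""
    t, y, dt = 0.0, y0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)
        full = y + dt * f(t, y)                        # one full step
        half = y + 0.5 * dt * f(t, y)                  # two half steps
        half = half + 0.5 * dt * f(t + 0.5 * dt, half)
        err = abs(full - half)                         # local error estimate
        if err < tol:
            t, y = t + dt, half                        # accept finer result
            dt *= 1.5                                  # grow the step
        else:
            dt *= 0.5                                  # reject, retry smaller
    return y

# decay equation y' = -y on [0, 1]; exact answer is exp(-1)
y_end = adaptive_euler(lambda t, y: -y, 1.0, 1.0)
```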

  11. The ’TIS’ Intelligent Gateway Computer: An Alternative to the ’Doomsday Scenario’,

    DTIC Science & Technology

    1983-02-01

    century. The traditional intermediary, the librarian or information specialist, will have joined the fate of the dinosaurs by that time, Lewis said, unless... others in similar work environments (Refs. 2-5), also by Lewis, who offers it as a partial alternative to the extinction of the traditional information... The underlying reasons probably are that major end users at LLNL: 1) don't have the time to bother with routine searching; 2

  12. 48 CFR 952.227-14 - Rights in data-general. (DOE coverage-alternates VI and VII)

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... data regarded as limited rights data or restricted computer software to the Government and third parties at reasonable royalties upon request by the Department of Energy. (k) Contractor licensing. Except... rights data or restricted computer software on terms and conditions reasonable under the circumstances...

  13. 48 CFR 952.227-14 - Rights in data-general. (DOE coverage-alternates VI and VII)

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... data regarded as limited rights data or restricted computer software to the Government and third parties at reasonable royalties upon request by the Department of Energy. (k) Contractor licensing. Except... rights data or restricted computer software on terms and conditions reasonable under the circumstances...

  14. Effects of Computer Algebra System (CAS) with Metacognitive Training on Mathematical Reasoning.

    ERIC Educational Resources Information Center

    Kramarski, Bracha; Hirsch, Chaya

    2003-01-01

    Describes a study that investigated the differential effects of Computer Algebra Systems (CAS) and metacognitive training (META) on mathematical reasoning. Participants were 83 Israeli eighth-grade students. Results showed that CAS embedded within META significantly outperformed the META and CAS alone conditions, which in turn significantly…

  15. Components of Understanding in Proportional Reasoning: A Fuzzy Set Representation of Developmental Progressions.

    ERIC Educational Resources Information Center

    Moore, Colleen F.; And Others

    1991-01-01

    Examined the development of proportional reasoning by means of a temperature mixture task. Results show the importance of distinguishing between intuitive knowledge and formal computational knowledge of proportional concepts. Provides a new perspective on the relation of intuitive and computational knowledge during development. (GLR)

  16. Exploring Cloud Computing for Large-scale Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Guang; Han, Binh; Yin, Jian

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on-demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often just require a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.

  17. Managing competing elastic Grid and Cloud scientific computing applications using OpenNebula

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Lusso, S.; Masera, M.; Vallero, S.

    2015-12-01

    Elastic cloud computing applications, i.e. applications that automatically scale according to computing needs, work on the ideal assumption of infinite resources. While large public cloud infrastructures may be a reasonable approximation of this condition, scientific computing centres like WLCG Grid sites usually work in a saturated regime, in which applications compete for scarce resources through queues, priorities and scheduling policies, and keeping a fraction of the computing cores idle to allow for headroom is usually not an option. In our particular environment one of the applications (a WLCG Tier-2 Grid site) is much larger than all the others and cannot autoscale easily. Nevertheless, other smaller applications can benefit from automatic elasticity; the implementation of this property in our infrastructure, based on the OpenNebula cloud stack, will be described and the very first operational experiences with a small number of strategies for timely allocation and release of resources will be discussed.

  18. A fast parallel 3D Poisson solver with longitudinal periodic and transverse open boundary conditions for space-charge simulations

    NASA Astrophysics Data System (ADS)

    Qiang, Ji

    2017-10-01

    A three-dimensional (3D) Poisson solver with longitudinal periodic and transverse open boundary conditions can have important applications in beam physics of particle accelerators. In this paper, we present a fast efficient method to solve the Poisson equation using a spectral finite-difference method. This method uses a computational domain that contains the charged particle beam only and has a computational complexity of O(Nu log(Nmode)), where Nu is the total number of unknowns and Nmode is the maximum number of longitudinal or azimuthal modes. This saves both computational time and memory compared with using an artificial boundary condition in a large extended computational domain. The new 3D Poisson solver is parallelized using a message passing interface (MPI) on multi-processor computers and shows a reasonable parallel performance up to hundreds of processor cores.
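    The periodic direction of such a solver is handled spectrally: transform, divide each mode by -k^2, and transform back. A one-dimensional sketch of that step (the transverse open-boundary treatment, which is the harder part of the paper, is not shown):

```python
import numpy as np

def poisson_periodic_1d(f, length):
    """Spectral solve of u'' = f on a periodic interval, the 1-D
    analogue of the longitudinal-periodic direction in the solver.
    f must have zero mean for a periodic solution to exist; the
    zero mode of u is fixed to 0."""
    n = f.size
    # angular wavenumbers k_j = 2*pi*j/length (j = 0, 1, ..., -1)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)
    f_hat = np.fft.fft(f)
    u_hat = np.zeros_like(f_hat)
    nonzero = k != 0
    u_hat[nonzero] = -f_hat[nonzero] / k[nonzero] ** 2   # divide by -k^2
    return np.fft.ifft(u_hat).real
```

Each FFT costs O(n log n), which is the source of the favorable mode-count factor quoted in the abstract.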

  19. Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers.

    PubMed

    Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin

    2017-01-01

    Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation.

  20. Approximating local observables on projected entangled pair states

    NASA Astrophysics Data System (ADS)

    Schwarz, M.; Buerschaper, O.; Eisert, J.

    2017-06-01

    Tensor network states are for good reasons believed to capture ground states of gapped local Hamiltonians arising in the condensed matter context, states which are in turn expected to satisfy an entanglement area law. However, the computational hardness of contracting projected entangled pair states in two- and higher-dimensional systems is often seen as a significant obstacle when devising higher-dimensional variants of the density-matrix renormalization group method. In this work, we show that for those projected entangled pair states that are expected to provide good approximations of such ground states of local Hamiltonians, one can compute local expectation values in quasipolynomial time. We therefore provide a complexity-theoretic justification of why state-of-the-art numerical tools work so well in practice. We finally turn to the computation of local expectation values on quantum computers, providing a meaningful application for a small-scale quantum computer.

  1. Measurement of seeing and the atmospheric time constant by differential scintillations.

    PubMed

    Tokovinin, Andrei

    2002-02-20

    A simple differential analysis of stellar scintillations measured simultaneously with two apertures opens the possibility of estimating the seeing. Moreover, some information on the vertical turbulence distribution can be obtained. A general expression for the differential scintillation index for apertures of arbitrary shape and for finite exposure time is derived, and its applications are studied. Correction for exposure-time bias by use of the ratio of scintillation indices with and without time binning is examined. A bandpass-filtered scintillation in a small aperture (computed as the differential-exposure index) provides a reasonably good estimate of the atmospheric time constant for adaptive optics.

  2. Higher Order Time Integration Schemes for the Unsteady Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.

    2002-01-01

    The rapid increase in available computational power over the last decade has enabled higher resolution flow simulations and more widespread use of unstructured grid methods for complex geometries. While much of this effort has been focused on steady-state calculations in the aerodynamics community, the need to accurately predict off-design conditions, which may involve substantial amounts of flow separation, points to the need to efficiently simulate unsteady flow fields. Accurate unsteady flow simulations can easily require several orders of magnitude more computational effort than a corresponding steady-state simulation. For this reason, techniques for improving the efficiency of unsteady flow simulations are required in order to make such calculations feasible in the foreseeable future. The purpose of this work is to investigate possible reductions in computer time due to the choice of an efficient time-integration scheme from a series of schemes differing in the order of time-accuracy, and by the use of more efficient techniques to solve the nonlinear equations which arise while using implicit time-integration schemes. This investigation is carried out in the context of a two-dimensional unstructured mesh laminar Navier-Stokes solver.

  3. A Stochastic Spiking Neural Network for Virtual Screening.

    PubMed

    Morro, A; Canals, V; Oliver, A; Alomar, M L; Galan-Prado, F; Ballester, P J; Rossello, J L

    2018-04-01

    Virtual screening (VS) has become a key computational tool in early drug design, and screening performance is highly relevant given the large volume of data that must be processed to identify molecules with the sought activity-related pattern. At the same time, hardware implementations of spiking neural networks (SNNs) are emerging as a computing technique that can parallelize processes which normally carry a high cost in computing time and power. Consequently, SNNs represent an attractive alternative for time-consuming processing tasks such as VS. In this brief, we present a smart stochastic spiking neural architecture that implements the ultrafast shape recognition (USR) algorithm, achieving a two-orders-of-magnitude speed improvement over USR software implementations. The neural system is implemented in hardware using field-programmable gate arrays, allowing a highly parallelized USR implementation. The results show that, owing to the high parallelization of the system, millions of compounds can be checked in reasonable times. The proposed architecture is therefore a feasible way to enhance time-consuming data-mining processes such as 3-D molecular similarity search.
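    For reference, the USR descriptors that the spiking hardware accelerates are the first three moments of four atomic distance distributions (12 numbers per molecule). A plain-software sketch, assuming a coordinates array of shape (n_atoms, 3):

```python
import numpy as np

def usr_descriptors(coords):
    """12 USR shape descriptors: mean, standard deviation, and cube-rooted
    skew of the distances from every atom to four reference points."""
    def moments(d):
        m = d.mean()
        return [m, d.std(), np.cbrt(((d - m) ** 3).mean())]
    ctd = coords.mean(axis=0)                    # molecular centroid
    d_ctd = np.linalg.norm(coords - ctd, axis=1)
    cst = coords[d_ctd.argmin()]                 # closest atom to centroid
    fct = coords[d_ctd.argmax()]                 # farthest atom from centroid
    ftf = coords[np.linalg.norm(coords - fct, axis=1).argmax()]  # farthest from fct
    out = []
    for ref in (ctd, cst, fct, ftf):
        out += moments(np.linalg.norm(coords - ref, axis=1))
    return np.array(out)

def usr_similarity(a, b):
    """Scaled inverse Manhattan distance between descriptor vectors, in (0, 1]."""
    return 1.0 / (1.0 + np.abs(a - b).mean())
```

Identical conformations score 1.0, and the descriptors are invariant to rigid translation and rotation, which is what makes the comparison so cheap relative to explicit 3-D superposition.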

  4. Dragon Ears airborne acoustic array: CSP analysis applied to cross array to compute real-time 2D acoustic sound field

    NASA Astrophysics Data System (ADS)

    Cerwin, Steve; Barnes, Julie; Kell, Scott; Walters, Mark

    2003-09-01

    This paper describes the development and application of a novel method to accomplish real-time solid-angle acoustic direction finding using two 8-element orthogonal microphone arrays. The prototype system was intended for localization and signature recognition of ground-based sounds from a small UAV. Recent advances in computer speed have enabled the implementation of microphone arrays in many audio applications. Still, the real-time presentation of a two-dimensional sound field for the purpose of audio target localization is computationally challenging. To overcome this challenge, a cross-power spectrum phase (CSP) technique was applied to each 8-element arm of a 16-element cross array to provide audio target localization. In this paper, we describe the technique and compare it with two other commonly used techniques: the cross-spectral matrix and MUSIC. The results show that the CSP technique applied to two 8-element orthogonal arrays provides a computationally efficient solution with reasonable accuracy and tolerable artifacts, sufficient for real-time applications. Additional topics include the development of a synchronized 16-channel transmitter and receiver to relay the airborne data to the ground-based processor, and test data demonstrating both ground-mounted operation and airborne localization of ground-based gunshots and loud engine sounds.
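    The CSP method referenced above is, in essence, phase-transform cross-correlation: whiten the cross-power spectrum so only phase (i.e., delay) information survives, then pick the correlation peak. A two-microphone delay-estimator sketch (an illustration, not the authors' implementation):

```python
import numpy as np

def csp_delay(x, y, fs):
    """Delay of y relative to x, in seconds, via cross-power spectrum phase."""
    n = len(x) + len(y)                    # zero-pad to avoid wrap-around
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    R = np.conj(X) * Y                     # cross-power spectrum
    R /= np.maximum(np.abs(R), 1e-12)      # keep phase only (PHAT weighting)
    cc = np.fft.irfft(R, n)                # sharp peak at the true lag
    shift = int(np.argmax(np.abs(cc)))
    if shift > n // 2:                     # negative delays wrap to the top
        shift -= n
    return shift / fs
```

Pairwise delays across each 8-element arm then map to an arrival angle through the known microphone spacing and the speed of sound.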

  5. On the radiated EMI current extraction of dc transmission line based on corona current statistical measurements

    NASA Astrophysics Data System (ADS)

    Yi, Yong; Chen, Zhengying; Wang, Liming

    2018-05-01

    Corona-originated discharge on DC transmission lines is the main source of the radiated electromagnetic interference (EMI) field in the vicinity of such lines. A joint time-frequency analysis technique is proposed to extract the radiated EMI current (excitation current) of DC corona from corona current statistical measurements. A reduced-scale experimental platform was set up to measure the statistical distributions of the current waveform parameters of an aluminum conductor steel reinforced (ACSR) conductor. Based on the measured results, the peak, root-mean-square, and average values of the 0.5 MHz radiated EMI current at 9 kHz and 200 Hz bandwidths were calculated with the proposed technique and validated against the conventional excitation function method. Radio interference (RI) was calculated from the radiated EMI current, and a wire-to-plate platform was built to check the validity of the RI computation results. The reasons for the deviation between computations and measurements are analyzed in detail.

  6. Method and system for data clustering for very large databases

    NASA Technical Reports Server (NTRS)

    Livny, Miron (Inventor); Zhang, Tian (Inventor); Ramakrishnan, Raghu (Inventor)

    1998-01-01

    Multi-dimensional data contained in very large databases is efficiently and accurately clustered to determine patterns therein and extract useful information from such patterns. Conventional computer processors may be used which have limited memory capacity and conventional operating speed, allowing massive data sets to be processed in a reasonable time and with reasonable computer resources. The clustering process is organized using a clustering feature tree structure wherein each clustering feature comprises the number of data points in the cluster, the linear sum of the data points in the cluster, and the square sum of the data points in the cluster. A dense region of data points is treated collectively as a single cluster, and points in sparsely occupied regions can be treated as outliers and removed from the clustering feature tree. The clustering can be carried out continuously with new data points being received and processed, and with the clustering feature tree being restructured as necessary to accommodate the information from the newly received data points.
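    The clustering feature described here is the core of the BIRCH data structure; because the triple (N, LS, SS) is additive, two subclusters merge in O(d) time without revisiting raw points, and the centroid and radius fall out in closed form. A minimal sketch:

```python
import numpy as np

class ClusteringFeature:
    """BIRCH-style clustering feature: the number of points (N), their
    linear sum (LS), and their square sum (SS) summarize a cluster."""
    def __init__(self, point):
        p = np.asarray(point, dtype=float)
        self.n = 1
        self.ls = p.copy()          # linear sum of the points
        self.ss = float(p @ p)      # square sum of the points

    def merge(self, other):
        """Absorb another cluster; all three fields simply add."""
        self.n += other.n
        self.ls += other.ls
        self.ss += other.ss

    def centroid(self):
        return self.ls / self.n

    def radius(self):
        """RMS distance of the member points from the centroid."""
        c = self.centroid()
        return np.sqrt(max(self.ss / self.n - c @ c, 0.0))
```

Merging the points (0, 0) and (2, 0), for example, yields centroid (1, 0) and radius 1 without either point being stored.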

  7. Performance Modeling of Experimental Laser Lightcrafts

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Chen, Yen-Sen; Liu, Jiwen; Myrabo, Leik N.; Mead, Franklin B., Jr.; Turner, Jim (Technical Monitor)

    2001-01-01

    A computational plasma aerodynamics model is developed to study the performance of a laser-propelled Lightcraft. The computational methodology is based on a time-accurate, three-dimensional, finite-difference, chemically reacting, unstructured-grid, pressure-based formulation. The underlying physics are added and tested systematically using a building-block approach. The physics modeled include non-equilibrium thermodynamics, non-equilibrium air-plasma finite-rate kinetics, specular ray tracing, laser beam energy absorption and refraction by plasma, non-equilibrium plasma radiation, and plasma resonance. A series of transient computations is performed at several laser pulse energy levels, and the simulated physics are discussed and compared with test data and the literature. The predicted coupling coefficients for the Lightcraft compare reasonably well with those measured in tests conducted on a pendulum apparatus.

  8. Understanding Preprocedure Patient Flow in IR.

    PubMed

    Zafar, Abdul Mueed; Suri, Rajeev; Nguyen, Tran Khanh; Petrash, Carson Cope; Fazal, Zanira

    2016-08-01

    To quantify preprocedural patient flow in interventional radiology (IR) and to identify potential contributors to preprocedural delays. An administrative dataset was used to compute the time intervals required for various preprocedural patient-flow processes. These time intervals were compared across on-time/delayed cases and inpatient/outpatient cases by the Mann-Whitney U test. Spearman ρ was used to assess any correlation of the rank of a procedure on a given day, and of the procedure duration, with the preprocedure time. A linear-regression model of preprocedure time was used to further explore potential contributing factors. Any identified reasons for delay were collated. P < .05 was considered statistically significant. Of the 1,091 cases, 65.8% (n = 718) were delayed. Significantly more outpatient cases started late compared with inpatient cases (81.4% vs 45.0%; P < .001, χ² test). The multivariate linear-regression model showed outpatient status, length of delay in arrival, and longer procedure times to be significantly associated with longer preprocedure times. Late arrival of patients (65.9%), unavailability of physicians (18.4%), and unavailability of a procedure room (13.0%) were the three most frequently identified reasons for delay. The delay was multifactorial in 29.6% of cases (n = 213). Objective measurement of preprocedural IR patient flow demonstrated considerable waste and highlighted high-yield areas of possible improvement. A data-driven approach may aid efficient delivery of IR care. Copyright © 2016 SIR. Published by Elsevier Inc. All rights reserved.

  9. Geometric Reasoning for Automated Planning

    NASA Technical Reports Server (NTRS)

    Clement, Bradley J.; Knight, Russell L.; Broderick, Daniel

    2012-01-01

    An important aspect of mission planning for NASA s operation of the International Space Station is the allocation and management of space for supplies and equipment. The Stowage, Configuration Analysis, and Operations Planning teams collaborate to perform the bulk of that planning. A Geometric Reasoning Engine is developed in a way that can be shared by the teams to optimize item placement in the context of crew planning. The ISS crew spends (at the time of this writing) a third or more of their time moving supplies and equipment around. Better logistical support and optimized packing could make a significant impact on operational efficiency of the ISS. Currently, computational geometry and motion planning do not focus specifically on the optimized orientation and placement of 3D objects based on multiple distance and containment preferences and constraints. The software performs reasoning about the manipulation of 3D solid models in order to maximize an objective function based on distance. It optimizes for 3D orientation and placement. Spatial placement optimization is a general problem and can be applied to object packing or asset relocation.

  10. Elaborated Corrective Feedback and the Acquisition of Reasoning Skills: A Study of Computer-Assisted Instruction.

    ERIC Educational Resources Information Center

    Collins, Maria; And Others

    1987-01-01

    Thirteen learning disabled and 15 remedial high school students were taught reasoning skills using computer-assisted instruction and were given basic or elaborated corrections. Criterion-referenced test scores were significantly higher for the elaborated-corrections treatment on the post- and maintenance tests and on a transfer test assessing…

  11. Proportional Reasoning in the Laboratory: An Intervention Study in Vocational Education

    ERIC Educational Resources Information Center

    Bakker, Arthur; Groenveld, Djonie; Wijers, Monica; Akkerman, Sanne F.; Gravemeijer, Koeno P. E.

    2014-01-01

    Based on insights into the nature of vocational mathematical knowledge, we designed a computer tool with which students in laboratory schools at senior secondary vocational school level could develop a better proficiency in the proportional reasoning involved in dilution. We did so because we had identified computations of concentrations of…

  12. Profile modification computations for LHCD experiments on PBX-M using the TSC/LSC model

    NASA Astrophysics Data System (ADS)

    Kaita, R.; Ignat, D. W.; Jardin, S. C.; Okabayashi, M.; Sun, Y. C.

    1996-02-01

    The TSC-LSC computational model of the dynamics of lower hybrid current drive has been exercised extensively in comparison with data from a Princeton Beta Experiment-Modification (PBX-M) discharge where the measured q(0) attained values slightly above unity. Several significant, but plausible, assumptions had to be introduced to keep the computation from behaving pathologically over time, producing singular profiles of plasma current density and q. Addition of a heuristic current diffusion estimate, or more exactly, a smoothing of the rf-driven current with a diffusion-like equation, greatly improved the behavior of the computation, and brought theory and measurement into reasonable agreement. The model was then extended to longer pulse lengths and higher powers to investigate performance to be expected in future PBX-M current profile modification experiments.
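    The "smoothing of the rf-driven current with a diffusion-like equation" admits a simple illustration: repeatedly applying an explicit diffusion step to a current-density profile spreads out the spikes that would otherwise drive the computation singular. A generic sketch (not the TSC/LSC implementation; the coefficient d and step dt are illustrative and must satisfy d*dt <= 0.5 in grid units for stability):

```python
import numpy as np

def diffuse_smooth(j, d, dt, steps):
    """Smooth a 1D current-density profile with explicit diffusion steps,
    j_i <- j_i + d*dt*(j_{i+1} - 2 j_i + j_{i-1}), holding the ends fixed."""
    j = np.asarray(j, dtype=float).copy()
    for _ in range(steps):
        j[1:-1] += d * dt * (j[2:] - 2.0 * j[1:-1] + j[:-2])
    return j
```

One step with d*dt = 0.25 turns a unit spike into a symmetric three-point profile, which is exactly the regularizing effect the abstract attributes to the heuristic current-diffusion estimate.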

  13. Real-Time Signal Processing Systems

    DTIC Science & Technology

    1992-10-29

    ...computing facilities to take advantage of cost effective solutions. A proliferation of different microprocessors and development systems spread among the... activities are completed, the roles of the VPH memory banks are reversed. This function-swapping is the primary reason for the efficiency and high...

  14. On the development of efficient algorithms for three dimensional fluid flow

    NASA Technical Reports Server (NTRS)

    Maccormack, R. W.

    1988-01-01

    The difficulties of constructing efficient algorithms for three-dimensional flow are discussed. Reasonable candidates are analyzed and tested, and most are found to have obvious shortcomings. Yet there is promise that an efficient class of algorithms exists between the severely time-step-size-limited explicit or approximately factored algorithms and the computationally intensive direct inversion of large sparse matrices by Gaussian elimination.

  15. A three-dimensional semianalytical model of hydraulic fracture growth through weak barriers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luiskutty, C.T.; Tomutes, L.; Palmer, I.D.

    1989-08-01

    The goal of this research was to develop a fracture model for length/height ratio ≤ 4 that includes 2D flow (and a line source corresponding to the perforated interval) but makes approximations that allow a semianalytical solution, with large computer-time savings over the fully numerical model. The height, maximum width, and pressure at the wellbore in this semianalytical model are calculated and compared with the results of the fully three-dimensional (3D) model. There is reasonable agreement in all parameters, the maximum discrepancy being 24%. Comparisons of fracture volume and leakoff volume also show reasonable agreement in volume and fluid efficiencies. The values of length/height ratio, in the four cases in which agreement is found, vary from 1.5 to 3.7. The model offers a useful first-order (or screening) calculation of fracture-height growth through weak barriers (e.g., low stress contrasts). When coupled with the model developed for highly elongated fractures of length/height ratio ≥ 4, which is also found to be in basic agreement with the fully numerical model, this new model provides the capability for approximating fracture-height growth through barriers for vertical fracture shapes that vary from penny-shaped to highly elongated. The computer time required is estimated to be less than that of the fully numerical model by a factor of 10 or more.

  16. The interaction of representation and reasoning.

    PubMed

    Bundy, Alan

    2013-09-08

    Automated reasoning is an enabling technology for many applications of informatics. These applications include verifying that a computer program meets its specification; enabling a robot to form a plan to achieve a task; and answering questions by combining information from diverse sources, e.g. on the Internet. How is automated reasoning possible? Firstly, knowledge of a domain must be stored in a computer, usually in the form of logical formulae. This knowledge might, for instance, have been entered manually, retrieved from the Internet or perceived in the environment via sensors, such as cameras. Secondly, rules of inference are applied to old knowledge to derive new knowledge. Automated reasoning techniques have been adapted from logic, a branch of mathematics that was originally designed to formalize the reasoning of humans, especially mathematicians. My special interest is in the way that representation and reasoning interact. Successful reasoning is dependent on appropriate representation of knowledge and on successful methods of reasoning. Failures of reasoning can suggest changes of representation. This process of representational change can also be automated. We will illustrate the automation of representational change by drawing on recent work in my research group.
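    The derivation step described here, applying rules of inference to old knowledge to obtain new knowledge, can be illustrated by naive forward chaining over propositional facts (a toy sketch, not Bundy's system):

```python
def forward_chain(facts, rules):
    """Apply rules of the form (premises, conclusion) until a fixed point:
    any rule whose premises are all known adds its conclusion as new knowledge."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and set(premises) <= known:
                known.add(conclusion)
                changed = True
    return known
```

Starting from the fact "rain" and the rules rain → ground_wet and ground_wet → slippery, the loop derives both conclusions; real reasoners differ mainly in using unification over first-order formulae and far smarter search.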

  17. "mysterium Cosmographicum", for Orchestra, Narrator/actor, and Computer Music on Tape. (with Original Composition)

    NASA Astrophysics Data System (ADS)

    Keefe, Robert Michael

    Mysterium Cosmographicum is a musical chronicle of an astronomy treatise by the German astronomer Johannes Kepler (1571-1630). Kepler's Mysterium Cosmographicum (Tubingen, 1596), or "Secret of the Universe," was a means by which he justified the existence of the six planets discovered during his lifetime. Kepler, through flawless a priori reasoning, goes to great lengths to explain that the reason there are six and only six planets (Mercury, Venus, Earth, Mars, Jupiter, and Saturn) is because God had placed one of the five regular solids (tetrahedron, cube, octa-, dodeca-, and icosahedron) around each orbiting body. Needless to say, the publication was not very successful, nor did it gain much comment from Kepler's peers, Galileo Galilei (1564-1642) and Tycho Brahe (1546-1601). But hidden within the Mysterium Cosmographicum, almost like a new planet waiting to be discovered, is one of Kepler's three laws of planetary motion, a law that held true for planets discovered long after Kepler's lifetime. Mysterium Cosmographicum is a monologue with music in three parts for orchestra, narrator/actor, and computer music on tape. All musical data structures are generated via an interactive Pascal computer program that computes latitudinal and longitudinal coordinates for each of the nine planets as seen from a fixed point on Earth for any given time frame. These coordinates are then mapped onto selected musical parameters as determined by the composer. Whenever Kepler reads from his treatise or from a lecture or correspondence, the monologue is supported by orchestral planetary data generated from the exact place, date, and time of the treatise, lecture, or correspondence. To the best of my knowledge, Mysterium Cosmographicum is the first composition ever written that employs planetary data as a supporting chronology to action and monologue.

  18. Effect of misalignment on mechanical behavior of metals in creep. [computer programs

    NASA Technical Reports Server (NTRS)

    Wu, H. C.

    1979-01-01

    Application of the endochronic theory of viscoplasticity to creep, creep recovery, and stress relaxation in the small-strain, short-time range produced the following results: (1) The governing constitutive equations for constant-strain-rate stress-strain behavior, creep, creep recovery, and stress relaxation were derived by imposing appropriate constraints on the general constitutive equation of the endochronic theory. (2) A set of material constants was found which correlates strain hardening, creep, creep recovery, and stress relaxation. (3) The theory predicts with reasonable accuracy the creep and creep recovery behaviors at short times. (4) The initial strain history prior to the creep stage significantly affects the subsequent creep. (5) A critical stress was established for creep recovery. A computer program written for the misalignment problem is also reported.

  19. MEGA16 - Computer program for analysis and extrapolation of stress-rupture data

    NASA Technical Reports Server (NTRS)

    Ensign, C. R.

    1981-01-01

    The computerized form of the minimum commitment method of interpolating and extrapolating stress versus time-to-failure data, MEGA16, is described. Examples are given of its many plots and tabular outputs for a typical set of data. The program assumes a specific model equation and then provides a family of predicted isothermals for any set of data with at least 12 stress-rupture results from three different temperatures spread over reasonable stress and time ranges. It is written in FORTRAN 4 using IBM plotting subroutines, and it runs on an IBM 370 time-sharing system.

  20. Computer-Based Assessment of School Readiness and Early Reasoning

    ERIC Educational Resources Information Center

    Csapó, Beno; Molnár, Gyöngyvér; Nagy, József

    2014-01-01

    This study explores the potential of using online tests for the assessment of school readiness and for monitoring early reasoning. Four tests of a face-to-face-administered school readiness test battery (speech sound discrimination, relational reasoning, counting and basic numeracy, and deductive reasoning) and a paper-and-pencil inductive…

  1. Acceleration and Velocity Sensing from Measured Strain

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Truax, Roger

    2015-01-01

    A simple approach for computing the acceleration and velocity of a structure from strain measurements is proposed in this study. First, the deflection and slope of the structure are computed from the strain using a two-step theory. Frequencies of the structure are computed from the time histories of strain using a parameter estimation technique together with an autoregressive moving average model. From the deflection, slope, and frequencies of the structure, the acceleration and velocity can be obtained using the proposed approach. Simple harmonic motion is assumed for the acceleration computations, and the central difference equation with a linear autoregressive model is used for the velocity computations. A cantilevered rectangular wing model is used to validate the approach. The quality of the computed deflection, acceleration, and velocity values is independent of the number of fibers. The central difference equation with a linear autoregressive model proposed in this study follows the target response with reasonable accuracy. Therefore, the handicap of the backward difference equation, phase shift, is successfully overcome.
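    The two estimators named in the abstract can be sketched directly (hypothetical helper names; the autoregressive refinement of the velocity estimate is omitted): acceleration from the simple-harmonic-motion assumption a = -omega^2 x, and velocity from a central difference of the deflection history.

```python
import numpy as np

def velocity_central(deflection, dt):
    """Velocity from a deflection time history by central differences,
    falling back to one-sided differences at the two endpoints."""
    v = np.empty_like(deflection)
    v[1:-1] = (deflection[2:] - deflection[:-2]) / (2.0 * dt)
    v[0] = (deflection[1] - deflection[0]) / dt
    v[-1] = (deflection[-1] - deflection[-2]) / dt
    return v

def accel_harmonic(deflection, omega):
    """Acceleration under the simple-harmonic-motion assumption a = -omega^2 x,
    with omega the modal frequency identified from the strain history."""
    return -(omega ** 2) * deflection
```

The central difference is symmetric about the evaluation time, which is why it avoids the phase shift that a backward difference introduces.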

  2. Transformations and representations supporting spatial perspective taking

    PubMed Central

    Yu, Alfred B.; Zacks, Jeffrey M.

    2018-01-01

    Spatial perspective taking is the ability to reason about spatial relations relative to another’s viewpoint. Here, we propose a mechanistic hypothesis that relates mental representations of one’s viewpoint to the transformations used for spatial perspective taking. We test this hypothesis using a novel behavioral paradigm that assays patterns of response time and variation in those patterns across people. The results support the hypothesis that people maintain a schematic representation of the space around their body, update that representation to take another’s perspective, and thereby reason about the space around their body. This is a powerful computational mechanism that can support imitation, coordination of behavior, and observational learning. PMID:29545731

  3. The adaption and use of research codes for performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebetrau, A.M.

    1987-05-01

    Models of real-world phenomena are developed for many reasons. The models are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time. One example of an RS code is a finite-element code for solving complex systems of differential equations that describe mass transfer through some geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods. 3 figs.

  4. Using computer aided case based reasoning to support clinical reasoning in community occupational therapy.

    PubMed

    Taylor, Bruce; Robertson, David; Wiratunga, Nirmalie; Craw, Susan; Mitchell, Dawn; Stewart, Elaine

    2007-08-01

    Community occupational therapists have long been involved in the provision of environmental control systems. Diverse electronic technologies with the potential to improve the health and quality of life of selected clients have developed rapidly in recent years. Occupational therapists employ clinical reasoning in order to determine the most appropriate technology to meet the needs of individual clients. This paper describes a number of the drivers that may increase the adoption of information and communication technologies in the occupational therapy profession. It outlines case based reasoning as understood in the domains of expert systems and knowledge management and presents the preliminary results of an ongoing investigation into the potential of a prototype computer aided case based reasoning tool to support the clinical reasoning of community occupational therapists in the process of assisting clients to choose home electronic assistive or smart house technology.

  5. Flame-Vortex Studies to Quantify Markstein Numbers Needed to Model Flame Extinction Limits

    NASA Technical Reports Server (NTRS)

    Driscoll, James F.; Feikema, Douglas A.

    2003-01-01

    This work has quantified a database of Markstein numbers for unsteady flames; future work will quantify a database of flame extinction limits for unsteady conditions. Unsteady extinction limits have not been documented previously; both a stretch rate and a residence time must be measured, since extinction requires that the stretch rate be sufficiently large for a sufficiently long residence time. Ma was measured for an inwardly-propagating flame (IPF) that is negatively stretched under microgravity conditions. Computations were also performed using RUN-1DL to explain the measurements. The Markstein number of an inwardly-propagating flame, for both the microgravity experiment and the computations, is significantly larger than that of an outwardly-propagating flame (OPF). The computed profiles of the various species within the flame suggest reasons: computed hydrogen concentrations build up ahead of the IPF but not the OPF. Understanding was gained by running the computations for both simplified and full-chemistry conditions. To explain the experimental findings, numerical simulations of both inwardly and outwardly propagating spherical flames (with complex chemistry) were generated using the RUN-1DL code, which includes 16 species and 46 reactions.

  6. High-School Students' Reasoning while Constructing Plant Growth Models in a Computer-Supported Educational Environment. Research Report

    ERIC Educational Resources Information Center

    Ergazaki, Marida; Komis, Vassilis; Zogza, Vassiliki

    2005-01-01

    This paper highlights specific aspects of high-school students' reasoning while coping with a modeling task of plant growth in a computer-supported educational environment. It is particularly concerned with the modeling levels ('macro-phenomenological' and 'micro-conceptual' level) activated by peers while exploring plant growth and with their…

  7. Visual Reasoning in Computational Environment: A Case of Graph Sketching

    ERIC Educational Resources Information Center

    Leung, Allen; Chan, King Wah

    2004-01-01

    This paper reports the case of a form six (grade 12) Hong Kong student's exploration of graph sketching in a computational environment. In particular, the student summarized his discovery in the form of two empirical laws. The student was interviewed and the interviewed data were used to map out a possible path of his visual reasoning. Critical…

  8. Cultural Commonalities and Differences in Spatial Problem-Solving: A Computational Analysis

    ERIC Educational Resources Information Center

    Lovett, Andrew; Forbus, Kenneth

    2011-01-01

    A fundamental question in human cognition is how people reason about space. We use a computational model to explore cross-cultural commonalities and differences in spatial cognition. Our model is based upon two hypotheses: (1) the structure-mapping model of analogy can explain the visual comparisons used in spatial reasoning; and (2) qualitative,…

  9. The Effects of Learning a Computer Programming Language on the Logical Reasoning of School Children.

    ERIC Educational Resources Information Center

    Seidman, Robert H.

    The research reported in this paper explores the syntactical and semantic link between computer programming statements and logical principles, and addresses the effects of learning a programming language on logical reasoning ability. Fifth grade students in a public school in Syracuse, New York, were randomly selected as subjects, and then…

  10. The Difficult Process of Scientific Modelling: An Analysis Of Novices' Reasoning During Computer-Based Modelling

    ERIC Educational Resources Information Center

    Sins, Patrick H. M.; Savelsbergh, Elwin R.; van Joolingen, Wouter R.

    2005-01-01

    Although computer modelling is widely advocated as a way to offer students a deeper understanding of complex phenomena, the process of modelling is rather complex itself and needs scaffolding. In order to offer adequate support, a thorough understanding of the reasoning processes students employ and of difficulties they encounter during a…

  11. Developing Strategic and Reasoning Abilities with Computer Games at Primary School Level

    ERIC Educational Resources Information Center

    Bottino, R. M.; Ferlino, L.; Ott, M.; Tavella, M.

    2007-01-01

    The paper reports a small-scale, long-term pilot project designed to foster strategic and reasoning abilities in young primary school pupils by engaging them in a number of computer games, mainly those usually called mind games (brainteasers, puzzlers, etc.). In this paper, the objectives, work methodology, experimental setting, and tools used in…

  12. Cortical Specializations Underlying Fast Computations

    PubMed Central

    Volgushev, Maxim

    2016-01-01

    The time course of behaviorally relevant environmental events sets temporal constraints on neuronal processing. How does the mammalian brain make use of the increasingly complex networks of the neocortex, while making decisions and executing behavioral reactions within a reasonable time? The key parameter determining the speed of computations in neuronal networks is the time interval that neuronal ensembles need to process changes at their input and to communicate the results of this processing to downstream neurons. Theoretical analysis identified basic requirements for fast processing: use of neuronal populations for encoding, background activity, and fast onset dynamics of action potentials in neurons. Experimental evidence shows that populations of neocortical neurons fulfil these requirements. Indeed, they can change firing rate in response to input perturbations very quickly, within 1 to 3 ms, and encode high-frequency components of the input by phase-locking their spiking to frequencies up to 300 to 1000 Hz. This implies that the time unit of computations by cortical ensembles is only a few milliseconds (1 to 3 ms), which is considerably shorter than the membrane time constant of individual neurons. The ability of cortical neuronal ensembles to communicate on a millisecond time scale allows for complex, multiple-step processing and precise coordination of neuronal activity in parallel processing streams, while keeping the speed of behavioral reactions within environmentally set temporal constraints. PMID:25689988

  13. Module Extraction for Efficient Object Queries over Ontologies with Large ABoxes

    PubMed Central

    Xu, Jia; Shironoshita, Patrick; Visser, Ubbo; John, Nigel; Kabuka, Mansur

    2015-01-01

    The extraction of logically-independent fragments out of an ontology ABox can be useful for solving the tractability problem of querying ontologies with large ABoxes. In this paper, we propose a formal definition of an ABox module that guarantees complete preservation of facts about a given set of individuals and thus can be reasoned about independently w.r.t. the ontology TBox. With ABox modules of this type, isolated or distributed (parallel) ABox reasoning becomes feasible, and more efficient data retrieval from ontology ABoxes can be attained. To compute such an ABox module, we present a theoretical approach and also an approximation for SHIQ ontologies. Evaluation of the module approximation on different types of ontologies shows that, on average, extracted ABox modules are significantly smaller than the entire ABox, and the time for ontology reasoning based on ABox modules can be significantly reduced. PMID:26848490

  14. A Computational Model of Reasoning from the Clinical Literature

    PubMed Central

    Rennels, Glenn D.

    1986-01-01

    This paper explores the premise that a formalized representation of empirical studies can play a central role in computer-based decision support. The specific motivations underlying this research include the following propositions: 1. Reasoning from experimental evidence contained in the clinical literature is central to the decisions physicians make in patient care. 2. A computational model, based upon a declarative representation for published reports of clinical studies, can drive a computer program that selectively tailors knowledge of the clinical literature as it is applied to a particular case. 3. The development of such a computational model is an important first step toward filling a void in computer-based decision support systems. Furthermore, the model may help us better understand the general principles of reasoning from experimental evidence both in medicine and other domains. Roundsman is a developmental computer system which draws upon structured representations of the clinical literature in order to critique plans for the management of primary breast cancer. Roundsman is able to produce patient-specific analyses of breast cancer management options based on the 24 clinical studies currently encoded in its knowledge base. The Roundsman system is a first step in exploring how the computer can help to bring a critical analysis of the relevant literature to the physician, structured around a particular patient and treatment decision.

  15. PCA-based artifact removal algorithm for stroke detection using UWB radar imaging.

    PubMed

    Ricci, Elisa; di Domenico, Simone; Cianca, Ernestina; Rossi, Tommaso; Diomedi, Marina

    2017-06-01

    Stroke patients should be dispatched to the highest level of care available in the shortest time. In this context, a transportable system in specialized ambulances, able to evaluate the presence of an acute brain lesion in a short time interval (i.e., a few minutes), could shorten the delay to treatment. UWB radar imaging is an emerging diagnostic branch with great potential for the implementation of a transportable and low-cost device. Transportability, low cost and short response time pose challenges to the signal processing algorithms for the backscattered signals, as they should guarantee good performance with a reasonably low number of antennas and low computational complexity, which is tightly related to the response time of the device. The paper shows that a PCA-based preprocessing algorithm can: (1) achieve good performance already with a computationally simple beamforming algorithm; (2) outperform state-of-the-art preprocessing algorithms; (3) enable a further improvement in performance (and/or a decrease in the number of antennas) by using a multistatic approach with just a modest increase in computational complexity. This is an important result toward the implementation of such a diagnostic device, which could play an important role in emergency scenarios.
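The idea behind such PCA-based preprocessing can be sketched as follows (a minimal illustration, assuming the common formulation of projecting out the dominant principal component occupied by the early-time artifact; the paper's exact algorithm, antenna count, and signal model are not reproduced, and the synthetic data are invented):

```python
import numpy as np

def pca_artifact_removal(X, n_components=1):
    """Remove the dominant common artifact from multichannel radar data.

    X: (n_channels, n_samples) array of backscattered signals.
    The strong early-time artifact (antenna coupling / skin reflection)
    is nearly identical across channels, so it dominates the first
    principal direction(s); projecting it out leaves the weak target."""
    Xc = X - X.mean(axis=0, keepdims=True)          # center across channels
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components]                           # (n_components, n_samples)
    return Xc - (Xc @ V.T) @ V                      # subtract the projection

# Synthetic demo: a strong early artifact (slightly different gain per
# channel) plus a weak late target response in a single channel.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 400)
artifact = np.exp(-((t - 0.1) / 0.02) ** 2)
target = 0.05 * np.exp(-((t - 0.6) / 0.03) ** 2)
X = np.stack([(1.0 + 0.2 * i) * artifact + (target if i == 3 else 0.0)
              + 1e-3 * rng.standard_normal(t.size) for i in range(8)])
Y = pca_artifact_removal(X, n_components=1)
```

Because the artifact is far stronger than the lesion response, it concentrates in the first principal component; subtracting that component suppresses the early-time reflection while the weak target response survives, after which beamforming can be applied.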

  16. Graded meshes in bio-thermal problems with transmission-line modeling method.

    PubMed

    Milan, Hugo F M; Carvalho, Carlos A T; Maia, Alex S C; Gebremedhin, Kifle G

    2014-10-01

    In this study, the transmission-line modeling (TLM) method applied to bio-thermal problems was improved by incorporating several novel computational techniques. These include graded meshes, which made the computations 9 times faster and used only a fraction (16%) of the computational resources required by regular meshes when analyzing heat flow through heterogeneous media. Graded meshes, unlike regular meshes, allow heat sources to be modeled in all segments of the mesh. A new boundary condition that considers thermal properties, resulting in more realistic modeling of complex problems, is introduced, along with a new way of calculating an error parameter. The calculated temperatures between nodes were compared against results from the literature and agreed to within less than 1% difference. It is reasonable, therefore, to conclude that the improved TLM model described herein has great potential in heat transfer analysis of biological systems. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Genetic algorithms in teaching artificial intelligence (automated generation of specific algebras)

    NASA Astrophysics Data System (ADS)

    Habiballa, Hashim; Jendryscik, Radek

    2017-11-01

    The problem of teaching essential Artificial Intelligence (AI) methods is an important task for an educator in the branch of soft computing. The key focus is often proper understanding of the principles of AI methods on two essential points: why we use soft-computing methods at all, and how we apply these methods to generate reasonable results in a sensible time. We present one interesting problem, solved in non-educational research, concerning the automated generation of specific algebras in a huge search space, and we emphasize the above-mentioned points through it as an educational case study.

  18. Software environment for implementing engineering applications on MIMD computers

    NASA Technical Reports Server (NTRS)

    Lopez, L. A.; Valimohamed, K. A.; Schiff, S.

    1990-01-01

    In this paper the concept for a software environment for developing engineering application systems for multiprocessor hardware (MIMD) is presented. The philosophy employed is to solve the largest problems possible in a reasonable amount of time, rather than solve existing problems faster. In the proposed environment most of the problems concerning parallel computation and handling of large distributed data spaces are hidden from the application program developer, thereby facilitating the development of large-scale software applications. Applications developed under the environment can be executed on a variety of MIMD hardware; it protects the application software from the effects of a rapidly changing MIMD hardware technology.

  19. Knowledge representation by connection matrices: A method for the on-board implementation of large expert systems

    NASA Technical Reports Server (NTRS)

    Kellner, A.

    1987-01-01

    Extremely large knowledge sources and efficient knowledge access characterizing future real-life artificial intelligence applications represent crucial requirements for on-board artificial intelligence systems due to obvious computer time and storage constraints on spacecraft. A type of knowledge representation and corresponding reasoning mechanism is proposed which is particularly suited for the efficient processing of such large knowledge bases in expert systems.

  20. Multiresolution Algorithms for Processing Giga-Models: Real-time Visualization, Reasoning, and Interaction

    DTIC Science & Technology

    2012-04-23

    Interactive Virtual Hair Salon , Presence, (05 2007): 237. doi: 2012/04/17 12:55:26 31 Theodore Kim, Jason Sewall, Avneesh Sud, Ming Lin. Fast...in Games , Utrecht, Netherlands, Nov. 2009. Keynote Speaker, IADIS International Conference on Computer Graphics and Visualization, Portugal, June 2009...Keynote Speaker, ACM Symposium on Virtual Reality Software and Technology, Bordeaux, France, October 2008. Invited Speaker, Motion in Games , Utrecht

  1. Memory Reconsolidation and Computational Learning

    DTIC Science & Technology

    2010-03-01

    Cooper and H.T. Siegelmann, "Memory Reconsolidation for Natural Language Processing," Cognitive Neurodynamics , 3, 2009: 365-372. M.M. Olsen, N...computerized memories and other state of the art cognitive architectures, our memory system has the ability to process on-line and in real-time as...on both continuous and binary inputs, unlike state of the art methods in case based reasoning and in cognitive architectures, which are bound to

  2. Computer use and needs of internists: a survey of members of the American College of Physicians-American Society of Internal Medicine.

    PubMed Central

    Lacher, D.; Nelson, E.; Bylsma, W.; Spena, R.

    2000-01-01

    The American College of Physicians-American Society of Internal Medicine conducted a membership survey in late 1998 to assess their activities, needs, and attitudes. A total of 9,466 members (20.9% response rate) reported on 198 items related to computer use and needs of internists. Eighty-two percent of the respondents reported that they use computers for personal or professional reasons. Physicians younger than 50 years old who had full- or part-time academic affiliation reported using computers more frequently for medical applications. About two thirds of respondents who had access to computers connected to the Internet at least weekly, with most using the Internet from home for e-mail and nonmedical uses. Physicians expressed concerns about Internet security, confidentiality, and accuracy, and the lack of time to browse the Internet. In practice settings, internists used computers for administrative and financial functions. Less than 19% of respondents had partial or complete electronic clinical functions in their offices. Less than 7% of respondents exchanged e-mail with their patients on a weekly or daily basis. Also, less than 15% of respondents used computers for continuing medical education (CME). Respondents reported they wanted to increase their general computer skills and enhance their knowledge of computer-based information sources for patient care, electronic medical record systems, computer-based CME, and telemedicine. While most respondents used computers and connected to the Internet, few physicians utilized computers for clinical management. Medical organizations face the challenge of increasing physician use of clinical systems and electronic CME. PMID:11079924

  3. Computer use and needs of internists: a survey of members of the American College of Physicians-American Society of Internal Medicine.

    PubMed

    Lacher, D; Nelson, E; Bylsma, W; Spena, R

    2000-01-01

    The American College of Physicians-American Society of Internal Medicine conducted a membership survey in late 1998 to assess their activities, needs, and attitudes. A total of 9,466 members (20.9% response rate) reported on 198 items related to computer use and needs of internists. Eighty-two percent of the respondents reported that they use computers for personal or professional reasons. Physicians younger than 50 years old who had full- or part-time academic affiliation reported using computers more frequently for medical applications. About two thirds of respondents who had access to computers connected to the Internet at least weekly, with most using the Internet from home for e-mail and nonmedical uses. Physicians expressed concerns about Internet security, confidentiality, and accuracy, and the lack of time to browse the Internet. In practice settings, internists used computers for administrative and financial functions. Less than 19% of respondents had partial or complete electronic clinical functions in their offices. Less than 7% of respondents exchanged e-mail with their patients on a weekly or daily basis. Also, less than 15% of respondents used computers for continuing medical education (CME). Respondents reported they wanted to increase their general computer skills and enhance their knowledge of computer-based information sources for patient care, electronic medical record systems, computer-based CME, and telemedicine. While most respondents used computers and connected to the Internet, few physicians utilized computers for clinical management. Medical organizations face the challenge of increasing physician use of clinical systems and electronic CME.

  4. On the interpretation of kernels - Computer simulation of responses to impulse pairs

    NASA Technical Reports Server (NTRS)

    Hung, G.; Stark, L.; Eykhoff, P.

    1983-01-01

    A method is presented for using a unit impulse response and responses to impulse pairs of variable separation to calculate the second-degree kernels of a quadratic system. A quadratic system may be built from simple linear terms of known dynamics and a multiplier. Computer simulation results on quadratic systems with building elements of various time constants indicate that the larger time-constant term before multiplication dominates the envelope of the off-diagonal kernel curves as these move perpendicular to and away from the main diagonal. The smaller time-constant term before multiplication combines with the effect of the time constant after multiplication to dominate the kernel curves in the direction of the second-degree impulse response, i.e., parallel to the main diagonal. Such insight may be helpful in recognizing essential aspects of (second-degree) kernels; it may be used to simplify the model structure and, perhaps, add to the physical/physiological understanding of the underlying processes.
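The impulse-pair idea can be sketched with a toy quadratic system built, as described, from two first-order linear branches feeding a multiplier (an illustration with assumed time constants and no dynamics after the multiplier, not the authors' simulation):

```python
import math

def first_order(u, tau):
    """Discrete first-order low-pass: y[n] = a*y[n-1] + (1 - a)*u[n]."""
    a = math.exp(-1.0 / tau)
    y, prev = [], 0.0
    for x in u:
        prev = a * prev + (1.0 - a) * x
        y.append(prev)
    return y

def quadratic_system(u, tau1=20.0, tau2=5.0):
    """Two linear first-order branches multiplied together."""
    y1, y2 = first_order(u, tau1), first_order(u, tau2)
    return [p * q for p, q in zip(y1, y2)]

N = 300
def impulse(at):
    u = [0.0] * N
    u[at] = 1.0
    return u

# Off-diagonal kernel slice from an impulse pair: the response to the pair
# minus the two single-impulse responses isolates the interaction term.
envelope = {}
for sep in (2, 10, 40):
    pair = quadratic_system([a + b for a, b in zip(impulse(10), impulse(10 + sep))])
    s1 = quadratic_system(impulse(10))
    s2 = quadratic_system(impulse(10 + sep))
    cross = [p - x1 - x2 for p, x1, x2 in zip(pair, s1, s2)]
    envelope[sep] = max(cross)
```

The interaction term isolated this way decays with pulse separation at the rate set by the larger time constant before multiplication, consistent with the envelope behavior of the off-diagonal kernel curves described above.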

  5. The 4th R: Reasoning.

    ERIC Educational Resources Information Center

    Miles, Curtis

    1983-01-01

    Reviews sources of information on materials for teaching reasoning with a microcomputer. Suggests microcomputer magazines, catalogs of commercial materials, CONDUIT (a nonprofit organization devoted to educational computer use), and local microcomputer users groups. Lists Apple II software for strategy games with reasoning applications. (DMM)

  6. Why Don't All Professors Use Computers?

    ERIC Educational Resources Information Center

    Drew, David Eli

    1989-01-01

    Discusses the adoption of computer technology at universities and examines reasons why some professors don't use computers. Topics discussed include computer applications, including artificial intelligence, social science research, statistical analysis, and cooperative research; appropriateness of the technology for the task; the Computer Aptitude…

  7. RenderMan design principles

    NASA Technical Reports Server (NTRS)

    Apodaca, Tony; Porter, Tom

    1989-01-01

    The two worlds of interactive graphics and realistic graphics have remained separate. Fast graphics hardware runs simple algorithms and generates simple-looking images. Photorealistic image synthesis software runs slowly on large, expensive computers. The time has come for these two branches of computer graphics to merge. The speed and expense of graphics hardware is no longer the barrier to the wide acceptance of photorealism. There is every reason to believe that high-quality image synthesis will become a standard capability of every graphics machine, from superworkstation to personal computer. The significant barrier has been the lack of a common language, an agreed-upon set of terms and conditions, for 3-D modeling systems to talk to 3-D rendering systems when computing an accurate rendition of a scene. Pixar has introduced RenderMan to serve as that common language. RenderMan, specifically the extensibility it offers in shading calculations, is discussed.

  8. Numerical simulation of a mini PEMFC stack

    NASA Astrophysics Data System (ADS)

    Liu, Zhixiang; Mao, Zongqiang; Wang, Cheng; Zhuge, Weilin; Zhang, Yangjun

    Fuel cell modeling and simulation has attracted much attention recently because it can probe transport and reaction mechanisms. In this paper, a computational fuel cell dynamics (CFCD) method was applied to simulate a proton exchange membrane fuel cell (PEMFC) stack for the first time. The air-cooled mini fuel cell stack consisted of six cells, each with an active area of 8 cm² (2 cm × 4 cm). With reasonable simplification, the number of computational elements was effectively reduced, allowing the simulation to be conducted on a personal computer without large-scale parallel computation. The results indicated that the temperature gradient inside the fuel cell stack was determined by the flow rate of the cooling air: if the air flow rate is too low, the stack cannot be effectively cooled and the temperature will rise to a range that might cause unstable stack operation.

  9. Comparing errors in ED computer-assisted vs conventional pediatric drug dosing and administration.

    PubMed

    Yamamoto, Loren; Kanemori, Joan

    2010-06-01

    Compared to fixed-dose single-vial drug administration in adults, pediatric drug dosing and administration requires a series of calculations, all of which are potentially error prone. The purpose of this study is to compare error rates and task completion times for common pediatric medication scenarios using computer program assistance vs conventional methods. Two versions of a 4-part paper-based test were developed. Each part consisted of a set of medication administration and/or dosing tasks. Emergency department and pediatric intensive care unit nurse volunteers completed these tasks using both methods (sequence assigned to start with either the conventional or the computer-assisted approach). Completion times, errors, and the reason for each error were recorded. Thirty-eight nurses completed the study. Summing completion times over all 4 parts, the mean conventional total time was 1243 seconds vs a mean computer program total time of 879 seconds (P < .001). The conventional manual method produced a mean of 1.8 errors vs a mean of 0.7 errors with the computer program (P < .001). Of the 97 total errors, 36 were due to misreading the drug concentration on the label, 34 were due to calculation errors, and 8 were due to misplaced decimals. Of the 36 label interpretation errors, 18 (50%) occurred with digoxin or insulin. Computerized assistance reduced errors and the time required for drug administration calculations. A pattern of errors emerged: reading/interpreting certain drug labels was more error prone. Optimizing the layout of drug labels could reduce the error rate for error-prone labels. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  10. The Application of Multiobjective Evolutionary Algorithms to an Educational Computational Model of Science Information Processing: A Computational Experiment in Science Education

    ERIC Educational Resources Information Center

    Lamb, Richard L.; Firestone, Jonah B.

    2017-01-01

    Conflicting explanations and unrelated information in science classrooms increase cognitive load and decrease efficiency in learning. This reduced efficiency ultimately limits one's ability to solve reasoning problems in the science. In reasoning, it is the ability of students to sift through and identify critical pieces of information that is of…

  11. The Effects of Computer Programming on High School Students' Reasoning Skills and Mathematical Self-Efficacy and Problem Solving

    ERIC Educational Resources Information Center

    Psycharis, Sarantos; Kallia, Maria

    2017-01-01

    In this paper we investigate whether computer programming has an impact on high school student's reasoning skills, problem solving and self-efficacy in Mathematics. The quasi-experimental design was adopted to implement the study. The sample of the research comprised 66 high school students separated into two groups, the experimental and the…

  12. Reasoning Abilities in Primary School: A Pilot Study on Poor Achievers vs. Normal Achievers in Computer Game Tasks

    ERIC Educational Resources Information Center

    Dagnino, Francesca Maria; Ballauri, Margherita; Benigno, Vincenza; Caponetto, Ilaria; Pesenti, Elia

    2013-01-01

    This paper presents the results of preliminary research on the assessment of reasoning abilities in primary school poor achievers vs. normal achievers using computer game tasks. Subjects were evaluated by means of cognitive assessment on logical abilities and academic skills. The aim of this study is to better understand the relationship between…

  13. Performance Modeling of an Experimental Laser Propelled Lightcraft

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Chen, Yen-Sen; Liu, Jiwen; Myrabo, Leik N.; Mead, Franklin B., Jr.

    2000-01-01

    A computational plasma aerodynamics model is developed to study the performance of an experimental laser-propelled lightcraft. The computational methodology is based on a time-accurate, three-dimensional, finite-difference, chemically reacting, unstructured-grid, pressure-based formulation. The underlying physics are added and tested systematically using a building-block approach. The physics modeled include non-equilibrium thermodynamics, non-equilibrium air-plasma finite-rate kinetics, specular ray tracing, laser beam energy absorption and refraction by plasma, non-equilibrium plasma radiation, and plasma resonance. A series of transient computations are performed at several laser pulse energy levels, and the simulated physics are discussed and compared with those of tests and the literature. The predicted coupling coefficients for the lightcraft compared reasonably well with those of tests conducted on a pendulum apparatus.

  14. Secure and Efficient Signature Scheme Based on NTRU for Mobile Payment

    NASA Astrophysics Data System (ADS)

    Xia, Yunhao; You, Lirong; Sun, Zhe; Sun, Zhixin

    2017-10-01

    Mobile payment is becoming more and more popular; however, traditional public-key encryption algorithms place high demands on hardware, making them unsuitable for mobile terminals with limited computing resources, and most cannot resist quantum-computing attacks. This paper studies NTRU, a public-key algorithm resistant to quantum computation, by analyzing the influence of the parameters q and k on the probability of generating a reasonable signature value. Two methods are proposed to increase this probability: first, increase the value of the parameter q; second, during the signing phase, add an authentication condition that the candidate meets the reasonable-signature requirements. Experimental results show that the proposed signature scheme achieves zero leakage of private-key information from the signature value, increases the probability of generating a reasonable signature value, improves the signing rate, and avoids propagation of invalid signatures in the network, although the scheme places certain restrictions on parameter selection.

  15. The interaction of representation and reasoning

    PubMed Central

    Bundy, Alan

    2013-01-01

    Automated reasoning is an enabling technology for many applications of informatics. These applications include verifying that a computer program meets its specification; enabling a robot to form a plan to achieve a task and answering questions by combining information from diverse sources, e.g. on the Internet, etc. How is automated reasoning possible? Firstly, knowledge of a domain must be stored in a computer, usually in the form of logical formulae. This knowledge might, for instance, have been entered manually, retrieved from the Internet or perceived in the environment via sensors, such as cameras. Secondly, rules of inference are applied to old knowledge to derive new knowledge. Automated reasoning techniques have been adapted from logic, a branch of mathematics that was originally designed to formalize the reasoning of humans, especially mathematicians. My special interest is in the way that representation and reasoning interact. Successful reasoning is dependent on appropriate representation of both knowledge and successful methods of reasoning. Failures of reasoning can suggest changes of representation. This process of representational change can also be automated. We will illustrate the automation of representational change by drawing on recent work in my research group. PMID:24062623

  16. Accelerating phylogenetics computing on the desktop: experiments with executing UPGMA in programmable logic.

    PubMed

    Davis, J P; Akella, S; Waddell, P H

    2004-01-01

    Having greater computational power on the desktop for processing taxa data sets has been a dream of biologists/statisticians involved in phylogenetics data analysis. Many existing algorithms have been highly optimized; one example is Felsenstein's PHYLIP code, written in C, for UPGMA and neighbor-joining algorithms. However, the ability to process more than a few tens of taxa in a reasonable amount of time using conventional computers has not yielded a satisfactory speedup in data processing, making it difficult for phylogenetics practitioners to quickly explore data sets, such as might be done from a laptop computer. We discuss the application of custom computing techniques to phylogenetics. In particular, we apply this technology to speed up UPGMA algorithm execution by a factor of a hundred, against that of PHYLIP code running on the same PC. We report on these experiments and discuss how custom computing techniques can be used to accelerate phylogenetics algorithm performance not only on the desktop but also on larger, high-performance computing engines, thus enabling high-speed processing of data sets involving thousands of taxa.
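For reference, the UPGMA algorithm that PHYLIP implements (and that this work accelerates in programmable logic) can be sketched in a few lines of pure Python; this is an illustrative software version, not the FPGA implementation:

```python
def upgma(pairs, names):
    """UPGMA hierarchical clustering.

    pairs: {frozenset({a, b}): distance} for every pair of taxon labels.
    Returns (tree, height): a nested-tuple tree and the root merge height."""
    d = dict(pairs)
    size = {n: 1 for n in names}
    height = {n: 0.0 for n in names}
    live = set(names)
    while len(live) > 1:
        # find the closest pair among the currently live clusters
        fs, dmin = min(((p, v) for p, v in d.items() if p <= live),
                       key=lambda kv: kv[1])
        a, b = tuple(fs)
        new = (a, b)                       # merged cluster as a nested tuple
        size[new] = size[a] + size[b]
        height[new] = dmin / 2.0
        # distances to the new cluster are size-weighted averages
        for c in live - {a, b}:
            d[frozenset({new, c})] = (size[a] * d[frozenset({a, c})] +
                                      size[b] * d[frozenset({b, c})]) / size[new]
        live = (live - {a, b}) | {new}
    root = live.pop()
    return root, height[root]

names = ["A", "B", "C", "D"]
pairs = {frozenset(p): v for p, v in [
    (("A", "B"), 4.0), (("A", "C"), 8.0), (("A", "D"), 8.0),
    (("B", "C"), 8.0), (("B", "D"), 8.0), (("C", "D"), 4.0)]}
tree, h = upgma(pairs, names)
print(tree, h)   # root height 4.0; subtrees {A,B} and {C,D} (order may vary)
```

Each step merges the closest pair of clusters and recomputes distances as size-weighted averages; this repeated minimum search and update is the O(n³) cost that makes hardware acceleration attractive for data sets with thousands of taxa.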

  17. Approximate, computationally efficient online learning in Bayesian spiking neurons.

    PubMed

    Kuhlmann, Levin; Hauser-Raspe, Michael; Manton, Jonathan H; Grayden, David B; Tapson, Jonathan; van Schaik, André

    2014-03-01

    Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM) which is computationally slow and limits the potential of studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initialization of parameter estimates that are far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI such that one can take advantage of the energy-efficient spike coding of BSNs.

  18. People's reasons for wanting to complete probation: Use and predictive validity in an e-health intervention.

    PubMed

    Spohr, Stephanie A; Taxman, Faye S; Walters, Scott T

    2017-04-01

    The criminal justice system tends to emphasize external contingencies (e.g., fees, jail time) to motivate offender compliance. However, people's reasons for desistance vary considerably. This study evaluated the acceptability, utility, and predictive validity of questions that ask about people's reasons for wanting to successfully complete probation. Substance-using probationers (N=113) participated in a web-based computer intervention that targeted substance use and treatment initiation. Questions around seven dimensions of reasons for completing probation were developed to provide tailored feedback during the web-based program. A principal components factor analysis found that survey items loaded onto two distinct factors. Factor one, "Tangible Loss," focused on external and present-focused reasons. Factor two, "Better Life," focused on internal and future-focused reasons. There was a significant negative association between Better Life scores and days of substance use after two months (β=-0.31, SE=0.13, p<0.05). There was a significant positive association with Better Life scores and days of treatment attendance (β=1.46, SE=0.26, p<0.001). Tangible Loss scores were not associated with substance use and treatment attendance. These findings may help to create more effective motivational tracks in e-health interventions, and may complement traditional motivation measures with an explicit focus on people's stated reasons for wanting to complete probation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Competent Reasoning with Rational Numbers.

    ERIC Educational Resources Information Center

    Smith, John P. III

    1995-01-01

    Analyzed students' reasoning with fractions. Found that skilled students applied strategies specifically tailored to restricted classes of fractions and produced reliable solutions with a minimum of computation effort. Results suggest that competent reasoning depends on a knowledge base that includes numerically specific and invented strategies,…

  20. Media use as a reason for meal skipping and fast eating in secondary school children.

    PubMed

    Van den Bulck, J; Eggermont, S

    2006-04-01

    This study examined self-reported meal skipping and eating faster than usual with the goal of watching television or playing computer games. Respondents reported their media use and indicated how often they skipped a meal to watch a favourite television programme or to play a computer game, and how often they ate faster than usual in order to watch television or play a computer game. Respondents were 2546 adolescents of 13 (first year of secondary school) and 16 years (fourth year of secondary school) of age. About one respondent in 10 skipped at least one meal every week for either television viewing or computer game playing. Weekly meal skipping for television viewing occurs more regularly in boys and first-year students, but particularly in teenagers who view 5 h or more daily (15% of the sample). The category of teenagers who play computer games four times a week or more (25.3% of the sample) is at increased risk of meal skipping; those who play more than four times a week are 10 times more likely to skip a meal weekly. A quarter of the adolescents eat faster at least once a week to be able to watch television or play a computer game. Regardless of gender and school year, teenagers' risk of eating faster progressively increases with their use of the media. Those who watch 4 h or more daily are about seven times more likely to skip a meal for television, and those who play computer games at least four times a week are nine times more likely to skip a meal weekly. Unhealthy eating habits can be a side effect of heavy or excessive media use. Teenagers' use of television or game computers during nonworking or out-of-school hours partly displaces time that needs to be spent at meals. Practitioners and educators may try to encourage or restore a pattern of healthful meal consumption habits by reducing the amount of media use, and by supporting parental rule-making regarding children's eating habits and media use.

  1. Visualizing complex processes using a cognitive-mapping tool to support the learning of clinical reasoning.

    PubMed

    Wu, Bian; Wang, Minhong; Grotzer, Tina A; Liu, Jun; Johnson, Janice M

    2016-08-22

    Practical experience with clinical cases has played an important role in supporting the learning of clinical reasoning. However, learning through practical experience involves complex processes that are difficult for students to capture. This study aimed to examine the effects of a computer-based cognitive-mapping approach that helps students to externalize the reasoning process and the knowledge underlying the reasoning process when they work with clinical cases. A comparison between the cognitive-mapping approach and the verbal-text approach was made by analyzing their effects on learning outcomes. Fifty-two third-year or higher students from two medical schools participated in the study. Students in the experimental group used the computer-based cognitive-mapping approach, while the control group used the verbal-text approach, to make sense of their thinking and actions when they worked with four simulated cases over 4 weeks. For each case, students in both groups reported their reasoning process (involving data capture, hypotheses formulation, and reasoning with justifications) and the underlying knowledge (involving identified concepts and the relationships between the concepts) using the given approach. The learning products (cognitive maps or verbal text) revealed that students in the cognitive-mapping group outperformed those in the verbal-text group in the reasoning process, but not in making sense of the knowledge underlying the reasoning process. No significant differences were found in a knowledge posttest between the two groups. The computer-based cognitive-mapping approach has shown a promising advantage over the verbal-text approach in improving students' reasoning performance. Further studies are needed to examine the effects of the cognitive-mapping approach in improving the construction of subject-matter knowledge on the basis of practical experience.

  2. Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers

    PubMed Central

    Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin

    2017-01-01

    Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation. PMID:28824513

  3. Atomic temporal interval relations in branching time: calculation and application

    NASA Astrophysics Data System (ADS)

    Anger, Frank D.; Ladkin, Peter B.; Rodriguez, Rita V.

    1991-03-01

    A practical method of reasoning about intervals in a branching-time model which is dense, unbounded, future-branching, and without rejoining branches is presented. The discussion is based on heuristic constraint-propagation techniques using the relation algebra of binary temporal relations among the intervals over the branching-time model. This technique has been applied with success to models of intervals over linear time by Allen and others, and is of cubic-time complexity. To extend it to branching-time models, it is necessary to calculate compositions of the relations; thus, the table of compositions for the 'atomic' relations is computed, enabling the rapid determination of the composition of arbitrary relations, expressed as disjunctions or unions of the atomic relations.
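    The composition-table machinery the abstract relies on can be illustrated on the much smaller point algebra (three atomic relations <, =, > instead of Allen's thirteen interval relations or the larger branching-time set); the propagation loop is the same cubic-time path-consistency scheme. A sketch, not the paper's algebra:

    ```python
    ALL = frozenset('<=>')  # the three atomic point relations

    # Composition table: how x R1 y and y R2 z constrain x ? z.
    COMP = {
        ('<', '<'): frozenset('<'), ('<', '='): frozenset('<'), ('<', '>'): ALL,
        ('=', '<'): frozenset('<'), ('=', '='): frozenset('='), ('=', '>'): frozenset('>'),
        ('>', '<'): ALL,            ('>', '='): frozenset('>'), ('>', '>'): frozenset('>'),
    }

    def compose(r1, r2):
        """Composition of disjunctive relations: union over atomic pairs."""
        out = set()
        for a in r1:
            for b in r2:
                out |= COMP[(a, b)]
        return frozenset(out)

    def path_consistency(n, rel):
        """Tighten rel[(i, k)] by rel[(i, j)] o rel[(j, k)] until fixed point.

        rel maps every ordered pair of distinct nodes to a set of atomic
        relations (converse entries must be supplied consistently).
        """
        changed = True
        while changed:
            changed = False
            for i in range(n):
                for j in range(n):
                    for k in range(n):
                        if len({i, j, k}) < 3:
                            continue
                        tightened = rel[(i, k)] & compose(rel[(i, j)], rel[(j, k)])
                        if tightened != rel[(i, k)]:
                            rel[(i, k)] = tightened
                            changed = True
        return rel
    ```

    Precomputing the atomic composition table is exactly what makes the composition of arbitrary disjunctive relations fast: each disjunction composes as a union of table lookups.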

  4. A BCM theory of meta-plasticity for online self-reorganizing fuzzy-associative learning.

    PubMed

    Tan, Javan; Quek, Chai

    2010-06-01

    Self-organizing neurofuzzy approaches have matured in their online learning of fuzzy-associative structures under time-invariant conditions. To maximize their operative value for online reasoning, these self-sustaining mechanisms must also be able to reorganize fuzzy-associative knowledge in real-time dynamic environments. Hence, it is critical to recognize that they would require self-reorganizational skills to rebuild fluid associative structures when their existing organizations fail to respond well to changing circumstances. In this light, while Hebbian theory (Hebb, 1949) is the basic computational framework for associative learning, it is less attractive for time-variant online learning because it suffers from stability limitations that impede unlearning. Instead, this paper adopts the Bienenstock-Cooper-Munro (BCM) theory of neurological learning via meta-plasticity principles (Bienenstock et al., 1982) that provides for both online associative and dissociative learning. For almost three decades, BCM theory has been shown to effectively brace physiological evidence of synaptic potentiation (association) and depression (dissociation) into a sound mathematical framework for computational learning. This paper proposes an interpretation of the BCM theory of meta-plasticity for an online self-reorganizing fuzzy-associative learning system to realize online-reasoning capabilities. Experimental findings are twofold: 1) the analysis using S&P-500 stock index illustrated that the self-reorganizing approach could follow the trajectory shifts in the time-variant S&P-500 index for about 60 years, and 2) the benchmark profiles showed that the fuzzy-associative approach yielded comparable results with other fuzzy-precision models with similar online objectives.
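    The core BCM update the paper builds on is compact: the weight change is proportional to x·y·(y − θ), where the modification threshold θ slides with a running average of squared activity. A minimal scalar sketch (one synapse, linear unit; the constants are illustrative, not from the paper or the fuzzy system):

    ```python
    def bcm_train(inputs, w=0.5, eta=0.01, tau=0.99):
        """BCM rule with a sliding meta-plastic threshold.

        dw = eta * x * y * (y - theta): potentiation when the response y
        exceeds theta, depression when it falls below it (the dissociative,
        'unlearning' side that a plain Hebbian update lacks).
        """
        theta = 0.0
        for x in inputs:
            y = w * x                                  # linear response
            w += eta * x * y * (y - theta)             # associative/dissociative step
            theta = tau * theta + (1 - tau) * y * y    # threshold tracks E[y^2]
        return w, theta
    ```

    Because θ rises with recent activity, sustained potentiation is self-limiting, which is the stability property the paper exploits for online reorganization.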

  5. The vectorization of a ray tracing program for image generation

    NASA Technical Reports Server (NTRS)

    Plunkett, D. J.; Cychosz, J. M.; Bailey, M. J.

    1984-01-01

    Ray tracing is a widely used method for producing realistic computer generated images. Ray tracing involves firing an imaginary ray from a view point, through a point on an image plane, into a three dimensional scene. The intersections of the ray with the objects in the scene determine what is visible at the point on the image plane. This process must be repeated many times, once for each point (commonly called a pixel) in the image plane. A typical image contains more than a million pixels, making this process computationally expensive. A traditional ray tracing program processes one ray at a time. In such a serial approach, as much as ninety percent of the execution time is spent computing the intersection of a ray with the surfaces in the scene. With the CYBER 205, many rays can be intersected with all the bodies in the scene with a single series of vector operations. Vectorization of this intersection process results in large decreases in computation time. The CADLAB's interest in ray tracing stems from the need to produce realistic images of mechanical parts. A high quality image of a part during the design process can increase the productivity of the designer by helping him visualize the results of his work. To be useful in the design process, these images must be produced in a reasonable amount of time. This discussion will explain how the ray tracing process was vectorized and gives examples of the images obtained.
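    The vectorization idea carries over directly to NumPy: intersect a whole batch of rays with a sphere in a handful of array operations instead of a per-ray loop. A sketch under the assumption of unit-length direction vectors (not the CYBER 205 code itself):

    ```python
    import numpy as np

    def ray_sphere_hits(origins, dirs, center, radius):
        """Intersect many rays with one sphere in a single batch of
        vector operations.

        origins, dirs: (n, 3) arrays; dirs assumed unit length.
        Returns an (n,) array of nearest positive hit distances, inf for misses.
        """
        oc = origins - center                       # (n, 3) offsets to center
        b = np.einsum('ij,ij->i', oc, dirs)         # per-ray dot products
        c = np.einsum('ij,ij->i', oc, oc) - radius ** 2
        disc = b * b - c                            # quadratic discriminant
        t = np.full(len(origins), np.inf)
        hit = disc >= 0                             # mask of rays that intersect
        root = np.sqrt(disc[hit])
        t_near = -b[hit] - root                     # nearer of the two roots
        t_far = -b[hit] + root
        t[hit] = np.where(t_near > 0, t_near,
                          np.where(t_far > 0, t_far, np.inf))
        return t
    ```

    One such call replaces an n-iteration scalar loop, which is exactly the restructuring described for the intersection stage.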

  6. Ensemble Sampling vs. Time Sampling in Molecular Dynamics Simulations of Thermal Conductivity

    DOE PAGES

    Gordiz, Kiarash; Singh, David J.; Henry, Asegun

    2015-01-29

    In this report we compare time sampling and ensemble averaging as two different methods available for phase space sampling. For the comparison, we calculate thermal conductivities of solid argon and silicon structures, using equilibrium molecular dynamics. We introduce two different schemes for the ensemble averaging approach, and show that both can reduce the total simulation time as compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase space exploration. Although our methodology is tested using classical molecular dynamics, the ensemble generation approaches may find their greatest utility in computationally expensive simulations such as first principles molecular dynamics. For such simulations, where each time step is costly, time sampling can require long simulation times because each time step must be evaluated sequentially and therefore phase space averaging is achieved through sequential operations. On the other hand, with ensemble averaging, phase space sampling can be achieved through parallel operations, since each ensemble is independent. For this reason, particularly when using massively parallel architectures, ensemble sampling can result in much shorter simulation times and exhibits similar overall computational effort.
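    The trade-off described here can be mimicked with a toy correlated observable: one long trajectory sampled sequentially versus many short, independently seeded trajectories whose generation could be farmed out in parallel. This is only a schematic stand-in for an MD thermal-conductivity calculation; the AR(1) process and all constants are illustrative.

    ```python
    import random

    def ar1(n, phi=0.9, seed=None):
        """Correlated toy time series standing in for an MD observable."""
        rng = random.Random(seed)
        x, out = 0.0, []
        for _ in range(n):
            x = phi * x + rng.gauss(0.0, 1.0)
            out.append(x)
        return out

    # Time sampling: one long trajectory, necessarily sequential.
    time_avg = sum(ar1(20000, seed=1)) / 20000

    # Ensemble sampling: many short, independently seeded trajectories;
    # each could run on its own node, so wall-clock time shrinks even
    # though the total number of samples is the same.
    runs = [ar1(500, seed=s) for s in range(40)]
    ens_avg = sum(sum(r) for r in runs) / (40 * 500)
    ```

    Both estimators use 20,000 samples and converge to the same mean; only the ensemble version exposes the embarrassing parallelism the report highlights.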

  7. The indexed time table approach for planning and acting

    NASA Technical Reports Server (NTRS)

    Ghallab, Malik; Alaoui, Amine Mounir

    1989-01-01

    A representation is discussed of symbolic temporal relations, called IxTeT, that is both powerful enough at the reasoning level for tasks such as plan generation, refinement and modification, and efficient enough for dealing with real time constraints in action monitoring and reactive planning. Such representation for dealing with time is needed in a teleoperated space robot. After a brief survey of known approaches, the proposed representation shows its computational efficiency for managing a large data base of temporal relations. Reactive planning with IxTeT is described and exemplified through the problem of mission planning and modification for a simple surveying satellite.

  8. Principal Investigator in a Box Technical Description Document. 2.0

    NASA Technical Reports Server (NTRS)

    Groleau, Nick; Frainier, Richard

    1994-01-01

    This document provides a brief overview of the PI-in-a-Box system, which can be used for automatic real-time reaction to incoming data. We will therefore outline the current system's capabilities and limitations, and hint at how best to think about PI-in-a-Box as a tool for real-time analysis and reaction in section two, below. We also believe that the solution to many commercial real-time process problems requires data acquisition and analysis combined with rule-based reasoning and/or an intuitive user interface. We will develop the technology reuse potential in section three. Currently, the system runs only on Apple Computer's Macintosh series.

  9. Super resolution reconstruction of infrared images based on classified dictionary learning

    NASA Astrophysics Data System (ADS)

    Liu, Fei; Han, Pingli; Wang, Yi; Li, Xuan; Bai, Lu; Shao, Xiaopeng

    2018-05-01

    Infrared images always suffer from low-resolution problems resulting from limitations of imaging devices. An economical approach to combat this problem involves reconstructing high-resolution images by reasonable methods without updating devices. Inspired by compressed sensing theory, this study presents and demonstrates a Classified Dictionary Learning method to reconstruct high-resolution infrared images. It classifies features of the samples into several reasonable clusters and trains a dictionary pair for each cluster. The optimal pair of dictionaries is chosen for each image reconstruction and therefore, more satisfactory results are achieved without an increase in computational complexity and time cost. Experiments and results demonstrated that it is a viable method for infrared image reconstruction since it improves image resolution and recovers detailed information of targets.

  10. Interesting viewpoints to those who will put Ada into practice

    NASA Technical Reports Server (NTRS)

    Carlsson, Arne

    1986-01-01

    Ada will most probably be used as the programming language for computers in the NASA Space Station. It is reasonable to suppose that Ada will be used for at least embedded computers, because the high software costs for these embedded computers were the reason why Ada activities were initiated about ten years ago. The on-board computers are designed for use in space applications, where maintenance by man is impossible. All manipulation of such computers has to be performed in an autonomous way or remote with commands from the ground. In a manned Space Station some maintenance work can be performed by service people on board, but there are still a lot of applications which require autonomous computers, for example, vital Space Station functions and unmanned orbital transfer vehicles. Those aspects which have come out of the analysis of Ada characteristics together with the experience of requirements for embedded on-board computers in space applications are examined.

  11. Computer-Game Construction: A Gender-Neutral Attractor to Computing Science

    ERIC Educational Resources Information Center

    Carbonaro, Mike; Szafron, Duane; Cutumisu, Maria; Schaeffer, Jonathan

    2010-01-01

    Enrollment in Computing Science university programs is at a dangerously low level. A major reason for this is the general lack of interest in Computing Science by females. In this paper, we discuss our experience with using a computer game construction environment as a vehicle to encourage female participation in Computing Science. Experiments…

  12. Predicting Development of Mathematical Word Problem Solving Across the Intermediate Grades

    PubMed Central

    Tolar, Tammy D.; Fuchs, Lynn; Cirino, Paul T.; Fuchs, Douglas; Hamlett, Carol L.; Fletcher, Jack M.

    2012-01-01

    This study addressed predictors of the development of word problem solving (WPS) across the intermediate grades. At beginning of 3rd grade, 4 cohorts of students (N = 261) were measured on computation, language, nonverbal reasoning skills, and attentive behavior and were assessed 4 times from beginning of 3rd through end of 5th grade on 2 measures of WPS at low and high levels of complexity. Language skills were related to initial performance at both levels of complexity and did not predict growth at either level. Computational skills had an effect on initial performance in low- but not high-complexity problems and did not predict growth at either level of complexity. Attentive behavior did not predict initial performance but did predict growth in low-complexity, whereas it predicted initial performance but not growth for high-complexity problems. Nonverbal reasoning predicted initial performance and growth for low-complexity WPS, but only growth for high-complexity WPS. This evidence suggests that although mathematical structure is fixed, different cognitive resources may act as limiting factors in WPS development when the WPS context is varied. PMID:23325985

  13. Thermodynamic heuristics with case-based reasoning: combined insights for RNA pseudoknot secondary structure.

    PubMed

    Al-Khatib, Ra'ed M; Rashid, Nur'Aini Abdul; Abdullah, Rosni

    2011-08-01

    The secondary structure of RNA pseudoknots has been extensively inferred and scrutinized by computational approaches. Experimental methods for determining RNA structure are time consuming and tedious; therefore, predictive computational approaches are required. Predicting the most accurate and energy-stable pseudoknot RNA secondary structure has been proven to be an NP-hard problem. In this paper, a new RNA folding approach, termed MSeeker, is presented; it includes KnotSeeker (a heuristic method) and Mfold (a thermodynamic algorithm). The global optimization of this thermodynamic heuristic approach was further enhanced by using a case-based reasoning technique as a local optimization method. MSeeker is a proposed algorithm for predicting RNA pseudoknot structure from individual sequences, especially long ones. This research demonstrates that MSeeker improves the sensitivity and specificity of existing RNA pseudoknot structure predictions. The performance and structural results from this proposed method were evaluated against seven other state-of-the-art pseudoknot prediction methods. The MSeeker method had better sensitivity than the DotKnot, FlexStem, HotKnots, pknotsRG, ILM, NUPACK and pknotsRE methods, with 79% of the predicted pseudoknot base-pairs being correct.

  14. GPU-accelerated compressed-sensing (CS) image reconstruction in chest digital tomosynthesis (CDT) using CUDA programming

    NASA Astrophysics Data System (ADS)

    Choi, Sunghoon; Lee, Haenghwa; Lee, Donghoon; Choi, Seungyeon; Shin, Jungwook; Jang, Woojin; Seo, Chang-Woo; Kim, Hee-Joung

    2017-03-01

    A compressed-sensing (CS) technique has been rapidly applied in the medical imaging field for retrieving volumetric data from highly under-sampled projections. Among many variant forms, the CS technique based on a total-variation (TV) regularization strategy shows fairly reasonable results in cone-beam geometry. In this study, we implemented the TV-based CS image reconstruction strategy in our prototype chest digital tomosynthesis (CDT) R/F system. Due to the iterative nature of the time-consuming processes in solving a cost function, we took advantage of parallel computing on graphics processing units (GPU) with compute unified device architecture (CUDA) programming to accelerate our algorithm. In order to compare the algorithmic performance of our proposed CS algorithm, conventional filtered back-projection (FBP) and simultaneous algebraic reconstruction technique (SART) reconstruction schemes were also studied. The results indicated that the CS produced better contrast-to-noise ratios (CNRs) in the physical phantom images (Teflon region-of-interest) by factors of 3.91 and 1.93 than the FBP and SART images, respectively. The resulting human chest phantom images, including lung nodules with different diameters, also showed better visual appearance in the CS images. Our proposed GPU-accelerated CS reconstruction scheme could produce volumetric data up to 80 times faster than CPU programming. Total elapsed time for producing 50 coronal planes with a 1024×1024 image matrix using 41 projection views was 216.74 seconds for the proposed CS algorithm with our GPU programming, which could match the clinically feasible time (∼3 min). Consequently, our results demonstrated that the proposed CS method showed a potential of additional dose reduction in digital tomosynthesis with reasonable image quality in a fast reconstruction time.

  15. Real-Time linux dynamic clamp: a fast and flexible way to construct virtual ion channels in living cells.

    PubMed

    Dorval, A D; Christini, D J; White, J A

    2001-10-01

    We describe a system for real-time control of biological and other experiments. This device, based around the Real-Time Linux operating system, was tested specifically in the context of dynamic clamping, a demanding real-time task in which a computational system mimics the effects of nonlinear membrane conductances in living cells. The system is fast enough to represent dozens of nonlinear conductances in real time at clock rates well above 10 kHz. Conductances can be represented in deterministic form, or more accurately as discrete collections of stochastically gating ion channels. Tests were performed using a variety of complex models of nonlinear membrane mechanisms in excitable cells, including simulations of spatially extended excitable structures, and multiple interacting cells. Only in extreme cases does the computational load interfere with high-speed "hard" real-time processing (i.e., real-time processing that never falters). Freely available on the worldwide web, this experimental control system combines good performance, immense flexibility, low cost, and reasonable ease of use. It is easily adapted to any task involving real-time control, and excels in particular for applications requiring complex control algorithms that must operate at speeds over 1 kHz.
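    A dynamic-clamp loop, at its core, reads the membrane potential each tick, advances the gating state of the virtual conductance, and injects the corresponding current. A deterministic single-gate sketch with made-up rate constants (the real system also supports stochastic channel collections and runs at clock rates above 10 kHz):

    ```python
    import math

    def dynamic_clamp_step(V, m, dt, gmax=10.0, E_rev=-77.0):
        """One update of a virtual voltage-gated, K-like conductance.

        V: measured membrane potential (mV); m: gating state in [0, 1];
        dt: time step (ms). Returns (current_to_inject, new_m). The
        activation curve and time constant are illustrative, not from
        any particular channel model.
        """
        m_inf = 1.0 / (1.0 + math.exp(-(V + 30.0) / 10.0))  # steady-state activation
        tau_m = 2.0                                          # ms, fixed for simplicity
        m_new = m + dt * (m_inf - m) / tau_m                 # forward Euler gate update
        I = gmax * m_new * (V - E_rev)                       # virtual channel current
        return -I, m_new                                     # inject the negated current
    ```

    In a hard real-time loop this function must complete, together with DAC/ADC transfers, inside every tick; that deadline is what the Real-Time Linux scheduling guarantees.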

  16. Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duque, Earl P.N.; Whitlock, Brad J.

    High performance computers have for many years been on a trajectory that gives them extraordinary compute power with the addition of more and more compute cores. At the same time, other system parameters such as the amount of memory per core and bandwidth to storage have remained constant or have barely increased. This creates an imbalance in the computer, giving it the ability to compute a lot of data that it cannot reasonably save out due to time and storage constraints. While technologies have been invented to mitigate this problem (burst buffers, etc.), software has been adapting to employ in situ libraries which perform data analysis and visualization on simulation data while it is still resident in memory. This avoids the need to ever have to pay the costs of writing many terabytes of data files. Instead, in situ enables the creation of more concentrated data products such as statistics, plots, and data extracts, which are all far smaller than the full-sized volume data. With the increasing popularity of in situ, multiple in situ infrastructures have been created, each with its own mechanism for integrating with a simulation. To make it easier to instrument a simulation with multiple in situ infrastructures and include custom analysis algorithms, this project created the SENSEI framework.

  17. Simultaneous analysis of large INTEGRAL/SPI datasets: Optimizing the computation of the solution and its variance using sparse matrix algorithms

    NASA Astrophysics Data System (ADS)

    Bouchet, L.; Amestoy, P.; Buttari, A.; Rouet, F.-H.; Chauvin, M.

    2013-02-01

    Nowadays, analyzing and reducing the ever larger astronomical datasets is becoming a crucial challenge, especially for long cumulated observation times. The INTEGRAL/SPI X/γ-ray spectrometer is an instrument for which it is essential to process many exposures at the same time in order to increase the low signal-to-noise ratio of the weakest sources. In this context, the conventional methods for data reduction are inefficient and sometimes not feasible at all. Processing several years of data simultaneously requires computing not only the solution of a large system of equations, but also the associated uncertainties. We aim at reducing the computation time and the memory usage. Since the SPI transfer function is sparse, we have used some popular methods for the solution of large sparse linear systems; we briefly review these methods. We use the Multifrontal Massively Parallel Solver (MUMPS) to compute the solution of the system of equations. We also need to compute the variance of the solution, which amounts to computing selected entries of the inverse of the sparse matrix corresponding to our linear system. This can be achieved through one of the latest features of the MUMPS software that has been partly motivated by this work. In this paper we provide a brief presentation of this feature and evaluate its effectiveness on astrophysical problems requiring the processing of large datasets simultaneously, such as the study of the entire emission of the Galaxy. We used these algorithms to solve the large sparse systems arising from SPI data processing and to obtain both their solutions and the associated variances. In conclusion, thanks to these newly developed tools, processing large datasets arising from SPI is now feasible with both a reasonable execution time and a low memory usage.
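    The two computations described, solving the sparse system and extracting selected entries of its inverse for the variances, can be sketched with SciPy's sparse LU factorization on a toy system. MUMPS plays the factorization role at the real problem's scale, and its selected-inverse feature avoids the brute-force unit-vector solves used here; the matrix below is an illustrative stand-in.

    ```python
    import numpy as np
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import splu

    # Toy sparse symmetric system A x = b standing in for the SPI equations.
    A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                             [1.0, 3.0, 1.0],
                             [0.0, 1.0, 2.0]]))
    b = np.array([1.0, 2.0, 3.0])

    lu = splu(A)               # sparse LU factorization (done once)
    x = lu.solve(b)            # the solution of the linear system

    # Variances: diagonal entries of A^{-1}, obtained here by solving
    # against unit vectors and reading off one component per solve.
    var = np.array([lu.solve(np.eye(3)[:, i])[i] for i in range(3)])
    ```

    Factoring once and reusing the factors for many right-hand sides is what keeps both the execution time and the memory usage reasonable.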

  18. Solving the Coupled System Improves Computational Efficiency of the Bidomain Equations

    PubMed Central

    Southern, James A.; Plank, Gernot; Vigmond, Edward J.; Whiteley, Jonathan P.

    2017-01-01

    The bidomain equations are frequently used to model the propagation of cardiac action potentials across cardiac tissue. At the whole organ level the size of the computational mesh required makes their solution a significant computational challenge. As the accuracy of the numerical solution cannot be compromised, efficiency of the solution technique is important to ensure that the results of the simulation can be obtained in a reasonable time whilst still encapsulating the complexities of the system. In an attempt to increase efficiency of the solver, the bidomain equations are often decoupled into one parabolic equation that is computationally very cheap to solve and an elliptic equation that is much more expensive to solve. In this study the performance of this uncoupled solution method is compared with an alternative strategy in which the bidomain equations are solved as a coupled system. This seems counter-intuitive as the alternative method requires the solution of a much larger linear system at each time step. However, in tests on two 3-D rabbit ventricle benchmarks it is shown that the coupled method is up to 80% faster than the conventional uncoupled method — and that parallel performance is better for the larger coupled problem. PMID:19457741

  19. A solar radiation model for use in climate studies

    NASA Technical Reports Server (NTRS)

    Chou, Ming-Dah

    1992-01-01

    A solar radiation routine is developed for use in climate studies that includes absorption and scattering due to ozone, water vapor, oxygen, carbon dioxide, clouds, and aerosols. Rayleigh scattering is also included. Broadband parameterization is used to compute the absorption by water vapor in a clear atmosphere, and the k-distribution method is applied to compute fluxes in a scattering atmosphere. The reflectivity and transmissivity of a scattering layer are computed analytically using the delta-four-stream discrete-ordinate approximation. The two-stream adding method is then applied to compute fluxes for a composite of clear and scattering layers. Compared to the results of high spectral resolution and detailed multiple-scattering calculations, fluxes and heating rate are accurately computed to within a few percent. The high accuracy of the flux and heating-rate calculations is achieved with a reasonable amount of computing time. With the UV and visible region grouped into four bands, this solar radiation routine is useful not only for climate studies but also for studies on photolysis in the upper atmosphere and photosynthesis in the biosphere.
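    The adding step at the heart of such a routine combines the reflectivity and transmissivity of two layers by summing the geometric series of inter-layer bounces. A sketch for symmetric (same reflectance viewed from either side), azimuthally averaged layers; the routine described in the abstract additionally distinguishes direct and diffuse beams and uses the delta-four-stream layer properties as inputs.

    ```python
    def add_layers(R1, T1, R2, T2):
        """Combine two scattering layers by the adding method.

        Multiple reflections between the layers form a geometric series
        with ratio R1*R2, summed by the factor 1/(1 - R1*R2).
        """
        s = 1.0 / (1.0 - R1 * R2)      # sum of inter-layer bounces
        R = R1 + T1 * T1 * R2 * s      # combined reflectance from above
        T = T1 * T2 * s                # combined transmittance
        return R, T
    ```

    A useful sanity check: for two conservative (non-absorbing) layers with R + T = 1 each, the combined layer is also conservative.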

  20. Terascale Computing in Accelerator Science and Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ko, Kwok

    2002-08-21

    We have entered the age of ''terascale'' scientific computing. Processors and system architecture both continue to evolve; hundred-teraFLOP computers are expected in the next few years, and petaFLOP computers toward the end of this decade are conceivable. This ever-increasing power to solve previously intractable numerical problems benefits almost every field of science and engineering and is revolutionizing some of them, notably including accelerator physics and technology. At existing accelerators, it will help us optimize performance, expand operational parameter envelopes, and increase reliability. Design decisions for next-generation machines will be informed by unprecedented comprehensive and accurate modeling, as well as computer-aidedmore » engineering; all this will increase the likelihood that even their most advanced subsystems can be commissioned on time, within budget, and up to specifications. Advanced computing is also vital to developing new means of acceleration and exploring the behavior of beams under extreme conditions. With continued progress it will someday become reasonable to speak of a complete numerical model of all phenomena important to a particular accelerator.« less

  1. Computer and Internet use among Undergraduate Medical Students in Iran

    PubMed Central

    Ayatollahi, Ali; Ayatollahi, Jamshid; Ayatollahi, Fatemeh; Ayatollahi, Reza; Shahcheraghi, Seyed Hossein

    2014-01-01

    Objective: Although computer technologies are now widely used in medicine, little is known about their use among medical students in Iran. The aim of this study was to determine medical students' competence in, and access to, computers and the internet. Methods: In this descriptive study, fifth-year medical students of Shahid Sadoughi University of Medical Sciences, Yazd, Iran, were asked to answer a questionnaire during a time-tabled lecture slot. The chi-square test was used to compare the frequency of computer and internet use between the two genders, and the level of statistical significance for all tests was set at 0.05. Results: All the students had a personal computer and internet access. There were no statistically significant differences between men and women in computer and internet access, use of wireless devices to access the internet, having a laptop and an e-mail address, or the difficulties encountered using the internet. The main reason for lower utilization of the internet was the slow speed of data transfer. Conclusions: Because of the wide range of computer and internet skills among medical students in our institution, a single computer and internet course for all students would be neither useful nor accepted. PMID:25225525

  2. Active flutter suppression using optimal output feedback digital controllers

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A method for synthesizing digital active flutter suppression controllers using the concept of optimal output feedback is presented. A convergent algorithm is employed to determine constrained control law parameters that minimize an infinite time discrete quadratic performance index. Low order compensator dynamics are included in the control law and the compensator parameters are computed along with the output feedback gain as part of the optimization process. An input noise adjustment procedure is used to improve the stability margins of the digital active flutter controller. Sample rate variation, prefilter pole variation, control structure variation and gain scheduling are discussed. A digital control law which accommodates computation delay can stabilize the wing with reasonable rms performance and adequate stability margins.

  3. Multiple DNA and protein sequence alignment on a workstation and a supercomputer.

    PubMed

    Tajima, K

    1988-11-01

    This paper describes a multiple alignment method using a workstation and supercomputer. The method aligns a new sequence with a set of already aligned sequences, and applies this procedure recursively. With parallel processing, alignments of good quality are obtained in reasonable computation time on machines ranging from a workstation to a supercomputer. The application of the algorithm is illustrated by several examples of multiple alignment of 12 amino acid and DNA sequences of HIV (human immunodeficiency virus) env genes. Colour graphic programs on a workstation and parallel processing on a supercomputer are discussed.
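The pairwise dynamic-programming core that such alignment methods build on can be sketched as a generic Needleman-Wunsch scorer (the scoring values here are illustrative, not those of the paper):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score of sequences a and b via dynamic programming."""
    n, m = len(a), len(b)
    # F[i][j] = best score aligning a[:i] with b[:j]
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # match/mismatch
                          F[i - 1][j] + gap,     # gap in b
                          F[i][j - 1] + gap)     # gap in a
    return F[n][m]
```

Aligning a new sequence against a profile of already aligned sequences replaces the per-character score with a column score, but the recurrence is the same.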

  4. Towards An Engineering Discipline of Computational Security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mili, Ali; Sheldon, Frederick T; Jilani, Lamia Labed

    2007-01-01

    George Boole ushered in the era of modern logic by arguing that logical reasoning does not fall in the realm of philosophy, as it was considered up to his time, but in the realm of mathematics. As such, logical propositions and logical arguments are modeled using algebraic structures. Likewise, we submit that security attributes must be modeled as formal mathematical propositions that are subject to mathematical analysis. In this paper, we approach this problem by attempting to model security attributes in a refinement-like framework that has traditionally been used to represent reliability and safety claims. Keywords: Computable security attributes, survivability, integrity, dependability, reliability, safety, security, verification, testing, fault tolerance.

  5. Real-time implementation of an interactive jazz accompaniment system

    NASA Astrophysics Data System (ADS)

    Deshpande, Nikhil

    Modern computational algorithms and digital signal processing (DSP) are able to combine with human performers without forced or predetermined structure in order to create dynamic and real-time accompaniment systems. With modern computing power and intelligent algorithm layout and design, it is possible to achieve more detailed auditory analysis of live music. Using this information, computer code can follow and predict how a human's musical performance evolves, and use this to react in a musical manner. This project builds a real-time accompaniment system to perform together with live musicians, with a focus on live jazz performance and improvisation. The system utilizes a new polyphonic pitch detector and embeds it in an Ableton Live system - combined with Max for Live - to perform elements of audio analysis, generation, and triggering. The system also relies on tension curves and information rate calculations from the Creative Artificially Intuitive and Reasoning Agent (CAIRA) system to help understand and predict human improvisation. These metrics are vital to the core system and allow for extrapolated audio analysis. The system is able to react dynamically to a human performer, and can successfully accompany the human as an entire rhythm section.

  6. Role of Computational Fluid Dynamics and Wind Tunnels in Aeronautics R and D

    NASA Technical Reports Server (NTRS)

    Malik, Murjeeb R.; Bushnell, Dennis M.

    2012-01-01

    The purpose of this report is to investigate the status and future projections for the question of supplantation of wind tunnels by computation in design and to intuit the potential impact of computation approaches on wind-tunnel utilization all with an eye toward reducing the infrastructure cost at aeronautics R&D centers. Wind tunnels have been closing for myriad reasons, and such closings have reduced infrastructure costs. Further cost reductions are desired, and the work herein attempts to project which wind-tunnel capabilities can be replaced in the future and, if possible, the timing of such. If the possibility exists to project when a facility could be closed, then maintenance and other associated costs could be rescheduled accordingly (i.e., before the fact) to obtain an even greater infrastructure cost reduction.

  7. A design for integration.

    PubMed

    Fenna, D

    1977-09-01

    For nearly two decades, the development of computerized information systems has struggled for acceptable compromises between the unattainable "total system" and the unacceptable separate applications. Integration of related applications is essential if the computer is to be exploited fully, yet relative simplicity is necessary for systems to be implemented in a reasonable time-scale. This paper discusses a system being progressively developed from minimal beginnings but which, from the outset, had a highly flexible and fully integrated system basis. The system is for batch processing, but can accommodate on-line data input; it is similar in its approach to many transaction-processing real-time systems.

  8. Preliminary description of the area navigation software for a microcomputer-based Loran-C receiver

    NASA Technical Reports Server (NTRS)

    Oguri, F.

    1983-01-01

    The development of new navigation software, and its implementation on a microcomputer (MOS 6502) to provide high-quality navigation information, is described. The software provides Area/Route Navigation (RNAV) information from Time Differences (TDs) in raw form, using both an elliptical and a spherical Earth model, and is prepared for the microcomputer-based Loran-C receiver. To compute navigation information, a (MOS 6502) microcomputer and a mathematical chip (AM 9511A) were combined with the Loran-C receiver. Final data reveal that this software does indeed provide accurate information with reasonable execution times.

  9. An efficient temporal logic for robotic task planning

    NASA Technical Reports Server (NTRS)

    Becker, Jeffrey M.

    1989-01-01

    Computations required for temporal reasoning can be prohibitively expensive if fully general representations are used. Overly simple representations, such as totally ordered sequence of time points, are inadequate for use in a nonlinear task planning system. A middle ground is identified which is general enough to support a capable nonlinear task planner, but specialized enough that the system can support online task planning in real time. A Temporal Logic System (TLS) was developed during the Intelligent Task Automation (ITA) project to support robotic task planning. TLS is also used within the ITA system to support plan execution, monitoring, and exception handling.

  10. A Case against Computer Symbolic Manipulation in School Mathematics Today.

    ERIC Educational Resources Information Center

    Waits, Bert K.; Demana, Franklin

    1992-01-01

    Presented are two reasons discouraging computer symbol manipulation systems use in school mathematics at present: cost for computer laboratories or expensive pocket computers; and impracticality of exact solution representations. Although development with this technology in mathematics education advances, graphing calculators are recommended to…

  11. 77 FR 58576 - Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-21

    ... Devices, Portable Music and Data Processing Devices, Computers, and Components Thereof; Institution of... communication devices, portable music and data processing devices, computers, and components thereof by reason... certain wireless communication devices, portable music and data processing devices, computers, and...

  12. Are human beings humean robots?

    NASA Astrophysics Data System (ADS)

    Génova, Gonzalo; Quintanilla Navarro, Ignacio

    2018-01-01

    David Hume, the Scottish philosopher, conceives reason as the slave of the passions, which implies that human reason has predetermined objectives it cannot question. An essential element of an algorithm running on a computational machine (or Logical Computing Machine, as Alan Turing calls it) is its having a predetermined purpose: an algorithm cannot question its purpose, because it would cease to be an algorithm. Therefore, if self-determination is essential to human intelligence, then human beings are neither Humean beings, nor computational machines. We examine also some objections to the Turing Test as a model to understand human intelligence.

  13. A new Gaussian MCTDH program: Implementation and validation on the levels of the water and glycine molecules

    NASA Astrophysics Data System (ADS)

    Skouteris, D.; Barone, V.

    2014-06-01

    We report the main features of a new general implementation of the Gaussian Multi-Configuration Time-Dependent Hartree model. The code allows effective computations of time-dependent phenomena, including calculation of vibronic spectra (in one or more electronic states), relative state populations, etc. Moreover, by expressing the Dirac-Frenkel variational principle in terms of an effective Hamiltonian, we are able to provide a new reliable estimate of the representation error. After validating the code on simple one-dimensional systems, we analyze the harmonic and anharmonic vibrational spectra of water and glycine showing that reliable and converged energy levels can be obtained with reasonable computing resources. The data obtained on water and glycine are compared with results of previous calculations using the vibrational second-order perturbation theory method. Additional features and perspectives are also shortly discussed.

  14. Professional Computer Education Organizations--A Resource for Administrators.

    ERIC Educational Resources Information Center

    Ricketts, Dick

    Professional computer education organizations serve a valuable function by generating, collecting, and disseminating information concerning the role of the computer in education. This report touches briefly on the reasons for the rapid and successful development of professional computer education organizations. A number of attributes of effective…

  15. Translations on Eastern Europe, Scientific Affairs, Number 590

    DTIC Science & Technology

    1978-06-28

    time amounting to hardly a few percent. So their number is by no means small. Attention should be intensified ... operational analysis of the reasons ... management is increasingly more dependent on the state of the plant information system and the ability of managers to use it for making appropriate ... to follow through and analyze certain aspects of the use of computer equipment in the country and draw a few conclusions and make a few assessments

  16. On the Role of Built-in Electric Fields on the Ignition of Oxide Coated NanoAluminum: Ion Mobility versus Fickian Diffusion

    DTIC Science & Technology

    2010-01-01

    on Al ion diffusion can be computed using the Nernst–Planck equation. The Nernst–Planck equation is given in Eq. 4: J = −D dC/dx − (zFDC/RT) dφ/dx ... The use of the bulk diffusion equation is reasonable since, during the time scales considered, the movement of only the atoms initially on the surface
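The Nernst–Planck flux in the excerpt combines Fickian diffusion with electromigration driven by the built-in field. A minimal numerical sketch (the function and the sample values are hypothetical, not taken from the report):

```python
def nernst_planck_flux(D, dC_dx, z, C, dphi_dx, T, F=96485.0, R=8.314):
    """Nernst-Planck flux J = -D dC/dx - (z F D C / (R T)) dphi/dx.
    D: diffusivity, C: concentration, z: ion charge number,
    dphi_dx: electric potential gradient, T: temperature (K)."""
    diffusion = -D * dC_dx                        # Fickian term
    migration = -(z * F * D * C / (R * T)) * dphi_dx  # field-driven term
    return diffusion + migration
```

With a zero potential gradient the expression collapses to the bulk (Fickian) diffusion equation, which is the comparison the report draws.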

  17. Lessons Learned from Numerical Simulations of the F-16XL Aircraft at Flight Conditions

    NASA Technical Reports Server (NTRS)

    Rizzi, Arthur; Jirasek, Adam; Lamar, John; Crippa, Simone; Badcock, Kenneth; Boelens, Okko

    2009-01-01

    Nine groups participating in the Cranked Arrow Wing Aerodynamics Project International (CAWAPI) project have contributed steady and unsteady viscous simulations of a full-scale, semi-span model of the F-16XL aircraft. Three different categories of flight Reynolds/Mach number combinations were computed and compared with flight-test measurements for the purpose of code validation and improved understanding of the flight physics. Steady-state simulations are done with several turbulence models of different complexity with no topology information required and which overcome Boussinesq-assumption problems in vortical flows. Detached-eddy simulation (DES) and its successor delayed detached-eddy simulation (DDES) have been used to compute the time accurate flow development. Common structured and unstructured grids as well as individually-adapted unstructured grids were used. Although discrepancies are observed in the comparisons, overall reasonable agreement is demonstrated for surface pressure distribution, local skin friction and boundary velocity profiles at subsonic speeds. The physical modeling, steady or unsteady, and the grid resolution both contribute to the discrepancies observed in the comparisons with flight data, but at this time it cannot be determined how much each part contributes to the whole. Overall it can be said that the technology readiness of CFD-simulation technology for the study of vehicle performance has matured since 2001 such that it can be used today with a reasonable level of confidence for complex configurations.

  18. Direct numerical simulation of sheared turbulent flow

    NASA Technical Reports Server (NTRS)

    Harris, Vascar G.

    1994-01-01

    The summer assignment to study sheared turbulent flow was divided into three phases: (1) literature survey, (2) computational familiarization, and (3) pilot computational studies. The governing equations of fluid dynamics, the Navier-Stokes equations, describe the velocity, pressure, and density as functions of position and time. In principle, when combined with conservation equations for mass, energy, and the thermodynamic state of the fluid, a determinate system could be obtained. In practice the Navier-Stokes equations have not been solved analytically, owing to their nonlinear nature and complexity. Consequently, experiments have remained important for gaining insight into the physics of the problem. Reasonable computer simulations of the problem have become feasible as the computational speed and storage of computers have evolved. The importance of the microstructure of the turbulence dictates the need for high-resolution grids in extracting solutions which contain the physical mechanisms essential to a successful simulation. The recognized breakthrough occurred as a result of the pioneering work of Orszag and Patterson, in which the Navier-Stokes equations were solved numerically utilizing a time-saving toggling technique between physical and wave space, known as a spectral method. An equally analytically unsolvable problem, containing the same quasi-chaotic nature as turbulence, is the three-body problem, which was studied computationally as a first step this summer. This study was followed by computations of a two-dimensional (2D) free shear layer.

  19. Logic as Marr's Computational Level: Four Case Studies.

    PubMed

    Baggio, Giosuè; van Lambalgen, Michiel; Hagoort, Peter

    2015-04-01

    We sketch four applications of Marr's levels-of-analysis methodology to the relations between logic and experimental data in the cognitive neuroscience of language and reasoning. The first part of the paper illustrates the explanatory power of computational level theories based on logic. We show that a Bayesian treatment of the suppression task in reasoning with conditionals is ruled out by EEG data, supporting instead an analysis based on defeasible logic. Further, we describe how results from an EEG study on temporal prepositions can be reanalyzed using formal semantics, addressing a potential confound. The second part of the article demonstrates the predictive power of logical theories drawing on EEG data on processing progressive constructions and on behavioral data on conditional reasoning in people with autism. Logical theories can constrain processing hypotheses all the way down to neurophysiology, and conversely neuroscience data can guide the selection of alternative computational level models of cognition. Copyright © 2014 Cognitive Science Society, Inc.

  20. Different strategies in solving series completion inductive reasoning problems: an fMRI and computational study.

    PubMed

    Liang, Peipeng; Jia, Xiuqin; Taatgen, Niels A; Zhong, Ning; Li, Kuncheng

    2014-08-01

    The neural correlates of the human inductive reasoning process remain unclear. Number series and letter series completion are two typical inductive reasoning tasks sharing a common core component of rule induction. Previous studies have demonstrated that different strategies are adopted in number series and letter series completion tasks, even when the underlying rules are identical. In the present study, we examined cortical activation as a function of two different reasoning strategies for solving series completion tasks. The retrieval strategy, used in number series completion tasks, involves direct retrieving of arithmetic knowledge to get the relations between items. The procedural strategy, used in letter series completion tasks, requires counting a certain number of times to detect the relations linking two items. The two strategies require essentially the equivalent cognitive processes, but have different working memory demands (the procedural strategy incurs greater demands). The procedural strategy produced significantly greater activity in areas involved in memory retrieval (dorsolateral prefrontal cortex, DLPFC) and mental representation/maintenance (posterior parietal cortex, PPC). An ACT-R model of the tasks successfully predicted behavioral performance and BOLD responses. The present findings support a general-purpose dual-process theory of inductive reasoning regarding the cognitive architecture. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    NASA Technical Reports Server (NTRS)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

    The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that often, such aeroelastic data sets focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency and/or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions.
Researchers within the Aeroelasticity Branch will examine other experimental efforts within the Subsonic Fixed Wing (SFW) program (such as testing of the NASA Common Research Model (CRM)) and other NASA programs and assess aeroelasticity issues and research topics.

  2. A stochastic approach for model reduction and memory function design in hydrogeophysical inversion

    NASA Astrophysics Data System (ADS)

    Hou, Z.; Kellogg, A.; Terry, N.

    2009-12-01

    Geophysical (e.g., seismic, electromagnetic, radar) techniques and statistical methods are essential for research related to subsurface characterization, including monitoring subsurface flow and transport processes, oil/gas reservoir identification, etc. For deep subsurface characterization such as reservoir petroleum exploration, seismic methods have been widely used. Recently, electromagnetic (EM) methods have drawn great attention in the area of reservoir characterization. However, considering the enormous computational demand of seismic and EM forward modeling, having too many unknown parameters in the modeling domain quickly becomes prohibitive. For shallow subsurface applications, the characterization can be very complicated considering the complexity and nonlinearity of flow and transport processes in the unsaturated zone. It is warranted to reduce the dimension of parameter space to a reasonable level. Another common concern is how to make the best use of time-lapse data with spatial-temporal correlations. This is even more critical when we try to monitor subsurface processes using geophysical data collected at different times. The normal practice is to get the inverse images individually. These images are not necessarily continuous or even reasonably related, because of the non-uniqueness of hydrogeophysical inversion. We propose to use a stochastic framework integrating the minimum-relative-entropy concept, quasi-Monte Carlo sampling techniques, and statistical tests. The approach allows efficient and sufficient exploration of all possibilities of model parameters and evaluation of their significance to geophysical responses. The analyses enable us to reduce the parameter space significantly. 
The approach can be combined with Bayesian updating, allowing us to treat the updated ‘posterior’ pdf as a memory function, which stores all the information up to date about the distributions of soil/field attributes/properties, then consider the memory function as a new prior and generate samples from it for further updating when more geophysical data is available. We applied this approach for deep oil reservoir characterization and for shallow subsurface flow monitoring. The model reduction approach reliably helps reduce the joint seismic/EM/radar inversion computational time to reasonable levels. Continuous inversion images are obtained using time-lapse data with the “memory function” applied in the Bayesian inversion.
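The quasi-Monte Carlo sampling mentioned above can be illustrated with a Halton sequence built from van der Corput digit reversal (a generic low-discrepancy generator, not the authors' sampler):

```python
def van_der_corput(n, base=2):
    """n-th element (1-indexed) of the van der Corput sequence in the given
    base: reverse the base-b digits of n about the radix point."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, rem = divmod(n, base)
        q += rem * bk
        bk /= base
    return q

def halton(n_points, bases=(2, 3)):
    """First n_points of a multi-dimensional Halton sequence, one coprime
    base (conventionally a prime) per dimension."""
    return [tuple(van_der_corput(i, b) for b in bases)
            for i in range(1, n_points + 1)]
```

Unlike pseudo-random draws, consecutive points fill the unit hypercube evenly, which is why such sequences explore a reduced parameter space efficiently.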

  3. Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates

    NASA Astrophysics Data System (ADS)

    Carbogno, Christian; Scheffler, Matthias

    In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and thus enables their application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, and the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character. Finally, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
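The Green-Kubo relation underlying the approach can be sketched for a scalar heat-flux series. This is a toy estimator, not the accelerated formalism of the abstract; kB and units are left generic:

```python
import numpy as np

def green_kubo_kappa(J, dt, volume, temperature, kB=1.0):
    """Toy Green-Kubo estimator: kappa = V / (kB T^2) * integral <J(0)J(t)> dt,
    with the heat-flux autocorrelation averaged over all time origins."""
    J = np.asarray(J, dtype=float)
    n = len(J)
    # one-sided autocorrelation <J(0) J(t)> up to half the series length
    acf = np.array([np.mean(J[:n - k] * J[k:]) for k in range(n // 2)])
    integral = float(np.sum(acf) * dt)   # simple rectangle-rule integration
    kappa = volume / (kB * temperature**2) * integral
    return kappa, acf
```

The slow convergence of this integral with trajectory length and cell size is exactly what makes plain GK simulations costly, and what the interpolation scheme in the abstract is designed to overcome.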

  4. A production planning model considering uncertain demand using two-stage stochastic programming in a fresh vegetable supply chain context.

    PubMed

    Mateo, Jordi; Pla, Lluis M; Solsona, Francesc; Pagès, Adela

    2016-01-01

    Production planning models are attracting growing interest for use in the primary sector of the economy. The proposed model relies on the formulation of a location model representing a set of farms susceptible of being selected by a grocery shop brand to supply local fresh products under seasonal contracts. The main aim is to minimize overall procurement costs and meet future demand. This kind of problem is rather common in fresh vegetable supply chains where producers are located in proximity either to processing plants or retailers. The proposed two-stage stochastic model determines which suppliers should be selected for production contracts to ensure high quality products and minimal time from farm-to-table. Moreover, Lagrangian relaxation and parallel computing algorithms are proposed to solve these instances efficiently in a reasonable computational time. The results obtained show computational gains from our algorithmic proposals compared with using the plain CPLEX solver. Furthermore, the results ensure the competitive advantages of using the proposed model by purchase managers in the fresh vegetables industry.
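The two-stage structure can be illustrated with a tiny brute-force version: the first-stage decision selects suppliers, and the second stage buys under each demand scenario. The supplier data and penalty are invented for the example, and the paper's model is solved with Lagrangian relaxation, not enumeration:

```python
from itertools import combinations

def expected_cost(selected, suppliers, scenarios, penalty):
    """First-stage fixed contract costs plus expected recourse cost: buy from
    the cheapest selected suppliers up to capacity, then pay a per-unit
    penalty for any unmet demand."""
    fixed = sum(suppliers[s]["fixed"] for s in selected)
    ranked = sorted(selected, key=lambda s: suppliers[s]["unit"])
    exp = 0.0
    for demand, prob in scenarios:
        cost, left = 0.0, demand
        for s in ranked:
            q = min(left, suppliers[s]["cap"])
            cost += q * suppliers[s]["unit"]
            left -= q
        cost += left * penalty           # recourse for the shortage
        exp += prob * cost
    return fixed + exp

def best_selection(suppliers, scenarios, penalty):
    """Brute-force the first-stage decision over all supplier subsets."""
    names = list(suppliers)
    subsets = [c for r in range(len(names) + 1) for c in combinations(names, r)]
    return min(subsets,
               key=lambda c: expected_cost(c, suppliers, scenarios, penalty))
```

Enumeration only works for a handful of suppliers; decomposition methods such as Lagrangian relaxation are what make realistic instances solvable in reasonable time.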

  5. Operation ARA: A Computerized Learning Game that Teaches Critical Thinking and Scientific Reasoning

    ERIC Educational Resources Information Center

    Halpern, Diane F.; Millis, Keith; Graesser, Arthur C.; Butler, Heather; Forsyth, Carol; Cai, Zhiqiang

    2012-01-01

    Operation ARA (Acquiring Research Acumen) is a computerized learning game that teaches critical thinking and scientific reasoning. It is a valuable learning tool that utilizes principles from the science of learning and serious computer games. Students learn the skills of scientific reasoning by engaging in interactive dialogs with avatars. They…

  6. Using Computer Simulations for Promoting Model-Based Reasoning: Epistemological and Educational Dimensions

    ERIC Educational Resources Information Center

    Develaki, Maria

    2017-01-01

    Scientific reasoning is particularly pertinent to science education since it is closely related to the content and methodologies of science and contributes to scientific literacy. Much of the research in science education investigates the appropriate framework and teaching methods and tools needed to promote students' ability to reason and…

  7. Documentation of a computer program to simulate stream-aquifer relations using a modular, finite-difference, ground-water flow model

    USGS Publications Warehouse

    Prudic, David E.

    1989-01-01

    Computer models are widely used to simulate groundwater flow for evaluating and managing the groundwater resource of many aquifers, but few are designed to also account for surface flow in streams. A computer program was written for use in the US Geological Survey modular finite difference groundwater flow model to account for the amount of flow in streams and to simulate the interaction between surface streams and groundwater. The new program is called the Streamflow-Routing Package. The Streamflow-Routing Package is not a true surface water flow model, but rather is an accounting program that tracks the flow in one or more streams which interact with groundwater. The program limits the amount of groundwater recharge to the available streamflow. It permits two or more streams to merge into one with flow in the merged stream equal to the sum of the tributary flows. The program also permits diversions from streams. The groundwater flow model with the Streamflow-Routing Package has an advantage over the analytical solution in simulating the interaction between aquifer and stream because it can be used to simulate complex systems that cannot be readily solved analytically. The Streamflow-Routing Package does not include a time function for streamflow but rather streamflow entering the modeled area is assumed to be instantly available to downstream reaches during each time period. This assumption is generally reasonable because of the relatively slow rate of groundwater flow. Another assumption is that leakage between streams and aquifers is instantaneous. This assumption may not be reasonable if the streams and aquifers are separated by a thick unsaturated zone. Documentation of the Streamflow-Routing Package includes data input instructions; flow charts, narratives, and listings of the computer program for each of four modules; and input data sets and printed results for two test problems, and one example problem. (Lantz-PTT)
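The core accounting idea, leakage limited to the streamflow actually available, can be sketched as follows. The reach data are hypothetical and the real package computes leakage from stage and streambed conductance rather than taking it as given:

```python
def route_stream(inflow, max_seepage):
    """Route flow down successive reaches; the recharge lost to the aquifer
    in each reach is capped at the streamflow entering that reach."""
    flow, recharge, outflow = inflow, [], []
    for potential in max_seepage:
        actual = min(potential, flow)   # cannot lose more than is in the stream
        flow -= actual
        recharge.append(actual)
        outflow.append(flow)
    return outflow, recharge
```

Tributary junctions simply sum the outflows of the merging streams before routing continues downstream, mirroring the package's treatment of merged streams.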

  8. Parallel methodology to capture cyclic variability in motored engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ameen, Muhsin M.; Yang, Xiaofeng; Kuo, Tang-Wei

    2016-07-28

    Numerical prediction of cycle-to-cycle variability (CCV) in SI engines is extremely challenging for two key reasons: (i) high-fidelity methods such as large eddy simulation (LES) are required to accurately capture the in-cylinder turbulent flowfield, and (ii) CCV is experienced over long timescales and hence the simulations need to be performed for hundreds of consecutive cycles. In this study, a new methodology is proposed to dissociate this long time-scale problem into several shorter time-scale problems, which can considerably reduce the computational time without sacrificing the fidelity of the simulations. The strategy is to perform multiple single-cycle simulations in parallel by effectively perturbing the simulation parameters such as the initial and boundary conditions. It is shown that by perturbing the initial velocity field effectively based on the intensity of the in-cylinder turbulence, the mean and variance of the in-cylinder flowfield is captured reasonably well. Adding perturbations in the initial pressure field and the boundary pressure improves the predictions. It is shown that this new approach is able to give accurate predictions of the flowfield statistics in less than one-tenth of the time required for the conventional approach of simulating consecutive engine cycles.
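The perturbation strategy can be sketched schematically: generate an ensemble of initial fields by adding zero-mean noise scaled by the local turbulence intensity, then launch independent single-cycle runs. This is a stand-in for the actual LES initialization, with the function name and scaling choice being mine:

```python
import numpy as np

def perturbed_initial_fields(u0, turbulence_intensity, n_cycles, seed=0):
    """Generate n_cycles perturbed initial velocity fields by adding zero-mean
    Gaussian noise scaled by the local turbulence intensity, so independent
    single-cycle simulations can run in parallel."""
    rng = np.random.default_rng(seed)
    u0 = np.asarray(u0, dtype=float)
    sigma = turbulence_intensity * np.abs(u0)   # local perturbation amplitude
    return [u0 + rng.normal(0.0, 1.0, u0.shape) * sigma
            for _ in range(n_cycles)]
```

Because each perturbed cycle is independent, the ensemble replaces a long consecutive-cycle run with simulations that can be farmed out across nodes.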

  9. The Cognitive Predictors of Computational Skill with Whole versus Rational Numbers: An Exploratory Study.

    PubMed

    Seethaler, Pamela M; Fuchs, Lynn S; Star, Jon R; Bryant, Joan

    2011-10-01

    The purpose of the present study was to explore the 3rd-grade cognitive predictors of 5th-grade computational skill with rational numbers and how those are similar to and different from the cognitive predictors of whole-number computational skill. Students (n = 688) were assessed on incoming whole-number calculation skill, language, nonverbal reasoning, concept formation, processing speed, and working memory in the fall of 3rd grade. Students were followed longitudinally and assessed on calculation skill with whole numbers and with rational numbers in the spring of 5th grade. The unique predictors of skill with whole-number computation were incoming whole-number calculation skill, nonverbal reasoning, concept formation, and working memory (numerical executive control). In addition to these cognitive abilities, language emerged as a unique predictor of rational-number computational skill.

  10. The Cognitive Predictors of Computational Skill with Whole versus Rational Numbers: An Exploratory Study

    PubMed Central

    Seethaler, Pamela M.; Fuchs, Lynn S.; Star, Jon R.; Bryant, Joan

    2011-01-01

    The purpose of the present study was to explore the 3rd-grade cognitive predictors of 5th-grade computational skill with rational numbers and how those are similar to and different from the cognitive predictors of whole-number computational skill. Students (n = 688) were assessed on incoming whole-number calculation skill, language, nonverbal reasoning, concept formation, processing speed, and working memory in the fall of 3rd grade. Students were followed longitudinally and assessed on calculation skill with whole numbers and with rational numbers in the spring of 5th grade. The unique predictors of skill with whole-number computation were incoming whole-number calculation skill, nonverbal reasoning, concept formation, and working memory (numerical executive control). In addition to these cognitive abilities, language emerged as a unique predictor of rational-number computational skill. PMID:21966180

  11. Colovesical fistula causing an uncommon reason for failure of computed tomography colonography: a case report.

    PubMed

    Neroladaki, Angeliki; Breguet, Romain; Botsikas, Diomidis; Terraz, Sylvain; Becker, Christoph D; Montet, Xavier

    2012-07-23

    Computed tomography colonography, or virtual colonoscopy, is a good alternative to optical colonoscopy. However, suboptimal patient preparation or colon distension may reduce the diagnostic accuracy of this imaging technique. We report the case of an 83-year-old Caucasian woman who presented with a five-month history of pneumaturia and fecaluria and an acute episode of macrohematuria, leading to a high clinical suspicion of a colovesical fistula. The fistula was confirmed by standard contrast-enhanced computed tomography. Optical colonoscopy was performed to exclude the presence of an underlying colonic neoplasm. Since optical colonoscopy was incomplete, computed tomography colonography was performed, but also failed due to inadequate colon distension. The insufflated air directly accumulated within the bladder via the large fistula. Clinicians should consider colovesical fistula as a potential reason for computed tomography colonography failure.

  12. Microprogramming Handbook. Second Edition.

    ERIC Educational Resources Information Center

    Microdata Corp., Santa Ana, CA.

    Instead of instructions residing in the main memory as in a fixed-instruction computer, a microprogrammable computer has a separate read-only memory which is alterable so that the system can be efficiently adapted to the application at hand. Microprogrammable computers are faster than fixed-instruction computers for several reasons: instruction…

  13. Computers and Instruction: Implications of the Rising Tide of Criticism for Reading Education.

    ERIC Educational Resources Information Center

    Balajthy, Ernest

    1988-01-01

    Examines two major reasons that schools have adopted computers without careful prior examination and planning. Surveys a variety of criticisms targeted toward some aspects of computer-based instruction in reading in an effort to direct attention to the beneficial implications of computers in the classroom. (MS)

  14. An Educational Approach to Computationally Modeling Dynamical Systems

    ERIC Educational Resources Information Center

    Chodroff, Leah; O'Neal, Tim M.; Long, David A.; Hemkin, Sheryl

    2009-01-01

    Chemists have used computational science methodologies for a number of decades and their utility continues to be unabated. For this reason we developed an advanced lab in computational chemistry in which students gain understanding of general strengths and weaknesses of computation-based chemistry by working through a specific research problem.…

  15. Establishing a Computer Literacy Requirement for All Students.

    ERIC Educational Resources Information Center

    Kieffer, Linda M.

    Several factors have indicated the necessity of formally requiring computer literacy at the university level. This paper discusses the reasoning for, the development of, and content of two computer literacy courses required of all freshmen. The first course contains computer awareness and knowledge that students should have upon entering the…

  16. Bin recycling strategy for improving the histogram precision on GPU

    NASA Astrophysics Data System (ADS)

    Cárdenas-Montes, Miguel; Rodríguez-Vázquez, Juan José; Vega-Rodríguez, Miguel A.

    2016-07-01

    Histograms are an easily comprehensible way to present data and analyses. In the current scientific context, with access to large volumes of data, the processing time for building histograms has dramatically increased. For this reason, parallel construction is necessary to alleviate the impact of processing time on analysis activities. In this scenario, GPU computing is becoming widely used to reduce the processing time of histogram construction to affordable levels. Along with the pressure on processing time, implementations are also stressed with respect to bin-count accuracy. Accuracy issues arising from the particularities of an implementation are not usually taken into consideration when building histograms from very large data sets. In this work, a bin recycling strategy to create an accuracy-aware implementation for building histograms on GPU is presented. To evaluate the approach, this strategy was applied to the computation of the three-point angular correlation function, a relevant function in Cosmology for the study of the Large Scale Structure of the Universe. As a consequence of this study, a high-accuracy implementation for histogram construction on GPU is proposed.
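
    A common building block of fast GPU histograms, which the strategy above refines, is bin privatization: each thread block accumulates into its own private copy of the bins and the copies are merged at the end. The plain-Python stand-in below (the round-robin "worker" split is invented for illustration; real GPU code would use shared memory and atomics) shows only that structural idea, not the bin recycling scheme itself.

```python
# Sketch of the privatized-bins pattern behind parallel histogramming:
# private per-worker bin arrays, reduced into the final histogram.

def histogram_private_bins(data, n_bins, lo, hi, n_workers=4):
    width = (hi - lo) / n_bins
    # Each worker accumulates into its own private bin array ...
    private = [[0] * n_bins for _ in range(n_workers)]
    for i, x in enumerate(data):
        w = i % n_workers                       # round-robin work split
        b = min(int((x - lo) / width), n_bins - 1)
        private[w][b] += 1
    # ... and the private copies are reduced into the final histogram,
    # avoiding contention on a single shared bin array.
    return [sum(p[b] for p in private) for b in range(n_bins)]

counts = histogram_private_bins([0.1, 0.4, 0.6, 0.9, 0.95], 2, 0.0, 1.0)
```
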

  17. CSM solutions of rotating blade dynamics using integrating matrices

    NASA Technical Reports Server (NTRS)

    Lakin, William D.

    1992-01-01

    The dynamic behavior of flexible rotating beams continues to receive considerable research attention as it constitutes a fundamental problem in applied mechanics. Further, beams comprise parts of many rotating structures of engineering significance. A topic of particular interest at the present time involves the development of techniques for obtaining the behavior in both space and time of a rotor acted upon by a simple airload. Most current work on problems of this type uses solution techniques based on normal modes. It is certainly true that normal modes cannot be disregarded, as knowledge of natural blade frequencies is always important. However, the present work has considered a computational structural mechanics (CSM) approach to rotor blade dynamics problems in which the physical properties of the rotor blade provide input for a direct numerical solution of the relevant boundary-and-initial-value problem. Analysis of the dynamics of a given rotor system may require solution of the governing equations over a long time interval corresponding to many revolutions of the loaded flexible blade. For this reason, most of the common techniques in computational mechanics, which treat the space-time behavior concurrently, cannot be applied to the rotor dynamics problem without a large expenditure of computational resources. By contrast, the integrating matrix technique of computational mechanics has the ability to consistently incorporate boundary conditions and 'remove' dependence on a space variable. For problems involving both space and time, this feature of the integrating matrix approach thus can generate a 'splitting' which forms the basis of an efficient CSM method for numerical solution of rotor dynamics problems.
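
    The integrating-matrix idea can be shown with the simplest possible weights. This is a minimal sketch only (the cited work uses higher-order quadrature matrices; trapezoidal weights are chosen here for brevity): a lower-triangular matrix L maps sampled values f(x_j) to approximations of the running integral from 0 to x_i, so spatial integration reduces to a matrix-vector product that can be folded into the discretized equations.

```python
# Trapezoidal integrating matrix on a uniform grid x_j = j*h, j = 0..n-1.

def integrating_matrix(n, h):
    """Row i holds cumulative trapezoidal weights for integrating 0..x_i."""
    L = [[0.0] * n for _ in range(n)]
    for i in range(1, n):
        for j in range(i + 1):
            if j == 0 or j == i:
                L[i][j] = h / 2.0   # endpoint weights
            else:
                L[i][j] = h         # interior weights
    return L

def matvec(L, f):
    return [sum(Lij * fj for Lij, fj in zip(row, f)) for row in L]

n, h = 101, 0.01
x = [j * h for j in range(n)]
f = [2.0 * xj for xj in x]               # integrand f(x) = 2x
F = matvec(integrating_matrix(n, h), f)  # approximates x**2 at each grid point
```

    Because trapezoidal quadrature is exact for linear integrands, F reproduces x**2 to rounding error here; higher-order matrices give the spectral-like accuracy the rotor work relies on.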

  18. Large holographic displays for real-time applications

    NASA Astrophysics Data System (ADS)

    Schwerdtner, A.; Häussler, R.; Leister, N.

    2008-02-01

    Holography is generally accepted as the ultimate approach to display three-dimensional scenes or objects. In principle, the reconstruction of an object from a perfect hologram would appear indistinguishable from viewing the corresponding real-world object. Up to now, two main obstacles have prevented large-screen Computer-Generated Holograms (CGH) from reaching a satisfactory laboratory prototype, not to mention a marketable one. The reason is the small cell pitch a CGH requires, resulting in a huge number of hologram cells and a very high computational load for encoding the CGH. These seemingly inevitable technological hurdles have long limited the use of holography to special applications, such as optical filtering, interference, beam forming, digital holography for capturing the 3-D shape of objects, and others. SeeReal Technologies has developed a new approach for real-time capable CGH using the so-called Tracked Viewing Windows technology to overcome these problems. The paper will show that today's state-of-the-art reconfigurable Spatial Light Modulators (SLM), especially today's feasible LCD panels, are suited for reconstructing large 3-D scenes which can be observed from large viewing angles. To achieve this, the original holographic concept of encoding information from the entire scene in each part of the CGH has been abandoned. This substantially reduces the hologram resolution and thus the computational load by several orders of magnitude, making real-time computation possible. A monochrome real-time prototype measuring 20 inches was built and demonstrated at the SID 2007 conference and exhibition and at several other events.

  19. A distributed agent architecture for real-time knowledge-based systems: Real-time expert systems project, phase 1

    NASA Technical Reports Server (NTRS)

    Lee, S. Daniel

    1990-01-01

    We propose a distributed agent architecture (DAA) that can support a variety of paradigms based on both traditional real-time computing and artificial intelligence. DAA consists of distributed agents that are classified into two categories: reactive and cognitive. Reactive agents can be implemented directly in Ada to meet hard real-time requirements and be deployed on on-board embedded processors. A traditional real-time computing methodology under consideration is the rate monotonic theory that can guarantee schedulability based on analytical methods. AI techniques under consideration for reactive agents are approximate or anytime reasoning that can be implemented using Bayesian belief networks as in Guardian. Cognitive agents are traditional expert systems that can be implemented in ART-Ada to meet soft real-time requirements. During the initial design of cognitive agents, it is critical to consider the migration path that would allow initial deployment on ground-based workstations with eventual deployment on on-board processors. ART-Ada technology enables this migration while Lisp-based technologies make it difficult if not impossible. In addition to reactive and cognitive agents, a meta-level agent would be needed to coordinate multiple agents and to provide meta-level control.

  20. Spectral methods in time for a class of parabolic partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ierley, G.; Spencer, B.; Worthing, R.

    1992-09-01

    In this paper, we introduce a fully spectral solution for the partial differential equation $u_t + u u_x + \nu u_{xx} + \mu u_{xxx} + \lambda u_{xxxx} = 0$. For periodic boundary conditions in space, the use of a Fourier expansion in x admits a particularly efficient algorithm with respect to expansion of the time dependence in a Chebyshev series. Boundary conditions other than periodic may still be treated with reasonable, though lesser, efficiency. For all cases, very high accuracy is attainable at moderate computational cost relative to the expense of variable-order finite difference methods in time. 14 refs., 9 figs.
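
    The Fourier-in-x ingredient of such methods can be illustrated directly: on a periodic grid, differentiation of a band-limited function is exact to rounding error, done by multiplying Fourier coefficients by ik. The pure-Python DFT below (grid size and test function are illustrative; real codes use an FFT) demonstrates why spectral expansions give the "very high accuracy at moderate cost" claimed above.

```python
# Spectral differentiation on a periodic grid via a direct DFT.

import cmath, math

def dft(u):
    N = len(u)
    return [sum(u[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(U):
    N = len(U)
    return [sum(U[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def spectral_derivative(u):
    N = len(u)
    U = dft(u)
    # Wavenumber layout: 0..N/2-1, then the Nyquist mode (zeroed for an
    # odd-order derivative), then the negative wavenumbers.
    k = list(range(0, N // 2)) + [0] + list(range(-N // 2 + 1, 0))
    dU = [1j * k[m] * U[m] for m in range(N)]
    return [z.real for z in idft(dU)]

N = 16
x = [2 * math.pi * n / N for n in range(N)]
u = [math.sin(xj) for xj in x]
du = spectral_derivative(u)   # matches cos(x) to near machine precision
```
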

  1. Computers for the Faculty: How on a Limited Budget.

    ERIC Educational Resources Information Center

    Arman, Hal; Kostoff, John

    An informal investigation of the use of computers at Delta College (DC) in Michigan revealed reasonable use of computers by faculty in disciplines such as mathematics, business, and technology, but very limited use in the humanities and social sciences. In an effort to increase faculty computer usage, DC decided to make computers available to any…

  2. Computer Anxiety: Relationship to Math Anxiety and Holland Types.

    ERIC Educational Resources Information Center

    Bellando, Jayne; Winer, Jane L.

    Although the number of computers in the school system is increasing, many schools are not using computers to their capacity. One reason for this may be computer anxiety on the part of the teacher. A review of the computer anxiety literature reveals little information on the subject, and findings from previous studies suggest that basic controlled…

  3. Applications of Out-of-Domain Knowledge in Students' Reasoning about Computer Program State

    ERIC Educational Resources Information Center

    Lewis, Colleen Marie

    2012-01-01

    To meet a growing demand and a projected deficit in the supply of computer professionals (NCWIT, 2009), it is of vital importance to expand students' access to computer science. However, many researchers in the computer science education community unproductively assume that some students lack an innate ability for computer science and…

  4. Clinical Computer Applications in Mental Health

    PubMed Central

    Greist, John H.; Klein, Marjorie H.; Erdman, Harold P.; Jefferson, James W.

    1982-01-01

    Direct patient-computer interviews were among the earliest applications of computing in medicine. Yet patient interviewing and other clinical applications have lagged behind fiscal/administrative uses. Several reasons for delays in the development and implementation of clinical computing programs and their resolution are discussed. Patient interviewing, clinician consultation and other applications of clinical computing in mental health are reviewed.

  5. Toward Accessing Spatial Structure from Building Information Models

    NASA Astrophysics Data System (ADS)

    Schultz, C.; Bhatt, M.

    2011-08-01

    Data about building designs and layouts is becoming increasingly more readily available. In the near future, service personnel (such as maintenance staff or emergency rescue workers) arriving at a building site will have immediate real-time access to enormous amounts of data relating to structural properties, utilities, materials, temperature, and so on. The critical problem for users is the taxing and error-prone task of interpreting such a large body of facts in order to extract salient information. This is necessary for comprehending a situation and deciding on a plan of action, and is a particularly serious issue in time-critical and safety-critical activities such as firefighting. Current unifying building models such as the Industry Foundation Classes (IFC), while being comprehensive, do not directly provide data structures that focus on spatial reasoning and spatial modalities that are required for high-level analytical tasks. The aim of the research presented in this paper is to provide computational tools for higher level querying and reasoning that shift the cognitive burden of dealing with enormous amounts of data away from the user. The user can then spend more energy and time in planning and decision making in order to accomplish the tasks at hand. We present an overview of our framework that provides users with an enhanced model of "built-up space". In order to test our approach using realistic design data (in terms of both scale and the nature of the building models) we describe how our system interfaces with IFC, and we conduct timing experiments to determine the practicality of our approach. We discuss general computational approaches for deriving higher-level spatial modalities by focusing on the example of route graphs. Finally, we present a firefighting scenario with alternative route graphs to motivate the application of our framework.
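
    One spatial modality named in the abstract, the route graph, is easy to sketch. The floor plan below is invented for illustration (real room adjacency would be derived from an IFC model, which this sketch does not parse): rooms become nodes, doorways become edges, and a route query is a breadth-first search.

```python
# Hedged route-graph sketch: fewest-door-transitions route between rooms.

from collections import deque

def shortest_route(adjacency, start, goal):
    """Breadth-first search over a room-adjacency graph; returns the
    route with the fewest transitions, or None if goal is unreachable."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adjacency[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

rooms = {                       # hypothetical floor plan
    "lobby": ["corridor"],
    "corridor": ["lobby", "office", "stairwell"],
    "office": ["corridor"],
    "stairwell": ["corridor"],
}
route = shortest_route(rooms, "lobby", "stairwell")
```

    Alternative route graphs, as in the firefighting scenario, would simply be different `adjacency` maps (e.g. with smoke-blocked doorways removed) queried the same way.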

  6. [A new concept in digestive surgery: the computer assisted surgical procedure, from virtual reality to telemanipulation].

    PubMed

    Marescaux, J; Clément, J M; Nord, M; Russier, Y; Tassetti, V; Mutter, D; Cotin, S; Ayache, N

    1997-11-01

    Surgical simulation increasingly appears to be an essential aspect of tomorrow's surgery. The development of a hepatic surgery simulator is an advanced concept calling for a new writing system which will transform the medical world: virtual reality. Virtual reality extends the perception of our five senses by representing more than the real state of things by the means of computer sciences and robotics. It consists of three concepts: immersion, navigation and interaction. Three reasons have led us to develop this simulator: the first is to provide the surgeon with a comprehensive visualisation of the organ. The second reason is to allow for planning and surgical simulation that could be compared with the detailed flight-plan for a commercial jet pilot. The third lies in the fact that virtual reality is an integrated part of the concept of computer assisted surgical procedure. The project consists of a sophisticated simulator which has to include five requirements: visual fidelity, interactivity, physical properties, physiological properties, sensory input and output. In this report we will describe how to get a realistic 3D model of the liver from bi-dimensional 2D medical images for anatomical and surgical training. The introduction of a tumor and the consequent planning and virtual resection is also described, as are force feedback and real-time interaction.

  7. SU-D-206-02: Evaluation of Partial Storage of the System Matrix for Cone Beam Computed Tomography Using a GPU Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matenine, D; Cote, G; Mascolo-Fortin, J

    2016-06-15

    Purpose: Iterative reconstruction algorithms in computed tomography (CT) require a fast method for computing the intersections between the photons’ trajectories and the object, also called ray-tracing or system matrix computation. This work evaluates different ways to store the system matrix, aiming to reconstruct dense image grids in reasonable time. Methods: We propose an optimized implementation of Siddon’s algorithm using graphics processing units (GPUs) with a novel data storage scheme. The algorithm computes a part of the system matrix on demand, typically for one projection angle. The proposed method was enhanced with accelerating options: storage of larger subsets of the system matrix, systematic reuse of data via geometric symmetries, an arithmetic-rich parallel code, and code configuration via machine learning. It was tested on geometries mimicking a cone beam CT acquisition of a human head. To realistically assess the execution time, the ray-tracing routines were integrated into a regularized Poisson-based reconstruction algorithm. The proposed scheme was also compared to a different approach, where the system matrix is fully pre-computed and loaded at reconstruction time. Results: Fast ray-tracing of realistic acquisition geometries, which often lack spatial symmetry properties, was enabled via the proposed method. Ray-tracing interleaved with projection and backprojection operations required significant additional time; in most cases, ray-tracing used about 66% of the total reconstruction time. In absolute terms, tracing times varied from 3.6 s to 7.5 min, depending on the problem size. The presence of geometrical symmetries allowed for a non-negligible reduction in ray-tracing and reconstruction time. Arithmetic-rich parallel code and machine learning permitted a modest reconstruction time reduction, in the order of 1%. Conclusion: Partial system matrix storage permitted the reconstruction of larger 3D image grids and larger projection datasets at the cost of additional time, when compared to the fully pre-computed approach. This work was supported in part by the Fonds de recherche du Quebec - Nature et technologies (FRQ-NT). The authors acknowledge partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council of Canada (Grant No. 432290).
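
    The storage scheme itself, compute one projection angle's rows on demand and keep only that slice in memory, can be sketched independently of the ray tracer. The class and the placeholder `fake_trace` below are invented for illustration (the actual work implements Siddon's algorithm on GPU); the sketch only shows the memory/recompute trade-off the abstract describes.

```python
# Partial system matrix: one projection angle cached at a time, rows
# recomputed on demand at each reconstruction iteration.

class PartialSystemMatrix:
    def __init__(self, trace_angle, n_angles):
        self.trace_angle = trace_angle   # expensive ray-tracing routine
        self.n_angles = n_angles
        self._cached_angle = None
        self._cached_rows = None
        self.traces = 0                  # how many times we ray-traced

    def rows_for(self, angle):
        # Only one projection angle's rows are kept in memory.
        if angle != self._cached_angle:
            self._cached_rows = self.trace_angle(angle)
            self._cached_angle = angle
            self.traces += 1
        return self._cached_rows

def fake_trace(angle):                   # hypothetical stand-in ray tracer
    return [("ray", angle, i) for i in range(3)]

A = PartialSystemMatrix(fake_trace, n_angles=180)
for _ in range(2):                       # two reconstruction iterations
    for angle in (0, 1, 2):
        rows = A.rows_for(angle)         # projection/backprojection use rows
```

    A fully pre-computed matrix would trace each angle once and store everything; this scheme trades repeated tracing (6 traces here for 3 angles over 2 iterations) for a much smaller memory footprint.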

  8. Fuzzy inductive reasoning: a consolidated approach to data-driven construction of complex dynamical systems

    NASA Astrophysics Data System (ADS)

    Nebot, Àngela; Mugica, Francisco

    2012-10-01

    Fuzzy inductive reasoning (FIR) is a modelling and simulation methodology derived from the General Systems Problem Solver. It compares favourably with other soft computing methodologies, such as neural networks, genetic or neuro-fuzzy systems, and with hard computing methodologies, such as AR, ARIMA, or NARMAX, when it is used to predict future behaviour of different kinds of systems. This paper contains an overview of the FIR methodology, its historical background, and its evolution.

  9. A Variational Bayes Genomic-Enabled Prediction Model with Genotype × Environment Interaction

    PubMed Central

    Montesinos-López, Osval A.; Montesinos-López, Abelardo; Crossa, José; Montesinos-López, José Cricelio; Luna-Vázquez, Francisco Javier; Salinas-Ruiz, Josafhat; Herrera-Morales, José R.; Buenrostro-Mariscal, Raymundo

    2017-01-01

    There are Bayesian and non-Bayesian genomic models that take into account G×E interactions. However, the computational cost of implementing Bayesian models is high, and becomes almost impossible when the number of genotypes, environments, and traits is very large, while, in non-Bayesian models, there are often important and unsolved convergence problems. The variational Bayes method is popular in machine learning, and, by approximating the probability distributions through optimization, it tends to be faster than Markov Chain Monte Carlo methods. For this reason, in this paper, we propose a new genomic variational Bayes version of the Bayesian genomic model with G×E using half-t priors on each standard deviation (SD) term to guarantee highly noninformative priors and posterior inferences that are not sensitive to the choice of hyper-parameters. We show the complete theoretical derivation of the full conditional and the variational posterior distributions, and their implementations. We used eight experimental genomic maize and wheat data sets to illustrate the new proposed variational Bayes approximation, and compared its predictions and implementation time with a standard Bayesian genomic model with G×E. Results indicated that prediction accuracies are slightly higher in the standard Bayesian model with G×E than in its variational counterpart, but, in terms of computation time, the variational Bayes genomic model with G×E is, in general, 10 times faster than the conventional Bayesian genomic model with G×E. For this reason, the proposed model may be a useful tool for researchers who need to predict and select genotypes in several environments. PMID:28391241

  10. A Variational Bayes Genomic-Enabled Prediction Model with Genotype × Environment Interaction.

    PubMed

    Montesinos-López, Osval A; Montesinos-López, Abelardo; Crossa, José; Montesinos-López, José Cricelio; Luna-Vázquez, Francisco Javier; Salinas-Ruiz, Josafhat; Herrera-Morales, José R; Buenrostro-Mariscal, Raymundo

    2017-06-07

    There are Bayesian and non-Bayesian genomic models that take into account G×E interactions. However, the computational cost of implementing Bayesian models is high, and becomes almost impossible when the number of genotypes, environments, and traits is very large, while, in non-Bayesian models, there are often important and unsolved convergence problems. The variational Bayes method is popular in machine learning, and, by approximating the probability distributions through optimization, it tends to be faster than Markov Chain Monte Carlo methods. For this reason, in this paper, we propose a new genomic variational Bayes version of the Bayesian genomic model with G×E using half-t priors on each standard deviation (SD) term to guarantee highly noninformative priors and posterior inferences that are not sensitive to the choice of hyper-parameters. We show the complete theoretical derivation of the full conditional and the variational posterior distributions, and their implementations. We used eight experimental genomic maize and wheat data sets to illustrate the new proposed variational Bayes approximation, and compared its predictions and implementation time with a standard Bayesian genomic model with G×E. Results indicated that prediction accuracies are slightly higher in the standard Bayesian model with G×E than in its variational counterpart, but, in terms of computation time, the variational Bayes genomic model with G×E is, in general, 10 times faster than the conventional Bayesian genomic model with G×E. For this reason, the proposed model may be a useful tool for researchers who need to predict and select genotypes in several environments. Copyright © 2017 Montesinos-López et al.
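
    The "optimize instead of sample" idea behind variational Bayes can be shown on a toy conjugate model; this is a minimal coordinate-ascent sketch for a normal model with unknown mean and precision (standard textbook updates), not the genomic G×E model of the paper, and the data and priors below are invented for illustration.

```python
# Coordinate-ascent variational Bayes for x_i ~ N(mu, 1/tau) with
# conjugate priors mu ~ N(mu0, 1/(lam0*tau)), tau ~ Gamma(a0, b0),
# and factorized posterior q(mu) q(tau).

def vb_normal(x, mu0=0.0, lam0=1e-3, a0=1e-3, b0=1e-3, iters=50):
    """q(mu) = N(mu_n, 1/lam_n), q(tau) = Gamma(a_n, b_n); returns the
    variational posterior mean of mu and the posterior mean precision."""
    n = len(x)
    xbar = sum(x) / n
    mu_n = (lam0 * mu0 + n * xbar) / (lam0 + n)   # fixed across iterations
    a_n = a0 + (n + 1) / 2.0
    e_tau = 1.0                                    # initial guess for E[tau]
    for _ in range(iters):                         # alternate the two updates
        lam_n = (lam0 + n) * e_tau
        ss = sum((xi - mu_n) ** 2 for xi in x) + n / lam_n
        b_n = b0 + 0.5 * (ss + lam0 * ((mu_n - mu0) ** 2 + 1.0 / lam_n))
        e_tau = a_n / b_n                          # update E[tau]
    return mu_n, e_tau

x = [1.8, 2.2, 2.0, 1.9, 2.1]
mu_hat, prec_hat = vb_normal(x)   # mean near 2.0, precision near 1/variance
```

    Each update has a closed form, so the loop converges in a handful of deterministic iterations, which is the source of the order-of-magnitude speedup over MCMC reported above.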

  11. A DAG Scheduling Scheme on Heterogeneous Computing Systems Using Tuple-Based Chemical Reaction Optimization

    PubMed Central

    Jiang, Yuyi; Shao, Zhiqing; Guo, Yi

    2014-01-01

    A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Searching for an optimal DAG scheduling solution is considered to be NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a very recently proposed metaheuristic method, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is more reasonable. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG), and super molecules to accelerate convergence. In this paper, we have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO on a large set of randomly generated graphs and on graphs for real-world problems. PMID:25143977

  12. A DAG scheduling scheme on heterogeneous computing systems using tuple-based chemical reaction optimization.

    PubMed

    Jiang, Yuyi; Shao, Zhiqing; Guo, Yi

    2014-01-01

    A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Searching for an optimal DAG scheduling solution is considered to be NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a very recently proposed metaheuristic method, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is more reasonable. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG), and super molecules to accelerate convergence. In this paper, we have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO on a large set of randomly generated graphs and on graphs for real-world problems.
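
    Whatever metaheuristic searches the assignment space (CRO, TMSCRO, or anything else), the quantity it must evaluate is the same: the makespan of a candidate task-to-node assignment under precedence constraints. The sketch below shows only that fitness evaluation; the toy DAG and cost table are invented, and communication costs are ignored for brevity.

```python
# Makespan of a DAG schedule on heterogeneous nodes (the fitness a
# DAG-scheduling metaheuristic would minimize).

def makespan(dag, cost, assignment):
    """dag: task -> list of predecessor tasks (must be acyclic);
    cost[task][node]: execution time of the task on that node (nodes are
    heterogeneous, so times differ); assignment: task -> node."""
    finish = {}
    node_free = {}                      # when each node becomes available
    done = set()
    while len(done) < len(dag):
        for task, preds in dag.items():
            if task in done or any(p not in done for p in preds):
                continue                # not ready yet
            node = assignment[task]
            start = max([node_free.get(node, 0.0)] +
                        [finish[p] for p in preds])
            finish[task] = start + cost[task][node]
            node_free[node] = finish[task]
            done.add(task)
    return max(finish.values())

dag = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
cost = {"a": {0: 2, 1: 3}, "b": {0: 3, 1: 1},
        "c": {0: 2, 1: 4}, "d": {0: 1, 1: 1}}
span = makespan(dag, cost, {"a": 0, "b": 1, "c": 0, "d": 0})
```

    A metaheuristic such as TMSCRO explores different `assignment` mappings (and task orderings) to drive this value down.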

  13. Amplify scientific discovery with artificial intelligence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gil, Yolanda; Greaves, Mark T.; Hendler, James

    Computing innovations have fundamentally changed many aspects of scientific inquiry. For example, advances in robotics, high-end computing, networking, and databases now underlie much of what we do in science, such as gene sequencing, general number crunching, sharing information between scientists, and analyzing large amounts of data. As computing has evolved at a rapid pace, so too has its impact in science, with the most recent computing innovations repeatedly being brought to bear to facilitate new forms of inquiry. Recently, advances in Artificial Intelligence (AI) have deeply penetrated many consumer sectors, including for example Apple’s Siri™ speech recognition system, real-time automated language translation services, and a new generation of self-driving cars and self-navigating drones. However, AI has yet to achieve comparable levels of penetration in scientific inquiry, despite its tremendous potential in aiding computers to help scientists tackle tasks that require scientific reasoning. We contend that advances in AI will transform the practice of science as we are increasingly able to effectively and jointly harness human and machine intelligence in the pursuit of major scientific challenges.

  14. Computations of turbulent lean premixed combustion using conditional moment closure

    NASA Astrophysics Data System (ADS)

    Amzin, Shokri; Swaminathan, Nedunchezhian

    2013-12-01

    Conditional Moment Closure (CMC) is a suitable method for predicting scalars such as carbon monoxide with slow chemical time scales in turbulent combustion. Although this method has been successfully applied to non-premixed combustion, its application to lean premixed combustion is rare. In this study the CMC method is used to compute piloted lean premixed combustion in a distributed combustion regime. The conditional scalar dissipation rate of the conditioning scalar, the progress variable, is closed using an algebraic model and turbulence is modelled using the standard k-ɛ model. The conditional mean reaction rate is closed using a first order CMC closure with the GRI-3.0 chemical mechanism to represent the chemical kinetics of methane oxidation. The PDF of the progress variable is obtained using a presumed shape with the Beta function. The computed results are compared with the experimental measurements and earlier computations using the transported PDF approach. The results show reasonable agreement with the experimental measurements and are consistent with the transported PDF computations. When the compounded effects of shear-turbulence and flame are strong, second order closures may be required for the CMC.
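
    The presumed-PDF step mentioned above is concrete enough to sketch: a Beta distribution for the progress variable c is fixed by its mean and variance, and unconditional means are obtained by integrating conditional quantities against it. The grid resolution and the sample conditional mean below are illustrative choices, not values from the paper.

```python
# Presumed Beta PDF for a progress variable c in (0, 1).

import math

def beta_params(mean, var):
    """Beta shape parameters from the mean and variance of c."""
    g = mean * (1.0 - mean) / var - 1.0   # requires var < mean*(1-mean)
    return mean * g, (1.0 - mean) * g

def beta_pdf(c, a, b):
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return c ** (a - 1) * (1.0 - c) ** (b - 1) / B

def presumed_pdf_mean(cond_mean, mean_c, var_c, n=2000):
    """Unconditional mean of a conditional quantity <w|c>, integrated
    against the presumed Beta PDF (midpoint rule on (0, 1))."""
    a, b = beta_params(mean_c, var_c)
    cs = [(i + 0.5) / n for i in range(n)]
    return sum(cond_mean(c) * beta_pdf(c, a, b) for c in cs) / n

# Sanity check: with conditional mean <w|c> = c, the unconditional mean
# must recover mean_c itself.
m = presumed_pdf_mean(lambda c: c, mean_c=0.5, var_c=0.05)
```

    In a CMC computation the same weighted integral is applied to the conditionally averaged reaction rates to recover unconditional source terms.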

  15. A description of the thruster attitude control simulation and its application to the HEAO-C study

    NASA Technical Reports Server (NTRS)

    Brandon, L. B.

    1971-01-01

    During the design and evaluation of a reaction control system (RCS), it is desirable to have a digital computer program simulating vehicle dynamics, disturbance torques, control torques, and RCS logic. The thruster attitude control simulation (TACS) is just such a computer program. The TACS is a relatively sophisticated digital computer program that includes all the major parameters involved in the attitude control of a vehicle using an RCS for control. It includes the effects of gravity gradient torques and HEAO-C aerodynamic torques so that realistic runs can be made in the areas of fuel consumption and engine actuation rates. Also, the program is general enough that any engine configuration and logic scheme can be implemented in a reasonable amount of time. The results of the application of the TACS in the HEAO-C study are included.

  16. Study of sensor spectral responses and data processing algorithms and architectures for onboard feature identification

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Davis, R. E.; Fales, C. L.; Aherron, R. M.

    1982-01-01

    A computational model of the deterministic and stochastic processes involved in remote sensing is used to study spectral feature identification techniques for real-time onboard processing of data acquired with advanced earth-resources sensors. Preliminary results indicate that: Narrow spectral responses are advantageous; signal normalization improves mean-square distance (MSD) classification accuracy but tends to degrade maximum-likelihood (MLH) classification accuracy; and MSD classification of normalized signals performs better than the computationally more complex MLH classification when imaging conditions change appreciably from those conditions during which reference data were acquired. The results also indicate that autonomous categorization of TM signals into vegetation, bare land, water, snow and clouds can be accomplished with adequate reliability for many applications over a reasonably wide range of imaging conditions. However, further analysis is required to develop computationally efficient boundary approximation algorithms for such categorization.
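    The mean-square-distance (MSD) classification of normalized signals described above can be sketched as a nearest-reference-vector rule. The band values below are hypothetical, purely for illustration:

```python
import math

def normalize(sig):
    # Scale a spectral signal to unit Euclidean norm, discarding
    # overall brightness so classification keys on spectral shape.
    mag = math.sqrt(sum(x * x for x in sig))
    return [x / mag for x in sig]

def msd_classify(signal, references):
    # Assign the signal to the reference class at minimum
    # mean-square distance in the normalized band space.
    sig = normalize(signal)
    best, best_d = None, float("inf")
    for label, ref in references.items():
        r = normalize(ref)
        d = sum((s - x) ** 2 for s, x in zip(sig, r)) / len(sig)
        if d < best_d:
            best, best_d = label, d
    return best

# Hypothetical 4-band reference signatures
refs = {
    "vegetation": [0.05, 0.08, 0.40, 0.30],
    "water":      [0.10, 0.06, 0.03, 0.01],
}
# A uniformly brightened vegetation-like signal still classifies correctly,
# which is the point of the normalization step.
label = msd_classify([0.10, 0.16, 0.80, 0.60], refs)
```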

  17. A web-based remote radiation treatment planning system using the remote desktop function of a computer operating system: a preliminary report.

    PubMed

    Suzuki, Keishiro; Hirasawa, Yukinori; Yaegashi, Yuji; Miyamoto, Hideki; Shirato, Hiroki

    2009-01-01

    We developed a web-based, remote radiation treatment planning system which allowed staff at an affiliated hospital to obtain support from a fully staffed central institution. Network security was based on a firewall and a virtual private network (VPN). Client computers were installed at a cancer centre, at a university hospital and at a staff home. We remotely operated the treatment planning computer using the Remote Desktop function built into the Windows operating system. Except for the initial setup of the VPN router, no special knowledge was needed to operate the remote radiation treatment planning system. There was a time lag that seemed to depend on the volume of data traffic on the Internet, but it did not affect smooth operation. The initial cost and running cost of the system were reasonable.

  18. The study on servo-control system in the large aperture telescope

    NASA Astrophysics Data System (ADS)

    Hu, Wei; Zhenchao, Zhang; Daxing, Wang

    2008-08-01

    Servo tracking is one of the crucial technologies that must be solved in the research and manufacture of large and extremely large astronomical telescopes. To meet the control requirements of such telescopes, this paper designs a servo tracking control system organized as a master-slave distributed system: the host computer sends steering instructions and receives the slave computer's operating status, while the slave computer executes the control algorithm and performs the real-time control. The servo control uses a direct-drive motor and adopts DSP technology to implement the direct torque control algorithm. This design not only improves control system performance but also greatly reduces the volume and cost of the control system. The design scheme is shown to be reasonable by calculation and simulation, and the system can be applied to large astronomical telescopes.

  19. A computer program for multiple decrement life table analyses.

    PubMed

    Poole, W K; Cooley, P C

    1977-06-01

    Life table analysis has traditionally been the tool of choice in analyzing the distribution of "survival" times when a parametric form for the survival curve could not reasonably be assumed. Chiang, in two papers [1,2], formalized the theory of life table analyses in a Markov chain framework and derived maximum likelihood estimates of the relevant parameters for the analyses. He also discussed how the techniques could be generalized to consider competing risks and follow-up studies. Although various computer programs exist for doing different types of life table analysis [3], to date there has not been a generally available, well-documented computer program to carry out multiple decrement analyses, either by Chiang's or any other method. This paper describes such a program developed by Research Triangle Institute. A user's manual, available at printing cost, supplements the contents of this paper with a discussion of the formulae used and a program listing.
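    The paper concerns multiple decrement analyses; as background, the basic single-decrement life-table step (a Chiang-style actuarial estimate in which withdrawals are treated as exposed for half an interval) can be sketched as follows. The counts are hypothetical:

```python
def life_table(n0, deaths, withdrawals):
    # Actuarial life table: within each interval, withdrawals are
    # assumed exposed for half the interval, so the effective number
    # at risk is n_i - w_i / 2.
    survival = [1.0]
    n = n0
    for d, w in zip(deaths, withdrawals):
        at_risk = n - w / 2.0
        q = d / at_risk          # conditional probability of death
        survival.append(survival[-1] * (1.0 - q))
        n -= d + w               # alive entering the next interval
    return survival

# 100 subjects, two intervals: 10 deaths / 0 withdrawals, then 9 / 10
surv = life_table(100, [10, 9], [0, 10])
```

    A multiple decrement analysis partitions each interval's deaths by cause, but the at-risk bookkeeping is the same.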

  20. Exploring High School Students Beginning Reasoning about Significance Tests with Technology

    ERIC Educational Resources Information Center

    García, Víctor N.; Sánchez, Ernesto

    2017-01-01

    In the present study we analyze how students reason or make inferences about a particular hypothesis-testing problem (without having studied formal methods of statistical inference) when using Fathom. They use Fathom to create an empirical sampling distribution through computer simulation. It is found that most students' reasoning relies on…

  1. Distributed intelligence for supervisory control

    NASA Technical Reports Server (NTRS)

    Wolfe, W. J.; Raney, S. D.

    1987-01-01

    Supervisory control systems must deal with various types of intelligence distributed throughout the layers of control. Typical layers are real-time servo control, off-line planning and reasoning subsystems and finally, the human operator. Design methodologies must account for the fact that the majority of the intelligence will reside with the human operator. Hierarchical decompositions and feedback loops as conceptual building blocks that provide a common ground for man-machine interaction are discussed. Examples of types of parallelism and parallel implementation on several classes of computer architecture are also discussed.

  2. Joint Probabilistic Reasoning About Coreference and Relations of Univeral Schema

    DTIC Science & Technology

    2017-10-01

    containing Barack and Michelle Obama state that they are married. A variety of one-shot and iterative methods have addressed the alignment problem [25...speed. Most of the computation time for these linear models is spent on the dot-product between the sparse features of an example and the weights...of the model. In some cases, it is clear that the use of all of these features is excessive and the example can be correctly classified without such

  3. Psychological and System Variables in Team Problem Solving: Experimental Studies of Computer-Mediated Participation.

    DTIC Science & Technology

    1982-12-01

    Fahrenheit, although it will occasionally go as high as 50 and as low as zero. Heavy clouds cover the sky three quarters of the time, with only one...1962) distinguished between "absolute" information, which at least one person possesses, and "distributed" information which is more generally available...shy individuals in our experimental context. In retrospect, this seems quite reasonable. Since the absolute level of requests to speak was so low among

  4. Understanding titanium-catalysed radical-radical reactions: a DFT study unravels the complex kinetics of ketone-nitrile couplings.

    PubMed

    Streuff, Jan; Himmel, Daniel; Younas, Sara L

    2018-04-03

    The computational investigation of a titanium-catalysed reductive radical-radical coupling is reported. The results match the conclusions from an earlier experimental study and enable a further interpretation of the previously observed complex reaction kinetics. Furthermore, the interplay between neutral and cationic reaction pathways in titanium(III)-catalysed reactions is investigated for the first time. The results show that hydrochloride additives and reaction byproducts play an important role in the respective equilibria. A full reaction profile is assembled and the computed activation barrier is found to be in reasonable agreement with the experiment. The conclusions are of fundamental importance to the field of low-valent titanium catalysis and the understanding of related catalytic radical-radical coupling reactions.

  5. A Parallel Biological Optimization Algorithm to Solve the Unbalanced Assignment Problem Based on DNA Molecular Computing.

    PubMed

    Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian

    2015-10-23

    The unbalanced assignment problem (UAP) is to optimally resolve the problem of assigning n jobs to m individuals (m < n), such that a minimum cost or maximum profit is obtained. It is a vitally important NP-complete problem in operation management and applied mathematics, with numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We design flexible-length DNA strands representing the different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range in O(mn) time. We extend the application of DNA molecular operations and exploit their massive parallelism to reduce the complexity of the computation.
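    As a classical point of comparison for the DNA approach (not the authors' method), the UAP can be reduced to a balanced assignment by padding the cost matrix with zero-cost dummy rows. A brute-force sketch on a toy instance (exponential in n, unlike the O(mn) DNA procedure; the costs are hypothetical):

```python
from itertools import permutations

def min_cost_assignment(cost):
    # Brute-force balanced assignment: rows = individuals,
    # columns = jobs, cost[i][j] = cost of individual i doing job j.
    n = len(cost)
    best = float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        best = min(best, total)
    return best

def min_cost_unbalanced(cost, n_jobs):
    # Unbalanced case (fewer individuals than jobs): pad with dummy
    # individuals whose cost is 0 for every job, so the jobs they
    # "take" are simply left unassigned in the optimum.
    m = len(cost)
    padded = cost + [[0] * n_jobs for _ in range(n_jobs - m)]
    return min_cost_assignment(padded)

# 2 individuals, 3 jobs (hypothetical costs)
cost = [[4, 1, 3],
        [2, 0, 5]]
best = min_cost_unbalanced(cost, 3)
```

    A polynomial-time alternative to the brute-force inner step is the Hungarian algorithm; the padding reduction is the same.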

  6. Computer aided fixture design - A case based approach

    NASA Astrophysics Data System (ADS)

    Tanji, Shekhar; Raiker, Saiesh; Mathew, Arun Tom

    2017-11-01

    Automated fixture design plays an important role in process planning and in the integration of CAD and CAM. An automated fixture setup design system is developed in which, once the fixturing surfaces and points are specified, modular fixture components are automatically selected to generate fixture units and placed into position so that the assembly conditions are satisfied. In the past, various knowledge-based systems have been developed to implement CAFD in practice. In this paper, to obtain an acceptable automated machining fixture design, a case-based reasoning method with a purpose-built retrieval system is proposed. The Visual Basic (VB) programming language is integrated with the SolidWorks API (Application Programming Interface) module to improve the retrieval procedure and reduce computational time. These properties are incorporated in a numerical simulation to determine the best fit for practical use.
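    The retrieval step at the heart of case-based reasoning can be sketched as weighted nearest-neighbour matching over normalized part features. The cases, feature names, and weights below are hypothetical, not taken from the paper's system:

```python
def retrieve_case(query, case_library, weights):
    # Nearest-neighbour retrieval, the core step of case-based
    # reasoning: score each stored fixture case by weighted feature
    # similarity to the query part and return the best match.
    def similarity(a, b):
        return sum(w * (1.0 - abs(a[k] - b[k])) for k, w in weights.items())
    return max(case_library, key=lambda case: similarity(query, case["features"]))

# Hypothetical fixture cases; features are normalized to [0, 1]
cases = [
    {"name": "vise-plate-A", "features": {"flatness": 0.9, "size": 0.2}},
    {"name": "angle-block-B", "features": {"flatness": 0.1, "size": 0.8}},
]
best = retrieve_case({"flatness": 0.85, "size": 0.25},
                     cases, {"flatness": 0.6, "size": 0.4})
```

    The retrieved case is then adapted to the new part, which is where the CAD-system integration does the heavy lifting.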

  7. Efficient approach to obtain free energy gradient using QM/MM MD simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asada, Toshio; Koseki, Shiro; The Research Institute for Molecular Electronic Devices

    2015-12-31

    The efficient computational approach denoted as the charge and atom dipole response kernel (CDRK) model to consider polarization effects of the quantum mechanical (QM) region is described using the charge response and the atom dipole response kernels for free energy gradient (FEG) calculations in the quantum mechanical/molecular mechanical (QM/MM) method. The CDRK model can reasonably reproduce energies and also energy gradients of QM and MM atoms obtained by expensive QM/MM calculations in a drastically reduced computational time. This model is applied to the acylation reaction in the hydrated trypsin-BPTI complex to optimize the reaction path on the free energy surface by means of FEG and the nudged elastic band (NEB) method.

  8. 76 FR 41523 - In the Matter of Certain Mobile Communications and Computer Devices and Components Thereof...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-14

    ... Communications and Computer Devices and Components Thereof; Notice of Commission Determination Not To Review an... in its entirety Inv. No. 337-TA-704, Certain Mobile Communications and Computer Devices and... importation of certain mobile communications and computer devices and components thereof by reason of...

  9. 26 CFR 1.167(b)-0 - Methods of computing depreciation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 2 2014-04-01 2014-04-01 false Methods of computing depreciation. 1.167(b)-0....167(b)-0 Methods of computing depreciation. (a) In general. Any reasonable and consistently applied method of computing depreciation may be used or continued in use under section 167. Regardless of the...

  10. Talking about Code: Integrating Pedagogical Code Reviews into Early Computing Courses

    ERIC Educational Resources Information Center

    Hundhausen, Christopher D.; Agrawal, Anukrati; Agarwal, Pawan

    2013-01-01

    Given the increasing importance of soft skills in the computing profession, there is good reason to provide students with more opportunities to learn and practice those skills in undergraduate computing courses. Toward that end, we have developed an active learning approach for computing education called the "Pedagogical Code Review"…

  11. Rhetorical Consequences of the Computer Society: Expert Systems and Human Communication.

    ERIC Educational Resources Information Center

    Skopec, Eric Wm.

    Expert systems are computer programs that solve selected problems by modelling domain-specific behaviors of human experts. These computer programs typically consist of an input/output system that feeds data into the computer and retrieves advice, an inference system using the reasoning and heuristic processes of human experts, and a knowledge…

  12. Computing Whether She Belongs: Stereotypes Undermine Girls' Interest and Sense of Belonging in Computer Science

    ERIC Educational Resources Information Center

    Master, Allison; Cheryan, Sapna; Meltzoff, Andrew N.

    2016-01-01

    Computer science has one of the largest gender disparities in science, technology, engineering, and mathematics. An important reason for this disparity is that girls are less likely than boys to enroll in necessary "pipeline courses," such as introductory computer science. Two experiments investigated whether high-school girls' lower…

  13. Changing a Generation's Way of Thinking: Teaching Computational Thinking through Programming

    ERIC Educational Resources Information Center

    Buitrago Flórez, Francisco; Casallas, Rubby; Hernández, Marcela; Reyes, Alejandro; Restrepo, Silvia; Danies, Giovanna

    2017-01-01

    Computational thinking (CT) uses concepts that are essential to computing and information science to solve problems, design and evaluate complex systems, and understand human reasoning and behavior. This way of thinking has important implications in computer sciences as well as in almost every other field. Therefore, we contend that CT should be…

  14. 26 CFR 1.167(b)-0 - Methods of computing depreciation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 2 2010-04-01 2010-04-01 false Methods of computing depreciation. 1.167(b)-0....167(b)-0 Methods of computing depreciation. (a) In general. Any reasonable and consistently applied method of computing depreciation may be used or continued in use under section 167. Regardless of the...

  15. 20 CFR 225.3 - PIA computation formulas.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... computed under one of two normal formulas determined by the employee's eligibility year. In addition, there.... The Average Monthly Earnings PIA formula is used to compute a PIA for one of two reasons: either the... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true PIA computation formulas. 225.3 Section 225.3...

  16. 20 CFR 225.3 - PIA computation formulas.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... computed under one of two normal formulas determined by the employee's eligibility year. In addition, there.... The Average Monthly Earnings PIA formula is used to compute a PIA for one of two reasons: either the... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true PIA computation formulas. 225.3 Section 225.3...

  17. 20 CFR 225.3 - PIA computation formulas.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... computed under one of two normal formulas determined by the employee's eligibility year. In addition, there.... The Average Monthly Earnings PIA formula is used to compute a PIA for one of two reasons: either the... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false PIA computation formulas. 225.3 Section 225.3...

  18. 20 CFR 225.3 - PIA computation formulas.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... computed under one of two normal formulas determined by the employee's eligibility year. In addition, there.... The Average Monthly Earnings PIA formula is used to compute a PIA for one of two reasons: either the... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false PIA computation formulas. 225.3 Section 225.3...

  19. 20 CFR 225.3 - PIA computation formulas.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... computed under one of two normal formulas determined by the employee's eligibility year. In addition, there.... The Average Monthly Earnings PIA formula is used to compute a PIA for one of two reasons: either the... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false PIA computation formulas. 225.3 Section 225.3...

  20. RASCAL: A Rudimentary Adaptive System for Computer-Aided Learning.

    ERIC Educational Resources Information Center

    Stewart, John Christopher

    Both the background of computer-assisted instruction (CAI) systems in general and the requirements of a computer-aided learning system which would be a reasonable assistant to a teacher are discussed. RASCAL (Rudimentary Adaptive System for Computer-Aided Learning) is a first attempt at defining a CAI system which would individualize the learning…

  1. 75 FR 8400 - In the Matter of Certain Notebook Computer Products and Components Thereof; Notice of Investigation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-24

    ... INTERNATIONAL TRADE COMMISSION [Inv. No. 337-TA-705] In the Matter of Certain Notebook Computer... United States after importation of certain notebook computer products and components thereof by reason of... after importation of certain notebook computer products or components thereof that infringe one or more...

  2. 77 FR 32996 - Certain Handheld Electronic Computing Devices, Related Software, and Components Thereof...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-04

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-769] Certain Handheld Electronic Computing Devices, Related Software, and Components Thereof; Termination of the Investigation Based on... electronic computing devices, related software, and components thereof by reason of infringement of certain...

  3. Computer-intensive simulation of solid-state NMR experiments using SIMPSON.

    PubMed

    Tošner, Zdeněk; Andersen, Rasmus; Stevensson, Baltzar; Edén, Mattias; Nielsen, Niels Chr; Vosegaard, Thomas

    2014-09-01

    Conducting large-scale solid-state NMR simulations requires fast computer software, potentially in combination with efficient computational resources, to complete within a reasonable time frame. Such simulations may involve large spin systems, multiple-parameter fitting of experimental spectra, or multiple-pulse experiment design using parameter scans, non-linear optimization, or optimal control procedures. To efficiently accommodate such simulations, we here present an improved version of the widely distributed open-source SIMPSON NMR simulation software package adapted to contemporary high-performance hardware setups. The software is optimized for fast performance on standard stand-alone computers, multi-core processors, and large clusters of identical nodes. We describe the novel features for fast computation, including internal matrix manipulations, propagator setups, and acquisition strategies. For efficient calculation of powder averages, we implemented the interpolation method of Alderman, Solum, and Grant, as well as the recently introduced fast Wigner transform interpolation technique. The potential of the optimal control toolbox is greatly enhanced by higher-precision gradients in combination with the efficient limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization algorithm. In addition, advanced parallelization can be used in all types of calculations, providing significant time reductions. SIMPSON thus reflects current knowledge in the field of numerical simulations of solid-state NMR experiments. The efficiency and novel features are demonstrated on representative simulations. Copyright © 2014 Elsevier Inc. All rights reserved.
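    The propagator setup such a simulator repeats for every element of a pulse sequence is a matrix exponential, U = exp(-iH dt). A self-contained single-spin sketch (truncated Taylor series, hbar = 1; illustrative only, not SIMPSON code):

```python
import cmath

def mat_mult(A, B):
    # 2x2 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def propagator(H, dt, terms=30):
    # U = exp(-i H dt) via a truncated Taylor series: each new term
    # is the previous one times (-i dt H / k).
    U = [[1, 0], [0, 1]]
    term = [[1, 0], [0, 1]]
    for k in range(1, terms):
        term = mat_mult(term, [[-1j * dt * h / k for h in row] for row in H])
        U = [[U[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return U

# On-resonance x pulse on a single spin-1/2: H = (w/2) * sigma_x,
# with w chosen so that w*dt = pi (a 180-degree pulse).
w = cmath.pi
H = [[0, w / 2], [w / 2, 0]]
U = propagator(H, 1.0)   # analytically, U = -i * sigma_x
```

    Production codes replace the Taylor series with diagonalization or scaling-and-squaring, but the role of the propagator in the simulation loop is the same.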

  4. An efficient General Transit Feed Specification (GTFS) enabled algorithm for dynamic transit accessibility analysis.

    PubMed

    Fayyaz S, S Kiavash; Liu, Xiaoyue Cathy; Zhang, Guohui

    2017-01-01

    The social functions of urbanized areas are highly dependent on and supported by convenient access to public transportation systems, particularly for the less privileged populations who have restrained auto ownership. To accurately evaluate public transit accessibility, it is critical to capture the spatiotemporal variation of transit services. This can be achieved by measuring the shortest paths or minimum travel time between origin-destination (OD) pairs at each time-of-day (e.g. every minute). In recent years, General Transit Feed Specification (GTFS) data has been gaining popularity for between-station travel time estimation due to its interoperability in spatiotemporal analytics. Many software packages, such as ArcGIS, have developed toolboxes to enable travel time estimation with GTFS. They perform reasonably well in calculating travel time between OD pairs for a specific time-of-day (e.g. 8:00 AM), yet can become computationally inefficient and impractical as the data dimensions increase (e.g. all times-of-day and large networks). In this paper, we introduce a new algorithm that is computationally elegant and mathematically efficient to address this issue. An open-source toolbox written in C++ is developed to implement the algorithm. We implemented the algorithm on the City of St. George's transit network to showcase the accessibility analysis enabled by the toolbox. The experimental evidence shows a significant reduction in computational time. The proposed algorithm and toolbox are easily transferable to other transit networks to allow transit agencies and researchers to perform high-resolution transit performance analysis.
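    One classical way to obtain earliest arrivals from GTFS-style timetable data (a simple baseline, not the authors' algorithm) is a Connection Scan: sort the connections by departure time and relax them in a single pass. The timetable below is hypothetical, and transfers within a stop are assumed instantaneous:

```python
def earliest_arrival(connections, source, target, start_time):
    # Connection Scan: each connection is (dep_stop, arr_stop,
    # dep_time, arr_time).  Scanning connections in departure-time
    # order yields earliest arrivals at every stop in one pass.
    best = {source: start_time}
    for dep, arr, t_dep, t_arr in sorted(connections, key=lambda c: c[2]):
        # Usable only if we can reach its departure stop in time
        if best.get(dep, float("inf")) <= t_dep and t_arr < best.get(arr, float("inf")):
            best[arr] = t_arr
    return best.get(target)

# Hypothetical timetable (times in minutes past midnight)
conns = [
    ("A", "B", 480, 490),
    ("B", "C", 495, 505),
    ("A", "C", 485, 520),  # slower direct trip
]
arrival = earliest_arrival(conns, "A", "C", 480)  # transfer beats direct
```

    Repeating the scan for each departure minute gives the minute-by-minute OD travel-time surface that accessibility analysis needs, which is exactly where naive per-time-of-day shortest-path tools become expensive.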

  5. An efficient General Transit Feed Specification (GTFS) enabled algorithm for dynamic transit accessibility analysis

    PubMed Central

    Fayyaz S., S. Kiavash; Zhang, Guohui

    2017-01-01

    The social functions of urbanized areas are highly dependent on and supported by convenient access to public transportation systems, particularly for the less privileged populations who have restrained auto ownership. To accurately evaluate public transit accessibility, it is critical to capture the spatiotemporal variation of transit services. This can be achieved by measuring the shortest paths or minimum travel time between origin-destination (OD) pairs at each time-of-day (e.g. every minute). In recent years, General Transit Feed Specification (GTFS) data has been gaining popularity for between-station travel time estimation due to its interoperability in spatiotemporal analytics. Many software packages, such as ArcGIS, have developed toolboxes to enable travel time estimation with GTFS. They perform reasonably well in calculating travel time between OD pairs for a specific time-of-day (e.g. 8:00 AM), yet can become computationally inefficient and impractical as the data dimensions increase (e.g. all times-of-day and large networks). In this paper, we introduce a new algorithm that is computationally elegant and mathematically efficient to address this issue. An open-source toolbox written in C++ is developed to implement the algorithm. We implemented the algorithm on the City of St. George’s transit network to showcase the accessibility analysis enabled by the toolbox. The experimental evidence shows a significant reduction in computational time. The proposed algorithm and toolbox are easily transferable to other transit networks to allow transit agencies and researchers to perform high-resolution transit performance analysis. PMID:28981544

  6. Design and performance of a large vocabulary discrete word recognition system. Volume 1: Technical report. [real time computer technique for voice data processing

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The development, construction, and test of a 100-word-vocabulary, near-real-time word recognition system are reported. Features include reasonable replacement of any one or all 100 words in the vocabulary, rapid learning of a new speaker, storage and retrieval of training sets, verbal or manual single-word deletion, continuous adaptation with verbal or manual error correction, on-line verification of vocabulary as spoken, system modes selectable via the verification display keyboard, the relationship of a classified word to neighboring words, and a versatile input/output interface to accommodate a variety of applications.

  7. The analysis of a generic air-to-air missile simulation model

    NASA Technical Reports Server (NTRS)

    Kaplan, Joseph A.; Chappell, Alan R.; Mcmanus, John W.

    1994-01-01

    A generic missile model was developed to evaluate the benefits of using a dynamic missile fly-out simulation system versus a static missile launch envelope system for air-to-air combat simulation. This paper examines the performance of a launch envelope model and a missile fly-out model. The launch envelope model bases its probability of killing the target aircraft on the target aircraft's position at the launch time of the weapon. The benefits gained from a launch envelope model are the simplicity of implementation and the minimal computational overhead required. A missile fly-out model takes into account the physical characteristics of the missile as it simulates the guidance, propulsion, and movement of the missile. The missile's probability of kill is based on the missile miss distance (or the minimum distance between the missile and the target aircraft). The problems associated with this method of modeling are a larger computational overhead, the additional complexity required to determine the missile miss distance, and the additional complexity of determining the reason(s) the missile missed the target. This paper evaluates the two methods and compares the results of running each method on a comprehensive set of test conditions.

  8. Neural networks and artificial intelligence in thoracic surgery.

    PubMed

    Esteva, Hugo; Núñez, Tomás G; Rodríguez, Ricardo O

    2007-08-01

    The human brain has billions of neurons and connections that cannot be emulated by computers. This structure could explain the anatomical basis of typically human psychological activities like intuition or artistic creation. On the other hand, the computer-organized way of "reasoning" through binary problems by systematic comparison of a large number of data, as AIM does, is impossible for humans to emulate. At the same time, AIM, through the use of different methods like ANN or DM systems, is able to give individualized answers to otherwise probabilistic population problems; hence its application in the assessment of surgical risk in lung resection candidates. With regard to AIM methodology, many issues could be addressed and argued, especially concerning data collection, because of the retrospective nature of the data on which the available contributions from the literature are based. In the larger studies, patients from different centers treated by different surgical teams were included. Both circumstances could have caused heterogeneity of the study groups, which, in turn, can lead to less reliable conclusions. Even if limited, our experience is appealing because AIM seems to be a potentially useful complementary tool to irreplaceable clinical judgment.

  9. A Computer-Assisted Personalized Approach in an Undergraduate Plant Physiology Class

    PubMed Central

    Artus, Nancy N.; Nadler, Kenneth D.

    1999-01-01

    We used Computer-Assisted Personalized Approach (CAPA), a networked teaching and learning tool that generates computer individualized homework problem sets, in our large-enrollment introductory plant physiology course. We saw significant improvement in student examination performance with regular homework assignments, with CAPA being an effective and efficient substitute for hand-graded homework. Using CAPA, each student received a printed set of similar but individualized problems of a conceptual (qualitative) and/or quantitative nature with quality graphics. Because each set of problems is unique, students were encouraged to work together to clarify concepts but were required to do their own work for credit. Students could enter answers multiple times without penalty, and they were able to obtain immediate feedback and hints until the due date. These features increased student time on task, allowing higher course standards and student achievement in a diverse student population. CAPA handles routine tasks such as grading, recording, summarizing, and posting grades. In anonymous surveys, students indicated an overwhelming preference for homework in CAPA format, citing several features such as immediate feedback, multiple tries, and on-line accessibility as reasons for their preference. We wrote and used more than 170 problems on 17 topics in introductory plant physiology, cataloging them in a computer library for general access. Representative problems are compared and discussed. PMID:10198076

  10. Annotation: a computational solution for streamlining metabolomics analysis

    PubMed Central

    Domingo-Almenara, Xavier; Montenegro-Burke, J. Rafael; Benton, H. Paul; Siuzdak, Gary

    2017-01-01

    Metabolite identification is still considered an imposing bottleneck in liquid chromatography mass spectrometry (LC/MS) untargeted metabolomics. The identification workflow usually begins with detecting relevant LC/MS peaks via peak-picking algorithms and retrieving putative identities based on accurate mass searching. However, accurate mass search alone provides poor evidence for metabolite identification. For this reason, computational annotation is used to reveal the underlying metabolites' monoisotopic masses, improving putative identification in addition to confirmation with tandem mass spectrometry. This review examines LC/MS data from a computational and analytical perspective, focusing on the occurrence of neutral losses and in-source fragments, to understand the challenges in computational annotation methodologies. Herein, we examine the state-of-the-art strategies for computational annotation including: (i) peak grouping or full scan (MS1) pseudo-spectra extraction, i.e., clustering all mass spectral signals stemming from each metabolite; (ii) annotation using ion adduction and mass distance among ion peaks; (iii) incorporation of biological knowledge such as biotransformations or pathways; (iv) tandem MS data; and (v) metabolite retention time calibration, usually achieved by prediction from molecular descriptors. Advantages and pitfalls of each of these strategies are discussed, as well as expected future trends in computational annotation. PMID:29039932
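    The mass-distance annotation strategy in (ii) can be sketched as matching m/z differences between co-detected peaks against known adduct spacings. The peak list below is hypothetical:

```python
def annotate_adducts(peaks, tol=0.005):
    # Pair peaks whose m/z difference matches a known adduct spacing,
    # so co-eluting ions can be traced back to one neutral metabolite.
    # Spacings are relative to [M+H]+ (standard monoisotopic masses).
    deltas = {
        "[M+Na]+":  21.98194,   # Na replaces H
        "[M+K]+":   37.95588,   # K replaces H
        "[M+NH4]+": 17.02655,   # NH4+ instead of H+
    }
    annotations = []
    for mz_i in peaks:
        for mz_j in peaks:
            for name, d in deltas.items():
                if abs((mz_j - mz_i) - d) <= tol:
                    annotations.append((mz_i, mz_j, name))
    return annotations

# Hypothetical feature list: an [M+H]+ at 180.0863 plus its Na adduct
peaks = [180.0863, 202.0682, 250.0000]
ann = annotate_adducts(peaks)
```

    Real annotation tools additionally require that the paired peaks co-elute and show correlated intensities across samples before grouping them.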

  11. Computations of Internal and External Axisymmetric Nozzle Aerodynamics at Transonic Speeds

    NASA Technical Reports Server (NTRS)

    Dalbello, Teryn; Georgiadis, Nicholas; Yoder, Dennis; Keith, Theo

    2003-01-01

Computational Fluid Dynamics (CFD) analyses of axisymmetric circular-arc boattail nozzles have been completed in support of NASA's Next Generation Launch Technology Program to investigate the effects of high-speed nozzle geometries on the nozzle internal flow and the surrounding boattail regions. These computations span the very difficult transonic flight regime, with shock-induced separations and strong adverse pressure gradients. External afterbody and internal nozzle pressure distributions computed with the Wind code are compared with experimental data. A range of turbulence models was examined in Wind, including an Explicit Algebraic Stress model (EASM). Computations on two nozzle geometries have been completed at freestream Mach numbers ranging from 0.6 to 0.9, driven by nozzle pressure ratios (NPR) ranging from 2.9 to 5. Results obtained on the converging-only geometry indicate reasonable agreement with experimental data, with the EASM and Shear Stress Transport (SST) turbulence models providing the best agreement. Calculations completed on a converging-diverging geometry involving large-scale internal flow separation did not converge to a true steady-state solution when run with variable time-stepping (steady-state mode). Calculations obtained using constant time-stepping (time-accurate mode) showed less variation in flow properties than the steady-state solutions. The failure to converge to a steady-state solution was traced to difficulties in using variable time-stepping with large-scale separations present in the flow. Nevertheless, time-averaged boattail surface pressure coefficients and internal nozzle pressures show fairly good agreement with experimental data. The SST turbulence model demonstrates the best overall agreement with experimental data.

  12. Modern Methods for fast generation of digital holograms

    NASA Astrophysics Data System (ADS)

    Tsang, P. W. M.; Liu, J. P.; Cheung, K. W. K.; Poon, T.-C.

    2010-06-01

With the advancement of computers, digital holography (DH) has become an area of interest that has gained much popularity. Research findings derived from this technology enable holograms representing three-dimensional (3-D) scenes to be acquired with optical means, or generated with numerical computation. In both cases, the holograms are in the form of numerical data that can be recorded, transmitted, and processed with digital techniques. On top of that, the availability of high-capacity digital storage and wide-band communication technologies has also paved the way for the emergence of real-time video holographic systems, enabling animated 3-D content to be encoded as holographic data and distributed via existing media. At present, development in DH has reached a reasonable degree of maturity, but the heavy computation involved still imposes difficulty in practical applications. In this paper, a summary of a number of successful accomplishments made recently in overcoming this problem is presented. Subsequently, we propose an economical framework that is suitable for real-time generation and transmission of holographic video signals over existing distribution media. The proposed framework includes an aspect of extending the depth range of the object scene, which is important for the display of large-scale objects.

  13. From neural oscillations to reasoning ability: Simulating the effect of the theta-to-gamma cycle length ratio on individual scores in a figural analogy test.

    PubMed

    Chuderski, Adam; Andrelczyk, Krzysztof

    2015-02-01

Several existing computational models of working memory (WM) have predicted a positive relationship (later confirmed empirically) between WM capacity and the individual ratio of theta to gamma oscillatory band lengths. These models assume that each gamma cycle represents one WM object (e.g., a binding of its features), whereas the theta cycle integrates such objects into the maintained list. As WM capacity strongly predicts reasoning, it might be expected that this ratio also predicts performance in reasoning tasks. However, no computational model has yet explained how the differences in the theta-to-gamma ratio found among adult individuals might contribute to their scores on a reasoning test. Here, we propose a novel model of how WM capacity constrains figural analogical reasoning, aimed at explaining inter-individual differences in reasoning scores in terms of the characteristics of oscillatory patterns in the brain. In the model, the gamma cycle encodes the bindings between objects/features and the roles they play in the relations processed. Asynchrony between consecutive gamma cycles results from lateral inhibition between oscillating bindings. Computer simulations showed that achieving the highest WM capacity required reaching an optimal level of inhibition. When too strong, this inhibition eliminated some bindings from WM, whereas when inhibition was too weak, the bindings became unstable and fell apart or were improperly grouped. The model aptly replicated several empirical effects and the distribution of individual scores, as well as the patterns of correlations found in a 100-person sample attempting the same reasoning task. Most importantly, the model's reasoning performance strongly depended on its theta-to-gamma ratio in the same way as the performance of human participants depended on their WM capacity. The data suggest that proper regulation of oscillations in the theta and gamma bands may be crucial for both high WM capacity and effective complex cognition. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
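The core quantitative idea behind such models, that capacity is bounded by how many gamma cycles fit into one theta cycle, can be sketched in a few lines. The band frequencies below are arbitrary illustrative values, not parameters from the authors' simulation:

```python
# Hedged sketch: WM capacity as the theta-to-gamma cycle-length ratio.
def wm_capacity(theta_hz, gamma_hz):
    """Items held = number of gamma cycles fitting into one theta cycle."""
    theta_period = 1.0 / theta_hz
    gamma_period = 1.0 / gamma_hz
    return int(theta_period // gamma_period)

# a longer theta cycle relative to gamma yields more item "slots"
cap_low  = wm_capacity(theta_hz=7.0, gamma_hz=40.0)
cap_high = wm_capacity(theta_hz=5.0, gamma_hz=47.0)
```

Individual differences in either band frequency shift the ratio, and hence the predicted number of bindings a theta cycle can integrate.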

  14. Knowing, Applying, and Reasoning about Arithmetic: Roles of Domain-General and Numerical Skills in Multiple Domains of Arithmetic Learning

    ERIC Educational Resources Information Center

    Zhang, Xiao; Räsänen, Pekka; Koponen, Tuire; Aunola, Kaisa; Lerkkanen, Marja-Kristiina; Nurmi, Jari-Erik

    2017-01-01

    The longitudinal relations of domain-general and numerical skills at ages 6-7 years to 3 cognitive domains of arithmetic learning, namely knowing (written computation), applying (arithmetic word problems), and reasoning (arithmetic reasoning) at age 11, were examined for a representative sample of 378 Finnish children. The results showed that…

  15. The Contribution of Reasoning to the Utilization of Feedback from Software When Solving Mathematical Problems

    ERIC Educational Resources Information Center

    Olsson, Jan

    2018-01-01

    This study investigates how students' reasoning contributes to their utilization of computer-generated feedback. Sixteen 16-year-old students solved a linear function task designed to present a challenge to them using dynamic software, GeoGebra, for assistance. The data were analysed with respect both to character of reasoning and to the use of…

  16. Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam

    2009-01-01

    This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
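The matrix measure (logarithmic norm) at the heart of this estimation approach is simple to compute. The sketch below shows only the Euclidean-norm case with an illustrative matrix; it does not reproduce the paper's specific delay-margin formula:

```python
# Sketch of the 2-norm matrix measure mu_2(A) = lambda_max((A + A^T)/2),
# which bounds trajectory growth: ||x(t)|| <= exp(mu_2(A) * t) * ||x(0)||.
import numpy as np

def matrix_measure_2(A):
    """Logarithmic norm (matrix measure) induced by the Euclidean norm."""
    sym = (A + A.T) / 2.0
    return float(np.max(np.linalg.eigvalsh(sym)))

# illustrative stable system matrix (not from the paper)
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
mu = matrix_measure_2(A)   # negative => exponentially contracting
```

A negative measure certifies exponential contraction of the undelayed dynamics, which is the analytical starting point for upper-bounding a time delay margin.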

  17. 77 FR 34063 - Certain Electronic Devices, Including Mobile Phones and Tablet Computers, and Components Thereof...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-08

    ... Phones and Tablet Computers, and Components Thereof Institution of Investigation AGENCY: U.S... computers, and components thereof by reason of infringement of certain claims of U.S. Patent No. 5,570,369... mobile phones and tablet computers, and components thereof that infringe one or more of claims 1-3 and 5...

  18. Access control and privacy in large distributed systems

    NASA Technical Reports Server (NTRS)

    Leiner, B. M.; Bishop, M.

    1986-01-01

Large-scale distributed systems consist of workstations, mainframe computers, supercomputers, and other types of servers, all connected by a computer network. These systems are being used in a variety of applications, including the support of collaborative scientific research. In such an environment, issues of access control and privacy arise. Access control is required for several reasons, including the protection of sensitive resources and cost control. Privacy is required for similar reasons, including the protection of a researcher's proprietary results. A possible architecture for integrating available computer and communications security technologies into a system that meets these requirements is described. This architecture is meant as a starting point for discussion, rather than the final answer.

  19. Designing Computer Learning Environments for Engineering and Computer Science: The Scaffolded Knowledge Integration Framework.

    ERIC Educational Resources Information Center

    Linn, Marcia C.

    1995-01-01

    Describes a framework called scaffolded knowledge integration and illustrates how it guided the design of two successful course enhancements in the field of computer science and engineering: the LISP Knowledge Integration Environment and the spatial reasoning environment. (101 references) (Author/MKR)

  20. 77 FR 52759 - Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-30

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-745] Certain Wireless Communication Devices, Portable Music and Data Processing Devices, Computers and Components Thereof; Notice of... communication devices, portable music and data processing devices, computers and components thereof by reason of...

  1. Conversational Simulation in Computer-Assisted Language Learning: Potential and Reality.

    ERIC Educational Resources Information Center

    Coleman, D. Wells

    1988-01-01

    Addresses the potential of conversational simulations for computer-assisted language learning (CALL) and reasons why this potential is largely untapped. Topics discussed include artificial intelligence; microworlds; parsing; realism versus reality in computer software; intelligent tutoring systems; and criteria to clarify what kinds of CALL…

  2. A Computational Investigation of Sooting Limits of Spherical Diffusion Flames

    NASA Technical Reports Server (NTRS)

    Lecoustre, V. R.; Chao, B. H.; Sunderland, P. B.; Urban, D. L.; Stocker, D. P.; Axelbaum, R. L.

    2007-01-01

    Limiting conditions for soot particle inception in spherical diffusion flames were investigated numerically. The flames were modeled using a one-dimensional, time accurate diffusion flame code with detailed chemistry and transport and an optically thick radiation model. Seventeen normal and inverse flames were considered, covering a wide range of stoichiometric mixture fraction, adiabatic flame temperature, and residence time. These flames were previously observed to reach their sooting limits after 2 s of microgravity. Sooting-limit diffusion flames with residence times longer than 200 ms were found to have temperatures near 1190 K where C/O = 0.6, whereas flames with shorter residence times required increased temperatures. Acetylene was found to be a reasonable surrogate for soot precursor species in these flames, having peak mole fractions of about 0.01.

  3. Improving the learning of clinical reasoning through computer-based cognitive representation.

    PubMed

    Wu, Bian; Wang, Minhong; Johnson, Janice M; Grotzer, Tina A

    2014-01-01

Objective: Clinical reasoning is usually taught using a problem-solving approach, which is widely adopted in medical education. However, learning through problem solving is difficult as a result of the contextualization and dynamic aspects of actual problems. Moreover, knowledge acquired from problem-solving practice tends to be inert and fragmented. This study proposed a computer-based cognitive representation approach that externalizes and facilitates the complex processes in learning clinical reasoning. The approach is operationalized in a computer-based cognitive representation tool that involves argument mapping to externalize the problem-solving process and concept mapping to reveal the knowledge constructed from the problems. Methods: Twenty-nine Year 3 or higher students from a medical school in east China participated in the study. Participants used the proposed approach implemented in an e-learning system to complete four learning cases in 4 weeks on an individual basis. For each case, students interacted with the problem to capture critical data, generate and justify hypotheses, make a diagnosis, recall relevant knowledge, and update their conceptual understanding of the problem domain. Meanwhile, students used the computer-based cognitive representation tool to articulate and represent the key elements and their interactions in the learning process. Results: A significant improvement was found in students' learning products from the beginning to the end of the study, consistent with students' reports of close-to-moderate progress in developing problem-solving and knowledge-construction abilities. No significant differences were found between the pretest and posttest scores within the 4-week period. The cognitive representation approach was found to provide more formative assessment. Conclusions: The computer-based cognitive representation approach improved the learning of clinical reasoning in both problem solving and knowledge construction.

  4. Improving the learning of clinical reasoning through computer-based cognitive representation

    PubMed Central

    Wu, Bian; Wang, Minhong; Johnson, Janice M.; Grotzer, Tina A.

    2014-01-01

Objective: Clinical reasoning is usually taught using a problem-solving approach, which is widely adopted in medical education. However, learning through problem solving is difficult as a result of the contextualization and dynamic aspects of actual problems. Moreover, knowledge acquired from problem-solving practice tends to be inert and fragmented. This study proposed a computer-based cognitive representation approach that externalizes and facilitates the complex processes in learning clinical reasoning. The approach is operationalized in a computer-based cognitive representation tool that involves argument mapping to externalize the problem-solving process and concept mapping to reveal the knowledge constructed from the problems. Methods: Twenty-nine Year 3 or higher students from a medical school in east China participated in the study. Participants used the proposed approach implemented in an e-learning system to complete four learning cases in 4 weeks on an individual basis. For each case, students interacted with the problem to capture critical data, generate and justify hypotheses, make a diagnosis, recall relevant knowledge, and update their conceptual understanding of the problem domain. Meanwhile, students used the computer-based cognitive representation tool to articulate and represent the key elements and their interactions in the learning process. Results: A significant improvement was found in students’ learning products from the beginning to the end of the study, consistent with students’ reports of close-to-moderate progress in developing problem-solving and knowledge-construction abilities. No significant differences were found between the pretest and posttest scores within the 4-week period. The cognitive representation approach was found to provide more formative assessment. Conclusions: The computer-based cognitive representation approach improved the learning of clinical reasoning in both problem solving and knowledge construction. PMID:25518871

  5. Improving the learning of clinical reasoning through computer-based cognitive representation.

    PubMed

    Wu, Bian; Wang, Minhong; Johnson, Janice M; Grotzer, Tina A

    2014-01-01

Clinical reasoning is usually taught using a problem-solving approach, which is widely adopted in medical education. However, learning through problem solving is difficult as a result of the contextualization and dynamic aspects of actual problems. Moreover, knowledge acquired from problem-solving practice tends to be inert and fragmented. This study proposed a computer-based cognitive representation approach that externalizes and facilitates the complex processes in learning clinical reasoning. The approach is operationalized in a computer-based cognitive representation tool that involves argument mapping to externalize the problem-solving process and concept mapping to reveal the knowledge constructed from the problems. Twenty-nine Year 3 or higher students from a medical school in east China participated in the study. Participants used the proposed approach implemented in an e-learning system to complete four learning cases in 4 weeks on an individual basis. For each case, students interacted with the problem to capture critical data, generate and justify hypotheses, make a diagnosis, recall relevant knowledge, and update their conceptual understanding of the problem domain. Meanwhile, students used the computer-based cognitive representation tool to articulate and represent the key elements and their interactions in the learning process. A significant improvement was found in students' learning products from the beginning to the end of the study, consistent with students' reports of close-to-moderate progress in developing problem-solving and knowledge-construction abilities. No significant differences were found between the pretest and posttest scores within the 4-week period. The cognitive representation approach was found to provide more formative assessment. The computer-based cognitive representation approach improved the learning of clinical reasoning in both problem solving and knowledge construction.

  6. Musculoskeletal-see-through mirror: computational modeling and algorithm for whole-body muscle activity visualization in real time.

    PubMed

    Murai, Akihiko; Kurosaki, Kosuke; Yamane, Katsu; Nakamura, Yoshihiko

    2010-12-01

In this paper, we present a system that estimates and visualizes muscle tensions in real time using optical motion capture and electromyography (EMG). The system overlays a rendered musculoskeletal human model on top of a live video image of the subject, giving the subject the impression of seeing the muscles, with tension information, through the clothing and skin. The main technical challenge lies in real-time estimation of muscle tension. Because existing algorithms that use mathematical optimization to distribute joint torques to muscle tensions are too slow for our purpose, we developed a new algorithm that computes a reasonable approximation of muscle tensions based on the internal connections between muscles known as neuronal binding. The algorithm can estimate the tensions of 274 muscles in only 16 ms, and the whole visualization system runs at about 15 fps. The developed system has been applied to assisting sports training, and user case studies show its usefulness. Possible applications include interfaces for assisting rehabilitation. Copyright © 2010 Elsevier Ltd. All rights reserved.
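The conventional optimization baseline that the abstract describes as too slow, distributing joint torques to nonnegative muscle tensions, can be sketched as a nonnegative least-squares problem. The moment arms and torques below are illustrative values, and this is not the paper's neuronal-binding algorithm:

```python
# Sketch of torque-to-tension distribution: find muscle tensions f >= 0
# such that J_T @ f reproduces the desired joint torques (illustrative data).
import numpy as np
from scipy.optimize import nnls

# moment arms (m) of three muscles about two joints -- hypothetical values
J_T = np.array([[0.04, -0.03,  0.02],
                [0.00,  0.05, -0.04]])
tau = np.array([1.2, 0.8])          # desired joint torques (N*m)

# nonnegative least squares: muscles can only pull, never push
tensions, residual = nnls(J_T, tau)
```

Solving such a problem for hundreds of muscles at every frame is what motivates replacing the optimization with a cheaper approximation.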

  7. Is the local linearity of space-time inherited from the linearity of probabilities?

    NASA Astrophysics Data System (ADS)

    Müller, Markus P.; Carrozza, Sylvain; Höhn, Philipp A.

    2017-02-01

    The appearance of linear spaces, describing physical quantities by vectors and tensors, is ubiquitous in all of physics, from classical mechanics to the modern notion of local Lorentz invariance. However, as natural as this seems to the physicist, most computer scientists would argue that something like a ‘local linear tangent space’ is not very typical and in fact a quite surprising property of any conceivable world or algorithm. In this paper, we take the perspective of the computer scientist seriously, and ask whether there could be any inherently information-theoretic reason to expect this notion of linearity to appear in physics. We give a series of simple arguments, spanning quantum information theory, group representation theory, and renormalization in quantum gravity, that supports a surprising thesis: namely, that the local linearity of space-time might ultimately be a consequence of the linearity of probabilities. While our arguments involve a fair amount of speculation, they have the virtue of being independent of any detailed assumptions on quantum gravity, and they are in harmony with several independent recent ideas on emergent space-time in high-energy physics.

  8. HLYWD: a program for post-processing data files to generate selected plots or time-lapse graphics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munro, J.K. Jr.

    1980-05-01

The program HLYWD is a post-processor of output files generated by large plasma simulation computations or of data files containing a time sequence of plasma diagnostics. It is intended to be used in a production mode for either type of application; i.e., it allows one to generate, along with the graphics sequence, segments containing a title, credits to those who performed the work, text describing the graphics, and acknowledgement of the funding agency. The current version is designed to generate 3D plots and allows one to select the type of display (linear or semi-log scales), the normalization of function values for display purposes, the viewing perspective, and an option allowing continuous rotation of surfaces. This program was developed with the intention of being relatively easy to use, reasonably flexible, and requiring a minimum investment of the user's time. It uses the TV80 library of graphics software and ORDERLIB system software on the CDC 7600 at the National Magnetic Fusion Energy Computing Center at Lawrence Livermore Laboratory in California.

  9. Theory-Guided Technology in Computer Science.

    ERIC Educational Resources Information Center

    Ben-Ari, Mordechai

    2001-01-01

    Examines the history of major achievements in computer science as portrayed by winners of the prestigious Turing award and identifies a possibly unique activity called Theory-Guided Technology (TGT). Researchers develop TGT by using theoretical results to create practical technology. Discusses reasons why TGT is practical in computer science and…

  10. Computer-Based Education (CBE): Tomorrow's Traditional System.

    ERIC Educational Resources Information Center

    Rizza, Peter J., Jr.

    1981-01-01

    Examines the role of computer technology in education; discusses reasons for the slow evolution of Computer-Based Education (CBE); explores educational areas in which CBE can be used; presents barriers to widespread use of CBE; and describes the responsibilities of education, government, and business in supporting technology-oriented education.…

  11. Learner Assessment Methods Using a Computer Based Interactive Videodisc System.

    ERIC Educational Resources Information Center

    Ehrlich, Lisa R.

    This paper focuses on item design considerations faced by instructional designers and evaluators when using computer videodisc delivery systems as a means of assessing learner comprehension and competencies. Media characteristics of various interactive computer/videodisc training systems are briefly discussed as well as reasons for using such…

  12. Participation, Interaction and Social Presence: An Exploratory Study of Collaboration in Online Peer Review Groups

    ERIC Educational Resources Information Center

    Zhao, Huahui; Sullivan, Kirk P. H.; Mellenius, Ingmarie

    2014-01-01

    A key reason for using asynchronous computer conferencing in instruction is its potential for supporting collaborative learning. However, few studies have examined collaboration in computer conferencing. This study examined collaboration in six peer review groups within an asynchronous computer conferencing. Eighteen tertiary students participated…

  13. Integrating Human and Computer Intelligence. Technical Report No. 32.

    ERIC Educational Resources Information Center

    Pea, Roy D.

    This paper explores the thesis that advances in computer applications and artificial intelligence have important implications for the study of development and learning in psychology. Current approaches to the use of computers as devices for problem solving, reasoning, and thinking--i.e., expert systems and intelligent tutoring systems--are…

  14. 75 FR 65656 - In the Matter of: Certain Notebook Computer Products and Components Thereof; Notice of Commission...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-26

    ... States after importation of certain notebook computer products and components thereof by reason of... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-705] In the Matter of: Certain Notebook Computer Products and Components Thereof; Notice of Commission Determination Not To Review an Initial...

  15. Metareasoning and Social Evaluations in Cognitive Agents

    NASA Astrophysics Data System (ADS)

    Pinyol, Isaac; Sabater-Mir, Jordi

    Reputation mechanisms have been recognized as one of the key technologies for designing multi-agent systems. They are especially relevant in complex open environments, serving as a decentralized mechanism for controlling interactions among agents. Cognitive agents operating in such complex societies must use reputation information not only to select partners to interact with, but also in metareasoning processes that change their reasoning rules. This is the focus of this paper. We argue for the necessity of allowing, as cognitive system designers, a certain degree of freedom in the reasoning rules of agents, and we describe cognitive approaches to agency that support this idea. Furthermore, taking the computational reputation model Repage and its integration in a BDI architecture as a base, we use these ideas to specify metarules and processes that modify the reasoning paths of the agent at run time. Concretely, we propose a metarule to update the link between Repage and the belief base, and a metarule and a process to update an axiom incorporated in the belief logic of the agent. For the latter, we also provide empirical results showing the evolution of agents that use it.

  16. Modelling Chemical Reasoning to Predict and Invent Reactions.

    PubMed

    Segler, Marwin H S; Waller, Mark P

    2017-05-02

The ability to reason beyond established knowledge allows organic chemists to solve synthetic problems and invent novel transformations. Herein, we propose a model that mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180 000 randomly selected binary reactions. The data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re-)discovering novel transformations (even including transition metal-catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by considering only the intrinsic local structure of the graph; because each single reaction prediction is typically achieved in a sub-second time frame, the model can be used as a high-throughput generator of reaction hypotheses for reaction discovery. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
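The "missing link" formulation can be illustrated with a toy common-neighbour heuristic. The molecules, partners, and scoring below are purely hypothetical, and the paper's actual model is far richer than this sketch:

```python
# Toy link prediction in a reaction knowledge graph: molecules that share
# many known reaction partners are plausible candidates for a missing link.
from itertools import combinations

# molecule -> set of known reaction partners (hypothetical data)
reacts_with = {
    "A": {"X", "Y", "Z"},
    "B": {"X", "Y"},
    "C": {"Z"},
}

def score(m1, m2):
    """Common-neighbour score: count of shared reaction partners."""
    return len(reacts_with[m1] & reacts_with[m2])

# rank candidate (unobserved) molecule pairs by plausibility
ranked = sorted(combinations(reacts_with, 2),
                key=lambda pair: score(*pair), reverse=True)
```

Here A and B share two partners, so the hypothetical A-B reaction ranks first among candidate links.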

  17. An FPGA Platform for Real-Time Simulation of Spiking Neuronal Networks

    PubMed Central

    Pani, Danilo; Meloni, Paolo; Tuveri, Giuseppe; Palumbo, Francesca; Massobrio, Paolo; Raffo, Luigi

    2017-01-01

    In recent years, the idea of dynamically interfacing biological neurons with artificial ones has become increasingly pressing, essentially because of the design of innovative neuroprostheses in which biological cell assemblies of the brain can be substituted by artificial ones. For closed-loop experiments with biological neuronal networks interfaced with in silico modeled networks, several technological challenges need to be faced, from the low-level interfacing between the living tissue and the computational model to the implementation of the latter in a suitable form for real-time processing. Field programmable gate arrays (FPGAs) can improve flexibility when simple neuronal models are required, offering good accuracy, real-time performance, and the possibility of creating a hybrid system without any custom hardware, simply by programming the device to achieve the required functionality. In this paper, this possibility is explored by presenting a modular and efficient FPGA design of an in silico spiking neural network based on the Izhikevich model. The proposed system, prototyped on a Xilinx Virtex 6 device, can simulate a fully connected network of up to 1,440 neurons in real time at a sampling rate of 10 kHz, which is reasonable for small- to medium-scale extracellular closed-loop experiments. PMID:28293163

  18. Emergence of Landauer transport from quantum dynamics: A model Hamiltonian approach

    NASA Astrophysics Data System (ADS)

    Pal, Partha Pratim; Ramakrishna, S.; Seideman, Tamar

    2018-04-01

    The Landauer expression for computing current-voltage characteristics in nanoscale devices is efficient but not suited to transient phenomena and a time-dependent current because it is applicable only when the charge carriers transition into a steady flux after an external perturbation. In this article, we construct a very general expression for time-dependent current in an electrode-molecule-electrode arrangement. Utilizing a model Hamiltonian (consisting of the subsystem energy levels and their electronic coupling terms), we propagate the Schrödinger wave function equation to numerically compute the time-dependent population in the individual subsystems. The current in each electrode (defined in terms of the rate of change of the corresponding population) has two components, one due to the charges originating from the same electrode and the other due to the charges initially residing at the other electrode. We derive an analytical expression for the first component and illustrate that it agrees reasonably with its numerical counterpart at early times. Exploiting the unitary evolution of a wavefunction, we construct a more general Landauer style formula and illustrate the emergence of Landauer transport from our simulations without the assumption of time-independent charge flow. Our generalized Landauer formula is valid at all times for models beyond the wide-band limit, non-uniform electrode density of states and for time and energy-dependent electronic coupling between the subsystems. Subsequently, we investigate the ingredients in our model that regulate the onset time scale of this steady state. We compare the performance of our general current expression with the Landauer current for time-dependent electronic coupling. Finally, we comment on the applicability of the Landauer formula to compute hot-electron current arising upon plasmon decoherence.
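The basic numerical experiment, propagating a wavefunction on a lead-site-lead chain and watching population flow between the leads, can be sketched as follows. The tight-binding parameters and chain sizes are illustrative and are not the authors' model Hamiltonian:

```python
# Sketch: time-dependent population transfer in an electrode-molecule-electrode
# chain, propagated exactly in the eigenbasis of a Hermitian Hamiltonian.
import numpy as np

n_lead = 20                        # sites per electrode
n = 2 * n_lead + 1                 # total sites; the middle site is the "molecule"

# nearest-neighbour tight-binding Hamiltonian (hbar = 1, hopping = 1)
H = np.zeros((n, n))
for i in range(n - 1):
    H[i, i + 1] = H[i + 1, i] = -1.0

# start with the charge carrier localized on the leftmost lead site
psi0 = np.zeros(n, dtype=complex)
psi0[0] = 1.0

E, V = np.linalg.eigh(H)           # spectral decomposition for exact propagation

def lead_populations(t):
    """Return (left-lead, right-lead) population at time t."""
    psi_t = V @ (np.exp(-1j * E * t) * (V.conj().T @ psi0))
    p = np.abs(psi_t) ** 2
    return p[:n_lead].sum(), p[n_lead + 1:].sum()

left0, right0 = lead_populations(0.0)
left1, right1 = lead_populations(15.0)
# the lead current is the rate of change of its population; by t = 15
# charge has visibly flowed from the left lead toward the right one
```

Differentiating such populations in time gives the transient current whose long-time limit the article compares against the Landauer result.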

  19. Emergence of Landauer transport from quantum dynamics: A model Hamiltonian approach.

    PubMed

    Pal, Partha Pratim; Ramakrishna, S; Seideman, Tamar

    2018-04-14

    The Landauer expression for computing current-voltage characteristics in nanoscale devices is efficient but not suited to transient phenomena and a time-dependent current because it is applicable only when the charge carriers transition into a steady flux after an external perturbation. In this article, we construct a very general expression for time-dependent current in an electrode-molecule-electrode arrangement. Utilizing a model Hamiltonian (consisting of the subsystem energy levels and their electronic coupling terms), we propagate the wave function under the time-dependent Schrödinger equation to numerically compute the time-dependent population in the individual subsystems. The current in each electrode (defined in terms of the rate of change of the corresponding population) has two components, one due to the charges originating from the same electrode and the other due to the charges initially residing at the other electrode. We derive an analytical expression for the first component and illustrate that it agrees reasonably with its numerical counterpart at early times. Exploiting the unitary evolution of a wavefunction, we construct a more general Landauer-style formula and illustrate the emergence of Landauer transport from our simulations without the assumption of time-independent charge flow. Our generalized Landauer formula is valid at all times for models beyond the wide-band limit, for non-uniform electrode densities of states, and for time- and energy-dependent electronic coupling between the subsystems. Subsequently, we investigate the ingredients in our model that regulate the onset time scale of this steady state. We compare the performance of our general current expression with the Landauer current for time-dependent electronic coupling. Finally, we comment on the applicability of the Landauer formula to compute hot-electron current arising upon plasmon decoherence.
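
The propagation scheme described in the two records above can be sketched on a toy lead-molecule-lead tight-binding chain. All couplings, the molecular level offset, and the time step below are invented for illustration; they are not the authors' model parameters.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical tight-binding chain: N-site left lead, 1 molecular site, N-site right lead.
N = 8
n = 2 * N + 1
H = np.zeros((n, n))
for i in range(n - 1):
    H[i, i + 1] = H[i + 1, i] = -1.0   # uniform electronic coupling (assumed units)
H[N, N] = 0.5                          # molecular level offset (assumed)

dt = 0.05
U = expm(-1j * H * dt)                 # one-step unitary propagator

# Start with an electron localized on the leftmost lead site.
psi = np.zeros(n, dtype=complex)
psi[0] = 1.0

pops_right = []
for _ in range(200):
    psi = U @ psi
    pops_right.append(np.sum(np.abs(psi[N + 1:]) ** 2))

# Current into the right lead ~ rate of change of its population.
current = np.diff(pops_right) / dt
print(round(pops_right[-1], 3))
```

Because the propagator is unitary, the total population stays normalized, which is the property the generalized Landauer-style formula in the abstract exploits.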

  20. Should Computing Be Taught in Single-Sex Environments? An Analysis of the Computing Learning Environment of Upper Secondary Students

    ERIC Educational Resources Information Center

    Logan, Keri

    2007-01-01

    It has been well established in the literature that girls are turning their backs on computing courses at all levels of the education system. One reason given for this is that the computer learning environment is not conducive to girls, and it is often suggested that they would benefit from learning computing in a single-sex environment. The…

  1. High Altitude Balloon Flight Path Prediction and Site Selection Based On Computer Simulations

    NASA Astrophysics Data System (ADS)

    Linford, Joel

    2010-10-01

    Interested in the upper atmosphere, the Weber State University physics department has developed a High Altitude Reconnaissance Balloon for Outreach and Research team, also known as HARBOR. HARBOR enables Weber State University to take a variety of measurements from ground level to altitudes as high as 100,000 feet. The flight paths of these balloons can extend as far as 100 miles from the launch zone, making the choice of where and when to fly critical. To ensure the ability to recover the packages in a reasonable amount of time, launch days and times are carefully selected using computer simulations that limit flight tracks to approximately 40 miles from the launch zone. The computer simulations use atmospheric data collected by the National Oceanic and Atmospheric Administration (NOAA) to plot what flights might have looked like in the past and to predict future flights. Using these simulations, a launch zone has been selected in Duchesne, Utah, which has hosted eight successful flights over the course of the last three years, all of which have been recovered. Several secondary launch zones in western Wyoming, southern Idaho, and northern Utah are also being considered.
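
A minimal version of such a flight-path prediction integrates horizontal wind drift through altitude layers during ascent and descent. The wind profile, ascent/descent rates, and burst altitude below are invented stand-ins, not NOAA data.

```python
# Hypothetical wind profile: (altitude_m, wind_east_mps, wind_north_mps)
layers = [(0, 2, 1), (5000, 8, 3), (10000, 15, -2), (20000, 25, 5), (30000, 10, 0)]

def wind_at(alt):
    """Piecewise-constant wind from the highest layer at or below `alt`."""
    u = v = 0.0
    for a, wu, wv in layers:
        if alt >= a:
            u, v = wu, wv
    return u, v

def simulate(ascent_rate=5.0, burst_alt=30000.0, descent_rate=8.0, dt=10.0):
    """Integrate horizontal drift during ascent and descent; returns (east_km, north_km)."""
    x = y = 0.0
    alt, rate = 0.0, ascent_rate
    while True:
        u, v = wind_at(alt)
        x += u * dt
        y += v * dt
        alt += rate * dt
        if rate > 0 and alt >= burst_alt:
            rate = -descent_rate      # burst: switch to parachute descent
        elif rate < 0 and alt <= 0:
            break                     # landed
    return x / 1000.0, y / 1000.0

east, north = simulate()
print(round(east, 1), round(north, 1))
```

Running this over historical wind profiles for candidate dates is the essence of the launch-day selection the record describes.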

  2. ASME V&V challenge problem: Surrogate-based V&V

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beghini, Lauren L.; Hough, Patricia D.

    2015-12-18

    The process of verification and validation can be resource intensive. From the computational model perspective, the resource demand typically arises from long simulation run times on multiple cores coupled with the need to characterize and propagate uncertainties. In addition, predictive computations performed for safety and reliability analyses have similar resource requirements. For this reason, there is a tradeoff between the time required to complete the requisite studies and the fidelity or accuracy of the results that can be obtained. At a high level, our approach is cast within a validation hierarchy that provides a framework in which we perform sensitivity analysis, model calibration, model validation, and prediction. The evidence gathered as part of these activities is mapped into the Predictive Capability Maturity Model to assess credibility of the model used for the reliability predictions. With regard to specific technical aspects of our analysis, we employ surrogate-based methods, primarily based on polynomial chaos expansions and Gaussian processes, for model calibration, sensitivity analysis, and uncertainty quantification in order to reduce the number of simulations that must be done. The goal is to tip the tradeoff balance to improving accuracy without increasing the computational demands.
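
The surrogate idea can be illustrated with a small Gaussian-process regression that replaces an "expensive" simulator by a cheap predictor fitted to a handful of runs. The test function, kernel length scale, and sample count here are invented, not the challenge-problem setup.

```python
import numpy as np

def rbf(a, b, length=0.5):
    """Squared-exponential kernel."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

# Pretend each call to `expensive_model` is a long simulation run.
def expensive_model(x):
    return np.sin(3 * x) + 0.5 * x

X = np.linspace(0, 2, 8)                 # only 8 "simulations"
y = expensive_model(X)

# GP posterior mean/variance at dense query points (noise-free observations).
K = rbf(X, X) + 1e-10 * np.eye(len(X))
Xq = np.linspace(0, 2, 101)
Ks = rbf(Xq, X)
mean = Ks @ np.linalg.solve(K, y)
var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)

err = np.max(np.abs(mean - expensive_model(Xq)))
print(round(err, 3))
```

Uncertainty quantification then runs on the surrogate's mean and variance instead of on thousands of full simulations, which is the cost-accuracy tradeoff the abstract describes.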

  3. An Approach to Economic Dispatch with Multiple Fuels Based on Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Sriyanyong, Pichet

    2011-06-01

    Particle Swarm Optimization (PSO), a stochastic optimization technique, shows superiority to other evolutionary computation techniques in terms of shorter computation time, easy implementation with high-quality solutions, stable convergence characteristics, and independence from initialization. For this reason, this paper proposes the application of PSO to the Economic Dispatch (ED) problem, which occurs in the operational planning of power systems. In this study, the ED problem is categorized according to the characteristics of its cost function: the ED problem with a smooth cost function and the ED problem with multiple fuels. Taking the multiple fuels into account makes the problem more realistic. The experimental results show that the proposed PSO algorithm is more efficient than the previous approaches under consideration, as well as highly promising in real-world applications.
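
A standard global-best PSO applied to the smooth-cost ED case can be sketched as follows; the three-unit cost coefficients, limits, and demand are hypothetical, and the power-balance constraint is handled with a simple penalty (the multiple-fuel variant is omitted).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-unit system: cost_i(P) = a + b*P + c*P^2, P in [Pmin, Pmax] (MW).
a = np.array([100.0, 120.0, 90.0])
b = np.array([2.0, 1.8, 2.2])
c = np.array([0.01, 0.012, 0.008])
Pmin, Pmax = 50.0, 200.0
demand = 400.0

def cost(P):
    fuel = np.sum(a + b * P + c * P ** 2, axis=-1)
    penalty = 1e4 * np.abs(np.sum(P, axis=-1) - demand)   # power-balance penalty
    return fuel + penalty

# Global-best PSO.
n_particles, dim, iters = 40, 3, 300
w, c1, c2 = 0.7, 1.5, 1.5
X = rng.uniform(Pmin, Pmax, (n_particles, dim))
V = np.zeros_like(X)
pbest, pbest_val = X.copy(), cost(X)
g = pbest[np.argmin(pbest_val)]
for _ in range(iters):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
    X = np.clip(X + V, Pmin, Pmax)
    vals = cost(X)
    better = vals < pbest_val
    pbest[better], pbest_val[better] = X[better], vals[better]
    g = pbest[np.argmin(pbest_val)]

print(np.round(g, 1), round(cost(g), 1))
```

The clipping step enforces the generator limits directly, while the balance constraint is only softly enforced through the penalty weight.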

  4. A Parallel Biological Optimization Algorithm to Solve the Unbalanced Assignment Problem Based on DNA Molecular Computing

    PubMed Central

    Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian

    2015-01-01

    The unbalanced assignment problem (UAP) is to optimally resolve the problem of assigning n jobs to m individuals (m < n), such that minimum cost or maximum profit is obtained. It is a vitally important Non-deterministic Polynomial (NP) complete problem in operation management and applied mathematics, with numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We design flexible-length DNA strands representing different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range and O(mn) time. We extend the application of DNA molecular operations and exploit their simultaneity to simplify the complexity of the computation. PMID:26512650
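
For scale, the conventional baseline the DNA approach competes against is exhaustive search over all m**n job-to-individual assignments. The tiny cost matrix below is invented; this sketch is the brute-force reference, not the paper's DNA algorithm.

```python
from itertools import product

# Hypothetical cost matrix: cost[i][j] = cost of individual i doing job j (m=2, n=4).
cost = [
    [4, 2, 5, 7],
    [3, 6, 1, 2],
]
m, n = len(cost), len(cost[0])

# Exhaustive search over m**n assignments.
best_val, best_assign = float("inf"), None
for assign in product(range(m), repeat=n):
    if len(set(assign)) < m:      # every individual must get at least one job
        continue
    val = sum(cost[i][j] for j, i in enumerate(assign))
    if val < best_val:
        best_val, best_assign = val, assign

print(best_val, best_assign)
```

The exponential m**n enumeration is exactly what the massively parallel DNA encoding is meant to sidestep.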

  5. Robustness in linear quadratic feedback design with application to an aircraft control problem

    NASA Technical Reports Server (NTRS)

    Patel, R. V.; Sridhar, B.; Toda, M.

    1977-01-01

    Some new results concerning robustness and asymptotic properties of error bounds of a linear quadratic feedback design are applied to an aircraft control problem. An autopilot for the flare control of the Augmentor Wing Jet STOL Research Aircraft (AWJSRA) is designed based on Linear Quadratic (LQ) theory and the results developed in this paper. The variation of the error bounds to changes in the weighting matrices in the LQ design is studied by computer simulations, and appropriate weighting matrices are chosen to obtain a reasonable error bound for variations in the system matrix and at the same time meet the practical constraints for the flare maneuver of the AWJSRA. Results from the computer simulation of a satisfactory autopilot design for the flare control of the AWJSRA are presented.
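
The LQ design step described above can be sketched with a generic continuous-time LQR; the two-state system and weighting matrices below are illustrative, not the AWJSRA flare model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative 2-state system (not the AWJSRA model): xdot = A x + B u.
A = np.array([[0.0, 1.0],
              [-0.5, -0.2]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])     # state weighting
R = np.array([[1.0]])        # control weighting

# LQ design: solve the algebraic Riccati equation, then K = R^{-1} B^T P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Closed-loop matrix must be stable (all eigenvalues in the left half plane).
eig = np.linalg.eigvals(A - B @ K)
print(np.round(K, 3), bool(np.all(eig.real < 0)))
```

Varying Q and R and re-solving, as the abstract describes, is how the designer trades error bounds against practical maneuver constraints.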

  6. Fractional Poisson-Nernst-Planck Model for Ion Channels I: Basic Formulations and Algorithms.

    PubMed

    Chen, Duan

    2017-11-01

    In this work, we propose a fractional Poisson-Nernst-Planck model to describe ion permeation in gated ion channels. Due to the intrinsic conformational changes, crowdedness in narrow channel pores, binding and trapping introduced by functioning units of channel proteins, ionic transport in the channel exhibits a power-law-like anomalous diffusion dynamics. We start from continuous-time random walk model for a single ion and use a long-tailed density distribution function for the particle jump waiting time, to derive the fractional Fokker-Planck equation. Then, it is generalized to the macroscopic fractional Poisson-Nernst-Planck model for ionic concentrations. Necessary computational algorithms are designed to implement numerical simulations for the proposed model, and the dynamics of gating current is investigated. Numerical simulations show that the fractional PNP model provides a more qualitatively reasonable match to the profile of gating currents from experimental observations. Meanwhile, the proposed model motivates new challenges in terms of mathematical modeling and computations.
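
The continuous-time random walk starting point mentioned in the abstract can be demonstrated directly: with a heavy-tailed waiting-time density the mean-square displacement grows sublinearly. The exponent, walker count, and observation times below are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 0.6          # waiting-time tail exponent (< 1 gives subdiffusion)
n_walkers = 2000

def positions_at(t_obs):
    """Positions of CTRW walkers (unit +/-1 jumps, Pareto waits) at time t_obs."""
    pos = np.zeros(n_walkers)
    t = np.zeros(n_walkers)
    active = np.ones(n_walkers, dtype=bool)
    while active.any():
        idx = np.flatnonzero(active)
        wait = (1.0 - rng.random(idx.size)) ** (-1.0 / alpha)  # Pareto(1, alpha)
        t_new = t[idx] + wait
        still = t_new < t_obs
        t[idx] = t_new
        pos[idx[still]] += rng.choice([-1.0, 1.0], size=still.sum())
        active[idx[~still]] = False
    return pos

msd_short = np.mean(positions_at(100.0) ** 2)
msd_long = np.mean(positions_at(1000.0) ** 2)
print(round(float(msd_short), 2), round(float(msd_long), 2))
```

For normal diffusion the MSD ratio over a tenfold time span would be 10; here it scales roughly like 10**alpha, the anomalous behavior the fractional Fokker-Planck limit captures.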

  7. Constrained multi-objective optimization of storage ring lattices

    NASA Astrophysics Data System (ADS)

    Husain, Riyasat; Ghodke, A. D.

    2018-03-01

    The storage ring lattice optimization is a class of constrained multi-objective optimization problem, in which, in addition to low beam emittance, a large dynamic aperture for good injection efficiency and improved beam lifetime are also desirable. The convergence and computation times are of great concern for the optimization algorithms, as various objectives are to be optimized and a number of accelerator parameters must be varied over a large span with several constraints. In this paper, a study of storage ring lattice optimization using differential evolution is presented. The optimization results are compared with the two most widely used optimization techniques in accelerators: the genetic algorithm and particle swarm optimization. It is found that differential evolution produces a better Pareto optimal front in reasonable computation time between two conflicting objectives, the beam emittance and the dispersion function in the straight section. Differential evolution was used extensively for the optimization of the linear and nonlinear lattices of Indus-2 to explore various operational modes within the magnet power supply capabilities.
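
One simple way to trace an approximate Pareto front with differential evolution is to sweep a scalarization weight between the two objectives. The toy quadratic objectives below stand in for the emittance and dispersion functions, which in practice come from a lattice code.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-ins for the two conflicting lattice objectives of knob vector x.
def emittance(x):
    return np.sum((x - 1.0) ** 2)

def dispersion(x):
    return np.sum((x + 1.0) ** 2)

# Sweep the scalarization weight to trace an approximate Pareto front.
bounds = [(-3, 3)] * 4
front = []
for w in np.linspace(0.05, 0.95, 7):
    res = differential_evolution(
        lambda x: w * emittance(x) + (1 - w) * dispersion(x),
        bounds, seed=0, tol=1e-8)
    front.append((emittance(res.x), dispersion(res.x)))

e_vals = [p[0] for p in front]
d_vals = [p[1] for p in front]
print([round(v, 2) for v in e_vals])
```

Along the front one objective improves only at the other's expense, which is the tradeoff the paper's Pareto analysis quantifies (dedicated multi-objective DE variants avoid the weight sweep).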

  8. A computer program for the simulation of heat and moisture flow in soils

    NASA Technical Reports Server (NTRS)

    Camillo, P.; Schmugge, T. J.

    1981-01-01

    A computer program that simulates the flow of heat and moisture in soils is described. The space-time dependence of temperature and moisture content is described by a set of diffusion-type partial differential equations. The simulator uses a predictor/corrector to numerically integrate them, giving wetness and temperature profiles as a function of time. The simulator was used to generate solutions to diffusion-type partial differential equations for which analytical solutions are known. These equations include both constant and variable diffusivities, and both flux and constant concentration boundary conditions. In all cases, the simulated and analytic solutions agreed to within the error bounds which were imposed on the integrator. Simulations of heat and moisture flow under actual field conditions were also performed. Ground truth data were used for the boundary conditions and soil transport properties. The qualitative agreement between simulated and measured profiles is an indication that the model equations are reasonably accurate representations of the physical processes involved.
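
The verification strategy described, comparing the integrator against a case with a known analytic solution, can be reproduced in miniature with a simple explicit scheme for 1-D diffusion (the paper's simulator uses a predictor/corrector; this is only the same idea in sketch form).

```python
import numpy as np

# 1-D diffusion u_t = D u_xx on [0,1] with u=0 at both ends -- a case whose
# analytic solution u(x,t) = exp(-pi^2 D t) sin(pi x) checks the integrator.
D, nx = 1.0, 51
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.4 * dx ** 2 / D          # explicit stability limit is 0.5*dx^2/D
u = np.sin(np.pi * x)

t, t_end = 0.0, 0.05
while t < t_end - 1e-12:
    u[1:-1] += D * dt / dx ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    t += dt

exact = np.exp(-np.pi ** 2 * D * t) * np.sin(np.pi * x)
err = np.max(np.abs(u - exact))
print(round(err, 5))
```

Agreement within the integrator's error bound on such analytic cases is what justifies trusting the simulator on field data, where no exact solution exists.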

  9. Development of an adaptive hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1994-01-01

    In this research effort, the usefulness of hp-version finite elements and adaptive solution-refinement techniques in generating numerical solutions to optimal control problems has been investigated. Under NAG-939, a general FORTRAN code was developed which approximated solutions to optimal control problems with control constraints and state constraints. Within that methodology, to get high-order accuracy in solutions, the finite element mesh would have to be refined repeatedly through bisection of the entire mesh in a given phase. In the current research effort, the order of the shape functions in each element has been made a variable, giving more flexibility in error reduction and smoothing. Similarly, individual elements can each be subdivided into many pieces, depending on the local error indicator, while other parts of the mesh remain coarsely discretized. The problem remains to reduce and smooth the error while still keeping computational effort reasonable enough to calculate time histories in a short enough time for on-board applications.

  10. Speed Approach for UAV Collision Avoidance

    NASA Astrophysics Data System (ADS)

    Berdonosov, V. D.; Zivotova, A. A.; Htet Naing, Zaw; Zhuravlev, D. O.

    2018-05-01

    The article presents a new approach to detecting potential collisions of two or more UAVs in a common aviation area. UAV trajectories are approximated from two or three trajectory points obtained from the ADS-B system. In the process of determining the meeting points of the trajectories, two cutoff values of the critical speed range, at which a UAV collision is possible, are calculated. As the expressions for the meeting points and the critical-speed cutoffs are available in analytical form, even an on-board computer system with limited computational capacity can complete the calculation in far less time than it takes to receive data from ADS-B. For this reason, the calculations can be updated at each cycle of new data reception, and the trajectory approximation can be bounded by straight lines. This approach allows a compact collision-avoidance algorithm to be developed even for a significant number of UAVs (more than several dozen). To demonstrate the adequacy of the approach, modeling was performed using a software system developed specifically for this purpose.
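
The closed-form flavor of the method can be illustrated for two straight-line tracks: intersect the lines, then solve for the second UAV's speed that would put both aircraft at the intersection simultaneously. The geometry and the 20 m/s reference speed are invented numbers, not the paper's expressions.

```python
# Two UAVs fly straight lines (positions in km). Find the track intersection and
# the speed at which UAV 2 would reach it at the same time as UAV 1.
def line_intersection(p1, d1, p2, d2):
    """Intersection of p1 + t*d1 and p2 + s*d2 (2-D, non-parallel directions)."""
    det = d1[0] * d2[1] - d1[1] * d2[0]
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

p1, d1 = (0.0, 0.0), (1.0, 1.0)      # UAV 1 heading north-east
p2, d2 = (10.0, 0.0), (-1.0, 1.0)    # UAV 2 heading north-west
meet = line_intersection(p1, d1, p2, d2)

v1 = 20.0                            # UAV 1 speed, m/s (assumed)
t1 = dist(p1, meet) * 1000.0 / v1    # time for UAV 1 to reach the meeting point
v2_critical = dist(p2, meet) * 1000.0 / t1   # UAV 2 speed that would collide
print(meet, round(v2_critical, 2))
```

Because everything is a handful of arithmetic operations, re-evaluating it every ADS-B update cycle is cheap even on a small on-board computer, which is the point the abstract makes.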

  11. Inverse identification of unknown finite-duration air pollutant release from a point source in urban environment

    NASA Astrophysics Data System (ADS)

    Kovalets, Ivan V.; Efthimiou, George C.; Andronopoulos, Spyros; Venetsanos, Alexander G.; Argyropoulos, Christos D.; Kakosimos, Konstantinos E.

    2018-05-01

    In this work, we present an inverse computational method for the identification of the location, start time, duration and quantity of emitted substance of an unknown air pollution source of finite time duration in an urban environment. We considered a problem of transient pollutant dispersion under stationary meteorological fields, which is a reasonable assumption for the assimilation of available concentration measurements within 1 h from the start of an incident. We optimized the calculation of the source-receptor function by developing a method which requires integrating as many backward adjoint equations as the available measurement stations. This resulted in high numerical efficiency of the method. The source parameters are computed by maximizing the correlation function of the simulated and observed concentrations. The method has been integrated into the CFD code ADREA-HF and it has been tested successfully by performing a series of source inversion runs using the data of 200 individual realizations of puff releases, previously generated in a wind tunnel experiment.
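
The correlation-maximization step can be mimicked with a toy 1-D Gaussian-puff forward model in place of the CFD code: every candidate (location, start time) is scored by the correlation between its simulated sensor series and the observations. All positions, wind speed, and noise level below are invented; note that correlation is insensitive to the release quantity, which is recovered separately.

```python
import numpy as np

rng = np.random.default_rng(3)
sensors = np.array([2.0, 5.0, 8.0])          # sensor positions along the wind (km)
times = np.arange(1.0, 10.0)                 # observation times

def simulate(x0, t0, q):
    """Toy Gaussian puff (advection speed 1, growing spread), not ADREA-HF."""
    dt = times[None, :] - t0
    valid = dt > 0.05
    dts = np.where(valid, dt, 1.0)
    spread = 0.5 * dts
    c = q / np.sqrt(2 * np.pi * spread ** 2) * np.exp(
        -((sensors[:, None] - x0 - 1.0 * dts) ** 2) / (2 * spread ** 2))
    return np.where(valid, c, 0.0)

# "Observations" from a true source at x0=1.0, t0=2.0, q=4.0, plus noise.
obs = simulate(1.0, 2.0, 4.0) + 0.01 * rng.standard_normal((3, 9))

best = max(((x0, t0) for x0 in np.linspace(0, 4, 21)
            for t0 in np.linspace(0, 4, 21)),
           key=lambda s: np.corrcoef(simulate(s[0], s[1], 1.0).ravel(),
                                     obs.ravel())[0, 1])
print(best)
```

In the paper the grid search over candidates is made affordable by the adjoint trick: one backward run per measurement station instead of one forward run per candidate source.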

  12. Analysis of seismograms from a downhole array in sediments near San Francisco Bay

    USGS Publications Warehouse

    Joyner, William B.; Warrick, Richard E.; Oliver, Adolph A.

    1976-01-01

    A four-level downhole array of three-component instruments was established on the southwest shore of San Francisco Bay to monitor the effect of the sediments on low-amplitude seismic ground motion. The deepest instrument is at a depth of 186 meters, two meters below the top of the Franciscan bedrock. Earthquake data from regional distances (29 km ≤ Δ ≤ 485 km) over a wide range of azimuths are compared with the predictions of a simple plane-layered model with material properties independently determined. Spectral ratios between the surface and bedrock computed for the one horizontal component of motion that was analyzed agree rather well with the model predictions; the model predicts the frequencies of the first three peaks within 10 percent in most cases and the height of the peaks within 50 percent in most cases. Surface time histories computed from the theoretical model predict the time variations of amplitude and frequency content reasonably well, but correlations of individual cycles cannot be made between observed and predicted traces.
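
The resonance structure behind such spectral-ratio peaks can be illustrated with the textbook single-layer idealization (uniform soil over rigid bedrock), whose SH-wave amplification is 1/|cos(2*pi*f*H/Vs)| with peaks at f_n = (2n-1)Vs/(4H). The shear velocity below is an assumed round number, not the site's measured profile.

```python
import numpy as np

# Single soil layer over rigid bedrock: assumed H = 186 m (array depth), Vs = 300 m/s.
H, Vs = 186.0, 300.0
f = np.linspace(0.01, 5.0, 2000)
amp = 1.0 / np.abs(np.cos(2 * np.pi * f * H / Vs))

f0_theory = Vs / (4 * H)                     # fundamental resonance, Hz
f0_numeric = f[np.argmax(amp[f < 1.0])]      # first peak below 1 Hz
print(round(f0_theory, 3), round(f0_numeric, 3))
```

The actual study's plane-layered model with measured properties plays the same role, predicting the frequencies and heights of the observed surface-to-bedrock spectral-ratio peaks.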

  13. Homogeneous crystal nucleation in Ni droplets

    NASA Astrophysics Data System (ADS)

    Kožíšek, Zdeněk; Demo, Pavel

    2017-10-01

    Crystal nucleation kinetics is often represented by induction times or metastable zone widths (Kulkarni et al., 2013; Bokeloh et al., 2011). Repeated measurements of the supercooling or time delay at which the phase transition is detected are statistically processed to determine the so-called survivorship function, from which the nucleation rate is computed. The size distribution of nuclei is difficult to measure directly near the critical size, and it is not clear how many nuclei have formed at the moment the phase transition is detected. In the present paper, kinetic nucleation equations are solved for crystal nucleation in a liquid Ni droplet to determine the number of nuclei formed within the considered system. Analysis of the supercooling experimental data, based on the classical nucleation theory (CNT), yields appropriate values of the nucleation rate. However, CNT underestimates the number of nuclei F (F ≪ 1 for supercritical sizes). Taking into account the dependence of the surface energy on nucleus size in the data analysis overcomes this discrepancy and leads to reasonable values of the size distribution of nuclei.
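
The CNT quantities involved can be computed in a few lines; the interfacial energy, volumetric latent heat, and supercooling below are illustrative order-of-magnitude values for Ni, not the paper's fitted parameters.

```python
import math

# Classical nucleation theory: critical radius r* = 2*sigma/dg and barrier
# dG* = 16*pi*sigma^3 / (3*dg^2), with driving force dg = L * dT / Tm per volume.
sigma = 0.33       # solid-liquid interfacial energy, J/m^2 (assumed)
L = 2.65e9         # latent heat per unit volume, J/m^3 (assumed)
Tm = 1728.0        # melting point of Ni, K
dT = 450.0         # supercooling, K (deep supercooling of a droplet)
kB = 1.380649e-23

dg = L * dT / Tm                         # driving force, J/m^3
r_crit = 2.0 * sigma / dg                # critical nucleus radius, m
dG_crit = 16.0 * math.pi * sigma ** 3 / (3.0 * dg ** 2)
barrier = dG_crit / (kB * (Tm - dT))     # barrier in units of kT

print(round(r_crit * 1e9, 2), round(barrier, 1))
```

A size-dependent sigma makes both r* and the barrier functions of the nucleus radius, which is the refinement the abstract credits with fixing the underestimated nucleus counts.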

  14. Automated Performance Prediction of Message-Passing Parallel Programs

    NASA Technical Reports Server (NTRS)

    Block, Robert J.; Sarukkai, Sekhar; Mehra, Pankaj; Woodrow, Thomas S. (Technical Monitor)

    1995-01-01

    The increasing use of massively parallel supercomputers to solve large-scale scientific problems has generated a need for tools that can predict scalability trends of applications written for these machines. Much work has been done to create simple models that represent important characteristics of parallel programs, such as latency, network contention, and communication volume. But many of these methods still require substantial manual effort to represent an application in the model's format. The MK toolkit described in this paper is the result of an on-going effort to automate the formation of analytic expressions of program execution time, with a minimum of programmer assistance. In this paper we demonstrate the feasibility of our approach, by extending previous work to detect and model communication patterns automatically, with and without overlapped computations. The predictions derived from these models agree, within reasonable limits, with execution times of programs measured on the Intel iPSC/860 and Paragon. Further, we demonstrate the use of MK in selecting optimal computational grain size and studying various scalability metrics.
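
The kind of analytic execution-time expression such tools derive typically combines a computation term with latency and bandwidth terms. The machine parameters and halo-exchange communication pattern below are invented for illustration, not measured iPSC/860 or Paragon values.

```python
# Analytic model sketch: T(p) = W/(flops*p) + steps*(msgs*latency + volume/bandwidth).
W = 8.0e9            # total computation, FLOPs (assumed)
flops = 1.0e9        # per-node speed, FLOP/s (assumed)
latency = 50e-6      # per-message cost, s (assumed)
bandwidth = 100e6    # link bandwidth, bytes/s (assumed)
n = 4096             # hypothetical 2-D grid dimension
steps = 100          # iterations

def predict(p):
    comp = W / flops / p
    msgs = 4                         # nearest-neighbour exchanges per step
    volume = 4 * 8 * n / p ** 0.5    # halo bytes per node per step (8-byte reals)
    return comp + steps * (msgs * latency + volume / bandwidth)

times = {p: predict(p) for p in (1, 4, 16, 64)}
speedup = times[1] / times[64]
print({p: round(t, 3) for p, t in times.items()}, round(speedup, 1))
```

Evaluating such a closed form across processor counts and grain sizes is far cheaper than running the program, which is what makes the scalability studies in the abstract practical.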

  15. The theory of reasoned action as parallel constraint satisfaction: towards a dynamic computational model of health behavior.

    PubMed

    Orr, Mark G; Thrush, Roxanne; Plaut, David C

    2013-01-01

    The reasoned action approach, although ubiquitous in health behavior theory (e.g., Theory of Reasoned Action/Planned Behavior), does not adequately address two key dynamical aspects of health behavior: learning and the effect of immediate social context (i.e., social influence). To remedy this, we put forth a computational implementation of the Theory of Reasoned Action (TRA) using artificial-neural networks. Our model re-conceptualized behavioral intention as arising from a dynamic constraint satisfaction mechanism among a set of beliefs. In two simulations, we show that constraint satisfaction can simultaneously incorporate the effects of past experience (via learning) with the effects of immediate social context to yield behavioral intention, i.e., intention is dynamically constructed from both an individual's pre-existing belief structure and the beliefs of others in the individual's social context. In a third simulation, we illustrate the predictive ability of the model with respect to empirically derived behavioral intention. As the first known computational model of health behavior, it represents a significant advance in theory towards understanding the dynamics of health behavior. Furthermore, our approach may inform the development of population-level agent-based models of health behavior that aim to incorporate psychological theory into models of population dynamics.

  16. The Theory of Reasoned Action as Parallel Constraint Satisfaction: Towards a Dynamic Computational Model of Health Behavior

    PubMed Central

    Orr, Mark G.; Thrush, Roxanne; Plaut, David C.

    2013-01-01

    The reasoned action approach, although ubiquitous in health behavior theory (e.g., Theory of Reasoned Action/Planned Behavior), does not adequately address two key dynamical aspects of health behavior: learning and the effect of immediate social context (i.e., social influence). To remedy this, we put forth a computational implementation of the Theory of Reasoned Action (TRA) using artificial-neural networks. Our model re-conceptualized behavioral intention as arising from a dynamic constraint satisfaction mechanism among a set of beliefs. In two simulations, we show that constraint satisfaction can simultaneously incorporate the effects of past experience (via learning) with the effects of immediate social context to yield behavioral intention, i.e., intention is dynamically constructed from both an individual’s pre-existing belief structure and the beliefs of others in the individual’s social context. In a third simulation, we illustrate the predictive ability of the model with respect to empirically derived behavioral intention. As the first known computational model of health behavior, it represents a significant advance in theory towards understanding the dynamics of health behavior. Furthermore, our approach may inform the development of population-level agent-based models of health behavior that aim to incorporate psychological theory into models of population dynamics. PMID:23671603
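
A constraint-satisfaction network of the general kind described in the two records above can be sketched as a small Hopfield-style system: belief and norm units connected by weighted constraints settle into a state whose intention-unit activation is read out. The units, weights, and clamping pattern below are invented for illustration, not the authors' trained model.

```python
import numpy as np

# Units: 3 behavioral beliefs (0-2), 2 normative beliefs / social context (3-4),
# and 1 intention unit (5). Symmetric constraint weights (all values invented).
W = np.zeros((6, 6))
links = [(0, 5, 0.8), (1, 5, 0.6), (2, 5, -0.7),   # beliefs -> intention
         (3, 5, 0.9), (4, 5, 0.5),                 # norms -> intention
         (0, 1, 0.3), (2, 3, -0.4)]                # cross-belief constraints
for i, j, wt in links:
    W[i, j] = W[j, i] = wt

def settle(clamp):
    """Synchronously settle the network; nonzero clamped units stay fixed."""
    a = np.array(clamp, dtype=float)
    for _ in range(50):
        new = np.tanh(W @ a)
        new[:5] = np.where(np.array(clamp[:5]) != 0, clamp[:5], new[:5])
        a = new
    return a[5]   # activation of the intention unit

# Supportive beliefs and norms vs. a single conflicting belief.
intent_pro = settle([1, 1, 0, 1, 0, 0])
intent_con = settle([0, 0, 1, 0, 0, 0])
print(round(float(intent_pro), 2), round(float(intent_con), 2))
```

Learning in the full model would adjust W from experience, so the same immediate social context can yield different intentions for individuals with different belief histories.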

  17. Efficient Grammar Induction Algorithm with Parse Forests from Real Corpora

    NASA Astrophysics Data System (ADS)

    Kurihara, Kenichi; Kameya, Yoshitaka; Sato, Taisuke

    The task of inducing grammar structures has received a great deal of attention. Researchers' motivations differ: to use grammar induction as the first stage in building large treebanks, or to construct better language models. However, grammar induction has inherent computational complexity. To overcome it, some grammar induction algorithms add new production rules incrementally, refining the grammar while keeping the computational complexity low. In this paper, we propose a new efficient grammar induction algorithm. Although our algorithm is similar to algorithms that learn a grammar incrementally, it uses the graphical EM algorithm instead of the Inside-Outside algorithm. We report the results of learning experiments in terms of learning speed. The results show that our algorithm learns a grammar in constant time regardless of the size of the grammar. Since our algorithm decreases syntactic ambiguities in each step, it reduces the time required for learning. This constant-time learning considerably affects the learning time for larger grammars. We also report the results of evaluating criteria for choosing nonterminals. Our algorithm refines a grammar based on a nonterminal in each step. Since there can be several criteria for deciding which nonterminal is best, we evaluate them by learning experiments.

  18. A study of mapping exogenous knowledge representations into CONFIG

    NASA Technical Reports Server (NTRS)

    Mayfield, Blayne E.

    1992-01-01

    Qualitative reasoning is reasoning with a small set of qualitative values that is an abstraction of a larger and perhaps infinite set of quantitative values. The use of qualitative and quantitative reasoning together holds great promise for performance improvement in applications that suffer from large and/or imprecise knowledge domains. Included among these applications are the modeling, simulation, analysis, and fault diagnosis of physical systems. Several research groups continue to discover and experiment with new qualitative representations and reasoning techniques. However, due to the diversity of these techniques, it is difficult for the programs produced to exchange system models easily. The availability of mappings to transform knowledge from the form used by one of these programs to that used by another would open the doors for comparative analysis of these programs in areas such as completeness, correctness, and performance. A group at the Johnson Space Center (JSC) is working to develop CONFIG, a prototype qualitative modeling, simulation, and analysis tool for fault diagnosis applications in the U.S. space program. The availability of knowledge mappings from the programs produced by other research groups to CONFIG may provide savings in CONFIG's development costs and time, and may improve CONFIG's performance. The study of such mappings is the purpose of the research described in this paper. Two other research groups that have worked with the JSC group in the past are the Northwestern University group and the University of Texas at Austin group. The former has produced a qualitative reasoning tool named SIMGEN, and the latter has produced one named QSIM. Another program produced by the Austin group is CC, a preprocessor that permits users to develop input for eventual use by QSIM, but in a more natural format.
CONFIG and CC are both based on a component-connection ontology, so a mapping from CC's knowledge representation to CONFIG's knowledge representation was chosen as the focus of this study. A mapping from CC to CONFIG was developed. Due to differences between the two programs, however, the mapping transforms some of the CC knowledge to CONFIG as documentation rather than as knowledge in a form useful to computation. The study suggests that it may be worthwhile to pursue the mappings further. By implementing the mapping as a program, actual comparisons of computational efficiency and quality of results can be made between the QSIM and CONFIG programs. A secondary study may reveal that the results of the two programs augment one another, contradict one another, or differ only slightly. If the latter, the qualitative reasoning techniques may be compared in other areas, such as computational efficiency.

  19. Predicting airborne particle deposition by a modified Markov chain model for fast estimation of potential contaminant spread

    NASA Astrophysics Data System (ADS)

    Mei, Xiong; Gong, Guangcai

    2018-07-01

    As potential carriers of hazardous pollutants, airborne particles may deposit onto surfaces due to gravitational settling. A modified Markov chain model to predict gravity-induced particle dispersion and deposition is proposed in this paper. The gravity force is considered as a dominant weighting factor to adjust the State Transfer Matrix, which represents the probabilities of the change of particle spatial distributions between consecutive time steps within an enclosure. The model performance has been validated against particle deposition in a ventilation chamber and a horizontal turbulent duct flow reported in the existing literature. Both the proportion of deposited particles and the dimensionless deposition velocity are adopted to characterize the validation results. Comparisons between our simulated results and the experimental data from the literature show reasonable accuracy. Moreover, it is also found that the dimensionless deposition velocity can be remarkably influenced by particle size and stream-wise velocity in a typical horizontal flow. This study indicates that the proposed model can predict gravity-dominated airborne particle deposition with reasonable accuracy and acceptable computing time.
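
The gravity-weighted state-transfer idea can be sketched with a vertical column of air zones plus an absorbing "deposited" state; gravity simply biases the downward transition probability. All transition values below are invented for illustration, not calibrated to the validation cases.

```python
import numpy as np

n_zones = 4
p_up, p_down, p_stay = 0.15, 0.35, 0.50   # gravity bias: p_down > p_up

# State transfer matrix P[i, j] = probability of moving from state i to j.
P = np.zeros((n_zones + 1, n_zones + 1))
for i in range(n_zones):
    P[i, i] = p_stay
    if i > 0:
        P[i, i - 1] = p_up
    else:
        P[i, i] += p_up                    # ceiling reflects upward motion
    if i < n_zones - 1:
        P[i, i + 1] = p_down
    else:
        P[i, n_zones] = p_down             # bottom zone -> deposited
P[n_zones, n_zones] = 1.0                  # deposition is absorbing
assert np.allclose(P.sum(axis=1), 1.0)

state = np.zeros(n_zones + 1)
state[0] = 1.0                             # release all particles at the top
for _ in range(60):                        # propagate 60 time steps
    state = state @ P

deposited = state[-1]
print(round(float(deposited), 3))
```

Because each step is a single vector-matrix product, the chain runs orders of magnitude faster than tracking individual particles, which is the "fast estimation" advantage the abstract claims.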

  20. Automatic pickup of arrival time of channel wave based on multi-channel constraints

    NASA Astrophysics Data System (ADS)

    Wang, Bao-Li

    2018-03-01

    Accurately detecting the arrival time of a channel wave in a coal seam is very important for in-seam seismic data processing. The arrival time greatly affects the accuracy of the channel wave inversion and the computed tomography (CT) result. However, because the signal-to-noise ratio of in-seam seismic data is reduced by the long wavelength and strong frequency dispersion, accurately timing the arrival of channel waves is extremely difficult. For this purpose, we propose a method that automatically picks the arrival time of channel waves based on multi-channel constraints. We first estimate the Jaccard similarity coefficient of two ray paths, then apply it as a weight coefficient for stacking the multichannel dispersion spectra. The reasonableness and effectiveness of the proposed method are verified in an application to field data. Most importantly, the method increases the degree of automation and the picking precision of the channel-wave arrival time.
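
The Jaccard-weighted stacking step can be sketched directly: each channel's spectrum is weighted by how much its ray path overlaps the reference path. The grid-cell paths and "spectra" below are toy data invented for illustration.

```python
import numpy as np

def jaccard(path_a, path_b):
    """Jaccard similarity of two ray paths given as sets of traversed grid cells."""
    a, b = set(path_a), set(path_b)
    return len(a & b) / len(a | b)

# Hypothetical ray paths (grid-cell IDs) and per-ray dispersion spectra.
ref_path = [(0, 0), (1, 1), (2, 2), (3, 3)]
paths = [[(0, 0), (1, 1), (2, 2), (3, 3)],     # identical to the reference
         [(0, 0), (1, 1), (2, 3), (3, 4)],     # partial overlap
         [(5, 0), (5, 1), (5, 2), (5, 3)]]     # disjoint
spectra = np.array([[0.2, 1.0, 0.3],
                    [0.1, 0.9, 0.4],
                    [0.8, 0.1, 0.7]])

# Weight each spectrum by its path similarity to the reference ray, then stack:
# rays sampling similar paths dominate the stacked spectrum.
w = np.array([jaccard(ref_path, p) for p in paths])
stacked = (w[:, None] * spectra).sum(axis=0) / w.sum()
print(np.round(w, 3), np.round(stacked, 3))
```

Stacking suppresses the incoherent noise of individual low-SNR traces while the similarity weights keep dissimilar ray paths from smearing the dispersion picture.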

  1. A Semantic Approach with Decision Support for Safety Service in Smart Home Management

    PubMed Central

    Huang, Xiaoci; Yi, Jianjun; Zhu, Xiaomin; Chen, Shaoli

    2016-01-01

    Research on smart homes (SHs) has increased significantly in recent years because of the convenience provided by having an assisted living environment. Among the functions of SHs mentioned in previous studies, however, safety services are seldom discussed. Thus, this study proposes a semantic approach with decision support for safety service in SH management. The focus of this contribution is to explore a context-awareness and reasoning approach for risk recognition in SHs that enables proper decision support for flexible safety service provision. The framework of an SH based on a wireless sensor network is described from the perspective of neighbourhood management. The approach is based on the integration of semantic knowledge, in which a reasoner can make decisions about risk recognition and safety service. We present a management ontology for an SH and the relevant monitoring contextual information, which considers its suitability in a pervasive computing environment and is service-oriented. We also propose a rule-based reasoning method to provide decision support through reasoning techniques and context-awareness. A system prototype is developed to evaluate the feasibility, time response and extendibility of the approach. The evaluation of our approach shows that it is more effective in daily risk event recognition. The decisions for service provision are shown to be accurate. PMID:27527170

  2. A Semantic Approach with Decision Support for Safety Service in Smart Home Management.

    PubMed

    Huang, Xiaoci; Yi, Jianjun; Zhu, Xiaomin; Chen, Shaoli

    2016-08-03

    Research on smart homes (SHs) has increased significantly in recent years because of the convenience provided by having an assisted living environment. However, the safety services of SHs are seldom discussed in previous studies. Thus, this study proposes a semantic approach with decision support for safety service in SH management. The focus of this contribution is to explore a context-awareness and reasoning approach for risk recognition in SHs that enables proper decision support for flexible safety service provision. The framework of an SH based on a wireless sensor network is described from the perspective of neighbourhood management. The approach rests on integrated semantic knowledge, over which a reasoner can make decisions about risk recognition and safety service. We present a management ontology for an SH and the relevant monitoring contextual information, designed to be service-oriented and suitable for a pervasive computing environment. We also propose a rule-based reasoning method that provides decision support through reasoning techniques and context awareness. A system prototype is developed to evaluate the feasibility, response time and extensibility of the approach. The evaluation shows that the approach is effective in recognising daily risk events, and the resulting decisions for service provision are accurate.

  3. Computer-Aided Experiment Planning toward Causal Discovery in Neuroscience.

    PubMed

    Matiasz, Nicholas J; Wood, Justin; Wang, Wei; Silva, Alcino J; Hsu, William

    2017-01-01

    Computers help neuroscientists to analyze experimental results by automating the application of statistics; however, computer-aided experiment planning is far less common, due to a lack of similar quantitative formalisms for systematically assessing evidence and uncertainty. While ontologies and other Semantic Web resources help neuroscientists to assimilate required domain knowledge, experiment planning requires not only ontological but also epistemological (e.g., methodological) information regarding how knowledge was obtained. Here, we outline how epistemological principles and graphical representations of causality can be used to formalize experiment planning toward causal discovery. We outline two complementary approaches to experiment planning: one that quantifies evidence per the principles of convergence and consistency, and another that quantifies uncertainty using logical representations of constraints on causal structure. These approaches operationalize experiment planning as the search for an experiment that either maximizes evidence or minimizes uncertainty. Despite work in laboratory automation, humans must still plan experiments and will likely continue to do so for some time. There is thus a great need for experiment-planning frameworks that are not only amenable to machine computation but also useful as aids in human reasoning.

  4. On the use of inexact, pruned hardware in atmospheric modelling

    PubMed Central

    Düben, Peter D.; Joven, Jaume; Lingamneni, Avinash; McNamara, Hugh; De Micheli, Giovanni; Palem, Krishna V.; Palmer, T. N.

    2014-01-01

    Inexact hardware design, which advocates trading the accuracy of computations in exchange for significant savings in area, power and/or performance of computing hardware, has received increasing prominence in several error-tolerant application domains, particularly those involving perceptual or statistical end-users. In this paper, we evaluate inexact hardware for its applicability in weather and climate modelling. We expand previous studies on inexact techniques, in particular probabilistic pruning, to floating point arithmetic units and derive several simulated set-ups of pruned hardware with reasonable levels of error for applications in atmospheric modelling. The set-up is tested on the Lorenz '96 model, a toy model for atmospheric dynamics, using software emulation for the proposed hardware. The results show that large parts of the computation tolerate the use of pruned hardware blocks without major changes in the quality of short- and long-time diagnostics, such as forecast errors and probability density functions. This could open the door to significant savings in computational cost and to higher resolution simulations with weather and climate models. PMID:24842031
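
    As a rough illustration of the kind of experiment described, the sketch below steps the Lorenz '96 model with a crude mantissa-truncation stand-in for inexact arithmetic. The rounding model, forcing F, time step, and system size are illustrative assumptions, not the paper's pruned-hardware emulator.

```python
# Illustrative only: truncate the float mantissa to `bits` bits after each
# arithmetic operation, mimicking reduced-precision hardware, then step the
# Lorenz '96 system dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F.
import math

def round_mantissa(x, bits):
    """Keep only `bits` mantissa bits of x (crude inexactness model)."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)               # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** bits
    return math.ldexp(math.floor(m * scale) / scale, e)

def l96_step(x, F=8.0, dt=0.01, bits=52):
    """One forward-Euler step of Lorenz '96 with rounded arithmetic."""
    n = len(x)
    r = lambda v: round_mantissa(v, bits)
    dx = [r(r((x[(i + 1) % n] - x[i - 2]) * x[i - 1]) - x[i] + F)
          for i in range(n)]
    return [r(x[i] + dt * dx[i]) for i in range(n)]

state = [8.0] * 40
state[0] = 8.01                        # small perturbation of the rest state
for _ in range(100):
    state = l96_step(state, bits=16)   # 16-bit-mantissa "hardware"
```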

  5. A computer program for calculating the perfect gas inviscid flow field about blunt axisymmetric bodies at an angle of attack of 0 deg

    NASA Technical Reports Server (NTRS)

    Zoby, E. V.; Graves, R. A., Jr.

    1973-01-01

    A method for the rapid calculation of the inviscid shock layer about blunt axisymmetric bodies at an angle of attack of 0 deg has been developed. The procedure is of an inverse nature; that is, a shock wave is assumed and calculations proceed along rays normal to the shock. The solution is iterated until the given body is computed. The flow field solution procedure is programmed at the Langley Research Center for the Control Data 6600 computer. The geometries specified in the program are spheres, ellipsoids, paraboloids, and hyperboloids, which may have conical afterbodies. The normal momentum equation is replaced with an approximate algebraic expression. This simplification significantly reduces machine computation time. Comparisons of the present results with shock shapes and surface pressure distributions obtained by more exact methods indicate that the program provides reasonably accurate results for smooth bodies in axisymmetric flow. However, further research is required to establish the proper approximate form of the normal momentum equation for the two-dimensional case.

  6. Computational Pollutant Environment Assessment from Propulsion-System Testing

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; McConnaughey, Paul; Chen, Yen-Sen; Warsi, Saif

    1996-01-01

    An asymptotic plume growth method based on a time-accurate three-dimensional computational fluid dynamics formulation has been developed to assess the exhaust-plume pollutant environment from a simulated RD-170 engine hot-fire test on the F1 Test Stand at Marshall Space Flight Center. Researchers have long known that rocket-engine hot firing has the potential for forming thermal nitric oxides, as well as producing carbon monoxide when hydrocarbon fuels are used. Because of the complex physics involved, most attempts to predict the pollutant emissions from ground-based engine testing have used simplified methods, which may grossly underpredict and/or overpredict the pollutant formations in a test environment. The objective of this work has been to develop a computational fluid dynamics-based methodology that replicates the underlying test-stand flow physics to accurately and efficiently assess pollutant emissions from ground-based rocket-engine testing. A nominal RD-170 engine hot-fire test was computed, and pertinent test-stand flow physics was captured. The predicted total emission rates compared reasonably well with those of the existing hydrocarbon engine hot-firing test data.

  7. Towards a computational- and algorithmic-level account of concept blending using analogies and amalgams

    NASA Astrophysics Data System (ADS)

    Besold, Tarek R.; Kühnberger, Kai-Uwe; Plaza, Enric

    2017-10-01

    Concept blending - a cognitive process which allows for the combination of certain elements (and their relations) from originally distinct conceptual spaces into a new unified space combining these previously separate elements, and enables reasoning and inference over the combination - is taken as a key element of creative thought and combinatorial creativity. In this article, we summarise our work towards the development of a computational-level and algorithmic-level account of concept blending, combining approaches from computational analogy-making and case-based reasoning (CBR). We present the theoretical background, as well as an algorithmic proposal integrating higher-order anti-unification matching and generalisation from analogy with amalgams from CBR. The feasibility of the approach is then exemplified in two case studies.

  8. A state space approach for piecewise-linear recurrent neural networks for identifying computational dynamics from neural measurements.

    PubMed

    Durstewitz, Daniel

    2017-06-01

    The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to iteratively infer the latent state distribution (through a global Laplace approximation) and the PLRNN parameters. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however; rather, they were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may make it possible to recover relevant aspects of the nonlinear dynamics underlying observed neuronal time series, and to link these directly to computational properties.
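
    A common piecewise-linear RNN parameterisation in this line of work takes the latent update as a linear term plus a rectified-linear term plus noise; the sketch below assumes that form, with made-up parameter values rather than estimates from data.

```python
# Hedged sketch of a PLRNN latent step: z_t = A z_{t-1} + W relu(z_{t-1}) + h + noise.
# All matrices and the noise level here are illustrative assumptions.
import random

def plrnn_step(z, A, W, h, sigma=0.0, rng=random.Random(0)):
    """One latent update; z, h are lists, A, W are lists of rows."""
    n = len(z)
    relu = [max(zi, 0.0) for zi in z]
    return [sum(A[i][j] * z[j] for j in range(n))
            + sum(W[i][j] * relu[j] for j in range(n))
            + h[i] + rng.gauss(0.0, sigma)
            for i in range(n)]

# Two latent states: diagonal A for self-decay, off-diagonal W for coupling
# through the piecewise-linear (rectified) term.
A = [[0.9, 0.0], [0.0, 0.8]]
W = [[0.0, 0.1], [-0.2, 0.0]]
h = [0.05, -0.05]
z = [1.0, -1.0]
traj = [z]
for _ in range(50):
    z = plrnn_step(z, A, W, h)          # noise-free (sigma=0) trajectory
    traj.append(z)
```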

  9. Scientific Reasoning across Different Domains.

    ERIC Educational Resources Information Center

    Glaser, Robert; And Others

    This study seeks to establish which scientific reasoning skills are primarily domain-general and which appear to be domain-specific. The subjects, 12 university undergraduates, each participated in self-directed experimentation with three different content domains. The experimentation contexts were computer-based laboratories in d.c. circuits…

  10. Combining qualitative and quantitative spatial and temporal information in a hierarchical structure: Approximate reasoning for plan execution monitoring

    NASA Technical Reports Server (NTRS)

    Hoebel, Louis J.

    1993-01-01

    The problem of plan generation (PG) and the problem of plan execution monitoring (PEM), including updating, queries, and resource-bounded replanning, have different reasoning and representation requirements. PEM requires the integration of qualitative and quantitative information. PEM is the receiving of data about the world in which a plan or agent is executing. The problem is to quickly determine the relevance of the data, the consistency of the data with respect to the expected effects, and if execution should continue. Only spatial and temporal aspects of the plan are addressed for relevance in this work. Current temporal reasoning systems are deficient in computational aspects or expressiveness. This work presents a hybrid qualitative and quantitative system that is fully expressive in its assertion language while offering certain computational efficiencies. In order to proceed, methods incorporating approximate reasoning using hierarchies, notions of locality, constraint expansion, and absolute parameters need be used and are shown to be useful for the anytime nature of PEM.

  11. Fixed-Base Comb with Window-Non-Adjacent Form (NAF) Method for Scalar Multiplication

    PubMed Central

    Seo, Hwajeong; Kim, Hyunjin; Park, Taehwan; Lee, Yeoncheol; Liu, Zhe; Kim, Howon

    2013-01-01

    Elliptic curve cryptography (ECC) is one of the most promising public-key techniques in terms of short key size and various crypto protocols. For this reason, many studies on the implementation of ECC on resource-constrained devices within a practical execution time have been conducted. To this end, we must focus on scalar multiplication, which is the most expensive operation in ECC. A number of studies have proposed pre-computation and advanced scalar multiplication using a non-adjacent form (NAF) representation, and more sophisticated approaches have employed a width-w NAF representation and a modified pre-computation table. In this paper, we propose a new pre-computation method in which zero occurrences are much more frequent than in previous methods. This method can be applied to ordinary group scalar multiplication, but it requires a large pre-computation table, so we combined the previous method with ours for practical purposes. This novel structure provides fine-grained control over the trade-off between speed and table size, so the pre-computation table can be customized for a given purpose. Finally, we can establish a customized look-up table for embedded microprocessors. PMID:23881143
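
    For reference, the non-adjacent form underlying these methods can be computed with the standard digit-by-digit algorithm; this sketch is generic background, not the paper's proposed pre-computation method.

```python
# NAF digits are in {-1, 0, 1} and no two adjacent digits are nonzero,
# so more digits are zero than in plain binary, which reduces the number
# of point additions during scalar multiplication.
def naf(k):
    """Return the NAF digits of a positive integer k, least significant first."""
    digits = []
    while k > 0:
        if k & 1:
            d = 2 - (k % 4)            # d is +1 or -1
            k -= d
        else:
            d = 0
        digits.append(d)
        k >>= 1
    return digits

# 7 = 8 - 1, so its NAF is [-1, 0, 0, 1] (least significant digit first).
assert naf(7) == [-1, 0, 0, 1]
assert sum(d * 2**i for i, d in enumerate(naf(1234567))) == 1234567
```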

  12. A computational model of in vitro angiogenesis based on extracellular matrix fibre orientation.

    PubMed

    Edgar, Lowell T; Sibole, Scott C; Underwood, Clayton J; Guilkey, James E; Weiss, Jeffrey A

    2013-01-01

    Recent interest in the process of vascularisation within the biomedical community has motivated numerous new research efforts focusing on the process of angiogenesis. Although the role of chemical factors during angiogenesis has been well documented, the role of mechanical factors, such as the interaction between angiogenic vessels and the extracellular matrix, remains poorly understood. In vitro methods for studying angiogenesis exist; however, measurements available using such techniques often suffer from limited spatial and temporal resolutions. For this reason, computational models have been extensively employed to investigate various aspects of angiogenesis. This paper outlines the formulation and validation of a simple and robust computational model developed to accurately simulate angiogenesis based on length, branching and orientation morphometrics collected from vascularised tissue constructs. Microvessels were represented as a series of connected line segments. The morphology of the vessels was determined by a linear combination of the collagen fibre orientation, the vessel density gradient and a random walk component. Excellent agreement was observed between computational and experimental morphometric data over time. Computational predictions of microvessel orientation within an anisotropic matrix correlated well with experimental data. The accuracy of this modelling approach makes it a valuable platform for investigating the role of mechanical interactions during angiogenesis.
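
    The growth rule described above (fibre orientation plus density gradient plus random walk) might be sketched as follows; the fixed weights, the 2D reduction, and the function names are illustrative assumptions, not the authors' code.

```python
# Hedged sketch: each new vessel segment's direction is a weighted
# combination of the local collagen fibre direction, the negative
# vessel-density gradient (growth toward sparse tissue), and a random-walk
# term, then normalised to unit length. Weights w_f, w_d, w_r are made up.
import math, random

def normalise(v):
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n) if n > 0 else (0.0, 0.0)

def growth_direction(fibre, density_grad, rng, w_f=0.5, w_d=0.3, w_r=0.2):
    theta = rng.uniform(0.0, 2.0 * math.pi)
    rand = (math.cos(theta), math.sin(theta))
    away = (-density_grad[0], -density_grad[1])
    mix = tuple(w_f * f + w_d * a + w_r * r
                for f, a, r in zip(normalise(fibre), normalise(away), rand))
    return normalise(mix)

# Grow ten unit-length segments from the origin in a horizontal fibre field.
rng = random.Random(1)
tip = (0.0, 0.0)
segments = [tip]
for _ in range(10):
    d = growth_direction(fibre=(1.0, 0.0), density_grad=(0.0, 1.0), rng=rng)
    tip = (tip[0] + d[0], tip[1] + d[1])
    segments.append(tip)
```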

  13. Computer-assisted cognitive remediation therapy: cognition, self-esteem and quality of life in schizophrenia.

    PubMed

    Garrido, Gemma; Barrios, Maite; Penadés, Rafael; Enríquez, Maria; Garolera, Maite; Aragay, Núria; Pajares, Marta; Vallès, Vicenç; Delgado, Luis; Alberni, Joan; Faixa, Carlota; Vendrell, Josep M

    2013-11-01

    Quality of life (QoL) is an important outcome in the treatment of schizophrenia. Cognitive deficits have an impact on functional outcomes. Cognitive remediation therapy is emerging as a psychological intervention that targets cognitive impairment, but the effect of computer-assisted cognitive remediation on neuropsychology and social functioning and wellbeing remains unclear. The aim of the current study is to investigate the neurocognitive outcomes of computer-assisted cognitive remediation (CACR) therapy in a sample of schizophrenia patients, and to measure the quality of life and self-esteem as secondary outcomes. Sixty-seven people with schizophrenia were randomly assigned to computer-assisted cognitive remediation or an active control condition. The main outcomes were neuropsychological measures and secondary outcomes (self-esteem and quality of life). Measurements were recorded at baseline and post-treatment. The CACR therapy group improved in speed of processing, working memory and reasoning and problem-solving cognitive domains. QoL and self-esteem measures also showed significant improvements over time in this group. Computer-assisted cognitive remediation therapy for people with schizophrenia achieved improvements in neuropsychological performance and in QoL and self-esteem measurements. © 2013 Elsevier B.V. All rights reserved.

  14. Two-dimensional simulation of quantum reflection

    NASA Astrophysics Data System (ADS)

    Galiffi, Emanuele; Sünderhauf, Christoph; DeKieviet, Maarten; Wimberger, Sandro

    2017-05-01

    A propagation method for the scattering of a quantum wave packet from a potential surface is presented. It is used to model the quantum reflection of single atoms from a corrugated (metallic) surface. Our numerical procedure works well in two spatial dimensions requiring only reasonable amounts of memory and computing time. The effects of the surface corrugation on the reflectivity are investigated via simulations with a paradigm potential. These indicate that our approach should allow for future tests of realistic, effective potentials obtained from theory in a quantitative comparison to experimental data.
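
    A one-dimensional split-step sketch of wave-packet propagation may help make the method concrete; the paper works in two dimensions with a corrugated surface potential, whereas this toy version uses a naive DFT, a square barrier, and arbitrary units (hbar = mass = 1), all illustrative assumptions.

```python
# Split-operator propagation: half a potential kick, a full kinetic step in
# Fourier space, half a potential kick. Each factor is unitary, so the norm
# of the wave function is conserved.
import cmath, math

N, L, dt, steps = 64, 40.0, 0.01, 30
dx = L / N
x = [(n - N // 2) * dx for n in range(N)]
k = [2 * math.pi * (m if m < N // 2 else m - N) / L for m in range(N)]
V = [50.0 if xi > 10.0 else 0.0 for xi in x]           # toy repulsive wall

def dft(a, sign):
    """Naive O(N^2) discrete Fourier transform (sign=-1 forward)."""
    return [sum(a[n] * cmath.exp(sign * 2j * math.pi * kk * n / N)
                for n in range(N)) for kk in range(N)]

# Gaussian packet centred at x = -5 moving to the right.
psi = [cmath.exp(-(xi + 5.0) ** 2 / 2.0 + 2j * xi) for xi in x]

for _ in range(steps):
    psi = [p * cmath.exp(-1j * v * dt / 2) for p, v in zip(psi, V)]
    phi = dft(psi, -1)
    phi = [f * cmath.exp(-1j * kk ** 2 * dt / 2) for f, kk in zip(phi, k)]
    psi = [f / N for f in dft(phi, +1)]
    psi = [p * cmath.exp(-1j * v * dt / 2) for p, v in zip(psi, V)]

norm = sum(abs(p) ** 2 for p in psi) * dx              # conserved by unitarity
```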

  15. Interaction function of oscillating coupled neurons

    PubMed Central

    Dodla, Ramana; Wilson, Charles J.

    2013-01-01

    Large scale simulations of electrically coupled neuronal oscillators often employ the phase coupled oscillator paradigm to understand and predict network behavior. We study the nature of the interaction between such coupled oscillators using weakly coupled oscillator theory. By employing piecewise linear approximations for phase response curves and voltage time courses, and parameterizing their shapes, we compute the interaction function for all such possible shapes and express it in terms of discrete Fourier modes. We find that a reasonably good approximation is achieved with four Fourier modes that comprise both sine and cosine terms. PMID:24229210
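
    The interaction function of weakly coupled oscillator theory, H(phi) = (1/T) * integral over one period of Z(t) * V(t + phi) dt, and its Fourier modes can be computed numerically as below. The sinusoidal Z and V are toy stand-ins; the paper uses parameterised piecewise-linear shapes.

```python
# Numerical sketch: sample the interaction function H over one period and
# project it onto discrete Fourier modes (cosine and sine coefficients).
import math

T = 1.0
N = 500
dt = T / N
Z = lambda t: math.sin(2 * math.pi * t / T)            # toy phase response curve
V = lambda t: math.cos(2 * math.pi * t / T)            # toy voltage time course

def interaction(phi):
    """H(phi) = (1/T) * integral_0^T Z(t) * V(t + phi) dt (Riemann sum)."""
    return sum(Z(i * dt) * V(i * dt + phi) for i in range(N)) * dt / T

H = [interaction(i * dt) for i in range(N)]            # sampled once per period

def fourier_mode(k):
    """(cosine, sine) coefficients of mode k of the sampled H."""
    a = 2.0 / N * sum(H[i] * math.cos(2 * math.pi * k * i / N) for i in range(N))
    b = 2.0 / N * sum(H[i] * math.sin(2 * math.pi * k * i / N) for i in range(N))
    return a, b
```

For these toy shapes H(phi) = -0.5 * sin(2 * pi * phi / T), so all the weight sits in the first sine mode.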

  16. Proceedings of the Navy Symposium on Aeroballistics (12th) Held at the David Taylor Naval Ship Research and Development Center, Bethesda, Maryland on 12-13-14 May 1981. Volume II.

    DTIC Science & Technology

    1981-05-14

    security reasons, all classified papers will be presented in the fifth session on Thursday. The Proceedings will include the unclassified papers in...minutes CPU time on a Cyber 175 computer. Example - Supersonic Inlet at Low Speed - An axisymmetric mixed compression inlet, Figure 9, designed for a...encouraging in the sense that there is plenty of room for improvement to the friction formula, and thus for the accuracy of the line of separation. In

  17. Part-whole reasoning in medical ontologies revisited--introducing SEP triplets into classification-based description logics.

    PubMed

    Schulz, S; Romacker, M; Hahn, U

    1998-01-01

    The development of powerful and comprehensive medical ontologies that support formal reasoning on a large scale is one of the key requirements for clinical computing in the next millennium. Taxonomic medical knowledge, a major portion of these ontologies, is mainly characterized by generalization and part-whole relations between concepts. While reasoning in generalization hierarchies is quite well understood, no fully conclusive mechanism as yet exists for part-whole reasoning. The approach we take emulates part-whole reasoning via classification-based reasoning using SEP triplets, a special data structure for encoding part-whole relations that is fully embedded in the formal framework of standard description logics.

  18. Adversarial reasoning: challenges and approaches

    NASA Astrophysics Data System (ADS)

    Kott, Alexander; Ownby, Michael

    2005-05-01

    This paper defines adversarial reasoning as computational approaches to inferring and anticipating an enemy's perceptions, intents and actions. It argues that adversarial reasoning transcends the boundaries of game theory and must also leverage such disciplines as cognitive modeling, control theory, AI planning and others. To illustrate the challenges of applying adversarial reasoning to real-world problems, the paper explores the lessons learned from CADET, a battle planning system that focuses on brigade-level ground operations and involves adversarial reasoning. From this example of current capabilities, the paper proceeds to describe RAID, a DARPA program that aims to build capabilities in adversarial reasoning, and how such capabilities would address practical requirements in Defense and other application areas.

  19. Part-whole reasoning in medical ontologies revisited--introducing SEP triplets into classification-based description logics.

    PubMed Central

    Schulz, S.; Romacker, M.; Hahn, U.

    1998-01-01

    The development of powerful and comprehensive medical ontologies that support formal reasoning on a large scale is one of the key requirements for clinical computing in the next millennium. Taxonomic medical knowledge, a major portion of these ontologies, is mainly characterized by generalization and part-whole relations between concepts. While reasoning in generalization hierarchies is quite well understood, no fully conclusive mechanism as yet exists for part-whole reasoning. The approach we take emulates part-whole reasoning via classification-based reasoning using SEP triplets, a special data structure for encoding part-whole relations that is fully embedded in the formal framework of standard description logics. PMID:9929335

  20. Razonamiento de Estudiantes Universitarios sobre Variabilidad e Intervalos de Confianza en un Contexto Inferencial Informal = University Students' Reasoning on Variability and Confidence Intervals in Inferential Informal Context

    ERIC Educational Resources Information Center

    Inzunsa Cazares, Santiago

    2016-01-01

    This article presents the results of a qualitative research with a group of 15 university students of social sciences on informal inferential reasoning developed in a computer environment on concepts involved in the confidence intervals. The results indicate that students developed a correct reasoning about sampling variability and visualized…

  1. Progressive Band Selection

    NASA Technical Reports Server (NTRS)

    Fisher, Kevin; Chang, Chein-I

    2009-01-01

    Progressive band selection (PBS) reduces spectral redundancy without significant loss of information, thereby reducing hyperspectral image data volume and processing time. Used onboard a spacecraft, it can also reduce image downlink time. PBS prioritizes an image's spectral bands according to priority scores that measure their significance to a specific application. Then it uses one of three methods to select an appropriate number of the most useful bands. Key challenges for PBS include selecting an appropriate criterion to generate band priority scores, and determining how many bands should be retained in the reduced image. The image's Virtual Dimensionality (VD), once computed, is a reasonable estimate of the latter. We describe the major design details of PBS and test PBS in a land classification experiment.
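
    A minimal sketch of the prioritise-then-select idea, assuming per-band variance as the priority score (the actual PBS criteria are application-specific) and taking the number of retained bands as a given, standing in for the VD estimate:

```python
# Hedged sketch: rank spectral bands by a priority score, then keep the
# top-ranked bands. Variance is one simple score; PBS chooses scores
# according to the target application.
def band_priorities(image):
    """image: list of bands, each a flat list of pixel values.
    Returns band indices sorted by descending variance."""
    def variance(band):
        mean = sum(band) / len(band)
        return sum((v - mean) ** 2 for v in band) / len(band)
    scores = [(variance(band), i) for i, band in enumerate(image)]
    return [i for _, i in sorted(scores, reverse=True)]

def select_bands(image, n_keep):
    """Keep the n_keep highest-priority bands, preserving band order."""
    keep = set(band_priorities(image)[:n_keep])
    return [band for i, band in enumerate(image) if i in keep]

# Three bands; the flat middle band carries no information and is dropped.
image = [[1.0, 5.0, 9.0], [4.0, 4.0, 4.0], [2.0, 8.0, 3.0]]
reduced = select_bands(image, n_keep=2)
assert len(reduced) == 2 and [4.0, 4.0, 4.0] not in reduced
```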

  2. Using Rasch Measurement to Develop a Computer Modeling-Based Instrument to Assess Students' Conceptual Understanding of Matter

    ERIC Educational Resources Information Center

    Wei, Silin; Liu, Xiufeng; Wang, Zuhao; Wang, Xingqiao

    2012-01-01

    Research suggests that difficulty in making connections among three levels of chemical representations--macroscopic, submicroscopic, and symbolic--is a primary reason for student alternative conceptions of chemistry concepts, and computer modeling is promising to help students make the connections. However, no computer modeling-based assessment…

  3. The Role of Context-Related Parameters in Adults' Mental Computational Acts

    ERIC Educational Resources Information Center

    Naresh, Nirmala; Presmeg, Norma

    2012-01-01

    Researchers who have carried out studies pertaining to mental computation and everyday mathematics point out that adults and children reason intuitively based upon experiences within specific contexts; they use invented strategies of their own to solve real-life problems. We draw upon research areas of mental computation and everyday mathematics…

  4. What Do Computer Science Students Think about Software Piracy?

    ERIC Educational Resources Information Center

    Konstantakis, Nikos I.; Palaigeorgiou, George E.; Siozos, Panos D.; Tsoukalas, Ioannis A.

    2010-01-01

    Today, software piracy is an issue of global importance. Computer science students are the future information and communication technologies professionals and it is important to study the way they approach this issue. In this article, we attempt to study attitudes, behaviours and the corresponding reasoning of computer science students in Greece…

  5. Using E-mail in a Math/Computer Core Course.

    ERIC Educational Resources Information Center

    Gurwitz, Chaya

    This paper notes the advantages of using e-mail in computer literacy classes, and discusses the results of incorporating an e-mail assignment in the "Introduction to Mathematical Reasoning and Computer Programming" core course at Brooklyn College (New York). The assignment consisted of several steps. The students first read and responded…

  6. 75 FR 4583 - In the Matter of: Certain Electronic Devices, Including Mobile Phones, Portable Music Players...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-28

    ..., Including Mobile Phones, Portable Music Players, and Computers; Notice of Investigation AGENCY: U.S... music players, and computers, by reason of infringement of certain claims of U.S. Patent Nos. 6,714,091... importation of certain electronic devices, including mobile phones, portable music players, or computers that...

  7. Forty years of collaborative computational crystallography.

    PubMed

    Agirre, Jon; Dodson, Eleanor

    2018-01-01

    A brief overview is provided of the history of collaborative computational crystallography, with an emphasis on the Collaborative Computational Project No. 4. The key steps in its development are outlined, with consideration also given to the underlying reasons which contributed to, and ultimately led to, the unprecedented success of this venture. © 2017 The Protein Society.

  8. All about Reading and Technology.

    ERIC Educational Resources Information Center

    Karbal, Harold, Ed.

    1985-01-01

    The central theme in this journal issue is the use of the computer in teaching reading. The following articles are included: "The Use of Computers in the Reading Program: A District Approach" by Nora Forester; "Reading and Computers: A Partnership" by Dr. Martha Irwin; "Rom, Ram and Reason" by Candice Carlile; "Word Processing: Practical Ideas and…

  9. Gender Differences in Computer-Administered Versus Paper-Based Tests.

    ERIC Educational Resources Information Center

    Wallace, Patricia; Clariana, Roy B.

    2005-01-01

    For many reasons, paper-based tests of course content are shifting to computer-based administration. This investigation examined student performance on two separate tests delivered either by computer or paper with the first test near the start of the course and the second at the end of the course. Undergraduate-level freshman business majors…

  10. Factors Affecting Utilization of Information Output of Computer-Based Modeling Procedures in Local Government Organizations.

    ERIC Educational Resources Information Center

    Komsky, Susan

    Fiscal Impact Budgeting Systems (FIBS) are sophisticated computer based modeling procedures used in local government organizations, whose results, however, are often overlooked or ignored by decision makers. A study attempted to discover the reasons for this situation by focusing on four factors: potential usefulness, faith in computers,…

  11. 75 FR 17437 - In the Matter of Certain Notebook Computer Products and Components Thereof; Notice of Commission...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-06

    ... importation of certain notebook computer products and components thereof by reason of infringement of the '693... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-705] In the Matter of Certain Notebook Computer Products and Components Thereof; Notice of Commission Decision Not To Review an Initial...

  12. Computers and Education: The Wrong Approach Continually Being Executed

    ERIC Educational Resources Information Center

    Walker, Jacob J.

    2005-01-01

    This opinion paper explores the use of computers in the U.S. Public Education System, concluding that technology is underutilized and has not fulfilled most of the educational promises attributed to it. Drawing upon research and personal experience, the paper explores 8 possible reasons for the problem, including: the computer itself; not enough…

  13. Using computer agents to explain medical documents to patients with low health literacy.

    PubMed

    Bickmore, Timothy W; Pfeifer, Laura M; Paasche-Orlow, Michael K

    2009-06-01

    Patients are commonly presented with complex documents that they have difficulty understanding. The objective of this study was to design and evaluate an animated computer agent to explain research consent forms to potential research participants. Subjects were invited to participate in a simulated consent process for a study involving a genetic repository. Explanation of the research consent form by the computer agent was compared to explanation by a human and a self-study condition in a randomized trial. Responses were compared according to level of health literacy. Participants were most satisfied with the consent process and most likely to sign the consent form when it was explained by the computer agent, regardless of health literacy level. Participants with adequate health literacy demonstrated the highest level of comprehension with the computer agent-based explanation compared to the other two conditions. However, participants with limited health literacy showed poor comprehension levels in all three conditions. Participants with limited health literacy reported several reasons, such as lack of time constraints, ability to re-ask questions, and lack of bias, for preferring the computer agent-based explanation over a human-based one. Animated computer agents can perform as well as or better than humans in the administration of informed consent. Animated computer agents represent a viable method for explaining health documents to patients.

  14. Reformulating Non-Monotonic Theories for Inference and Updating

    NASA Technical Reports Server (NTRS)

    Grosof, Benjamin N.

    1992-01-01

    We aim to help build programs that do large-scale, expressive non-monotonic reasoning (NMR): especially, 'learning agents' that store, and revise, a body of conclusions while continually acquiring new, possibly defeasible, premise beliefs. Currently available procedures for forward inference and belief revision are exhaustive, and thus impractical: they compute the entire non-monotonic theory, then re-compute from scratch upon updating with new axioms. These methods are thus badly intractable. In most theories of interest, even backward reasoning is combinatoric (at least NP-hard). Here, we give theoretical results for prioritized circumscription that show how to reformulate default theories so as to make forward inference be selective, as well as concurrent; and to restrict belief revision to a part of the theory. We elaborate a detailed divide-and-conquer strategy. We develop concepts of structure in NM theories, by showing how to reformulate them in a particular fashion: to be conjunctively decomposed into a collection of smaller 'part' theories. We identify two well-behaved special cases that are easily recognized in terms of syntactic properties: disjoint appearances of predicates, and disjoint appearances of individuals (terms). As part of this, we also definitionally reformulate the global axioms, one by one, in addition to applying decomposition. We identify a broad class of prioritized default theories, generalizing default inheritance, for which our results especially bear fruit. For this asocially monadic class, decomposition permits reasoning to be localized to individuals (ground terms), and reduced to propositional. Our reformulation methods are implementable in polynomial time, and apply to several other NM formalisms beyond circumscription.

  15. ESTIMATION OF PHYSIOCHEMICAL PROPERTIES OF ORGANIC COMPOUNDS BY SPARC

    EPA Science Inventory

    The computer program SPARC (SPARC Performs Automated Reasoning in Chemistry) has been under development for several years to estimate physical properties and chemical reactivity parameters of organic compounds strictly from molecular structure. SPARC uses computational algorithms...

  16. 26 CFR 1.148-5A - Yield and valuation of investments.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    .... For this purpose, present value is computed using the taxable discount rate used by the parties to compute the commission or, if not readily ascertainable, a reasonable taxable discount rate. [T.D. 8538...

  17. Teaching World History With Computers: Why Do I Do It and What's Involved.

    ERIC Educational Resources Information Center

    Tucker, Sara W.

    2002-01-01

    Identifies reasons for using computers to teach world history. Discusses how instructors can acquire and use digital classroom resources. Describes how to develop and use online courses and course Web pages. (PAL)

  18. Computational study of noise in a large signal transduction network.

    PubMed

    Intosalmi, Jukka; Manninen, Tiina; Ruohonen, Keijo; Linne, Marja-Leena

    2011-06-21

    Biochemical systems are inherently noisy due to the discrete reaction events that occur in a random manner. Although noise is often perceived as a disturbing factor, the system might actually benefit from it. In order to understand the role of noise better, its quality must be studied in a quantitative manner. Computational analysis and modeling play an essential role in this demanding endeavor. We implemented a large nonlinear signal transduction network combining protein kinase C, mitogen-activated protein kinase, phospholipase A2, and β isoform of phospholipase C networks. We simulated the network in 300 different cellular volumes using the exact Gillespie stochastic simulation algorithm and analyzed the results in both the time and frequency domain. In order to perform simulations in a reasonable time, we used modern parallel computing techniques. The analysis revealed that time and frequency domain characteristics depend on the system volume. The simulation results also indicated that there are several kinds of noise processes in the network, all of them representing different kinds of low-frequency fluctuations. In the simulations, the power of noise decreased on all frequencies when the system volume was increased. We concluded that basic frequency domain techniques can be applied to the analysis of simulation results produced by the Gillespie stochastic simulation algorithm. This approach is suited not only to the study of fluctuations but also to the study of pure noise processes. Noise seems to have an important role in biochemical systems and its properties can be numerically studied by simulating the reacting system in different cellular volumes. Parallel computing techniques make it possible to run massive simulations in hundreds of volumes and, as a result, accurate statistics can be obtained from computational studies. © 2011 Intosalmi et al; licensee BioMed Central Ltd.
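The exact Gillespie stochastic simulation algorithm the study builds on can be sketched in a few lines. The reaction system below, a single reversible isomerization A ⇌ B, and its rate constants are illustrative stand-ins, not the signal transduction network from the paper.

```python
import math
import random

def gillespie(x_a, x_b, k_f, k_r, t_end, seed=1):
    """Exact SSA for the reversible reaction A <-> B.

    Returns the trajectory as a list of (time, #A, #B) tuples.
    """
    rng = random.Random(seed)
    t, traj = 0.0, [(0.0, x_a, x_b)]
    while t < t_end:
        a1 = k_f * x_a          # propensity of A -> B
        a2 = k_r * x_b          # propensity of B -> A
        a0 = a1 + a2
        if a0 == 0.0:
            break
        # exponential waiting time; 1 - random() lies in (0, 1]
        t += -math.log(1.0 - rng.random()) / a0
        if rng.random() * a0 < a1:
            x_a, x_b = x_a - 1, x_b + 1
        else:
            x_a, x_b = x_a + 1, x_b - 1
        traj.append((t, x_a, x_b))
    return traj

traj = gillespie(100, 0, k_f=1.0, k_r=0.5, t_end=5.0)
```

Running many such trajectories at different copy numbers mimics the paper's volume sweep: scaling the initial counts up while keeping concentrations fixed shrinks the relative size of the fluctuations.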

  19. German dental faculty attitudes towards computer-assisted learning and their correlation with personal and professional profiles.

    PubMed

    Welk, A; Rosin, M; Seyer, D; Splieth, C; Siemer, M; Meyer, G

    2005-08-01

    Compared with its potential, computer technology is still underused in medical/dental education. The aim was to investigate the primary advantages of computer-assisted learning (CAL) systems in German dental education, as well as the reasons for their relatively low degree of use, correlated with the personal and professional profiles of respondents. A questionnaire was mailed to the heads of the departments of conservative dentistry and prosthetic dentistry in all dental schools in Germany. Besides investigating the advantages of and barriers to the use of computer technology, the questionnaire also contained questions regarding each respondent's gender, age, academic rank, experience in academia and computer skills. The response rate to the questionnaire was 90% (112 of 125). The results indicated a distinct discrepancy between the desire for and actual occurrence of lectures, seminars, etc. to instruct students in ways to search for and acquire knowledge, especially using computer technology. The highest-ranked advantages of CAL systems, in order, as seen by respondents, were the possibilities for individual learning, increased motivation, and both objective theoretical tests and practical tests. The highest-ranked reasons for the low degree of usage of CAL systems, in order, were the inability to finance, followed equally by a lack of studies of CAL and a poor cost-advantage ratio, and too much effort required to integrate CAL into the curriculum. Moreover, the higher the computer skills of the respondents, the more they noted insufficient quality of CAL systems (r = 0.200, P = 0.035) and content differences from their own dental faculty's expert opinions (r = 0.228, P = 0.016) as reasons for low use. The correlations of the attitudes towards CAL with the personal and professional profiles showed not only statistically significant reinforcements of, but also interesting deviations from, the average responses.

  20. The foundations of the human cultural niche

    PubMed Central

    Derex, Maxime; Boyd, Robert

    2015-01-01

    Technological innovations have allowed humans to settle in habitats for which they are poorly suited biologically. However, our understanding of how humans produce complex technologies is limited. We used a computer-based experiment, involving humans and learning bots, to investigate how reasoning abilities, social learning mechanisms and population structure affect the production of virtual artefacts. We found that humans' reasoning abilities play an important role in the production of innovations, but that groups of individuals are able to produce artefacts that are more complex than any isolated individual can produce during the same amount of time. We show that this group-level ability to produce complex innovations is maximized when social information is easy to acquire and when individuals are organized into large and partially connected populations. These results suggest that the transition to behavioural modernity could have been triggered by a change in ancestral between-group interaction patterns. PMID:26400015

  1. Benchmarking the SPHINX and CTH shock physics codes for three problems in ballistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, L.T.; Hertel, E.; Schwalbe, L.

    1998-02-01

    The CTH Eulerian hydrocode and the SPHINX smooth particle hydrodynamics (SPH) code were used to model a shock tube, two long rod penetrations into semi-infinite steel targets, and a long rod penetration into a spaced plate array. The results were then compared to experimental data. Both SPHINX and CTH modeled the one-dimensional shock tube problem well. Both codes did a reasonable job in modeling the outcome of the axisymmetric rod impact problem. Neither code correctly reproduced the depth of penetration in both experiments. In the 3-D problem, both codes reasonably replicated the penetration of the rod through the first plate. After this, however, the predictions of both codes began to diverge from the results seen in the experiment. In terms of computer resources, the run times are problem dependent and are discussed in the text.

  2. Explanation-aware computing of the prognosis for breast cancer supported by IK-DCBRC: Technical innovation.

    PubMed

    Khelassi, Abdeldjalil

    2014-01-01

    Active research is being conducted to determine the prognosis for breast cancer. However, uncertainty is a major obstacle in this domain of medical research. In that context, explanation-aware computing has the potential to provide meaningful interactions between complex medical applications and users, ensuring a significant reduction of uncertainty and risk. This paper presents an explanation-aware agent, supported by the Intensive Knowledge-Distributed Case-Based Reasoning Classifier (IK-DCBRC), to reduce the uncertainty and risks associated with the diagnosis of breast cancer. A meaningful explanation is generated by inferring from a rule-based system according to the level of abstraction and the reasoning traces. The computer-aided detection is conducted by IK-DCBRC, a multi-agent system that applies the case-based reasoning paradigm and a fuzzy similarity function for the automatic prognosis of breast tumors, classifying them as malignant or benign from a pattern of cytological images. A meaningful interaction between the physician and the computer-aided diagnosis system, IK-DCBRC, is achieved via an intelligent agent. The physician can observe the trace of reasoning, the terms, the justifications, and the strategy used, which decreases the risks and doubts associated with the automatic diagnosis. The capability of the system we have developed was demonstrated by an example in which conflicts were clarified and transparency was ensured. The explanation agent ensures the transparency of the automatic diagnosis of breast cancer supported by IK-DCBRC, which decreases uncertainty and risks and detects some conflicts.

  3. Working Notes from the 1992 AAAI Spring Symposium on Practical Approaches to Scheduling and Planning

    NASA Technical Reports Server (NTRS)

    Drummond, Mark; Fox, Mark; Tate, Austin; Zweben, Monte

    1992-01-01

    The symposium presented issues involved in the development of scheduling systems that can deal with resource and time limitations. To qualify, a system must be implemented and tested to some degree on non-trivial problems (ideally, on real-world problems). However, a system need not be fully deployed to qualify. Systems that schedule actions in terms of metric time constraints typically represent and reason about an external numeric clock or calendar and can be contrasted with those systems that represent time purely symbolically. The following topics are discussed: integrating planning and scheduling; integrating symbolic goals and numerical utilities; managing uncertainty; incremental rescheduling; managing limited computation time; anytime scheduling and planning algorithms, systems; dependency analysis and schedule reuse; management of schedule and plan execution; and incorporation of discrete event techniques.

  4. Digital Photography--a Question of Ethics.

    ERIC Educational Resources Information Center

    Meltzer, Bonnie

    1996-01-01

    Discusses reasons for computer manipulation of photographs and the need to educate students about the ethics involving digital scanning. Focuses on ownership, copyrights, reasons for changing a photograph, reader-interpretation, and the context in which the photograph is used. Suggests a photo illustration project to teach students about these…

  5. The LHCb software and computing upgrade for Run 3: opportunities and challenges

    NASA Astrophysics Data System (ADS)

    Bozzi, C.; Roiser, S.; LHCb Collaboration

    2017-10-01

    The LHCb detector will be upgraded for LHC Run 3 and will be read out at 30 MHz, corresponding to the full inelastic collision rate, with major implications for the full software trigger and offline computing. If the current computing model and software framework are kept, the data storage capacity and computing power required to process data at this rate, and to generate and reconstruct equivalent samples of simulated events, will exceed the current capacity by at least one order of magnitude. A redesign of the software framework, including scheduling, the event model, the detector description and the conditions database, is needed to fully exploit the computing power of multi- and many-core architectures and coprocessors. Data processing and the analysis model will also change towards an early streaming of different data types, in order to limit storage resources, with further implications for the data analysis workflows. Fast simulation options will make it possible to obtain a reasonable parameterization of the detector response in considerably less computing time. Finally, the upgrade of LHCb will be a good opportunity to review and implement changes in the domains of software design, test and review, and analysis workflow and preservation. In this contribution, activities and recent results in all the above areas are presented.

  6. Visual saliency-based fast intracoding algorithm for high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Zhou, Xin; Shi, Guangming; Zhou, Wei; Duan, Zhemin

    2017-01-01

    Intraprediction has been significantly improved in high efficiency video coding over H.264/AVC with quad-tree-based coding unit (CU) structure from size 64×64 to 8×8 and more prediction modes. However, these techniques cause a dramatic increase in computational complexity. An intracoding algorithm is proposed that consists of perceptual fast CU size decision algorithm and fast intraprediction mode decision algorithm. First, based on the visual saliency detection, an adaptive and fast CU size decision method is proposed to alleviate intraencoding complexity. Furthermore, a fast intraprediction mode decision algorithm with step halving rough mode decision method and early modes pruning algorithm is presented to selectively check the potential modes and effectively reduce the complexity of computation. Experimental results show that our proposed fast method reduces the computational complexity of the current HM to about 57% in encoding time with only 0.37% increases in BD rate. Meanwhile, the proposed fast algorithm has reasonable peak signal-to-noise ratio losses and nearly the same subjective perceptual quality.

  7. Performance Evaluation of Communication Software Systems for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod

    1996-01-01

    In recent years there has been an increasing interest in object-oriented distributed computing since it is better equipped to deal with complex systems while providing extensibility, maintainability, and reusability. At the same time, several new high-speed network technologies have emerged for local and wide area networks. However, the performance of networking software is not improving as fast as the networking hardware and the workstation microprocessors. This paper gives an overview and evaluates the performance of the Common Object Request Broker Architecture (CORBA) standard in a distributed computing environment at NASA Ames Research Center. The environment consists of two testbeds of SGI workstations connected by four networks: Ethernet, FDDI, HiPPI, and ATM. The performance results for three communication software systems are presented, analyzed and compared. These systems are: the BSD socket programming interface, IONA's Orbix, an implementation of the CORBA specification, and the PVM message passing library. The results show that high-level communication interfaces, such as CORBA and PVM, can achieve reasonable performance under certain conditions.

  8. Students' learning of clinical sonography: use of computer-assisted instruction and practical class.

    PubMed

    Wood, A K; Dadd, M J; Lublin, J R

    1996-08-01

    The application of information technology to teaching radiology will profoundly change the way learning is mediated to students. In this project, the integration of veterinary medical students' knowledge of sonography was promoted by a computer-assisted instruction program and a subsequent practical class. The computer-assisted instruction program emphasized the physical principles of clinical sonography and contained simulations and user-active experiments. In the practical class, the students used an actual sonographic machine for the first time and made images of a tissue-equivalent phantom. Students' responses to questionnaires were analyzed. On completing the overall project, 96% of the students said that they now understood sonographic concepts very or reasonably well, and 98% had become very or moderately interested in clinical sonography. The teaching and learning initiatives enhanced an integrated approach to learning, stimulated student interest and curiosity, improved understanding of sonographic principles, and contributed to an increased confidence and skill in using sonographic equipment.

  9. Realization of planning design of mechanical manufacturing system by Petri net simulation model

    NASA Astrophysics Data System (ADS)

    Wu, Yanfang; Wan, Xin; Shi, Weixiang

    1991-09-01

    Planning design means working out a comprehensive long-term plan. In order to guarantee that a mechanical manufacturing system (MMS) is designed to obtain maximum economic benefit, it is necessary to carry out a reasonable planning design for the system. First, some principles of planning design for MMS are introduced, and problems of production scheduling and their decision rules for computer simulation are presented. A realizable method for each production-scheduling decision rule in a Petri net model is discussed. Second, the solution of conflict rules for conflict problems arising while running the Petri net is given. Third, based on the Petri net model of the MMS, which includes part flow and tool flow, and according to the principle of minimum event time advance, a computer dynamic simulation of the Petri net model, that is, of the MMS itself, is realized. Finally, the simulation program is applied to an example, so that the scheme of a planning design for the MMS can be evaluated effectively.
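The "minimum event time advance" principle can be sketched as a discrete-event simulation of a timed Petri net: enabled transitions start and queue their completion events, and the clock always jumps to the earliest pending completion. The two-transition net below is an invented toy, not the part-flow/tool-flow model from the paper.

```python
import heapq

# Minimal timed Petri net sketch: places hold tokens, transitions
# consume/produce tokens and take a fixed duration. The net below
# models one machine processing raw parts and is purely illustrative.
NET = {
    "load":    {"in": ["raw", "machine"], "out": ["busy"],            "dur": 1.0},
    "process": {"in": ["busy"],           "out": ["done", "machine"], "dur": 3.0},
}

def simulate(marking, t_end=20.0):
    t, events, log = 0.0, [], []
    while t <= t_end:
        # start every transition whose input places all hold a token
        for name, tr in NET.items():
            while all(marking.get(p, 0) > 0 for p in tr["in"]):
                for p in tr["in"]:
                    marking[p] -= 1
                heapq.heappush(events, (t + tr["dur"], name))
        if not events:
            break
        t, name = heapq.heappop(events)   # minimum event time advance
        for p in NET[name]["out"]:
            marking[p] = marking.get(p, 0) + 1
        log.append((t, name))
    return marking, log

final, log = simulate({"raw": 2, "machine": 1})
```

Scheduling decision rules slot in naturally where the sketch picks which enabled transition to start; conflict resolution is needed exactly when two transitions compete for the same token.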

  10. ESTIMATION OF PHYSICAL PROPERTIES AND CHEMICAL REACTIVITY PARAMETERS OF ORGANIC COMPOUNDS

    EPA Science Inventory

    The computer program SPARC (Sparc Performs Automated Reasoning in Chemistry)has been under development for several years to estimate physical properties and chemical reactivity parameters of organic compounds strictly from molecular structure. SPARC uses computational algorithms ...

  11. Massively-parallel FDTD simulations to address mask electromagnetic effects in hyper-NA immersion lithography

    NASA Astrophysics Data System (ADS)

    Tirapu Azpiroz, Jaione; Burr, Geoffrey W.; Rosenbluth, Alan E.; Hibbs, Michael

    2008-03-01

    In the Hyper-NA immersion lithography regime, the electromagnetic response of the reticle is known to deviate in a complicated manner from the idealized Thin-Mask-like behavior. Already, this is driving certain RET choices, such as the use of polarized illumination and the customization of reticle film stacks. Unfortunately, full 3-D electromagnetic mask simulations are computationally intensive. And while OPC-compatible mask electromagnetic field (EMF) models can offer a reasonable tradeoff between speed and accuracy for full-chip OPC applications, full understanding of these complex physical effects demands higher accuracy. Our paper describes recent advances in leveraging High Performance Computing as a critical step towards lithographic modeling of the full manufacturing process. In this paper, highly accurate full 3-D electromagnetic simulations of very large mask layouts are conducted in parallel with reasonable turnaround time, using a BlueGene/L supercomputer and a Finite-Difference Time-Domain (FDTD) code developed internally within IBM. A 3-D simulation of a large 2-D layout spanning 5μm×5μm at the wafer plane (and thus 20μm×20μm×0.5μm at the mask) results in a simulation with roughly 12.5GB of memory (grid size of 10nm at the mask, single-precision computation, about 30 bytes/grid point). FDTD is flexible and easily parallelizable, enabling full simulations of such a large layout in approximately an hour using one BlueGene/L "midplane" containing 512 dual-processor nodes with 256MB of memory per processor. Our scaling studies on BlueGene/L demonstrate that simulations up to 100μm×100μm at the mask can be computed in a few hours. Finally, we show that the use of a subcell technique permits accurate simulation of features smaller than the grid discretization, thus improving on the tradeoff between computational complexity and simulation accuracy.
We demonstrate the correlation of the real and quadrature components that comprise the Boundary Layer representation of the EMF behavior of a mask blank to intensity measurements of the mask diffraction patterns by an Aerial Image Measurement System (AIMS) with polarized illumination. We also discuss how this model can become a powerful tool for the assessment of the impact to the lithographic process of a mask blank.
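The full 3-D FDTD computations above require a supercomputer, but the leapfrog update at the heart of the method fits in a few lines. The sketch below is a one-dimensional free-space Yee scheme in normalized units; the grid size, step count, and source shape are arbitrary illustrative choices.

```python
import math

def fdtd_1d(n_cells=200, n_steps=300, src_pos=100):
    """1-D free-space FDTD (Yee leapfrog, normalized units).

    Ez and Hy are staggered in space and time; the Courant number
    is 0.5 here, safely below the 1-D stability limit of 1.
    """
    ez = [0.0] * n_cells
    hy = [0.0] * n_cells
    c = 0.5  # Courant number
    for n in range(n_steps):
        for k in range(n_cells - 1):
            hy[k] += c * (ez[k + 1] - ez[k])
        for k in range(1, n_cells):
            ez[k] += c * (hy[k] - hy[k - 1])
        # soft Gaussian-pulse source injected at src_pos
        ez[src_pos] += math.exp(-((n - 30) ** 2) / 100.0)
    return ez

field = fdtd_1d()
```

The two inner loops over the grid are independent per cell, which is what makes domain decomposition across thousands of BlueGene/L processors straightforward: each node updates its slab and exchanges only boundary cells.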

  12. Infrastructure Systems for Advanced Computing in E-science applications

    NASA Astrophysics Data System (ADS)

    Terzo, Olivier

    2013-04-01

    In the e-science field there is a growing need for computing infrastructure that is more dynamic and customizable, with an "on demand" model of use that follows the exact request in terms of resources and storage capacity. Integrating grid and cloud infrastructure solutions allows us to offer services that adapt availability by scaling resources up and down. The main challenge for e-science domains is to implement infrastructure solutions for scientific computing that adapt dynamically to the demand for computing resources, with a strong emphasis on optimizing resource use in order to reduce investment costs. Instrumentation, data volumes, algorithms, and analysis all increase the complexity of applications that require high processing power and storage for a limited time, often exceeding the computational resources available to the majority of laboratories and research units in an organization. Very often it is necessary to adapt, or even rethink, tools and algorithms, and to consolidate existing applications through a phase of reverse engineering, in order to deploy them on a cloud infrastructure. For example, in areas such as rainfall monitoring, meteorological analysis, hydrometeorology, climatology, bioinformatics, next-generation sequencing, computational electromagnetics, and radio occultation, the complexity of the analysis raises several issues, such as processing time, the scheduling of processing tasks, storage of results, and a multi-user environment. For these reasons, the writing model of e-science applications must be rethought so that they are already adapted to exploit the potential of cloud computing services through the IaaS, PaaS, and SaaS layers.
    Another important focus is on creating and using hybrid infrastructures, typically a federation between private and public clouds: when all resources owned by the organization are in use, a federated cloud infrastructure makes it easy to add resources from the public cloud to meet computational and storage needs, and to release them when processing is finished. In the hybrid model, the scheduling approach is important for managing both cloud models. Thanks to this infrastructure model, resources are always available for additional IT-capacity requests and can be used "on demand" for a limited time without having to purchase additional servers.

  13. Finding Waldo: Learning about Users from their Interactions.

    PubMed

    Brown, Eli T; Ottley, Alvitta; Zhao, Helen; Quan Lin; Souvenir, Richard; Endert, Alex; Chang, Remco

    2014-12-01

    Visual analytics is inherently a collaboration between human and computer. However, in current visual analytics systems, the computer has limited means of knowing about its users and their analysis processes. While existing research has shown that a user's interactions with a system reflect a large amount of the user's reasoning process, there has been limited advancement in developing automated, real-time techniques that mine interactions to learn about the user. In this paper, we demonstrate that we can accurately predict a user's task performance and infer some user personality traits by using machine learning techniques to analyze interaction data. Specifically, we conduct an experiment in which participants perform a visual search task, and apply well-known machine learning algorithms to three encodings of the users' interaction data. We achieve, depending on algorithm and encoding, between 62% and 83% accuracy at predicting whether each user will be fast or slow at completing the task. Beyond predicting performance, we demonstrate that using the same techniques, we can infer aspects of the user's personality factors, including locus of control, extraversion, and neuroticism. Further analyses show that strong results can be attained with limited observation time: in one case 95% of the final accuracy is gained after a quarter of the average task completion time. Overall, our findings show that interactions can provide information to the computer about its human collaborator, and establish a foundation for realizing mixed-initiative visual analytics systems.
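The paper's pipeline, encoding each user's interaction stream as a feature vector and then training a classifier to predict fast versus slow completion, can be caricatured in a few lines. The features, data, and nearest-centroid classifier below are invented stand-ins for the study's actual encodings and learning algorithms.

```python
# Toy illustration only: feature values and labels are fabricated,
# and a nearest-centroid classifier stands in for the well-known
# ML algorithms used in the study.
TRAIN = [  # (clicks/min, mean pause s, hovers/min) -> label
    ((42.0, 0.8, 30.0), "fast"),
    ((38.0, 1.1, 25.0), "fast"),
    ((12.0, 4.2, 8.0),  "slow"),
    ((15.0, 3.7, 10.0), "slow"),
]

def centroids(data):
    """Average the feature vectors of each class."""
    sums, counts = {}, {}
    for x, y in data:
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [a + b for a, b in zip(sums.get(y, [0.0] * len(x)), x)]
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict(model, x):
    """Assign x to the class with the nearest centroid."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda y: dist(model[y]))

model = centroids(TRAIN)
label = predict(model, (40.0, 0.9, 28.0))
```

The interesting part of the paper is upstream of any such classifier: choosing interaction encodings informative enough that even simple learners separate the user groups.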

  14. Viscosity Measurement using Drop Coalescence in Microgravity

    NASA Technical Reports Server (NTRS)

    Antar, Basil N.; Ethridge, Edwin; Maxwell, Daniel

    1999-01-01

    We present here the details of a new method, using drop coalescence, for determining the viscosity of highly viscous undercooled liquids in a microgravity environment. The method has the advantage of eliminating heterogeneous nucleation at container walls caused by crystallization of undercooled liquids during processing. Also, due to the rapidity of the measurement, homogeneous nucleation is avoided. The technique relies both on a highly accurate solution of the Navier-Stokes equations and on data gathered from experiments conducted in a near-zero-gravity environment. The liquid viscosity is determined by allowing the computed free-surface shape relaxation time to be adjusted in response to the measured free-surface velocity of two coalescing drops. Results are presented from two validation experiments of the method conducted recently on board the NASA KC-135 aircraft. In these tests the viscosity of a highly viscous liquid, glycerine at different temperatures, was determined to reasonable accuracy using the drop coalescence method. The experiments measured, using high-speed photography, the free-surface velocity of two glycerine drops coalescing under the action of surface tension alone in a low-gravity environment. The free-surface velocity was then compared with computed values obtained for different viscosity values. The results of these experiments were found to agree reasonably well with the calculated values.

  15. Driving a car with custom-designed fuzzy inferencing VLSI chips and boards

    NASA Technical Reports Server (NTRS)

    Pin, Francois G.; Watanabe, Yutaka

    1993-01-01

    Vehicle control in a-priori unknown, unpredictable, and dynamic environments requires many calculational and reasoning schemes to operate on the basis of very imprecise, incomplete, or unreliable data. For such systems, in which all the uncertainties cannot be engineered away, approximate reasoning may provide an alternative to the complexity and computational requirements of conventional uncertainty analysis and propagation techniques. Two types of computer boards including custom-designed VLSI chips were developed to add a fuzzy inferencing capability to real-time control systems. All inferencing rules on a chip are processed in parallel, allowing execution of the entire rule base in about 30 microseconds, and therefore making control of 'reflex-type' motions envisionable. The use of these boards and the approach using superposition of elemental sensor-based behaviors for the development of qualitative reasoning schemes emulating human-like navigation in a-priori unknown environments are first discussed. Then how the human-like navigation scheme implemented on one of the qualitative inferencing boards was installed on a test-bed platform to investigate two control modes for driving a car in a-priori unknown environments on the basis of sparse and imprecise sensor data is described. In the first mode, the car navigates fully autonomously, while in the second mode, the system acts as a driver's aid providing the driver with linguistic (fuzzy) commands to turn left or right and speed up or slow down depending on the obstacles perceived by the sensors. Experiments with both modes of control are described in which the system uses only three acoustic range (sonar) sensor channels to perceive the environment.
Simulation results as well as indoors and outdoors experiments are presented and discussed to illustrate the feasibility and robustness of autonomous navigation and/or safety enhancing driver's aid using the new fuzzy inferencing hardware system and some human-like reasoning schemes which may include as little as six elemental behaviors embodied in fourteen qualitative rules.
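A minimal fuzzy-inference sketch in the spirit of the boards described above: all rules are evaluated "in parallel" over the same sensor reading, and the rule outputs are combined by weighted-average defuzzification. The membership shapes, rule set, and speed values are invented for illustration, not the fourteen qualitative rules from the paper.

```python
# Illustrative fuzzy controller: maps a sonar distance reading (m)
# to a speed command (m/s). Membership functions are simple ramps.
def near(d):   # membership of "obstacle is near"
    return max(0.0, min(1.0, (2.0 - d) / 2.0))

def far(d):    # membership of "obstacle is far"
    return max(0.0, min(1.0, (d - 1.0) / 2.0))

RULES = [  # (antecedent membership fn, crisp output speed in m/s)
    (near, 0.2),   # IF obstacle near THEN slow down
    (far,  1.5),   # IF obstacle far  THEN speed up
]

def infer_speed(distance):
    """Fire all rules on the reading, then defuzzify by the
    membership-weighted average of the rule outputs."""
    weights = [(mf(distance), out) for mf, out in RULES]
    total = sum(w for w, _ in weights)
    if total == 0.0:
        return 0.0
    return sum(w * out for w, out in weights) / total

slow = infer_speed(0.3)   # very close obstacle
fast = infer_speed(4.0)   # clear road ahead
```

Because every rule is evaluated on every reading, the structure maps directly onto hardware that processes the whole rule base in parallel, as the chips described above do.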

  16. Effects of Mathematics Computer Games on Special Education Students' Multiplicative Reasoning Ability

    ERIC Educational Resources Information Center

    Bakker, Marjoke; van den Heuvel-Panhuizen, Marja; Robitzsch, Alexander

    2016-01-01

    This study examined the effects of a teacher-delivered intervention with online mathematics mini-games on special education students' multiplicative reasoning ability (multiplication and division). The games involved declarative, procedural, as well as conceptual knowledge of multiplicative relations, and were accompanied with teacher-led lessons…

  17. Investigating Student Use of Electronic Support Tools and Mathematical Reasoning

    ERIC Educational Resources Information Center

    Higgins, Kristina N.; Crawford, Lindy; Huscroft-D'Angelo, Jacqueline; Horney, Mark

    2016-01-01

    Mathematical reasoning involves comprehending mathematical information and concepts in a logical way and forming conclusions and generalizations based on this comprehension. Computer-based learning has been incorporated into classrooms across the country, and specific aspects of technology need to be studied to determine how programs are…

  18. Genetics Reasoning with Multiple External Representations.

    ERIC Educational Resources Information Center

    Tsui, Chi-Yan; Treagust, David F.

    2003-01-01

    Explores a case study of a class of 10th grade students whose learning of genetics involved activities using BioLogica, a computer program that features multiple external representations (MERs). Findings indicate that the MERs in BioLogica contributed to students' development of genetics reasoning by engendering their motivation and interest but…

  19. The Effects of Inquiry-Based Computer Simulation with Cooperative Learning on Scientific Thinking and Conceptual Understanding of Gas Laws

    ERIC Educational Resources Information Center

    Abdullah, Sopiah; Shariff, Adilah

    2008-01-01

    The purpose of the study was to investigate the effects of inquiry-based computer simulation with heterogeneous-ability cooperative learning (HACL) and inquiry-based computer simulation with friendship cooperative learning (FCL) on (a) scientific reasoning (SR) and (b) conceptual understanding (CU) among Form Four students in Malaysian Smart…

  20. Inequities in Computer Education Due to Gender, Race, and Socioeconomic Status.

    ERIC Educational Resources Information Center

    Urban, Cynthia M.

    Recent reports have revealed that inequalities exist between males and females, racial minorities and whites, and rich and poor in accessibility to and use of computers. This study reviews the research in the field of computer-based education to determine the extent of and reasons for these inequities. The annotated research articles are arranged…

  1. Strategies for the Creation, Design and Implementation of Effective Interactive Computer-Aided Learning Software in Numerate Business Subjects--The Byzantium Experience.

    ERIC Educational Resources Information Center

    Wilkinson-Riddle, G. J.; Patel, Ashok

    1998-01-01

    Discusses courseware development, including intelligent tutoring systems, under the Teaching and Learning Technology Programme and the Byzantium project that was designed to define computer-aided learning performance standards suitable for numerate business subjects; examine reasons to use computer-aided learning; and improve access to educational…

  2. Computer program and user documentation medical data tape retrieval system

    NASA Technical Reports Server (NTRS)

    Anderson, J.

    1971-01-01

    This volume provides several levels of documentation for the program module of the NASA medical directorate mini-computer storage and retrieval system. A biomedical information system overview describes some of the reasons for the development of the mini-computer storage and retrieval system. It briefly outlines all of the program modules which constitute the system.

  3. Investigating the Status of Tablet Computers and E-Books Use of Open Education Faculty Students: A Case Study

    ERIC Educational Resources Information Center

    Koçak, Ömer; Yildirim, Önder; Kursun, Engin; Yildirim, Gürkan

    2016-01-01

    The increase in tablet computer and e-book use raises the question of how users benefit from these technologies. In this sense, the present study investigated the status of students' tablet computer and e-book use and the reasons why students choose to use, or not to use, tablet computers and e-books. Students' study habits while…

  4. Understanding health care communication preferences of veteran primary care users.

    PubMed

    LaVela, Sherri L; Schectman, Gordon; Gering, Jeffrey; Locatelli, Sara M; Gawron, Andrew; Weaver, Frances M

    2012-09-01

    To assess veterans' health communication preferences (in-person, telephone, or electronic) for primary care needs and the impact of computer use on preferences. Structured patient interviews (n=448). Bivariate analyses examined preferences for primary care by 'infrequent' vs. 'regular' computer users. Only 54% were regular computer users, nearly all of whom had used the internet. 'Telephone' was preferred for 6 of 10 reasons (general medical questions, medication questions and refills, preventive care reminders, scheduling, and test results), although telephone was preferred by markedly fewer regular computer users. 'In-person' was preferred for new/ongoing conditions/symptoms, treatment instructions, and next care steps; these preferences were unaffected by computer use frequency. Among regular computer users, 1/3 preferred 'electronic' for preventive reminders (37%), test results (34%), and refills (32%). For most primary care needs, telephone communication was preferred, although by a greater proportion of infrequent vs. regular computer users. In-person communication was preferred for reasons that may require an exam or visual instructions. About 1/3 of regular computer users prefer electronic communication for routine needs, e.g., preventive reminders, test results, and refills. These findings can be used to plan patient-centered care that is aligned with veterans' preferred health communication methods. Published by Elsevier Ireland Ltd.

  5. Computer-assisted learning and simulation systems in dentistry--a challenge to society.

    PubMed

    Welk, A; Splieth, Ch; Wierinck, E; Gilpatrick, R O; Meyer, G

    2006-07-01

    Computer technology is increasingly used in practical training at universities. However, in spite of their potential, computer-assisted learning (CAL) and computer-assisted simulation (CAS) systems still appear to be underutilized in dental education. Advantages, challenges, problems, and solutions of computer-assisted learning and simulation in dentistry are discussed by means of MEDLINE, open Internet platform searches, and key results of a study among German dental schools. The advantages of computer-assisted learning are seen, for example, in self-paced and self-directed learning and increased motivation. It is useful for both objective theoretical and practical tests and for training students to handle complex cases. CAL can lead to more structured learning and can support training in evidence-based decision-making. The reasons for the still relatively rare implementation of CAL/CAS systems in dental education include a lack of funding, a shortage of studies on CAL/CAS, and the high effort required to integrate CAL/CAS systems into the curriculum. To overcome the reasons for the relatively low degree of computer technology use, we should strive for multicenter research and development projects monitored by the appropriate national and international scientific societies, so that the potential of computer technology can be fully realized in graduate, postgraduate, and continuing dental education.

  6. Efficient calculation of nuclear spin-rotation constants from auxiliary density functional theory.

    PubMed

    Zuniga-Gutierrez, Bernardo; Camacho-Gonzalez, Monica; Bendana-Castillo, Alfonso; Simon-Bastida, Patricia; Calaminici, Patrizia; Köster, Andreas M

    2015-09-14

    The computation of the spin-rotation tensor within the framework of auxiliary density functional theory (ADFT) in combination with the gauge including atomic orbital (GIAO) scheme, to treat the gauge origin problem, is presented. For the spin-rotation tensor, the calculation of the magnetic shielding tensor represents the most demanding computational task. Employing the ADFT-GIAO methodology, the central processing unit time for the magnetic shielding tensor calculation can be dramatically reduced. In this work, the quality of spin-rotation constants obtained with the ADFT-GIAO methodology is compared with available experimental data as well as with other theoretical results at the Hartree-Fock and coupled-cluster level of theory. It is found that the agreement between the ADFT-GIAO results and the experiment is good and very similar to the ones obtained by the coupled-cluster single-doubles-perturbative triples-GIAO methodology. With the improved computational performance achieved, the computation of the spin-rotation tensors of large systems or along Born-Oppenheimer molecular dynamics trajectories becomes feasible in reasonable times. Three models of carbon fullerenes containing hundreds of atoms and thousands of basis functions are used for benchmarking the performance. Furthermore, a theoretical study of temperature effects on the structure and spin-rotation tensor of the H(12)C-(12)CH-DF complex is presented. Here, the temperature dependency of the spin-rotation tensor of the fluorine nucleus can be used to identify experimentally the so far unknown bent isomer of this complex. To the best of our knowledge this is the first time that temperature effects on the spin-rotation tensor are investigated.

  7. Fine-grained parallelization of fitness functions in bioinformatics optimization problems: gene selection for cancer classification and biclustering of gene expression data.

    PubMed

    Gomez-Pulido, Juan A; Cerrada-Barrios, Jose L; Trinidad-Amado, Sebastian; Lanza-Gutierrez, Jose M; Fernandez-Diaz, Ramon A; Crawford, Broderick; Soto, Ricardo

    2016-08-31

    Metaheuristics are widely used to solve large combinatorial optimization problems in bioinformatics because of the huge set of possible solutions. Two representative problems are gene selection for cancer classification and biclustering of gene expression data. In most cases, these metaheuristics, as well as other non-linear techniques, apply a fitness function to each possible solution in a size-limited population; this step involves higher latencies than other parts of the algorithms, so the execution time of the applications depends mainly on the execution time of the fitness function. In addition, fitness functions are usually formulated in floating-point arithmetic. Thus, a careful parallelization of these functions using reconfigurable hardware technology will accelerate the computation, especially if they are applied in parallel to several solutions of the population. A fine-grained parallelization of two floating-point fitness functions of different complexities and features, involved in biclustering of gene expression data and gene selection for cancer classification, yielded higher speedups and lower-power computation than conventional microprocessors. The results show better performance, in terms of computing time and power consumption, using reconfigurable hardware technology instead of conventional microprocessors, not only because of the parallelization of the arithmetic operations, but also thanks to the concurrent fitness evaluation for several individuals of the population in the metaheuristic. This is a good basis for building accelerated and low-energy solutions for intensive computing scenarios.
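
    The core idea of the abstract, scoring several individuals of the population concurrently, can be sketched in plain software. The sketch below is illustrative only: the fitness function is made up, and a thread pool stands in for the paper's FPGA evaluators.

```python
from concurrent.futures import ThreadPoolExecutor
import math

def fitness(candidate):
    # Hypothetical floating-point fitness; the paper's actual functions
    # score gene subsets for classification and biclusters of expression data.
    return sum(math.sin(x) ** 2 for x in candidate)

def evaluate_population(population, workers=4):
    # Evaluate all candidates concurrently -- the same concurrent-evaluation
    # idea the paper implements in reconfigurable hardware.
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(fitness, population))
```

    In the hardware version, each processing element would hold one pipelined copy of the fitness datapath instead of a software thread.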

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuniga-Gutierrez, Bernardo, E-mail: bzuniga.51@gmail.com; Camacho-Gonzalez, Monica; Bendana-Castillo, Alfonso

    The computation of the spin-rotation tensor within the framework of auxiliary density functional theory (ADFT) in combination with the gauge including atomic orbital (GIAO) scheme, to treat the gauge origin problem, is presented. For the spin-rotation tensor, the calculation of the magnetic shielding tensor represents the most demanding computational task. Employing the ADFT-GIAO methodology, the central processing unit time for the magnetic shielding tensor calculation can be dramatically reduced. In this work, the quality of spin-rotation constants obtained with the ADFT-GIAO methodology is compared with available experimental data as well as with other theoretical results at the Hartree-Fock and coupled-cluster level of theory. It is found that the agreement between the ADFT-GIAO results and the experiment is good and very similar to the ones obtained by the coupled-cluster single-doubles-perturbative triples-GIAO methodology. With the improved computational performance achieved, the computation of the spin-rotation tensors of large systems or along Born-Oppenheimer molecular dynamics trajectories becomes feasible in reasonable times. Three models of carbon fullerenes containing hundreds of atoms and thousands of basis functions are used for benchmarking the performance. Furthermore, a theoretical study of temperature effects on the structure and spin-rotation tensor of the H(12)C-(12)CH-DF complex is presented. Here, the temperature dependency of the spin-rotation tensor of the fluorine nucleus can be used to identify experimentally the so far unknown bent isomer of this complex. To the best of our knowledge this is the first time that temperature effects on the spin-rotation tensor are investigated.

  9. INDUCTIVE SYSTEM HEALTH MONITORING WITH STATISTICAL METRICS

    NASA Technical Reports Server (NTRS)

    Iverson, David L.

    2005-01-01

    Model-based reasoning is a powerful method for performing system monitoring and diagnosis. Building models for model-based reasoning is often a difficult and time consuming process. The Inductive Monitoring System (IMS) software was developed to provide a technique to automatically produce health monitoring knowledge bases for systems that are either difficult to model (simulate) with a computer or which require computer models that are too complex to use for real time monitoring. IMS processes nominal data sets collected either directly from the system or from simulations to build a knowledge base that can be used to detect anomalous behavior in the system. Machine learning and data mining techniques are used to characterize typical system behavior by extracting general classes of nominal data from archived data sets. In particular, a clustering algorithm forms groups of nominal values for sets of related parameters. This establishes constraints on those parameter values that should hold during nominal operation. During monitoring, IMS provides a statistically weighted measure of the deviation of current system behavior from the established normal baseline. If the deviation increases beyond the expected level, an anomaly is suspected, prompting further investigation by an operator or automated system. IMS has shown potential to be an effective, low cost technique to produce system monitoring capability for a variety of applications. We describe the training and system health monitoring techniques of IMS. We also present the application of IMS to a data set from the Space Shuttle Columbia STS-107 flight. IMS was able to detect an anomaly in the launch telemetry shortly after a foam impact damaged Columbia's thermal protection system.
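
    The train-then-monitor loop the abstract describes can be sketched as follows. This is a generic k-means-style illustration of the IMS idea (cluster nominal data, then measure deviation from the nearest nominal cluster), not NASA's actual IMS code; the function names and parameters are invented for the sketch.

```python
import numpy as np

def build_knowledge_base(nominal, k=8, iters=50, seed=0):
    """Cluster nominal telemetry vectors (k-means style) into
    representative classes of normal system behavior."""
    rng = np.random.default_rng(seed)
    centers = nominal[rng.choice(len(nominal), k, replace=False)]
    for _ in range(iters):
        # Assign each nominal sample to its nearest cluster center.
        labels = np.argmin(
            np.linalg.norm(nominal[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            pts = nominal[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def deviation(sample, centers):
    """Distance from a new sample to the nearest nominal cluster;
    a large value suggests anomalous behavior worth investigating."""
    return np.min(np.linalg.norm(centers - sample, axis=1))
```

    Monitoring then reduces to thresholding `deviation()` on each incoming telemetry frame; IMS additionally applies statistical weighting to this measure.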

  10. Cellular automatons applied to gas dynamic problems

    NASA Technical Reports Server (NTRS)

    Long, Lyle N.; Coopersmith, Robert M.; Mclachlan, B. G.

    1987-01-01

    This paper compares the results of a relatively new computational fluid dynamics method, cellular automatons, with experimental data and analytical results. This technique has been shown to qualitatively predict fluidlike behavior; however, there have been few published comparisons with experiment or other theories. Comparisons are made for a one-dimensional supersonic piston problem, Stokes first problem, and the flow past a normal flat plate. These comparisons are used to assess the ability of the method to accurately model fluid dynamic behavior and to point out its limitations. Reasonable results were obtained for all three test cases, but the fundamental limitations of cellular automatons are numerous. It may be misleading, at this time, to say that cellular automatons are a computationally efficient technique. Other methods, based on continuum or kinetic theory, would also be very efficient if as little of the physics were included.

  11. A novel parallel architecture for local histogram equalization

    NASA Astrophysics Data System (ADS)

    Ohannessian, Mesrob I.; Choueiter, Ghinwa F.; Diab, Hassan

    2005-07-01

    Local histogram equalization is an image enhancement algorithm that has found wide application in the pre-processing stage of areas such as computer vision, pattern recognition and medical imaging. The computationally intensive nature of the procedure, however, is a key limitation for real-time interactive applications. This work explores the possibility of performing parallel local histogram equalization, using an array of special-purpose elementary processors, through an HDL implementation that targets FPGA or ASIC platforms. A novel parallelization scheme is presented and the corresponding architecture is derived. The algorithm is reduced to pixel-level operations. Processing elements are assigned image blocks to maintain a reasonable performance-cost ratio. To further simplify both processor and memory organizations, a bit-serial access scheme is used. A brief performance assessment is provided to illustrate and quantify the merit of the approach.
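
    The per-block processing that the architecture parallelizes can be illustrated with a short serial reference. The sketch below equalizes each image tile independently via its histogram CDF; it is a generic software illustration, not the paper's HDL design, and the block size is an arbitrary choice.

```python
import numpy as np

def equalize_block(block, levels=256):
    # Classic histogram equalization of one tile via its cumulative histogram.
    hist = np.bincount(block.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first non-empty bin
    lut = np.clip(
        np.round((cdf - cdf_min) / max(block.size - cdf_min, 1) * (levels - 1)),
        0, levels - 1).astype(np.uint8)
    return lut[block]

def block_histogram_equalization(img, block=16):
    # Each tile is equalized independently -- the work one processing
    # element of the proposed array would perform.
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y+block, x:x+block] = equalize_block(img[y:y+block, x:x+block])
    return out
```

    In the proposed architecture the outer double loop disappears: every tile is handled by its own processing element concurrently.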

  12. A distributed data base management system. [for Deep Space Network

    NASA Technical Reports Server (NTRS)

    Bryan, A. I.

    1975-01-01

    Major system design features of a distributed data management system for the NASA Deep Space Network (DSN) designed for continuous two-way deep space communications are described. The reasons for which the distributed data base utilizing third-generation minicomputers is selected as the optimum approach for the DSN are threefold: (1) with a distributed master data base, valid data is available in real-time to support DSN management activities at each location; (2) data base integrity is the responsibility of local management; and (3) the data acquisition/distribution and processing power of a third-generation computer enables the computer to function successfully as a data handler or as an on-line process controller. The concept of the distributed data base is discussed along with the software, data base integrity, and hardware used. The data analysis/update constraint is examined.

  13. Neuromorphic Computing – From Materials Research to Systems Architecture Roundtable

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuller, Ivan K.; Stevens, Rick; Pino, Robinson

    2015-10-29

    Computation in its many forms is the engine that fuels our modern civilization. Modern computation, based on the von Neumann architecture, has allowed, until now, the development of continuous improvements, as predicted by Moore's law. However, computation using current architectures and materials will inevitably, within the next 10 years, reach a limit because of fundamental scientific reasons. DOE convened a roundtable of experts in neuromorphic computing systems, materials science, and computer science in Washington on October 29-30, 2015 to address the following basic questions: Can brain-like ("neuromorphic") computing devices based on new material concepts and systems be developed to dramatically outperform conventional CMOS-based technology? If so, what are the basic research challenges for materials science and computing? The overarching answer that emerged was: the development of novel functional materials and devices incorporated into unique architectures will allow a revolutionary technological leap toward the implementation of a fully "neuromorphic" computer. To address this challenge, the following issues were considered: the main differences between neuromorphic and conventional computing as related to signaling models, timing/clock, non-volatile memory, architecture, fault tolerance, integrated memory and compute, noise tolerance, analog vs. digital, and in situ learning; the new neuromorphic architectures needed to produce lower energy consumption, potential novel nanostructured materials, and enhanced computation; the device and materials properties needed to implement functions such as hysteresis, stability, and fault tolerance; and comparisons of different implementations (spin torque, memristors, resistive switching, phase change, and optical schemes) for enhanced breakthroughs in performance, cost, fault tolerance, and/or manufacturability.

  14. Alternative modeling methods for plasma-based Rf ion sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veitzer, Seth A., E-mail: veitzer@txcorp.com; Kundrapu, Madhusudhan, E-mail: madhusnk@txcorp.com; Stoltz, Peter H., E-mail: phstoltz@txcorp.com

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H(-) source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H(-) ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two-temperature MHD models for the SNS source and present simulation results showing plasma evolution over many Rf periods for different plasma temperatures. We perform the calculations in parallel, on unstructured meshes, using finite-volume solvers in order to obtain results in reasonable time.

  15. Alternative modeling methods for plasma-based Rf ion sources.

    PubMed

    Veitzer, Seth A; Kundrapu, Madhusudhan; Stoltz, Peter H; Beckwith, Kristian R C

    2016-02-01

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H(-) source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H(-) ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. 
We demonstrate plasma temperature equilibration in two-temperature MHD models for the SNS source and present simulation results showing plasma evolution over many Rf periods for different plasma temperatures. We perform the calculations in parallel, on unstructured meshes, using finite-volume solvers in order to obtain results in reasonable time.

  16. Brain tissue segmentation in MR images based on a hybrid of MRF and social algorithms.

    PubMed

    Yousefi, Sahar; Azmi, Reza; Zahedi, Morteza

    2012-05-01

    Effective abnormality detection and diagnosis in Magnetic Resonance Images (MRIs) requires a robust segmentation strategy. Since manual segmentation is a time-consuming task which engages valuable human resources, automatic MRI segmentation has received an enormous amount of attention. For this goal, various techniques have been applied. However, Markov Random Field (MRF) based algorithms have produced reasonable results in noisy images compared to other methods. MRF seeks a label field which minimizes an energy function. The traditional minimization method, simulated annealing (SA), uses Monte Carlo simulation to reach the minimum solution with a heavy computation burden. For this reason, MRFs are rarely used in real-time processing environments. This paper proposed a novel method based on MRF and a hybrid of social algorithms that contains an ant colony optimization (ACO) and a Gossiping algorithm, which can be used for segmenting single and multispectral MRIs in real-time environments. Combining ACO with the Gossiping algorithm helps find a better path using neighborhood information. Therefore, this interaction causes the algorithm to converge to an optimum solution faster. Several experiments on phantom and real images were performed. Results indicate that the proposed algorithm outperforms the traditional MRF and the hybrid MRF-ACO in speed and accuracy. Copyright © 2012 Elsevier B.V. All rights reserved.
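
    For orientation, the MRF labeling problem being minimized can be sketched with a simple greedy optimizer. This is a generic illustration assuming a squared-intensity data term and a Potts smoothness term; iterated conditional modes (ICM) stands in here for both SA and the paper's ACO/Gossiping hybrid, and all names are invented for the sketch.

```python
import numpy as np

def icm_segment(img, means, beta=1.0, iters=5):
    """Greedy MRF energy minimization (iterated conditional modes).
    Energy per pixel: (intensity - class mean)^2 + beta * #disagreeing neighbors."""
    labels = np.argmin((img[..., None] - means) ** 2, axis=-1)
    h, w = img.shape
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                best, best_e = labels[y, x], np.inf
                for k in range(len(means)):
                    data = (img[y, x] - means[k]) ** 2
                    # Potts smoothness: count 4-neighbors with a different label.
                    smooth = sum(
                        labels[ny, nx] != k
                        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                        if 0 <= ny < h and 0 <= nx < w)
                    e = data + beta * smooth
                    if e < best_e:
                        best, best_e = k, e
                labels[y, x] = best
    return labels
```

    SA, ACO, and Gossiping all target this same energy; they differ in how they escape the local minima that greedy ICM gets stuck in.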

  17. Lightweight Adaptation of Classifiers to Users and Contexts: Trends of the Emerging Domain

    PubMed Central

    Vildjiounaite, Elena; Gimel'farb, Georgy; Kyllönen, Vesa; Peltola, Johannes

    2015-01-01

    Intelligent computer applications need to adapt their behaviour to contexts and users, but conventional classifier adaptation methods require long data collection and/or training times. Therefore classifier adaptation is often performed as follows: at design time application developers define typical usage contexts and provide reasoning models for each of these contexts, and then at runtime an appropriate model is selected from available ones. Typically, definition of usage contexts and reasoning models heavily relies on domain knowledge. However, in practice many applications are used in such diverse situations that no developer can predict them all and collect adequate training and test databases for each situation. Such applications have to adapt to a new user or unknown context at runtime just from interaction with the user, preferably in fairly lightweight ways, that is, requiring limited user effort to collect training data and limited time to perform the adaptation. This paper analyses adaptation trends in several emerging domains and outlines promising ideas, proposed for making multimodal classifiers user-specific and context-specific without significant user efforts, detailed domain knowledge, and/or complete retraining of the classifiers. Based on this analysis, this paper identifies important application characteristics and presents guidelines to consider these characteristics in adaptation design. PMID:26473165

  18. Optimizing R with SparkR on a commodity cluster for biomedical research.

    PubMed

    Sedlmayr, Martin; Würfl, Tobias; Maier, Christian; Häberle, Lothar; Fasching, Peter; Prokosch, Hans-Ulrich; Christoph, Jan

    2016-12-01

    Medical researchers are challenged today by the enormous amount of data collected in healthcare. Analysis methods such as genome-wide association studies (GWAS) are often computationally intensive and thus require enormous resources to be performed in a reasonable amount of time. While dedicated clusters and public clouds may deliver the desired performance, their use requires upfront financial efforts or anonymous data, which is often not possible for preliminary or occasional tasks. We explored the possibilities to build a private, flexible cluster for processing scripts in R based on commodity, non-dedicated hardware of our department. For this, a GWAS calculation in R on a single desktop computer, a Message Passing Interface (MPI) cluster, and a SparkR cluster were compared with regard to performance, scalability, quality, and simplicity. The original script had a projected runtime of three years on a single desktop computer. Optimizing the script in R already yielded a significant reduction in computing time (2 weeks). By using R-MPI and SparkR, we were able to parallelize the computation and reduce the time to less than three hours (2.6 h) on already available, standard office computers. While MPI is a proven approach in high-performance clusters, it requires rather static, dedicated nodes. SparkR and its Hadoop siblings allow for a dynamic, elastic environment with automated failure handling. SparkR also scales better with the number of nodes in the cluster than MPI due to optimized data communication. R is a popular environment for clinical data analysis. The new SparkR solution offers elastic resources and allows supporting big data analysis using R even on non-dedicated resources with minimal change to the original code. To unleash the full potential, additional efforts should be invested to customize and improve the algorithms, especially with regard to data distribution. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  19. Vehicle Integrated Prognostic Reasoner (VIPR) Final Report

    NASA Technical Reports Server (NTRS)

    Bharadwaj, Raj; Mylaraswamy, Dinkar; Cornhill, Dennis; Biswas, Gautam; Koutsoukos, Xenofon; Mack, Daniel

    2013-01-01

    A systems view is necessary to detect, diagnose, predict, and mitigate adverse events during the flight of an aircraft. While most aircraft subsystems look for simple threshold exceedances and report them to a central maintenance computer, the vehicle integrated prognostic reasoner (VIPR) proactively generates evidence and takes an active role in aircraft-level health assessment. Establishing the technical feasibility and a design trade-space for this next-generation vehicle-level reasoning system (VLRS) is the focus of our work.

  20. BITNET: Past, Present, and Future.

    ERIC Educational Resources Information Center

    Oberst, Daniel J.; Smith, Sheldon B.

    1986-01-01

    Discusses history and development of the academic computer network BITNET, including BITNET Network Support Center's growth and services, and international expansion. Network users, reasons for growth, and future developments are reviewed. A BITNET applications sampler and listings of compatible computers and operating systems, sites, and…

  1. The Economics of Educational Software Portability.

    ERIC Educational Resources Information Center

    Oliveira, Joao Batista Araujo e

    1990-01-01

    Discusses economic issues that affect the portability of educational software. Topics discussed include economic reasons for portability, including cost effectiveness; the nature and behavior of educational computer software markets; the role of producers, buyers, and consumers; potential effects of government policies; computer piracy; and…

  2. Real-time skin feature identification in a time-sequential video stream

    NASA Astrophysics Data System (ADS)

    Kramberger, Iztok

    2005-04-01

    Skin color can be an important feature when tracking skin-colored objects. This is particularly the case for computer-vision-based human-computer interfaces (HCI). Humans have a highly developed sense of space, and it is therefore reasonable to support this within intelligent HCI, where the importance of augmented reality can be foreseen. Joining human-like interaction techniques within multimodal HCI could become a valuable feature of modern mobile telecommunication devices. On the other hand, real-time processing plays an important role in achieving more natural and physically intuitive ways of human-machine interaction. The main scope of this work is the development of a stereoscopic computer-vision hardware-accelerated framework for real-time skin feature identification in the sense of a single-pass image segmentation process. The hardware-accelerated preprocessing stage is presented with the purpose of color and spatial filtering, where the skin color model within the hue-saturation-value (HSV) color space is given with a polyhedron of threshold values representing the basis of the filter model. An adaptive filter management unit is suggested to achieve better segmentation results, enabling the filter parameters to adapt to the current scene conditions. Implementation of the suggested hardware structure is given at the level of field programmable system-level integrated circuit (FPSLIC) devices using an embedded microcontroller as their main feature. A stereoscopic cue is achieved using a time-sequential video stream, but this shows no difference for real-time processing requirements in terms of hardware complexity. The experimental results for the hardware-accelerated preprocessing stage are given by efficiency estimation of the presented hardware structure using a simple motion-detection algorithm based on a binary function.

  3. A novel approach to estimate emissions from large transportation networks: Hierarchical clustering-based link-driving-schedules for EPA-MOVES using dynamic time warping measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aziz, H. M. Abdul; Ukkusuri, Satish V.

EPA-MOVES (Motor Vehicle Emission Simulator) is often integrated with traffic simulators to assess emission levels of large-scale urban networks with signalized intersections. High variation in speed profiles exists in congested urban networks with signalized intersections. The traditional average-speed-based emission estimation technique with EPA-MOVES executes quickly but underestimates emissions in most cases because it ignores the speed variation in congested networks with signalized intersections. In contrast, the atomic second-by-second speed profile (i.e., the trajectory of each vehicle) technique provides accurate emissions at the cost of excessive computational power and time. We addressed this issue by developing a novel method to determine the link-driving-schedules (LDSs) for the EPA-MOVES tool. Our research developed a hierarchical clustering technique with dynamic time warping similarity measures (HC-DTW) to find LDSs for EPA-MOVES that produce emission estimates better than the average-speed-based technique with execution time faster than the atomic speed profile approach. We applied HC-DTW to sample data from a signalized corridor and found that it can significantly reduce computational time without compromising accuracy. The developed technique can substantially contribute to the EPA-MOVES-based emission estimation process for large-scale urban transportation networks by reducing the computational time while producing reasonably accurate estimates. The method is most appropriate for transportation networks with high variation in speed, such as those with signalized intersections. Experimental results show error differences ranging from 2% to 8% for most pollutants except PM10.
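The dynamic time warping measure at the heart of HC-DTW can be sketched independently of EPA-MOVES. The following Python fragment is a textbook O(nm) dynamic program, not the authors' implementation; it computes the DTW distance between two speed profiles, which a hierarchical clustering step could then use as its pairwise similarity measure:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D speed profiles."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of: insertion, deletion, match along the warping path.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A time-shifted copy of a profile can still have zero DTW distance,
# whereas a point-by-point comparison would penalize the shift.
p1 = [0, 10, 20, 30, 30, 20]
p2 = [0, 0, 10, 20, 30, 20]
d = dtw_distance(p1, p2)
```

This alignment-tolerant distance is what lets clustering group stop-and-go profiles that differ mainly in timing rather than in shape.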

  4. A novel approach to estimate emissions from large transportation networks: Hierarchical clustering-based link-driving-schedules for EPA-MOVES using dynamic time warping measures

    DOE PAGES

    Aziz, H. M. Abdul; Ukkusuri, Satish V.

    2017-06-29

EPA-MOVES (Motor Vehicle Emission Simulator) is often integrated with traffic simulators to assess emission levels of large-scale urban networks with signalized intersections. High variation in speed profiles exists in congested urban networks with signalized intersections. The traditional average-speed-based emission estimation technique with EPA-MOVES executes quickly but underestimates emissions in most cases because it ignores the speed variation in congested networks with signalized intersections. In contrast, the atomic second-by-second speed profile (i.e., the trajectory of each vehicle) technique provides accurate emissions at the cost of excessive computational power and time. We addressed this issue by developing a novel method to determine the link-driving-schedules (LDSs) for the EPA-MOVES tool. Our research developed a hierarchical clustering technique with dynamic time warping similarity measures (HC-DTW) to find LDSs for EPA-MOVES that produce emission estimates better than the average-speed-based technique with execution time faster than the atomic speed profile approach. We applied HC-DTW to sample data from a signalized corridor and found that it can significantly reduce computational time without compromising accuracy. The developed technique can substantially contribute to the EPA-MOVES-based emission estimation process for large-scale urban transportation networks by reducing the computational time while producing reasonably accurate estimates. The method is most appropriate for transportation networks with high variation in speed, such as those with signalized intersections. Experimental results show error differences ranging from 2% to 8% for most pollutants except PM10.

  5. A method for automatic matching of multi-timepoint findings for enhanced clinical workflow

    NASA Astrophysics Data System (ADS)

    Raghupathi, Laks; Dinesh, MS; Devarakota, Pandu R.; Valadez, Gerardo Hermosillo; Wolf, Matthias

    2013-03-01

Non-interventional diagnostics (CT or MR) enables early identification of diseases like cancer. Lesion growth assessment during follow-up is often used to distinguish benign lesions from malignant ones, so correspondences need to be found between the lesions localized at each time point. Manually matching the radiological findings can be time consuming and tedious due to possible differences in orientation and position between scans. The complicated nature of the disease also leads physicians to rely on multiple modalities (PET-CT, PET-MR), where matching is even more challenging. Here, we propose an automatic feature-based matching that is robust to changes in organ volume and to subpar or absent registration, and that requires very little computation. Traditional matching methods rely mostly on accurate image registration, applying the resulting deformation map to the finding coordinates. This is a disadvantage when accurate registration is time consuming or impossible due to large organ volume differences between scans. Our matching instead uses supervised learning, taking advantage of the CAD features that are already present and treating matching as a classification problem. The matching can be done extremely fast and at reasonable accuracy even when image registration fails. Experimental results on real-world multi-time-point thoracic CT data showed an accuracy above 90% with negligible false positives across a variety of registration scenarios.
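Registration-free matching of findings can be illustrated with a toy sketch. The fragment below uses a plain nearest-neighbor rule in an invented 2-D feature space, which is much simpler than the supervised classifier the paper describes; it pairs findings across two time points using only their CAD-style feature vectors:

```python
import numpy as np

def match_findings(feats_t0, feats_t1, max_dist=1.0):
    """Pair each baseline finding with its nearest follow-up finding in
    feature space; unmatched findings beyond max_dist are dropped."""
    matches = []
    for i, f0 in enumerate(feats_t0):
        d = np.linalg.norm(feats_t1 - f0, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            matches.append((i, j))
    return matches

# Hypothetical 2-D feature vectors (e.g. normalized size and density)
# for two findings at baseline and follow-up, listed in swapped order.
t0 = np.array([[1.0, 2.0], [5.0, 5.0]])
t1 = np.array([[5.2, 4.9], [1.1, 2.1]])
pairs = match_findings(t0, t1)
```

Because the decision uses feature similarity rather than a deformation map, it still works when the scans cannot be registered accurately.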

  6. A short note on the use of the red-black tree in Cartesian adaptive mesh refinement algorithms

    NASA Astrophysics Data System (ADS)

    Hasbestan, Jaber J.; Senocak, Inanc

    2017-12-01

Mesh adaptivity is an indispensable capability to tackle multiphysics problems with large disparity in time and length scales. With the availability of powerful supercomputers, there is a pressing need to extend time-proven computational techniques to extreme-scale problems. Cartesian adaptive mesh refinement (AMR) is one such method that enables simulation of multiscale, multiphysics problems. AMR is based on construction of octrees. Originally, an explicit tree data structure was used to generate and manipulate an adaptive Cartesian mesh. At least eight pointers are required in an explicit approach to construct an octree. Parent-child relationships are then used to traverse the tree. An explicit octree, however, is expensive in terms of memory usage and the time it takes to traverse the tree to access a specific node. For these reasons, implicit pointerless methods have been pioneered within the computer graphics community, motivated by applications requiring interactivity and realistic three dimensional visualization. Lewiner et al. [1] provide a concise review of pointerless approaches to generate an octree. Use of a hash table and Z-order curve are two key concepts in pointerless methods that we briefly discuss next.
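The Z-order (Morton) key mentioned above can be sketched directly. The following Python fragment, an illustrative sketch rather than any particular AMR code, interleaves the bits of integer cell coordinates into a single key, which a pointerless octree can combine with a refinement level to index nodes in a hash table instead of storing eight child pointers per node:

```python
def morton3d(x, y, z, bits=10):
    """Interleave the bits of (x, y, z) into a single Z-order (Morton) key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)        # x bit i -> key bit 3i
        key |= ((y >> i) & 1) << (3 * i + 1)    # y bit i -> key bit 3i+1
        key |= ((z >> i) & 1) << (3 * i + 2)    # z bit i -> key bit 3i+2
    return key

# A pointerless octree stores node payloads in a hash table keyed by
# (level, morton_code); children of a node at level L are found by
# appending three bits to the key at level L+1, not by chasing pointers.
octree = {}
octree[(0, morton3d(0, 0, 0))] = "root payload"
octree[(3, morton3d(5, 3, 1))] = "leaf payload"
```

Neighbor and child lookups then become O(1) hash probes on computed keys, which is exactly the memory and traversal saving the abstract attributes to pointerless methods.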

  7. Inference of Vohradský's Models of Genetic Networks by Solving Two-Dimensional Function Optimization Problems

    PubMed Central

    Kimura, Shuhei; Sato, Masanao; Okada-Hatakeyama, Mariko

    2013-01-01

The inference of a genetic network is a problem in which mutual interactions among genes are inferred from time-series of gene expression levels. While a number of models have been proposed to describe genetic networks, this study focuses on a mathematical model proposed by Vohradský. Because of its advantageous features, several researchers have proposed inference methods based on Vohradský's model. When trying to analyze large-scale networks consisting of dozens of genes, however, these methods must solve high-dimensional non-linear function optimization problems. In order to resolve the difficulty of estimating the parameters of Vohradský's model, this study proposes a new method that decomposes the problem into several two-dimensional function optimization problems. Through numerical experiments on artificial genetic network inference problems, we showed that, although the computation time of the proposed method is not the shortest, the method estimates the parameters of Vohradský's models more effectively within sufficiently short computation times. This study then applied the proposed method to an actual inference problem, the bacterial SOS DNA repair system, and succeeded in finding several reasonable regulatory interactions. PMID:24386175

  8. Bayesian methods for outliers detection in GNSS time series

    NASA Astrophysics Data System (ADS)

    Qianqian, Zhang; Qingming, Gui

    2013-07-01

This article is concerned with the problem of detecting outliers in GNSS time series based on Bayesian statistical theory. First, a new model is proposed to simultaneously detect different types of outliers by introducing classification variables corresponding to each outlier type; the problem of outlier detection is converted into the computation of the corresponding posterior probabilities, and an algorithm for computing those posterior probabilities based on a standard Gibbs sampler is designed. Second, we analyze in depth the causes of masking and swamping when detecting patches of additive outliers, and an unmasking Bayesian method for detecting additive outlier patches is proposed based on an adaptive Gibbs sampler. Third, the correctness of the proposed theories and methods is illustrated with simulated data and then with real GNSS observations, such as cycle slip detection in carrier phase data. The examples illustrate that the Bayesian methods for outlier detection in GNSS time series proposed in this paper are capable of detecting not only isolated outliers but also additive outlier patches. Furthermore, they can be successfully used to process cycle slips in phase data, which solves the problem of small cycle slips.
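The conversion of outlier detection into a posterior-probability computation can be illustrated with a toy model. The fragment below uses a simple Bernoulli-Gaussian mixture with invented parameters, far simpler than the paper's Gibbs-sampled model; it applies Bayes' rule to obtain the posterior probability that each observation is an additive outlier:

```python
import numpy as np

def outlier_posterior(x, mu=0.0, sigma=1.0, sigma_out=10.0, p=0.01):
    """Posterior P(outlier | x) under a two-component Gaussian mixture:
    inliers ~ N(mu, sigma^2), outliers ~ N(mu, sigma_out^2) with prior p.
    All parameter values here are illustrative assumptions."""
    def gauss(v, s):
        return np.exp(-0.5 * ((v - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    lik_in = (1.0 - p) * gauss(x, sigma)      # prior-weighted inlier likelihood
    lik_out = p * gauss(x, sigma_out)         # prior-weighted outlier likelihood
    return lik_out / (lik_in + lik_out)       # Bayes' rule

x = np.array([0.1, -0.3, 8.0])   # last point is far from the bulk
post = outlier_posterior(x)
```

In the paper's setting the classification variables play this role per epoch and per outlier type, and the Gibbs sampler estimates their posterior probabilities jointly rather than in closed form.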

  9. Uncertainty propagation by using spectral methods: A practical application to a two-dimensional turbulence fluid model

    NASA Astrophysics Data System (ADS)

    Riva, Fabio; Milanese, Lucio; Ricci, Paolo

    2017-10-01

To reduce the computational cost of the uncertainty propagation analysis, which is used to study the impact of input parameter variations on the results of a simulation, a general and simple-to-apply methodology based on decomposing the solution to the model equations in terms of Chebyshev polynomials is discussed. This methodology, based on the work by Scheffel [Am. J. Comput. Math. 2, 173-193 (2012)], approximates the model equation solution with a semi-analytic expression that depends explicitly on time, spatial coordinates, and input parameters. By employing a weighted residual method, a set of nonlinear algebraic equations for the coefficients appearing in the Chebyshev decomposition is then obtained. The methodology is applied to a two-dimensional Braginskii model used to simulate plasma turbulence in basic plasma physics experiments and in the scrape-off layer of tokamaks, in order to study the impact on the simulation results of the input parameter that describes the parallel losses. The uncertainties that characterize the time-averaged density gradient lengths, time-averaged densities, and density fluctuation levels are evaluated. A reasonable estimate of the uncertainty of these distributions can be obtained with a single reduced-cost simulation.
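The spirit of the spectral approach, replacing repeated expensive runs with a single polynomial surrogate in the uncertain parameter, can be sketched in one dimension. The fragment below uses a stand-in analytic function in place of the Braginskii solver; it fits a Chebyshev series at Chebyshev nodes and then queries it cheaply at many parameter values:

```python
import numpy as np

# Treat a simulation output as a function f of one uncertain input
# parameter s on [-1, 1]; here f is a stand-in analytic function,
# not the plasma turbulence model from the paper.
def f(s):
    return np.exp(-s) * np.cos(3.0 * s)

# Fit a degree-8 Chebyshev series through samples at Chebyshev nodes;
# with 9 nodes and degree 8 this is exact interpolation.
s_nodes = np.cos(np.pi * (np.arange(9) + 0.5) / 9)
cheb = np.polynomial.chebyshev.Chebyshev.fit(s_nodes, f(s_nodes), deg=8)

# A single fit now answers many "what if the parameter were s?" queries.
samples = cheb(np.linspace(-1, 1, 10_000))
mean, std = samples.mean(), samples.std()
```

The surrogate is evaluated thousands of times at negligible cost, whereas each evaluation of the true model would be a full simulation; this is how one reduced-cost run can yield an uncertainty estimate.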

  10. Investigation of undersampling and reconstruction algorithm dependence on respiratory correlated 4D-MRI for online MR-guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Mickevicius, Nikolai J.; Paulson, Eric S.

    2017-04-01

The purpose of this work is to investigate the effects of undersampling and reconstruction algorithm choice on the total processing time and image quality of respiratory phase-resolved 4D MRI data. Specifically, the goal is to obtain quality 4D-MRI data with a combined acquisition and reconstruction time of five minutes or less, which we reasoned would be satisfactory for pre-treatment 4D-MRI in online MRI-gRT. A 3D stack-of-stars, self-navigated, 4D-MRI acquisition was used to scan three healthy volunteers at three image resolutions and two scan durations. The NUFFT, CG-SENSE, SPIRiT, and XD-GRASP reconstruction algorithms were used to reconstruct each dataset on a high performance reconstruction computer. The overall image quality, reconstruction time, artifact prevalence, and motion estimates were compared. The CG-SENSE and XD-GRASP reconstructions provided superior image quality over the other algorithms. The combination of a 3D SoS sequence and parallelized reconstruction algorithms, run on computing hardware more advanced than that typically found on product MRI scanners, can yield acquisition and reconstruction of high quality respiratory correlated 4D-MRI images in less than five minutes.

  11. A Review of Issues in Computer-Assisted Counseling and a New Approach to Its Applications in College Selection.

    ERIC Educational Resources Information Center

    Colburn, Kenneth A.

    Issues and approaches in computer-assisted counseling are presented in this thesis. Reasons are considered for the ineffectiveness of school guidance counselors in helping students make college selections, and a reconceptualization of counseling as a technology of helping is suggested. The uses of computers in counseling are examined along with…

  12. Why so Few Women Enroll in Computing? Gender and Ethnic Differences in Students' Perception

    ERIC Educational Resources Information Center

    Varma, Roli

    2010-01-01

    Women are seriously under-represented in computer science and computer engineering (CS/CE) education and, thus, in the information technology (IT) workforce in the USA. This is a grim situation for both the women whose potential remains unutilized and the US society which is dependent on IT. This article examines the reasons behind low enrollment…

  13. Student Perceptions of the Value of a University Laptop Program

    ERIC Educational Resources Information Center

    Orr, Claudia; Sherony, Bruce; Steinhaus, Carol

    2008-01-01

    In the fall of 2000, when a mandatory laptop program was initiated at a Midwest regional university, a faculty research team began a longitudinal study to determine student perceptions of the value of the program. Value was defined as usefulness of the computer, reasonable price for the computer and the quality of the computer. A survey of 1,000…

  14. Evaluating a Computational Model of Social Causality and Responsibility

    DTIC Science & Technology

    2006-01-01

Evaluating a Computational Model of Social Causality and Responsibility Wenji Mao University of Southern California Institute for Creative...empirically evaluate a computational model of social causality and responsibility against human social judgments. Results from our experimental...developed a general computational model of social causality and responsibility [10, 11] that formalizes the factors people use in reasoning about

  15. The Modeling of Human Intelligence in the Computer as Demonstrated in the Game of DIPLOMAT.

    ERIC Educational Resources Information Center

    Collins, James Edward; Paulsen, Thomas Dean

    An attempt was made to develop human-like behavior in the computer. A theory of the human learning process was described. A computer game was presented which simulated the human capabilities of reasoning and learning. The program was required to make intelligent decisions based on past experiences and critical analysis of the present situation.…

  16. Human Processing of Knowledge from Texts: Acquisition, Integration, and Reasoning.

    ERIC Educational Resources Information Center

    Thorndyke, Perry W.; Hayes-Roth, Barbara

    This report documents a series of studies on how undergraduate students learn from and reason with textual information. The studies described were undertaken to produce models that could serve as the basis for designing computer systems capable of structuring and presenting text material in optimal formats. Divided into sections, the report…

  17. Could Elementary Mathematics Textbooks Help Give Attention to Reasons in the Classroom?

    ERIC Educational Resources Information Center

    Newton, Douglas P.; Newton, Lynn D.

    2007-01-01

    Trainee teachers, new and non-specialist teachers of elementary mathematics have a tendency to avoid thought about reasons in mathematics. Instead, they tend to favour the development of computational skill through the rote application of procedures, routines and algorithms. Could elementary mathematics textbooks serve as models of practice and…

  18. Effects of Students' Prior Knowledge on Scientific Reasoning in Density.

    ERIC Educational Resources Information Center

    Yang, Il-Ho; Kwon, Yong-Ju; Kim, Young-Shin; Jang, Myoung-Duk; Jeong, Jin-Woo; Park, Kuk-Tae

    2002-01-01

    Investigates the effects of students' prior knowledge on the scientific reasoning processes of performing the task of controlling variables with computer simulation and identifies a number of problems that students encounter in scientific discovery. Involves (n=27) 5th grade students and (n=33) 7th grade students. Indicates that students' prior…

  19. GPUs benchmarking in subpixel image registration algorithm

    NASA Astrophysics Data System (ADS)

    Sanz-Sabater, Martin; Picazo-Bueno, Jose Angel; Micó, Vicente; Ferrerira, Carlos; Granero, Luis; Garcia, Javier

    2015-05-01

Image registration techniques are used across different scientific fields, like medical imaging or optical metrology. The most straightforward way to calculate the shift between two images is cross correlation, taking the location of the highest value in the correlation image. The shift resolution is then given in whole pixels, which may not be enough for certain applications. Better results can be achieved by interpolating both images up to the desired resolution and applying the same technique described before, but the memory needed by the system is significantly higher. To avoid this memory consumption we implement a subpixel shifting method based on the FFT. Starting from the original images, subpixel shifts can be achieved by multiplying their discrete Fourier transforms by linear phases with different slopes. This method is highly time consuming because each candidate shift requires a new computation. The algorithm, being highly parallelizable, is very suitable for high-performance computing systems. GPU (Graphics Processing Unit) accelerated computing became very popular more than ten years ago because GPUs provide hundreds of computational cores in a reasonably cheap card. In our case, we register the shift between two images, making a first approach by FFT-based correlation and then a subpixel approach using the technique described before; we consider this a `brute force' method. We present a benchmark of the algorithm consisting of a first approach at pixel resolution followed by subpixel refinement, decreasing the shift step in every loop to achieve high resolution in few steps. The program is executed on three different computers. Finally, we present the results of the computation with different kinds of CPUs and GPUs, checking the accuracy of the method and the time consumed on each computer, and discussing the advantages and disadvantages of using GPUs.
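The linear-phase trick behind the subpixel search can be sketched in one dimension. The fragment below is illustrative NumPy code, not the authors' GPU implementation; it shifts a signal by a fractional number of samples by multiplying its DFT by a linear phase ramp, and the brute-force search described above would score many such candidate shifts against the second image:

```python
import numpy as np

def subpixel_shift(signal, delta):
    """Shift a 1-D signal by a (possibly fractional) number of samples
    via a linear phase ramp in the Fourier domain."""
    n = len(signal)
    k = np.fft.fftfreq(n)                     # frequency in cycles per sample
    phase = np.exp(-2j * np.pi * k * delta)   # shift theorem: x(t - delta)
    return np.real(np.fft.ifft(np.fft.fft(signal) * phase))

# A band-limited periodic test signal shifted by half a sample and back;
# the round trip recovers the original to machine precision.
x = np.cos(2 * np.pi * 3 * np.arange(64) / 64)
y = subpixel_shift(x, 0.5)
x_back = subpixel_shift(y, -0.5)
```

For an integer delta this reduces exactly to a circular shift, which is a convenient sanity check; the 2-D case applies the same ramp along each axis.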

  20. Information and communication technology: students' health education in 1st- to 6th-grade South Korea elementary schools.

    PubMed

    Lee, Eunjoo; Park, Hyejin; Whyte, James; Jeong, Eunhoe

    2013-09-01

The purposes of this study were to (1) identify school nurses' awareness of information and communication technology (ICT) use in students' health education and (2) explore the barriers to and reasons for the adoption of ICT in school nursing practice, while (3) presenting strategies to speed ICT diffusion and dissemination into practice. For data collection, 209 primary school nurses in K province of Korea were selected and e-mailed the questionnaire. Collected data were analyzed using frequencies, percentages, and chi-square tests. The major reasons to adopt ICT were increasing school nurses' confidence in providing health education and improving teaching methods using diverse multimedia. The major barriers to utilization of ICT were lack of time for preparation of educational materials, lack of software availability, and lack of computer skills among school nurses and students. Several strategies were suggested to speed the diffusion of ICT into students' health education, such as integrating a health education course into the regular curriculum. It is important to identify barriers to and reasons for adoption of ICT in school nursing. In addition, strategies should be emphasized that result in more rapid diffusion of these technologies in school nursing practice. © 2013, American School Health Association.
